This repository has been archived by the owner on Aug 10, 2023. It is now read-only.

Releases: hfxunlp/transformer

Pre-Release v0.1.6

11 Mar 16:16

Support Transparent Attention (see the sketch below);
Accelerate decoding of the test set through sorting.
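Below is a minimal, hedged sketch of the Transparent Attention idea (Bapna et al., 2018): each decoder layer attends to a learned softmax-weighted combination of all encoder layer outputs rather than only the top layer. The class name and tensor layout are illustrative assumptions, not this repository's actual modules.

```python
import torch
import torch.nn as nn

class TransparentCombiner(nn.Module):
    """Illustrative only: mixes encoder layer outputs per decoder layer."""

    def __init__(self, num_enc_layers: int, num_dec_layers: int):
        super().__init__()
        # one mixing weight per (decoder layer, encoder layer) pair;
        # the +1 accounts for the embedding output as an extra "layer"
        self.weights = nn.Parameter(torch.zeros(num_dec_layers, num_enc_layers + 1))

    def forward(self, enc_outputs):
        # enc_outputs: list of (batch, src_len, d_model) tensors,
        # embeddings first, then each encoder layer's output
        stacked = torch.stack(enc_outputs, dim=0)            # (L+1, B, S, D)
        mix = torch.softmax(self.weights, dim=-1)            # (dec_layers, L+1)
        # row d is the encoder representation fed to decoder layer d
        return torch.einsum("dl,lbsd->dbsd", mix, stacked)   # (dec_layers, B, S, D)
```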

Pre-Release v0.1.5

01 Mar 10:32

Add support for the Transformer as described in the original paper.
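For reference, here is a minimal sketch of the sub-layer ordering described in the original paper (Vaswani et al., 2017), where the residual connection is followed by layer normalization (post-norm). The class and argument names are assumptions for illustration and do not mirror this repository's modules.

```python
import torch.nn as nn

class PostNormSublayer(nn.Module):
    """Illustrative only: LayerNorm(x + Sublayer(x)), as in the original paper."""

    def __init__(self, d_model: int, sublayer: nn.Module, dropout: float = 0.1):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, *args, **kwargs):
        # residual connection first, layer normalization afterwards
        return self.norm(x + self.drop(self.sublayer(x, *args, **kwargs)))
```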

Pre-Release v0.1.4

08 Feb 13:38

Fix a potential break in multi-GPU decoding when the batch size is 1.

Pre-Release v0.1.3

28 Jan 12:50

Bug fixes.

Pre-Release v0.1.2

24 Jan 08:30

A ranking tool is added to help with data selection;
Probabilities rather than log probabilities are averaged in the ensemble (see the sketch below);
Extracting and preventing generation of source-side-only words is supported for shared vocabularies (with tools/fbindexes.py);
A tool (mkcy.py) is added to convert Python modules into C libraries, but without additional code-level optimization for Cython;
Several typos are fixed with the above tool.
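A minimal sketch of the probability-averaging change, assuming the ensemble combines per-model log-probability distributions over the vocabulary; the function name and shapes are illustrative, not the repository's actual ensemble code.

```python
import math
import torch

def ensemble_log_probs(log_probs):
    """Illustrative only: average probabilities, not log probabilities.

    log_probs: list of (batch, vocab) log-probability tensors, one per model.
    """
    stacked = torch.stack(log_probs, dim=0)
    # log(mean(p_i)) = logsumexp(log p_i) - log(N); staying in log space
    # keeps the result usable by beam search without underflow
    return torch.logsumexp(stacked, dim=0) - math.log(stacked.size(0))
```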

Pre-Release v0.1.1

18 Jan 11:29

The bias in MultiHeadAttn is removed in this release;
Only the parameters of the trained model are saved, rather than the state dict; a model trained with v0.1.0 is not loadable as fine_tune_m without additional conversion (see the sketch below);
Tentative support for RNMT is added, but the recurrent model is slow due to less efficient parallelization.
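A hedged sketch of what the checkpoint change could look like, assuming "only parameters are saved" means serializing just the named parameter tensors (so buffers and other state are absent, which is also why checkpoints from v0.1.0 need conversion before being used as fine_tune_m). The helper names are hypothetical and the repository's actual on-disk format may differ.

```python
import torch

def save_parameters(model, path):
    # keep only trainable parameters; buffers and extra state are dropped
    params = {name: p.detach().cpu() for name, p in model.named_parameters()}
    torch.save(params, path)

def load_parameters(model, path):
    params = torch.load(path, map_location="cpu")
    # strict=False because buffers are not present in the saved file
    model.load_state_dict(params, strict=False)
```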

Init

11 Jan 08:13
v0.1.0

Init