[1.18.78]
Changed
Dynamic batch sizes: Translator.translate() now adjusts the batch size in beam search to the actual number of inputs instead of padding up to the configured batch size.
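The idea, as a minimal sketch (the helper beam_search_batch and its signature are illustrative, not Sockeye's actual internals):

```python
def translate(inputs, max_batch_size, beam_search_batch):
    """Run beam search with a batch size matched to the actual input count."""
    outputs = []
    for start in range(0, len(inputs), max_batch_size):
        batch = inputs[start:start + max_batch_size]
        # Use len(batch) as the batch size instead of padding the batch
        # up to max_batch_size with dummy sentences.
        outputs.extend(beam_search_batch(batch, batch_size=len(batch)))
    return outputs
```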
[1.18.77]
Added
sockeye.score now loads data on demand and no longer skips any input lines.
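A hedged sketch of the on-demand loading pattern (function and file names are illustrative, not sockeye.score's actual code):

```python
def read_sentence_pairs(source_path, target_path):
    """Yield (source, target) line pairs lazily, one at a time.

    Reading on demand avoids holding the full corpus in memory, and
    iterating over every line (rather than bucketing or filtering)
    ensures no input line is silently skipped during scoring.
    """
    with open(source_path, encoding="utf-8") as src, \
         open(target_path, encoding="utf-8") as tgt:
        for source_line, target_line in zip(src, tgt):
            yield source_line.rstrip("\n"), target_line.rstrip("\n")
```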
[1.18.76]
Changed
Do not compare scores from translation and scoring in integration tests.
Added
Added the --stop-training-on-decoder-failure flag to stop training if the checkpoint decoder dies (e.g. because there is not enough memory).
When this option is enabled, a checkpoint decoder is launched as soon as training starts so that such failures surface as early as possible.
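For example (the data paths and output directory below are placeholders; the remaining flags are Sockeye's standard training arguments):

```
python -m sockeye.train \
    --source train.src --target train.trg \
    --validation-source dev.src --validation-target dev.trg \
    --output model_dir \
    --stop-training-on-decoder-failure
```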
[1.18.75]
Changed
Dropout layers are no longer created for inference models, for performance reasons.
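The general pattern, sketched with illustrative names rather than Sockeye's actual layer-construction code: dropout is an identity at inference time anyway, so the layer can simply be omitted when building an inference model.

```python
import mxnet as mx

def build_encoder_layers(hidden_size, dropout_rate, inference=False):
    """Build encoder layers, omitting dropout entirely for inference models."""
    layers = [mx.gluon.nn.Dense(hidden_size, activation="relu")]
    # Only training models get a dropout layer; skipping it at inference
    # avoids an extra pass through a no-op layer.
    if not inference and dropout_rate > 0.0:
        layers.append(mx.gluon.nn.Dropout(dropout_rate))
    return layers
```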
[1.18.74]
Changed
Reverted the change from 1.18.72, as no memory savings could be observed.
[1.18.73]
Fixed
Fixed a bug where --source-factors-num-embed was not correctly adjusted to --num-embed when using prepared data and --source-factors-combine sum.
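The reason the adjustment matters: when factor embeddings are summed with the word embedding (rather than concatenated), every factor embedding must have the same size as the word embedding. A small sketch with illustrative sizes:

```python
import numpy as np

num_embed = 512          # word embedding size (--num-embed)
num_embed_factor = 512   # must equal num_embed when factors are summed

word_embedding = np.random.randn(num_embed)
factor_embedding = np.random.randn(num_embed_factor)

# Summing only works if both embeddings share the same dimensionality;
# with concatenation the factor embedding size could differ freely.
combined = word_embedding + factor_embedding
assert combined.shape == (num_embed,)
```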