Merge pull request #822 from awslabs/sockeye_2_merge_again
Exception merge commit to move master to Sockeye 2
fhieber authored Jun 3, 2020
2 parents 482f9d4 + ed01ab8 commit 88dc440
Showing 148 changed files with 7,659 additions and 19,547 deletions.
2 changes: 0 additions & 2 deletions .github/workflows/push_pr.yml
@@ -2,11 +2,9 @@ name: push and pull request testing
on:
push:
branches:
- sockeye_2
- master
pull_request:
branches:
- sockeye_2
- master

jobs:
2 changes: 0 additions & 2 deletions .gitignore
@@ -18,5 +18,3 @@
.pytest_cache
tags
sockeye/__pycache__
git_version.py

3 changes: 0 additions & 3 deletions .travis.yml
@@ -8,7 +8,6 @@ before_install:
- docker pull ubuntu:16.04

python:
- "3.4"
- "3.5"
- "3.6"

@@ -26,9 +25,7 @@ script:
- mypy --version
- mypy --ignore-missing-imports --follow-imports=silent @typechecked-files --no-strict-optional
- check-manifest --ignore sockeye/git_version.py
- if [ "$TRAVIS_EVENT_TYPE" != "cron" ]; then python -m pytest -k "Copy:lstm:lstm" --maxfail=1 test/system; fi
- if [ "$TRAVIS_EVENT_TYPE" != "cron" ]; then python -m pytest -k "Copy:transformer:transformer" --maxfail=1 test/system; fi
- if [ "$TRAVIS_EVENT_TYPE" != "cron" ]; then python -m pytest -k "Copy:cnn:cnn" --maxfail=1 test/system; fi
- if [ "$TRAVIS_EVENT_TYPE" = "cron" ]; then python -m pytest --maxfail=1 test/system; fi
- if [ "$TRAVIS_EVENT_TYPE" = "cron" ]; then python -m sockeye_contrib.autopilot.test; fi

128 changes: 93 additions & 35 deletions CHANGELOG.md
@@ -1,4 +1,5 @@
# Changelog

All notable changes to the project are documented in this file.

Version numbers are of the form `1.0.0`.
@@ -10,63 +11,120 @@ Note that Sockeye has checks in place to not translate with an old model that wa

Each version section may have subsections for: _Added_, _Changed_, _Removed_, _Deprecated_, and _Fixed_.

## [1.18.115]
### Added
- Added requirements for MXNet compatible with CUDA 10.1.
## [2.1.7]

## [1.18.114]
### Fixed
- Fix bug in prepare_train_data arguments.
### Changed

## [1.18.113]
### Fixed
- Added logging arguments for prepare_data CLI.
- Optimize prepare_data by saving the shards in parallel. The prepare_data script accepts a new parameter `--max-processes` to control the level of parallelism with which shards are written to disk.
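
A minimal usage sketch for the new flag (the `--max-processes` value and the data/output arguments are placeholders; only `--max-processes` itself is introduced by this change):

```bash
# Write prepared shards with up to 8 parallel writer processes.
# --source/--target/--output are the usual prepare_data arguments;
# the paths here are placeholders.
python -m sockeye.prepare_data \
    --source corpus.de --target corpus.en \
    --output prepared_data \
    --max-processes 8
```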

## [2.1.6]

### Changed

- Updated Dockerfiles optimized for CPU (intgemm int8 inference, full MKL support) and GPU (distributed training with Horovod). See [sockeye_contrib/docker](sockeye_contrib/docker).

## [1.18.112]
### Added
- Option to suppress creation of logfiles for CLIs (`--no-logfile`).

## [1.18.111]
- Official support for int8 quantization with [intgemm](https://github.com/kpu/intgemm):
  - This requires the "intgemm" fork of MXNet ([kpuatamazon/incubator-mxnet/intgemm](https://github.com/kpuatamazon/incubator-mxnet/tree/intgemm)). This is the version of MXNet used in the Sockeye CPU docker image (see [sockeye_contrib/docker](sockeye_contrib/docker)).
  - Use `sockeye.translate --dtype int8` to quantize a trained float32 model at runtime.
  - Use the `sockeye.quantize` CLI to annotate a float32 model with int8 scaling factors for fast runtime quantization.
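
A usage sketch based on the commands named above (model and data paths are placeholders, and the exact `sockeye.quantize` arguments are an assumption):

```bash
# Quantize a trained float32 model to int8 on the fly while translating.
python -m sockeye.translate --models model_dir --dtype int8 \
    --input input.de --output output.en

# Or annotate the float32 model with int8 scaling factors ahead of time
# (the --model argument name is assumed).
python -m sockeye.quantize --model model_dir
```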

## [2.1.5]

### Changed

- Changed state caching for transformer models during beam search to cache states with attention heads already separated out. This avoids repeated transpose operations during decoding, leading to faster inference.

## [2.1.4]

### Added
- Added an optional checkpoint callback for the train function.

- Added Dockerfiles that build an experimental CPU-optimized Sockeye image:
  - Uses the latest versions of [kpuatamazon/incubator-mxnet](https://github.com/kpuatamazon/incubator-mxnet) (supports [intgemm](https://github.com/kpu/intgemm) and makes full use of Intel MKL) and [kpuatamazon/sockeye](https://github.com/kpuatamazon/sockeye) (supports int8 quantization for inference).
  - See [sockeye_contrib/docker](sockeye_contrib/docker).

## [2.1.3]

### Changed
- Excluded gradients from pickled fields of TrainState

## [1.18.110]
- Performance optimizations to beam search inference
  - Remove unneeded take ops on encoder states
  - Gathering input data before sending to GPU, rather than sending each batch element individually
  - All of beam search can be done in fp16, if specified by the model
  - Other small miscellaneous optimizations
- Model states are now a flat list in ensemble inference, structure of states provided by `state_structure()`

## [2.1.2]

### Changed
- We now guard against failures to run `nvidia-smi` for GPU memory monitoring.

## [1.18.109]
### Fixed
- Fixed the metric names by prefixing training metrics with 'train-' and validation metrics with 'val-'. Also restricted the custom logging function to accept only a dictionary and a compulsory global_step parameter.
- Updated to [MXNet 1.6.0](https://github.com/apache/incubator-mxnet/tree/1.6.0)

### Added

- Added support for CUDA 10.2

### Removed

- Removed support for CUDA<9.1 / CUDNN<7.5

## [2.1.1]

### Added
- Ability to set environment variables from training/translate CLIs before MXNet is imported. For example, users can
configure MXNet as such: `--env "OMP_NUM_THREADS=1;MXNET_ENGINE_TYPE=NaiveEngine"`
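
For example, to pin MXNet to a single OMP thread and the naive engine before it is imported (the value string is taken from the option description above; the remaining translate arguments are placeholders):

```bash
# Environment variables are applied before the CLI imports MXNet.
python -m sockeye.translate \
    --env "OMP_NUM_THREADS=1;MXNET_ENGINE_TYPE=NaiveEngine" \
    --models model_dir --input input.de --output output.en
```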

## [2.1.0]

## [1.18.108]
### Changed
- More verbose log messages about target token counts.

## [1.18.107]
- Version bump, which should have been included in commit b0461b due to incompatible models.

## [2.0.1]

### Changed
- Updated to [MXNet 1.5.0](https://github.com/apache/incubator-mxnet/tree/1.5.0)

## [1.18.106]
### Added
- Added an optional time limit for stopping training. The training will stop at the next checkpoint after reaching the time limit.
- Inference defaults to using the max input length observed in training (versus scaling down based on mean length ratio and standard deviations).

## [1.18.105]
### Added
- Added support for a custom metrics logger: a function passed as an extra parameter. If supplied, the logger is called during training.

## [1.18.104]
- Additional parameter fixing strategies:
  - `all_except_feed_forward`: Only train feed forward layers.
  - `encoder_and_source_embeddings`: Only train the decoder (decoder layers, output layer, and target embeddings).
  - `encoder_half_and_source_embeddings`: Train the latter half of encoder layers and the decoder.
- Option to specify the number of CPU threads without using an environment variable (`--omp-num-threads`).
- More flexibility for source factors combination
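
A hedged sketch of how these options might be combined; the flag name `--fixed-param-strategy` is an assumption (only the strategy names and `--omp-num-threads` appear in the notes above), and the data/output arguments are placeholders:

```bash
# Train only the decoder (assumed flag name: --fixed-param-strategy),
# using 4 CPU threads.
python -m sockeye.train \
    --fixed-param-strategy encoder_and_source_embeddings \
    --omp-num-threads 4 \
    --prepared-data prepared_data \
    --validation-source dev.de --validation-target dev.en \
    --output model_dir
```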

## [2.0.0]

### Changed
- Implemented an attention-based copy mechanism as described in [Jia, Robin, and Percy Liang. "Data recombination for neural semantic parsing." (2016)](https://arxiv.org/abs/1606.03622).
- Added a <ptr\d+> special symbol to explicitly point at an input token in the target sequence
- Changed the decoder interface to pass both the decoder data and the pointer data.
- Changed the AttentionState named tuple to add the raw attention scores.

- Update to [MXNet 1.5.0](https://github.com/apache/incubator-mxnet/tree/1.5.0)
- Moved `SockeyeModel` implementation and all layers to [Gluon API](http://mxnet.incubator.apache.org/versions/master/gluon/index.html)
- Removed support for Python 3.4.
- Removed image captioning module
- Removed outdated Autopilot module
- Removed unused training options: Eve, Nadam, RMSProp, Nag, Adagrad, and Adadelta optimizers, `fixed-step` and `fixed-rate-inv-t` learning rate schedulers
- Updated and renamed learning rate scheduler `fixed-rate-inv-sqrt-t` -> `inv-sqrt-decay`
- Added script for plotting metrics files: [sockeye_contrib/plot_metrics.py](sockeye_contrib/plot_metrics.py)
- Removed option `--weight-tying`. Weight tying is enabled by default, disable with `--weight-tying-type none`.

### Added

- Added distributed training support with Horovod/OpenMPI. Use `horovodrun` and the `--horovod` training flag (a combined usage sketch follows this list).
- Added Dockerfiles that build a Sockeye image with all features enabled. See [sockeye_contrib/docker](sockeye_contrib/docker).
- Added `none` learning rate scheduler (use a fixed rate throughout training)
- Added `linear-decay` learning rate scheduler
- Added training option `--learning-rate-t-scale` for time-based decay schedulers
- Added support for MXNet's [Automatic Mixed Precision](https://mxnet.incubator.apache.org/versions/master/tutorials/amp/amp_tutorial.html). Activate with the `--amp` training flag. For best results, make sure as many model dimensions as possible are multiples of 8.
- Added options for making various model dimensions multiples of a given value. For example, use `--pad-vocab-to-multiple-of 8`, `--bucket-width 8 --no-bucket-scaling`, and `--round-batch-sizes-to-multiple-of 8` with AMP training.
- Added [GluonNLP](http://gluon-nlp.mxnet.io/)'s BERTAdam optimizer, an implementation of the Adam variant used by Devlin et al. ([2018](https://arxiv.org/pdf/1810.04805.pdf)). Use `--optimizer bertadam`.
- Added training option `--checkpoint-improvement-threshold` to set the amount of metric improvement required over the window of previous checkpoints to be considered actual model improvement (used with `--max-num-checkpoint-not-improved`).
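
A sketch combining several of the options listed above (Horovod distributed training, AMP, the multiple-of-8 sizing flags, the BERTAdam optimizer, and the checkpoint improvement threshold); the `horovodrun` launch arguments and the data/model paths are placeholders, not taken from this changelog:

```bash
# Launch 4 training workers on the local host with mixed precision.
horovodrun -np 4 python -m sockeye.train \
    --horovod \
    --amp \
    --pad-vocab-to-multiple-of 8 \
    --bucket-width 8 --no-bucket-scaling \
    --round-batch-sizes-to-multiple-of 8 \
    --optimizer bertadam \
    --checkpoint-improvement-threshold 0.001 \
    --prepared-data prepared_data \
    --validation-source dev.de --validation-target dev.en \
    --output model_dir
```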

## [1.18.103]
### Added
- Added ability to score image-sentence pairs by extending the scoring feature originally implemented for machine
translation to the image captioning module.

## [1.18.102]
@@ -95,7 +153,7 @@ Each version section may have subsections for: _Added_, _Changed_, _Removed

## [1.18.96]
### Changed
- Extracted prepare vocab functionality in the build vocab step into its own function. This matches the pattern in prepare data and train where the main() function only has argparsing, and it invokes a separate function to do the work. This is to allow modules that import this one to circumvent the command line.

## [1.18.95]
### Changed
3 changes: 2 additions & 1 deletion MANIFEST.in
@@ -8,6 +8,7 @@ include .flake8
include typechecked-files
include test/data/config_with_missing_attributes.yaml
include sockeye/git_version.py
include *.bib
recursive-include .github *
include CONTRIBUTING.md
exclude *.sh
@@ -21,8 +22,8 @@ recursive-include docs *.html
recursive-include docs *.png
recursive-include docs *.md
recursive-include docs *.py
recursive-include docs *.sh
recursive-include docs *.yml
recursive-include docs *.ico
recursive-include docs *.css
recursive-include test *.txt
include docs/tutorials/multilingual/prepare-iwslt17-multilingual.sh
80 changes: 69 additions & 11 deletions README.md
@@ -6,29 +6,87 @@
[![Build Status](https://travis-ci.org/awslabs/sockeye.svg?branch=master)](https://travis-ci.org/awslabs/sockeye)
[![Documentation Status](https://readthedocs.org/projects/sockeye/badge/?version=latest)](http://sockeye.readthedocs.io/en/latest/?badge=latest)

This package contains the Sockeye project, a sequence-to-sequence framework for Neural Machine Translation based on Apache MXNet (Incubating).
It implements state-of-the-art encoder-decoder architectures, such as:
This package contains the Sockeye project, an open-source sequence-to-sequence framework for Neural Machine Translation based on [Apache MXNet (Incubating)](http://mxnet.incubator.apache.org/). Sockeye powers several Machine Translation use cases, including [Amazon Translate](https://aws.amazon.com/translate/). The framework implements state-of-the-art machine translation models with Transformers ([Vaswani et al, 2017](https://arxiv.org/abs/1706.03762)). Recent developments and changes are tracked in our [CHANGELOG](https://github.com/awslabs/sockeye/blob/master/CHANGELOG.md).

- Deep Recurrent Neural Networks with Attention [[Bahdanau, '14](https://arxiv.org/abs/1409.0473)]
- Transformer Models with self-attention [[Vaswani et al, '17](https://arxiv.org/abs/1706.03762)]
- Fully convolutional sequence-to-sequence models [[Gehring et al, '17](https://arxiv.org/abs/1705.03122)]
If you have any questions or discover problems, please [file an issue](https://github.com/awslabs/sockeye/issues/new). You can also send questions to *sockeye-dev-at-amazon-dot-com*.

In addition, it provides an experimental [image-to-description module](https://github.com/awslabs/sockeye/tree/master/sockeye/image_captioning) that can be used for image captioning.
Recent developments and changes are tracked in our [CHANGELOG](https://github.com/awslabs/sockeye/blob/master/CHANGELOG.md).
#### Version 2.0

If you have any questions or discover problems, please [file an issue](https://github.com/awslabs/sockeye/issues/new).
You can also send questions to *sockeye-dev-at-amazon-dot-com*.
With version 2.0, we have updated the usage of MXNet by moving to the [Gluon API](https://mxnet.incubator.apache.org/api/python/docs/api/gluon/index.html) and adding support for several state-of-the-art features such as distributed training, low-precision training and decoding, as well as easier debugging of neural network architectures.
In the context of this rewrite, we also trimmed down the large feature set of version 1.18.x to concentrate on the most important types of models and features, to provide a maintainable framework that is suitable for fast prototyping, research, and production.
We welcome Pull Requests if you would like to help with adding back features when needed.

## Installation

The easiest way to run Sockeye is with [Docker](https://www.docker.com) or [nvidia-docker](https://github.com/NVIDIA/nvidia-docker).
To build a Sockeye image with all features enabled, run the build script:

```bash
python3 sockeye_contrib/docker/build.py
```

See the [Dockerfile documentation](sockeye_contrib/docker) for more information.

## Documentation

For information on how to use Sockeye, please visit [our documentation](https://awslabs.github.io/sockeye/).
Developers may be interested in our [developer guidelines](https://awslabs.github.io/sockeye/development.html).

- For a quickstart guide to training a large data WMT model, see the [WMT 2018 German-English tutorial](https://awslabs.github.io/sockeye/tutorials/wmt_large.html).
- Developers may be interested in our [developer guidelines](https://awslabs.github.io/sockeye/development.html).

## Citation

For technical information about Sockeye, see our paper on the arXiv ([BibTeX](sockeye.bib)):
For more information about Sockeye 2, see our paper ([BibTeX](sockeye2.bib)):

> Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar. 2020.
> [Sockeye 2: A Toolkit for Neural Machine Translation](https://www.amazon.science/publications/sockeye-2-a-toolkit-for-neural-machine-translation). To appear in EAMT 2020, project track.
For technical information about Sockeye 1, see our paper on the arXiv ([BibTeX](sockeye.bib)):

> Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton and Matt Post. 2017.
> [Sockeye: A Toolkit for Neural Machine Translation](https://arxiv.org/abs/1712.05690). ArXiv e-prints.
## Research with Sockeye

Sockeye has been used for both academic and industrial research. A list of known publications that use Sockeye is shown below.
If you know more, please let us know or submit a pull request (last updated: April 2020).

### 2020

* Dinu, Georgiana, Prashant Mathur, Marcello Federico, Stanislas Lauly, Yaser Al-Onaizan. "Joint translation and unit conversion for end-to-end localization." arXiv preprint arXiv:2004.05219 (2020)
* Hisamoto, Sorami, Matt Post, Kevin Duh. "Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?" Transactions of the Association for Computational Linguistics, Volume 8 (2020)
* Naradowsky, Jason, Xuan Zhan, Kevin Duh. "Machine Translation System Selection from Bandit Feedback." arXiv preprint arXiv:2002.09646 (2020)
* Niu, Xing, Marine Carpuat. "Controlling Neural Machine Translation Formality with Synthetic Supervision." Proceedings of AAAI (2020)

### 2019

* Agrawal, Sweta, Marine Carpuat. "Controlling Text Complexity in Neural Machine Translation." Proceedings of EMNLP (2019)
* Beck, Daniel, Trevor Cohn, Gholamreza Haffari. "Neural Speech Translation using Lattice Transformations and Graph Networks." Proceedings of TextGraphs-13 (EMNLP 2019)
* Currey, Anna, Kenneth Heafield. "Zero-Resource Neural Machine Translation with Monolingual Pivot Data." Proceedings of EMNLP (2019)
* Gupta, Prabhakar, Mayank Sharma. "Unsupervised Translation Quality Estimation for Digital Entertainment Content Subtitles." IEEE International Journal of Semantic Computing (2019)
* Hu, J. Edward, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. "Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting." Proceedings of NAACL-HLT (2019)
* Rosendahl, Jan, Christian Herold, Yunsu Kim, Miguel Graça, Weiyue Wang, Parnia Bahar, Yingbo Gao and Hermann Ney. “The RWTH Aachen University Machine Translation Systems for WMT 2019.” Proceedings of the 4th WMT: Research Papers (2019)
* Thompson, Brian, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. "Overcoming catastrophic forgetting during domain adaptation of neural machine translation." Proceedings of NAACL-HLT 2019 (2019)
* Tättar, Andre, Elizaveta Korotkova, Mark Fishel. “University of Tartu’s Multilingual Multi-domain WMT19 News Translation Shared Task Submission.” Proceedings of the 4th WMT: Research Papers (2019)

### 2018

* Domhan, Tobias. "How Much Attention Do You Need? A Granular Analysis of Neural Machine Translation Architectures". Proceedings of 56th ACL (2018)
* Kim, Yunsu, Yingbo Gao, and Hermann Ney. "Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies." arXiv preprint arXiv:1905.05475 (2019)
* Korotkova, Elizaveta, Maksym Del, and Mark Fishel. "Monolingual and Cross-lingual Zero-shot Style Transfer." arXiv preprint arXiv:1808.00179 (2018)
* Niu, Xing, Michael Denkowski, and Marine Carpuat. "Bi-directional neural machine translation with synthetic parallel data." arXiv preprint arXiv:1805.11213 (2018)
* Niu, Xing, Sudha Rao, and Marine Carpuat. "Multi-Task Neural Models for Translating Between Styles Within and Across Languages." COLING (2018)
* Post, Matt and David Vilar. "Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation." Proceedings of NAACL-HLT (2018)
* Schamper, Julian, Jan Rosendahl, Parnia Bahar, Yunsu Kim, Arne Nix, and Hermann Ney. "The RWTH Aachen University Supervised Machine Translation Systems for WMT 2018." Proceedings of the 3rd WMT: Shared Task Papers (2018)
* Schulz, Philip, Wilker Aziz, and Trevor Cohn. "A stochastic decoder for neural machine translation." arXiv preprint arXiv:1805.10844 (2018)
* Alkhouli, Tamer, Gabriel Bretschner, and Hermann Ney. "On The Alignment Problem In Multi-Head Attention-Based Neural Machine Translation." Proceedings of the 3rd WMT: Research Papers (2018)
* Tang, Gongbo, Rico Sennrich, and Joakim Nivre. "An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation." Proceedings of 3rd WMT: Research Papers (2018)
* Thompson, Brian, Huda Khayrallah, Antonios Anastasopoulos, Arya McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, and Philipp Koehn. "Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine Translation." arXiv preprint arXiv:1809.05218 (2018)
* Vilar, David. "Learning Hidden Unit Contribution for Adapting Neural Machine Translation Models." Proceedings of NAACL-HLT (2018)
* Vyas, Yogarshi, Xing Niu and Marine Carpuat. “Identifying Semantic Divergences in Parallel Text without Annotations.” Proceedings of NAACL-HLT (2018)
* Wang, Weiyue, Derui Zhu, Tamer Alkhouli, Zixuan Gan, and Hermann Ney. "Neural Hidden Markov Model for Machine Translation". Proceedings of 56th ACL (2018)
* Zhang, Xuan, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. "An Empirical Exploration of Curriculum Learning for Neural Machine Translation." arXiv preprint arXiv:1811.00739 (2018)

### 2017

* Domhan, Tobias and Felix Hieber. "Using target-side monolingual data for neural machine translation through multi-task learning." Proceedings of EMNLP (2017).