Dodrio

An interactive visualization system designed to help NLP researchers and practitioners analyze and compare attention weights in transformer-based models with linguistic knowledge.

DOI: 10.18653/v1/2021.acl-demo.16

For more information, check out our manuscript:

Dodrio: Exploring Transformer Models with Interactive Visualization. Zijie J. Wang, Robert Turko, and Duen Horng Chau. arXiv preprint 2021. arXiv:2103.14625.

Live Demo

For a live demo, visit: http://poloclub.github.io/dodrio/

Running Locally

Clone or download this repository:

git clone git@github.com:poloclub/dodrio.git

# use degit if you don't want to download commit histories
degit poloclub/dodrio

Install the dependencies:

npm install

Then run Dodrio:

npm run dev

Navigate to localhost:5000. You should see Dodrio running in your browser :)

To see how we trained the Transformer or customize the visualization with a different model or dataset, visit the ./data-generation/ directory.
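The scripts in that directory are the source of truth. As a rough illustration only, here is a minimal sketch (not Dodrio's actual pipeline) of how attention weights can be exported from a transformer model with Hugging Face Transformers; the model name, example sentence, and JSON output format are placeholder assumptions:

```python
# Minimal sketch: export per-layer attention weights to JSON.
# Assumes bert-base-uncased; Dodrio's real data format may differ.
import json
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "Dodrio visualizes attention weights."  # placeholder input
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each shaped
# (batch, num_heads, seq_len, seq_len); drop the batch dimension.
attentions = [layer[0].tolist() for layer in outputs.attentions]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

with open("attention.json", "w") as f:
    json.dump({"tokens": tokens, "attentions": attentions}, f)
```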

Credits

Dodrio was created by Jay Wang, Robert Turko, and Polo Chau.

Citation

@inproceedings{wangDodrioExploringTransformer2021,
  title = {Dodrio: {{Exploring Transformer Models}} with {{Interactive Visualization}}},
  shorttitle = {Dodrio},
  booktitle = {Proceedings of the 59th {{Annual Meeting}} of the {{Association}} for {{Computational Linguistics}} and the 11th {{International Joint Conference}} on {{Natural Language Processing}}: {{System Demonstrations}}},
  author = {Wang, Zijie J. and Turko, Robert and Chau, Duen Horng},
  year = {2021},
  pages = {132--141},
  publisher = {{Association for Computational Linguistics}},
  address = {{Online}},
  language = {en}
}

License

The software is available under the MIT License.

Contact

If you have any questions, feel free to open an issue or contact Jay Wang.
