Publications
If you are using OpenNMT for academic work, please cite the initial system demonstration paper published in ACL 2017:
```bibtex
@inproceedings{klein-etal-2017-opennmt,
    title = "{O}pen{NMT}: Open-Source Toolkit for Neural Machine Translation",
    author = "Klein, Guillaume and
      Kim, Yoon and
      Deng, Yuntian and
      Senellart, Jean and
      Rush, Alexander",
    booktitle = "Proceedings of {ACL} 2017, System Demonstrations",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P17-4012",
    pages = "67--72",
}
```
More recent system papers describing the toolkit have also been published:
- The OpenNMT Neural Machine Translation Toolkit: 2020 Edition. Guillaume Klein, François Hernandez, Vincent Nguyen, Jean Senellart. AMTA 2020
- OpenNMT: Neural Machine Translation Toolkit. Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart, Alexander Rush. AMTA 2018
Research
Here is a list of selected papers using OpenNMT:
- Challenges in Data-to-Document Generation. Sam Wiseman, Stuart M. Shieber, Alexander M. Rush. 2017.
- Model compression via distillation and quantization. Antonio Polino, Razvan Pascanu, Dan Alistarh. 2018.
- A causal framework for explaining the predictions of black-box sequence-to-sequence models. David Alvarez-Melis, Tommi S. Jaakkola. 2017.
- Deep Learning Scaling is Predictable, Empirically. Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory F. Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, Yanqi Zhou. 2017.
- What You Get Is What You See: A Visual Markup Decompiler. Yuntian Deng, Anssi Kanervisto, Alexander M. Rush. 2016.
- Semantically Equivalent Adversarial Rules for Debugging NLP models. Ribeiro, Marco Tulio, Singh, Sameer, Guestrin, Carlos. 2018.
- A Regularized Framework for Sparse and Structured Neural Attention. Niculae, Vlad, Blondel, Mathieu. 2017.
- Controllable Invariance through Adversarial Feature Learning. Xie, Qizhe, Dai, Zihang, Du, Yulun, Hovy, Eduard, Neubig, Graham. 2017.
- Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations. Rik van Noord, Johan Bos. 2017.
- When to Finish? Optimal Beam Search for Neural Text Generation (modulo beam size). Liang Huang, Kai Zhao, Mingbo Ma. 2018.
- Handling Homographs in Neural Machine Translation. Frederick Liu, Han Lu, Graham Neubig. 2017.
- Bottom-Up Abstractive Summarization. Sebastian Gehrmann, Yuntian Deng, Alexander M. Rush. 2018.
- Dataset for a Neural Natural Language Interface for Databases (NNLIDB). Florin Brad, Radu Iacob, Ionel Hosu, Traian Rebedea. 2017.
- Coarse-to-Fine Attention Models for Document Summarization. Ling, Jeffrey, Rush, Alexander. 2017.
- Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models. Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush. 2018.
Find more references on Google Scholar.