Jooste, Wandri, Way, Andy, Haque, Rejwanul and Superbo, Riccardo (2022) Knowledge Distillation for Sustainable Neural Machine Translation. In: Proceedings of the 15th Biennial Conference of the Association for Machine Translation in the Americas (Volume 2: Users and Providers Track and Government Track). Association for Machine Translation in the Americas, Orlando, USA, pp. 221-230.
Full text not available from this repository.

Abstract
Knowledge distillation (KD) can be used to reduce model size and training time without significant loss in performance. However, the distillation process requires the translation of sizeable data sets, and this translation is usually performed by large, cumbersome teacher models. Producing such translations for KD is expensive in both time and cost, which is a significant concern for translation service providers, and the process can also increase the carbon footprint of training. In this work, we tested different variants of a teacher model for KD, tracked the power consumption of the GPUs used during translation, recorded overall translation time, estimated translation cost, and measured the accuracy of the resulting student models. Our findings demonstrate to the translation industry a cost-effective, high-quality alternative to standard KD training methods.
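The measurement setup the abstract describes (metering GPU power while a teacher model translates a corpus for sequence-level KD) can be sketched in a few lines. This is a minimal illustration rather than the authors' code: it assumes NVIDIA GPUs exposed through the pynvml bindings, and `translate_fn` is a hypothetical stand-in for whatever decoding call the teacher model provides.

```python
# Minimal sketch: sample GPU power draw while a teacher model translates
# a corpus for sequence-level knowledge distillation. Assumes NVIDIA GPUs
# and the pynvml bindings; `translate_fn` is a hypothetical stand-in for
# the teacher model's decoding call.
import time
import pynvml

def measure_energy(translate_fn, sentences, gpu_index=0):
    """Translate `sentences` with `translate_fn`, sampling GPU power.

    Returns the translations plus a rough energy estimate in watt-hours,
    computed as mean sampled power draw times wall-clock time.
    """
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    power_samples = []  # watts, one sample per translated sentence
    translations = []
    start = time.time()
    for sentence in sentences:
        translations.append(translate_fn(sentence))
        # nvmlDeviceGetPowerUsage reports milliwatts; convert to watts.
        power_samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
    elapsed_hours = (time.time() - start) / 3600.0
    pynvml.nvmlShutdown()
    mean_watts = sum(power_samples) / len(power_samples)
    return translations, mean_watts * elapsed_hours
```

In the KD setting, the returned translations would serve as training targets for the smaller student model, while the watt-hour estimate feeds the time, cost, and carbon accounting the paper reports.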
| Item Type | Book Section |
|---|---|
| Subjects | Q Science > QA Mathematics > Electronic computers. Computer science; T Technology > T Technology (General) > Information Technology > Electronic computers. Computer science; P Language and Literature > P Philology. Linguistics > Language Services |
| Divisions | School of Computing > Staff Research and Publications |
| Depositing User | Clara Chan |
| Date Deposited | 26 Sep 2022 10:56 |
| Last Modified | 27 Sep 2022 12:33 |
| URI | https://norma.ncirl.ie/id/eprint/5791 |