ADAPT researchers Wandri Jooste, Rejwanul Haque and Andy Way have recently published the article ‘Knowledge Distillation: A Method for Making Neural Machine Translation More Efficient’, now available open access from MDPI.
As the researchers outline in their article, Neural Machine Translation (NMT) systems deliver significantly better translation quality than the Statistical Machine Translation (SMT) systems that preceded them. However, NMT models require far more computing power and data than SMT models, which makes them unsustainable to run in the long term and of limited benefit in low-resource scenarios. In their work, the researchers provide an in-depth investigation of knowledge distillation on a simulated low-resource German-to-English translation task, demonstrating that sequence-level knowledge distillation could be a potential solution to this problem.
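For readers unfamiliar with the technique: in sequence-level knowledge distillation, a large trained teacher model re-translates the source side of the training corpus, and a smaller student model is then trained on the (source, teacher output) pairs as if they were ordinary parallel data. The sketch below is a minimal illustration of this data-creation step, not the setup used in the article; the choice of the public Helsinki-NLP/opus-mt-de-en model as a stand-in teacher and the toy sentences are assumptions made for the example.

```python
# Sequence-level knowledge distillation, data-creation step:
# a trained teacher re-translates the training sources with beam
# search, and its 1-best outputs replace the human references as
# the student's training targets. The teacher model here is an
# illustrative stand-in, not the one used in the article.
from transformers import MarianMTModel, MarianTokenizer

teacher_name = "Helsinki-NLP/opus-mt-de-en"  # assumed stand-in teacher
tokenizer = MarianTokenizer.from_pretrained(teacher_name)
teacher = MarianMTModel.from_pretrained(teacher_name)

# Source side of the (low-resource) training corpus (toy examples).
train_sources = [
    "Maschinelle Übersetzung wird immer besser.",
    "Das Modell benötigt weniger Rechenleistung.",
]

# Teacher decodes each source sentence with beam search.
inputs = tokenizer(train_sources, return_tensors="pt", padding=True)
outputs = teacher.generate(**inputs, num_beams=5, max_length=128)
distilled_targets = tokenizer.batch_decode(outputs, skip_special_tokens=True)

# The (source, distilled_target) pairs form the student's training set;
# the student is then trained on them like any parallel corpus.
for src, tgt in zip(train_sources, distilled_targets):
    print(f"{src}\t{tgt}")
```

Because the student learns from the teacher's simpler, more consistent outputs rather than the full variability of human references, it can approach the teacher's quality with far fewer parameters and much lower compute at inference time.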
Read the full article here.