The Effect of Encoder and Decoder Stack Depth of Transformer Model to Performance of Machine Translator for Low-resource Languages
Yaya Heryadi, Bambang Dwi Wijanarko, Dina Fitria Murad, Cuk Tho, Kiyota Hashimoto

Last modified: 2022-06-09

Abstract


This paper presents experimental results on the effect of encoder-decoder stack depth on the performance of a vanilla transformer model used as a neural machine translator for low-resource languages. In this study, a pretrained transformer model is fine-tuned on a parallel corpus of the Indonesian and Sundanese languages. The results show that vanilla transformer models with stack depths of 2, 4, or 6 outperform the model with a stack depth of 8. In particular, the highest performance, achieved by the transformer model with a stack depth of 2, was 0.99 training accuracy, 0.97 validation accuracy, and 0.99 testing similarity.
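The single architectural variable compared in the study is the encoder-decoder stack depth. As a rough illustration of that hyperparameter only, the sketch below instantiates PyTorch's nn.Transformer at each depth evaluated (2, 4, 6, and 8); the d_model, nhead, and feedforward sizes are illustrative defaults rather than the authors' configuration, and the paper itself fine-tunes a pretrained model rather than training these from scratch.

```python
import torch
import torch.nn as nn

def build_transformer(stack_depth: int, d_model: int = 512, nhead: int = 8) -> nn.Transformer:
    """Vanilla transformer whose encoder and decoder stacks share the same depth."""
    return nn.Transformer(
        d_model=d_model,
        nhead=nhead,
        num_encoder_layers=stack_depth,  # depth of the encoder stack
        num_decoder_layers=stack_depth,  # depth of the decoder stack
        dim_feedforward=2048,            # illustrative default, not the paper's setting
        batch_first=True,
    )

# One model per stack depth compared in the study.
models = {depth: build_transformer(depth) for depth in (2, 4, 6, 8)}

# Sanity check: dummy source/target embeddings pass through each model.
src = torch.rand(1, 10, 512)  # (batch, source length, d_model)
tgt = torch.rand(1, 7, 512)   # (batch, target length, d_model)
for depth, model in models.items():
    out = model(src, tgt)
    print(depth, out.shape)   # torch.Size([1, 7, 512])
```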

Keywords


transformer model, machine translation, deep learning
