1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology; 2. Yunnan Key Laboratory of Artificial Intelligence
Abstract: Due to significant grammatical differences between Chinese and Vietnamese and a scarcity of linguistic resources for the pair, Chinese-Vietnamese neural machine translation struggles with the accurate translation of nouns. This paper proposes a novel multimodal neural machine translation method that integrates a text-based pre-trained model with a vision-language joint pre-training model. The text-based model captures deep linguistic structure and semantics, while the vision-language model provides visual context related to the text, helping the model understand and translate nouns more accurately. The two models are combined through a streamlined, efficient mapping network, and multimodal information is dynamically integrated via a Gumbel gating module to optimize the translation output. On the Chinese-to-Vietnamese and Vietnamese-to-Chinese translation tasks, this method achieves improvements of 7.13 and 4.27 BLEU points, respectively, over the traditional Transformer model.
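The abstract describes a Gumbel gating module that dynamically decides, per position, how much the visual context should influence the text representation. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of one plausible form: a linear scorer produces two logits (text-only vs. text+visual fused), a Gumbel-Softmax sample turns them into gate weights, and the output mixes the two candidate representations. All names (`gumbel_gate`, the additive fusion, the scorer parameters `w`, `b`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Sample from the Gumbel-Softmax (Concrete) distribution:
    # add Gumbel(0, 1) noise to the logits, then apply a temperature softmax.
    rng = np.random.default_rng() if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))
    y = (logits + gumbel) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gumbel_gate(text_feat, visual_feat, w, b, tau=1.0, rng=None):
    # Hypothetical gating: choose between the text-only representation and a
    # fused text+visual representation. Additive fusion is an assumption; the
    # paper's mapping network would project visual features into this space.
    fused = text_feat + visual_feat
    scorer_in = np.concatenate([text_feat, visual_feat], axis=-1)
    logits = scorer_in @ w + b                  # (..., 2) gate logits
    g = gumbel_softmax(logits, tau, rng)        # (..., 2) gate weights
    # g[..., 0] weights the text branch, g[..., 1] the fused branch.
    return g[..., :1] * text_feat + g[..., 1:] * fused

rng = np.random.default_rng(0)
d = 8
text = rng.standard_normal((2, 5, d))           # (batch, seq_len, d_model)
vis = rng.standard_normal((2, 5, d))            # projected visual context
w = rng.standard_normal((2 * d, 2)) * 0.1       # scorer weights (illustrative)
b = np.zeros(2)
out = gumbel_gate(text, vis, w, b, rng=rng)
print(out.shape)  # (2, 5, 8)
```

In training, the Gumbel-Softmax relaxation keeps the gate differentiable, so the model can learn when visual context helps (e.g. for concrete nouns) and when to fall back on the text-only representation.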
Key words: Chinese-Vietnamese neural machine translation; vision-language joint pre-training; multimodal; attention