
Research Article

EEO. 2021; 20(2): 3110-3116

Deep Learning Model for Sequential Data - Machine Language Translation

Prakash Srivastava, Prem Ranjan Pattanayak, Ms. Shivani Arora


This paper discusses Deep Neural Networks (DNNs) and deep learning as they relate to machine translation, a form of natural language processing. DNNs are now a key component of machine learning methodologies. One of the most effective techniques is the recursive recurrent neural network (R2NN), which combines recursive and recurrent neural networks (such as the recursive autoencoder). In this research, semi-supervised learning techniques are used to train an LSTM for reordering from the source to the target language. The Seq2word tool is used to create word vectors for the source language, and an autoencoder reconstructs the vectors for the target language in a tree structure; the output of Seq2word is crucial for word alignment of the input vectors. Because of the complexity of the LSTM structure, training seq2seq models on the large data set is time-consuming. Translation performance is evaluated using the BLEU score.
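The abstract states that translation quality is evaluated with the BLEU score. A minimal single-reference, sentence-level sketch of that metric in pure Python is shown below (the full corpus-level metric aggregates counts over many sentences; this simplified version is illustrative, not the paper's evaluation code):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with uniform n-gram weights.

    `candidate` and `reference` are token lists. This single-reference
    sketch is illustrative only; the standard metric works at corpus level.
    """
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    log_precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0  # any zero n-gram precision zeroes this simplified score
        log_precisions.append(math.log(overlap / total))

    # Brevity penalty discourages translations shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 1.0, while a candidate sharing no n-grams with the reference scores 0.0.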

Key words: DNN, LSTM, deep learning model, Machine language translation
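The LSTM named in the keywords and abstract is built from a gated recurrent cell. A minimal sketch of one forward step of a single-unit LSTM cell in pure Python follows; all parameter names are illustrative and not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One forward step of a single-unit (scalar) LSTM cell.

    p is a dict of scalar weights and biases for the four gates.
    These parameter names are illustrative, not from the paper.
    """
    f = sigmoid(p["wf_x"] * x + p["wf_h"] * h_prev + p["bf"])    # forget gate
    i = sigmoid(p["wi_x"] * x + p["wi_h"] * h_prev + p["bi"])    # input gate
    g = math.tanh(p["wg_x"] * x + p["wg_h"] * h_prev + p["bg"])  # candidate state
    o = sigmoid(p["wo_x"] * x + p["wo_h"] * h_prev + p["bo"])    # output gate
    c = f * c_prev + i * g   # new cell state: gated mix of memory and candidate
    h = o * math.tanh(c)     # new hidden state
    return h, c
```

In a seq2seq translation model, such cells are stacked into vector-valued encoder and decoder layers; this scalar version only makes the gating arithmetic explicit.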

