Title: Sign Language Translation Using Multi Context Transformer
Abstract: Sign Language Translation (SLT) is an important sequence-to-sequence problem that has been challenging to solve because of the various factors that influence the meaning of a sign. In this paper, we implement a Multi Context Transformer architecture that attempts to solve this problem by operating on batched video segment representations called context vectors, intended to capture the various temporal dependencies between frames and thereby accurately translate the input signs. Being end-to-end, this architecture also eliminates the need for sign language intermediaries known as glosses. Our model produces results that are on par with the state of the art (98.19% score retention on ROUGE-L and 86.65% on BLEU-4) while achieving a 30.88% reduction in model parameters, which makes the model suitable for real-world applications. Our implementation is available on GitHub (https://github.com/MBadriNarayanan/MultiContextTransformer).
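The core idea of operating on multi-scale "context vectors" can be illustrated with a minimal sketch: frame-level features are pooled over temporal windows of several sizes, producing parallel segment-level streams that a transformer could then attend over. The window sizes, pooling choice (mean), and feature dimensions below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def context_vectors(frames: np.ndarray, window: int) -> np.ndarray:
    """Pool consecutive frame features over a temporal window
    (stride = window), yielding one context vector per segment.
    Mean pooling is an assumption; the paper's segment encoder
    may differ."""
    n, d = frames.shape
    num_segments = n // window  # drop any trailing partial segment
    return frames[: num_segments * window].reshape(num_segments, window, d).mean(axis=1)

# Toy stand-in for per-frame CNN embeddings: 32 frames, 4-dim features.
frames = np.arange(32 * 4, dtype=float).reshape(32, 4)

# Several window sizes capture temporal dependencies at different scales;
# these particular sizes (4, 8, 16) are hypothetical.
streams = {w: context_vectors(frames, w) for w in (4, 8, 16)}
shapes = {w: s.shape for w, s in streams.items()}
# shapes -> {4: (8, 4), 8: (4, 4), 16: (2, 4)}
```

Each stream would then be fed to the transformer as a separate context, letting the model combine short- and long-range temporal information without gloss supervision.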
Publication Year: 2021
Publication Date: 2021-01-01
Language: en
Type: book-chapter
Indexed In: ['crossref']
Access and Citation
Cited By Count: 3