With the invention of the computer came a seemingly basic goal: automatic translation of text. This task, however, proved extremely complex because language is ambiguous and constantly evolving. Early systems were rule-based; in the 1990s they gave way to statistical methods, and today neural machine translation (NMT) has become the state of the art. To help you understand NMT, we'll go over the history of rule-based machine translation (RBMT) and statistical machine translation (SMT) and explain why they're being replaced by NMT.
Because there is rarely a single correct translation of a sentence from one language to another, automatic machine translation is incredibly difficult. The first attempt at solving this problem was RBMT, which relies on rules, written by linguists at the lexical, syntactic, and semantic levels, to dictate the conversion of text from the source language to the target language. However, RBMT is severely limited by the enormous number of rules, and exceptions to those rules, that every language requires.
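To get a feel for why hand-written rules break down, here's a deliberately tiny sketch in the spirit of RBMT (the lexicon and the single reordering rule are invented for illustration, not taken from any real system):

```python
# Toy RBMT sketch: a bilingual lexicon plus one syntactic rule
# (English adjective-noun -> French noun-adjective). All entries are hypothetical.

LEXICON = {"the": "le", "red": "rouge", "car": "voiture"}
ADJECTIVES = {"red"}

def translate(sentence):
    words = sentence.lower().split()
    # Syntactic rule: French usually places adjectives after the noun.
    reordered = []
    i = 0
    while i < len(words):
        if i + 1 < len(words) and words[i] in ADJECTIVES:
            reordered += [words[i + 1], words[i]]  # swap adjective and noun
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Lexical rule: word-for-word dictionary lookup.
    return " ".join(LEXICON.get(w, w) for w in reordered)

print(translate("the red car"))  # -> "le voiture rouge"
```

Notice the output is already wrong: "voiture" is feminine, so it should be "la voiture rouge". Handling that requires a gender-agreement rule, which in turn has its own exceptions. Multiply this by every phenomenon in a language and you can see why RBMT collapses under its own rulebook.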
SMT was developed in the 1990s as a solution to the drawbacks of RBMT. Because it uses a statistical model to find the most probable output for a given input, it requires a large corpus of sample translations to learn from. It quickly outperformed RBMT, but SMT has its issues too: it translates phrase by phrase, which means that broader context is often lost in long texts. The answer to this problem? NMT.
NMT uses neural networks to learn a statistical model for translation. Unlike SMT, which is composed of many separately tuned subcomponents, NMT builds and trains a single (and huge) neural network that maps directly from input to output. Its other key strength is attention: during translation, the model can focus on different parts of the source sentence, gathering the semantic details needed to produce each successive word. Any questions?
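To make "focusing on different parts of the sentence" concrete, here is a minimal sketch of dot-product attention in pure Python. The vectors stand in for encoder states (one per source word) and a decoder query; in a real NMT model all of these are learned, whereas here they are made-up numbers:

```python
import math

def softmax(xs):
    # Normalise scores into weights that sum to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, encoder_states):
    # Score each source position by its similarity (dot product) to the query...
    scores = [sum(q * k for q, k in zip(query, state)) for state in encoder_states]
    # ...then return the weighted average of the states: the context vector.
    weights = softmax(scores)
    return [sum(w * s[i] for w, s in zip(weights, encoder_states))
            for i in range(len(query))]

# Three hypothetical source-word states; the query is most similar to the second.
states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
context = attend([0.0, 2.0], states)
print(context)  # context leans heavily toward the second state
```

The weights change at every decoding step, so each output word can draw on a different part of the source sentence. That is what lets NMT keep track of context that phrase-by-phrase SMT loses.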
Check out our other blog posts on NMT!