Detangling the modern Tower of Babel
Communicating with each other has always been a daunting task, despite the ability to do so being nothing less than a miracle. Even within the bounds of the same language, there’s more than a slight chance of misunderstandings, ranging from the minor to those of epic proportions. So the idea of trying to convey thoughts, feelings and facts to people who speak a language entirely different to yours is something of a hurdle, to say the least.
In the current social climate, being able to share information in as many languages as possible is more crucial than ever. Both online and offline communications require increased accessibility in the global village we now live in. The internet can easily be described as the biggest encyclopaedia ever, one that is constantly updated and the medium through which so many people get their information, from corporations and friends alike. Most of it is in English, with estimates putting the share near 60%, leaving it inaccessible to people who can’t speak or read the language. Conversely, the English-speaking community may miss out on information by searching with English keywords only. How can this mismatch be remedied? This language barrier overcome? The same way we do it when communicating by word of mouth: translation.
The volume of information on the internet is gargantuan. Translating all of it manually is technically possible but would take many lifetimes, during which far more translatable information would come on-stream. So, humans being humans, we saw an idea that seemed like a tedious slog and decided to automate it. But what does that entail? And how successful can the automation of such a process be?
Essentially, the translation process is made up of only two steps. Step one: decode the meaning of the words in the language you want to translate. Step two: encode this information into the target language. That’s it. When broken down into algorithmic basics like that, it sounds very easy, but it’s far from it.
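To make those two steps concrete, here’s a minimal Python skeleton of the decode-then-encode pipeline. Everything in it is illustrative: the Meaning placeholder and both stub functions stand in for machinery that, in a real system, hides enormous complexity.

```python
# A skeleton of the two-step view of translation, for illustration only.

from typing import Any

Meaning = Any  # placeholder for some language-independent representation


def decode(source_text: str) -> Meaning:
    """Step one: work out what the source-language text means."""
    raise NotImplementedError  # parsing, disambiguation, context...


def encode(meaning: Meaning) -> str:
    """Step two: express that meaning in the target language."""
    raise NotImplementedError  # word choice, grammar, register...


def translate(source_text: str) -> str:
    """Translation is just one step composed with the other."""
    return encode(decode(source_text))
```

The hard part, of course, is that neither stub can actually be filled in so cleanly.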
There’s a reason why critical thinking is a coveted skill and why people try so hard to attain it any way they can: being able to understand something in all its nuances is inherently difficult, let alone being able to carry those nuances over into another language. This is part of the reason why human translation is so expensive, with companies charging $0.10 to $0.30 per word depending on the languages involved.
The core of machine translation (MT) is substitution. Simple as that: change the words in one language into words in the other. This is the idea behind statistical machine translation, or SMT. The units used in this substitution differ from model to model, with some using whole phrases and others using individual words as the base for substitution. Word-based translation is only one example of the different approaches under the SMT umbrella. With this model, bilingual dictionaries are created for specific contexts and are updated over time to keep errors from creeping in. As I’m sure you can already tell, SMT of the kind described above is not a process that produces 100% accuracy.
Even languages from the same language family don’t always benefit directly from substitution translation, given how much they can differ. But what about languages that are very different, like English and Japanese: my native language and the one I’ve been studying for the better part of a decade? English and Japanese have plenty of obvious differences, from the characters used to slightly subtler ones like the grammatical structures themselves.
Let’s take a really simple example sentence to show why substitution translation works on some occasions and not on others. This is the kind of sentence you see in picture books teaching small children how to read. “I have an apple.” Easy enough, right?
SMT would translate this with the help of its dictionaries. From English to French, for example, this kind of substitution is accurate enough: “I have” translates directly to “J’ai” and “an apple” to “une pomme”. Substitution translation here makes one grammatically accurate sentence from another (there’s a sketch of it in code below). It doesn’t often work with English and Japanese as a language pair.
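Here’s what that substitution step might look like as a minimal Python sketch. The tiny phrase table is hand-written for this one sentence; a real SMT system would learn its tables from millions of aligned sentence pairs.

```python
# A toy phrase-substitution translator, not a real SMT system.
# The phrase table below is a hand-written stand-in for the bilingual
# dictionaries a real system learns from parallel text.

PHRASE_TABLE = {
    "i have": "j'ai",
    "an apple": "une pomme",
}

def substitute(sentence: str) -> str:
    """Greedily replace the longest known phrase at each position."""
    words = sentence.lower().split()
    output, i = [], 0
    while i < len(words):
        for length in range(len(words) - i, 0, -1):  # longest phrase first
            phrase = " ".join(words[i:i + length])
            if phrase in PHRASE_TABLE:
                output.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            output.append(words[i])  # unknown word: pass it through as-is
            i += 1
    return " ".join(output)

print(substitute("I have an apple"))  # j'ai une pomme
```

The greedy longest-match loop is the simplest possible strategy; real systems weigh many candidate segmentations against each other. For English and French, though, even this toy version produces a grammatical sentence.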
The first issue you run into when using substitution translation from English to Japanese is word order. The word order in the sentence “I have an apple” is subject, verb, object, standard in English. But Japanese word order is different: the same sentence comes in the order of subject, object, verb. In Japanese, “I have an apple” is 私はりんごを持っています (watashi wa ringo o motte imasu), literally “I apple have”. Even then, Japanese language rules mean that a sentence’s subject is often implied, so the “natural” version of the sentence doesn’t even include the subject; it’s just the object (りんご), something called a particle (を) and then the verb. The issue here is clear.
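To see the problem in code, here’s a toy comparison of blind substitution against substitution with a single reordering rule. As before, the dictionary and the rule are hand-written simplifications for this one sentence.

```python
# A toy illustration of why word-for-word substitution breaks down between
# English (subject-verb-object) and Japanese (subject-object-verb).

EN_JA = {
    "I": "私は",           # subject plus topic particle は
    "have": "持っています",  # verb, polite form
    "an": "",              # Japanese has no articles
    "apple": "りんごを",    # object plus object particle を
}

def naive_substitute(sentence: str) -> str:
    """Swap each English word for its Japanese entry, keeping English order."""
    return "".join(EN_JA[word] for word in sentence.split())

def substitute_with_reordering(sentence: str) -> str:
    """Move the verb to the end (subject, object, verb) before substituting."""
    subject, verb, *rest = sentence.split()
    return "".join(EN_JA[word] for word in [subject, *rest, verb])

print(naive_substitute("I have an apple"))            # 私は持っていますりんごを (garbled)
print(substitute_with_reordering("I have an apple"))  # 私はりんごを持っています (grammatical)
```

One hard-coded rule rescues this one sentence, but covering real language this way means piling rule upon rule, which is exactly where pure substitution hits its limits.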
One of the biggest hurdles in the development of machine translation is creating a program that allows a computer to understand a text in all its nuances, whether of context or of language patterns, and then to generate a translation that sounds like it was produced by a human being. Statistical machine translation still cannot do this, despite the effort put into training and running these applications.
But unsophisticated machine translation like SMT isn’t the only kind out there on the market. The study of translation using computers has been around since the 1950s and has become more and more complex over time. One of the more promising methods, with the potential to overcome the industry’s developmental hurdles, is called neural machine translation, or NMT. It’s based on neural networks, a machine learning approach modelled after the human brain, in which input data is passed through several layers of interconnected nodes before generating output. NMT programs are capable of learning, generating nuanced translations after being trained and producing more natural-sounding text than their statistical counterparts. Even with all these pros, though, NMT still struggles with long sentences and contextual ambiguities.
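For a feel of what “passed through interconnected nodes” means, here’s a toy forward pass in Python with NumPy. The dimensions are arbitrary and the weights are random, i.e. untrained; a real NMT model is a learned encoder-decoder network over subword tokens, not a four-number vector.

```python
# A toy forward pass through one hidden layer of a neural network.
# This illustrates the flow of data through nodes, nothing more.

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 input values, 8 hidden nodes, 3 output values.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def forward(x: np.ndarray) -> np.ndarray:
    """Pass the input through the hidden nodes, then to the output nodes."""
    hidden = np.tanh(W1 @ x + b1)  # every hidden node mixes all the inputs
    return W2 @ hidden + b2        # every output node mixes all hidden nodes

x = np.array([0.1, -0.2, 0.3, 0.4])  # stand-in for an encoded source sentence
print(forward(x))
```

Training consists of nudging those weight matrices until the outputs become useful, which is what lets NMT pick up nuance that no hand-written dictionary can capture.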
Despite the existence of these automated methods, human translation is alive and well. Most people and companies tend to opt for cheaper, less accurate translation via software to get the job done, since in some cases it’s accurate enough for what’s needed. But “good enough” isn’t what we should aspire to.
There are realms where machine and human translation each fall short. Humans can’t be expected to know all the possible words in a language, for example, especially in languages that have thousands of pictorial characters whose meanings can change as you combine them. Humans are very intelligent, but that kind of expectation is unfair on translators. Computers, on the other hand, while able to “remember” all these characters and their combinations in a way a human cannot, lack the hands-on experience to know when to use all this vocabulary they have. What might seem like a good translation just because it’s exact might be too direct, impolite in certain situations, or flat out incorrect. So what should be done?
Make the best of both.
I’m not suggesting something that hasn’t already been thought of. Similar industries that have experienced some degree of automation, like the transcription industry, have already been making use of practices like this. Let a computer have a crack at translating something, then have an experienced human being look it over to make sure the nuances in translation aren’t lost. The emergence of Large Language Models like GPT has shown the strength of a neural model coupled with reinforcement learning guided by human feedback. Is it cheaper to use machine-based methods with human support than it is to have a human being translate the whole thing from scratch? Yes. Is it more expensive than pure machine translation? If you’re hiring ethical companies who pay their workers fairly, it should be. It’s the middle ground for this issue of automation. Maybe one day a program will be able to do what a human can, but until then, being able to streamline the process might be good enough. Making things easier and faster is definitely valuable, and it’s a goal we have in mind for most of the software we create here at BrightMinded.