Technological innovation in translation is rapid and ongoing, and recent developments have again raised the spectre of a future in which machines reign supreme and humans need no longer concern themselves with finding the mot juste. Ian Henderson explains.
As always, the hype doesn’t tell the whole story: these scattered innovations haven’t brought the promise of fully automated, high-quality translation at low (or no) cost any closer.
New directions
A quick look at recent innovations yields the following notable examples:
– Chief among the challengers to human translation are consumer-focused machine translation (MT) tools like Google Translate, which has made considerable inroads and continues to make headlines.
– Social Web platforms (Facebook, Twitter, YouTube and WeChat) all have built-in translation functions.
– Microsoft recently demoed Skype Translator, which combines speech recognition, text-to-speech and MT to let users call and text others regardless of their language.
– Word Lens is a quirky app that translates foreign-language signs picked up by the cameras of smart devices.
For what?
There is a place in our lives for all these tools. But is MT good enough? Actually, as www.machinetranslation.net notes, with MT it is more accurate to ask: “good enough for what?”
The site posits: “The need for human post-editing depends on the intended use of the output. It is needed if content must be of publication quality, but for real-time communication such as online chat, ‘understandable’ translations are ‘good enough’ and additional human involvement is not needed.”
Complexity requires humans
But a good many other concerns may also rule out MT.
Software consumes an ever greater proportion of translation resources today, as globalising enterprises cater for new regions with interfaces, menus and support in those regions’ own languages. Much of the action has also migrated online, as cloud platforms gain in popularity, media moves to the Web, and Web properties proliferate as an easy medium for globalisation.
It is axiomatic that where digital media or software is to be translated, significant complexities enter the fray. Companies should look further than automated tools and instead use a language service provider (LSP) that combines tools, software engineering and project management skills to control volumes and complexity, attain efficiencies and manage quality outcomes.
Project complexity
In this new environment, translation is no longer just an art. From one perspective, it is a business activity concerned with efficiencies that lower cost and speed up deliverables as volumes spike.
Consumer MT output lacks the quality and sophistication to handle workloads of modern magnitude and complexity, and large teams of translators are unaffordable and unmanageable. The efficiencies must therefore come from increasing control while maintaining quality outputs, through project management discipline.
High-end enterprise MT tools aren’t a real alternative either, as they’re out of reach for all but the biggest multinationals. For one thing, such tools are licensed per language pair: if your work is predominantly English-to-French, a single licence makes sense, but if you translate across 56 language pairs, the licence cost multiplies 56-fold and is no longer economically feasible.
Technology skills
Translation is definitely not an automated outcome either in this new software-dominated paradigm, at least not yet. With the complexities inherent in software development, a degree of software engineering is required, which only a special breed of LSP offers.
Quite often the translation component of a project is fairly small, while the testing and installation portion demands high-calibre technological expertise. For example, when a product expands into new territories, more of its users may be on outdated browsers, requiring software engineering to fix incompatibilities with older browser versions.
The same applies to text-to-speech translation tools: the text still has to be presented to the voice synthesiser in a form it can recognise, even if the voice synthesis itself is 100% automatic.
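By way of illustration, here is a minimal Python sketch of the kind of pre-processing meant here; the abbreviation table and digit rules are hypothetical stand-ins for what a real synthesiser front-end would need:

import re

# Hypothetical abbreviation table; a real synthesiser front-end would
# use a much larger, language-specific set.
ABBREVIATIONS = {"Dr.": "Doctor", "No.": "Number", "St.": "Street"}
DIGITS = "zero one two three four five six seven eight nine".split()

def normalise_for_tts(text: str) -> str:
    """Rewrite text into a form a voice synthesiser can recognise:
    expand abbreviations and spell out single digits."""
    for short, full in ABBREVIATIONS.items():
        text = text.replace(short, full)
    return re.sub(r"\b(\d)\b", lambda m: DIGITS[int(m.group(1))], text)

print(normalise_for_tts("Dr. Smith lives at No. 7 Main St."))
# Doctor Smith lives at Number seven Main Street.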
Automation also tends to destroy context (a common example being ambiguity when Internet terms like ‘address’ and ‘home’ occur alongside everyday uses of those words). If, in a post-editing job, you don’t devote budget to having an LSP study the original language and vet the translation, the result is pretty much guaranteed to be deficient.
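To make the point concrete, the following sketch flags ambiguous source terms for a human post-editor to review; the term list and sample segments are invented for illustration:

# Hypothetical term list: words whose everyday and Internet senses
# diverge, so MT output containing them warrants human review.
AMBIGUOUS_TERMS = {"address", "home", "cookie", "window"}

def flag_for_review(source_segments):
    """Yield (segment index, term) pairs a post-editor should inspect."""
    for i, segment in enumerate(source_segments):
        words = {w.strip(".,!?").lower() for w in segment.split()}
        for term in sorted(AMBIGUOUS_TERMS & words):
            yield i, term

segments = ["Enter your home address.", "She walked home after work."]
for idx, term in flag_for_review(segments):
    print(f"segment {idx}: review '{term}'")
# segment 0: review 'address'
# segment 0: review 'home'
# segment 1: review 'home'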
Security
Then there’s security.
When one enters text into Google Translate, any post-edited improvement is shared with the collective online user base, which can threaten the intellectual property of the contributor or client. Indeed, MT tools depend on users improving their translations and giving them back for improved accuracy.
Improvement improbable for now
So where can tool developers improve? In our experience, the human factor cannot be replicated in any one tool.
What is still needed is a core translation server with a client-specific translation memory database that speeds up translations, combined with project management and coding expertise.
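A minimal sketch of how such a translation memory speeds things up, assuming a simple fuzzy-match lookup (the stored segments and threshold are invented for illustration):

from difflib import SequenceMatcher

# Invented example segments; a real TM is a client-specific database
# built up from approved past translations.
TM = {
    "Save your changes?": "Wijzigingen opslaan?",
    "File not found.": "Bestand niet gevonden.",
}

def tm_lookup(source: str, threshold: float = 0.8):
    """Return (translation, similarity) for the best TM hit, or None."""
    best, score = None, 0.0
    for src, tgt in TM.items():
        ratio = SequenceMatcher(None, source, src).ratio()
        if ratio > score:
            best, score = tgt, ratio
    return (best, round(score, 2)) if score >= threshold else None

print(tm_lookup("Save your changes?"))  # exact hit, reused as-is
print(tm_lookup("Save the changes?"))   # fuzzy hit for a post-editor to confirm

Exact hits are reused outright; fuzzy hits are surfaced as suggestions, which is where the project management and coding expertise around the core server earns its keep.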
Where coding is concerned, software build scripts may need to be edited before they will work with the translated content. In one example, we identified and changed a version number error. In another case, the name reference of a file, “article_en”, needed to be changed to “article_af” (or indeed “artikel_af”) to work.
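That kind of fix can be scripted. The following sketch, with hypothetical file names and locale suffixes, rewrites locale-suffixed references in a build script:

import re
from pathlib import Path

def localise_build_script(path: Path, src: str = "en", tgt: str = "af") -> None:
    """Rewrite locale-suffixed file references (article_en -> article_af)
    so the build runs against the translated files."""
    script = path.read_text(encoding="utf-8")
    patched = re.sub(rf"(\w+)_{src}\b", rf"\1_{tgt}", script)
    path.write_text(patched, encoding="utf-8")

# Example (assumes a build.sh that references files such as article_en.html):
# localise_build_script(Path("build.sh"))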
In short, modern translators need to be able to encode files and verify that the encoding is correct; verify that formatting in the source has been carried over into the target correctly, where appropriate; and verify that numbers have been correctly translated (or not), as required by the client.
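Two of those verification steps might look like this in practice; a rough sketch with illustrative strings, not a production check:

import re

def verify_encoding(path: str, encoding: str = "utf-8") -> bool:
    """Confirm the target file actually decodes in the expected encoding."""
    try:
        with open(path, encoding=encoding) as f:
            f.read()
        return True
    except UnicodeDecodeError:
        return False

def numbers_match(source: str, target: str) -> bool:
    """Check that source and target carry the same numbers. A real check
    must also allow for locale formats (1.5 vs 1,5) and for deliberate,
    client-mandated conversions."""
    extract = lambda s: sorted(re.findall(r"\d+(?:[.,]\d+)?", s))
    return extract(source) == extract(target)

print(numbers_match("Version 2.1 costs 30 euros", "Versie 2.1 kost 30 euro"))  # True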
Supporting these competencies, translators work with a slew of standalone but interoperable tools (30-40 of them) that are often in translation clients’ blind spots but in fact fulfil very necessary functions, such as project management and workflow.
When one views translation as the complex beast it is, automation and mega tools just don’t cut it.