
Monday, November 20, 2017

Post Editing - What does it REALLY mean?

While many people may consider all post-editing to be the same, there are definitely variations that are worth a closer look. This is a guest post by Mats Dannewitz Linder that digs into three very specific PEMT scenarios that a translator might view quite differently. Mats writes from a translator-specific perspective and, as the author of the Trados Studio Manual, I think he brings a greater sensitivity to the issues that really matter to translators.

From my perspective as a technology guy, this post is quite enlightening, as it provides real substance and insight on why there have been communication difficulties between MT developers and translator editors. PEMT can cover quite a range of different editor experiences, as Mats describes here, and if we factor in the changes that Adaptive MT brings, we have even more variations on the final PEMT user experience.

I think a case can be made for both major modes of PEMT that I see from my vantage point: the batch chunk mode and the interactive TU-inside-the-CAT mode. Batch approaches can make it easier to do multiple corrections in a single search-and-replace action, but interactive CAT interfaces may be preferred by many editors who have very developed skills in a preferred CAT tool. Adaptive MT, I think, is a blend of both, and thus I continue to feel that it is especially well suited for any PEMT scenario described in this post. The kind of linguistic work done for very large data sets is quite different and focuses on correcting high-frequency word patterns in bulk data, described in this post: The Evolution in Corpus Analysis Tools. That is not PEMT as we describe it here, but it is linguistic work that would be considered high value for eCommerce, customer support and service content, and the kind of customer review data that has become the mainstay of MT implementations today.

For those in the US, I wish you a Happy Thanksgiving holiday this week, and I hope that you enjoy your family time. I have pointed out previously, however, that for the indigenous people of the Americas, Thanksgiving is hardly a reason to celebrate. “Thanksgiving” has become a time of mourning for many Native People. Hopefully this changes, but it can only change when at least a few recognize the historical reality and strive to alter it in small and sincere ways.

The emphasis and images below are all my doing, so please do not blame Mats for them.

==========


I have read – and also listened to – many articles and presentations and even dissertations on post-editing of machine translation (PEMT), and strangely, very few of them have made a clear distinction between the editing of a complete, pre-translated document and the editing of machine-translated segments during interactive translation in a CAT tool. In fact, in many of them, it seems as if the authors are primarily thinking of the latter. Furthermore, most descriptions or definitions of “post-editing” do not even seem to take into account any such distinction. All the more reason, then, to welcome the following definition in ISO 17100, Translation services – Requirements for translation services:

      post-edit

      edit and correct machine translation output

Note: This definition means that the post-editor will edit output automatically generated by a machine translation engine. It does not refer to a situation where a translator sees and uses a suggestion from a machine translation engine within a CAT (computer-aided translation) tool.

And yet… in ISO 18587, Translation services – Post-editing of machine translation output – Requirements, we are once again back in an uncertain state: the above note has been removed, and there are no clues as to whether the standard makes any distinction between the two ways of producing the target text to be edited.


This may be reasonable in view of the fact that the requirements on the “post-editor” are arguably the same in both cases. Still, that does not mean that the situation and conditions for the translator are the same, nor that the client – in most cases a translation agency, or language service provider (LSP) – sees them as the same. In fact, when I ask translation agencies whether they see the work done during interactive translation using MT as post-editing, they tell me that it is not.

But why should this matter, you may ask. And it really may not, as witnessed by the point of view taken by the authors of ISO 18587 – that is, it may not matter to the quality of the work performed or the results achieved. But it matters a great deal to the translator doing the work. Basically, there are three possible job scenarios:
  1. Scenario A: The job consists of editing (“post-editing”) a complete document which has been machine-translated; the source document is attached. The editor (usually an experienced translator) can reasonably assess the quality of the translation and, based on that, make an offer. The assessment includes the time s/he believes the job will take, including any necessary adaptation of the source and target texts for handling in a CAT tool.
  2. Scenario B: The job is very much like a normal translation in a CAT tool, except that in addition to, or instead of, an accompanying TM, the translator is assigned an MT engine by the client (normally a translation agency). Usually, a pre-analysis showing the possible MT (and TM) matches is also provided. The translator is furthermore told that the compensation will be based on a post-analysis of the edited file and will depend on how much use has been made of the MT (and, as the case may be, the TM) suggestions. Still, it is not possible for the translator to assess either the time required or the final payment. Also, s/he does not know how the post-analysis is made, so the final compensation will be based on trust.
  3. Scenario C: The job is completely like a normal translation in a CAT tool, and the compensation is based on the translator’s offer (word price or package price); a TM and a customary TM-match analysis may be involved (with the common adjustment of segment prices). However, the translator can also – of his or her own accord – use MT; depending on the need for confidentiality, it may be an in-house engine using only the translator’s own TMs, or online engines with confidentiality guaranteed, or less (but still reasonably) confidential online engines. Whatever the case, the translator stands to save some time thanks to the MT resources without having to lower his or her pricing.
In addition to this, there are differences between scenarios A and B in how the work is done. For instance, in A you can use Find & Replace to make changes in all target segments; not so in B (unless you start by pre-translating the whole text using MT) – though there you may have some assistance from various other functions offered by the CAT tool and from regular expressions (see the sketch below). And if it is a big job, it might be worthwhile, in scenario A, to create a TM based on the texts and then redo the translation using that TM plus any suitable CAT tool features (and regex).

Theoretically possible, but practically not
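To make the scenario A advantage concrete, here is a minimal, hypothetical sketch in Python of the kind of batch correction referred to above. The file names, the file layout (tab-separated source/target segments) and the terminology fix are all invented for illustration; a real job would work on the CAT tool's own export format.

```python
import re

# Hypothetical setup: the machine-translated document from scenario A has been
# exported as tab-separated "source<TAB>target" segments, and the MT engine
# consistently outputs "conductor(es)" where the client glossary requires
# "controlador(es)". In scenario A one pass fixes every segment at once;
# in scenario B the same fix would have to be made segment by segment.
# (Capitalization handling is omitted to keep the sketch short.)
PATTERN = re.compile(r"\bconductor(es)?\b", flags=re.IGNORECASE)

def fix_target(target: str) -> str:
    """Apply the terminology correction to one target segment."""
    return PATTERN.sub(lambda m: "controladores" if m.group(1) else "controlador", target)

def batch_edit(in_path: str, out_path: str) -> int:
    """Correct every segment in the exported file; return how many segments changed."""
    changed = 0
    with open(in_path, encoding="utf-8") as fin, open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            source, sep, target = line.rstrip("\n").partition("\t")
            if not sep:                    # not a segment line; copy through unchanged
                fout.write(line)
                continue
            new_target = fix_target(target)
            changed += int(new_target != target)
            fout.write(f"{source}\t{new_target}\n")
    return changed

if __name__ == "__main__":
    print(batch_edit("mt_output.tsv", "mt_output_fixed.tsv"), "segments corrected")
```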

There is also the difference between “light” and “full” post-editing: briefly, the former means that the resulting text is comprehensible and accurate, but the editor need not – in fact, should not – strive for a much “better” text than that, and should use as much of the raw MT version as possible. The purpose is to produce a reasonably adequate text with relatively little effort. The latter means that the result should be of “human” translation quality. (Interestingly, though, there are conflicting views on this: some sources say that stylistic perfection is not expected and that clients actually do not expect the result to be comparable to “human” translation.) Of course, these categories are only the end-points on a continuous scale; it is difficult to objectively test whether a PEMT text fulfils the criteria of one or the other (is the light version really not above the target level? is the full version really up to the requirements?), even if such criteria are defined in ISO 18587 (and elsewhere).

Furthermore, all jobs involving “light-edit” quality are likely to be avoided by most translators

Source: Common Sense Advisory

These categories mainly come into play in scenario A; I don’t believe any translation agency will be asking for anything but “full” quality in scenario B. Furthermore, jobs involving “light” quality are likely to be avoided by most translators. Not only does it go against the grain of everything a translator finds joy in doing, i.e. the best job possible; experience also shows that the many decisions about which changes need to be made and which do not often take so much time that the total effort of “light” quality editing is not much less than that of “full” quality editing.

Furthermore, there are some interesting research results as to the effort involved, insights which may be of help to the would-be editor. It seems that editing medium-quality MT (in all scenarios) takes more effort than editing poor output – it is cognitively more demanding than simply discarding and rewriting the text. Also, the amount of effort needed to detect an error and decide how to correct it may be greater than the rewriting itself, and reordering words and correcting mistranslated words takes the longest time of all. Furthermore, it seems that post-editors differ more in terms of actual PE time than in the number of edits they make. Interestingly, it also seems that translators leave more errors in TM-matched segments than in MT-matched ones. And the mistakes are of different kinds.

These facts, plus the fact that MT quality today is taking great steps forward (not least thanks to the fast development of neural MT, even taking into account the hype factor), are likely to speed up the current trend, which according to Arle Lommel, senior analyst at CSA Research and an expert in the field, can be described thus:
"A major shift right now is that post-editing is being replaced by “augmented translation.” In this view, language professionals don't correct MT, but instead, use it as a resource alongside TM and terminology. This means that buyers will increasingly just look for translation, rather than distinguishing between machine and human translation. They will just buy “translation” and the expectation will be that MT will be used if it makes sense. The MT component of this approach is already visible in tools from Lilt, SDL, and others, but we're still in the early days of this change."

In addition, this will probably mean that we can do away with the “post-editing” misnomer – editing is editing, regardless of whether the suggestion presented in the CAT tool interface comes from a TM or an MT engine. Therefore, the term “post-editing” should be reserved only for the very specific case in scenario A; otherwise, the concept will be meaningless. This view is taken in, for instance, the contributions by a post-editor educator and an experienced post-editor in the recently published book Machine Translation – What Language Professionals Need to Know (edited by Jörg Porsiel and published by BDÜ Fachverlag).

Thus it seems that eventually we will be left with mainly scenarios B and C – which leaves the matter, for translators, of how to come to grips with B. This is a new situation which is likely to take time and discussions to arrive at a solution (or solutions) palatable to everyone involved. Meanwhile, we translators should aim to make the best possible use of scenario C. MT is here and will not go away even if some people would wish it to.


-------------



Mats Dannewitz Linder has been a freelance translator, writer and editor for the last 40 years alongside other occupations, IT standardization among others. He has degrees in computer science and languages and is currently studying national economics and political science. He is the author of the acclaimed Trados Studio Manual and for the last few years has been studying machine translation from the translator’s point of view, an endeavour which has resulted in several articles for the Swedish Association of Translators as well as an overview of Trados Studio apps/plugins for machine translation. He is self-employed at Nattskift Konsult.

Thursday, November 16, 2017

How Adaptive MT turns Post-Editing Janitors into Cultural Consultants

At the outset of this year, I felt that Adaptive MT technology would rapidly establish itself as a superior implementation of MT technology for professional and translator use, especially in those scenarios where extensive post-editing is a serious requirement. However, it has been somewhat overshadowed by all the marketing buzz and hype that floats around Neural MT's actual capabilities. Had I been a translator, I would have at least experimented with Adaptive MT, even if I were not to use it every day. If one does the same type of translation work (in a focused domain) on a regular basis, I think the benefits are probably much greater. Jost Zetzsche has also written favorably about his experiences with Adaptive MT in his newsletter.

We have two very viable and usable Adaptive MT solutions available in the market that I have previously written about:

Lilt: An Interactive & Adaptive MT Based Translator Assistant or CAT Tool

and

A Closer Look at SDL's Adaptive MT Technology

 

I am told that MMT also offers a solution, but my efforts to gather more information about the product have not met with success, and I am loath to suggest that anybody seriously look at something I have little knowledge of. Given my unfortunate experience with MT development efforts that never actually lived up to their promises, I think it is wiser to focus on clearly established and validated products that have already been examined by many.

We are now reaching the point where Neural MT and Adaptive MT come together, and Lilt recently announced their Adaptive Neural MT. I am also aware that SDL is exploring this combination and has beta versions of Adaptive Neural MT running as well.

The Lilt announcement stated:
"In a blind comparison study conducted by Zendesk, a Lilt customer, reviewers were asked to choose between Lilt’s new adaptive NMT translations and Lilt’s previous adaptive machine translation (MT) system. They chose NMT to be of superior or equal quality 71% of the time."
From all that we know about these technologies, it seems that Adaptive Neural MT should become a preferred technology for the "localization" content that receives a lot of post-editing attention. It is, however, not clear whether this approach makes sense for every type of content and MT use scenario; for some, custom NMT models may make more sense.

This is a guest post by Greg Rosner who assures me that he believes that human skills of authenticity, idea generation and empathy will only grow more important, even as we add more and more technology to our daily and professional lives.  

We should remember that as recently as 1980, official business correspondence was produced by typist pools, usually groups of women working on (IBM Selectric) typewriters who also knew something called shorthand. Often, these women were called secretaries. When word-processor systems from a company called Wang reduced the need for these kinds of workers, a trend exacerbated by PC word-processing software, many of these workers evolved into new roles. Secretaries became Executive Assistants, who often have Office Suite expertise and thus perform much more complex and hopefully more interesting work. Perhaps we will see similar patterns with translation, where translators will need to pay less attention to handling file format transformations and developing arcane TM software expertise, and will instead focus on real linguistic issue resolution and on developing more strategic language translation strategies for ever-growing content volumes that can improve customer experience.

====== 

I saw the phrase “linguistic janitorial work” in this Deloitte whitepaper on “AI-augmented government, using cognitive technologies to redesign public sector work”, used to describe the drudgery of translation work that so many translators are required to do today through Post-editing of Machine Translation. And then it hit me what's really going on.

The sad reality over the past several years is that many professional linguists – who have decades of particular industry experience, expertise in professional translation, and degrees in writing – have had their jobs reduced to sentence-by-sentence clean-up of translations that flood out of Google Translate or other Machine Translation (MT) systems.



The Deloitte whitepaper takes the translator's job as an example of how AI will help automate tasks through different approaches, such as relieving work, splitting-up work, replacing work, and augmenting work.

THE FOUR APPROACHES APPLIED TO TRANSLATION

"...A relieve approach might involve automating lower-value, uninteresting work and reassigning professional translators to more challenging material with higher quality standards, such as marketing copy.

To split up, machine translation might be used to perform much of the work—imperfectly, given the current state of machine translation—after which professional translators would edit the resulting text, a process called post-editing. Many professional translators, however, consider this “linguistic janitorial work,” believing it devalues their skills.

With the replace approach, the entire job a translator used to do, such as translating technical manuals, is eliminated, along with the translator’s position.

And finally, in the augment approach, translators use automated translation tools to ease some of their tasks, such as suggesting several options for a phrase, but remain free to make choices. This increases productivity and quality while leaving the translator in control of the creative process and responsible for aesthetic judgments.”

Many translators hate translation technology because it has reduced the enormous cultural understanding, language knowledge and industry expertise that they can offer organizations who want to connect with global customers to the work of grammarians.



HOW IS ADAPTIVE MACHINE TRANSLATION DIFFERENT FROM STATISTICAL OR NEURAL MACHINE TRANSLATION?

 

Post-editing whatever comes out of the machine has been a process used since the 1960s, when professional linguists would clean up poor translations output by the system. Sadly, this is still most of what is happening today, in spite of the Adaptive systems available in the market. But more on why this might be in my next blog post.

The biggest problem with the job of post-editing machine translation is having to make the same corrections again and again, since there is no feedback mechanism when translators make a change. This is true of the output of every Machine Translation system today, including Google Translate and Microsoft Translator. Training MT engines for a specific domain is time-consuming and costs a lot of money, so it typically only happens once or twice a year. The effort results in a static system that will inevitably need to be trained again to create yet another static system.

Adaptive Machine Translation is a new category of AI software which learns all the time. The training happens as the translator is working, so there is never a separate re-training. This side-by-side translation activity is poised to be the biggest revolution in the language translation industry since Translation Memory (statistical sentence matching) was introduced in the 1980s.



(Example of Lilt Adaptive Machine Translation interface working in collaboration with the translator sentence by sentence.)
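To illustrate the feedback loop described above, here is a deliberately toy sketch in Python. It is not Lilt's or SDL's actual technology; the "engine" here is just a glossary substitution. The point is the one step that static MT lacks: folding each confirmed, post-edited segment back into the model before the next sentence is translated.

```python
# Toy illustration only: a word-for-word "engine" that learns from confirmed edits.
# Real adaptive MT updates a full statistical or neural model; the workflow is the same.

class ToyAdaptiveEngine:
    """Translates word by word from a glossary and adapts to the translator's corrections."""

    def __init__(self, glossary):
        self.glossary = dict(glossary)   # source word -> preferred translation

    def translate(self, source):
        return " ".join(self.glossary.get(word, word) for word in source.split())

    def confirm(self, source, post_edited_target):
        # Feedback step: compare the draft with the confirmed target word by word
        # and remember every correction, so the same fix is not needed again.
        draft_words = self.translate(source).split()
        final_words = post_edited_target.split()
        for src, drafted, fixed in zip(source.split(), draft_words, final_words):
            if drafted != fixed:
                self.glossary[src] = fixed


engine = ToyAdaptiveEngine({"cat": "Katze", "dog": "Hund"})
print(engine.translate("the cat sat"))           # -> "the Katze sat"
engine.confirm("the cat sat", "die Katze sass")  # translator corrects "the" and "sat"
print(engine.translate("the dog sat"))           # -> "die Hund sass": corrections persist
```

A static engine would stop after the first translate() call; without the confirm() step, the translator would have to repeat the same two corrections in every following segment.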

HOW DOES ADAPTIVE MACHINE TRANSLATION INCREASE THE VALUE OF A PROFESSIONAL TRANSLATOR?

 

There is an enormous amount of untapped value that a professional translator can bring to an organization when working in an Adaptive Machine Translation model versus a Post-Editing model. Given that they are native linguists, familiar with the country and customs of the target market, there is a lot of human intelligence and understanding ready to be tapped right in the localization process. In addition, over time their familiarity with a product and service will make them a much more valuable asset for localizing content than a simple in-language grammarian.

As in other fields, AI will help remove those tasks from our jobs that can be replicated or made more efficient. It is sad that the current mode of translation technology that we have been working with for so long has put professional translators in the position of cleaning up the mess a machine makes. It seems it should be the other way around (as Grammarly does, for example). I am optimistic that AI will help us become better translators, enabling us to spend more time being creative, having more connected relationships and becoming more of what it means to be human.
“Chess grand master Garry Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competes against them. “Centaur,” which is the human/AI cyborg that Kasparov advocated, will listen to the moves suggested by the AI but will occasionally override them - much the way we use the GPS. Today the best chess player alive is a centaur. It goes by the name of Intagrand, a team of several humans and several different chess programs. AI can help humans become better chess players, better pilots, better doctors, better judges, better teachers.”

========




Greg started in the translation business counting sentences to be translated with a chrome hand-tally-counter and a dream to help business go global in 1995. Since then, he’s worked as a language solution advisor for the global 5,000 clients of Berlitz, LanguageLine, SDL and Lilt.

Greg Rosner



Wednesday, November 15, 2017

BabelNet - A Next Generation Dictionary & Language Research Tool

This is a guest post by Roberto Navigli of BabelNet, a relatively new "big language data" initiative that is currently a lexical-semantic research and analysis tool that can do disambiguation and has been characterized by several experts as a next-generation dictionary. It is a tool where "concepts" are linked to the words used to express them. BabelNet can also function as a semantics-savvy, disambiguation-capable MT tool. The use possibilities are still being explored and could expand as grammar-related big data is linked to this foundation. As Roberto says: "We are using the income from our current customers to enrich BabelNet with new lexical-semantic coverage, including translations and definitions. In terms of algorithms, the next step is multilingual semantic parsing, which means moving from associating meanings with words or multiword expressions to associating meanings with entire sentences in arbitrary languages. This new step is currently funded by the European Research Council (ERC)." The Babelscape startup already has several customers, among them LexisNexis, Monrif (a national Italian newspaper publisher), XTM (computer-assisted translation), and several European and national government agencies.

While the initial intent of the project was broader than being a next-generation dictionary, attention and interest from the Oxford UP have steered this initiative more in this direction. 

I expect we will see many new kinds of language research and analysis tools become available in the near future, as we begin to realize that all the masses of linguistic data we have access to can be used for many different linguistically focused projects and purposes. The examples presented in the article below are interesting, and the Babelscape tools referenced here are easy to access and experiment with. I would imagine that these kinds of tools and built-in capabilities would be an essential element of next-generation translation tools, where this kind of super-dictionary would be combined and connected with MT, Translation Memory, grammar checkers and other linguistic tools that can be leveraged for production translation work.



=========

 

 

 

BabelNet: a driver towards a society without language barriers?


In 2014 the conflict in Crimea broke out, and it is still going on, but no national media are talking about it anymore. Rachel has now been trying for an hour to find information about the current situation, but she can only find articles written in Cyrillic that she is not able to understand. She is about to give up when her sister says: “Have you tried to use BabelNet and its related technology? It is the best way to understand a text written in a language that you do not know!” So she tries, and gets the information from the article.

The widest multilingual semantic network


But what is BabelNet and how could it help Rachel in her research? 

We are talking about the largest multilingual semantic network and encyclopedic dictionary, created by Roberto Navigli, founder and CTO of Babelscape and full professor at the Department of Computer Science of the Sapienza University of Rome. It was born as a merger of two different resources, WordNet and Wikipedia. However, what makes BabelNet special is not the specific resources used, but how they interconnect with each other. In fact, it is not the first system to exploit Wikipedia or WordNet, but it is the first one to merge them, taking encyclopedic entries from Wikipedia and lexicographic entries from WordNet. Thus BabelNet is a combination of resources that people usually access separately.

Furthermore, one of the main features of BabelNet is its versatility, since its knowledge makes it possible to design applications that analyze text in multiple languages and extract various types of information. For example, Babelfy, a concept and entity extraction system based on BabelNet, is able to spot entities and extract terms and their meanings from sentences in a text (an article, a tweet, or any other type of phrase) and, as a result, Rachel is able to understand what the article is talking about. However, she realizes that Babelfy is not a translator but a tool to identify concepts and entities within text and get their meanings in different languages: when Rachel uses it, the network spots the entities in the article, finds the multiple definitions of a word and matches their meaning with an image and their translations in other languages, so in this way she can get at the content of the text. In addition, Babelfy shows the key concepts related to any entities.
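For readers who want to experiment, here is a minimal Python sketch of calling the Babelfy disambiguation service over HTTP. The endpoint and parameter names follow the publicly documented Babelfy API but should be verified against the current documentation at babelfy.io; you need your own (free) API key from babelnet.org, and error handling is omitted for brevity.

```python
import json
import urllib.parse
import urllib.request

# Publicly documented Babelfy endpoint (check babelfy.io for the current version).
BABELFY_URL = "https://babelfy.io/v1/disambiguate"
API_KEY = "YOUR_KEY_HERE"  # placeholder: obtain a key from babelnet.org

def annotate(text: str, lang: str = "EN"):
    """Ask Babelfy to disambiguate the concepts and named entities in `text`."""
    params = urllib.parse.urlencode({"text": text, "lang": lang, "key": API_KEY})
    with urllib.request.urlopen(f"{BABELFY_URL}?{params}") as response:
        return json.load(response)

if __name__ == "__main__":
    sentence = "Lebron and Kyrie have played together in Cleveland"
    for annotation in annotate(sentence):
        # Each annotation maps a fragment of the input text to a BabelNet synset ID,
        # which can then be looked up, in any covered language, via the BabelNet API.
        start = annotation["charFragment"]["start"]
        end = annotation["charFragment"]["end"]
        print(sentence[start:end + 1], "->", annotation["babelSynsetID"])
```

Each returned annotation links a span of the input text to a language-independent BabelNet concept, which is what lets Babelfy ground a text in a language the reader does not know.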
               
Let me show you two examples of how Babelfy works. First, look at the following statement: “Lebron and Kyrie have played together in Cleveland”.


In this case, Babelfy has to disambiguate a text written in English and explain it in the same language: its task is to recognize concepts (highlighted in green) and named entities (highlighted in yellow) and to match the proper meaning to every concept according to the sentence; finally, it provides an information sheet based on BabelNet’s knowledge for every entity and concept. Thus, in the previous example, Babelfy works first as a disambiguator, able to understand that “Cleveland” means the basketball team of the city and not the city itself, and then as an encyclopedia, by providing information sheets about the various entities and concepts.

The second example shows how Babelfy faces Rachel’s problem. We have a text written in Spanish. Babelfy recognizes the concepts (es, abogado, político, mexicano) and the named entity (Nieto) and provides the information sheets in the selected language (English). Babelfy can, therefore, help you understand a text written in a language you do not speak.



This can be repeated in hundreds of languages: no one can guarantee linguistic coverage as wide as that of Navigli’s company – currently about 271 languages, including Arabic, Latin, Creole, and Cherokee. That is why BabelNet won the META Prize in 2015, with the jury’s motivation citing “groundbreaking work in overcoming language barriers through a multilingual lexicalized semantic network and ontology making use of heterogeneous data sources. The resulting encyclopedic dictionary provides concepts and named entities lexicalized in many languages, enriched with semantic relations”.


Roberto Navigli awarded the META Prize  (Photo of the META-NET 2015 prize ceremony in Riga, Latvia).


Make your life better: have fun with knowledge, get insights for your business


But, we have to be practical. We know what this network does, but can it improve our lives? The answer is “of course!”, whether you are a computer scientist or just a user. If you are a user, you can follow BabelNet’s slogan, “Search, translate, learn!”, and enjoy your time exploring the network and dictionary. People can have fun discovering the interconnections among words, playing with the knowledge. But BabelNet is not just about this: in her article “Redefining the modern dictionary”, Katy Steinmetz, a journalist at Time Magazine, states that BabelNet is about to revolutionize current dictionaries and take them to the next level. According to Steinmetz, the merit of BabelNet is “going far beyond the ‘what’s that word mean’ use case”, because the multilingual network has been organized around the meaning of words, not their spelling and the unhelpful alphabetical order of print dictionaries, and in addition it offers wider language coverage and an illustration for every term. Why should you use a common dictionary when you can have one in which every entry is matched to a picture and to definitions in multiple languages? Thus BabelNet is a pioneer at the turning point from dictionaries to a semantic network structure with labeled relations, pictures, and multilingual entries, and it makes gaining knowledge and information easier for users.

Computer scientists, on the other hand, can exploit BabelNet to disambiguate a written text in one of the hundreds of covered languages. For example, BabelNet can be used to build a term extractor able to analyze tweets or any social media chat about the products of a company and spot the entities with the matched picture and concepts. In this way, the marketing manager can understand what a text is talking about regardless of language and can get insights to improve the business activities.


A revolution in progress


Even though its quality is already very high, the current BabelNet should be considered “a starting point” for much richer versions of the multilingual network to come, because new lexical knowledge is continuously added with daily updates (for example, when a new state president is elected, this fact will be integrated into BabelNet as soon as Wikipedia is updated). The focus on upgrading the technology and the linguistic level comes from the background of Roberto Navigli (winner of the 2017 Prominent Paper Award from Artificial Intelligence, the most prestigious journal in the field of AI), who has put together a motivated operational team.

After the initial combination of Wikipedia and WordNet, new and different resources (Open Multilingual WordNet, Wikidata, Wiktionary, OmegaWiki, ItalWordNet, Open Dutch WordNet, FrameNet, Wikiquote, VerbNet, Microsoft Terminology, GeoNames, WoNeF, ImageNet) have been added in subsequent versions in order to provide more synonyms and meanings and to increase the available knowledge. The BabelNet team is not going to stop innovating, so who knows what other uses BabelNet could offer in the future: the revolution of our lives by BabelNet has just begun. Should we start thinking about a society without language barriers?

