27 May 2008 Like all technology, machine translation (MT) has its limits, says Mike Dillinger, President of the Association for Machine Translation in the Americas and Director of Linguistics at Spoken Translation, Inc. (as well as Adjunct Professor of Psychology at San José State University). At the invitation of the Department of Artificial Intelligence, Mike Dillinger has been giving a course on paraphrasing and text mining at the School of Computing. Dillinger considers that machine translation will not work well without clean and clear texts and that, despite technological advances, there will always be a demand for human translators to render legal or literary texts. Machine translation, he adds, mostly targets Internet and technical texts. This bias points to the need to train content creators to ensure that their documents are machine translatable.
- As an expert in machine translation, how would you define the state of the art in this discipline?
The state of the art is one of rapid change. A far-reaching new approach was introduced fifteen to twenty years ago. At that time, we faced a two-part problem. First, it took a long time and a lot of money to develop both the grammatical rules required to analyse the original sentences and the “transfer” or translation rules. Second, it seemed impossible to account for the vast array of words and sentence types in documents. The new approach uses statistical techniques to identify qualitatively simpler rules. It does this quickly, automatically and on a massive scale, covering much more of the language. Similar techniques are used to identify terms and their possible translations. These are huge advances! Before, system development was very much a cottage industry; now rules are mass produced. Today’s research aims to increase the qualitative complexity of the rules to better reflect syntactic structures and more aspects of meaning. We are now trying to reuse the qualitative advances of the previous, manual approach.
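The statistical idea can be pictured with a toy sketch. This is purely illustrative (the mini-corpus and scoring are invented, not any real system's method): count how often source and target words co-occur in aligned sentence pairs, then score candidate translations, loosely in the spirit of the word-alignment techniques behind statistical MT.

```python
from collections import defaultdict

# Toy aligned sentence pairs (English -> Spanish); real systems learn from
# millions of such pairs. All data here is invented for illustration.
corpus = [
    ("the house", "la casa"),
    ("the green house", "la casa verde"),
    ("the door", "la puerta"),
]

cooc = defaultdict(lambda: defaultdict(int))  # source/target co-occurrence counts
src_count = defaultdict(int)                  # source-word frequencies
tgt_count = defaultdict(int)                  # target-word frequencies

for src, tgt in corpus:
    src_words, tgt_words = src.split(), tgt.split()
    for s in src_words:
        src_count[s] += 1
    for t in tgt_words:
        tgt_count[t] += 1
    for s in src_words:
        for t in tgt_words:
            cooc[s][t] += 1

def best_translation(word):
    """Score candidates by how exclusively they co-occur with `word`,
    so frequent words like 'la' do not win for every source word."""
    if word not in cooc:
        return None
    score = lambda t: (cooc[word][t] / src_count[word]) * (cooc[word][t] / tgt_count[t])
    return max(cooc[word], key=score)

print(best_translation("house"))  # -> casa
print(best_translation("green"))  # -> verde
```

Even this toy version shows why the approach scales: the "rules" (word correspondences) fall out of counting, with no hand-written grammar at all.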
- Machine translation systems have been in use since the 1970s. Is this technology now mature?
If maturity means “for use in industrial applications”, then the answer is definitely “yes”. MT has been widely used by top-ranking industrial and military institutions for over 30 years. The European Community, Ford, SAP, Symantec, the US Armed Forces and many other organizations use MT every day. If maturity means “for use by the general public”, who enter random sentences for translation, I would say, just as definitely, “no”. Like all technology, machine translation has its limits. You don’t expect a Mercedes to run well in the snow or sand: it performs best on a dry, surfaced road. Neither do you expect a Formula 1 car to win a race on ordinary petrol or alcohol; it needs special fuel. Unfortunately, people very often expect a perfect translation of texts that are unclear and/or full of errors. For the time being at least, machine translation will not work properly without clean and correct texts.
- Do you think society understands MT?
Not at all! It’s something I come across all the time. A lot of people think that “translation” means being able to tell exactly what authors mean, even if they have not expressed themselves clearly and correctly. Therefore, many people have over-optimistic expectations about what a translation system (and a human translator!) will be able to do. This is why they are almost always disappointed. On the other hand, those of us working in MT have to make more of an effort to help society understand what it is good for and when it works well: this is the specific mandate of our Association.
- What is MT about? Developing programs, translation systems, computerized translation, manufacturing electronic dictionaries? How exactly would you define this discipline?
MT is concerned with building computerized translation systems. Of course, this includes building electronic dictionaries, electronic grammars, databases of word co-occurrences and other linguistic resources. But it also involves developing automatic translation evaluation processes, processes for “cleaning” input texts and analyzing them, as well as processes for guaranteeing that everything will run smoothly when a 300,000-page translation order arrives. Since these are all very different processes and components, MT requires the cooperation of linguists, programmers, engineers and, increasingly, their clients.
- What are the stages of the machine translation process?
1. Document preparation. This is arguably the most important stage, because you have to assure that the sentences of each document are understandable and correct.
2. Adaptation of the translation system. Just like a human translator, the machine translation system needs information about all the words it will come across in the documents. It can be taught new words through a process known as customization.
3. Document translation. Each document format, such as Word, PDF or HTML, has many different features apart from the sentences that actually have to be translated. This stage separates the content from the wrapping, as it were. Then the system translates the sentences one by one, at more than 400 words per second, compared with humans, who translate about 400 words per hour.
4. Translation verification. Quality control is very important for human and machine translators. Neither words nor sentences have just one meaning, and they are very easy to misinterpret.
5. Document distribution. This stage is more complex than is generally thought. When you receive 10,000 documents to be translated into 10 different languages, checking that they were all translated, putting them all in the right order without mixing up languages, etc., takes a lot of planning and effort.
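The five stages above can be strung together as a simple pipeline. The sketch below is purely hypothetical: the function bodies are trivial placeholders, and a toy word-for-word dictionary stands in for the translation engine.

```python
def prepare(doc):
    """1. Document preparation: normalise the text before translation."""
    return " ".join(doc.split())

def customize(system, new_words):
    """2. Adaptation: teach the system customer-specific vocabulary."""
    system.update(new_words)
    return system

def translate(doc, system):
    """3. Translation: content only; formatting is assumed already stripped."""
    return " ".join(system.get(w, w) for w in doc.split())

def verify(src, out):
    """4. Verification: a trivial sanity check stands in for real QA."""
    return len(out.split()) == len(src.split())

def distribute(outputs):
    """5. Distribution: route each translation to its destination."""
    return {name: text for name, text in outputs}

system = customize({"hello": "hola"}, {"world": "mundo"})
src = prepare("  hello   world ")
out = translate(src, system)
assert verify(src, out)
print(distribute([("doc1.txt", out)]))  # -> {'doc1.txt': 'hola mundo'}
```

The point of the sketch is the separation of concerns: each stage can fail or be improved independently, which is why real deployments treat them as distinct steps.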
- Is this technology a threat to human translators? Do you really think it creates jobs?
Not at all! MT takes the most routine work out of translators’ hands so that they can apply their expertise to more difficult tasks. We will always need human translators for more complex legal and literary texts. MT today is mostly applied in situations where there is no human participation. It would even be cruel to have people translate e-mails, chats, SMS messages and random web pages. The requirements for text throughput and translation speed are so huge that it would be excruciating for a human being. It is a question of scale: an average human translator translates 8 to 10 pages per day, whereas, on the web scale, 8 to 10 pages per second would be very slow. The adoption of new technologies, especially in a global economy, seldom boosts job creation. What it seems to do is open up an increasingly clear divide between low-skilled routine jobs and specialized occupations.
- Is the deployment of this technology a technical or social problem?
First and foremost it is a social engineering problem, because people have to change their behaviours and the way they see things. The MT process reproduces exactly the same stages as human translation, except for two key differences:

a) In translation systems, you have to be very careful about wording. Human translators apply their technical knowledge (if any) to make up for incorrect wording, but machine translation systems have no such knowledge: they reproduce all too faithfully the mistakes in the source text. It is hard to get them to translate error-ridden texts more accurately, but there are now extremely helpful automatic checking tools for producing more systematic input texts. Symantec is a recent example of a company that uses an automatic checker and a translation system to achieve extremely fast and very good results.

b) Translation systems can turn out a lot of translated documents. What happens if an organization receives 5,000 instead of the customary 50 translated documents per week? Automating the translation process ends up uncovering problems in other parts of the document handling process.
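The kind of automatic checking tool mentioned here can be pictured with a minimal sketch. The rules and word list below are invented for illustration (commercial checkers are far more sophisticated); the idea is simply to flag source sentences likely to translate badly before they reach the MT system.

```python
# Invented controlled-language rules, for illustration only.
MAX_WORDS = 25
KNOWN = {"the", "system", "translates", "each", "sentence", "quickly"}

def check(sentence):
    """Return warnings for one source sentence; an empty list means 'clean'."""
    words = sentence.lower().rstrip(".!?").split()
    warnings = []
    if len(words) > MAX_WORDS:
        warnings.append("sentence too long (%d words)" % len(words))
    unknown = [w for w in words if w not in KNOWN]
    if unknown:
        warnings.append("unknown words: " + ", ".join(unknown))
    return warnings

print(check("The system translates each sentence quickly."))  # -> []
```

Running such checks before translation is cheaper than post-editing the output, which is the economic argument behind controlled-language authoring.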
- You mentioned the British National Corpus that includes a cross section of texts that are representative of the English language. It contains 15 million different terms, whereas machine translation dictionaries only contain 300,000 entries. How can this barrier to the construction of an acceptable MT system for society be overcome?
This collection of over 100 million English words is a good mirror of the macro features of the language. One is that we use a great many words. However, word frequency is extremely variable: of those 15 million terms, 70% are seldom used! To overcome the “barrier” of variability in vocabulary usage, commercial MT systems use the most common words to create a core system and then add 5,000 to 10,000 customer-specific words. This has been a reasonably successful approach. For web applications, however, this simply does not work. Even the best systems are missing literally millions of words, and new words are invented every day. At least three remedies are applied at present: ask the user to “try again”, ask the user to enter a synonym, and automatically or semi-automatically build synonym databases. As I see it, we will have to develop systems to guide the authoring of web content, such as have already been developed for technical documents. There are strong economic arguments for moving in that direction.
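The three remedies for unknown words combine naturally into a fallback chain. Here is a minimal sketch, with an invented dictionary and synonym table standing in for the real resources:

```python
# All entries are invented for illustration.
DICTIONARY = {"car": "coche", "house": "casa"}
SYNONYMS = {"automobile": "car", "dwelling": "house"}  # rare -> common words

def translate_word(word):
    """Try the dictionary, then the synonym database, then ask the user."""
    if word in DICTIONARY:
        return DICTIONARY[word]
    if word in SYNONYMS:
        return translate_word(SYNONYMS[word])  # retry with a common synonym
    return "<please try again or enter a synonym for '%s'>" % word

print(translate_word("automobile"))  # -> coche
```

The synonym database effectively folds the long tail of rare words back onto the well-covered core vocabulary, which is why building such databases automatically is an attractive remedy.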
- You have just given a course on paraphrasing and text mining at the School of Computing. What were the key points of your talks?
My intention was to focus on the convergence between different research fields and their commercial applications. Text mining uses computational linguistics techniques for commercial purposes. Paraphrasing (or different ways of saying the same thing) is a widespread problem and at the same time a new research field. I want to get students thinking about what the union of different research fields brings in practical terms. In fact, these two fields represent future trends in that they focus on meaning and how to process it automatically.
- The Association for Machine Translation in the Americas (AMTA), which you coordinate, is organizing the AMTA 2008 conference to be held in Hawaii next October. What innovations does the conference have in store?
There is always something new! Come and see! One difference this year is that several groups are holding conferences together with us. AMTA, the International Workshop on Spoken Language Translation (IWSLT), a workshop by the US government agency NIST about how to evaluate translation evaluation methods, a Localization Industry Standards Association meeting attracting representatives from large corporations, and another group of Empirical Methods in Natural Language Processing (EMNLP) researchers will all be at the same hotel in the same week. Finally, because it will be held in Hawaii, our colleagues from Asia will be there to add an even more international perspective. For more information, see the conference web site.