Naomi Yamashita and Toru Ishida (Kyoto). Computer Supported Cooperative Work (CSCW 2006). [pdf]
Abstract: Even though multilingual communities that use machine translation to overcome language barriers are increasing, we still lack a complete understanding of how machine translation affects communication. In this study, eight pairs from three different language communities–China, Korea, and Japan–worked on referential tasks in their shared second language (English) and in their native languages using a machine translation embedded chat system. Drawing upon prior research, we predicted differences in conversational efficiency and content, and in the shortening of referring expressions over trials. Quantitative results combined with interview data show that lexical entrainment was disrupted in machine translation-mediated communication because echoing is disrupted by asymmetries in machine translations. In addition, the process of shortening referring expressions is also disrupted because the translations do not translate the same terms consistently throughout the conversation. To support natural referring behavior in machine translation-mediated communication, we need to resolve asymmetries and inconsistencies caused by machine translations.

Task for the experiments: ordering figures through a chat interface, both via a third language (English) and in one's own language + MT.
The process of agreeing on a perspective on a referent is known as lexical entrainment [4, 11].

Although machine translation liberates members from language barriers, it also poses hurdles for establishing mutual understanding. As one might expect, translation errors are the main source of inaccuracies that complicate mutual understanding [25]. Climent found that typographical errors are also a big source of translation errors that hinder mutual understanding [7]. Yamashita discovered that members tend to misunderstand translated messages and proposed a method to automatically detect misunderstandings [30].

In machine translation-mediated communication, shortened referring expressions are not necessarily translated correctly; even when referring expressions overlap considerably, machine translation may generate something totally different based on very small changes. Because abbreviation is problematic for machine translation, we expect that participants will identify a figure using identical referring expressions throughout the conversation.

… translations between two different languages are not transitive: translation from language A to B and back to A does not yield the original expression. The intransitive nature of machine translations results from its development process; translation from language A to B is built independently of translation from language B to A. In such conversations, the addressee cannot echo the speaker’s expression as a way of accepting it, illustrating that they are referring to the same thing.
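The intransitivity point is easy to illustrate with a toy round trip. This is a minimal sketch, not any real MT system: the two word-for-word dictionaries below are invented, and they are deliberately built independently of each other, as the paper notes is the case for real A→B and B→A systems.

```python
# Toy illustration of non-transitive translation: the A->B and B->A tables
# are built independently, so a round trip does not recover the original
# wording. All dictionary entries are invented for this example.
A_TO_B = {"figure": "zu", "person": "hito", "dancing": "odoru"}
B_TO_A = {"zu": "diagram", "hito": "human", "odoru": "dancing"}

def translate(sentence, table):
    """Word-for-word lookup; unknown words pass through unchanged."""
    return " ".join(table.get(w, w) for w in sentence.split())

original = "dancing person figure"
round_trip = translate(translate(original, A_TO_B), B_TO_A)
# round_trip == "dancing human diagram": the speaker's words come back in
# different terms, so echoing no longer works as a signal of acceptance.
```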

We also found that in their second trial, speakers using machine translation preferred to narrow expressions rather than simplify them. …We infer that “narrowing” is observed more frequently in machine translation-mediated communication because distinctive terms such as “kimono” have few alternatives in translation, and thus, participants feel safe using them to match the figures.

Moreover, participants avoided focusing on the incomprehensible part of messages to discover what was wrong. Since translations are not transitive, it appears that they cannot efficiently solve the problem. Speakers have little choice but to offer more information and proceed with the task.

Consistent with quantitative results, speakers tended to describe the figures more frequently in machine translation than in English.

It seems that participants can minimize mutual effort in collaboration by offering more and more information until their partner confirms understanding.

Since such an unwieldy conversational style would not be useful in general conversation, there is a need to support natural referential behavior in machine translation-mediated communication. For example, support that creates correspondences among references (or keywords) between the two languages may help. Also, support that creates correspondences among referring expressions before and after shortening may help.


I. Dan Melamed, Ryan Green and Joseph P. Turian. (2003) Precision and Recall of Machine Translation. HLT.
Computer Science Department NYU
Contact: {lastname}

Abstract: Machine translation can be evaluated using precision, recall, and the F-measure. These standard measures have significantly higher correlation with human judgments than recently proposed alternatives. More importantly, the standard measures have an intuitive interpretation, which can facilitate insights into how MT systems might be improved. The relevant software is publicly available.

My Notes: they define both P and R as conditioned by a set of references X given for a particular test set, and so if Y is the set of translation candidates generated by the system, they define:

Precision(X|Y) = |X ∩ Y| / |Y|

Recall(Y|X) = |X ∩ Y| / |X|

Multiple References: “One of the main sources of variance in MT evaluation measures is the multiple ways to express any given concept in natural language. A candidate translation can be perfectly correct but very different from a given reference translation. One approach to reducing this source of variance, and thereby improving reliability of MT evaluation, is to use multiple references (Thompson, 1991).”

I can see this is a practical way to solve this, but I guess I have my four years of translation training to blame for my resistance to just accept this interpretation of the recall measure. See the random thoughts that led me to this paper.

Researchers in NLP, and more specifically in IR, have made extensive use of precision (P) and recall (R) to evaluate their systems. Widely used definitions for P and R are as follows:
P = relevant items system got correct / total number of items system produced or generated

R = relevant items system got correct / total number of relevant items (which the system should have produced)

Now, if we think about evaluating an MT system, the items are the translations and so precision is straightforward to calculate. P = number of correct translations produced by the system / total number of translations produced by the system.

But how do we calculate recall? The numerator is the same as for P (number of correct translations generated by the system), but how does one determine the number of relevant translations? This is almost a philosophical question, since there is not just one (set of) translation(s) that is correct given a SL sentence. Unless there is a fixed set of reference translations (often used by the MT community to evaluate systems with automatic metrics such as BLEU and METEOR), there is no way to know a priori the number of possible translations for a SL sentence.

And this is indeed how most people use recall to evaluate MT systems, taking a set of references as their absolute truth of what is possible and relevant for any sentence. Melamed et al. 2003 define both P and R as conditioned by a set of references X given for a particular test set, and so if Y is the set of translation candidates generated by the system, they define:

Precision(X|Y) = |X ∩ Y| / |Y|

Recall(Y|X) = |X ∩ Y| / |X|
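These set-based definitions are simple to sketch in code. A minimal illustration (the paper actually matches items more carefully, e.g. treating translations as bags of items; plain sets keep the sketch short, and the example sentences are invented):

```python
def precision(references, candidates):
    """|X ∩ Y| / |Y|: fraction of candidate items found in the references."""
    x, y = set(references), set(candidates)
    return len(x & y) / len(y) if y else 0.0

def recall(references, candidates):
    """|X ∩ Y| / |X|: fraction of reference items the system recovered."""
    x, y = set(references), set(candidates)
    return len(x & y) / len(x) if x else 0.0

# Toy example: word-level overlap between a reference and a candidate.
ref = "the cat sat on the mat".split()    # distinct words: the, cat, sat, on, mat
cand = "a cat sat on a mat".split()       # distinct words: a, cat, sat, on, mat
p, r = precision(ref, cand), recall(ref, cand)
# Both are 4/5 = 0.8 here: the two sides share 4 of their 5 distinct words.
```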

Multiple References: “One of the main sources of variance in MT evaluation measures is the multiple ways to express any given concept in natural language. A candidate translation can be perfectly correct but very different from a given reference translation. One approach to reducing this source of variance, and thereby improving reliability of MT evaluation, is to use multiple references (Thompson, 1991).”

I can see this is a practical way to solve this, but I guess I have my four years of translation training to blame for my resistance to just accept this interpretation of the recall measure.

Estelle Ramey

September 12, 2006

The New York Times Obituary.

Throughout her career, Dr. Ramey decried sexist comments and situations that treated women as less than fully human. She felt very strongly about how little, if anything, it took to extend a helping hand to someone else in a way that could really make a huge difference in their lives. As she wrote in her book called Letters to our Grandchildren, “If I could leave you with any advice, it would be to speak words of caring not only to those closest to you, but to all the hungry ears you encounter on your journey through a cold world. Stop on the mountain climb to bring all those less lucky, less agile or well endowed. It will make the view even more beautiful when you get to the top.”

by John Lee and Stephanie Seneff @ Spoken Language Systems, MIT CSAIL
Interspeech – ICSLP (Pittsburgh) 17-21 September

Taken from Interspeech website:

Session Wed3A3O: Technologies for Specific Populations: Learners and Challenged
it’s a poster
A computer conversational system can potentially help a foreign-language student improve his/her fluency through practice dialogues. One of its potential roles could be to correct ungrammatical sentences. This paper describes our research on a sentence-level, generation-based approach to grammar correction: first, a word lattice of candidate corrections is generated from an ill-formed input. A traditional n-gram language model is used to produce a small set of N-best candidates, which are then reranked by parsing using a stochastic context-free grammar. We evaluate this approach in a flight domain with simulated ill-formed sentences. We discuss its potential applications in a few related tasks.

Notes: They take a couple of error categories relevant to Japanese speakers conversing in English (articles and prepositions, noun number, verb aspect, mode and tense) and use them for their experiments/analysis. They do not use data from real second-language learners for this paper.

First they reduce the supposedly erroneous sentence (in my case it would be incorrect MT output) to its canonical form, where articles, preps, and auxiliaries are stripped off, and nouns and verbs are reduced to their citation form. All their alternative inflections are inserted into the lattice; insertions of articles, preps and aux. are allowed at every position. Second, an n-gram and a stochastic CFG are used as LMs to score all the paths in the lattice. In their experiments, they treat the transcript as a gold-standard and they find that their method can correctly reconstruct the transcript 88.7% of the time.
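The generate-then-score idea can be sketched quickly. This is not the authors' implementation: the "lattice" below is just an enumeration of candidate strings, the bigram table is hand-set rather than trained, and all words and scores are invented for illustration. Their system additionally reranks the N-best list with a stochastic CFG, which this sketch omits.

```python
import itertools

# Hand-set bigram log-probs standing in for a trained n-gram LM.
BIGRAM_LOGPROB = {
    ("<s>", "I"): -0.3, ("I", "want"): -0.4, ("want", "to"): -0.5,
    ("to", "fly"): -0.5, ("fly", "to"): -0.7, ("to", "Boston"): -0.6,
}
UNSEEN = -5.0                        # penalty for any bigram not in the table
INSERTABLE = ["", "a", "the", "to"]  # optional function words to insert

def candidates(canonical):
    """Enumerate corrections of the canonical (stripped-down) form by
    optionally inserting one function word before each word."""
    for ins in itertools.product(INSERTABLE, repeat=len(canonical)):
        words = []
        for extra, w in zip(ins, canonical):
            if extra:
                words.append(extra)
            words.append(w)
        yield " ".join(words)

def lm_score(sentence):
    """Sum of bigram log-probs, with <s> as the start symbol."""
    words = ["<s>"] + sentence.split()
    return sum(BIGRAM_LOGPROB.get(bg, UNSEEN) for bg in zip(words, words[1:]))

# From the canonical form "I want fly Boston", the LM prefers the candidate
# that reinserts "to" in both gaps: "I want to fly to Boston".
best = max(candidates(["I", "want", "fly", "Boston"]), key=lm_score)
```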
What’s nice about this approach is that it doesn’t need any human corrections. In a way, my thesis research can be seen as a great source of data to train systems similar to this one. A nice side-effect of my research is that we obtain MT output annotated with human corrections. So in this setting, one can use correction-annotated data to build systems that recover from ill-formed MT output and generate correct translations for such output automatically.

Nizar Habash (Columbia University)’s contribution to the AMTA Hybrid MT Panel.

The Intuition: StatMT and RuleMT have complementary advantages:
Syntactic structure produces better global target linguistic structure,
Statistical phrase-based translation is more robust locally.

The Resource Challenge
Parallel corpora as models of performance vs. Dictionaries/analyzers as models of competence
“More is better” is true for both approaches

Parallel corpora are domain/genre specific
Dictionaries and parsers can be domain/genre specific

Hybrids may need more data: Annotated resources.

Federico Gaspari (F.Gaspari @) from University of Manchester, United Kingdom:

• Social impact of MT very visible on the Internet

• Only a small minority of languages supported

• Online MT has established a niche for itself

• Online MT promotes social interchange

• Users prepared to accept low-quality output

• Human translation simply not an option

Tsunami webpage to help find/identify victims in English, translated into many languages with online MT systems such as Google and Altavista.

Michael McCord (mcmccord @) from IBM Research:
Two social impact projects, sponsored by IBM Corporate Community Relations (CCR) and IBM Research:

1. ¡Tradúcelo Ahora! (Translate It Now): English↔Spanish MT for Latinos.
Server-based: Users need not install anything.
Web page translation. Uses enhancement of IBM product WebSphere Translation Server (WTS).
Email translation. Using any email client, and without installing any software, a user simply writes an email to anyone and copies a certain email account on our server. The email gets translated and sent to the user’s recipients and the user. Handles either Es or En source, and these can be mixed (does language ID).
Smart cross-lingual web search.
Work done by Nelson Correa and Esmé Manandise, M. McCord

To address the Hispanic Digital Divide, CCR has been working in partnership with nearly three dozen major agencies serving the Latino community since 2004.
These agencies receive grants from CCR, use the TA software, and give us feedback for improving the En-Es MT.
This year we are continuing that work, and also working with K-12 schools – doing web page translation, and translation for email between (mainly) Spanish-speaking parents and English-speaking school staff.

A study by the Tomás Rivera Policy Institute concluded that the TA project has benefited the participant organizations and their constituents in significant ways:
It simplified community outreach specialists’ efforts to conduct educational sessions on medical disorders for Spanish-speaking clients;
It enabled staff to more easily research online information about public services, jobs, clinical and legal issues, and translate the web pages for their clients;
It enriched English as a Second Language (ESL) program educational resources;
It augmented and improved Spanish literacy courses;
It made it easier for clients to find employment at popular job search web sites, helped them apply for jobs online, and write resumes and cover letters;
It provided GED and ESL students a significant new tool for conducting research, reading the news, viewing transcripts, etc., and
It provided an additional teaching resource to enhance basic computer-training courses.

2. Cooperation with Meadan on English
Chat/blog system to foster Western-Islamic dialog

CCR and other parts of IBM are cooperating with the Meadan organization (Ed Bice et al.) to build this system. IBM is contributing mainly certain technical pieces:
• Arabic↔English MT (Salim Roukos’ group).
• Arabic Slot Grammar parser (McCord, Cavalli-Sforza); uses Buckwalter’s BAMA for morphology. Will be used to improve Ar→En MT and to analyze Arabic text entries directly to make them into a searchable database (ESG is also used for English entries).
• Parts of the networking platform (IBM group in England).

Is MT a necessity for social justice in a multi-ethnic society?
Certainly translation is. MT should help when there aren’t enough human translators, and the MT is good enough.

Rami B. Safadi (safadi @ from Sakhr Software USA. Social Impact of Translation Via SMS:

User sends the message to be translated by dialing a number (#2020); the MT server translates the message and sends it back.

Motivation: For Sakhr Software: Revenues per message translated + Develop a dialect preprocessor. For Mobile phone companies: Value added services to retain customers + Free service.

English to Arabic (80%)
Over 50% Mobile advertisements & subscriptions
About 25% Dictionary, expressions, terminologies and short phrases
About 20% Chatting
About 5% Notifications for Bank accounts, Credit Cards, Prepaid cards….

Arabic to English (20%)
Over 70% Chatting
30% Dictionary, expressions, terminologies and short phrases

Available in 11 countries
Over 10,000 messages per day

Sample messages:

Win Laptops, Mp3 players & more!.. Join the Al Shamil Quiz Competition from 3 – 9 August; 5pm – 9pm at the Mall of the Emirates. (School Students only)
Sorry the transferred failed. You do not have sufficient credit.
Tell me ur coming or no i have duty 7 am