… when I took a look at Ed Bice’s slides for the AMTA Social Impact of MT Panel. Ed Bice is the founder of Meadan (ebice @ meadan.org), among many other things (his Pop web page).

hybrid distributed natural language translation (hdnlt) ‘web 2.0’ approach
• Language translation as a distributed service
• People/machines collaborate to provide service
• Volunteer translators as a social network
• Harness collective intelligence – value arises from small, shared contributions
• Reputation driven – translator reputations adjusted by feedback and performance
• Abstractions ease adding devices and services
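
The "reputation driven" bullet is the piece that translates most directly into code. Here is a minimal sketch of one plausible scheme, an exponentially weighted moving average of reviewer feedback; the 0-to-1 rating scale, the smoothing factor, and all names are my assumptions, not anything from Bice's slides:

```python
# Hypothetical sketch of a reputation-driven feedback loop for volunteer
# translators, suggested by the "reputation driven" bullet above.
# The 0-to-1 feedback scale and the smoothing factor are assumptions.

class TranslatorReputation:
    def __init__(self, initial=0.5, smoothing=0.1):
        self.score = initial        # reputation in [0, 1]
        self.smoothing = smoothing  # how strongly new feedback moves the score

    def record_feedback(self, feedback):
        """Blend a new rating (0 = poor, 1 = excellent) into the running score."""
        feedback = max(0.0, min(1.0, feedback))
        self.score = (1 - self.smoothing) * self.score + self.smoothing * feedback
        return self.score


if __name__ == "__main__":
    rep = TranslatorReputation()
    for rating in (0.9, 0.8, 1.0, 0.4):   # ratings from post-editors / reviewers
        print(round(rep.record_feedback(rating), 3))
```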


From the abstract of Vincent Berment's thesis (referenced in Christian Boitet's email below):

In 2004, fewer than 1% of the 6,800 languages of the world benefited from a high level of computerization, including a broad range of services going from text processing to machine translation. This thesis, which focuses on the other languages – the pi-languages – aims at proposing solutions to remedy their digital underdevelopment. In a first part, intended to show the complexity of the problem, we present the languages' diversity, the technologies used, as well as the approaches of the various actors: linguistic populations, software publishers, the United Nations, States… A technique for measuring the degree of computerization of a language – the sigma-index – is proposed, as well as several optimization methods. The second part deals with the computerization of the Laotian language and concretely presents the results obtained for this language by applying the methods described previously. The achievements described contributed to improving the sigma-index of the Laotian language by approximately 4 points, this index currently being evaluated at 8.7/20. In the third part, we show that an approach by groups of languages can reduce computerization costs thanks to a modular architecture combining existing general-purpose software and specific complements. For the most language-specific parts, complementary generic lingware tools give the populations the possibility to computerize their languages themselves. We validated this method by applying it to the syllabic segmentation of Southeast Asian languages with unsegmented writing systems, such as Burmese, Khmer, Laotian and Siamese (Thai).
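
The syllabic segmentation mentioned at the end is the easiest part to illustrate in code. Below is a toy sketch of dictionary-based longest-match segmentation, a common baseline for unsegmented scripts; the tiny syllable inventory is an invented placeholder, not Berment's actual method or data:

```python
# Toy longest-match segmenter for an unsegmented script.
# The syllable inventory is a made-up placeholder; a real system would use a
# language-specific inventory (e.g., derived from Lao or Thai syllable
# structure) rather than a hard-coded list.

SYLLABLES = {"ab", "abc", "cd", "d", "e"}   # hypothetical inventory
MAX_LEN = max(len(s) for s in SYLLABLES)

def segment(text):
    """Greedy longest-match segmentation into known syllables (or single chars)."""
    out, i = [], 0
    while i < len(text):
        for length in range(min(MAX_LEN, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in SYLLABLES or length == 1:
                out.append(candidate)
                i += length
                break
    return out

print(segment("abcde"))   # ['abc', 'd', 'e']
```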

From an email Christian Boitet sent to mt-list@eamt.org:

1) On the terms tau-, mu-, pi- languages and pairs of languages

The point is to CHARACTERIZE in an EXACT and NON-DEPRECATING way languages and pairs of languages for which there is a lack of computerized resources and tools used or directly usable in NLP applications concerning them.

By the way, I forgot to include “pi-pairs” in the previous e-mail, but they do exist.

A pi-pair of languages is a pair for which NLP-related data, resources and tools are lacking. [pi=poorly informatisées?].

Read more about that here:
Méthodes pour informatiser les langues et les groupes de langues «peu dotées» ("Methods for computerizing 'poorly endowed' languages and language groups") by Vincent Berment
Berment also uses the terms:
– tau-language (pair) = well (totally / très bien) equipped
– mu-language (pair) = medium (moyennement bien) equipped

Example: while French and Thai are reasonably “NLP-equipped” (tau-language and mu-language), the 2 pairs FT, TF are not.

Example: Spanish is a tau-language, Catalan and Galician are mu- or pi-languages, and the pairs SC and SG are tau-pairs because there are 2 quite good MT systems translating newspapers for these pairs (Comprendium, using the METAL shell, see Proc. EAMT-05).

2) Other terms proposed and why they are not good terms for these concepts

The terms

* minority languages
* less-prevalent languages
* less(er) widely used languages
* less-dominant (non-dominant) languages
* traditionally oral/spoken/unwritten languages
* endangered languages
* indigenous languages
* neglected languages
* New Member State languages (used for the new languages of the European Union)

don't really say anything about the degree of "equipment" as far as computer applications are concerned, and many of them are deprecating in some way.

(I agree 100% with Jeff Allen on that!)

The terms

* sparse-data languages
* low-density languages

also don’t fit:

– The idea that data is “sparse” means there ARE data, but in fragmentary and heterogeneous form.
But pi-languages often have NO usable data or resources, even for simple applications such as hyphenation — where are the "sparse data" for hyphenating Khmer?

– "Low-density" is even worse, as it can only mean that a language is spoken by a small fraction of the population where it is spoken.
But what would the reference be? A country? A region? — Taken to the extreme, almost any language is high-density in the families where it is spoken.

About the 2 other terms proposed:

* commercially disadvantaged/inhibited/challenged languages
* low market-value languages

These terms also miss the point above. A language may suddenly acquire a high market value (see Chinese over the past 10–15 years) or lose some of it (e.g., Russian since 1991); this is independent of the resources and tools existing for it. The reason is that these are often developed NOT in order to build commercial products. Why were Eurodicautom, Euramis and EuroParl developed?

When NLP firms discover that Malay/Indonesian can be commercially interesting, they will find there are quite a lot of resources for them, including a modern unified terminology (istilah). But if the same happens for Tagalog (or maybe for Swahili), they will find next to nothing usable to quickly build applications for them.

Best regards,

Ch.Boitet

By Erick Schonfeld, Om Malik, and Michael V. Copeland

SOCIAL MEDIA

Incumbent To Watch: Yahoo!
Hoping to dominate social media, it’s gobbling up promising startups (Del.icio.us, Flickr, Webjay) and experimenting with social search (My Web 2.0) that ranks results based on shared bookmarks and tags.

MASHUPS AND FILTERS

Incumbent To Watch: Google
Already the ultimate Web filter through general search as well as blog, news, shopping, and now video search, it’s encouraging mashups of Google Maps and search results, and offers a free RSS reader.

THE NEW PHONE

For nearly a century, the phone, and voice as we know it, has existed largely in the confines of a thin copper wire. But now service providers can convert voice calls into tiny Internet packets and let them loose on fast connections, thus mimicking the traditional voice experience without spending hundreds of millions on infrastructure. All you need are powerful–but cheap–computers running specialized software. The Next Net will be the new phone, creating fertile ground for new businesses.

Incumbent To Watch: eBay (Skype)
The pioneer in the field and still the front-runner, Skype brings together free calling, IM, and video calling over the Web; eBay will use it to create deeper connections between buyers and sellers. [And I’d say Google Talk is following closely…]

THE WEBTOP

It's been a long time — all the way back to the dawn of desktop computing in the early 1980s — since software coders have had as much fun as they're having right now. But today, browser-based applications are where the action is. A killer app no longer requires hundreds of drones slaving away on millions of lines of code. Three or four engineers and a steady supply of Red Bull is all it takes to rapidly turn a midnight brainstorm into a website so hot it melts the servers. What has changed is the way today's Web-based apps can run almost as seamlessly as programs used on the desktop, with embedded audio, video, and drag-and-drop ease of use.

Company: 37Signals (Chicago)
What it is: Online project management
Next Net bona fides: Its Basecamp app, elegant and inexpensive, enables the creation, sharing, and tracking of to-do lists, files, performance milestones, and other key project metrics; related app Backpack, recently released, is a powerful online organizer for individuals.
Company: Writely (Portola Valley, CA)
What it is: Online word processing
Next Net bona fides: It enables online creation of documents, opens them to collaboration by anyone anywhere, and simplifies publishing the end result on a website as a blog entry.

UNDER THE HOOD

A growing number of companies are either offering themselves as Web-based platforms on which other software and businesses can be built or developing basic tools that make some of the defining hallmarks of the Next Net possible.

Incumbent To Watch: Amazon
It’s becoming a major Web platform by opening up its software protocols and encouraging anyone to use its catalog and other data; its Alexa Web crawler, which indexes the Net, can be used as the basis for other search engines, and its Mechanical Turk site solicits humans across cyberspace to do things that computers still can’t do well, such as identify images or transcribe podcasts.

In theory, if we had a very good LM trained on huge amounts of data, the kinds of errors that a monolingual posteditor can correct (as in GALE), namely generation and fluency errors, should already be taken care of by such an LM, right?

The problem is that even LMs trained on really large datasets face sparseness problems for high-order models. From a practical point of view, since the number of parameters of an n-gram model is O(|W|^n), finding the resources to compute and store all these parameters becomes a hopeless task for n > 5. And even if we did (read: Google did), in actual text the majority of n-grams that one sees are bigrams or at most trigrams, and it is very rare to see matches of very high-order n-grams.
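
A back-of-the-envelope illustration of both points, the |W|^n parameter blow-up and the scarcity of repeated high-order n-grams; the vocabulary size and the toy corpus are arbitrary assumptions:

```python
# Rough illustration of the two sparseness points above.
# 1) The number of parameters of an n-gram model grows as |W|**n.
# 2) In real text, most higher-order n-grams are seen once or never.

from collections import Counter

VOCAB_SIZE = 100_000          # assumed vocabulary size |W|
for n in range(1, 8):
    print(f"{n}-gram table size ~ |W|^{n} = {VOCAB_SIZE ** n:.2e}")

# Count n-grams in a toy corpus and see how many occur only once.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

for n in (2, 3, 5):
    counts = Counter(ngrams(corpus, n))
    singletons = sum(1 for c in counts.values() if c == 1)
    print(f"n={n}: {len(counts)} distinct n-grams, {singletons} seen only once")
```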

Therefore, current LMs, in spite of having smoothed and improved MT output, still generate disfluencies.

An aside: I love the terms lexical myopia and shortsightedness used to describe low-order n-gram models (Beeferman et al. 1997).

Doug Beeferman, Adam Berger, John Lafferty (1997). Text Segmentation Using Exponential Models. Proceedings of the Second Conference on Empirical Methods in Natural Language Processing.

Abstract: This paper introduces a new statistical approach to partitioning text automatically into coherent segments. Our approach enlists both short-range and long-range language models to help it sniff out likely sites of topic changes in text. To aid its search, the system consults a set of simple lexical hints it has learned to associate with the presence of boundaries through inspection of a large corpus of annotated data. We also propose a new probabilistically motivated error metric for use by the natural language processing and information retrieval communities, intended to supersede precision and recall for appraising segmentation algorithms. Qualitative assessment of our algorithm as well as evaluation using this new metric demonstrates the effectiveness of our approach in two very different domains, Wall Street Journal articles and the TDT Corpus, a collection of newswire articles and broadcast news transcripts.

My Notes: Partitioning is at the document level, not the sentence level; it is used to segment large collections of texts (IR).
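
The "probabilistically motivated error metric" the abstract mentions is generally credited to Beeferman, Berger and Lafferty and is commonly known as P_k. Here is a compact sketch of the usual windowed formulation, as it is typically described in later work (not code from the paper itself):

```python
# Sketch of the windowed segmentation error metric commonly called P_k,
# in the spirit of the "probabilistically motivated error metric" above.
# Segmentations are given as lists of segment ids, one per sentence/token.

def p_k(reference, hypothesis, k=None):
    assert len(reference) == len(hypothesis)
    if k is None:
        # conventional choice: half the average reference segment length
        k = max(1, round(len(reference) / (2 * len(set(reference)))))
    windows = len(reference) - k
    errors = 0
    for i in range(windows):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        errors += same_ref != same_hyp   # disagree on "same segment?" => error
    return errors / windows

ref = [0] * 4 + [1] * 4 + [2] * 4   # three reference segments
hyp = [0] * 6 + [1] * 6             # hypothesis misses/moves boundaries
print(round(p_k(ref, hyp), 3))      # 0.6: most windows disagree
```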

Splitting long sentences into fluent and coherent shorter sentences is much harder to do automatically, since it would require some sort of language generation module that could turn sentential fragments into complete sentences. Has anybody looked at this problem?
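
For contrast, here is the kind of naive baseline one would start from: split at commas that precede a small list of connectives and hope the fragments stand on their own. The connective list is an arbitrary assumption, and nothing guarantees the output is fluent, which is exactly the gap a generation module would have to fill:

```python
# Naive baseline for splitting long sentences: break before a few connectives
# that follow a comma. A real solution would need a generation step to turn
# the resulting fragments into well-formed sentences; this sketch does not.

import re

def naive_split(sentence):
    parts = re.split(r",\s+(?=(?:and|but|because|although|while)\b)", sentence)
    return [p.strip() for p in parts if p.strip()]

print(naive_split(
    "The system parses the input, and it segments it into units, "
    "because long turns are hard to translate as a whole."
))
```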

by Alon Lavie, Donna Gates, Noah Coccaro and Lori Levin (1996). ECAI Workshop on Dialogue Processing in Spoken Language Systems.

Abstract: JANUS is a multi-lingual speech-to-speech translation system designed to facilitate communication between two parties engaged in a spontaneous conversation in a limited domain. In this paper we describe how multi-level segmentation of single utterance turns improves translation quality and facilitates accurate translation in our system. We define the basic dialogue units that are handled by our system, and discuss the cues and methods employed by the system in segmenting the input utterance into such units. Utterance segmentation in our system is performed in a multi-level incremental fashion, partly prior to and partly during analysis by the parser. The segmentation relies on a combination of acoustic, lexical, semantic and statistical knowledge sources, which are described in detail in the paper. We also discuss how our system is designed to disambiguate among alternative possible input segmentations.

My Notes: Split input into semantic dialog units (~= speech act), namely semantically coherent pieces of information that can be translated independently.
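
A very rough illustration of just the lexical-cue layer of such a pipeline; the cue words are invented placeholders, and the acoustic, semantic and statistical knowledge sources JANUS combines are not modeled at all:

```python
# Toy illustration of lexical-cue-based segmentation of a spoken turn into
# dialogue units, loosely inspired by the multi-level segmentation idea above.
# The cue words are invented placeholders, not JANUS's actual cues.

CUE_STARTERS = ("okay", "so", "well", "anyway")

def segment_turn(turn):
    tokens = turn.lower().split()
    units, current = [], []
    for tok in tokens:
        if current and tok in CUE_STARTERS:   # start a new unit at a cue word
            units.append(" ".join(current))
            current = []
        current.append(tok)
    if current:
        units.append(" ".join(current))
    return units

print(segment_turn(
    "okay that works for me so how about Tuesday at two anyway let me check"
))
```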