http://www.dancohen.org/blog/posts/no_computer_left_behind said:
> Google researchers have demonstrated
> (but not yet released to the general public)
> a powerful method for creating 'good enough'
> translations—not by understanding the grammar
> of each passage, but by rapidly scanning and
> comparing similar phrases on countless electronic
> documents in the original and second languages.
> Given large enough volumes of words in a variety
> of languages, machine processing can find parallel phrases
> and reduce any document into a series of word swaps.
> Where once it seemed necessary to have a human being
> aid in a computer's translating skills, or to teach that
> machine the basics of language, swift algorithms functioning
> on unimaginably large amounts of text suffice. Are such new
> computer translations as good as a skilled, bilingual human being?
> Of course not. Are they good enough to get the gist of a text? Absolutely.
> So good the National Security Agency and the Central Intelligence Agency
> increasingly rely on that kind of technology to scan, sort, and mine
> gargantuan amounts of text and communications
> (whether or not the rest of us like it).
sounds like something you might find interesting, michael.
of course, a "good enough" translation probably wouldn't be,
not for literature, where the realm of creativity is instantiated,
but could it work as a "first pass" that would do the bulk of the
"heavy lifting", so a person knowledgeable in both languages
could come in and spend relatively little time smoothing it out?
well, it's certainly possible, i would think. and maybe probable,
especially if the technique keeps improving...
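
just to make the "series of word swaps" idea from the quote a
little more concrete, here's a rough sketch in python. the
three-entry phrase table is completely made up, standing in for
the millions of phrase pairs a real system would mine from
parallel text, and greedy longest-match lookup is only the
simplest way such a swap could be done:

# toy illustration of the "word swaps" idea: greedy longest-match
# lookup against a tiny phrase table. (the table below is entirely
# made up; a real system would score millions of phrase pairs
# mined automatically from parallel corpora.)

PHRASE_TABLE = {
    ("bonne", "nuit"): "good night",
    ("le", "chat"): "the cat",
    ("dort",): "sleeps",
}

def translate(tokens):
    """swap the longest known source phrase at each position."""
    out, i = [], 0
    while i < len(tokens):
        # try the longest candidate phrase first, then shorter ones
        for length in range(len(tokens) - i, 0, -1):
            phrase = tuple(tokens[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            out.append(tokens[i])  # unknown word: pass it through
            i += 1
    return " ".join(out)

print(translate("le chat dort".split()))        # the cat sleeps
print(translate("bonne nuit michael".split()))  # good night michael

no grammar anywhere, just lookup and swap. scale that table up a
few million-fold and you can see where "gist" translations come from.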
-bowerbird