
This is all just nitpicking and progress is being made; that's all that counts, not how many think it is ENUF progress. "Those doing the impossible should not be interrupted by those who say it is impossible." -- Ancient Chinese Proverb

I am currently running from an emergency backup mail system, so please reply to hart@pobox.com as usual, but cc: me at hart@metalab.unc.edu until I let you know I am back @pglaf. Please also add hart@pglaf.org to your email alias for me. Thanks!

Michael

On Fri, 10 Mar 2006 Bowerbird@aol.com wrote:
keith said:
Just the opposite is the case. Believe me as a computer linguist.
i believe that the computer linguists have not been able to solve the problem.
i also believe that google's research lab _will_ be able to solve it. i doubt they have "solved" it yet, and i'm sure when they do, their "solution" won't be "perfect enough" for the computer linguists, but nonetheless...
What has happened. Vaporware and results. It simply does not work. Language can not be sucessfully model. Languages are regularly formed, nor well formed.
and here's a great example of why it won't be "perfect". just in the sentences quoted above: there should be a question-mark after "happened"; there seems to be a missing adjective before "results"; "successfully" is not spelled correctly; and there seems to be a missing word between "regularly" and "formed"; yet despite all these shortcomings, i know exactly what you meant to say...
(and i don't mean to be picking on you if english is not your first language. i only speak one language, so i am the last person to criticize anyone else on that dimension. the point is that human beings are very good at resolving the ambiguity that results from incomplete information, and we probably can't reasonably expect that of machines. but it is simply not the case that ambiguity permeates _every_aspect_ of language; clarity is not impossible.)
All AI projects so far have failed and failure has been admitted.
yes it has been, yet deep blue can still beat all but the best of the world's grandmasters...
if you give up on teaching a machine "meaning", and concentrate on giving it enough rules that give the correct results most of the time, you can get very close to finishing the job you want done.
of course, this approach is considered "a trick" by the artificial-intelligence people, whose aim was to "teach meaning" rather than solve a task, but that's why those artificial-intelligence people have been such a failure themselves...
When I see an OCR system that just uses raw results, then I will bow my head in recognition of true achieve meant.
a perfect example of what i just said: the objective is to get accurate o.c.r., by whatever means necessary, and _not_ to limit yourself to "raw results".
if doing some voodoo gave better o.c.r., we would do it. this isn't some kind of "intellectual challenge" where we find it necessary to tie our hands behind our back; it is a practical job that needs to be done...
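for what it's worth, the "enough rules to be right most of the time" approach bowerbird describes can be sketched in a few lines. this is only an illustration, not anyone's actual tool: the confusion table below is hypothetical, and a real post-o.c.r. cleanup pass would learn its substitutions from a corpus rather than hard-code them.

```python
import re

# hypothetical table of common o.c.r. misreads; a real system
# would derive these (and their frequencies) from scanned text.
CONFUSIONS = {
    "tbe": "the",    # "h" misread as "b"
    "arid": "and",   # "n" misread as "ri"
    "1n": "in",      # "i" misread as "1"
}

def correct(text):
    """word-by-word rule-based cleanup: replace known misreads,
    leave everything else untouched. no 'meaning' involved --
    just rules that give the correct result most of the time."""
    def fix(match):
        word = match.group(0)
        return CONFUSIONS.get(word.lower(), word)
    return re.sub(r"[A-Za-z0-9]+", fix, text)

print(correct("tbe dog sat 1n tbe shade"))  # -> the dog sat in the shade
```

note the trade-off: a table entry like "modem" -> "modern" would also clobber legitimate uses of "modem", which is exactly why this counts as "a trick" rather than understanding -- and exactly why, for the practical job of cleaning o.c.r. output, it doesn't matter.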
-bowerbird