
Hi, On 10.03.2006 at 18:59, Bowerbird@aol.com wrote:
keith said:
Just the opposite is the case. Believe me as a computer linguist.
i believe that the computer linguists have not been able to solve the problem.

Exactly.

i also believe that google's research lab _will_ be able to solve it. i doubt they have "solved" it yet, and i'm sure when they do, their "solution" won't be "perfect enough" for the computer linguists, but nonetheless...

If computer linguists have not solved the problems in 20 years, google probably will not either ;-)) They might, but it is very unlikely.
What has happened. Vaporware and results. It simply does not work. Language can not be sucessfully model. Languages are regularly formed, nor well formed.
and here's a great example of why it won't be "perfect". just in the sentences quoted above: there should be a question-mark after "happened"; there seems to be a missing adjective before "results"; "successfully" is not spelled correctly; and there seems to be a missing word between "regularly" and "formed"; yet despite all these shortcomings, i know exactly what you meant to say...
(and i don't mean to be picking on you if english is not your first language. i only speak one language, so i am the last person to criticize anyone else on that dimension.

Ouch! I am very sorry. Please excuse me. I had a lot of work to do, long hours last week, and people in and out of the office. I knew I had a lot of booboos in my post.
the point is that human beings are very good at resolving the ambiguity that results from incomplete information, and we probably can't reasonably expect that of machines. but it is simply not the case that ambiguity permeates _every_aspect_ of language; clarity is not impossible.)
All AI projects so far have failed and failure has been admitted.
yes it has been, yet deep blue can still beat all but the best of the world's grandmasters...

Gotcha ;-)))) Deep Blue is not AI; it is brute force. I would be glad to discuss this one directly with you, if you care to. This would be OT.
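A tiny illustration of what "brute force, not AI" means here, assuming nothing about Deep Blue's actual code: exhaustive game-tree search over a toy game (Nim: take 1-3 sticks, whoever takes the last stick wins) plays perfectly with no "understanding" at all, only enumeration of every line of play.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(sticks: int) -> bool:
    """True if the player to move can force a win: some move must
    leave the opponent in a position from which they cannot win."""
    return any(not wins(sticks - take) for take in (1, 2, 3) if take <= sticks)

# the machine "plays well" purely by searching, not by knowing anything
print(wins(4))  # False: whatever you take, the opponent takes the rest
```

Chess engines add pruning and evaluation heuristics on top, but the core is the same search, scaled up.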
if you give up on teaching a machine "meaning", and concentrate on giving it enough rules that give the correct results most of the time, you can get very close to finishing the job you want done.
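A minimal sketch of that "enough rules, correct most of the time" approach, with hypothetical names and a toy word list: no model of meaning at all, just a rule that snaps each OCR token to the closest known dictionary word.

```python
from difflib import get_close_matches

# toy word list; a real system would load a full dictionary
DICTIONARY = ["language", "linguist", "meaning", "machine", "results", "rules"]

def correct_word(word: str) -> str:
    """Return the closest dictionary word, or the word unchanged if none is close."""
    w = word.lower()
    if w in DICTIONARY:
        return word
    matches = get_close_matches(w, DICTIONARY, n=1, cutoff=0.8)
    return matches[0] if matches else word

print(correct_word("langu4ge"))  # → language
```

It fails on words outside its list and on ambiguous fixes, which is exactly the "most of the time, not always" trade the paragraph describes.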
That has been tried in some AI projects, and failed!
of course, this approach is considered "a trick" by the artificial-intelligence people, whose aim was to "teach meaning" rather than solve a task, but that's why those artificial-intelligence people have been such a failure themselves...
This is getting OT, too. But the reason they are failing is the paradigm that language is meaning. Humans, when resolving (understanding) language, and even more so when translating it, use more than their knowledge of the language to solve these tasks.
When I see an OCR system that uses just raw results, then I will bow my head in recognition of true achievement.
a perfect example of what i just said: the objective is to get accurate o.c.r., by whatever means necessary, and _not_ to limit yourself to "raw results".
Yet, in order to get better overall results, we need better "raw results". OCR has come a long way since it started using dictionaries. Adding a DB with phrasal information will bring along another 2%, but the cost on the other side would be about 50% in resources. Sure, cheaper computers, memory, and the availability of google will help. Yet it is not the holy grail.

Also, as an OT example: how long have we been waiting for the 3-liter car (3 liters per 100 km)? Well, it has been here since the 80s. An engineer had modified a VW Rabbit (just the form of the pistons), and it only needed about one gallon per 78 miles!! Money rules. (O.K., very off topic.)
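One reading of that "DB with phrasal information", sketched with made-up names and a toy table: where the dictionary alone leaves several plausible corrections (OCR "tbe" could be "the" or "toe"), counts of adjacent word pairs pick the one that fits the surrounding words.

```python
# hypothetical miniature "phrasal DB": counts of adjacent word pairs
BIGRAMS = {
    ("of", "the"): 1000,
    ("of", "toe"): 2,
    ("his", "toe"): 40,
    ("his", "the"): 0,
}

def pick_candidate(prev_word: str, candidates: list[str]) -> str:
    """Pick the candidate forming the most frequent pair with the previous word."""
    return max(candidates, key=lambda c: BIGRAMS.get((prev_word, c), 0))

print(pick_candidate("of", ["the", "toe"]))   # → the
print(pick_candidate("his", ["the", "toe"]))  # → toe
```

The resource cost keith mentions is visible even here: the pair table grows roughly with the square of the vocabulary, where a plain dictionary grows linearly.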
if doing some voodoo gave better o.c.r., we would do it. this isn't some kind of "intellectual challenge" where we find it necessary to tie our hands behind our back; it is a practical job that needs to be done...
Exactly my point. Things work for most simple everyday tasks, but ....

Keith.