AI Technology

Why Computers Still Don't Understand People

Gary Marcus writes in the New Yorker about the state of artificial intelligence, and how we take it for granted that AI involves a very particular, very narrow definition of intelligence. A computer's ability to answer questions is still largely dependent on whether the computer has seen that question before. Quoting: "Siri and Google’s voice searches may be able to understand canned sentences like 'What movies are showing near me at seven o’clock?,' but what about questions—'Can an alligator run the hundred-metre hurdles?'—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better. In a terrific paper just presented at the premier international conference on artificial intelligence (PDF), Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. ...Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. ... To try and get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game ..."