Why Computers Still Don't Understand People
Gary Marcus writes in the New Yorker about the state of artificial intelligence, and how we take it for granted that AI involves a very particular, very narrow definition of intelligence. A computer's ability to answer questions is still largely dependent on whether the computer has seen that question before. Quoting:
"Siri and Google’s voice searches may be able to understand canned sentences like 'What movies are showing near me at seven o’clock?,' but what about questions—'Can an alligator run the hundred-metre hurdles?'—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better. In a terrific paper just presented at the premier international conference on artificial intelligence (PDF), Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. ...Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. ... To try and get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game ..."
people can only answer questions they know (Score:5, Interesting)
The other day my almost-six-year-old said we live on "72nd Doctor." The correct designation is 72nd Dr.
Since doctors use "Dr." as shorthand, he thought streets used the same style.
Re:Missing the point as usual (Score:5, Interesting)
One of the great open questions about the future of humanity is which will happen first: A) we figure out how our minds are able to understand the world and solve the problems involved in surviving and reproducing. B) we figure out how to build machines that are better than humans at understanding the world and solving the problems involved in surviving and reproducing.
I think it is not at all clear which one will happen first, and I think the article's point is exactly right: it doesn't matter what intelligence is, only what intelligence does. The whole field of AI is built around the assumption that we can solve B without solving A, and it may be right. Evolution often builds very complicated solutions. Compare a human 'computer' to a calculator doing arithmetic: clearly we don't need to understand how the brain does arithmetic in order to build something better at it than a human. Maybe the same can be done for general intelligence. Maybe not. I advocate pursuing both avenues.
I think a better SHRDLU is needed (Score:2, Interesting)
Once you have this system, start databasing real objects in it (another time-consuming task), and see how they interact. Natural language processing follows naturally, since you have a bunch of nouns (the objects you databased) and verbs (actions on the objects). The thing is, even if an AI has a complex imagination space capable of imagining and simulating scenarios, it still wouldn't talk like a guy you meet off the street at first. I think sci-fi has this covered with the socially awkward Data and such.
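The "database of nouns plus verbs" idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (all class and function names are invented, not from SHRDLU itself): objects are catalogued nouns with properties, verbs are functions that act on them, and a two-word command is interpreted by looking both up.

```python
class Thing:
    """A catalogued object: a noun with simple properties."""
    def __init__(self, name, **props):
        self.name = name
        self.props = props

class World:
    """A tiny database of objects plus the verbs that act on them."""
    def __init__(self):
        self.things = {}
        self.verbs = {}

    def add_thing(self, thing):
        self.things[thing.name] = thing

    def add_verb(self, name, fn):
        self.verbs[name] = fn

    def command(self, verb, noun):
        """Interpret a two-word command like 'lift block'."""
        return self.verbs[verb](self.things[noun])

world = World()
world.add_thing(Thing("block", color="red", weight=1))
world.add_thing(Thing("pyramid", color="green", weight=2))
world.add_verb("lift", lambda t: f"lifting the {t.props['color']} {t.name}")

print(world.command("lift", "block"))  # -> lifting the red block
```

Of course, this is exactly the canned-sentence problem from the article: the system only "understands" verbs and nouns someone databased, which is why the commenter concedes it still wouldn't talk like someone off the street.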
Re:Missing the point as usual (Score:4, Interesting)
I think everyone harbors 'intelligent design sympathies,' as you put it. The deists believe the soul and intelligence are otherworldly and wholly separate from the physical, whereas the atheists seem hell-bent on the idea that intelligence and self-awareness are illusions or somehow not real. Both refuse to believe that the mind, understanding, and all spirituality are actually part of this real and physical world. Of all the complex and seemingly intractable questions we have about the universe, the most complex, most unbelievable question we face is the one closest to home. The fact that the human mind exists at all is so unfathomable that in all of human history no one has even remotely begun to explain how it could possibly exist.
Re:Missing the point as usual (Score:5, Interesting)
Reductionists might say that intelligence is an illusion, but they'd say that everything else outside of quantum fields and pure math is an illusion too. If you step away from the absurd world of the reductionist, you will find that atheists aren't saying that it's all an illusion. It's quite obviously not. Things are going on in the brain, quite a lot of them. The atheist would say that instead of copping out with some sort of soul-based black box, that the answer lies in the emergent behavior of a complex web of interacting neurons and other cells.
AI has a high burden of proof (Score:5, Interesting)
The interesting thing happens when you present the same premise to a five-year-old who knows only that birds can fly and has never seen a penguin before. If you tell them that a penguin is a bird, they will quite happily conclude that a penguin can fly, and they are extremely surprised to find out that it can't. As adults we find such quirks in life and laugh at the unexpected absurdity, as with ironies (say, working with a woman you hate named Joy) or unexpected contradictions.
The point is that intelligence is about tolerating those pieces of feedback and handling what happens when they are encountered: your head doesn't explode at an absurdity or an unexpected result, and you only make the same mistake once.
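The penguin story above is the classic default-reasoning pattern, and the "make the mistake only once" behavior can be sketched directly. This is a hedged toy sketch (the class and method names are invented for illustration): the system assumes the category default, gets surprised by the feedback, and records the exception so the default is overridden from then on.

```python
class DefaultReasoner:
    """Toy defeasible reasoner: category defaults, overridden by exceptions."""
    def __init__(self):
        self.defaults = {}    # category -> assumed property value
        self.exceptions = {}  # individual -> observed property value

    def assume(self, category, value):
        self.defaults[category] = value

    def observe(self, individual, value):
        # Feedback is tolerated, not fatal: the surprise is simply recorded.
        self.exceptions[individual] = value

    def predict(self, individual, category):
        # A learned exception beats the category default.
        if individual in self.exceptions:
            return self.exceptions[individual]
        return self.defaults.get(category)

r = DefaultReasoner()
r.assume("bird", "flies")
print(r.predict("penguin", "bird"))   # -> flies (the five-year-old's guess)
r.observe("penguin", "cannot fly")    # the surprising feedback
print(r.predict("penguin", "bird"))   # -> cannot fly (mistake made once)
```

The hard part, which this sketch ignores, is deciding *which* beliefs to revise when feedback arrives; that is the open problem behind the head-not-exploding behavior.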
The major difference between man and machine will be that a machine can copy its knowledge verbatim to another system, and thus have some degree of immortality, whereas the shelf life of a human brain seems to be around 80 years right now. Thus, even if machines are slower to learn than us, they will outlive our great-great-grandchildren.
Furthermore, who says that an intelligence we create should be like ours? It may be more beneficial all around if we never generate an intelligence that operates just like ours but is just as effective, if not more so. If that happens, there may even still be a future use for the human race, rather than our being overlords who grow fat and complacent only to be overthrown.
Re:AI has a high burden of proof (Score:4, Interesting)
Language seems to be the burden of proof required of an AI system, and has been since the days of Turing. Language is itself a representation of symbolic logic, and the most common debunking is that transitive logic fails in symbolic logic.
That's where you're wrong. Natural language is not a representation of symbolic logic. It's a representation of human perception, thought and social interaction, which do not work by formal logic at all. Language is an organic and dynamic product of biology and society. Formal logic, in all its forms, is a product of mathematics, which is a tiny subset of all that is human thought.