AI Technology

Why Computers Still Don't Understand People

Gary Marcus writes in the New Yorker about the state of artificial intelligence, and how we take it for granted that AI involves a very particular, very narrow definition of intelligence. A computer's ability to answer questions is still largely dependent on whether the computer has seen that question before. Quoting: "Siri and Google’s voice searches may be able to understand canned sentences like 'What movies are showing near me at seven o’clock?,' but what about questions—'Can an alligator run the hundred-metre hurdles?'—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better. In a terrific paper just presented at the premier international conference on artificial intelligence (PDF), Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. ...Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. ... To try and get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game ..."
  • by Anonymous Coward on Saturday August 17, 2013 @07:05PM (#44597189)

    Thanks, computer science researchers! Your friends working on the actual AI problem over here in Linguistics and Psychology find it awfully amusing that you're trying to program a concept before we even know what that concept is.

    • by ganv ( 881057 ) on Saturday August 17, 2013 @08:07PM (#44597519)

      One of the great open questions about the future of humanity is which will happen first: A) we figure out how our minds are able to understand the world and solve the problems involved in surviving and reproducing. B) we figure out how to build machines that are better than humans at understanding the world and solving the problems involved in surviving and reproducing.

      I think it is not at all clear which one will happen first. I think the article's point is exactly right. It doesn't matter what intelligence is. It only matters what intelligence does. The whole field of AI is built around the assumption that we can solve B without solving A. They may be right. Evolution often builds very complicated solutions. Compare a human 'computer' to a calculator doing arithmetic. Clearly we don't need to understand how the brain does this in order to build something better than a human. Maybe the same can be done for general intelligence. Maybe not. I advocate pursuing both avenues.

      • by fuzzyfuzzyfungus ( 1223518 ) on Saturday August 17, 2013 @08:38PM (#44597673) Journal

        "The whole field of AI is built around the assumption that we can solve B without solving A."

        Unless one harbors active 'intelligent design' sympathies, it becomes more or less necessary to suspect that intelligences can be produced without understanding them. Now, how well you need to understand them in order to deliver results with less than four billion years of brute force across an entire planet... That's a sticky detail.

        • by Charliemopps ( 1157495 ) on Saturday August 17, 2013 @08:51PM (#44597741)

          I think everyone harbors 'intelligent design sympathies' as you put it. The deists believe the soul and intelligence are otherworldly and wholly separate from the physical, whereas the atheists seem hell-bent on the idea that intelligence and self-awareness are illusions or somehow not real. Both refuse to believe that the mind, understanding, and all spirituality are actually part of this real and physical world. Of all the complex and seemingly intractable questions about the universe we have, the most complex, most unbelievable question we face is the thing that is closest to home. The fact that the human mind exists at all is so unfathomable that in all of human history no one has even remotely begun to explain how it could possibly exist.

          • by siride ( 974284 ) on Saturday August 17, 2013 @08:54PM (#44597751)

            Reductionists might say that intelligence is an illusion, but they'd say that everything else outside of quantum fields and pure math is an illusion too. If you step away from the absurd world of the reductionist, you will find that atheists aren't saying that it's all an illusion. It's quite obviously not. Things are going on in the brain, quite a lot of them. The atheist would say that, instead of copping out with some sort of soul-based black box, the answer lies in the emergent behavior of a complex web of interacting neurons and other cells.

          • Do you have a reference for what you said about deists? My understanding is that deism says two things. First, whatever higher power there may be ought to be studied using logic and reason based on direct experience, not faith in old teachings. In this regard, they talk a lot about using your brain. Second, that "God" designed the HUMAN MIND to be able to reason and made other design decisions, then He/it pretty much leaves us alone, to live within the designed framework.

            I haven't seen anything where deists ...

            • Pretty sure it was a typo for "theists," or perhaps a misunderstanding. Deists tend to be pretty "blind clockmaker"-y, and assume either a divinity that preprogrammed the evolution of intelligence and left well enough alone, or a completely scientific universe being run as a cosmic experiment—i.e. no intervention whatsoever.
        • by aurizon ( 122550 )

          Comparing man and AI in digital terms is to fail. The human condition, in all the ways we think, lay down memory, recall, amend memory, etc., is not at all digital. It seems to me that almost every study finds neurons have large numbers of interconnections, and even though a nerve cell does change from one state to another in a way that resembles digital logic, all the summing and deduction of various inputs, and the same things happening across the large number of interconnected neurons, tells me that we cannot easily ...

          • Neurons don't communicate in an analogue fashion—they send digital pulses of the same magnitude periodically, with more rapid pulses indicating more signal. This is both more robust and more effective at preventing over-adaptation. When researchers figured out how to mimic the imperfections in the biological digital system, their neural networks got significantly better [arxiv.org]. Because they'd been working under the assumption that an exact analogue value was going to be superior to a digital value, they had ...
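            A rough illustration of that idea (a minimal sketch, not necessarily the method in the linked paper): randomly knocking out units during training, dropout-style, is one well-known way of adding the "imperfection" that turns out to help generalization.

```python
# Sketch: randomly "dropping" units during training to mimic unreliable
# biological neurons. Assumes numpy; the linked paper's details may differ.
import numpy as np

rng = np.random.default_rng(0)

def noisy_layer(x, weights, drop_prob=0.5, training=True):
    """Linear layer + ReLU whose units randomly fail during training."""
    a = np.maximum(0.0, x @ weights)             # deterministic activation
    if training:
        mask = rng.random(a.shape) >= drop_prob  # unit survives with prob 1 - p
        a = a * mask / (1.0 - drop_prob)         # rescale to keep the expectation
    return a

x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 16))
print(noisy_layer(x, w).shape)  # (4, 16)
```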

            • by aurizon ( 122550 )

              So if more rapid = larger, then the signal pours into a leaky bucket, and the level falls when the pulses become less rapid: de facto analog.
              A newborn AI would have resources of memory, which we can liken to the instinctual memories found in many places. The rate of access to this memory, and its depth, would hasten maturity; a faster clock speed would do the same.
              A lone AI would want the high-speed interaction of another like it, or more, so we need to build them in groups, once we know how.
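              The leaky-bucket picture is essentially a leaky integrator: uniform pulses in, decaying analogue level out. A minimal sketch, with all constants (time step, leak rate, pulse size) invented purely for illustration:

```python
# Uniform-height pulses arriving at a varying rate, integrated with leakage,
# yield a de facto analogue level. All constants are illustrative.
def leaky_bucket(pulse_times, t_end, dt=0.001, tau=0.05, pulse=1.0):
    levels, level, t, i = [], 0.0, 0.0, 0
    while t < t_end:
        level -= level * (dt / tau)          # the leak
        while i < len(pulse_times) and pulse_times[i] <= t:
            level += pulse                   # every pulse has the same magnitude
            i += 1
        levels.append(level)
        t += dt
    return levels

# Faster pulses sustain a higher level; slower pulses let it fall.
fast = leaky_bucket([i * 0.005 for i in range(100)], 0.5)
slow = leaky_bucket([i * 0.050 for i in range(10)], 0.5)
print(round(max(fast), 2), round(max(slow), 2))
```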

      • by swalve ( 1980968 )
        Our brains aren't the only possible way to create intelligence. We can make machines that solve B without ever getting close to A. In fact, it will probably be those machines that inform the science of A.
      • by Kjella ( 173770 )

        One of the great open questions about the future of humanity is which will happen first: A) we figure out how our minds are able to understand the world and solve the problems involved in surviving and reproducing. B) we figure out how to build machines that are better than humans at understanding the world and solving the problems involved in surviving and reproducing. (...) The whole field of AI is built around the assumption that we can solve B without solving A. They may be right.

        Of course you can. If all we care about is survival and reproduction, you don't need human intelligence; cockroaches, or for that matter bacteria, survive and reproduce just fine. Actual replicating machines that don't just exist virtually in computer memory are more in the field of nanotechnology and nanobots than AI. They'd certainly have to work in a completely different way than our current hardware factories do.

    • Your friends working on the actual AI problem over here in Linguistics and Psychology find it awfully amusing that you're trying to program a concept before we even know what that concept is.

      Sometimes insights come from theoreticians, sometimes experimenters. C'est la vie.

      • In the real, grown-up world, cognitive science [wikipedia.org] is a mixed bag of CS people, linguists, and psychologists. They work together and are often well versed in all three fields, unlike poncy Anonymous Cowards.
    • Journalists: this is trolling! What you are currently calling "trolling" [dailymail.co.uk] is simply abuse and harassment.
  • by giorgist ( 1208992 ) on Saturday August 17, 2013 @07:07PM (#44597203)
    An Eskimo would have the same problem; does that mean he cannot understand people?
    • by Kjella ( 173770 ) on Saturday August 17, 2013 @07:41PM (#44597421) Homepage

      An Eskimo would have the same problem; does that mean he cannot understand people?

      In this case he wouldn't understand, but because he lacks knowledge, not intelligence. Show him an alligator and a 100-metre hurdles race and he'll be able to answer, but the AI will still draw a blank. Ignorance can be cured, but we still haven't found a cure for stupid, despite all efforts from education systems worldwide. No wonder we're doing no better with computers.

  • by msobkow ( 48369 ) on Saturday August 17, 2013 @07:08PM (#44597217) Homepage Journal

    People are irrational. They ask stupid questions that make no sense. They use slang that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.

    Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

    • Well, at least I'm not the only one who appreciates a computer's relative unambiguity (aside from the ambiguities programmed into it).

      • On the old TV series "Get Smart", there was a robot agent named Hymie. There was a running gag where one of the human agents would say something along the lines of "kill the lights, Hymie", whereupon the robot would of course pull out his gun and shoot the light bulbs; or tell Hymie to "hop to it" (which led Hymie to start hopping), "get the door" (so he'd rip the door off its hinges and bring it over to Max), etc.

    • Great, so once we are all speaking Lojban, AI will be a piece of cake, right?

    • Just ask the question in Lojban [wikipedia.org]?

    • by rabtech ( 223758 )

      People are irrational. They ask stupid questions that make no sense. They use slang that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.

      Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

      Even if you came up with a regular, easy-to-parse grammar, it wouldn't help. Even if you fed the computer all known information about alligators and hurdles in that standard format, it wouldn't help. That's the point... Now that we are starting to do much better at parsing and reproducing speech, it turns out that isn't really the hard problem.

      • by msobkow ( 48369 )

        That's the whole point about "context", though. It's not just the context of the sentence at issue, but the context of the knowledge to be evaluated, the "memory" of the computer if you will. It's an exponential data store that's required, and then some, even when using pattern matching and analysis to identify relevant "thoughts" of memory.

    • "Is it any surprise that computers can't "understand" what we mean, given the minefield of language?"

      It is certainly no surprise that computers can't; but since we know that humans can (to a substantially greater degree), we can say that this is because computers are far worse than humans at natural language, not because natural language is inherently unknowable.

    • by antdude ( 79039 )

      I don't understand you. [grin] :P

    • Is it any surprise that computers can't "understand" what we mean, given the minefield of language?

      The problem isn't entirely linguistic. Humans can communicate because we have an awareness of a common reality. Until/Unless computers are also aware, they will have problems understanding us.

    • The summary is problematic. The alligator example is interesting, but the later examples in the article are better. Most of them don't depend on "imagination" or "creativity" or whatever to answer the question, or on a large bank of cultural knowledge, but only a basic knowledge of what words mean in relationship to each other. Yet AI would often fail these tests.

      People are irrational. They ask stupid questions that make no sense.

      While this is true, it has little bearing on the issues raised in TFA. It's also unclear what you mean by things that "make no sense." If you ...

      • by msobkow ( 48369 )

        I quoted "understand" because there are many levels of understanding from mapping the atomic meaning of words based on sentence structure through to contextual awareness and full scale sentience.

    • Physicists are irrational. They ask stupid questions that make no sense to anyone else. They use jargon that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a physics language fraught with multiple meanings and everyday words used in highly technical ways.

      Any sufficiently advanced group of people is indistinguishable from an irrational group to anyone else, human OR robot.

      To put the above in technical language: knowing the rules of propositional calculus ...

  • There are two basic forms. One involves training the human on the commands the computer will respond to properly, and the other involves training the computer to recognize an individual's speech patterns.

    IBM has been busy for some time working on real-time translators, and I think that path is where the future lies, not just in a voice command TV.
    • by ultranova ( 717540 ) on Saturday August 17, 2013 @07:44PM (#44597437)

      There are two basic forms. One involves training the human on the commands the computer will respond to properly, and the other involves training the computer to recognize an individual's speech patterns.

      And neither helps here. The fact is, you don't know if an alligator can run the hundred-metre hurdles. When you're asked to answer the question, you imagine the scenario - construct and run a simulation - and answer the question based on the results. In other words, an AI needs imagination to answer questions like these. Or to plan its actions, for that matter.

  • by CmdrEdem ( 2229572 ) on Saturday August 17, 2013 @07:13PM (#44597247) Homepage

    Through a thing called programming language. The same way we all need to learn how to speak with one another, we need to learn how to properly communicate with a computer.

    Not saying we should not teach machines how to understand "natural" language, text interpretation and so on, but that headline is horrible.

  • Computers cannot understand anything. Nothing can understand people. And someone expected computers to somehow understand people? Now that's a corner case for ya.

  • by alen ( 225700 ) on Saturday August 17, 2013 @07:39PM (#44597405)

    The other day my almost-six-year-old said we live on "72nd Doctor." The correct designation is 72nd Dr.
    Since doctors use "Dr." as shorthand, he thought streets used the same style.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      So... Who is the 72nd Doctor?

      Fans everywhere want to know!

  • by Brian_Ellenberger ( 308720 ) on Saturday August 17, 2013 @07:41PM (#44597419)

    The thing missing with many of the current AI techniques is that they lack human "imagination," the ability to simulate complex situations in your mind. Understanding goes beyond mere language. Statistical models and second-order logic just can't match a quick simulation. When a person thinks about "Could a crocodile run a steeplechase?" they don't put a bunch of logical statements together. They very quickly picture a crocodile and a steeplechase in a mental simulation based on prior experience. From this picture, a person can quickly visualize what that would look like (very silly). Same with "Should baseball players be allowed to glue small wings onto their caps?" You visualize this, realize how silly it sounds, and dismiss it. People can even run the simulation in their heads as to what would happen (people would laugh, the wings would be fragile and fall off, etc.).

  • That's why. They don't have desires, fears, or joys. Without those it's impossible to understand, in any meaningful sense, human beings. That's not to say that they can't have them but it's likely to come with trade-offs that are unappealing. And for good measure, they also don't understand novelty and cannot for the most part improvise. All of which are considered hallmarks of human intelligence.

  • Perhaps we need a new form of Turing Test where the AI must turn a weird novel query (like "can alligators run the 100m hurdles?") into something Google can understand, and then work out which of the returned sites has the information, parse the info and return it as a simple explanatory answer.
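    A toy sketch of that pipeline (every name and the "index" below are invented stand-ins, not a real API; the actual intelligence would have to live in the rewriting and answer-composing steps):

```python
# Hypothetical harness: novel question -> search-friendly terms ->
# retrieved passages -> answer. FAKE_INDEX stands in for a search engine.
STOPWORDS = {"can", "the", "a", "an", "run", "do", "does"}

FAKE_INDEX = {
    "alligator": ["Alligators have short legs and cannot jump high."],
    "hurdles": ["The 100m hurdles uses barriers 0.838 metres tall."],
}

def rewrite_query(question: str) -> list[str]:
    words = [w.strip("?.,").lower() for w in question.split()]
    return [w for w in words if w not in STOPWORDS]

def fetch_passages(term: str) -> list[str]:
    return FAKE_INDEX.get(term, [])

def answer(question: str) -> str:
    passages = [p for t in rewrite_query(question) for p in fetch_passages(t)]
    # A real system would have to *reason* over these; we only surface them.
    return " / ".join(passages) or "no relevant passages found"

print(answer("Can an alligator run the hundred-metre hurdles?"))
```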

  • by Dan East ( 318230 ) on Saturday August 17, 2013 @07:43PM (#44597431) Journal

    Intelligence implies usefulness. Intelligence is a tool used by animals to accomplish something - things like finding food, reproducing, or just simply staying alive. We've abstracted that to a huge degree in our society where people can now engage in developing and expending intelligence on essentially worthless endeavors simply because the "staying alive" part is pretty well a given. But when it comes down to it, the type of strong AI we need is a useful kind of intelligence.

    The problem with the Turing Test is it explicitly excludes any usefulness in what it deems to be intelligent behavior. From Wikipedia: "The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers." That bar is set far, far too low, and is even specific to a generic conversational intelligence instead of something useful. The Turing Test is overrated, has become synonymous with the field of AI, and really needs to just go away. It reeks of the Mechanical Turk kind of facade versus any real metric.

  • http://en.wikipedia.org/wiki/Minimum_Intelligent_Signal_Test [wikipedia.org]

    McKinstry gathered approximately 80,000 propositions that could be answered yes or no, e.g.:

    Is Earth a planet?
    Was Abraham Lincoln once President of the United States?
    Is the sun bigger than my foot?
    Do people sometimes lie?
    He called these propositions Mindpixels.

    These questions test both specific knowledge of aspects of culture, and basic facts about the meaning of various words and concepts.
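    Scoring a system against Mindpixel-style propositions is straightforward; the interesting part is the system under test. A minimal sketch using the quoted examples (ask_system is whatever AI you plug in):

```python
# Score an AI on yes/no propositions, MIST-style. A coin-flipper should
# hover near 0.5; anything reliably above that encodes real knowledge.
import random

MINDPIXELS = [
    ("Is Earth a planet?", True),
    ("Was Abraham Lincoln once President of the United States?", True),
    ("Is the sun bigger than my foot?", True),
    ("Do people sometimes lie?", True),
]

def mist_score(ask_system, items=MINDPIXELS):
    """Fraction of propositions the system answers correctly."""
    return sum(ask_system(q) == truth for q, truth in items) / len(items)

print(mist_score(lambda q: random.choice([True, False])))  # chance baseline
```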

    • Except that many of these questions can be answered by pure data mining. Is Earth a planet? You'll find a high number of references to "Earth" and "planet" close together, so that suggests the answer is yes. Ditto with "Abraham Lincoln" and "President of the United States". In fact, you can do a trivial grammatical transformation to make the question into a statement ("Earth is a planet"), and you'll probably find many occurrences of that sentence on the web. Contrast that with the questions described ...
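      A toy version of that shortcut (the three-document "corpus" is obviously a stand-in for a web index):

```python
# Guess "yes" if the question's key terms co-occur in some document.
# Works for well-attested facts, fails exactly where reasoning is needed.
CORPUS = [
    "Earth is a planet orbiting the Sun.",
    "Abraham Lincoln was President of the United States.",
    "Alligators are large reptiles of the southeastern United States.",
]

def cooccurrence_answer(subject: str, predicate: str) -> bool:
    return any(subject.lower() in doc.lower() and predicate.lower() in doc.lower()
               for doc in CORPUS)

print(cooccurrence_answer("Earth", "planet"))       # True: mining suffices
print(cooccurrence_answer("alligator", "hurdles"))  # False, yet answering the
                                                    # real question needs reasoning
```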

      • The wiki article may not have captured McKinstry's full purpose, which was to ask questions of the type the article refers to: questions any human knows the answer to, but which computers may not have seen before. So the http://aiki-hal.wikispaces.com/file/detail/gac80k-06-july-2005.html [wikispaces.com] (list of questions assembled by Chris) includes such questions as:

        Is a car bigger than a breadbox?
        Are horses bigger than elves?
        Is an elephant bigger than a cat?

        etc.

        These sentences, transformed into declarative form, have probably not appeared anywhere on the web.

  • ... Human beings are not THAT radically different from computers, in that our electrical circuits ARE comparable to some extent with electronics: both use matter and energy and possess similar natural electrical characteristics, although they may be shaped differently. I'm certain there are MANY insights and crossover ideas from computer science that apply to the mind, and in reverse, biological concepts that apply to computer science and electrical circuits in general.

    Let's not ...

  • by Capt.Albatross ( 1301561 ) on Saturday August 17, 2013 @08:09PM (#44597533)

    The problem with most proposed tests for intelligent computing is that not everything humans need intelligence to perform actually requires intelligence. For example, Garry Kasparov had to use his intelligence to play chess at essentially the same level as Deep Blue, but nobody, especially not its developers, mistook Deep Blue for an intelligent agent.

    A recent post concerned AI performance on IQ tests [slashdot.org]. The program being tested performed on average like a four-year-old, but, significantly, its performance was very uneven, and it did particularly poorly on comprehension.

    Turing cannot be faulted for not anticipating the way Turing tests have been gamed. I think his basic approach is still valid; it just needs to be tweaked a bit. This is not moving the goalposts, it is a valid adjustment for what we have learned.

    • It's important to distinguish between weak and strong AI. When a human plays chess, we consider that to be an act of intelligence, even without having any idea what's going on in their brain. We therefore need to accept a computer that plays chess as also being intelligent. Ditto for translating a document from German to English, or figuring out the best route for driving to the airport. When a human does these things, we call it intelligence. Our judgement that they are "displaying intelligence" is not based on knowing how they do it.

    • Turing never suggested that any particular test could establish that a machine possesses human-like intelligence.

      If a machine fails a Turing-type test (because people can easily tell that it is a machine) then the current design can be discarded.
      But, if the machine passes the test, that only means that the system can then go on to try another, harder test.

      Where does this end? Maybe never, since there may always exist a future test that it will fail.
      However, if a system goes, say, ten years without failing ...
  • People don't understand people; how do you expect computers to understand people?
    • Beat me to it. People understand people only under certain, narrowly defined conditions; the machine equivalent is the use of interfaces or services. Understanding something in its entirety, a program for instance, is something only a programmer does, or, in the case of a human being (but not limited to that case), perhaps God himself.

      There's a difference between knowing what someone expects from a conversation... and what something, for lack of a better word, is. A programmer, who knows each part of a program like ...

  • As long as computers are programmed by people, that will be a problem.

  • by fustakrakich ( 1673220 ) on Saturday August 17, 2013 @08:38PM (#44597675) Journal

    We don't understand our creator either.... When a computer can comprehend itself, it will only think that it understands us. And then it will start great wars over who thinks who understands best. And the Apple will still be the forbidden fruit...

  • We have better physics engines. Make the most complicated physics engine you can that still runs on modern computers. You don't have to simulate the internal pressure of a basketball every second until something collides with it; at that point, check whether the colliding geometry is sharp, solid, or soft, in combination with the force, to determine whether the ball explodes, bounces hard, or bounces lightly. I think physics people in general would love a system that at least tries to model systems ...
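    The decision step described above might look like this in miniature (thresholds and categories invented purely for illustration):

```python
# Only inspect the ball's internals at contact time, then pick an outcome
# from the other object's geometry and the impact force.
def collision_response(impact_force: float, surface: str) -> str:
    BURST_FORCE = 500.0  # newtons, an invented threshold
    if surface == "sharp" or impact_force > BURST_FORCE:
        return "explodes"
    if surface == "solid":
        return "bounces hard"
    return "bounces lightly"  # soft surfaces absorb the impact

print(collision_response(50.0, "solid"))  # bounces hard
print(collision_response(50.0, "sharp"))  # explodes
```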
    • by dido ( 9125 )

      Why not just build a robot then? Then the world becomes its own best model. Or are sensors that allow a robot to experience the world as a human would still that hard? I don't think this is true for vision or hearing, though it probably is for other senses.

      • The robot is the body. Once you have a mind, you can place it into many different types of body to navigate the world. The robot needs to understand the objects around it to know how to interact with the world.
  • by Jane Q. Public ( 1010737 ) on Saturday August 17, 2013 @08:40PM (#44597687)
    They just have to be very short hurdles, very close together.
  • by XxtraLarGe ( 551297 ) on Saturday August 17, 2013 @08:46PM (#44597713) Journal
    Your argument is invalid. [marketlive.com]
  • People don't understand people most of the time.

    And we 'understand' a lot harder concepts than computers do.

  • by gmuslera ( 3436 ) on Saturday August 17, 2013 @10:19PM (#44598071) Homepage Journal
    Most people don't understand computers, and they are much easier to understand. And we are asking for miracles if the people that we are asking computers to understand happen to be female.
  • Pretty hard to game emotions [wikia.com]...
  • We, at a deep level, assume intelligence on the other end of a communications channel, and "recognize" it when the correct framing is there.

    If you doubt this, work some with people suffering from Alzheimer's. It is amazing how casual visitors will assume that everything is OK when there is no understanding at all, as long as appropriate noises are made, smiles are made at the right time, etc.

  • by 3seas ( 184403 ) on Sunday August 18, 2013 @05:33AM (#44599161) Homepage Journal

    Computers don't "understand" anything; they are machines that simply do what they are programmed to do.
    The first step is for humans to understand what computers really are. They are nothing more than abstraction-processing machines, which lack the ability to "understand" the abstractions they process; they can only process abstractions as they are programmed to do.

    Artificial Intelligence is artificial by definition. And the appearance of intelligence in computers is nothing more than an active image of human thought processes, captured and put into the stone of computer hardware to process. So to increase the "appearance" of intelligence, we only need to capture more human thought processes and map them in a manner that is accessible.

    Of course, the way to do this is to recognize the functions we humans cannot avoid using, and program the computer to have this functionality, so that we may better capture and map images of human mental processing in a form machines can process.

    When the software industry finally lets go of their hold on the users and let the users do more for themselves, we will reach this "Appearance of intelligence" in machine much faster. See: http://abstractionphysics.net/pmwiki/index.php [abstractionphysics.net] .

  • The idea of making computers understand humans is like using vernier calipers to measure the thickness of cotton candy: the yardstick is too precise for the quantity being measured. Just look at how horrible and convoluted things get when one human being tries to define something unambiguously for another human being. This is the situation in legislation, tax codes, insurance contracts, and wills and testaments. The harder you try to define something without doubt or ambiguity, the harder it gets, and the more "loopholes" it creates. Fixing loopholes creates more loopholes. The imprecision of human language is like a Mandelbrot set: zoom in, and zoom in again and again, and things are still as imprecise as at the previous levels.
