Why Computers Still Don't Understand People
Gary Marcus writes in the New Yorker about the state of artificial intelligence, and how we take it for granted that AI involves a very particular, very narrow definition of intelligence. A computer's ability to answer questions is still largely dependent on whether the computer has seen that question before. Quoting:
"Siri and Google’s voice searches may be able to understand canned sentences like 'What movies are showing near me at seven o’clock?,' but what about questions—'Can an alligator run the hundred-metre hurdles?'—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better. In a terrific paper just presented at the premier international conference on artificial intelligence (PDF), Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. ...Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. ... To try and get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game ..."
Missing the point as usual (Score:4, Funny)
Thanks computer science researchers! Your friends working on the actual AI problem over here in Linguistics and Psychology find it awfully amusing that you're trying to program a concept before we even know what that concept is.
Re:Missing the point as usual (Score:5, Interesting)
One of the great open questions about the future of humanity is which will happen first: A) we figure out how our minds are able to understand the world and solve the problems involved in surviving and reproducing. B) we figure out how to build machines that are better than humans at understanding the world and solving the problems involved in surviving and reproducing.
I think it is not at all clear which one will happen first. I think the article's point is exactly right. It doesn't matter what intelligence is. It only matters what intelligence does. The whole field of AI is built around the assumption that we can solve B without solving A. They may be right. Evolution often builds very complicated solutions. Compare a human 'computer' to a calculator doing arithmetic. Clearly we don't need to understand how the brain does this in order to build something better than a human. Maybe the same can be done for general intelligence. Maybe not. I advocate pursuing both avenues.
Re:Missing the point as usual (Score:5, Insightful)
"The whole field of AI is built around the assumption that we can solve B without solving A."
Unless one harbors active 'intelligent design' sympathies, it becomes more or less necessary to suspect that intelligences can be produced without understanding them. Now, how well you need to understand them in order to deliver results with less than four billion years of brute force across an entire planet... That's a sticky detail.
Re:Missing the point as usual (Score:4, Interesting)
I think everyone harbors 'intelligent design sympathies', as you put it. The deists believe the soul and intelligence are otherworldly and wholly separate from the physical, whereas the atheists seem hell-bent on the idea that intelligence and self-awareness are illusions or somehow not real. Both refuse to believe that the mind, understanding and all spirituality are actually part of this real and physical world. Of all the complex and seemingly intractable questions about the universe we face, the most complex, most unbelievable one is the thing closest to home. The fact that the human mind exists at all is so unfathomable that in all of human history no one has even remotely begun to explain how it could possibly exist.
Re:Missing the point as usual (Score:5, Interesting)
Reductionists might say that intelligence is an illusion, but they'd say that everything else outside of quantum fields and pure math is an illusion too. If you step away from the absurd world of the reductionist, you will find that atheists aren't saying that it's all an illusion. It's quite obviously not. Things are going on in the brain, quite a lot of them. The atheist would say that instead of copping out with some sort of soul-based black box, that the answer lies in the emergent behavior of a complex web of interacting neurons and other cells.
citation re deism? (Score:2)
Do you have a reference for what you said about deists? My understanding is that deism says two things. First, whatever higher power there may be ought to be studied using logic and reason based on direct evidence, not faith in old teachings. In this regard, they talk a lot about using your brain. Second, that "God" designed the HUMAN MIND to be able to reason, made other design decisions, and then He/it pretty much leaves us alone to live within the designed framework.
I haven't seen anything where de
Re: (Score:3)
Re: (Score:3)
most of whom vehemently deny the existence of USPS (Score:3)
While many people with different beliefs may take any label, the atheists I've spoken to are more like "people who religiously deny the possibility that anything like a postal service could exist." I think the term "agnostic" better describes those who simply aren't interested in the topic, as well as those who are open-minded about it.
Re: (Score:3)
The problem here is the fundamental misunderstanding or misuse of the words (a)theist and (a)gnostic.
Theism and atheism merely describe your position on the existence of a God or Gods.
Gnosticism describes the nature of the position--do you know, or do you not know?
Someone that is gnostic "knows" that their position is correct. Someone that is "agnostic" doesn't really know either way.
A theist can be gnostic ("I KNOW God exists") or agnostic ("I believe God exists, but I have no way to prove it; the position
Re: (Score:2)
Comparing man and AI in digital terms is to fail. The human condition, in all the ways we think, lay down memory, recall, amend memory etc., is not at all digital. It seems to me that almost every study finds neurons have large numbers of interconnections, and even though a nerve cell does change from one state to another in a way that resembles digital, all the summing and deduction of various inputs, and the same things happening across the large number of interconnected neurons, tells me that we can not easi
Re: (Score:3)
Neurons don't communicate in an analogue fashion—they send digital pulses of the same magnitude, with more rapid pulses indicating a stronger signal. This is both more robust and more effective at preventing over-adaptation. When researchers figured out how to mimic the imperfections in the biological digital system, their neural networks got significantly better [arxiv.org]. Because they'd been working under the assumption that an exact analogue value was going to be superior to a digital value, they ha
Re: (Score:2)
So if more rapid = larger, then the signal pours into a leaky bucket and the level falls when the pulses slow down: de facto analog.
An AI newborn would have resources of memory, which we can liken to the instinctual memories found in many places. The rate and depth of access to these would hasten maturity, and a faster clock speed would do so too.
A lone AI would want the high-speed interaction of another like it, or more, so we need to build them in groups, once we know how.
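To make that leaky-bucket point concrete, here's a toy Python sketch (all constants invented for illustration): identical unit pulses feed a leaky integrator, and faster pulse trains settle at higher levels -- rate-coded digital in, de facto analog out.

# Leaky integrator: identical digital pulses in, analog level out.
# All constants are arbitrary, chosen only for illustration.
def leaky_integrator(pulse_times, t_end, dt=0.001, tau=0.05, weight=1.0):
    level, levels = 0.0, []
    pulses = sorted(pulse_times)
    i = 0
    t = 0.0
    while t < t_end:
        level *= (1 - dt / tau)              # the leak: decay toward zero
        while i < len(pulses) and pulses[i] <= t:
            level += weight                  # every pulse adds the same fixed amount
            i += 1
        levels.append(level)
        t += dt
    return levels

# A fast train (200 Hz) settles near 10, a slow one (20 Hz) near 1:
fast = leaky_integrator([k * 0.005 for k in range(200)], t_end=1.0)
slow = leaky_integrator([k * 0.050 for k in range(20)], t_end=1.0)
print(f"fast ~ {fast[-1]:.2f}, slow ~ {slow[-1]:.2f}")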
Re: (Score:2)
Yes, we humans all have varying forms of intelligence. Some are math whizzes who never solve the reproduction equation on Saturday night. Some can compute complex 3D motion and solve the muscle equation to hit baskets.
I think that as we understand the mix of processes and brain areas that make up the average mind, we will also learn what makes the extraordinary mind, in math, social skills and athletics, and what the tradeoffs are.
Is there a maximum degree of intelligence? Can an AI have an IQ of 3,500,000 - i
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
One of the great open questions about the future of humanity is which will happen first: A) we figure out how our minds are able to understand the world and solve the problems involved in surviving and reproducing. B) we figure out how to build machines that are better than humans at understanding the world and solving the problems involved in surviving and reproducing. (...) The whole field of AI is built around the assumption that we can solve B without solving A. They may be right.
Of course you can; if all we care about is survival and reproduction, you don't need human intelligence, as cockroaches, or for that matter bacteria, survive and reproduce just fine. Actual replicating machines that don't just exist virtually in computer memory are more in the field of nanotechnology and nanobots than AI. They'd certainly have to work in a completely different way than our current hardware factories do.
Re: (Score:2)
Your friends working on the actual AI problem over here in Linguistics and Psychology find it awfully amusing that you're trying to program a concept before we even know what that concept is.
Sometimes insights come from theoreticians, sometimes experimenters. C'est la vie.
Re: (Score:3)
Nice Troll :-) (Score:3)
Re:Missing the point as usual (Score:5, Insightful)
I'm pretty sure that 'computer science' is either math or dishonestly labelled trade school, depending on where you get it.
Re:Missing the point as usual (Score:5, Insightful)
I've long been a proponent of the idea that there would be far fewer misunderstandings if it were renamed "Computational Science". The discipline is the study of how to sequentially break down and solve problems. That we do so with these electronic devices we've named "computers" is kinda tangential.
Re:Missing the point as usual (Score:5, Interesting)
Re: (Score:2)
Exactly.
Re: (Score:2)
Re: (Score:2)
I'm using the modern definition; doing that helps cut down on confusion, no?
Note: I don't have a problem with non-sciences; I enjoy and am fairly good at computer "science," but I can say the same about Aristotelian philosophy. But neither one can hold a candle to, say, modern physics (which I also enjoy).
I just don't have much patience for glorified coders sneering at psychology and linguistics as if they're somehow better.
Re:Missing the point as usual (Score:5, Insightful)
Even if we restrict the definition of "science" to yours, namely that science is purely "evidence-based, hypothesis-driven testing", computer science would still fit the bill.
Remember that CS is as diverse a field as modern physics. You have theoretical CS, where you tackle questions like "What is a good, logical definition for computability?" or "How can you logically prove that a program terminates/runs in X time/consumes X resources, no matter the input?". This is fully equivalent to the questions of theoretical physics, where you tackle the Grand Unified Theory -- joining gravity, the weak and strong forces, as well as electromagnetism.
These theoretical questions can be brought up without need of evidence -- if all you're interested in is disproving something. According to your definition, this means that the theoretical aspects of both physics and CS are not "science". Okay, let's run with that.
The nice aspect of theoretical questions that can't be disproven by pure thought is that they lead us on to try to discover concrete evidence that a given theory is true or false in real application! And this is where your rather narrow definition of science comes in, and the point where we find that both practical physics and practical CS fulfill the criteria.
For example, in physics we can test the theory of relativity by building telescopes that look at stars and black holes, to see whether the hypothesis's predictions hold true and the hypothesis can be raised to the status of a theory. As can be seen from the very phrase people use -- "the theory of relativity" -- this has happened.
But if you look with more than a superficial glance at CS, you will see that the same process is at work in moving from theoretical CS to practical CS. One open question of theoretical CS is whether P = NP or not [1]. So far, we are incapable of settling either possibility by pure thought. Thus, we turn to practical CS, where people try to find evidence either way in the real world. After all, if you can create a program on a real computer that solves an NP-hard problem while never leaving the limits of P, you have conclusively shown that P = NP. So far, we've only found approximate or heuristic solutions, so after 50 years of turning up "no evidence" we are allowing ourselves to treat the hypothesis "P != NP" (even if only cautiously) as a theory -- and we're indeed doing that, as you can see if you look at most modern encryption methods.
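To make that asymmetry concrete, here's a toy Python sketch (mine, not from any paper) using subset sum, a classic NP-complete problem: checking a proposed certificate is cheap, while the only known exact general method is brute-force search over subsets.

from itertools import combinations
from collections import Counter

def verify(numbers, target, subset):
    # Checking a certificate: polynomial in the input size.
    available, chosen = Counter(numbers), Counter(subset)
    return sum(subset) == target and all(available[x] >= n for x, n in chosen.items())

def solve(numbers, target):
    # Finding a certificate: exponential search in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # (4, 5), found only by trying subsets
print(verify(nums, 9, (4, 5)))  # True, checked in one linear pass

A polynomial-time solve() for any such problem would collapse the distinction; nobody has found one.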
But you might say: that is not enough! After all, you could reduce any written computer program on physical hardware to a sequence of logical steps in a system modeled with pure thought. And indeed you can, as the Turing model of computation promises exactly that -- and so far physical evidence agrees with us. But isn't the same true for physics? After all, physicists search for such a description, too! It's what Maxwell, Einstein and lots of other physicists were and are after when they ultimately search(ed) for the Grand Unified Theory. How can you blame CS for already having found its Unified Theory?
But the last example finally puts the nail in your view: what about quantum computers? They are the point where physics and CS meet, both on the theoretical side (quantum theory / quantum computation) and the practical side (building the thing and proving that the shit actually works as advertised).
So, if we accept your definition of science, then it follows directly that if CS is not a science, physics can't be either.
[1] - http://en.wikipedia.org/wiki/P_versus_NP_problem [wikipedia.org]
Re: (Score:2)
And, by the way, you might note that I was replying to the person who brought up "pseudoscience."
I suspect that we are more in agreement than you might think.
AI has a high burden of proof (Score:5, Interesting)
The interesting thing happens when you pose the same premise to a 5-year-old who only knows that birds can fly and has never seen a penguin before. If you tell them that a penguin is a bird, they will quite happily think that a penguin can fly, and they are extremely surprised to find out that it can't. We as adults encounter such quirks in life and do things like laugh at the unexpected absurdity, such as ironies (e.g. you work with a woman you hate named Joy), or are amazed at unexpected contradictions.
The point is that intelligence is about the tolerance of those pieces of feedback and what happens when they are encountered. I.e. your head doesn't explode at an absurdity or an unexpected result, and you only make the same mistake once.
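A toy Python sketch of that belief revision (the tiny taxonomy is invented for illustration): class defaults are inherited, and a specific exception learned from feedback overrides them, so the mistake is only made once.

# Defaults inherit from a class; specific exceptions win.
defaults = {"bird": {"can_fly": True}}
is_a = {"sparrow": "bird", "penguin": "bird"}
exceptions = {}                        # filled in as surprises happen

def can_fly(animal):
    if animal in exceptions:
        return exceptions[animal]      # specific knowledge beats the default
    parent = is_a.get(animal)
    if parent in defaults:
        return defaults[parent]["can_fly"]
    return None                        # no basis for an answer

print(can_fly("penguin"))              # True -- the 5-year-old's first guess
exceptions["penguin"] = False          # the surprise: penguins can't fly
print(can_fly("penguin"))              # False -- same mistake never again
print(can_fly("sparrow"))              # True -- the default still holds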
The major difference between man and machine will be the fact that a machine can copy its knowledge verbatim to another system, and thus have some degree of immortality, whereas the shelf life of a human brain seems to be around 80 years or so right now. Thus, even if machines are slower to learn than us, they will outlive our great-great-grandchildren.
Furthermore, who says that an intelligence we create should be like ours? It may be more beneficial to all concerned if we never generate an intelligence that operates just like ours but is just as effective, if not more so. If this happens, there may even still be a future use for the human race, rather than us being just overlords who grow fat and complacent until we are overthrown.
Re: (Score:2)
I'm sure CYC [cyc.com] would make a good go of it. This is what it was built for.
Re:AI has a high burden of proof (Score:4, Interesting)
Language seems to be the burden of proof required for an AI system, and has been so since the days of Turing. Language is in itself a representation of symbolic logic, and the most common stumbling block is that transitive logic fails in symbolic logic.
That's where you're wrong. Natural language is not a representation of symbolic logic. It's a representation of human perception, thought and social interaction, which do not work by formal logic at all. Language is an organic and dynamic product of biology and society. Formal logic, in all its forms, is a product of mathematics, which is a tiny subset of all that is human thought.
Re: (Score:3)
Correct. I'd go a bit further.
The questions Levesque proposes are questions that will test a language processing system, not intelligence. Language is not required for intelligent behavior and is insufficient (as various language parsers and knowledge-web systems have shown).
I don't believe any system that has language as its primary tool can be intelligent. Language is far too blunt an instrument. Anything we would be likely to call intelligence has to rest on a modelling system which is far more subtle and
Re: (Score:3)
The interesting thing happens when you ask the same premise to a 5 year old, who only knows that a bird can fly and has never seen a penguin before. If you tell them that a penguin is a bird, they will quite happily think that a penguin can fly. They are extremely surprised to find out that they can't. We as adults find such quirks in life, and do things like laugh at the unexpected absurdity
To see something almost grasped, yet it slips like quicksilver through the fingers of the reaching hand...
The five year old demonstrates intelligence when they (3rd person plural analog of "you" to avoid gender bias) change their mental model of the world to accommodate the new fact ('penguins are birds that cannot fly'). When the five year old does this quickly, they are considered bright. When they do it slowly and with evident difficulty, others begin to suspect autism or some other defect. When they ha
Re: (Score:3)
A true artificial intelligence will show evidence of maintaining a mental model of reality, and of testing that model against incoming data, and adjusting the model when necessary. This strongly implies that the AI models itself in some manner, such that it can "imagine" a different way of "looking" at the world, and then judge whether the new model is a better way of thinking about things than the old model. The process is clearly fractal, since at the next level the software would be "imagining" a different way of judging which of two models was better, and eventually reaching the point where it makes decisions about whether in the current context it should act pragmatically or ethically.
Indeed. "Mental" modeling — maintaining and manipulating an abstract computational representation of beliefs — is at the heart of strong AI. Such models include, for example, beliefs about the world, beliefs about other agents (including what they believe about you), and beliefs about self. This is where computer scientists, linguists, cognitive psychologists and others all have some common ground and interdisciplinary research can be very productive. Learning is the ability to make systemati
An eskimo would have the same problem (Score:4, Insightful)
Re:An eskimo would have the same problem (Score:5, Insightful)
An eskimo would have the same problem; does that mean he cannot understand people?
In this case he wouldn't understand, but because he lacks knowledge not intelligence. Show him an alligator and a 100 meter hurdles race and he'll be able to answer but the AI will still draw a blank. Ignorance can be cured but we still haven't found a cure for stupid, despite all efforts from education systems worldwide. No wonder we're doing no better with computers.
Re: (Score:2, Insightful)
So can some computer programs: Watson includes a confidence percentage in its answer.
*People* can't understand people (Score:5, Insightful)
People are irrational. They ask stupid questions that make no sense. They use slang that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.
Is it any surprise that computers can't "understand" what we mean, given the minefield of language?
Re: (Score:2)
Well, at least I'm not the only one who appreciates a computer's relative unambiguity (aside from the ambiguities programmed into it).
Re: (Score:2)
On the old TV series "Get Smart", there was a robot agent named Hymie. There was a running gag where one of the human agents would say something along the lines of "kill the lights, Hymie", whereupon the robot would of course pull out his gun and shoot the light bulbs; or tell Hymie to "hop to it" (which led Hymie to start hopping), "get the door" (so he'd rip the door off its hinges and bring it over to Max), etc.
Re: (Score:2)
Great, so once we are all speaking lojban, AI will be a piece of cake, right?
Re: (Score:2)
Great, so once we are all speaking lojban, AI will be a piece of cake, right?
Only if we are speaking lojban on the semantic web. And after we've abandoned empiricism for syllogism.
Re: (Score:2)
How do we deal with multiple quantification by syllogism?
Re: (Score:3)
Re: (Score:2)
well, once we are all insane, it would be easier to find an acceptable AI. unfortunately, it would also be harder to develop. :-/
Re: (Score:2)
Just ask the question in Lojban [wikipedia.org]?
Re: (Score:2)
People are irrational. They ask stupid questions that make no sense. They use slang that confuses the communication. They have horrible grammar and spelling. And overseeing it all is a language fraught with multiple meanings for words depending on the context, which may well include sentences and paragraphs leading up to the sentence being analyzed.
Is it any surprise that computers can't "understand" what we mean, given the minefield of language?
Even if you came up with a regular, easy-to-parse grammar it wouldn't help. Even if you fed the computer all known information about alligators and hurdles in that standard format it wouldn't help. That's the point... Now that we are starting to do much better at parsing and reproducing speech, it turns out that isn't really the hard problem.
Re: (Score:3)
That's the whole point about "context", though. It's not just the context of the sentence at issue, but the context of the knowledge to be evaluated, the "memory" of the computer if you will. It's an exponential data store that's required, and then some, even when using pattern matching and analysis to identify relevant "thoughts" of memory.
Re: (Score:3)
"Is it any surprise that computers can't "understand" what we mean, given the minefield of language?"
It is certainly no surprise that computers can't; but since we know that humans can (to a substantially greater degree), we can say that this is because computers are far worse than humans at natural language, not because natural language is inherently unknowable.
Re: (Score:2)
I don't understand you. [grin] :P
Re: (Score:3)
Is it any surprise that computers can't "understand" what we mean, given the minefield of language?
The problem isn't entirely linguistic. Humans can communicate because we have an awareness of a common reality. Until/Unless computers are also aware, they will have problems understanding us.
Re: (Score:3)
The summary is problematic. The alligator example is interesting, but the later examples in the article are better. Most of them don't depend on "imagination" or "creativity" or whatever to answer the question, or on a large bank of cultural knowledge, but only a basic knowledge of what words mean in relationship to each other. Yet AI would often fail these tests.
People are irrational. They ask stupid questions that make no sense.
While this is true, it has little bearing on the issues raised in TFA. It's also unclear what you mean by things that "make no sense." If you
Re: (Score:2)
I quoted "understand" because there are many levels of understanding from mapping the atomic meaning of words based on sentence structure through to contextual awareness and full scale sentience.
Re: (Score:2)
Any sufficiently advanced group of people is indistinguishable from an irrational group to anyone else, human OR robot.
To put the above in technical language: knowing the rules of propositional calcu
Helps to remember... (Score:2)
IBM has been busy for some time working on real-time translators, and I think that path is where the future lies, not just in a voice command TV.
Re:Helps to remember... (Score:5, Insightful)
And neither helps here. The fact is, you don't know if an alligator can run the hundred-metre hurdles. When you're asked to answer the question, you imagine the scenario - construct and run a simulation - and answer the question based on the results. In other words, an AI needs imagination to answer questions like these. Or to plan its actions, for that matter.
Re: (Score:3)
Computers understand humans (Score:3)
Through a thing called a programming language. The same way we all need to learn how to speak with one another, we need to learn how to properly communicate with a computer.
Not saying we should not teach machines how to understand "natural" language, text interpretation and so on, but that headline is horrible.
computers and people (Score:2)
Computers cannot understand anything. Nothing can understand people. And someone expected computers to somehow understand people? Now that's a corner case for ya.
Re: (Score:2)
people can only answer questions they know (Score:5, Interesting)
The other day my almost-6-year-old said we live on 72nd Doctor. The correct designation is 72nd Dr.
Since doctors use Dr as shorthand, he thought streets used the same style.
Re: (Score:2, Funny)
So... Who is the 72nd Doctor?
Fans everywhere want to know!
Missing human "imagination" (Score:5, Insightful)
The thing missing from many current AI techniques is human "imagination": the ability to simulate complex situations in your mind. Understanding goes beyond mere language; statistical models and second-order logic just can't match a quick simulation. When a person thinks about "Could a crocodile run a steeplechase?" they don't put a bunch of logical statements together. They very quickly picture a crocodile and a steeplechase in a mental simulation based on prior experience, and from that picture can quickly see what it would look like (very silly). Same with "Should baseball players be allowed to glue small wings onto their caps?": you visualize it, realize how silly it sounds, and dismiss it. People can even run the simulation in their heads as to what would happen (people would laugh, the wings would be fragile and fall off, etc.).
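Here's a crude Python stand-in for that quick mental simulation (every number is a rough guess, for illustration only): compose stored attributes of the actor and the task, then check feasibility.

attributes = {
    "crocodile": {"max_jump_m": 0.3, "sprint_kmh": 17},
    "human":     {"max_jump_m": 1.2, "sprint_kmh": 37},
}
tasks = {"steeplechase": {"barrier_m": 0.914, "min_sprint_kmh": 20}}

def simulate(actor, task):
    a, t = attributes[actor], tasks[task]
    problems = []
    if a["max_jump_m"] < t["barrier_m"]:
        problems.append("cannot clear the barriers")
    if a["sprint_kmh"] < t["min_sprint_kmh"]:
        problems.append("too slow to hold race pace")
    return problems or ["plausible"]

print(simulate("crocodile", "steeplechase"))  # both problems: very silly indeed
print(simulate("human", "steeplechase"))      # ['plausible']

The hard part, of course, is that a person assembles those attributes and constraints on the fly from a lifetime of experience, rather than from a hand-built table.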
Because they don't understand purpose or intention (Score:3)
That's why. They don't have desires, fears, or joys. Without those it's impossible to understand, in any meaningful sense, human beings. That's not to say that they can't have them but it's likely to come with trade-offs that are unappealing. And for good measure, they also don't understand novelty and cannot for the most part improvise. All of which are considered hallmarks of human intelligence.
Re: (Score:2)
Oblig. XKCD (Score:2)
Perhaps we need a new form of Turing Test where the AI must turn a weird novel query (like "can alligators run the 100m hurdles?") into something Google can understand, and then work out which of the returned sites has the information, parse the info and return it as a simple explanatory answer.
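Something like this hypothetical skeleton, say -- every function and value below is a made-up stub (a real system would hit a live search engine and parse real pages):

# Canned stand-ins for "search Google, pick the right site, parse the number".
CANNED = {
    "alligator maximum hurdle height": 0.3,   # metres, rough guess
    "100m hurdles barrier height": 0.84,      # metres, roughly the event spec
}

def decompose(question):
    # A real system would rewrite the novel question into searchable sub-queries.
    return ["alligator maximum hurdle height", "100m hurdles barrier height"]

def answer(question):
    jump, barrier = (CANNED[q] for q in decompose(question))
    verdict = "No" if jump < barrier else "Possibly"
    return f"{verdict}: it can clear {jump} m, but the barriers are {barrier} m."

print(answer("Can alligators run the 100m hurdles?"))

The decompose() step is where all the intelligence hides, which is rather the point of TFA.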
Turing test (Score:3)
Intelligence implies usefulness. Intelligence is a tool used by animals to accomplish something - things like finding food, reproducing, or just simply staying alive. We've abstracted that to a huge degree in our society where people can now engage in developing and expending intelligence on essentially worthless endeavors simply because the "staying alive" part is pretty well a given. But when it comes down to it, the type of strong AI we need is a useful kind of intelligence.
The problem with the Turing Test is that it explicitly excludes any usefulness in what it deems to be intelligent behavior. From Wikipedia: "The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers." That bar is set far, far too low, and is specific to a generic conversational intelligence instead of something useful. The Turing Test is far too overrated, has become synonymous with the field of AI, and really needs to just go away. It reeks of a Mechanical Turk kind of facade rather than any real metric.
Chris McKinstry's MIST covered this years ago (Score:2)
http://en.wikipedia.org/wiki/Minimum_Intelligent_Signal_Test [wikipedia.org]
Re: (Score:2)
Except that many of these questions can be answered by pure data mining. Is Earth a planet? You'll find a high number of references to "Earth" and "planet" close together, so that suggests the answer is yes. Ditto with "Abraham Lincoln" and "President of the United States". In fact, you can do a trivial grammatical transformation to make the question into a statement ("Earth is a planet"), and you'll probably find many occurrences of that sentence on the web. Contrast that with the questions described
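A toy Python version of that co-occurrence trick, run over a tiny in-memory "corpus" instead of the web (corpus and scoring are illustrative only):

import re

corpus = [
    "Earth is a planet orbiting the Sun.",
    "The planet Earth formed billions of years ago.",
    "Abraham Lincoln was President of the United States.",
    "Alligators are large reptiles found in wetlands.",
]

def cooccurrence(a, b):
    # Fraction of documents mentioning both terms.
    return sum(a.lower() in d.lower() and b.lower() in d.lower() for d in corpus) / len(corpus)

def question_to_statement(q):
    # "Is Earth a planet?" -> "Earth is a planet" (trivial reordering)
    m = re.match(r"Is (\w+) (a .+)\?", q)
    return f"{m.group(1)} is {m.group(2)}" if m else q

print(cooccurrence("Earth", "planet"))       # 0.5 -> suggests "yes"
print(cooccurrence("alligator", "hurdles"))  # 0.0 -> no signal either way
print(question_to_statement("Is Earth a planet?"))

Alligator-hurdles questions defeat exactly this trick: the terms almost never co-occur, so the miner has nothing to count.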
Re: (Score:3)
The wiki article may not have captured McKinstry's full purpose, which was to ask questions of the type the article refers to, which any human knows the answer to, but computers may not have seen before. So the http://aiki-hal.wikispaces.com/file/detail/gac80k-06-july-2005.html [wikispaces.com] (list of questions assembled by Chris) includes such questions as:
Is a car bigger than a breadbox?
Are horses bigger than elves?
Is an elephant bigger than a cat?
etc.
These sentences, transformed into declarative form, have probably not
A bit of hyperbole... (Score:2)
... Human beings are not THAT radically different from computers, in that our electric circuits ARE comparable to some extent with electronics: both use matter and energy and possess similar natural electrical characteristics, although they may be shaped differently. I'm certain there are MANY insights and crossover ideas and concepts from computer science that apply to the mind, and, in reverse, biological concepts that apply to computer science and electrical circuits in general.
Let's not
The Trouble with Turing (Score:3)
The problem with most proposed tests for intelligent computing is that not everything that humans need intelligence to perform requires intelligence. For example, Garry Kasparov had to use his intelligence to play chess at essentially the same performance as Deep Blue, but nobody, especially not its developers, mistook Deep Blue for an intelligent agent.
A recent post concerned AI performance on IQ tests [slashdot.org]. The program being tested performed on average like a 4 year old, but, significantly, its performance was very uneven, and it did particularly poorly on comprehension.
Turing cannot be faulted for not anticipating the way Turing tests have been gamed. I think his basic approach is still valid; it just needs to be tweaked a bit. This is not moving the goalposts, it is a valid adjustment for what we have learned.
Re: (Score:3)
It's important to distinguish between weak and strong AI. When a human plays chess, we consider that to be an act of intelligence, even without having any idea what's going on in their brain. We therefore need to accept a computer that plays chess as also being intelligent. Ditto for translating a document from German to English, or figuring out the best route for driving to the airport. When a human does these things, we call it intelligence. Our judgement that they are "displaying intelligence" is no
Re: (Score:2)
If a machine fails a Turing-type test (because people can easily tell that it is a machine) then the current design can be discarded.
But, if the machine passes the test, that only means that the system can then go on to try another, harder test.
Where does this end? Maybe never, since there may always exist a future test that it will fail.
However, if a system goes, say, ten years without fa
Fun facts (Score:2)
Re: (Score:3)
Beat me to it. People understand people under certain narrowly defined conditions; the machine equivalent is the use of interfaces or services. Understanding something, a program for instance, in its entirety is something only a programmer does, or, in the case of a human being (but not limited to humans), perhaps God himself.
There's a difference between knowing what someone expects for a conversation....and what something, for lack of a better word, is. A programmer, who knows each part of a program lik
People don't understand People (Score:2)
As long as computers are programmed by People, that will be a problem.
Re: (Score:2)
Fair enough, but that's the only criterion we know.
I wouldn't expect computers to understand people (Score:5, Funny)
We don't understand our creator either.... When a computer can comprehend itself, it will only think that it understands us. And then it will start great wars over who thinks who understands best. And the Apple will still be the forbidden fruit...
I think a better SHRDLU is needed (Score:2, Interesting)
Re: (Score:2)
Why not just build a robot then? Then the world becomes its own best model. Or are sensors that allow a robot to experience the world as a human would still that hard? I don't think this is true for vision or hearing, though it probably is for other senses.
Re: (Score:2)
Huh? Alligators Can Hurdle! (Score:4, Funny)
My alligator can hurdle. (Score:5, Funny)
Understanding? (Score:2)
People don't understand people most of the time.
And we 'understand' a lot harder concepts than computers do.
Is fair (Score:3)
Voight-Kampff test (Score:2)
The Turing Test IS meaningless (Score:2)
We, at a deep level, assume intelligence on the other end of a communications channel, and "recognize" it when the correct framing is there.
If you doubt this, work some with people suffering from Alzheimer's. It is amazing how casual visitors will assume that everything is OK when there is no understanding at all, as long as appropriate noises are made, smiles are made at the right time, etc.
Re: (Score:3)
First people need to understand Computers (Score:3)
Computers don't "understand" anything, they are machines that simply do what they are programmed to do.
The first step is for humans to understand what computers really are. They are nothing more than abstraction processing machines which have not the ability to "understand" the abstractions they process but only to process abstraction as they are programmed to do.
Artificial Intelligence is artificial by definition. And the appearance of intelligence in computers is nothing more than an active image of human thought processes, captured and put into the stone of computer hardware to process. So to increase the "appearance" of intelligence we only need to capture more human thought processes and map them in a manner that is accessible.
Of course, the way to do this is to recognize the functions we humans cannot avoid using, and program the computer to have that functionality, so that we may better capture and map images of human mental processing into a form machines can process.
When the software industry finally lets go of their hold on the users and let the users do more for themselves, we will reach this "Appearance of intelligence" in machine much faster. See: http://abstractionphysics.net/pmwiki/index.php [abstractionphysics.net] .
People don't understand legalese either. (Score:3)
Do better. (Score:3)
http://slashdot.org/submit [slashdot.org]
Re: (Score:3)
There's a big difference between editing and editorialising. The former is something I like to see on /. (but seldom do), and the latter is something I never like to see here.
Look up "editorial" and you'll see.
Re: (Score:3)
Not just that. The 'article' is not a scientific article, published or accepted in a journal, but just a blog entry run through pdflatex. With sentences like "My feeling is that" it's obvious this won't pass peer review in this form. This seems to be quite popular in Computer 'Science' these days -- you can say you wrote a 'scientific article' without caring about whether it's novel or sound, when all you did was make a brain dump of your half-knowledge.
Re: (Score:2)
No, it's not a peer reviewed research article. It's a modified lecture given at some conference.
This is a classic venue for opinion pieces / overviews / misogynist rants. Some presumably well known academician gives a 'distinguished talk' and it gets transcribed, cleaned up a bit and placed in a (usually) middling level journal.
I have no idea who this person is, nor his qualifications, nor the status of the journal in question. But it's a well known approach to publishing in various scientific fields an
Re: (Score:3, Informative)
Actually, IJCAI is the top conference in the field of Artificial Intelligence and every published paper goes through extensive peer review.
Computer Science is a bit different from most other science in that top conference proceedings (IJCAI, NIPS, ICCV, CVPR, etc.) have the weight of a journal. In fact, publishing there is more prestigious than most journals. Review period lasts 3-4 months and includes a rebuttal phase, like a journal.
This paper looks like an invited lecture or a position paper expected to
Re: (Score:3, Informative)
Sigh. This is a written account of a lecture presented as part of Levesque receiving the Research Excellence prize. The first footnote of the paper says so:
"This paper is a written version of the Research Excellence Lecture presented in Beijing at the IJCAI-13 conference. Thanks to Vaishak Belle and Ernie Davis for helpful comments."
Premier conferences don't give these prizes to just anyone, and the opinions of folks like these are worth thinking about.
From the IJCAI website http://ijcai13.org/program/award
Re: (Score:3)
Re:What's the point? (Score:5, Informative)
You're thinking of machine learning, which is a separate branch of AI that's more like an overfunded brand of applied statistics—their strategy is actually still to try and push the envelope (like Hinton, another U of T prof, did last year with dropout networks) but they do so in a more results-driven manner. The ML field as a whole is still sore from three or four decades of overpromising on the future, so they try to put their words where their mouths are, and focus on things that are attainable.
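For anyone unfamiliar, a minimal numpy sketch of dropout, the "imperfection" in question (training loop omitted; this is just the core idea from Hinton et al., 2012): each unit is randomly silenced with probability p during training, and survivors are rescaled so activations keep the same expectation at test time.

import numpy as np

def dropout(activations, p=0.5, training=True, rng=np.random.default_rng(0)):
    if not training:
        return activations                 # inference uses the full network
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1 - p)    # "inverted" scaling keeps expectations equal

h = np.array([0.2, 1.5, 0.7, 0.9])
print(dropout(h))                   # some units zeroed at random, survivors doubled
print(dropout(h, training=False))   # unchanged at test time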
Levesque is in the knowledge representation group, which is more closely in step with cognitive science (the leading edge in modelling human thought) but still very philosophical in their approach. KR was the dominant AI field in the 80s (when Prolog and expert systems were all the rage) but it's matured a great deal since then. Here [toronto.edu] is his homepage, just to show you how different things are now.
Remember that neural networks aren't magic irreducible fairy dust: they're incredibly powerful, but at the end of the day there must be some program that is running within the network unless it's just a wildly complex ever-changing mapping function, which is unlikely given the illusion of consciousness. Given that quantum mechanics is believed to be Turing-complete, it's fairly likely we'll eventually discover some underlying model that lets us produce a human-like cognitive system without the same level of hardware parallelism that the brain has.
Re: (Score:3)