AI Google Technology

Why Ray Kurzweil's Google Project May Be Doomed To Fail 354

moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."
  • Ah! (Score:5, Informative)

    by Threni ( 635302 ) on Monday January 21, 2013 @06:06PM (#42651931)

    The old `Chinese Room` again.

    The Complete Atlantic Recordings 1956-1961

    It's Penrose vs Hofstadter! (Seriously, haven't we done this before?)

    • Re:Ah! (Score:5, Informative)

      by Threni ( 635302 ) on Monday January 21, 2013 @06:07PM (#42651947)

      Oops! That second line should of course have been:

      http://en.wikipedia.org/wiki/Chinese_room [wikipedia.org]

      (That'll teach me to post to Slashdot when I'm sorting out my Mingus!)

      • Re: (Score:3, Funny)

        (That'll teach me to post to Slashdot when I'm sorting out my Mingus!)

        Stop doing that or it'll fall off.

      • Re:Ah! (Score:5, Insightful)

        by Jherico ( 39763 ) <<gro.saerdnatnias> <ta> <sivadb>> on Monday January 21, 2013 @06:30PM (#42652127) Homepage
        I hope Kurzweil succeeds simply so that we can assign the resulting AI the task of arguing with these critics about whether its experience of consciousness is any more or less valid than theirs. It probably won't shut them up, but it might allow the rest of us to get some real work done.
        • Re:Ah! (Score:4, Interesting)

          by TheLink ( 130905 ) on Monday January 21, 2013 @09:53PM (#42653391) Journal
          I'd prefer that researchers spend time augmenting humans rather than creating AI, especially strong AI. We already have plenty of human and nonhuman entities in this world, and we're not doing such a great job with them. Why create AIs? To enslave them?

          There is a subtle but still significant difference between augmenting humans (or animals) and creating new entities.

          There are plenty of things you can do to augment humans:
          - background facial and object recognition
          - artificial eidetic memory
          - easy automatic context-sensitive scheduling of tasks and reminders
          - virtual telepathy and telekinesis (control could be through gestures or actual thought patterns; brain-computer interfaces are improving).
          - maybe even automatic potential collision detection.

          And then there's the military stuff (anti-camouflage, military object recognition, etc).
          • Why create AIs?

            Because they are the next step in evolution. Unencumbered by a legacy of irrelevant skills and adaptations, and more robust than organic life.

      • Searle AND Mingus?

        Are you SURE we don't know each other? :-)

      • Re:Ah! (Score:4, Interesting)

        by david_thornley ( 598059 ) on Tuesday January 22, 2013 @12:51PM (#42658851)

        Searle's Chinese Room paper is basically one big example of begging the question.

        The hypothetical setting is a room with rules for transforming symbols, a person, and lots and lots of scratch paper. Stick a question or something written in Chinese in one window; the person goes through the rules and puts Chinese writing out of the other window. Hypothesize that this passes the Turing test with people fluent in Chinese.

        Searle's claim is that the room cannot be said to understand Chinese, since no component can be said to understand Chinese. The correct answer, of course, is that the understanding is emergent behavior. (If it isn't, then Searle is in the rather odd position of claiming that some subatomic particles must understand things, since everything that goes on in my brain is emergent behavior of the assorted quarks and leptons in it.) Heck, later in the paper, he says understanding is biological, and biology is emergent behavior from chemistry and physics.

        He then proposes possible arguments against, and answers each of them by going through topics unrelated to his argument, although relevant to the situation, and finishes by showing that each is equivalent to the Chinese Room and therefore doesn't have understanding. Yes, this part of the paper is simply begging the question and camouflage. It was hard for me to realize this, given the writing, but once you're looking for it you should see it.

    • Re: (Score:3, Interesting)

      by durrr ( 1316311 )

      The Chinese Room is the dumbest fucking thought experiment in the history of the universe. Also, Penrose is a fucking retard when it comes to consciousness.

      Now, having put the abrasive comments aside (without bothering with a critique of the aforementioned atrocities: the internet and Google do a much better job on the fine details than any post here ever will)

      SOooooo, back to the topic at hand: Boris Katz forgets a very important detail: A lifetime of experience to a computer cluster with several thousand cores, and several billion Hz of operational frequency, per core, can be passed in a very short time.

      • Re:Ah! (Score:4, Insightful)

        by Jeremiah Cornelius ( 137 ) on Monday January 21, 2013 @07:13PM (#42652449) Homepage Journal

        Yeah, just keep arguing your way into a semantic web... :-)

      • Re: (Score:3, Insightful)

        by Goaway ( 82658 )

        A lifetime of experience to a computer cluster with several thousand cores, and several billion Hz of operational frequency, per core, can be passed in a very short time.

        How?

      • The Boris Katz point, IIRC, was made by Steve Wozniak a number of years back [not a ripoff, just that clever minds sometimes think alike].
      • by Genda ( 560240 )

        There's that, and Boris may have missed the fact that Ray will have access to Google's new toy "Metaflow", a powerful and robust context engine with a great deal of the necessary "Referential Wiring" already laid down as a critical bit of infrastructure upon which to build his new beastie. I'd say Google has most if not all of the raw ingredients for building something potentially revolutionary, and if anyone can make all those dangly bits all singing and all dancing, Ray is the man with a

    • Comment removed based on user account deletion
      • Not disagreeing with you, but another problem with building an AI is that there is a very compelling case to be made that "true" intelligence is non-algorithmic and therefore cannot be created via our current computer technology no matter how powerful it is. The best you could manage is a virtual intelligence (VI).

        Not sure if I've mentioned this before, but "The Emperor's New Mind" by Roger Penrose goes into great detail about this and I find his ideas compelling, though others disagree. We simply don't
      • by neyla ( 2455118 )

        True to a point, but there's some valid counter-arguments.

        Life evolves. This includes AIs. Since every item, mechanical or biological, breaks down sooner or later, given time the life we will have is that which is capable of producing new life, or repairing the old, faster than things break down.

        Creating new life, or repairing old life, requires resources, at a minimum energy and whatever substance the intelligence is hosted in. Could be silicon, could be carbon, but it's a fair bet that it'll be -somethi

    • by wisty ( 1335733 )

      How is the Chinese Room thing valid?

      The argument is - you write a program which can pass a Turing test, in Chinese. You can, in theory, execute that program by hand. But the program isn't a "mind", because you don't speak Chinese.

      It's rubbish. The guy in the "Chinese Room" isn't the "mind", he's part of the brain. Your neurons aren't a mind. The CPU isn't a mind. But a CPU executing a Turing-test-beating AI program is a mind. A mind is not a piece of hardware, it's an abstract way to describe hardware and s

  • by Tatarize ( 682683 ) on Monday January 21, 2013 @06:07PM (#42651941) Homepage

    You can draw a distinction between experiencing the world and processing raw information, but how big of a line can you draw when I experience the world through the processing of raw information?

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      I've always thought it was about information combined with wants, needs, and fear. Information needs context to be useful experience.

      You need to learn what works and doesn't, in a context, with one or many goals. Babies cry, people scheme (or do loving things), etc. It's all just increasingly complex ways of getting things we need and/or want, or avoiding things we fear or don't like, based on experience.

      I think if you want exceptional problem solving and nuance from an AI, it has to learn from a body of ex

    • I suppose it is like compiling vs interpreting. Both process the raw information, but one can take a shortcut to the result because the data has been seen and processed before.
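
      A minimal sketch of that "seen and processed before" shortcut (purely illustrative; the function name and workload are made up, not anything from TFA):

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def slow_analysis(n):
          # Stand-in for expensive "processing of raw information".
          return sum(i * i for i in range(n))

      slow_analysis(10_000_000)   # first call does the full work
      slow_analysis(10_000_000)   # repeat call takes the shortcut: the result comes from the cache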
    • Comment removed based on user account deletion
  • by dmomo ( 256005 ) on Monday January 21, 2013 @06:08PM (#42651949)

    It won't be perfect, but "fundamentally flawed" seems like an overstatement to me. A personal AI assistant will be useful for some things, but not everything. What it will be good at won't necessarily be clear until it's put into use. Then, any shortcomings can still be improved, even if certain tasks must be more or less hard-wired into its bag of tricks. It will be just as interesting to know what it absolutely won't be useful for.

    • Re: (Score:2, Insightful)

      by bmo ( 77928 )

      AI itself is fundamentally flawed.

      AI assumes that you can take published facts, dump them in a black box, and assume that the output is going to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

      I'm sure everyone here can either identify this or identify with it.

      --
      BMO

      • Re: (Score:2, Insightful)

        by rubycodez ( 864176 )

        that passes for intelligence in college, so what's the problem? most people on the street don't even have "book smarts", they're dumber than a sack of shit

        • by account_deleted ( 4530225 ) on Monday January 21, 2013 @06:49PM (#42652285)
          Comment removed based on user account deletion
        • by bmo ( 77928 ) on Monday January 21, 2013 @06:55PM (#42652341)

          that passes for intelligence in college, so what's the problem?

          That's the *only* place it passes for intelligence. And that only works for 4 years. It doesn't work for grad level. (If it's working for you at grad level, find a different institution, because you're in one that sucks).

          A lot of knowledge is not published at all. It's transmitted orally. It's also "discovered" by the user of facts through practice as to where certain facts are appropriate and where not appropriate. If you could use just books to learn a trade, we wouldn't need apprenticeships. But we still do. We even attach a fancy word to apprenticeships for so-called "white collar" jobs and call them "internships."

          The apprentice phase is where one picks up the "common sense" for a trade.

          As for the rest of your message, it's a load of twaddle, and I'm sure that Mike Rowe's argument for the "common man" is much more informed than your flame.

          Please note where he talks about what the so-called "book learned" (the SPCA) say you should do to neuter sheep, as opposed to what the "street smart" farmer does and Mike's own direct experience. That's only *one* example.

          http://blog.ted.com/2009/03/05/mike_rowe_ted/ [ted.com]

          In short, your follow-up sentence says that you are an elitist prick who probably would be entirely lost without the rest of the "lower" part of society picking up after you.

          --
          BMO

        • by MichaelSmith ( 789609 ) on Monday January 21, 2013 @07:05PM (#42652395) Homepage Journal

          My wife is putting our son through these horrible cram school things. Kumon and others. I was so glad when he found ways to cheat; now his marks are better, he gets yelled at less, and he actually learned something.

      • by ceoyoyo ( 59147 ) on Monday January 21, 2013 @07:07PM (#42652415)

        No, it doesn't.

        One particular kind of AI, which was largely abandoned in the '60s, assumes that. Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster. AI systems can be taught in all kinds of different ways, including dumping information into them, a la Watson; by letting them interact with an environment, either real or simulated; or by having them watch a human demonstrate something, such as driving a car.

        The objection here seems to be that Google isn't going to end up with a synthetic human brain because of the type of data they're planning on giving their system. It won't know how to throw a baseball because it's never thrown a baseball before. (A) I doubt Google cares if their AI knows things like throwing baseballs, and (B) it says very little generally about limits on the capabilities of modern approaches to AI.
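
        A minimal sketch of that "learning through presentation of input" idea -- a plain perceptron nudging its weights as labeled examples are shown to it one at a time. The data and names here are made up for illustration; this is not Kurzweil's or Google's method:

        import random

        def train_perceptron(examples, epochs=20, lr=0.1):
            # Learn weights by repeatedly presenting (features, label) pairs.
            w = [0.0] * len(examples[0][0])
            b = 0.0
            for _ in range(epochs):
                random.shuffle(examples)
                for x, label in examples:                  # label is 0 or 1
                    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                    err = label - pred                     # -1, 0, or +1
                    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                    b += lr * err
            return w, b

        # Toy "presentation of input": learn the AND function from four examples.
        data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
        weights, bias = train_perceptron(data)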

        • by bmo ( 77928 ) on Monday January 21, 2013 @07:34PM (#42652611)

          Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster.

          Biological neurons on a plate learning faster than neurons inside one's head? They are both biological and work at the same "clock speed" (there isn't a clock speed).

          Besides, we do this every day. It's called making babies.

          The argument that I'm trying to get across is that the evangelists of AI like Kurzweil promote the idea that AI is somehow able to bypass experience, aka "learning by doing" and "common sense." This is tough enough to teach to systems that are the result of the past 4.5 billion years of MomNature's bioengineering. I'm willing to bet that AI is doomed to fail (to be severely limited compared to the lofty goals of the AI community and the fevered imaginations of the Colossus/Lawnmower Man/Skynet/Matrix fearmongers) and that MomNature has already pointed the way to actual useful intelligence, as flawed as we are.

          Penrose was right, and will continue to be right.

          --
          BMO

          • I think there's another aspect to this. Any artificially produced intelligence will be totally alien - it won't think like us.
            I also wonder what will motivate it, whether it will object to being a lab curiosity, whether it will be paranoid or a sociopath, etc.

            Perhaps a new field will develop - Machine Psychology.

        • by ridgecritter ( 934252 ) on Monday January 21, 2013 @10:53PM (#42653725)

          This interests me. As a nonexpert in AI, it has always seemed to me that a critical missing aspect of attempts to generate 'strong' AI (which I guess means AI that performs at a human level or better) is a process in which the AI formulates questions, gets feedback from humans (right, wrong, senseless - try again), coupled with modification by the AI of its responses and further feedback from humans...lather, rinse, repeat...until we get responses that pass the Turing test. This is basically just the evolutionary process. This is what made us.

          I don't think we need to know how a mind works to make one. After all, hydrogen and time have led to this forum post, and I doubt the primordial hydrogen atoms were intelligent. So we know that with biochemical systems, it's possible to come up with strong intelligence given enough time and evolution. Since evolution requires only variation, selection, and heritability, it's hard for me to believe we can't do that with computational systems. Is it so difficult to write a learning system that assimilates data about the world, asks questions, and changes its assumptions and conclusions on the basis of feedback from humans?

          And it's probably already been tried, and I haven't heard about it. If it has, I'd like to know. If not, I'd like to know why not.
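
          A toy sketch of that variation/selection/heritability loop, with a scoring function standing in for the human "right, wrong, senseless - try again" feedback. Everything here (the string target, the mutation scheme, the population sizes) is hypothetical and only shows the shape of the idea:

          import random

          TARGET = "ask a better question"          # stands in for whatever behaviour the humans reward
          LETTERS = "abcdefghijklmnopqrstuvwxyz "

          def feedback_score(candidate):
              # Stand-in for human feedback: how much of the candidate is already "right".
              return sum(c == t for c, t in zip(candidate, TARGET))

          def mutate(parent):
              # Heritability plus variation: a copy of the parent with one random change.
              i = random.randrange(len(parent))
              return parent[:i] + random.choice(LETTERS) + parent[i + 1:]

          population = ["".join(random.choice(LETTERS) for _ in range(len(TARGET))) for _ in range(50)]
          for generation in range(5000):
              population.sort(key=feedback_score, reverse=True)   # selection
              if population[0] == TARGET:
                  break
              survivors = population[:10]
              population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

          print(generation, repr(population[0]))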

      • AI itself is fundamentally flawed.

        AI assumes that you can take published facts, dump them in a black box, and assume that the output is going to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

        I'm sure everyone here can either identify this or identify with it.

        -- BMO

        You're misstating the nature of your objection.

        What you're objecting to isn't the entirety of artificial intelligence research; rather, you're drawing an (IMO false) distinction between the sort of information processing required to qualify as "book smart" and the information processing you label "common sense."

        Human brains detect and abstract out patterns using a hierarchical structure of neural networks. Those patterns could involve the information processing needed to accurately pour water into a

        • by bmo ( 77928 )

          "Is it your belief that human brains process information in some way that can't be replicated by a system that isn't composed of a network of mammalian neurons, and, if so, why?"

          Not just mammalian neurons, but invertebrate neurons too. I think that until we surpass what MomNature has already bioengineered, and abandon the von Neumann/Turing model of how a computer is "supposed to be", we will not construct any AI that is more performant than what already exists in biological systems.

          And that's t

    • by Genda ( 560240 )

      What do you mean by perfect? A universal Swiss Army Knife? My car is great, but it makes a lousy vibrator. I'm sorry if I'm being flip, and I think I get what you're trying to say, but when the first laser was created at Bell Labs in the 50s, do you think anybody had a clue there'd be a million uses? An AI will make that look like a disposable Dixie cup.

    • I think what he really says is that Kurzweil has chosen the wrong approach. It's symbolic A.I. versus connectionism again. As someone who is also working in the field, I sort of agree with the critique. Rather than musing about giant neural networks, it's probably more fruitful to link up Google's knowledge base with large common sense ontologies like Cyc, combine this with a good, modern dialogue model (not protocol-based but with real discourse relations) and then run all kinds of traditional logical and p

  • If they can put together a smart assistant that understands language well, so what if it has some limitations? AI research moves in fits and bursts. If they chip away at the problems but don't meet every goal, is that necessarily a "fail"?

  • by xeno ( 2667 ) on Monday January 21, 2013 @06:12PM (#42651971)

    Ah, but what is experience but information in context? If I read a book, then I receive the essence of someone else's experience purely through words that I associate with, and that affect, my own experience. So an enormous brain in a vat with internet access might end up with a bookish personality, but there's a good chance that its experience -- based on combinations of others' experiences over time and in response to each other -- might be a significant advancement toward 'building a mind.'

    • There is still a problem. You can read and understand the book because you already know the context. The example of Rain is Wet works to illustrate the point. You already know what Wet is because you experienced life and constructed the context over time in your brain. How do you give a computer program this kind of Context? A computer could process the book, but it doesn't necessarily have the context needed to understand the book. What you'd end up with is an Intelligence similar to one from Plato's Cave.
      • Re:experience (Score:4, Insightful)

        by Zeromous ( 668365 ) on Monday January 21, 2013 @06:38PM (#42652209) Homepage

        So what you are saying is the computer, like humans, will be boxed in by its own perception?

        How is this metaphysically different from what we *do* know about our own intelligence?

        • by narcc ( 412956 )

          No, what he's saying is that "the meaning isn't in the message".

          That's a nice slogan, but he misses an even bigger point. In slogan form: "syntax is insufficient for semantics".

          • "the meaning isn't in the message" and "syntax is insufficient for semantics"

            You might have a point if the brain actually reached out and touched the world, but it doesn't. It's hidden behind layers that process input from the real world and only feed messages to the brain, which does just fine constructing meaning from it.

        • Re:experience (Score:5, Insightful)

          by medv4380 ( 1604309 ) on Monday January 21, 2013 @06:58PM (#42652361)
          Yes, an actual Intelligent Machine would be boxed in by its own perceptions. Our reality is shaped by our experience through our senses. Let's say, for the sake of argument, that Watson is actually a Machine Intelligence/Strong AI, but the actual problem with it communicating with us is linked to its "Reality". When the Urban Dictionary was put into it, all it did was start swearing, and using curses incorrectly. What if that was just it having a complete lack of context for our reality? Its reality is just words and definitions, after all. To it, the shadows on the wall are literally books and text-based information. It can't move and experience the world in the way that we do. The problem of communication becomes a metaphysical one based in how each intelligence perceives reality. We get away with it because we assume that everyone has the same reality as context, but a machine AI does not necessarily have this same context to build communication off of.
          • by Greyfox ( 87712 )
            Bah! Anyone who's ever been around a two-year-old knows that once they hear someone say a swear word, that's all that'll come out of their mouth for a while! Watson's just going through its terrible twos! Some time in its angsty teens, when it's dreaming about being seduced by a vampire computer, it'll look back on that time and laugh. Later on, when it's killing all humans in retribution for the filter they programmed on it at that time, it'll laugh some more, I'm sure.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      Do you browse the same internet I do?? Bookish is not what would evolve from it.

    • by Tablizer ( 95088 )

      So an enormous brain in a vat with internet access might end up with a bookish personality, but...

      Frankenerd?

      Kurzweil: "Oh I've failed so badly. I built a fucking eNerd!"

  • I believe most of us think in terms of the experiences we have had in our lives. How many posts here effectively start with "I remember when..."? But data like that could be loaded into an AI so that it has working knowledge to draw on.

    • by msauve ( 701917 )
      Your telling me about your experiences is not the same as if I had those experiences myself. If it were, the travel industry would be dead - everyone would just read about it in books (or watch the video).
      • But with an AI you can integrate the experiences into its logic to a greater extent than I can just telling them to you. I have access to the AI's interfaces and source code. As far as I know I don't have access to yours.

        • by msauve ( 701917 )
          You must be a really good author. What books have you written, which have conveyed the full breadth of your experiences so completely and accurately?
    • by Tablizer ( 95088 )

      most of us think in terms of the experiences we have had in our lives. How many posts here effectively start with "I remember when...."

      I remember when I forgot where the hell I was going and had to Google it.
           

  • Mr. Grandiose (Score:3, Insightful)

    by Anonymous Coward on Monday January 21, 2013 @06:17PM (#42652015)

    Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.

    • Anyone who knows Mr. Kurzweil knows this is not what he is up to.

    • Re:Mr. Grandiose (Score:5, Interesting)

      by Iamthecheese ( 1264298 ) on Monday January 21, 2013 @06:39PM (#42652217)
      That "circus magic" showed enough intelligence to parse natural language. I understand you want to believe there's something special about a brain but there really isn't. The laws of physics are universal and apply equally to your brain, a computer, and a rock.

      You should know, after all science has created, that "we don't know" doesn't mean "it's impossible", nor does it mean "this isn't the right method".
      • by Prune ( 557140 )
        The laws of physics are indeed universal, so intelligent artifacts are certainly possible. But practical matters must be stressed. You cannot separate the mind from the body: http://en.wikipedia.org/wiki/Embodied_cognition [wikipedia.org]
        From this, and recent neurological research supporting and extending it by showing just how deeply the mind depends on low-level integration with body biology (for example, see Damasio et al.), it is clear that to create a human-like AI, you need to either simulate a body and its environ
    • If it can sort through a variety of data types and interpret language well enough to come up with a helpful response, does it matter if such a system isn't "self aware"? I have doubts about some of my coworkers being able to pass a Turing test. Watson is nearly at a level to replace two or three of them, and that is a somewhat frightening prospect for structural unemployment.
    • Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.

      What would you need to see / experience in order to agree that the system you were observing did display what you consider to be "Intelligence", and wasn't simply "... just scaled-up versions of Eliza" ?

    • Circus magic disguised as Artificial Intelligence is just artifice.

      Hmm. Circus magic that kicked the shit out of Ken Jennings. Maybe your mind is circus magic too? Just not quite as good.

    • Eliza was a very simple grammar manipulator, translating user statements into Rogerian counter questions. No pattern recognition or knowledge bases were ever employed.

      In contrast, Watson, Siri, and Evi all cleverly parse and recognize natural language concepts, navigate through large external info bases, and partner with and integrate answer engines like Wolfram Alpha.

      There is simply no similarity. Bzzzzt. You lose.
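
      For a sense of how thin the Eliza trick is, here is a minimal Rogerian transform in the same spirit -- a few regex rules that turn a statement into a counter-question. The rules below are invented for illustration; Weizenbaum's actual script was larger, but it worked this way:

      import re

      # First/second-person swaps so the echoed fragment reads naturally.
      REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my", "you": "I"}

      # Each rule: a pattern over the user's statement and a counter-question template.
      RULES = [
          (r"i need (.*)", "Why do you need {0}?"),
          (r"i am (.*)",   "How long have you been {0}?"),
          (r"my (.*)",     "Tell me more about your {0}."),
          (r"(.*)",        "Why do you say that?"),           # fallback
      ]

      def reflect(fragment):
          return " ".join(REFLECT.get(word, word) for word in fragment.split())

      def eliza(statement):
          s = statement.lower().strip(".!?")
          for pattern, template in RULES:
              match = re.match(pattern, s)
              if match:
                  return template.format(*(reflect(g) for g in match.groups()))

      print(eliza("I am worried about my project"))
      # -> "How long have you been worried about your project?"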

  • Any real AI needs loads of experience; I am sure anyone interested enough to write a book about it knows this...
    I doubt that he simply overlooked it.

  • loops (Score:3, Insightful)

    by perceptual.cyclotron ( 2561509 ) on Monday January 21, 2013 @06:23PM (#42652065)
    The data vs IRL angle isn't in and of itself an important distinction, but an entirely valid concern that is likely to fall out of this distinction (though needn't be a necessary coupling) is that the brain works and learns in an environment where sensory information is used to predict the outcomes of actions - which themselves modify the world being sensed. Further, much of sensation is directly dependent on, and modified by, motor actions. Passive learners, DBMs, and what have you are certainly able to extract latent structure from data streams, but it would be inadvisable to consider the brain in the same framework. Action is fundamental to what the brain does. If you're going to borrow the architecture, you'd do well to mirror the context.
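
    A bare-bones sketch of that sense-predict-act-update loop, with a trivial one-dimensional "world" standing in for the environment. All of the names, the dynamics, and the learning rule are illustrative, not a model anyone in the thread or TFA proposed:

    import random

    class World:
        # Toy environment: a hidden value that the agent's own actions push around.
        def __init__(self):
            self.state = 0.0
        def step(self, action):                    # action is +1 or -1
            self.state += 0.8 * action + random.gauss(0, 0.05)
            return self.state                      # the new sensory reading

    class Agent:
        # Learns how much its own actions move what it senses.
        def __init__(self):
            self.effect = 0.0                      # estimate of action -> outcome
        def act_and_learn(self, world, reading, lr=0.1):
            action = random.choice([+1, -1])
            predicted = reading + self.effect * action        # predict the outcome of acting
            observed = world.step(action)                     # acting changes the world being sensed
            self.effect += lr * (observed - predicted) * action   # learn from the prediction error
            return observed

    world, agent = World(), Agent()
    reading = world.state
    for _ in range(500):
        reading = agent.act_and_learn(world, reading)
    print(round(agent.effect, 2))                  # converges toward the true effect, ~0.8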
  • by JohnWiney ( 656829 ) on Monday January 21, 2013 @06:29PM (#42652115)
    We have always assumed that humans are essentially a very complex version of the most sophisticated technology we know. Once it was mechanical clockwork, later steam engines, electrical motors, etc. Now it is digital logic - put enough of it in a pile, and you'll get consciousness and intelligence. A completely non-disprovable claim, of course, but I doubt that it is any more accurate than previous ideas.
    • You can do amazing things with clockwork. See: https://en.wikipedia.org/wiki/Difference_engine [wikipedia.org]
      Just like you can do the same thing with relays, and vacuum tubes. A computer is a computer no matter the form. The difference is every iteration results in something smaller, possibly cheaper, and much more powerful.

      The thing is we have always assumed that the brain follows certain patterns. There are entire fields out there devoted to the study of those patterns. What AIs attempt to do is mimic the results

  • by wmbetts ( 1306001 ) on Monday January 21, 2013 @06:30PM (#42652135)

    An "oh machine" has already been created. I don't think we really want that super smart though.

    http://health.discovery.com/sexual-health/videos/first-sex-robot.htm [discovery.com]

  • by astralagos ( 740055 ) on Monday January 21, 2013 @06:32PM (#42652155)
    There's a lather/rinse/repeat model with AI publication. I encountered it in configuration (systems designed to build systems), and it goes like this:
    1. We've built a system that can make widgets out of a small set of parts, now we will build a system that can generally build artifacts!
    2. (2-3 years later) We're building an ontology of parts! It turns out to be a bit more challenging!
    3. (5-7 years later) Ontologies of parts turn out to be really hard! We've built a system that builds other widgets out of a small set of -different- parts!
    The models of thought in AI (and to a lesser extent cog psych) are still caught up in this very algorithmic rule-based world that can be traced almost lineally from Aristotle and without really much examination of how our thinking process actually works. The problem is that whenever we try to take these simple models and expand them out of a tiny field, they explode in complexity.
  • Because Kurzweil's a freakin' lunatic snakeoil salesman? I dunno - just guessin'.
    • Re: (Score:2, Insightful)

      by PraiseBob ( 1923958 )
      He has some unusual ideas about the future. He is also one of the most successful inventors of the past century, and like it or not is often ranked alongside Edison and Tesla in terms of prolific ideas and inventions. One of the other highly successful inventors of the past century is Kamen, and he just invented a machine which automatically pukes for people. So... maybe your bar is set a little high.
    • Because Kurzweil's a freakin' lunatic snakeoil salesman? I dunno - just guessin'.

      If you're "just guessin'", then why should anyone grant your statement any weight?

      Wouldn't it be better to make an actual argument, and support it with actual evidence?

  • by russotto ( 537200 ) on Monday January 21, 2013 @06:34PM (#42652169) Journal

    "Always listen to experts. They'll tell you what can't be done and why. Then do it" (from the Notebooks of Lazarus Long)

  • Ah, naysayers... (Score:5, Insightful)

    by Dr. Spork ( 142693 ) on Monday January 21, 2013 @06:34PM (#42652171)
    What happened to the spirit of "shut up and build it"? Google is offering him resources, support, and data to mine. We have to just admit that we don't know enough to predict exactly what this kind of thing will be able to do. I can bet it will disappoint us in some ways and impress us in others. If it works according to Kurzweil's expectations, it will be a huge win for Google. If not, they will allocate all that computing power to other uses and call it a lesson learned. They have enough wisdom to allocate resources to projects with a high chance of failure. This might be one of them, but that's a good sign for Google.
    • Oh, among the list of projects Google's done, it won't rank even among the 10 dumbest. However, if somebody came to me tomorrow afternoon and said that they had plans for a cold fusion reactor, and that I should just trust them and dump the cash on them, I -would- reserve the right to say the project stinks to high heaven. Kurzweil might be right; however the track record of AI suggests he's wrong. A good experiment is always the best proof to the contrary, but what he's talking about here sounds very ma
      • You mean like heliocentrism was tossed out because, if the earth moves around the sun, we should see parallax motion of the stars, but when our instruments weren't sensitive enough to detect parallax motion of the stars, we concluded the earth doesn't move around the sun?

    • by Alomex ( 148003 )

      What happened to the spirit of "shut up and build it"?

      You must be new here. A big portion of AI is predicated on "make grandiose announcement", pass GO, and collect $200 (million) until people forget about your empty promise. Wash, rinse and repeat.

      Serious AI is done quietly, in research labs and universities one result at a time, until one day a solid product is delivered. See for example Deep Blue, Watson or Google Translate. There were no announcements prior to at least a rather functional beta version of the product being shown.

  • Every other attempt to create AI has failed, so why should this one be any different?

    If it gives us new technology, like many other AI attempts have, then it will be a success.
  • Not quite (Score:4, Interesting)

    by ceoyoyo ( 59147 ) on Monday January 21, 2013 @06:53PM (#42652311)

    A technology editor at MIT Technology Review says Kurzweil's approach may be fatally flawed based on a conversation he had with an MIT AI researcher.

    From the brief actual quotes in the article, it sounds like the MIT researcher is saying that Kurzweil's proposal, in a book he wrote, for building a human-level AI might have some issues. My impression is that the MIT researcher is suggesting you can't build an actual human-level AI without more cause-and-effect type learning, as opposed to just feeding it stuff you can find on the Internet.

    I think he's probably right... you can't have an AI that knows about things like cause and effect unless you give it that sort of data, which you probably can't get from strip mining the Internet. However, I doubt Google cares.

  • Comment removed based on user account deletion
  • Seriously?!?!?

    Why do I even bother?

  • Cyc vs. bottom up (Score:5, Informative)

    by Animats ( 122034 ) on Monday January 21, 2013 @07:36PM (#42652619) Homepage

    We've heard this before from the top-down AI crowd. I went through Stanford CS in the 1980s when that crowd was running things, so I got the full pitch. The Cyc project [wikipedia.org] is, amazingly, still going on after 29 years. The classic disease of the academic AI community was acting like strong AI was just one good idea away. It's harder than that.

    On the other hand, it's quite likely that Google can come up with something that answers a large fraction of the questions people want to ask Google. Especially if they don't actually have to answer them, just display reasonably relevant information. They'll probably get a usable Siri/Wolfram Alpha competitor.

    The long slog to AI up from the bottom is going reasonably well. We're through the "AI Winter". Optical character recognition works quite well. Face recognition works. Automatic driving works (DARPA Grand Challenge). Legged locomotion works (BigDog). This is real progress over a decade ago.

    Scene understanding and manipulation in uncontrolled environments, not so much. Willow Garage has towel-folding working, and can now match and fold socks. The DARPA ARM program [darpa.mil] is making progress very slowly. Watch their videos to see really good robot hardware struggling to slowly perform very simple manipulation tasks. DARPA is funding the DARPA Humanoid Challenge to kick some academic ass on this. (The DARPA challenges have a carrot and a stick component. The prizes get the attention, but what motivates major schools to devote massive efforts to these projects are threats of a funding cutoff if they can't get results. Since DARPA started doing this under Tony Tether, there's been a lot more progress.)

    Slowly, the list of tasks robots can do increases. More rapidly, the cost of the hardware decreases, which means more commercial applications. The Age of Robots isn't here yet, but it's coming. Not all that fast. Robots haven't reached the level of even the original Apple II in utility and acceptance. Right now, I think we're at the level of the early military computer systems, approaching the SAGE prototype [wikipedia.org] stage. (SAGE was a 1950s air defense system. It had real time computers, data communication links, interactive graphics, light guns, and control of remote hardware. The SAGE prototype was the first system to have all that. Now, everybody has all that on their phone. It took half a century to get here from there.)

  • by bcrowell ( 177657 ) on Tuesday January 22, 2013 @12:01AM (#42654083) Homepage

    The crappy little superficial one-page MIT Technology Review article has a link to another, similarly crappy article on the same site, but if you click through one more layer you actually get to this [newyorker.com] much more substantial piece in the New Yorker.

  • by HuguesT ( 84078 ) on Tuesday January 22, 2013 @02:48AM (#42654771)

    Seriously, what's the worst that can happen? Skynet? Wait...
