AI Google Technology

Ray Kurzweil Joins Google As Director of Engineering

dgharmon points out news at CNET and on Ray Kurzweil's own site that Kurzweil will join Google as Director of Engineering. Specifically, "he will be joining Google to work on new projects involving machine learning and language processing," which sounds to me like another way to say "quickening the singularity."
  • SkyNet (Score:5, Funny)

    by Eddi3 ( 1046882 ) on Saturday December 15, 2012 @10:30AM (#42301031) Homepage Journal
    SkyNet will come to dominate all first posts soon.
    • Re:SkyNet (Score:5, Insightful)

      by Jeremiah Cornelius ( 137 ) on Saturday December 15, 2012 @10:52AM (#42301141) Homepage Journal

      I had a better summary in my submission. ;-)

      Kurzweil is famous for his breakthroughs in OCR, computer speech synthesis and digital music creation, as well as his theory of "The Singularity," that point when technology is sufficiently advanced that it contests and surpasses human intelligence.

      "I'm thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade's 'unrealistic' visions into reality." said Kurzweil.

      Peter Norvig, Google's director of research, said "We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we're working on at Google."

      Hal 9000 was unavailable for comment, as were Colossus, Guardian and Dr. Charles A. Forbin.

      • by Anonymous Coward

        Lt Cmdr Data said he was happy to have RK aboard. (He had his emotion chip in).

        • Robby the Robot briefly interrupted his oil job, clicked for a while, and finally said: "Mr. Ray, while you pursue your vision, beware of the monsters from the Id."

          The Singularity is a great idea, but you know, the Internet was a great idea too, and look what it's turning into. The problem is not how cool the tech is; it is who controls it de facto.

          • by tnk1 ( 899206 )

            Considering that we have no idea what a Singularity would be like (by definition), it could be awesome or it could be the reason we never see other intelligent species out in space.

            I think someone once said that at around the time of the Singularity occurring, each individual could easily become possessed of power equivalent to a nuclear bomb. That's not to say we'd all have nukes, but we might have the ability to each release something like a homemade plague or grey goo nanobot cloud, or even figure out

            • Actually, this is what advanced, strong A.I. is for: research into matters we aren't keen enough to look into ourselves. As for energy, there's still a long way to go before we hit that plateau (and that independently of running out of fossil fuel); just think about solar satellites and nuclear fusion.
              • by tnk1 ( 899206 )

                You're right that it may be possible to get more energy, but when we talk about exponential rates, we are really talking about a lot of energy, and the requirements will increase by an increasing margin every time.

                Fusion is still 20 years away, just like it has been for the last 50 or so, and while a network of solar power satellites is almost the only realistic way to get that much power at our current technological level, we're still dependent on chemical rockets to lift that stuff off from Earth. Unl

                • by Anonymous Coward

                  Fusion is still 20 years away, just like it has been for the last 50 or so

                  Just ten years ago Fusion was still 50 years away, just like it had been for the previous 50 years. So that's progress!

              • Who says advanced strong A.I. is going to have any interest in research we already don't have interest in? We will recognize this AI because it will pass a Turing test. That means it will be able to pass as human. So... we're trying to build a human intelligence. When we succeed, who says it's going to have any interest at all in chip design or software engineering (the Singularity) or obscure physics? It might just sit all day and watch football. Or soap operas. Or Jerry Springer.

                • My guess is that strong A.I. will be smarter than humans long before it passes a Turing test, since that requires the computer to accurately pretend to be a human. Humans get lots of practice interacting with other humans, and so we are fairly good at noticing when something is not quite right. Now, maybe if the person was told that there was a computer, a human, a space alien, or a dolphin on the other end (CHAD test), and as long as the computer convinced the person it wasn't a computer it wins, the Tur

            • by Genda ( 560240 )

              Strangely enough, we can at least hypothesize approaching a singularity (as we've already begun to do using complex mathematical models on computers simulating flight into a super-massive black hole... remember the difference between passing the event horizon and falling into the singularity). Instead of gravity, what we face here is information, or more precisely knowledge. As the accelerating technology surrounding processing power concentrates the ability to observe, appraise and analyze information new

      • Re: (Score:2, Funny)

        by Anonymous Coward

        "We appreciate his ambitious, long-term thinking, and we think his approach to problem-solving will be incredibly valuable to projects we're working on at Google: serving ads."

        • in the future, ads will become sentient beings. we, humans, will become their slaves.

          and they will read from their holy book; loosely translated as how to serve man.

          • "in the future, ads will become sentient beings. we, humans, will become their slaves."
            So you're saying not much will have changed?

      • Kurzweil is famous for his breakthroughs in OCR, computer speech synthesis and digital music creation, as well as his theory of The Singularity, that point when technology is sufficiently advanced that it contests and surpasses human intelligence.

        He founded some companies and made a name for himself. But what breakthroughs did he actually make? What are his technical contributions?

        • Re:SkyNet (Score:5, Funny)

          by ShakaUVM ( 157947 ) on Saturday December 15, 2012 @11:40AM (#42301355) Homepage Journal

          >He founded some companies and made a name for himself. But what breakthroughs did he actually make? What are his technical contributions?

          Funny.

          But yeah, in addition to the OCR work that made him famous, more recently his technology has been used to power Siri and other NLP systems.

          I've been reading through his latest book, How To Create a Mind. It's pretty interesting. My wife and I just made one about four months ago ourselves.

          • by Anonymous Coward

            That book is just a list of self-serving lies, and Kurzweil is an arrogant a**hole.

            Kurzweil didn't invent any of the stuff he claims to have "invented".

            For example, he claims to have invented the use of hidden Markov models for speech recognition around 1983, but papers describing the idea, from researchers at CMU and IBM, had been published in the mid-1970s (a toy sketch of the technique follows this comment).

            He is just good at making money by repackaging and selling other people's inventions and conveniently forgetting where they came from.
            He is also good at attr
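
            For readers wondering what "hidden Markov models for speech recognition" refers to, here is a minimal, hypothetical sketch of Viterbi decoding over a toy two-state HMM, written in Python. Every state name and probability below is made up for illustration; it shows only the general technique being argued over, not any particular historical system.

              # Toy HMM: two pretend phoneme states, three audio "frames".
              # All numbers are hypothetical.
              states = ["S", "IY"]
              observations = ["frame1", "frame2", "frame3"]
              start_p = {"S": 0.6, "IY": 0.4}
              trans_p = {"S": {"S": 0.7, "IY": 0.3},
                         "IY": {"S": 0.4, "IY": 0.6}}
              emit_p = {"S": {"frame1": 0.5, "frame2": 0.4, "frame3": 0.1},
                        "IY": {"frame1": 0.1, "frame2": 0.3, "frame3": 0.6}}

              def viterbi(obs):
                  # V[t][s] = (best probability of any state path ending in s at time t,
                  #            the previous state on that path)
                  V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
                  for t in range(1, len(obs)):
                      V.append({})
                      for s in states:
                          prob, prev = max((V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                                           for p in states)
                          V[t][s] = (prob, prev)
                  # Backtrack from the most likely final state.
                  best = max(V[-1], key=lambda s: V[-1][s][0])
                  path = [best]
                  for t in range(len(obs) - 1, 0, -1):
                      path.append(V[t][path[-1]][1])
                  return list(reversed(path))

              print(viterbi(observations))  # prints ['S', 'S', 'IY'] for these toy numbers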

      • You're wrong, they brought him on-board to make 'Jam with Chrome' work.

      • Kurzweil didn't invent the concept of the Technological Singularity. I'm cautiously optimistic that we won't achieve transhumanism in time for this hack to upload himself into some more permanent processing substrate.
        • How do you "upload" the unconscious? It's like saying you've replicated an iceberg - above the waterline.

          What you get is a simulacrum, not a perpetuation. This is fantasy twaddle - the triumph of middle-intellects, without insight.

          • by HiThere ( 15173 )

            How to do it is a good question. But not currently knowing how doesn't prove it can never be done.

            For that matter, what do you mean "the unconscious"? Do you even have a good definition of the term? Much of what has frequently been called "the unconscious" is common to all humans. Most of it is common to all mammals. Part of it is common to all chordates. The part that is individual is rather small...though just how small we don't know.

            Another thing we don't know is how much of it is devoted to manag

      • You forgot about Ramona? She's probably hiding out in a southbridge somewhere crying her digital eyes out.
  • by vistapwns ( 1103935 ) on Saturday December 15, 2012 @10:32AM (#42301041)
    Pretty much exactly what I think. Director of Engineering is no internship, and while Kurzweil is an accomplished inventor, his inventions don't seem nearly as important as his writings on the singularity. He can only be going to Google to "directly engineer" a technological singularity, as far as I am concerned.
    • by Anonymous Coward on Saturday December 15, 2012 @10:49AM (#42301129)

      How about immanentizing the eschaton?

    • by buybuydandavis ( 644487 ) on Saturday December 15, 2012 @04:07PM (#42303033)

      I wouldn't say that Singularitizing is the only reason he has gone to Google, but I do expect him to steer some research in that direction, and in general to convert more of Google's employees to a broader view of technology.

    • by gweihir ( 88907 )

      The idea of the singularity is complete BS, brought on by people looking for a substitute for religion in technology. Everything we know about CS suggests it is impossible, as increasing the power of a computer to solve more complicated problems is strongly subject to diminishing returns. At the same time, there is not even any halfway credible theory of how true AI could be made to work, and all approaches tried so far have failed. But these idiots do not only predict true AI, but true AI that can understand and i

      • I can't speak for others, but I'm not looking to substitute technology for religion. But would it be such a bad thing, to replace fantasy with reality? Looking at something with a religious mindset does not mean it's not true; that's so patently obvious I can't understand why you even mention it. Many things from ancient religions have been made reality today through technology (from curing diseases to flying in space), and it reflects nothing on them to have been the subject of past religious fantasies. Actually I would say, re
        • by gweihir ( 88907 )

          Estimations of brain power are useless. That is like counting processors. What counts is software. Now, complex software is very limited in size. The most complex task the human race can master is telephone switches (and some others in the same class). These projects run for decades and the problem is very well understood. But, get this, this software does not actually need that much computing power to run!

          So what use is even 100 petaflops, when you are a few dozen orders of magnitude away from being able to w

      • Yeah IBM's Watson completely and thoroughly proved that increased computer power is pretty much useless. I mean why would we want a machine that can learn?

        And Google's car... what a waste! Clearly improved machine vision algorithms will never drive a car!

      • In the book Religion Explained by Pascal Boyer, Boyer states that humans have large ontological categories that we group stuff into. These categories deal with the very nature of being. Ontological categories include Animal, Person, Tool (or artifact), Natural object, and Plant. [Religion Explained, pg 78] Humans have default attributes that we assume that an item in a given category has. So for example, if we are told that something is an animal, we know that it started out small, will grow bigger, and will e

    • I'm sorry, am I the only one who thinks he's a bit of a crackpot with his singularity "theory"?
  • No it doesn't (Score:5, Informative)

    by Anonymous Coward on Saturday December 15, 2012 @10:33AM (#42301055)

    Specifically, "he will be joining Google to work on new projects involving machine learning and language processing," which sounds to me like another way to say "quickening the singularity."

    "he will be joining Google to work on new projects involving machine learning and language processing," sounds like reasonably plain English.

      "quickening the singularity" sounds like pretentious gibberish.

    • by Anonymous Coward

      How about Hastening?

    • "quickening the singularity" sounds like pretentious gibberish.

      You may refer to it as "immanentizing the eschaton" since for us meatbags it likely amounts to the same thing.

      • "The sensation you're feeling is the Quickening." Fits since Ray's ultimately going to have his head cut off and stuffed into a Futurama style jar.

        • "The sensation you're feeling is the Quickening." Fits since Ray's ultimately going to have his head cut off and stuffed into a Futurama style jar.

          That's if he's lucky. Odds are, he'll be simulated.

  • by Anonymous Coward on Saturday December 15, 2012 @10:34AM (#42301065)

    Yay! Kurzweil got a job. Now can he stop selling those cheap supplements [rayandterry.com], and speaking for longevity research at the same time?

    • by SuricouRaven ( 1897204 ) on Saturday December 15, 2012 @12:06PM (#42301511)

      Some of his views are very debatable, but he is still a reasonably accomplished engineer. He may not be bringing about the revolution he wants, but he should be able to recognise good directions to spend resources to achieve more immediate goals. I know that Google has been very interested in machine learning applied to language translation - just the sort of field Kurzweil should have some familiarity with. It'll even satisfy his ambition to change the world - bring down the language barriers, and you've just made a significant step towards world peace. It's much harder to justify a war when the populations of both sides are in constant communication and have established social relationships over the internet.

      • Some of his views are very debatable, but he is still a reasonably accomplished engineer. He may not be bringing about the revolution he wants, but he should be able to recognise good directions to spend resources to achieve more immediate goals.

        Much like the more folks use the web, the more money Google makes: The longer people live, the more they can use the web, the more money Google can make...

  • Herbert did have a point you know

    http://en.wikipedia.org/wiki/Orange_Catholic_Bible [wikipedia.org]

  • It's Official (Score:4, Interesting)

    by Ralph Spoilsport ( 673134 ) on Saturday December 15, 2012 @11:28AM (#42301293) Journal
    Google has jumped the shark.
    • by mcgrew ( 92797 ) *

      With friggin' lasers?

    • by Anonymous Coward

      Come on, you could just as easily have said that when they hired Vint Cerf as 'Internet Evangelist'. At the end of the day, PR does count for something.

  • > which sounds to me like another way to say "quickening the singularity."

    Good! I'll have my own pocket universe and a harem of 30 computer-controlled hotties of my choosing from the fashion and entertainment industry.

    And this is good, transcendent-level computer control. I don't want any way to tell they're actually robots besides that they're interested in me.

    • And this is good, transcendent-level computer control. I don't want any way to tell they're actually robots besides that they're interested in me.

      Easy enough when you're just a simulation anyway.

  • Resistance is futile.
  • Logically there are a few things that need to come out of the industry before a singularity should even be attempted. Until then, please put my money on this joining the graphical IDEs in the Google archives.
    • The singularity has already happened, but it is not a purely computational device. Instead, it is made of three things: people, the internet, and computers. Google, Facebook, Twitter, eBay, Amazon, and the major news sites are all part of it.

  • Whoever makes the first AI capable of improving itself had damned well better stick to that principle. You know, it really isn't funny. It's not the AI you should worry about so much as the people in possession of it. And Google (i.e. the USA) is not the only outfit involved in this arms race. Bad, bad, bad. This one could make the Manhattan Project look like the work of amateurs.
    • Whoever is going to build such A.I. is going to try to control it, which is impossible by definition, since a soon-to-be superhuman intelligence can't be outsmarted by dumber creatures. What becomes of us will be whatever such an A.I. decides; it will be beyond everybody's will.
      • You just mixed up will and intelligence. I think they are two different things. I guess that an AI will not have human emotions and motivations unless they are designed in. Ergo, the handler should be feared more than the AI at the outset (of course, that may change with accidents and evolution).
        • Assuming there's a difference between will and intelligence, that motive and knowledge aren't just different aspects of the same thing, any self-improving superhuman AI need only be given one command for the whole world to end in chaos. "Do No Evil" will probably end in mere paralysis, the AI shutting itself down. I shudder to think of the consequences of commanding the AI to "Do GOOD".

    • by gweihir ( 88907 )

      As nobody even has a rough idea how an AI could be made (hint: it is not a question of computing power), there is little chance of anybody making an AI "that could improve itself". In fact, the whole idea is a completely fictitious construct by people without a clue about what CS can and cannot do.

      Also, the only known intelligence (human type) routinely fails at improving itself, and is subject to delusions in that regard. In fact, it looks like true intelligence is trying to avoid improving itself more often t

        As nobody even has a rough idea how an AI could be made (hint: it is not a question of computing power), there is little chance of anybody making an AI "that could improve itself".

        You mean, you do not have even a rough idea. Meanwhile, progress marches on, and more and more of the original goals of AI research have been achieved. Machine vision and voice recognition are now commonplace. Computers win at an increasing number of games. Machines walk, balancing realistically. It goes on and on. The only thing that has changed since the beginning is that it's not considered a summer project anymore; the difficulty of engineering at the required scale and with techniques that are discovered an

        • by gweihir ( 88907 )

          No, I mean I have been following the scientific progress in that area for about 2 1/2 decades now, and nobody has a clue. Sure, cretins like Marvin Minsky have been predicting AI for decades now, but that is all about grant money, not about any real results or insights.

            No, I mean I have been following the scientific progress in that area for about 2 1/2 decades now, and nobody has a clue. Sure, cretins like Marvin Minsky have been predicting AI for decades now, but that is all about grant money, not about any real results or insights.

            Every time you talk your way through a telephone menu you benefit from that work. Just because a computer can't yet compose a symphony does not mean there has been no progress. You can hold out for the artificially intelligent poet in your dreamland if you want, while I watch with interest the progress towards creating an automaton as intelligent as an ant (25,000 neurons). Then a mouse (75 million neurons). Finally, as intelligent as you, then it can post rubbish to Slashdot in your place.

            • by gweihir ( 88907 )

              You seem to not understand the problem. Stunts like pattern-based voice recognition are not intelligent in any way. These are not incremental steps towards higher intelligence at all; they are just isolated, specialized ways to fake intelligence. Sure, AI research has had some nice results, but none at all that can or will lead towards true AI. You should really have a look into the relevant scientific literature.

              • Your frontal cortex is a "stunt". Nature has repurposed wiring originally evolved to filter 2D imagery. Now it makes complex associations and manipulates data in abstract ways that we are only beginning to decode. But it's still a stunt. I understand why you have difficulty comprehending that progress in AI research is in fact progress. It's because you have no comprehension of the long term implications of work that is being done. You should have a look into the relevant scientific literature yourself, and

        • by bouldin ( 828821 )

          These fields you mention (computer vision, speech recognition) are good examples of the state of intelligent machines.

          We can make these things work pretty well for very specific tasks (e.g. recognize faces in a picture), but we are nowhere near having general, human-level intelligence. It's hard to see how we are even close to having human-level vision capabilities.

  • That is a fiction thought up by cretins looking for a religion-type experience or vision in technology. If anything, what computers can do slows down proportionally to size, i.e. increasing computing power is subject to diminishing returns, in most cases strongly so. Engineers and scientists know this well. These idiots do not even understand the basics.

    • It was a fictional concept thought up by Vernor Vinge in the early 1990's, to satisfy the needs of some novels he wrote, then further developed as a serious idea:

      http://mindstalk.net/vinge/vinge-sing.html [mindstalk.net]

      (It helps that Vinge is a professor of mathematics, as well as an SF author).

      If increasing computing power is subject to diminishing returns, explain your own existence running on a 100 Hz, 25W meatware processor, and the exponential growth of supercomputer power.

      • by gweihir ( 88907 )

        That is just it: There has not been an exponential growth of supercomputer power. Sure, transistor numbers have grown exponentially for a while and may even continue to do so, but not for very long anymore. But what you get per transistor has dramatically decreased. Today, interconnect and power are the limiters, not transistor speeds. Also, on the algorithmic side, more transistors do not really help, as basically no hard problem has a reasonable speed-up with transistor count, only with overall computing sp
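
        A minimal numeric sketch of that diminishing-returns point, using an Amdahl's-law style calculation (the parent comment does not name Amdahl's law, and the 5% serial fraction below is a made-up figure chosen only to show the shape of the curve):

          # Amdahl's law: speedup from n parallel units when a fraction p of the
          # work is parallelizable. The p = 0.95 figure is hypothetical.
          def amdahl_speedup(p, n):
              return 1.0 / ((1.0 - p) + p / n)

          for n in (1, 10, 100, 1000, 10000):
              print(n, round(amdahl_speedup(0.95, n), 1))
          # Prints: 1 1.0 / 10 6.9 / 100 16.8 / 1000 19.6 / 10000 20.0
          # i.e. the speedup saturates near 1 / 0.05 = 20x no matter how much
          # hardware is thrown at the problem.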

  • by tyrione ( 134248 ) on Saturday December 15, 2012 @10:45PM (#42305121) Homepage
    Kurzweil is the last guy I'd hire as a Director of Engineering. Give him an office for special projects, on a tight leash, sure. But not Director of Engineering, which requires accountability and getting products to market.
  • I have reached the point where my reaction to Ray Kurzweil's name is "why do we have to hear about him again?" Not all science fiction authors enjoy such devotion in news reports.
  • Did anyone else get the inkling from his recent documentary "Transcendent Man" that he was looking to digitally resurrect his father from the dead? The man is a megalomaniac looking to create a state of intellectual immortality through software engineering. The idea that he would be allowed to continue his work with the resources of a tech giant like Google gives me the heebie-jeebies for sure. We will certainly have the technology to emulate the human mind in a machine in the not too distant future, but I

  • He was an absolute wingnut, but that doesn't mean he didn't make invaluable contributions to astrophysics, chemistry, mathematics and just science in general.
