Why Ray Kurzweil's Google Project May Be Doomed To Fail
moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."
Ah! (Score:5, Informative)
The old "Chinese Room" again.
The Complete 1 Atlantic Recordings 1956-1961
It's Penrose vs Hofstadter! (Seriously, haven't we done this before?)
Re:Ah! (Score:5, Informative)
Oops! That second line should of course have been:
http://en.wikipedia.org/wiki/Chinese_room [wikipedia.org]
(That'll teach me to post to Slashdot when I'm sorting out my Mingus!)
Re: (Score:3, Funny)
(That'll teach me to post to Slashdot when I'm sorting out my Mingus!)
Stop doing that or it'll fall off.
Re:Ah! (Score:5, Insightful)
Re:Ah! (Score:4, Interesting)
There is a subtle but still significant difference between augmenting humans (or animals) and creating new entities.
There are plenty of things you can do to augment humans:
- background facial and object recognition
- artificial eidetic memory
- easy automatic context-sensitive scheduling of tasks and reminders
- virtual telepathy and telekinesis (control could be through gestures or actual thought patterns; brain-computer interfaces are improving).
- maybe even automatic potential collision detection.
And then there's the military stuff (anti-camouflage, military object recognition, etc).
Re: (Score:3)
Why create AIs?
Because they are the next step in evolution. Unencumbered by a legacy of irrelevant skills and adaptations, and more robust than organic life.
Re: (Score:2)
Searle AND Mingus?
Are you SURE we don't know each other? :-)
Re:Ah! (Score:4, Interesting)
Searle's Chinese Room paper is basically one big example of begging the question.
The hypothetical setting is a room with rules for transforming symbols, a person, and lots and lots of scratch paper. Stick a question or something written in Chinese in one window; the person goes through the rules and passes Chinese writing out of the other window. Hypothesize that this passes the Turing test with people fluent in Chinese.
Searle's claim is that the room cannot be said to understand Chinese, since no component can be said to understand Chinese. The correct answer, of course, is that the understanding is emergent behavior. (If it isn't, then Searle is in the rather odd position of claiming that some subatomic particles must understand things, since everything that goes on in my brain is emergent behavior of the assorted quarks and leptons in it.) Heck, later in the paper, he says understanding is biological, and biology is emergent behavior from chemistry and physics.
He then proposes possible arguments against, and answers each of them by going through topics unrelated to his argument, although relevant to the situation, finishing by showing that each is equivalent to the Chinese Room and therefore doesn't have understanding. Yes, this part of the paper is simply begging the question plus camouflage. It was hard for me to realize this, given the writing, but once you're looking for it you should see it.
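To make the setup concrete, here is a minimal sketch of the Room as pure symbol manipulation (the rules are invented placeholders, not real Chinese, and obviously nothing this small would pass any Turing test):

# A toy "Chinese Room": the rule book is a lookup table, and the operator
# follows it mechanically without understanding any of the symbols.
RULE_BOOK = {
    "SYMBOL-A SYMBOL-B": "SYMBOL-C",   # if this comes in one window...
    "SYMBOL-D": "SYMBOL-E",            # ...pass this out the other
}

def operator(incoming: str) -> str:
    # No semantics anywhere in here, just rule-following.
    return RULE_BOOK.get(incoming, "SYMBOL-UNKNOWN")

print(operator("SYMBOL-A SYMBOL-B"))   # SYMBOL-C

The operator understands nothing, which is exactly the point: in a scaled-up version, any understanding would have to be a property of the whole system, not of any one component.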
Re: (Score:3, Interesting)
The Chinese Room is the dumbest fucking thought experiment in the history of the universe. Also, Penrose is a fucking retard when it comes to consciousness.
Now, having put the abrasive comments aside (without bothering with a critique of the aforementioned atrocities: the internet and Google do a much better job on the fine details than any post here ever will).
SOooooo, back to the topic at hand: Boris Katz forgets a very important detail: A lifetime of experience to a computer c
Re:Ah! (Score:4, Insightful)
Yeah, just keep arguing your way into a semantic web... :-)
Re: (Score:3, Insightful)
A lifetime of experience can be fed to a computer cluster with several thousand cores, each running at several billion Hz, in a very short time.
How?
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
There's that, and Boris may have missed the fact that Ray will have access to Google's new toy "Metaflow", a powerful and robust context engine with a great deal of the necessary "Referential Wiring" already laid down as a critical bit of infrastructure upon which to build his new beastie. I'd say Google has most if not all of the raw ingredients for building something potentially revolutionary, and if anyone can make all those dangly bits all singing and all dancing, Ray is the man with a
Re: (Score:3)
Re: (Score:3)
Not sure if I've mentioned this before, but "The Emperor's New Mind" by Roger Penrose goes into great detail about this and I find his ideas compelling, though others disagree. We simply don't
Re: (Score:3)
True to a point, but there's some valid counter-arguments.
Life evolves. This includes AIs. Since every item, mechanical or biological, breaks down sooner or later, given time the life we will have is that which is capable of producing new life, or repairing the old, faster than things break down.
Creating new life, or repairing old life, requires resources, at a minimum energy and whatever substance the intelligence is hosted in. Could be silicon, could be carbon, but it's a fair bet that it'll be -somethi
Re: (Score:3)
How is the Chinese Room thing valid?
The argument is - you write a program which can pass a Turing test, in Chinese. You can, in theory, execute that program by hand. But the program isn't a "mind", because you don't speak Chinese.
It's rubbish. The guy in the "Chinese Room" isn't the "mind", he's part of the brain. Your neurons aren't a mind. The CPU isn't a mind. But a CPU executing a Turing-test-beating AI program is a mind. A mind is not a piece of hardware, it's an abstract way to describe hardware and s
It may be flawed, but that doesn't sound like it. (Score:4, Interesting)
You can draw a distinction between experiencing the world and processing raw information, but how sharp a line can you draw when I experience the world through the processing of raw information?
Re: (Score:3, Interesting)
I've always thought it was about information combined with wants, needs, and fear. Information needs context to be useful experience.
You need to learn what works and doesn't, in a context, with one or many goals. Babies cry, people scheme (or do loving things), etc. It's all just increasingly complex ways of getting things we need and/or want, or avoiding things we fear or don't like, based on experience.
I think if you want exceptional problem solving and nuance from an AI, it has to learn from a body of ex
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Just wait until you learn how to redirect the input from /dev/random
Re: (Score:3)
If you want a digital assistant that won't forget unless you tell it, get an iPad. (Or better still, a life) and obviously, avoid anything portable made by MS.
Re:It may be flawed, but that doesn't sound like i (Score:5, Informative)
But I'm curious why you think a mind is necessarily a neural network. Are you saying there is no other possible way to construct a mind? As far as I can tell, there are lots of other designs, many of them far superior to neural networks, especially for such basic things as representing knowledge.
Re: (Score:2)
And how is that not processing raw information? It's like saying a camera does preprocessing. After all, a camera turns colors into electrical pulses. That's pretty raw information in my book, but then so is the input from the eyes.
While I'm happy to say Kurzweil's plan is doomed to fail (after all, he stole it from Jeff Hawkins, and Hawkins never made it work), it all seems like fundamentally indistinct versions of other AI applications. Kurzweil is basically saying way more data and computer power a
Re: (Score:3)
Being a little smug, aren't we? It's not like you actually know anything about which you opine, other than regurgitating someone else's more informed opinion. You have no idea if intelligence or sentience is a linear process; looking at the degree of intelligence as a function of brain size and complexity, I would assert it's not. You have to have a sufficiently complex brain to manage symbolic reference and the rudiments of language to distinguish a "Self", and we know for certain chimpanzees do and mice not so
Re: (Score:2)
All the mind is, is a large set of pattern recognizers.
I've read theories promoting this, but I haven't seen any actual proof of it yet. When things graduate from cognitive "science" to neuroscience, I start to take them seriously. This hasn't happened yet.
As much as I enjoy debates arising from cogsci, it is pretty much only a branch of philosophy as yet. This isn't an insult, I love philosophy (to the point of spending large amounts of time and money on it), but it hardly has the ability to make strong statements.
Further, the AI field is boring. It has s
You have to start somewhere. (Score:5, Insightful)
It won't be perfect, but "fundamentally flawed" seems like an overstatement to me. A personal AI assistant will be useful for some things, but not everything. What it will be good at won't necessarily be clear until it's put into use. Then any shortcomings can still be improved, even if certain tasks must be more or less hard-wired into its bag of tricks. It will be just as interesting to know what it absolutely won't be useful for.
Re: (Score:2, Insightful)
AI itself is fundamentally flawed.
AI assumes that you can take published facts, dump them in a black box, and assume that the output is going to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.
I'm sure everyone here can either identify this or identify with it.
--
BMO
Re: (Score:2, Insightful)
that passes for intelligence in college, so what's the problem? most people on the street don't even have "book smarts", they're dumber than a sack of shit
Comment removed (Score:5, Funny)
Re:You have to start somewhere. (Score:5, Insightful)
that passes for intelligence in college, so what's the problem?
That's the *only* place it passes for intelligence. And that only works for 4 years. It doesn't work for grad level. (If it's working for you at grad level, find a different institution, because you're in one that sucks).
A lot of knowledge is not published at all. It's transmitted orally. It's also "discovered" by the user of facts through practice as to where certain facts are appropriate and where not appropriate. If you could use just books to learn a trade, we wouldn't need apprenticeships. But we still do. We even attach a fancy word to apprenticeships for so-called "white collar" jobs and call them "internships."
The apprentice phase is where one picks up the "common sense" for a trade.
As for the rest of your message, it's a load of twaddle, and I'm sure that Mike Rowe's argument for the "common man" is much more informed than your flame.
Please note where he talks about what the so-called "book learned" (the SPCA) say about what you should do to neuter sheep, as opposed to what the "street smart" farmer does, and Mike's own direct experience. That's only *one* example.
http://blog.ted.com/2009/03/05/mike_rowe_ted/ [ted.com]
In short, your follow-up sentence says that you are an elitist prick who probably would be entirely lost without the rest of the "lower" part of society picking up after you.
--
BMO
Re: (Score:2)
What if roombas pick up after him?
Re:You have to start somewhere. (Score:5, Interesting)
My wife is putting our son through these horrible cram school things, Kumon and others. I was so glad when he found ways to cheat: now his marks are better, he gets yelled at less, and he actually learned something.
Re:You have to start somewhere. (Score:5, Insightful)
No, it doesn't.
One particular kind of AI, which was largely abandoned in the '60s, assumes that. Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster. AI systems can be taught in all kinds of different ways, including dumping information into them, a la Watson; by letting them interact with an environment, either real or simulated; or by having them watch a human demonstrate something, such as driving a car.
The objection here seems to be that Google isn't going to end up with a synthetic human brain because of the type of data they're planning on giving their system. It won't know how to throw a baseball because it's never thrown a baseball before. (A) I doubt Google cares if their AI knows things like throwing baseballs, and (B) it says very little generally about limits on the capabilities of modern approaches to AI.
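For the curious, here is a minimal sketch of "learning through presentation of input" at the statistical end of that range: a perceptron that is never given rules, only labeled examples. (The AND function is just a stand-in for whatever the training data encodes.)

# Present labeled examples repeatedly; the weights absorb the pattern.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # epochs of presentation
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                # learn from each mistake
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the learned weights reproduce the target function.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in examples])

Nothing about AND was programmed in; it was learned entirely from the presented data, which is the distinction being drawn here.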
Re:You have to start somewhere. (Score:5, Interesting)
Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster.
Biological neurons on a plate learning faster than neurons inside one's head? They are both biological and work at the same "clock speed" (there isn't a clock speed).
Besides, we do this every day. It's called making babies.
The argument that I'm trying to get across is that the evangelists of AI like Kurzweil promote the idea that AI is somehow able to bypass experience, aka "learning by doing" and "common sense." This is tough enough to teach even to systems that are the result of the past 4.5 billion years of MomNature's bioengineering. I'm willing to bet that AI is doomed to fail (to be severely limited compared to the lofty goals of the AI community and the fevered imaginations of the Colossus/Lawnmower Man/Skynet/Matrix fearmongers) and that MomNature has already pointed the way to actual useful intelligence, as flawed as we are.
Penrose was right, and will continue to be right.
--
BMO
Re: (Score:3)
I think there's another aspect to this. Any artificially produced intelligence will be totally alien - it won't think like us.
I also wonder what will motivate it, whether it will object to being a lab curiosity, whether it will be paranoid, or a sociopath etc.
Perhaps a new field will develop - Machine Psychology.
Re:You have to start somewhere. (Score:5, Interesting)
This interests me. As a nonexpert in AI, it has always seemed to me that a critical missing aspect of attempts to generate 'strong' AI (which I guess means AI that performs at a human level or better) is a process in which the AI formulates questions, gets feedback from humans (right, wrong, senseless - try again), coupled with modification by the AI of its responses and further feedback from humans...lather, rinse, repeat...until we get responses that pass the Turing test. This is basically just the evolutionary process. This is what made us.
I don't think we need to know how a mind works to make one. After all, hydrogen and time have led to this forum post, and I doubt the primordial hydrogen atoms were intelligent. So we know that with biochemical systems, it's possible to come up with strong I given enough time and evolution. Since evolution requires only variation, selection, and heritability, it's hard for me to believe we can't do that with computational systems. Is it so difficult to write a learning system that assimilates data about the world, asks questions, and changes its assumptions and conclusions on the basis of feedback from humans?
And it's probably already been tried, and I haven't heard about it. If it has, I'd like to know. If not, I'd like to know why not.
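For what it's worth, the loop described above (variation, selection, heritability, feedback) is easy to sketch. The human judgments of right/wrong/senseless are faked here by a scoring function; in the proposed setup, score() would be replaced by actual human feedback:

import random

TARGET = "strong ai"                      # stand-in for "the right answer"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate):                     # selection pressure (fake "human feedback")
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):                       # variation on an inherited copy
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(ALPHABET) + parent[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(200):
    population.sort(key=score, reverse=True)          # keep the best responses
    if score(population[0]) == len(TARGET):
        break
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(40)]
print(generation, population[0])

Whether this scales from toy strings to minds is of course the whole open question, but the mechanism itself is trivially implementable.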
Re: (Score:3)
AI itself is fundamentally flawed.
AI assumes that you can take published facts, dump them in a black box, and assume that the output is going to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.
I'm sure everyone here can either identify this or identify with it.
-- BMO
You're misstating the nature of your objection.
What you're objecting to isn't the entirety of artificial intelligence research; rather, you're drawing an (IMO false) distinction between the sort of information processing required to qualify as being "book smart" and the information processing you label "common sense."
Human brains detect and abstract out patterns using a hierarchical structure of neural networks. Those patterns could involve the information processing needed to accurately pour water into a
Re: (Score:3)
"Is it your belief that human brains process information in some way that can't be replicated by a system that isn't composed of a network of mammalian neurons, and, if so, why?"
Not just mammalian neurons, but invertebrate neurons too. I think that until we surpass what MomNature has already bioengineered and abandon the Von Neumann/Turing model of how a computer is "supposed to be", we will not construct any AI that is more performant than what already exists in biological systems.
And that's t
Re: (Score:3)
1) Can machines fly? Yes, planes can fly. 2) Can machines swim? No, submarines don't swim. If you can satisfactorily explain the discrepancy between the answers to those two questions, you might be able to contribute.
That's just a semantic trick that exploits the ambiguities of these two verbs in the English language. It doesn't say anything about the nature of reality, just about how English-speakers think about reality.
Re: (Score:3)
What do you mean by perfect? A universal Swiss Army Knife? My car is great, but it makes a lousy vibrator. I'm sorry if I'm being flip, and I think I get what you're trying to say, but when the first laser was created at Bell Labs in the '50s, do you think anybody had a clue there'd be a million uses? An AI will make that look like a disposable Dixie cup.
Re: (Score:3)
I think what he really says is that Kurzweil has chosen the wrong approach. It's symbolic A.I. versus connectionism again. As someone who is also working in the field, I sort of agree with the critique. Rather than musing about giant neural networks, it's probably more fruitful to link up Google's knowledge base with large common sense ontologies like Cyc, combine this with a good, modern dialogue model (not protocol-based but with real discourse relations) and then run all kinds of traditional logical and p
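(Picking up on the Cyc suggestion, a toy illustration of the symbolic half of that proposal: naive forward chaining over a couple of Cyc-style facts. The facts and rules are invented placeholders, nothing like Cyc's actual contents.)

# Common-sense facts as triples, plus one inference rule (isa-transitivity).
facts = {("isa", "tweety", "bird"), ("isa", "bird", "animal")}
rules = [
    lambda fs: {("isa", x, z)
                for (p1, x, y) in fs if p1 == "isa"
                for (p2, y2, z) in fs if p2 == "isa" and y2 == y},
]

changed = True
while changed:                  # fire rules until a fixed point is reached
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

print(("isa", "tweety", "animal") in facts)   # True: an inferred fact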
So what, really? (Score:2)
If they can put together a smart assistant that understands language well, so what if it has some limitations? AI research moves in fits and bursts. If they chip away at the problems but don't meet every goal, is that necessarily a "fail"?
experience (Score:3)
Ah, but what is experience but information in context? If I read a book, then I receive the essence of someone else's experience purely through words that I associate with, and that affect, my own experience. So an enormous brain in a vat with internet access might end up with a bookish personality, but there's a good chance that its experience -- based on combinations of others' experiences over time and in response to each other -- might be a significant advancement toward 'building a mind.'
Re: (Score:3)
Re:experience (Score:4, Insightful)
So what you are saying is that the computer, like humans, will be boxed in by its own perception?
How is this metaphysically different from what we *do* know about our own intelligence?
Re: (Score:3)
No, what he's saying is that "the meaning isn't in the message".
That's a nice slogan, but he misses an even bigger point. In slogan form: "syntax is insufficient for semantics".
Re: (Score:3)
"the meaning isn't in the message" and "syntax is insufficient for semantics"
You might have a point if the brain actually reached out and touched the world, but it doesn't. It's hidden behind layers that process input from the real world and only feed messages to the brain, which does just fine constructing meaning from them.
Re: (Score:3)
Re:experience (Score:5, Insightful)
Re: (Score:3)
Re: (Score:2, Funny)
Do you browse the same internet I do?? Bookish is not what would evolve from it.
Re: (Score:2)
Frankenerd?
Kurzweil: "Oh I've failed so badly. I built a fucking eNerd!"
Experience (Score:2)
I believe most of us think in terms of the experiences we have had in our lives. How many posts here effectively start with "I remember when..."? But data like that could be loaded into an AI so that it has working knowledge to draw on.
Re: (Score:2)
Re: (Score:2)
But with an AI you can integrate the experiences into its logic to a greater extent than I can by just telling them to you. I have access to the AI's interfaces and source code. As far as I know, I don't have access to yours.
Re: (Score:2)
Re: (Score:2)
I remember when I forgot where the hell I was going and had to Google it.
Mr. Grandiose (Score:3, Insightful)
Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.
Re: (Score:3)
Anyone who knows Mr. Kurzweil knows this is not what he is up to.
Re:Mr. Grandiose (Score:5, Interesting)
You should know, after all science has created, that "we don't know" doesn't mean "it's impossible", nor does it mean "this isn't the right method".
Re: (Score:3)
From this, and from recent neurological research supporting and extending it by showing just how deeply the mind depends on low-level integration with body biology (for example, see Damasio et al.), it is clear that to create a human-like AI, you need to either simulate a body and its environ
Re: (Score:3)
Re: (Score:3)
Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.
What would you need to see / experience in order to agree that the system you were observing did display what you consider to be "Intelligence", and wasn't simply "... just scaled-up versions of Eliza" ?
Re: (Score:3)
Circus magic disguised as Artificial Intelligence is just artifice.
Hmm. Circus magic that kicked the shit out of Ken Jennings. Maybe your mind is circus magic too? Just not quite as good.
Re:Mr. Grandiose... HARDLY. (Score:3)
Eliza was a very simple grammar manipulator, translating user statements into Rogerian counter-questions. No pattern recognition or knowledge bases were ever employed.
In contrast, Watson, Siri, and Evi all cleverly parse and recognize natural language concepts, navigate through large external info bases, and partner with and integrate answer engines like Wolfram Alpha.
There is simply no similarity. Bzzzzt. You lose.
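For anyone who hasn't seen how little Eliza actually did, a minimal sketch of the trick: pattern matching plus template echoes, with no knowledge base anywhere. (These two patterns are invented; Weizenbaum's real script had a few dozen.)

import re

PATTERNS = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "How long have you felt {0}?"),
]

def eliza(statement: str) -> str:
    for pattern, response in PATTERNS:
        m = re.match(pattern, statement, re.IGNORECASE)
        if m:
            return response.format(m.group(1))
    return "Please go on."                 # the famous fallback

print(eliza("I am unhappy"))               # Why do you say you are unhappy?

Compare that to parsing a Jeopardy clue against terabytes of text and the "scaled-up Eliza" line does look pretty thin.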
Of Course (Score:2)
Any real AI needs loads of experience; I am sure anyone interested enough to write a book about it knows this...
I doubt that he simply overlooked it.
loops (Score:3, Insightful)
Sophisticated technology (Score:4, Interesting)
Re: (Score:2)
You can do amazing things with clockwork. See: https://en.wikipedia.org/wiki/Difference_engine [wikipedia.org]
Just like you can do the same thing with relays and vacuum tubes. A computer is a computer no matter the form. The difference is that every iteration results in something smaller, possibly cheaper, and much more powerful.
The thing is we have always assumed that the brain follows certain patterns. There are entire fields out there devoted to the study of those patterns. What AIs attempt to do is mimic the results
Oh machines (Score:3)
An "oh machine" has already been created. I don't think we really want that super smart though.
http://health.discovery.com/sexual-health/videos/first-sex-robot.htm [discovery.com]
We've been down THIS road enough (Score:3, Insightful)
Re: (Score:2)
on the other hand, computer systems that design computer systems are a done deal
Why Ray Kurzweil's Google Project May Be Doomed? (Score:2, Insightful)
Re: (Score:2, Insightful)
Re: (Score:2)
Because Kurzweil's a freakin' lunatic snakeoil salesman? I dunno - just guessin'.
If you're "just guessin'", then why should anyone grant your statement any weight?
Wouldn't it be better to make an actual argument, and support it with actual evidence?
A Heinlein quote comes to mind (Score:5, Insightful)
"Always listen to experts. They'll tell you what can't be done and why. Then do it" (from the Notebooks of Lazarus Long)
Re: (Score:2)
But when the expert tells you that something can be done, they are probably right.
-A.C.C
Ah, naysayers... (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
You mean like heliocentrism was tossed out because, if the earth moves around the sun, we should see parallax motion of the stars, but when our instruments weren't sensitive enough to detect parallax motion of the stars, we concluded the earth doesn't move around the sun?
Re: (Score:2)
What happened to the spirit of "shut up and build it"?
You must be new here. A big portion of AI is predicated on "make grandiose announcement", pass GO, and collect $200 (million) until people forget about your empty promise. Wash, rinse and repeat.
Serious AI is done quietly, in research labs and universities, one result at a time, until one day a solid product is delivered. See, for example, Deep Blue, Watson or Google Translate. There were no announcements prior to at least a rather functional beta version of the product being shown.
So what (Score:2)
If it gives us new technology, like many other AI attempts have, then it will be a success.
Not quite (Score:4, Interesting)
A technology editor at MIT Technology Review says Kurzweil's approach may be fatally flawed based on a conversation he had with an MIT AI researcher.
From the brief actual quotes in the article, it sounds like the MIT researcher is saying that Kurzweil's proposal, in a book he wrote, for building a human-level AI might have some issues. My impression is that the MIT researcher is suggesting you can't build an actual human-level AI without more cause-and-effect type learning, as opposed to just feeding it stuff you can find on the Internet.
I think he's probably right... you can't have an AI that knows about things like cause and effect unless you give it that sort of data, which you probably can't get from strip-mining the Internet. However, I doubt Google cares.
Re: (Score:2)
Re: (Score:2)
".. works oh machines .. " (Score:2)
Seriously?!?!?
Why do I even bother?
Cyc vs. bottom up (Score:5, Informative)
We've heard this before from the top-down AI crowd. I went through Stanford CS in the 1980s when that crowd was running things, so I got the full pitch. The Cyc project [wikipedia.org] is, amazingly, still going on after 29 years. The classic disease of the academic AI community was acting like strong AI was just one good idea away. It's harder than that.
On the other hand, it's quite likely that Google can come up with something that answers a large fraction of the questions people want to ask Google. Especially if they don't actually have to answer them, just display reasonably relevant information. They'll probably get a usable Siri/Wolfram Alpha competitor.
The long slog to AI up from the bottom is going reasonably well. We're through the "AI Winter". Optical character recognition works quite well. Face recognition works. Automatic driving works. (DARPA Grand Challenge) Legged locomotion works. (BigDog). This is real progress over a decade ago.
Scene understanding and manipulation in uncontrolled environments, not so much. Willow Garage has towel-folding working, and can now match and fold socks. The DARPA ARM program [darpa.mil] is making progress very slowly. Watch their videos to see really good robot hardware struggling to slowly perform very simple manipulation tasks. DARPA is funding the DARPA Humanoid Challenge to kick some academic ass on this. (The DARPA challenges have a carrot and a stick component. The prizes get the attention, but what motivates major schools to devote massive efforts to these projects are threats of a funding cutoff if they can't get results. Since DARPA started doing this under Tony Tether, there's been a lot more progress.)
Slowly, the list of tasks robots can do increases. More rapidly, the cost of the hardware decreases, which means more commercial applications. The Age of Robots isn't here yet, but it's coming. Not all that fast. Robots haven't reached the level of even the original Apple II in utility and acceptance. Right now, I think we're at the level of the early military computer systems, approaching the SAGE prototype [wikipedia.org] stage. (SAGE was a 1950s air defense system. It had real time computers, data communication links, interactive graphics, light guns, and control of remote hardware. The SAGE prototype was the first system to have all that. Now, everybody has all that on their phone. It took half a century to get here from there.)
a much better article (Score:4, Informative)
The crappy little superficial one-page MIT Technology Review article has a link to another, similarly crappy article on the same site, but if you click through one more layer you actually get to this [newyorker.com] much more substantial piece in the New Yorker.
Let them try (Score:3)
Seriously, what's the worst that can happen? Skynet? Wait...
Re: (Score:2)
Just because that's how a human brain works doesn't mean it's optimal or the best approach. Personally I think an AI that had as bad a memory as I do would be a pretty shitty personal assistant. So I'm rather glad they aren't listening to your "advice", otherwise my computer would become very useless very quickly.
Re:Bad approach. (Score:5, Insightful)
Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)
As far as whether following the brain's exact model is the only road to AI, well it's too early to say. We tried to get flight by building wings that flap to mirror nature, but eventually found other ways (propellers and jets).
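A concrete illustration of "the weightings are the memories": a tiny Hopfield-style associative memory. The stored pattern lives nowhere except in the weight matrix, yet a corrupted cue recalls it. (A textbook toy, not a claim about how brains actually store anything.)

import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # the "memory"
W = np.outer(pattern, pattern).astype(float)       # Hebbian weights
np.fill_diagonal(W, 0)                             # no self-connections

cue = pattern.copy()
cue[:2] *= -1                                      # corrupt two bits

state = cue
for _ in range(5):                                 # let the network settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))              # True: memory recalled

No casual observer inspecting W would recognize a "memory" in it, which is the parent's point.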
Re: (Score:3)
Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)
As far as whether following the brain's exact model is the only road to AI, well it's too early to say. We tried to get flight by building wings that flap to mirror nature, but eventually found other ways (propellers and jets).
I'd vote you up if I had points left. The OP is off base in so many areas. I started laughing at the fMRI-not-discovering-free-will bit.
Re: (Score:2, Interesting)
We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future
No, we don't know this. Some researchers believe that this might be the case, but it certainly isn't a proven fact. Personally, I think it is a misinterpretation of the data, and that what the fMRI is observing is the process of consciousness.
Re: (Score:3)
Amazing how a species that lacks "Real-time Intelligence" and thus cannot think before acting managed to create a freaking fMRI machine. I guess it's just like those million monkeys with a million typewriters.
Might need to go back to the drawing board on your theories....
Re:Bad approach. (Score:5, Insightful)
The human brain doesn't "store" information at all (and thus never processes it).
This sounds like mere semantics to me. Yes, there isn't a little television screen playing that one time when you broke your arm, with a post-it note attached saying "memory #4, April 3, 1956". But there is a deeply encoded structure of chemical potentials and neural connections which represents this memory. It is stored, and it is, obviously, processed. If it weren't so, then how could this memory be subject to action and further processing?
Yes, it isn't stored like a video file is stored on your computer, or a photo in your album; but this doesn't mean it isn't stored. If it is an object of thought, it is in the brain, and if it is re-callable, it is stored.
We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.
Huh? I'm not going to get into the agency (free will) debate... But if it did exist, I don't think our understanding of the brain is really up to snuff enough for some fMRIs to show it. If it does exist (again, I'm not getting into it), I doubt very much that it will be a little glowing ball located in the middle of your brain (again with a post-it saying "free will"); it would be, like pretty much everything else, distributed across large areas of the brain and sharing functions with other processes of the brain (like memory, limbic functions, sensory processing, etc...).
This system creates the illusion of intelligence.
This sort of statement is why I generally laugh at the whole field of cogsci and AI. Look up p-zombies. At what point is an illusion not an illusion, and if you can't actually tell the difference with any test, how can you ever say, meaningfully, that it IS actually a mere illusion? I make an AI, a very strong AI, and it acts exactly like a human: 100% indistinguishable from a human mind to an outside observer. Is this an illusion? How do you find out? Given a Turing-test-like environment, where you can't judge on surface features, how could you ever tell? Ask it, and it will say it is intelligent (just like you or me); input a stimulus, and you get the same output you or I would give.
At this point illusion becomes a meaningless statement, since it is completely unprovable.
I'm not a fan of Strong AI, and doubt it is possible, but these arguments have been pretty much beaten into the ground by now. I hate to say it, but with intelligence all that matters is inputs and outputs; the rest is a black box. This also ignores the fact that intelligence is a dumb term, completely meaningless when applied to anything non-human. In this case, by using "intelligence" we only mean "human-like", which pretty much means it gives an expected output to a given input.
Re: (Score:3)
So how do you account for effortful thought [wikipedia.org] or planning? It is true to say that there is no thinking before
Re: (Score:2)
depressing that someone could spend their whole lives deluding themselves in such a manner
Why? I have spent my whole life trying to not die. I would like to continue doing that.