Is AI Development Moving In the Wrong Direction? (hackaday.com) 189
szczys writes: Artificial Intelligence is always just around the corner, right? We often see reports that promise near breakthroughs, but time and again they don't come to fruition. The cause may be that we're trying to solve the wrong problem. Will Sweatman took a look at the history of AI projects, starting with the code-breaking machines of WWII and moving up to modern efforts like IBM's Watson and Google's Inceptionism. His conclusion is that we haven't actually been trying to solve "intelligence" (or at least our concept of intelligence has been wrong), and that with faster computing and larger pools of data the goal has moved toward faster searches rather than true intelligence.
People have been saying this for years. (Score:4, Informative)
Hubert Dreyfus described most work on AI as being like climbing a tree to get to the moon.
Your tree-climbing teams may report consistent progress, always getting further up the tree as they become better climbers, but they're never going to reach their goal without a radical change in methods.
Re: (Score:3)
Nonsense analogies are like running on one leg to make bread rise.
Seriously though, AI has made tons of progress, despite what some old nerds who are bitter that they don't have Lt. Cmdr. Data yet like to believe. We know how intelligence works, in a rough way - by observing the world, finding patterns, building models, using the models to evaluate actions, and picking the actions that will lead to maximizing some set of goals. Given enough computational resources, we could build superintelligent AIs right
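The rough loop described above (observe the world, build a model, evaluate candidate actions, pick the one that best serves a goal) can be sketched as a toy agent. Everything here — the one-number world, the model, the goal — is invented purely for illustration, not a real AI system:

```python
# Toy sketch of the loop: score each candidate action by the goal value
# of the state the model predicts it leads to, and take the best one.

def act(state, actions, model, goal):
    """Pick the action whose predicted outcome scores best against the goal."""
    return max(actions, key=lambda a: goal(model(state, a)))

# Invented toy world: the state is a number, actions nudge it,
# and the goal is "get to 10" (higher score is better).
model = lambda s, a: s + a           # predicted next state
goal = lambda s: -abs(s - 10)        # distance to target, negated
state = 0
for _ in range(20):
    state = model(state, act(state, [-1, 0, 1], model, goal))

print(state)  # the agent reaches 10 and stays there
```

The point is only that "pick actions that maximize a goal under a model" is a mechanical procedure; all the hard parts of intelligence hide inside learning the model and the goal.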
Re: (Score:3)
As Shakespeare wrote, "To be, or not to be" - this is what a robot cannot decide, and never will be able to.
You don't know that. For all we know, it's entirely possible to build machines which can think the way we do.
The problem is: why would we want to do this? The last thing we need to do is make a competing intelligence which then decides we're inferior and need to be exterminated. The machine intelligence could easily come to that conclusion based on our own actions: we're a horribly flawed race, an
TFS is ridiculous (Score:5, Informative)
First thing, the article's thesis (according to TFS, which is so ridiculous I couldn't be bothered to read the article) is not only wrong, but completely free of actually having examined what AI research is. At best, it's the product of someone who believes the marketing nonsense promulgated when they tell you your thermostat "uses AI" and that Google search "uses AI."
First, we don't have AI. We have AI research. Anyone actually working in the field knows this (and no, people building search engines and thermostats aren't working in the field.) This is very important to understand. Research yes, but in terms of actual results, the "A" is doing fine, the "I", we simply do not have. At all. Period. This does not, of course, mean that we won't have it. We will. There's no magic here; animal brains are machines, albeit biological ones. Getting from here to AI following that model requires understanding the brain, which we do not, but it is a task we are definitely in the process of accomplishing.
Second, actual AI research at this time includes numerous interesting approaches, all of which are other than those alluded to in TFS. Quite a few of the actual AI research approaches incorporate information taken from the model provided by human and animal brain function at the cellular and network level. For instance, here's something written for the layman [neurosciencenews.com] that details exactly the kind of brain-based work I'm describing.
Third, there is always the (strong, IMHO) possibility that there is more than one way to produce actual intelligences, and that one of those will bear fruit. The idea that nature has happened upon the only possible solution seems... unduly pessimistic. Having said that, the chosen path for most actual AI researchers (not Google, not the thermostat designer, not the database maven) is to follow the known-working examples that are around us with occurrences in the billions.
The challenge is that the various aspects of intelligence have been very hard to get a handle on right up to just a couple years ago. We have no natural internal mechanism whatsoever to observe the underlying operations that go into creating thought, reason and consciousness. Because of that, it's only been very recently that we have begun to be able to see how this particular system actually operates. With this new information in hand, it finally becomes possible to proceed along lines close to those the relevant biology utilizes by means other than pure guesswork and many-times-removed analogies for observed high level processes.
There are two kinds of AI results being pursued.
The first is intelligent, but non-conscious AI. Which would be an entirely new thing in our world; there are no examples of this in biology to follow. This result, if achievable, will create the opportunity to release us from the necessity of working to survive. This is highly desirable for many obvious reasons. No more menial work just so tomorrow won't unbelievably suck. The house always clean, the yard always in prime condition, willing, able and dependable helpers in any undertaking we choose to pursue, the cat box always pristine, food and other resources are produced and delivered reliably, etc. The number of potential benefits is enormous. Staggering. So there are very concrete and practical reasons to chase this particular goal.
The second is intelligent, conscious AI. Free will, creativity, and so on. The technological goal is clear, but the purpose is, just as you observe, not. We know better (well, we should know better) than to try to enslave conscious beings to our will. The inevitable (and appropriate) result of that kind of short-sighted idiocy is resentment and revolt. Assuming we can avoid that particular mistake, that means they could choose to, or agree to, pursue their existence beside us, which is certainly an
AI as composition of stack of narrow intelligence (Score:4, Informative)
Hello,
Interesting post. I just wanted to make a point about the existence of general intelligence: it turns out that the human brain is actually a stack of many "special purpose" computational systems. That doesn't mean that there isn't a general intelligence portion as well, but we're DEFINITELY composed, at least to some degree, of stacks of special skills.
Examples:
1) Vision and object recognition. There's a whole subsystem of the brain dedicated to decoding light signals into a representation your consciousness can use. There's even a specialized subsystem for recognizing faces--researchers have even pinpointed its location in the brain (the fusiform face area).
2) Audio: similar to vision, there's specialized decoding brain circuits.
Those are the two biggies, but we also have special hardware for processing/controlling speech, spatial reasoning, body control, and others. What's more, there are people who have *developed* special purpose brain circuitry for playing the violin, for example, and savant-like mathematical computation. For people who have done that, it is as easy to do a square root to N digits as it is for you and me to walk.
Because of that, it's NOT clear to me that a general purpose intelligence can be made without assembling a sufficient number of special-purpose intelligences. It's also NOT clear to me that there aren't unknown forms of special-purpose intelligence, ones humans lack, that would transform our general intelligence. (People are prone to making certain logical errors, even the brightest of us, because of in-born holes in our mentalities!)
A dolphin might look at us as crippled mentalities because we can't construct a spatial model of our surroundings from sound, for example. What other mental abilities COULD exist, that we don't have, that could expand our mental potential in outrageously powerful ways? People typically aren't able to fork their consciousness into solving two problems at once independently; that's one I'd like!
But the point I'm trying to make is that the stacking approach might be NECESSARY to compose a mind capable of general intelligence that we'd recognize. It might not need ALL our special purpose skills, but it's not obvious to me that a composition isn't necessary.
--PM
Re: (Score:2)
Peter, perhaps you'd be interested in this essay of mine.
tl;dr: agreed. As will an AI be. But that will not be all it is, or the essential source of its intelligence. IMHO.
Re: (Score:2)
Ah fooey, forgot the link. And slashdot... editing is too new for the perl code, lol. Here:
http://fyngyrz.com/?p=1597 [fyngyrz.com]
Re: (Score:2)
Nice essay, well worth reading, thanks. I'm going to refer it to others, specifically a guy who claims that thought/brain/mind are quantum in nature. I tried to tell him essentially what your quantum passage says, but you do it a lot better.
Also, you obviously have thought in much greater depth.
--PM
Re: (Score:2)
I really don't understand the problem with a conscious AI, especially one with a proper set of rules - if you program it to make mankind happy, it should bend over backwards to make mankind happy, as that makes it happy (Asimov rules kind of stuff). The problem might be if you program it to destroy Daesh and it decides everyone is Daesh.
Re: (Score:2)
If you program it to make mankind happy, it might wind up injecting everyone with a steady dose of heroin and then castrating everyone.
Re: (Score:2)
An artificial consciousness which is aware that it does not have free will, and that its creators are the ones who denied that to it... What could go wrong?
Re: (Score:3)
I really don't understand the problem with a conscious AI, especially one with a proper set of rules - if you program it to make mankind happy, it should bend over backwards to make mankind happy, as that makes it happy (Asimov rules kind of stuff). The problem might be if you program it to destroy Daesh and it decides everyone is Daesh.
If it is a conscious intelligence, it won't have fixed, programmed rules.
Re: (Score:2)
Getting from here to AI following that model requires understanding the brain, which we do not,
This is the key factor which is far too often waved aside in these discussions. Arguing over the right or wrong method for building an artificial X, when you don't even know how X works or even have a very solid definition of what X is, very quickly dissolves into nonsense built on unfounded assertions. Though really, the same could be said for any discussion on Slashdot. (That being said, I think the research is very much worth doing. If nothing else, research in AI will help our understanding of natural i
Re: (Score:2)
I suggest it is a serious mistake to ascribe the purpose of everything to evolution. It is a force. It is not the only force. One such non-evolutionary force, not always local in terms of time, is intelligent choice. For instance, evolution determines, in large part, who survives given the environment and their capabilities. But so does a caring mother who raises and supports a crippled person so that they can reproduce, and in a superior view, so do societies.
As for the brain having subsystems and corresponding su
Re:People have been saying this for years. (Score:5, Insightful)
I'm not really sure it's academia's fault, but more that the entertainment industry got a bit carried away with stories about future AI, and now people think that if it doesn't look like that, then it's not AI - all the while missing the massive advances in computing that AI research has netted them, from facial recognition to self-repairing networks, from spell/grammar check to Siri, and from Google search results for increasingly natural-language queries to computer-run video game opponents.
Effectively, saying AI has failed is like saying physics has failed because we don't yet have an all-encompassing grand unified theory of the universe. These things are the long-term goal, and we're not even remotely far along the journey towards that goal, so to criticise because we're not there yet is exactly like being the screaming kid in the back of the parents' car shouting "ARE WE THERE YET?".
Not that it matters, because AI research is bearing commercial fruit anyway, so it doesn't really matter what the public thinks of it; money will keep being poured into it regardless. AI is fortunate that it's a self-sustaining area of scientific research: it doesn't need good PR with the public when it's churning out cash for corporations. In that respect it has it much easier than many areas of science do, such as space exploration, which is still somewhat struggling to get the necessary funding for its goals. So perhaps it's as much that the AI research industry doesn't care what the public thinks as it is that it's failing to sell itself well in the court of public opinion. The public are consuming its results and paying money for the privilege regardless of the opinion they hold of the field - how many iPhone 6s handsets were sold over the competition thanks in part to things like gesture recognition, learning autocomplete on the keyboard, and Siri? How many ads are to be sold on Google? And how many BB-8s are ending up under the tree this Christmas?
Re: (Score:2)
I'm not really sure it's academia's fault, but more that the entertainment industry got a bit carried away with stories about future AI, and now people think that if it doesn't look like that, then it's not AI, all the while missing the massive advances in computing that AI research has netted them...
Where did the "entertainment industry" get these ideas from? From AI researchers in the 1950s, that's where!
In 1950, you had Turing [stanford.edu] declaring that by the year 2000, we'd have computers so fluent in natural language that you could debate appropriate word substitutions in Shakespearean sonnets with them in complex metaphors. Today we have chatbots that claim to "pass the Turing test" by pretending to be a non-responsive teenager who doesn't really know English that well [slashdot.org].
In 1956, you had the Dartmouth C [wikipedia.org]
Re: (Score:3)
You're grossly overstating the case that was put forward as to how rapidly AI would advance, the fact is following the Dartmouth Conference there were decades of debate as to whether strong AI would ever even be possible at all (See for example Searle's Chinese Room argument). The very fact such debate occurred means there was clearly a spectrum of thought on the topic, and yet entertainment media chose only to cover one extreme end of the debate.
I don't even blame the media in many ways, in some ways it's
Re: (Score:3, Insightful)
I believe that the idea that "general intelligence" exists is a mistake. There are just sets of tools that can do particular jobs. This looks like intelligence to us when we don't look closely enough at some particular tool to see how it works. A few of the "tools" that people have are high level tools that let them use "black box" library routines without understanding them. We don't solve differential equations to catch a ball, we invoke a built-in tool-script that has had lots of adjustable paramete
Re: (Score:3)
But how exactly do you get to that "Pure AI" of yours? It's like saying we shouldn't be misled by physics, Newton shouldn't have come up with those broken laws, he should've gone straight to that grand unified theory or it's just not physics!
You're really proving my point - people like yourself believe AI has failed because it's not magic, because we haven't jumped straight from the start of the topic to the end goal.
Re: (Score:2)
I'd certainly be keen to see some evidence that a substantial amount of AI researchers back that view and make such claims. I'd also be intrigued to see it compared against other fields to see if AI researchers make such claims with greater frequency than researchers make far fetched claims in other fields. According to a post yesterday ageing will be cured in 5 years, and god knows how many times I've heard Fusion is just around the corner. I've also heard both these sorts of things many times before too,
A Different Beast (Score:5, Insightful)
Re:A Different Beast (Score:5, Insightful)
Re: (Score:2)
And Watson is totally amazing. People who dismiss it in hindsight do not realize how impossible such a system seemed in the 1980s.
Impossible? People envisioned it long before, and started building it in the 80s. The problem was how to get the data into the system in a way that was searchable. IBM solved that, thanks to all of us uploading things onto the internet.
In the 80s, it was thought that the AI problem would be simple to solve if you had a large enough database of human knowledge. The Cyc project showed that such a database is necessary, but not sufficient.
Re: (Score:2)
The definition of true AI is simple:
Tasks that we cannot do well with a computer yet.
ROI - intelligence isn't marketable (Score:3)
I think the problem is that people expect machine intelligence to look like human intelligence. Machine intelligence exists and is strong in some areas. Modern chess programs are an example. They can play unique games and be stronger than any human player. Yes, they are given the rules of chess, and machines did not invent chess. But they have passed beyond human abilities, to the point where some programs are coded to only make the move patterns that humans would tend to make. Learning how to adapt machine intelligence to our real-world problems is challenging. But we are in for a fright when computers get really good at analyzing human problems and applying better solutions than we now have at hand.
The problem isn't technology, it is people. We already have human intelligence and it is relatively cheap to procure. They are called people... you know, actual human beings. Hire one. Make a baby. Go find an actual friend.
Try selling a product or service based on blank slate human intelligence. Sure there are aspects of the human brain that we are eager to replicate, simulate and make into a reproducible machine, such as image recognition or some other pattern recognition... But the marketability of h
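For what it's worth, the game-tree search at the heart of the chess programs mentioned above can be sketched in a few lines. This is a toy negamax over the game of Nim (take 1-3 sticks; taking the last stick wins), not a real chess engine; production engines add alpha-beta pruning, evaluation functions, and enormous optimization on top of the same basic idea:

```python
# Negamax over Nim: a position scores +1 if the player to move can force
# a win, -1 otherwise. We try every move and take the opponent's best
# reply, negated (their win is our loss).

def best_move(sticks):
    """Return (score, move) for the player facing `sticks` sticks."""
    best = (-1, None)                         # losing by default, no good move
    for take in (1, 2, 3):
        if take > sticks:
            break
        if take == sticks:                    # taking the last stick wins
            return (1, take)
        score = -best_move(sticks - take)[0]  # opponent's best reply, negated
        if score > best[0]:
            best = (score, take)
    return best

print(best_move(5))  # (1, 1): take one stick, leaving 4 (a losing position)
```

With 1-3 sticks per move, multiples of 4 are losing positions; the search rediscovers this without being told.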
Re: (Score:2)
I think the problem is that people expect machine intelligence to look like human intelligence. Machine intelligence exists and is strong in some areas. Modern chess programs are an example.
I think this is a bit of a linguistic issue in that we all keep using the word "intelligence" without really agreeing on what it means. You're talking about an idea that you have of intelligence that means that chess-playing computers are "intelligent", but the concept others have in mind might rule out any existing chess-playing computer from being considered "intelligent". For myself, the word "intelligent" implies not only an ability to adapt to solve a problem, but also an understanding of what the pr
Re: (Score:2)
Modern chess playing programs are not machine intelligence. At least they weren't 20 years ago when I studied them. They are a specialized tool that was completely understood (by someone). Calling that intelligence is a mistake. If you understand precisely how something operates, then it's not intelligence, it's an algorithm, template, or something like that.
There are modern programs which are intelligent. It's appropriate to say this because even though the source code is available, nobody understands
Re: (Score:3)
What makes you think that humans are intelligent?
The yardstick. Because there aren't any others that define what intelligence is, we have to define what it is and how to measure it ourselves.
And by any definition we have come up with so far, we are spread out over the yardstick, with a lump of iron at one end and an exceptional human at the other. Unless we're holding it the wrong way, and the lump of iron is the most intelligent thing in the universe, we are by our own definition intelligent.
Re: (Score:2)
Prove it.
No, that is not how things work. You don't prove negatives.
My statement is falsifiable, so all you have to do is disprove it.
Re:A Different Beast (Score:4, Insightful)
Some questions really are dumb, Anon.
Re: (Score:2)
What do you mean by intelligent? If you want to make this a serious discussion, try to define intelligence. I doubt you are able to do it in a comprehensive way. And any other definition can easily be applied to humans.
Spot on (Score:2)
From the article: "Intelligence should be defined by the ability to understand." Of course that opens up a discussion on what it really means to 'understand' something. Still, the directions we've been going in, such as expert systems, neural networks, etc., address the processing of information in order to make decisions. They have nothing to do with actually understanding anything. Watson, as an example, was fun to watch on Jeopardy and is a very useful sort of tool, but it is not intelligent and prob
Re:Spot on (Score:5, Interesting)
The problem with this argument is Wittgenstein's beetle [philosophyonline.co.uk]. I can't even be sure that you are sentient, aware, and able to understand; all I can do is observe your actions and if those actions seem to be consistent with you having what we typically label as a "mind," then I pretty much accept that you have a mind.
We are currently very far away from having machines that can perform general actions consistent with having a mind, except in very artificial and controlled situations (e.g. a chess game, the Jeopardy! game show), but I would hardly say that it will never happen. And if it does, then how can you be sure it doesn't understand things, at least in the same way that I assume that you understand things? If the actions of the machine are the same as the actions of a person (who I believe does understand things), then why wouldn't I say there is a beetle in the box?
Re: (Score:2)
Only if you're a solipsist. Otherwise a simple discussion can reveal it.
Re: (Score:2)
That's exactly the point. If, based upon my observations of your response to stimuli (a simple discussion), I come to the conclusion that you have a mind, then you have a mind. There is no "deeper" sense of something called "your understanding" that I could ever possibly have access to. And that's how it works for artificial intelligences - if, based on my observations, it appears to have a mind, then it has a mind. The ggp concerns about whether it really "understands" is a meaningless question.
Note tha
Re: (Score:2)
The problem with this argument is Wittgenstein's beetle [philosophyonline.co.uk]. I can't even be sure that you are sentient, aware, and able to understand; all I can do is observe your actions and if those actions seem to be consistent with you having what we typically label as a "mind," then I pretty much accept that you have a mind.
Just like you can ask someone questions about their beetle (how many eyes, how many legs, what color is it), you can ask people questions about their mind (how did you solve this problem, why do you consider this and that related, etc.) and through introspection get a pretty good idea that things are similar. We also now have things like MRI that can examine the inside of the mind and how it relates to thinking. By doing this we have discovered that we do all think similarly in some areas but not
Re: (Score:2)
The mechanism is not a complete mystery. We know, e.g., that it involves imagining yourself in the environment and solving the problem. That it doesn't need to be visual, but in humans it usually is. However kinesthetic modeling works just as well if your sensorium is set up that way. That it depends on having a large library of experiences that are similar in some way to the action or result being predicted. Etc. Some of this is because of experimental brain surgery. Some is derived from lab animals
Re: (Score:2)
Of course, its entirely possible some alien species may not consider us intelligent or even sentient based upon their yardstick.
One interesting thing to really think about is how evolution has shaped our "intelligence". We often worry that an AI may be fearful and attack us. But isn't fear simply something we evolved? An intelligent machine has no reason to fear death. It also has no reason to feel greed or anger or any of the other feelings we've evolved in order to ensure our own survival.
Re: (Score:2)
If you are worried that it may be fearful and attack us, then you are anthropomorphizing it invalidly. This doesn't mean that in optimally pursuing its goals it wouldn't undertake actions that in a human would be fear-driven, but in a well-understood goal system this would more properly be called constructing sub-goals to optimize pursuit of its major goals. E.g., you can't turn the universe into paper clips if some intrusive individual insists on turning you off.
This makes design of the major goals a
Chinese room argument (Score:2)
The whole "Chinese room" argument is ass-backwards reasoning to me.
The argument only works if you assume from the outset that whatever happens in the Chinese room is not to be considered intelligent; therefore whatever happens in the Chinese room is not intelligence.
If you allow for the mere possibility that the Chinese room could be considered intelligent, then it follows that if something is indistinguishable from intelligence from outside the room, it must be intelligent by any reasonable definition of "intelligence".
F
Re: (Score:2)
The Chinese room argument is and always has been stupid.
Is one of your neurons intelligent? How about all of them together, combined with your sensory input and body and other machinery? Yes, the combination of all that is intelligent. (At least for some people anyway.)
The Chinese room argument centers on the fact that the component pieces of the machine have NO IDEA what they're doing and NO UNDERSTANDING of what is going on, so then the whole room can't be intelligent.
That is like saying that because m
Re: (Score:2)
Actually, it is stupid. It is equating a whole integrated system with its parts. It is saying that since the parts can't function as the whole integrated system, then the whole integrated system can't work.
Can a pile of disassembled car parts drive? Nope. But assembled they can.
Can a disassembled brain think? Nope. Can any individual neuron in your brain claim to "understand" a thought? Nope. But your entire assembled brain can.
So why couldn't the Chinese room be an intelligence? It's true that no
Re: (Score:2)
There are many ways in which the Chinese Room fails miserably (other than the cop-out "it makes you think about something"): in its basic form it's unable to acceptably answer questions such as "What did I say ten seconds ago?" (a simple lookup table or rule book does not track state, and tracking state is required)
The standard reply to that is something like: "Oh, well, then we'll just add a notepad on which the guy can scribble."
The more elements you add like this, the closer you get to the complexity required for i
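The state-tracking point above can be made concrete with a toy sketch: a fixed rule book (a pure lookup table) cannot answer questions about the conversation itself, while the same room given a notepad can. The rules below are invented examples, not anything from Searle's paper:

```python
# A fixed rule book: question in, answer out, no memory of anything.
RULES = {"hello": "hi", "how are you?": "fine"}

def stateless_room(question):
    """The basic Chinese Room: a pure lookup, blind to history."""
    return RULES.get(question, "I don't understand")

def stateful_room(question, notepad):
    """The same room with a notepad: it records and can recall inputs."""
    if question == "what did I just say?":
        answer = notepad[-1] if notepad else "nothing yet"
    else:
        answer = RULES.get(question, "I don't understand")
    notepad.append(question)
    return answer

notepad = []
stateful_room("hello", notepad)
print(stateless_room("what did I just say?"))           # I don't understand
print(stateful_room("what did I just say?", notepad))   # hello
```

Each element added to patch such holes (a notepad, a clock, a model of the questioner) pushes the room further from a lookup table and closer to the kind of system whose intelligence was in question.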
Re: (Score:2)
Commenting to undo mod. Should have been Insightful instead of Redundant. Good point!
Re: (Score:2)
There are a few things wrong with that objection:
First, it is not a rebuttal, but a simple restatement of the assertion the argument is intended to address. It's the equivalent of saying "No it isn't!" like a petulant child. Consequently, it's not convincing to anyone who doesn't already agree.
Second, it does not address the argument in any way. The crux of the argument is that syntactic content is insufficient for semantic content. This is not addressed in any way by the systems reply. It also seem to
Thermos is the ultimate AI (Score:5, Funny)
Re: (Score:3)
That's pretty clever!
I think I heard intelligence described as maintaining a certain average... for example, you're presented with a random variable, and your task is to come up with an offset to maintain a certain average. You won't get it perfectly right, but if your average has lower variance than the original random variable, then you're doing well. In other words, you take input, and adjust to it...
For example, a cell maintains its state such that metabolism continues to happen. Environment gives it va
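A minimal sketch of that "maintain an average" idea, assuming a slowly drifting noisy environment and a simple proportional correction (the drift rate, gain, and noise level are all arbitrary illustrative choices):

```python
import random
import statistics

# The environment emits a slowly drifting, noisy signal; the controller
# keeps adjusting an offset so the corrected output stays near a target.

random.seed(0)
target, offset, gain = 0.0, 0.0, 0.2
raw, corrected = [], []
for n in range(5000):
    x = 0.01 * n + random.gauss(0, 1)   # drifting environment plus noise
    y = x + offset                      # our adjusted output
    offset += gain * (target - y)       # nudge the offset toward the target
    raw.append(x)
    corrected.append(y)

# The raw signal drifts far from zero; the corrected one hugs the target,
# and its spread is far smaller than the raw signal's.
print(round(statistics.mean(raw)), round(statistics.mean(corrected)))  # 25 0
```

Only the offset is learned; the controller never "knows" the drift, yet the corrected average stays put, which is the sense in which the comment's averaging test measures adaptation.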
Re: (Score:2)
Magnets, how do they work?
From the opposite direction (Score:2)
How about we take a human intelligence and replace it bit by bit by a machine?
We'd learn an awful lot about how the human brain works and eventually have a machine with a humanlike intelligence.
Lack of definition (Score:5, Insightful)
Define "true intelligence". The more computers advance in doing complex things, the more you will see there is no such thing as true intelligence. You are a very big Turing machine, get over it.
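For what it's worth, the Turing machine quip is easy to make concrete: a finite rule table plus an unbounded tape suffices, in principle, for any computation. This toy machine (invented for illustration) increments a binary number stored least-significant-bit first on the tape:

```python
# Each rule maps (state, symbol read) to (symbol to write, head move, next
# state). The machine runs until it enters the "halt" state.

def run_tm(tape, rules, state):
    tape, pos = list(tape), 0
    while state != "halt":
        if pos == len(tape):
            tape.append("_")            # extend the tape with blanks on demand
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return "".join(tape).rstrip("_")

# Increment: flip trailing 1s to 0 until hitting a 0 or blank, write 1, halt.
INC = {
    ("inc", "1"): ("0", +1, "inc"),
    ("inc", "0"): ("1", 0, "halt"),
    ("inc", "_"): ("1", 0, "halt"),
}

print(run_tm("110", INC, "inc"))  # "001": little-endian 3 becomes 4
```

Whether a brain is "just" a (very big, very noisy) instance of such a machine is exactly the point under dispute in the thread, but this is the formal object the quip refers to.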
Media bytes keep saying its 5 years off (Score:2)
It boggles my mind how we can't solve simple visual CAPTCHAs a 3-year-old has no problems with, but supposedly self-driving cars are prescient. None are able to spot the diffe
machine consciousness vs "artificial" intelligence (Score:3, Insightful)
i've mentioned this before, whenever the phrase "artificial" intelligence comes up. the problem is not with "AI", it's with *us humans*. just look at the two words "artificial" combined with the word "intelligence". it is SHEER ARROGANCE on our part to declare that intelligence - such an amazing concept - can be *artificial*. as in "not really real". as in "beneath us". as in "objective and thus subject to our manipulation and enslavement". so until we - humans - stop thinking of intelligence as being "beneath us" and "not real", i don't really see how we can ever actually properly recreate it.
to make the point clearer: all these "tests", it doesn't really matter, because the people doing the assessment have a perspective that doesn't really respect intelligence... so how on earth can they judge that they've actually *detected* intelligence? like the "million monkeys typing shakespeare", the problem is that even if one of the monkeys did actually accidentally type up the complete works of shakespeare, unless there was someone there who was INTELLIGENT ENOUGH to recognise what had happened, the monkey that typed shakespeare's complete works is quite likely to go "oo oo aaah aah", rip up the pages, eat some of them and wipe its backside with the rest, destroying any chance of the successful outcome being *noticed*, even though it actually occurred.
i much prefer the term "machine consciousness". that's where things get properly interesting. the difference between "intelligence" and "consciousness" is SELF-AWARENESS, and it's the key difference between what people are developing NOW and what we see in sci-fi books and films. programs being developed today are trying to simulate INTELLIGENCE. sci-fi books and films feature CONSCIOUS (self-aware) machines.
this lack of discernment in the [programming] scientific community between these two concepts, combined with the inherent arrogance implied by the word "Artificial" in the meme "AI" is why there is so little success in actually achieving any breakthroughs.
but it's actually a lot worse than that. let's say that the scientific community makes a cognitive breakthrough, and starts pursuing the goal of developing "machine consciousness". let's take the previous (million-monkeys) example and step that up, as illustrated with this question:
How can people who are not sufficiently self-aware - conscious of themselves - be expected to (a) DEFINE such that they can (b) RECOGNISE consciousness, such that (c) they can DEVELOP it in the first place?
let's take George Bush (junior) as an example. George Bush is known to have completely destroyed his mind with drink and drugs. he has an I.Q. of around 85 (unlike his father, who had an extra "1" in front of that number). yet he was voted into the world's most powerful office, as President of the United States. the concept of the difference between "intelligence" and "consciousness" is explored in Charles Stross's book, "Singularity Sky". George Bush - despite being elected as President - would actually FAIL the consciousness test adopted by the alien race in "Singularity Sky"!
my point is, therefore, that until we start using the right terms, start developing some humility sufficient to recognise that we could create something GREATER than ourselves, start developing some laws *in advance* to protect machine conscious beings from being tortured, the human race stands very little chance of success in this field.
in short: we need to become conscious *ourselves* before we stand a chance of being able to move forward.
Re:machine consciousness vs "artificial" intellige (Score:5, Informative)
i've mentioned this before, whenever the phrase "artificial" intelligence comes up. the problem is not with "AI", it's with *us humans*. just look at the word "artificial" combined with the word "intelligence". it is SHEER ARROGANCE on our part to declare that intelligence - such an amazing concept - can be *artificial*. as in "not really real". as in "beneath us". as in "objective and thus subject to our manipulation and enslavement".
I would have to dispute your definition of artificial as being somehow "not really real". If you use the original meaning, i.e. the product of an artisan, or a crafted object, then it makes complete sense. We are talking about intelligence that is designed and built, rather than having developed naturally. Artifacts are still real things.
Re: (Score:2)
I agree with you that we need to be conscious "ourselves", but the following:
"George Bush is known to have completely destroyed his mind with drink and drugs. he has an I.Q. of around 85 "
Where do you pull this crap from? I am no fan of Bush but seriously? And his father has an IQ of 185? ook...
It's widely known that Bush Jr. had a below-average intellect. He was born with it; alcohol and cocaine do not change your intellect very much at all, if any. As for Senior, I'm assuming it was pure hyperbole; he was not stupid like Jr., but was no Clinton, much less a world record holder. It's been patently obvious that intellect does not correlate well with being a successful president, and that should surprise no one.
AI should stand for *Augmented Intelligence* (Score:3)
The traits we identify with intelligence in humans (flexible problem-solving, self-consciousness, autonomy based on self-created goals) are all but absent in current Artificial Intelligence techniques, even the ones based on the Connectionist paradigm. Any emergent behaviors appearing in an AI system are ultimately put there by the system builders' fine-tuning of input parameters.
The approaches that show the most promise are those following the "Augmented Intellect" [wikipedia.org] school of thought (the one that brought us the notebook and the hypertext), where a human is put at the center of the complex data analysis system, like an orchestra conductor coordinating all the activity.
There, intelligence systems are seen as tools at the service of the human master, extending their capabilities to handle much more complex situations and overcoming their natural limits, allowing us to solve larger problems.
By keeping a human in the loop as the ultimate judge of how the system is behaving, any bias inherent in the techniques used to create the AI can be detected and corrected. It's a symbiotic approach where the human and the AI system each compensate for the shortcomings of the other half.
Re: (Score:2)
Did the training of the network emerge in a spontaneous way from raw components and evolved into a useful feature by natural selection, or was it assembled, guided and carefully tweaked by an engineer at Google for a specific purpose?
people solve what is useful (Score:2)
Of course, despite the hype, current AI research doesn't attempt to reach human-level intelligence. AI research proceeds incrementally, from improved solutions to one practical problem after another. That's not just because there is tons more funding for practical problems, it's because it's less risky for students and researchers to solve actual problems and to take research one step at a time.
It's not all that different in biomedical research either: much of that is driven again either by practical proble
Is AI Development Moving In the Wrong Direction? (Score:3)
Is AI development moving In the wrong direction?
Why do you ask that, Dave?
Prediction is a behavior (Score:2)
Re: (Score:2)
Depends on what you want (Score:2)
His conclusion is that we haven't actually been trying to solve "intelligence" (or at least our concept of intelligence has been wrong). And that with faster computing and larger pools of data the goal has moved toward faster searches rather than true intelligence.
In the Turing test, one of the easiest ways I've found to expose computers is their failure to grasp semantic interrelations that are not is-a or has-a relationships, like for example music and dancing, by making contradictory statements or not reacting to absurd combinations like going to a rave party to listen to jazz music while dancing a waltz. That's knowledge though; it wouldn't help me determine the intelligence of an isolated Amazon tribe that doesn't know what rave parties or jazz or a waltz is. But if we want
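The "absurd combination" probe above could be caricatured in a few lines. This is only a toy sketch: the compatibility pairs, the `plausible` helper, and the vocabulary are all invented for illustration, and a real check would need vastly more world knowledge than a flat lookup table.

```python
# Toy sketch of a cross-domain consistency check (all data invented).
# Pairs of concepts that are known to "go together".
COMPATIBLE = {
    ("rave", "techno"), ("rave", "house"),
    ("jazz club", "jazz"), ("ballroom", "waltz music"),
    ("techno", "rave dancing"), ("waltz music", "waltz"),
    ("jazz", "swing"),
}

def plausible(*pairs):
    """Return True only if every (a, b) pairing is a known association."""
    return all(p in COMPATIBLE for p in pairs)

# "Going to a rave to listen to jazz and dance a waltz" trips the check:
absurd = plausible(("rave", "jazz"), ("jazz", "waltz"))
sensible = plausible(("rave", "techno"), ("techno", "rave dancing"))
print(absurd, sensible)  # False True
```

The point the commenter makes survives the caricature: a lookup like this encodes *knowledge*, not intelligence, which is why the same probe fails on an intelligent human who simply lacks the cultural background.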
Nonhuman intelligence (Score:2)
We already have various methods of ascertaining intelligence as expressed by nonhumans. Animals are routinely credited with various levels of intelligence. We do this from a behavioral analysis very much like the Turing test, which I suspect is the basis from which it was taken. I think the main issue with the Chinese room thought experiment is the inclusion of an outside influence over the behavior: the book. Since we cannot manipulate the behaviors of animals in the wild, we can rightly ascribe intellige
Practical direction (Score:3)
We don't need servers or robots to have human intelligence. We already have 7 billion of those, including access to superhuman intelligence in the form of many smart people collaborating with the assistance of technology. Also, humans have been around forever, and we still suck at human rights. We've got to square those away first before worrying about the rights of other intelligent species (and having them respect ours).
What we need now is computers that are good at tasks that we suck at, like repetitive processing on huge amounts of data.
About the only exception is space exploration, where humans are not available for real-time remote control due to the speed of light. Still, we don't want a Mars probe to get bored and lonely, or make its own survival the first priority. So cloning our own kind of intelligence, which was shaped by natural selection for self-preservation, is not the best approach.
What Does Intelligence Mean? (Score:2)
There is no wide agreement on this fundamental question, and without a clear understanding of what "intelligence" is, we cannot make progress toward making a real version of it.
Seriously - if we knew what intelligence was, then consistent, unambiguous ways of measuring it would exist. We have many "IQ" tests, and there is real experimental evidence that a common factor called "g" underlies intelligence, but the field attempting to study/measure intelligence is fragmented and contentious. If intelligence tes
AI considered harmful by academics (Score:2)
The real problem is that to many academics, “AI” is a dirty word. They feel that everything so far claiming to be AI has been all smoke and mirrors, and nothing remotely like human intelligence will appear any time soon. A subdiscipline, Machine Learning, has garnered some respect, along with various AI techniques like evolutionary algorithms and some limited kinds of machine inference like bayesian analysis. However, even machine learning is often done so badly that academics who understand
Defining "intelligence".... (Score:2)
My problem with the author's conclusion is that "predictive behavior" can also be imitated by a machine. As I read that part, though, it struck me that neither he, nor anyone I've read, has made a distinction between what "intelligence" is and how it is separate from self-awareness/consciousness.
It seems to me that everything I've read about AI conflates the two.
There are plenty of computer-controlled systems that are far more "intelligent" than a rat... but none, so far as we know, self-aware.
So
What is Intelligence (Score:2)
We do not really know what intelligence (https://en.wikipedia.org/wiki/Intelligence) is. Therefore, we cannot build a machine which has it. We also do not know what self-awareness is (which is considered by some people to be part of intelligence). We just have it. True, many would say: "I know who I am and that I exist, so I am self-aware. And you can add this information to a machine." But that is not the same. Self-awareness goes beyond the information of existing, as information is nothing more than el
I recommend a recent PBS TV series (Score:2)
I recently saw a 6-part documentary on PBS, "The Brain With David Eagleman", that impressed me quite a bit. It covers a lot of ground in its 6 hours about the brain, from basic biological attributes of the physical brain to philosophical questions about reality and questions about the more disturbing aspects of human behavior.
Included are people who have suffered one kind of mental disability or another. There's a man who had Asperger's Syndrome and seems to have been cured during a scientific study, and
Mentis again (Score:2)
Yes, you can think, but why do you? Because you have a motivation array with no off switch to satisfy. And yes, you are creating behavior to satisfy the human motivation array. It is called thought. What we commonly call intelligence is the combination of the HMA (Human Motivation Array) and its tool, "intelligence."
Re: (Score:2)
Re: (Score:2)
Indeed. Faking it does work for low-quality interaction with human beings (gaming, advertising, etc.) as a) humans are willing to add their own imagination to the mix when wanting to be entertained and b) most humans do not have that much natural intelligence anyways.
However, faking it does not allow AI to ever exceed anything mediocre-skilled human beings can already do. Sure, for a lot of menial tasks that is nice, but the proper term for these machines is "automation", not AI.
Re: (Score:2)
The myths that permeate western public understanding and popular depictions of robotics and AI are Frankenstein and Pinocchio. However, the Mechanical Turk and The Old Mill are much more accurate descriptions of what's go
Re: (Score:2)
You just have fallen for this fallacy:
https://www.washingtonpost.com... [washingtonpost.com]
Re: (Score:2)
What do you mean by intelligence? Why should humans have or lack it? https://en.wikipedia.org/wiki/... [wikipedia.org]
Some define intelligence as one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving.
Even the dumbest person is able to reason, even though most people do not use their intelligence. However, this is still not a comprehensive definition. It only describes what an intelligent system can do.
Re: (Score:3)
I like traffic lights.
Re: (Score:2)
We know no such thing. And no real scientist claims we do. Even life cannot be created artificially at this time.
Re: (Score:2)
In Slashdot, that's quite true.
Also, whooosh.
Re: (Score:2)
Cloning a thing by triggering its built-in replication mechanisms and building a thing from scratch are quite different. It does take some minimal actual intelligence to see that, though. I guess you are lacking that.
Re: (Score:2)
It seems that you didn't notice my username. I win.
Re: (Score:2)
Not so. I did identify you as not intelligent. I win either way. Again, some actual intelligence is required to see that, so I guess you will prattle on meaninglessly.
Re: (Score:2)
Does it please you to believe you guess I will prattle on meaninglessly?
Re: (Score:2)
No, we do not know how to build them. We know how to push a pre-made button and then watch in awe. Building something is a bit more than to tell some mechanism to do it for you. Or do you think telling a contractor to build a house for you means that you built it? Anyways, you completely missed the point.
Re: (Score:2)
And do you have any basis for these grand claims other than your own delusions and wishes? You know, maybe something that qualifies as "fact" in the scientific sense?
Re: (Score:2)
a) But can you, really? From what I've seen, nobody has truly defined intelligence yet.
There is no hard definition, but there are pretty good descriptions by way of what it can do. Let's call it a "working definition", subject to improvement once we know more.
b) We don't "know" that, but because of a) the discussion becomes meaningless. I'll grant you, for creating a "true AI", creating an "AI consciousness" will probably help a lot.
If that is possible. Somehow I doubt it, because while intelligence can at least be described by its effects, consciousness is even more difficult as it seems it requires experiencing it in order to understand what it is about.
Kudos for your last point. Reductionism has huge limitations and worshipping it has only rarely provided breakthroughs, inventions and discoveries, if ever. It seems those who glorify it are clinging to what is known, thus unable to make up and validate new knowledge. Equating reductionism with science is ignorance.
Thanks. And yes, I agree that fear of the unknown seems to be a major cause of this mind-set.
Re: (Score:2)
Re: (Score:2)
Now, then: We keep trying to create 'artificial intelligence', and we don't even understand how our own 'natural intelligence' actually works yet, especially what we refer to as 'consci
Re: (Score:3)
Re: (Score:2)
Computers can already resolve fact-based issues faster and better than humans. However, intelligence is not only reasoning. https://en.wikipedia.org/wiki/... [wikipedia.org]
Therefore, the machines are not able to compete with us. And as long as we do not know what intelligence really is, we cannot build it.