Can computers win the Turing Test? Imagine a day when a machine will say, “Move over, Turing! You can no longer consider machines to be less smart than humans! After all, we can think too. We do all the thinking and processing and you take all the credit, just because you are our creator!”. That would be an awkward and exciting situation. To be honest, there is a valid argument in this imaginary conversation. As naive as it may sound for now, let me assure you that such a scenario is not far away. Applications are becoming more and more logic-oriented and increasingly intelligent.
Surely “pass” is meant by that? I know I’m nit-picking, but at least check the titles before publication
I thought the idea that computers would one day have human-like intelligence (… and would probably dominate humans!) only came from non-geeks (because they don’t know enough about computer architecture). At least with the current computer architecture, that scenario is not possible at all, I believe. A.I. is still A.I. There is no A.I. without humans.
I think you’re being a little short-sighted with that comment.
At the end of the day, humans are just electrical impulses too. Just a very highly evolved mess of electrical pathways.
Plus, emotions aside, all our senses (heat, touch, sight, etc.) have been recreated by technology.
In terms of the software: people often forget that humans take years before they’re “intelligent” and twice as long before they reach adult intelligence. So we’re literally talking YEARS of 100% uptime and data input! No software in the world has had that level of development – so it’s hardly surprising that software seems incapable of reaching human awareness.
As for the logic behind the software: humans aren’t all too different from computers in the sense that we too make millions of decisions a second (we’re just not usually aware of them).
e.g. you ask someone, “do you fancy a cup of tea or coffee?” and a person’s answer would be based on:
* are they thirsty?
* do they prefer tea or coffee?
* do they like the way tea/coffee is prepared by the requestor?
* have they drunk lots of tea or coffee recently and so fancy a change?
…and so on.
Calculations like that are possible to program into a computer. The hard part is getting a computer to program in those calculations itself based on its own “life” experiences (remember my earlier comment regarding humans having an uptime of years before they’re in any way “intelligent”? Now imagine having to record that into a computer on top of programming the core routines!)
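To make that concrete, here’s a rough Python sketch of the tea/coffee decision as a hand-coded calculation. The factors and weights are purely illustrative assumptions; the hard part described above is getting a machine to arrive at them from its own experience.

```python
# Hypothetical sketch: answering "tea or coffee?" from a few hand-picked
# factors. The factors and weights are invented for illustration; the hard
# part the comment describes is getting a machine to learn them itself.

def choose_drink(thirsty, prefers_tea, likes_hosts_brew, had_lots_recently):
    """Return 'tea', 'coffee', or 'no thanks' from simple boolean factors."""
    if not thirsty:
        return "no thanks"

    tea_score, coffee_score = 0.0, 0.0

    # Base preference carries the most weight.
    if prefers_tea:
        tea_score += 2.0
    else:
        coffee_score += 2.0

    # Trusting how the requestor prepares the drink nudges both options up.
    if likes_hosts_brew:
        tea_score += 0.5
        coffee_score += 0.5

    # Having drunk lots of it recently flips the usual preference.
    if had_lots_recently:
        tea_score, coffee_score = coffee_score, tea_score

    return "tea" if tea_score >= coffee_score else "coffee"


print(choose_drink(thirsty=True, prefers_tea=True,
                   likes_hosts_brew=True, had_lots_recently=False))  # tea
```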
So in short: it may well be possible in the future. But currently AI software is in its infancy and needs to evolve.
Personally, I think if people want AI to be a workable goal, then the only way to overcome the biggest obstacle (i.e. the impossible uptime before the software becomes “intelligent”) would be to standardise an AI database, so each developer wouldn’t be required to individually record years of a computer’s life to build a personality/reference model – they could just share each other’s and divide the workload/uptime.
That is an important note. Even the smartest looking AI program only works because a group of smart humans have programmed it, and usually such programs need constant human operation too.
The Deep Blue program plays good chess because chess is a very restricted field with restricted rules, and that makes it easy for humans to program good playing algorithms learned from real human chess masters; the rest is just brute-force calculation. But Deep Blue or any other AI machine only does what it is programmed and told to do. It can only deal with the information that it is programmed to deal with. It cannot adapt to completely new cases and surroundings not programmed into it beforehand.
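For what it’s worth, the brute-force part is easy to illustrate. Here is a toy Python sketch of exhaustive game-tree search on the much simpler game of single-pile Nim (nothing like Deep Blue’s actual engine, just the same “try every line to the end” idea):

```python
# Toy sketch of brute-force game-tree search, using single-pile Nim
# (take 1-3 objects per turn; whoever takes the last object wins).
# This only illustrates exhaustive search, not real chess.

def moves(pile):
    """Legal moves: take 1, 2 or 3 objects from the pile."""
    return [m for m in (1, 2, 3) if m <= pile]

def wins(pile):
    """True if the player to move from this position can force a win."""
    if pile == 0:
        return False  # no move left: the previous player took the last object
    return any(not wins(pile - m) for m in moves(pile))

def best_move(pile):
    """Search every line of play and pick a move that leaves the opponent lost."""
    for m in moves(pile):
        if not wins(pile - m):
            return m
    return moves(pile)[0]  # no winning move exists: take anything

print(best_move(10))  # 2 -- leaves 8, a losing position for the opponent
```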
The fact that a discussion simulator like Eliza may fool someone into believing that he is talking to a real person has nothing to do with real intelligence or understanding human emotions. It is basically just a toy simulator, nothing more, that tries to guess a good answer from its database of ready-made answers to a sentence written by a human. Even with the best such simulators it is usually easy to find the limits of the program and make it give silly answers, because the humanly programmed database and algorithms can cope only with a restricted amount of pre-programmed cases.
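To show how shallow that trick is, here is a minimal Eliza-style responder in Python. The patterns and canned replies are arbitrary examples, not taken from the original program:

```python
# Crude Eliza-style keyword matching: scan the input for a pattern and
# fire back a canned, slightly reworded reply. No understanding involved.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello. What is on your mind?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(respond("I feel nobody understands me"))   # Why do you feel nobody understands me?
print(respond("Chess engines use brute force"))  # Please go on.
```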
Real human thinking is very different, flexible and adaptive, capable of learning new things all the time and of adapting to completely new (“non-programmed”) situations.
Human thinking is related to consciousness, and you know what, even the top researchers in the field still haven’t completely figured out what human consciousness actually is. Human intelligence and consciousness are related to the whole human experience, containing not only biology and emotions but also human history, social relationships, culture, languages, values, etc. It is practically impossible to program a machine that could deal with all that information in a flexible and adaptive human manner.
However, more realistic artificial intelligence like expert systems that can calculate good results from a huge amount of data do already work well and help people a lot, for example, when predicting weather. But again, those systems are not intelligent in the sense that they would be thinking anything in the human (or even animal) sense. They just repeat those algorithms and programs that humans have programmed them to follow.
Somehow I get the feeling that blind techno-faith in the soon-to-come superb humanlike AI is almost magical in nature. Techno-utopians are excited by new smart-looking machines, but may lose their sense of realism in their techno-faith.
A calculator is basically still only a calculator, even if it is programmed to deal with many other kinds of information too than with simple numbers only.
Logical thinking has its very clear limits, proved by Gödel, shown by others throughout our history, and demonstrated in this article. I was a cyberpunker a long time ago, so I won’t completely shatter the illusion.
To quote Rube from Dead Like Me:
Spot on. In short: Will. Never. Happen.
Uh? Because suicides for emotional reasons are a necessary premise for intelligence??
Think about it: if we met aliens and learned that those aliens never commit suicide for emotional reasons, would that be enough to conclude that these aliens cannot be intelligent?
Obviously no!
In short: your quote is stupid.
I think you fell into his trap. Each person defines his own interpretation of intelligence, and therefore defines when artificial intelligence is achieved only through his/her own eyes. In your case, it just extends to a being’s ability to scoff at emotionally driven suicide and to call the idea “stupid”.
I think unfortunately the chicken will always inevitably come before the egg. We will always make machines do only what we want them to.
Your points are obscure. One thing is certain: your last sentence means that you dogmatically believe AI will never exist. If you meant something else, feel free to rephrase.
Think of AI in simpler computer terms. Humans build the AI computer, the language, the code, and the compiler. Whatever input data humans allow the AI computer to process, the output is only new input for humans to process. The AI computer is always the program, never the programmer. Even if its output is slightly more intelligent than humanly possible, that increased intelligence will then be absorbed by the human who will then have to build a more intelligent artificial intelligence. If its output is too intelligent for a human to absorb, (or not intelligent enough) it will be considered bad output.
Then we seem to be on the same page (also figuratively).
Now please read my recent comment below (starting with “I’m not talking about artificially-created real intelligence, like the Bicentennial Man.”) and feel free to reply to it if you either agree or disagree to my other points.
There are not many animals that commit suicide for emotional reasons (there are some that do it for the greater good), even though there are animals that are near or equal to human intelligence: dolphins, elephants, rooks. (They might not have schools, nifty little hands or an advanced language, so we’re still more knowledgeable.)
I don’t think committing suicide is a sign of intelligence; a lack of it actually seems more logical.
While I certainly agree that AI has nothing to do with the examples used in this article (erroneously thought to show basic forms of AI), except one, and while I agree that now-unimaginable breakthroughs are needed before we can even dream of caricaturing sentient-level intelligence — I also fail to see how Incompleteness has anything to do with intelligence. Could you expand on that?
(But please don’t simply define it. Explain how come it affects *intelligence* and why it is not trivial to abstract/trick away.)
What is intelligence? Everybody knows and they all disagree, so let me summarize: the word intelligence actually means me-likeness. If you look and act like me, then you’re an intelligent person. Nothing wrong with that, but it is a cultural phenomenon, not a technical one, and you cannot emulate it with switching (logic).
I do not claim that a machine that completely emulates us cannot be built; we are the living proof. But to make one ourselves, never mind how contradictory it sounds, we need the knowledge of how we work – not only a model of brains and their interactions but a model of the human as a cultural being. Society at the moment is heading in the wrong direction, but that theory will never be built anyway. Still, if you imagine one, you will have no problem seeing how it satisfies the incompleteness theorem.
One problem though: a machine that beats you in a gunfight has already been built… Those damn cultural and existential details
The turing test only serves to see who/what is better at lying and deceiving, whether you’re a program or a human.
Better question is “Do we really want this?” With computers being used everywhere, do we really want to give them the ability to think? What if the first thing they think is that humanity is a virus?
I agree.
While simple logic A.I. is surely something useful to us in the form of simple robots, machines, software and so on, I would question the need for complex, rational and emotional A.I.
Why would we want to have computers that can act bitchy or sad or get angry? Purely an academic and potentially dangerous exercise really.
> While simple logic A.I. is surely something useful
> to us in the form of simple robots, machines,
> software and so on, I would question the need for
> complex, rational and emotional A.I.
>
> Why would we want to have computers that can act
> bitchy or sad or get angry? Purely an academic and
> potentially dangerous exercise really.
The answers are obvious, it’s just that they are still hypocritically uncomfortable for many people:
1. To run psychological tests and experiments that would be impossible on real people, due to various arbitrary legal or religious restrictions
2. To eliminate the need for human interaction; dealing with fully controllable artificial persons is much safer
3. To use them as sex toys; emotionally-capable sex toys offer a better sexual experience
4. Unlimited enactment of BDSM fantasies becomes fully accessible to the average consumer (no need for social, flirting, and exceptional self-assertion skills)
5. They can make “motherhood” accessible and cheaper for everybody (think of child custody discrimination that most men “enjoy”): children definitely need *emotional* machines (mimicking mothers), not some cold “simple-logic AIs” as you’re suggesting
6. They can be programmed to reflect our egos in ways that could tremendously boost our self-esteem
7. They would be a very strong, probably ultimate, proof against traditional religion (e.g. sanctity of life, human “soul” or “dignity”, and many similar religious dogmas and taboos that are still refusing to die)
“1. To run psychological tests and experiments that would be impossible on real people, due to various arbitrary legal or religious restrictions”
“4. Unlimited enactment of BDSM fantasies becomes fully accessible to the average consumer (no need for social, flirting, and exceptional self-assertion skills)”
If this were possible, it should be forbidden.
If there is (nearly) no difference between computers and humans, they should be granted something like human rights.
“7. They would be a very strong, probably ultimate, proof against traditional religion (e.g. sanctity of life, human “soul” or “dignity”, and many similar religious dogmas and taboos that are still refusing to die)”
I don’t think it would be the ultimate proof; it depends on the religion, for example when it says God created humans as a model based on his own. In other situations it would be a really strong argument. I don’t see why destroying the beliefs of others is important/makes sense, … but that’s off-topic.
I think it would have a similar effect to discovering alien life forms: “Wooo, we are not alone.”
Depending on how many and how powerful such computer-humans were, it could lead to moral problems: slavery, or sex slavery like in your example, or evil psychological experiments. I mean, if they are not different from humans, it would be _very_ unethical.
I’m not talking about artificially-created real intelligence, like the Bicentennial Man. (At the moment, that’s pure sci-fi anyway.)
I am talking about *artificial* intelligence (“AI”). I.e. programs that plausibly simulate human thinking and human emotion and react “emotionally” to human input simply because they logically (and impersonally) “conclude” that this is what they should do according to the way they are programmed.
Programs can never be “abused”. Operating systems will never have “rights”. Just like concepts and approaches don’t “think”, and just like ideas don’t “exist”. “Abusing” artificial emotions shown by a device (as opposed to abusing real emotions) makes as much sense as killing the blackness of a black cat (as opposed to killing the cat).
***
Conversely, a *real* intelligence that is created artificially can’t reasonably be considered “artificial”. Even calling “it” artificial would be gross discrimination (what the hell do I care how I came to exist or what I’m made of?) and an insult to *him* (not “it”).
*He* can’t even be considered “inorganic intelligence” (if *he*’s inorganic). To be politically correct, you can only call *him* “artificially-hosted (real/natural) intelligence”. Just as you are not “artificial intelligence” if you upload yourself into a computer, but a “computer-hosted (real/natural) intelligence”.
Naturalness of intelligence is not defined by the platform that hosts/fosters it, but by the fact that *he* (not “it”, not “she”) “experiences” and “learns” (etc.), which, if allowed to happen *naturally* (i.e. at own “will” and governed by pain/reward), must necessarily qualify *him* as *natural* intelligence.
Or, epistemologically, when it comes to intelligence itself, naturalness is an intrinsic (and a priori) concept, not some infantile extrinsic concept that you, the casual carbon-based slashdotter, can “award” or “project” on some random entity as you would biscuits or wedding rings. Just because you’re made of goo or just because you fathered it.
You can put it like this: intelligence is either “self-proclaimed” or nonexistent.
To conclude: if you are so sensitive as to contemplate “AI” rights, first stop calling natural intelligence “artificial”. And you’ll get a bonus: you’ll feel morally free to “abuse” genuinely artificial AIs. Sex is good for your prostate. And yes, that includes sex with underage AIs that perfectly imitate your own children. Well, as long as this protects both your *real* children and your sexual freedom.
OK, fine. We agree in premise. To your first point, this is reality. Your tool just seems to be too phallic for me. To your second point, though, you elaborate rather than answer the original question. I think the answer is, “Don’t worry”. You can’t have *him* anyway. Go outdoors.
hmm, when I see an improved AI I think “wow, this can help solve difficult problems!”
I don’t think “wow! this can be used for unlimited sex and attacking religions!” (because they get in the way of unlimited sex)
So it seems that when you see an AI there are more things that you try to avoid to think than things that you think. This is a bit too self-restraining.
However, note that I never said that sex was good or that sex was better than abstinence or that sex had any other purpose than procreation.
Congratulations! So you are a sentient being, because that’s exactly what I think. The real aim of this branch is to create a spineless humanoid slave. There are perfect implementations of such machines all around us; the only problem is in those pesky legal details again, heh.
One could equally argue that the ability for computers to emote might be important for the safety of mankind.
Without emoting, how would computers understand that their actions are harmful to humans? Or that harming humans is a bad thing?
Judgements like these are a very human attribute thus one could argue that there’s just as much logic behind teaching computers to think like humans to protect humans, as there is logic behind the argument that teaching human emotions might turn a computer angry, bitchy and spiteful towards humans.
I can only guess the real-world scenario would be the result of the computer’s learning (in much the same way that a human’s personality is forged by the person’s upbringing).
However, all this is uber hypothetical, as we’re still a long way off emotionless AI, let alone worrying about how to emulate/program human emotions and the risks resulting from them.
I would think that this could be handled with programming as a set of rules (or ethics, in other words). Emotions may not really be necessary for a sentient machine to follow rules. I would rather any sentient machine NOT have emotions or be able to make ethics-based decisions.
Perhaps the safe way to go would be to ‘sandbox’ any sentient AI within a lower-level framework of rules that it cannot break free of.
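Something like this, perhaps: a rough Python sketch of an action filter where a fixed outer rule layer vets everything the hypothetical AI proposes. The rule names, targets and actions are invented purely for illustration.

```python
# Rough sketch of the "sandbox" idea: every action the hypothetical AI
# proposes must pass a fixed, lower-level rule check before execution.
# Targets, flags and rules here are made up purely for illustration.

FORBIDDEN_TARGETS = {"human", "power_grid", "own_rule_checker"}

def rule_check(action):
    """Hard-coded framework rules that the AI layer itself cannot modify."""
    if action["target"] in FORBIDDEN_TARGETS:
        return False, "target is off-limits"
    if action.get("irreversible", False):
        return False, "irreversible actions need human sign-off"
    return True, "ok"

def execute(action):
    allowed, reason = rule_check(action)
    if not allowed:
        print(f"blocked {action['name']}: {reason}")
        return
    print(f"executing {action['name']}")

execute({"name": "reroute_traffic", "target": "simulation"})
execute({"name": "rewrite_own_rules", "target": "own_rule_checker"})
```

Of course, as the reply below points out, the whole scheme stands or falls on whether the hard-coded rules actually cover real-world scenarios.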
The problem with rules or laws is they don’t always capture real world scenarios. On those occasions you need something more than just cold logic to determine right from wrong.
As much as I hate to reference Hollywood in a discussion like this, the film “I, Robot” explains my point nicely (I’m going to assume you’ve seen the film, as I don’t want to post a spoiler just in case anyone else reading this hasn’t seen the film but plans to).
In fact, we see cases every day of when rules/laws either inadequately convict the immoral or, worse yet, unjustly penalise the “good guys”.
So who’s to say that a solid set of rules would work 100% of the time with AI? And who’s to say how catastrophic the outcome could be if and when the hard-coded rules fail to meet real-world scenarios?
Don’t get me wrong, I’m not saying emotions are the perfect solution either (for the same reasons that have already been posted earlier in this discussion).
I’m just saying that you could make pretty similar “human safety” arguments for emotional AI as you could against emotions.
Indeed. No one likes a smart-ass who goes around stating the obvious.
I remember seeing a presentation (can’t remember by whom) that basically said that emotions allow us to act when we don’t have enough information.
We can rarely make a perfectly objective rational calculated choice. We have some information. But in general, we just need to do something.
You could spend eternity deciding where and what to eat for dinner. At the end of the day, you need emotion to just make that decision.
I could certainly imagine that we could get a computer to act human in time. I don’t think we’re close at all, but we could get there.
The problem, I think, will be that as we discover how to program a computer to act human, the more we’ll discover just how programmable humans are. Maybe all our emotions, all our decisions, can be represented in a computer with some random weightings.
One of the motivations of AI research seems to be replicating human traits to understand them better. Want to understand how an eye works? Replicate it and see what functions are required and what flaws are presented. The thought being: the closer we get to replicating the functions of that mushy grey ball of goo, the more we learn about how it functions.
Let’s just hope we’ve evolved beyond the point where the first thought is “how can I use this to manipulate other arrogant apes for my own benefit?”.
I suspect that a large percentage of human beings wouldn’t be able to pass the Turing Test, given the decline in written communication abilities. It would almost take deliberate effort to write an AI that was less coherent than, e.g., the typical YouTube comment.
We can barely make nice hardware and most software sucks, one way or the other. It’s hard to believe we are close to a machine that can think and feel. The thing that would most resemble a human brain would be a neural network. But it’s more probable we would be creating new life forms in the future than “intelligent” machines.
What do you compare with?
Whom do you mean with “we”?
Somehow this doesn’t make any sense to me.
I’ll give you the date of creation of the first computer that is intelligent: June 14, … 1822. The Difference Engine was already “intelligent”, it could calculate. It was indeed better than sloppy human calculators, it made reliable tables!
And what’s more: people treat pocket calculators as capable of thinking. At least, capable of logic.
Turns out that logic is not thinking, though. But to make a machine that can “think”, you will first have to define “intelligence”. And the date this will happen is: never.
In a very concrete sense, this is a philosophical question. When Turing wrote his essay, his point was not that computers can think. His point was just that we can bypass the philosophical question and do cool things like ELIZA.
ELIZA does not think. It is not aware. It is not useful, even. But it is cool. And we learned some things from writing ELIZAs. That’s enough.
On the other hand, defining intelligence is still important. For example, brute-force calculation is not intelligence. Computers can brute-force chess, but it does not mean anything. Where is the difference? Because there is a difference. And an important one. But AI researchers can’t solve it. Just like what we today call “computers” can’t be intelligent in a meaningful way: I believe in machine intelligence, but it will not come from brute-force number crunching…
2 Definitions:
1. Self-awareness. The key distinguishing factor of “true” intelligence is the presence of both a subject and an object within. “I” can reflect upon “me”. Or to put it another way, I am aware that I am aware.
2. A compilation of information designed to validate one’s wishes. For example, see the Iraq invasion.
You misunderstand me. Definitions for intelligence are a dime a dozen. There are lots of ’em. But one that will be agreed upon you will find not. Mostly because each person will want to “accept” only the ones that make them look good 😉
We aren’t even close to producing artificial intelligence.
Computers *appear* to be smart, but really they are only capable of solving logical problems using a base set of parameters that we give them. Even if a piece of software is able to ‘predict’ something, it’s only because we programmed in the variables which were used to make the prediction.
I’m not interested in computers that can fool gullible people. Any politician or lawyer can do that.
Intelligence means being able to re-examine the accepted assumptions and come up with a simpler, more effective solution. A Kasparov that only looks a couple of moves ahead and ties is much more intelligent than the computer that needed to look many, many moves ahead to achieve the same result.
I read the article but none of the refs it links to.
The quality of the article can be summed up by one line
“Researchers are also proposing cars, made up of nanorobots, that can change their shape as per the situation and need on the road. ”
If one can believe that, then anything goes I suppose.
I studied AI at Uni some 30yrs ago, I have never seen any of the proposed work produce anything in the real world. Back then medical Doctors were expected to get expert systems to help them diagnose patients. I have never seen a Doctor use a computer for anything but note taking and record keeping.
The same elsewhere, by now there should have been knowledge domain expert systems everywhere helping professionals make better decisions. I am sure they exist somewhere, just never seen one. They never came to my field either in chip design but I remember seeing lots of papers but no products.
If we don’t have even basic AI productivity tools all over the place, then AI can never be tested on the ground with millions of users.
I remain as skeptical as ever, make research into products then we can believe it and see it evolve.
BTW, as someone else said, hardware and software really does mostly suck in ways that won’t help AI move forward. AI is intrinsically a parallel computation that doesn’t need to run at GHz. The brain only runs neurons in ms time scales. GHz PCs are darn good at transcoding videos, but the architecture for brain like computations needs to look more like a database engine with specialized hardware or software to mimic low level processes.
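As a very loose illustration of that timescale point, here is a small Python/NumPy sketch that steps a layer of leaky integrate-and-fire neurons in 1 ms ticks, updating them all at once as one array operation rather than through a fast serial pipeline. All parameters are arbitrary assumptions, and this is nowhere near a real brain model or a database engine:

```python
# Very rough sketch: a layer of leaky integrate-and-fire neurons stepped
# in 1 ms ticks. All neurons update "in parallel" as one array operation;
# the point is the architecture, not biological accuracy.
import numpy as np

N = 1000                              # neurons in the layer
v = np.zeros(N)                       # membrane potentials
threshold, leak, dt = 1.0, 0.1, 1.0   # fire at 1.0, leak 10% per ms, 1 ms steps

rng = np.random.default_rng(0)
for step in range(100):                    # simulate 100 ms
    drive = rng.normal(0.05, 0.1, N)       # noisy external input
    v = (1.0 - leak * dt) * v + drive      # leaky integration, whole layer at once
    fired = v >= threshold
    v[fired] = 0.0                         # reset the neurons that spiked
    if step % 20 == 0:
        print(f"t={step} ms, spikes this step: {fired.sum()}")
```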
Passing the Turing test means nothing unless you know what the dialog was about and the Elbot dialog didn’t seem very deep to me, sort of canned responses like in Eliza. The real issue is that AI must exhibit really deep knowledge about many issues and be able to demonstrate that knowledge to the enquirer.
I’ve worked a bit on emotions in AI, and I think your summary of the field is slightly off. Having a sense of emotions and using them is not the same as sensing and interpreting the emotions of human beings.
For example, a robot can easily implement basic human emotions and use these to act and reason; e.g. anger and joyfulness can affect the robot in such a way that events in his world, whatever that may be, influence the way he acts both in general and towards objects or other agents in that world. If something happens to the robot that makes him “sad”, he can change his behaviour to decrease the sadness, for example by doing things that the robot is programmed to find enjoyable.
Jitesh writes: “For emotional intelligence, we can simulate several emotions and record them into a robot. However, we cannot decide which one is correct or wrong.”
Yes we can. An agent, be it situated in a robot or living virtually in a computer, can learn which emotions are appropriate in a given situation using, e.g., a neural network and a big dataset. Emotions don’t need to be complex – anger can be implemented as a simple integer.
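Something along these lines, as a minimal Python sketch. The events, thresholds and reactions are all invented, and hand-coded appraisal rules stand in here for the learned mapping a neural network and a big dataset would provide:

```python
# Minimal sketch of "anger as a simple integer": events adjust the value,
# and the value biases behaviour without any deep deliberation.
# Events, thresholds and reactions are invented for illustration only.

class Agent:
    def __init__(self):
        self.anger = 0  # 0..10, a plain integer standing in for an emotion

    def perceive(self, event):
        # Appraisal step: certain events raise or lower the emotion.
        if event == "goal_blocked":
            self.anger = min(10, self.anger + 3)
        elif event == "goal_achieved":
            self.anger = max(0, self.anger - 2)

    def act(self):
        # The emotion biases action selection.
        if self.anger > 6:
            return "withdraw and retry later"
        if self.anger > 3:
            return "ask another agent for help"
        return "continue current plan"

agent = Agent()
for event in ["goal_blocked", "goal_blocked", "goal_achieved"]:
    agent.perceive(event)
    print(event, "-> anger", agent.anger, "->", agent.act())
```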
There is some research in this field; for example, “Emotional agents: A modeling and an application” by Maria and Zitar (2007) describes how they implemented basic human emotions in an autonomous agent acting in a world where his goal was basically to feed orphans in an orphanage.
However, interpreting the emotions of an arbitrary human being based on, e.g., their facial expressions or other physiological features such as body heat or heartbeat is a totally different field altogether, and quite frankly is not needed for an autonomous agent to use emotions.
PS: Laughter is not an emotion.
I think you are talking about not much more than simple programmed toy emotions only. It is still very different from human emotions and intelligence. Humans have real human consciousness, human biology, they have human history, cultures, languages, traditions, values, morality etc. that programmed machines (always programmed by humans and not working without them, by the way) do not have.
Human intelligence is not just calculation, emotions are not just primitive causal reactions. You cannot separate human intelligence or emotions from the whole human experience that contains all the aspects of the human life.
Of course our human “baggage” plays a role, but I figure – if an emotion does its job well, does it really matter if it’s a robot or a human who feels it?
Many intelligent creatures have emotions, and they help the creature to act on a small dataset, without having to analyse everything. This is exactly what emotions do really well: they help creatures make decisions when they have insufficient data to make a calculated decision. For example, if a deer feels fear, he will run – and this will probably maximize his chance of survival, even if the emotion sometimes is irrational.
There are two opposing camps.
1. Sufficiently complex behaviour is indistinguishable from intelligence / magic.
2. Only non-deterministic quantum mechanisms can lead to real intelligence / consciousness.
I believe the second.
Deterministic computing devices “deduce” whereas real intelligence “induces” and “abduces” (creative leap).
http://psivision.blogspot.com/2007/06/deduction-abduction-induction…
“Artificial Intelligence (AI) is at the very core of the smartness embedded in machines. Artificial Intelligence is the field in which machines are provided with the logic and intelligence so that they can perform tasks like humans. In simple words, it is the field in which machines are made intelligent like humans. Researchers in AI have made significant advancements in this field. Researchers are developing nano-sized robots that can flow in blood. These nanorobots are fitted with nano-sized chips that allow them to find infections and steer in complex biological environments. These sensor chips the size of nano-levels can record the signals and process several tasks in seconds. Such high-powered chips combined with AI concepts and logic programming will make machines perform complex and inhuman tasks in future.”
I mean seriously? This is sophomoric (as in High School sophomore) crap.
There was another laughable bit about cars made out of nano-machines flowing into different shapes.
This is a rambling, unstructured, unedited, content-free waste of my time.
We’ve been hearing this for how long? And it’s as far away as ever. It’s not going to happen without a new type of computer. Perhaps one that is error prone and energy efficient (like a human brain) – see the work of Doctor Kwabena Boahen at Stanford ( http://www.stanford.edu/group/brainsinsilicon/ )
A brain is not a computer. Computers compute.
Computing is merely a very marginal product of a brain. Proudly inefficient and gloriously imperfect, at that.
English has mozillions of words, many of which are not “computer”. Explore it to find another word. You’ll also learn that a berry is not a tree biscuit.