Conscious, emotional machines: will we ever see them? How far can technology go, and can it be applied to us? In this final part I wander into the realm of science fiction. Then, to conclude the series, I come back down to Earth to speculate on the features we’ll see in any radical new platform that appears. Update: Never let it be said I ignore my errors; in the interests of clarity, and with apologies to Extreme Programmers, I have revised Part 1.
2001, Blade Runner, The Matrix, Alien and Star Trek (TNG) [1] all feature machines which at least appear to be conscious and emotional. The idea is so ingrained in us that we expect an artificial intelligence to be a carbon copy of a human being, but will it be? Will AI computers be conscious? Will they have emotions?
One of the biggest questions about artificial intelligence concerns the matter of consciousness, or “self-awareness”. The entities I spoke of in Part 6 may have many of the attributes of living beings, but they are still “machines”: they are not conscious and they do not have emotions.
Will machines become conscious?
There is no answer to this question since we do not at this point even understand our own consciousness or how it works. How can we add consciousness to a machine when we don’t understand it ourselves? How do you program self-awareness? Can it even be programmed?
On the other hand, perhaps programming won’t be necessary…
It has been theorised that consciousness is an “emergent behaviour”: a behaviour which appears out of the complexities of an existing system. Could consciousness be a side effect of intelligence, and if so, will its development in our entities be inevitable?
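The classic toy illustration of emergence is Conway’s Game of Life: three trivial rules about neighbour counts, yet patterns like the “glider” appear to travel across the grid even though nothing in the rules mentions movement. The sketch below (my own illustration, and of course no claim about consciousness itself) shows the glider reappearing one cell diagonally after four generations:

```python
from itertools import product

def step(cells):
    """One Game of Life generation over a set of live (row, col) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):          # one full glider period
    state = step(state)
# The glider reappears shifted one cell diagonally: "movement" emerges
# from rules that only ever talk about neighbour counts.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Whether anything as rich as consciousness could emerge this way is exactly the open question, but it shows how behaviour can appear that was never explicitly programmed.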
A biological brain is very different from a silicon one. Physicist Roger Penrose [2] has written that quantum effects are present in the human brain, and that these are important as they could be responsible for non-algorithmic functions. These effects will not show up in a digital system. An analogue system can be simulated, but simulating quantum effects is likely to be impossible, and if they are important, just faking them may not work. Are these quantum effects necessary for consciousness to be present? If so, purely digital entities may never become conscious.
If a machine does become conscious it becomes something else entirely: something you can have a conversation with, something which may even have a will. Something which may no longer be interested in its master’s bidding. Something which could turn against us and attack.
Conscious Problems
A conscious system brings up all sorts of problems. Will it be emotionally driven in the way humans are? Will it even have emotions? If so, how will it react to them?
This is the question behind Blade Runner, where the “Replicants”, knowing they have a limited life-span, go back to their creator in an effort to stop their demise. They do not have an entire lifetime to get used to emotions, so they are highly immature when it comes to dealing with them and consequently tend to leave a trail of bodies in their wake.
An interesting aspect of the film is that the Replicants are considered non-human and have no rights. Some people have already demanded human-like rights for the more intelligent animals, so what will be the situation for a conscious entity? There is no problem if they are not conscious, but if they are (or at least appear to be), should we award them human-like rights? What would have been master director Stanley Kubrick’s last film, “AI: Artificial Intelligence”, deals with the question of a conscious artificial entity who has very human emotions but no human rights.
“His love is real, but he is not”[3]
Emotional Machines
One thing I think fiction gets wrong is the idea of emotions in artificial entities. It’s by no means clear that they will have emotions, and even if they do, those emotions are likely to be very different from ours. We are biological entities, and there are many factors which affect us and our emotions.
Being chemical-based means chemicals affect us. Food, drink, temperature, humidity and all manner of other things can change our mood, and in extreme circumstances can sometimes radically change our behaviour. For the most part we are completely unaware of the effects that many day-to-day chemicals can have on us. If you are a big coffee drinker, try going without for a day and notice how agitated you become; on the other hand, notice how chocolate can make you happy (chocolate was once considered a dangerous drug). Foods are full of chemicals and they can have unexpected results; indeed, some migraine sufferers have traced their suffering to specific foods.
We also have “programming” of sorts, except we don’t usually refer to it in such a manner. Curiosity, reproduction and survival are all part of our programming. These affect our emotions in a big way and pretty much dictate large parts of our lives. If you are interested in a pretty girl, is this because you took a conscious decision to be interested, or because you are simply following your subconscious programming to reproduce? Maybe we’re not so different from machines after all.
“I met someone who looks a lot like you, She does the things you do, but she is an IBM”[4]
The human entity is a complex system. The artificial entity is a completely different form of complex system, one which will be affected by different things. I do not believe they will act like us, and I’m not convinced they will have emotions; if they do, I am convinced they will experience them in a very different way.
They may be conscious but I don’t know if anyone can really say one way or the other right now.
I don’t think we’ll have too much to worry about from emotional computers though; I expect they’ll be about as emotional as Star Trek’s Mr Spock. If they appear to have emotions, it will be because they are faking them.
Unreal Capabilities
The science fiction AI entity ends up very similar to us, but in reality these entities will be able to act in ways we can’t and won’t be expecting. They are not human, and I don’t expect them to act anything like a human unless they have been specifically designed to.
I talk of an entity as if it is one thing, but it will be able to hold conversations with different people simultaneously; a single consciousness may not be able to do this, but our entity can split itself apart. It will be able to copy itself. It may take over mechanical systems and take on a physical body, and it may do the same in virtual space and take on a virtual body.
They will also do things we can’t even conceive of doing ourselves, such as sending memories directly to one another. We can co-operate with one another; these entities may be able not only to co-operate but to actually merge into one entity, then de-merge with all the knowledge of each other. This is just the beginning: they will have capabilities we can’t even imagine. Who knows, perhaps one day they will be able to merge with us.
Cybermen
In Part 6 I mentioned cyborgs, which were a combination of machine intelligence and a human body. What if we were to do it the other way around and use a human intelligence instead?
Interfacing directly with the human brain is a long-term goal and it’ll be worked on for many years, so how this will operate is more than a little open to question, but it offers some intriguing possibilities. If our interface to the world goes via a computer, we too will be able to take on other bodies (provided the necessary nutrients and oxygen are still supplied to our brain).
The idea some already have of recording a person’s entire lifetime could be turned on its head to allow you to experience someone else’s life: rather than watching someone getting shot on TV, you could join in and experience it yourself, with the pain turned down of course. Sex change operations would also be rather simpler: you could become whoever (or whatever!) you want, and if you don’t like it, just change back. This is a vast oversimplification of course, but I am talking SciFi technology here…
If we are injured, we just switch off the pain and perhaps send a bunch of nano-technology machines to go and fix the damage. If we could do the same to the brain (which has no self-repair mechanism), we could significantly extend our lives, perhaps even permanently.
Of course, the idea of brains without bodies isn’t new; H. G. Wells had Martian brains with mechanical bodies fighting humans in “The War of the Worlds” way back in 1898. [5]
Putting our brains into a different body is one thing, but can we fit a human and a computer intelligence together? I’ve no idea how we’d communicate, but it’d be very useful to have an intelligent computer with you, especially with all its knowledge and its ability to upload more.
Of course in any mixed intelligence we’d need to ensure that the human was the boss.
If the entity is using evolutionary programming, perhaps it can find a way to apply it to us, to evolve us. If a way is found to replace biological neurones with artificial ones, who knows what’ll be possible. Just make sure there’s no power cut! A human brain made out of artificial electronic neurones will have some of the same advantages as an artificial entity: no need to sleep. It will also have the disadvantages: a completely different experience of emotions, or worse, no emotions.
I for one don’t believe we will ever be able to upload ourselves to a computer, as is sometimes suggested in science fiction, so I don’t think the above is possible. That said, I don’t think anyone can really give a meaningful prediction on the matter today.
Artificial Questions
AI poses many more questions than we can answer today. Some will be answered when we have AI; others we will be forced to answer ourselves when it has its inevitably huge effect on our society.
The future could be a wonderful place and AI could be a very useful part of it, but we need to get it right. Release a learning machine too early and it could have very bad consequences for all of humanity. We need to define its role and the rules it needs to follow; we also need to extrapolate to see what happens when we push those rules to their limits and when they collide, otherwise we could get unexpected results. Whichever way you look at it, the coming of AI is going to change the face of human society.
The normal path of evolution is for superior species to outlast the lesser ones. Today we are the superior species. Will AI change that, or will we fuse with the machines and ourselves take on a different form of being?
I am of course missing a lot of detail here and am making a lot of assumptions. There are many questions to be asked and technologies to be developed so these possibilities may take centuries to be realised if they are realised at all.
Predicting the future is difficult enough, but I have gone into the unknown here, and this is a lot more difficult than simple extrapolation. While I may sound like a science fiction writer, all of these predictions are at least partially based on today’s technologies being combined and extrapolated over a long time period. I’m describing a path, but I don’t know if it’s the one we will walk, or even if it can be taken.
Perhaps you can give very good reasons why these ideas cannot possibly work, maybe they break the rules. Those reasons may be valid today, but tomorrow? Science has a long history of rewriting the rules.
I will make one solid prediction though:
You’ll know they’ve got Artificial Intelligence right when your robotic girlfriend says No.
Now, Back to Planet Earth for the Conclusion and a New Platform…
In the future, computers as we know them today will become playthings for geeks, just as steam locomotives and vintage cars are today. Computing, however, will surround us: in phones, in TVs and in many other areas, though we may not recognise it. There will still be workstations, the descendants of today’s desktop PCs. The alternative computer manufacturers may become the only ones left, serving those who want the “real thing”.
Before we get to that stage an awful lot is going to happen.
In the short term, the current manufacturers and technology leaders will continue to duke it out for a bigger share of the market. Wireless will become the norm, but I don’t expect it to have a smooth ride: people are paranoid enough about mobile phones damaging their health, so expect more powerful devices to cause more controversy. RFID tags will make it into everything, but I can see sales of amateur radios going up as the more paranoid try to fry them. Longer-term technology trends mean that what we know as computers today will change over time into things we may not even consider as computers. Technology will get faster and better, easier to use, smaller, and eventually, as Moore’s law slows down, it’ll last longer as well.
How we build technology will change, and an infant industry will become another part of society, subject to its rules and regulations. It’ll never be the same as the last 30 years, but this will be a gradual process. That’s not to say innovation will die: there are a lot of technologies not yet on our desktops which have yet to play their part. We saw radical platforms arrive in the 70s and 80s, and evolution playing its part in the 90s and 00s. I think radicalism will return to the desktop, but I don’t know who will have the resources or, for that matter, the balls to do it.
The Next Platform
The following is a description of a fictional platform. Like the radical platforms of the 80s, it’s based on combining mostly existing technologies into something better than anything that has gone before. I don’t know if anyone will build it, but I expect any radical new platform which comes along will incorporate at least some of its features:
It’ll have a CPU, GPU and some smaller special purpose devices (start with what works first).
It’ll have a highly stable system in the bottom OS layers and at the top it’ll be advanced and user friendly (for the software as well).
The GPU will be more general purpose so it’ll speed things up amazingly when programmed properly (Perhaps a Cell processor?).
There’ll be an FPGA and a collection of libraries so you can use them to boost performance of some intensive operations onto another planet.
It’ll run a hardware virtualising layer so you can run multiple *platforms* simultaneously.
It’ll run anything you want as it’ll include multiple CPU emulators and will to all intents and purposes be ISA agnostic.
The CPU will have message passing, process caching and switching in hardware so there’ll be no performance loss from the micro-kernel OS, in fact these features may mean macro-kernels will be slower.
The GUI will be 3D but work in 2D as well, and it’ll be ready for volumetric displays when they become affordable. When they are, expect to see a lot of people looking very silly as they wave their hands in the air; the mouse will then become obsolete.
It’ll be really easy to program.
It will include a phone and you will be able to fit it into your pocket (though maybe not in the first version).
And Finally…
That is my view of the future of computing and the possibilities it will bring. I don’t expect I’ve been right about everything, but no one trying to predict the future ever is. I guess we’ll find out some day.
I hope you’ve enjoyed reading my thoughts.
Thanks to the people who wrote comments and sent me e-mails, there were some very good comments and interesting links to follow.
So, I’ve enjoyed my stint as an anti-historian. What do you expect will happen? Maybe your predictions of the future are completely different; why not write them down? I look forward to reading them.
—————————-
References
[1] A page on AI in science fiction (warning: may require sunglasses).
http://www.aaai.org/AITopics/html/scifi.html
[2] Some of Roger Penrose’s thoughts on AI.
http://plus.maths.org/issue18/features/penrose/
[3] AI the movie.
http://www.indelibleinc.com/kubrick/films/ai/
[4] “Yours Truly, 2095” from the ELO album “Time”, by Jeff Lynne.
http://janbraum.unas.cz/elo/ELO/diskografie/Time.htm
[5] H. G. Wells’ The War of the Worlds.
http://www.bartleby.com/1002/
Jeff Wayne’s musical version of the story is one of my favourite CDs:
http://www.waroftheworldsonline.com/musical.htm
Copyright (c) Nicholas Blachford March 2004
Disclaimer:
This series is about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.
they were genetic creations
The Replicants were still artificial intelligences, though, in that they were created; the original Robots weren’t machines either.
My guess is that we’ll see brain-in-a-box computers like the Magi before we see anything like an intellect born of a complex system.
I know that I’ll never have my own personal AI. Not because I don’t think I’ll live to see one (I plan to live at least another 60 years), but rather because I’m terrified of the idea of coming home and having the machine say:
“Look at you… A pathetic creature of meat and bone…”
The title alone….
I think you meant to ask if they can become sentient. The two terms have similar meanings, but it’s beside the point anyway. We have yet to understand the most basic method by which we think and how we make the connections we make. We’ve got a long way to go in terms of mathematical power before a computer can run a program as quickly as we think.
I don’t see a true need for machines that think at a sentient level. It would simply create ethical questions and leave us with useless emotional computers that slow down our work even more.
Maybe we should solve heat and energy usage issues before we worry about making our computers sentient. After that we can solve the security issues. Then we will probably be interacting directly with the human mind. And after all that, maybe someone will come up with an algorithm to simulate true learning.
I think it may be beneficial to have links to the other articles in this series.
>> Will machines become conscious? There is no answer to this question since we do not at this point even understand our own consciousness or how it works. <<
Well, actually, consciousness is really nothing special. It is simply the sum total of our mental life, and to that extent it isn’t anything particular to our species. Chimps and cows and cockroaches all have consciousness. Obviously, human consciousness is more advanced than that of the chimp, the same way the chimp’s consciousness is more advanced than that of an earthworm, and an earthworm’s more advanced than that of an apple. But we are talking of “degrees” of complexity here, not about any special qualitative difference. To that extent, there is no reason why machines would not eventually evolve to have a consciousness more, shall we say, “advanced” than that enjoyed by humans.
When a computer appreciates something funny or ironic, then I will be impressed.
Howdy
Good to see you fixed that earlier part. Incremental development isn’t without its own problems, but the way you worded the earlier article was kind of inflammatory.
As for AI, I think we have Buckley’s chance of getting this right (well, not for a looong time) when we can’t even get a 100 million line program right.
Imagine an AI entity with severe mental problems that can’t be explained!
Welcome our new robot overlords.
This is where I was gonna slam individual lamers, but you know who you are so there’s little point.
————————
You know what? I really admire this guy. Since his first article he’s tolerated rude comments from trolls here, but he still has the balls to come back the following week with a new, fascinating installment.
You know what they say, Nicholas: “Don’t let the B’s grind you down.”
If anyone is interested in all this stuff (like me), then you should read Ray Kurzweil’s book “The Age of Spiritual Machines”. He goes into greater detail about the specifics than this interesting article does.
He extrapolates from existing and developing technology and leads on to some very stimulating ideas indeed… er… sod it, I’m off to the library to read it again.
This is a good series of articles. Thanks for the read.
If anyone is interested in all this stuff (like me), then you should read Ray Kurzweil’s book “The Age of Spiritual Machines”.
That’s an excellent suggestion. Some other excellent reading on the matter: Daniel Dennett’s books/essays Brainchildren, Consciousness Explained, and Freedom Evolves. In Freedom Evolves, Dennett attempts to explain and demonstrate how free will can manifest itself on top of a structured system which retains some chaotic operating properties.
I think within our lifetimes we will build a complete mathematical model of the operation of the human brain (especially if we can use information from the Human Genome Project to model the physical development of the brain, then analyse the operation of that physical model in order to construct a mathematical one), at which point we can simply tweak and improve this model to our liking. As for creating a conscious computer program from scratch, I’m certainly not holding my breath. Artificial intelligence is the most depressing field of computer science, showing relatively little progress compared to virtually every other area.
“You know what? I really admire this guy. Since his first article he’s tolerated rude comments from trolls here, but he still has the balls to come back the following week with a new, fascinating installment.”
Oh please, we’ve complained because his articles are over-sensationalized and they are a waste of OSNews frontpage space. We are not trolls but are dedicated readers of the site. We have been rude, people do that when they want someone to discontinue their current line of behavior.
The only thing worse than trolls is people who accuse everyone who disagrees with them of being trolls.
I don’t think we’ll ever completely understand the human brain, at least within the next 100 years, let alone have a mathematical model of it. Computers will never have emotions or genuine intelligence. Why would you want them to? We have enough war, creating another life form to fight with is not a good idea. I say, make computers fast, but keep them dumb.
I’m a so-so fan of this article series, but this one and the previous one on FPGAs are quite relevant if you come here for news (real news, not just sponsored M$ PR).
Some comments:
consciousness:
This is a pure illusion. Consciousness emerges from re-entrant systems. Of course you can have philosophy mixed in. Perhaps the real thing (the kind we see only in humans and some animals) could be extended by adding the concepts of empathy and self-observation.
Analog/digital debate:
First, the analog world is not really analog; since you never have a true sense of real time, there are always delays.
Those familiar with real-time microcontrollers will immediately think of the Z-transform, which allows the transition of a Laplace analog-world “s” data stream into the numerical “z” domain; the two are equivalent. You could also say that, just as complete 3-axis sound positioning exists in normal stereo sound and can be extracted, quantum information could make it into the numerical stream (I have no idea how to get it back out, but neural simulation probably can).
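As a concrete sketch of that s-to-z mapping (my own toy example, with illustrative names; nothing here comes from the comment itself): applying the bilinear (Tustin) substitution s → 2·fs·(1 − z⁻¹)/(1 + z⁻¹) to a first-order analog RC low-pass H(s) = 1/(1 + s·RC) yields a digital filter whose coefficients depend only on k = 2·RC·fs, and which keeps the analog filter’s unity gain at DC:

```python
def discretize_rc(rc, fs):
    """Map the analog RC low-pass H(s) = 1/(1 + s*RC) into the z-domain
    using the bilinear (Tustin) substitution s -> 2*fs*(1 - z^-1)/(1 + z^-1).
    Returns (b, a): numerator and denominator coefficients in z^-1."""
    k = 2.0 * rc * fs
    b = [1.0 / (1.0 + k), 1.0 / (1.0 + k)]   # numerator:  (1 + z^-1) / (1 + k)
    a = [1.0, (1.0 - k) / (1.0 + k)]         # denominator: 1 + ((1-k)/(1+k)) z^-1
    return b, a

def dc_gain(b, a):
    # Gain at z = 1 (zero frequency) is sum(b) / sum(a).
    return sum(b) / sum(a)

# Hypothetical values: a 1 ms time constant sampled at 48 kHz.
b, a = discretize_rc(rc=1e-3, fs=48000.0)
# The digital filter preserves the analog filter's unity DC gain.
assert abs(dc_gain(b, a) - 1.0) < 1e-12
```

This only illustrates the “both are equal” point for one simple filter; whether such equivalences carry any quantum information through is, as the commenter says, entirely speculative.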
Non-computable problems:
That is a myth: anything a human can do, a machine can do. Logic doesn’t exist only in humans; it’s a rule of the universe. The best proof of that is simple formal math software. Sure, brute-force computation can’t reach every solution, but you can have both on a computer.
Roger Penrose:
I like the guy and hate him at the same time. He is the best example of how a genius can be stupid at the same time. Geniuses have no problem solving problems, which means they are not aware of how their own genius works. Someone who became intelligent over time (many are like that, but take Einstein as the example) is far more able to form opinions on how the mind works. That is why Penrose looks for quantum effects in the mind, just as some invoke God as the reason for things they don’t yet understand.
A comment on the Blade Runner comment: yes, the Replicants are robots and not genetic creations. Of course, the closer you come to nanotechnology, the more it becomes biology.
Well, actually, consciousness is really nothing special. … Chimps and cows and cockroaches all have consciousness. … But we are talking of “degrees” of complexity here, not about any special qualitative difference. To that extent, there is no reason why machines would not eventually evolve to have a consciousness more, shall we say, “advanced” than that enjoyed by humans.
Great. You’re trying to prove that consciousness can be replicated by showing more “organic” examples, as if making machines conscious on a “human” level were the only problem. But the truth is that even replicating what an earthworm has is just as difficult.
We are not talking about “degrees” of complexity. Consciousness is not measurable like intelligence, nor is it a sum of all parts; if that’s what it was, we’d be just like computers. We haven’t even gotten to the point of understanding the nature of consciousness, so how can you think it’s just a matter of implementation? Or is that some kind of “top-to-bottom” design? We may as well write a program that says “I am conscious” instead of “Hello World”.
Because all you’re doing is fooling yourself by saying “consciousness is really nothing special”.
We are talking about something completely specific to organic creatures, in which the “complexity” just might be the same across the board. The nature of consciousness is unknown. It’s a mystery, as much as “whether God exists” is a mystery.
insert values ( conjecture ) into article;
delete facts from article;
delete evidence from article;
“consciousness is really nothing special”
“Computers will never have emotions or genuine intelligence.”
“Chimps and cows and cockroaches all have consciousness.”
really pointless…
None of you know anything about consciousness; You are all just rambling.
Most of you can’t even manage grammatical sentences.
Back this stuff up people or don’t waste our time!
-Hugh
“One thing I think fiction gets wrong is the idea of emotions in artificial entities”
How does fiction get something wrong???
And to the clueless guy who mentioned “non-computable problems” and called them a myth: how does AI help you solve the halting problem, when even humans CANNOT do it?! Seriously, get an education before you shoot your mouth off.
-hugh
hrm? Conscious, emotional machines? I think I had one once… It SKERD me, so I bought a Mac.
I don’t think you’re getting the point.
First, one must show some humility when speaking about consciousness, and must understand one’s own limited nature.
Second, even if you could define what consciousness is with infinite precision you must understand that the language is intrinsically ambiguous and if it wouldn’t you cannot be sure that you can communicate one concept to some one and having him to understand that concept like you do. Your vision of how you see the nature, is not sharable with certainty (as a trivial example, please, consider this: to one is impossible to say “this object is red” and be sure that another one sees the red color as the first does. Maybe what the second sees as red is what the first would call green, but it’s not possible to communicate what you really perceive)
Third, if you understood what consciousness is (which IMHO is a property of human beings only), you would automatically understand many, many things at the level of existential questions; and if you understood things like the meaning of life and its nature, I don’t think you would go ahead with constructing conscious machines.
Fourth, the main point is that the ideas exposed in this article are certainly imaginative, but in the end you should admit that a topic like this involves deep investigation, an open-minded view, and religious questions.
Fifth, even if you don’t accept God (I hope you do), you can understand by using your intelligence that the deepest mystery of life is its mysterious nature; and, again with your intelligence, you can realize that living inside the universe means being constrained by its rules. You can investigate them, but investigating nature from the inside means always interacting with the universe during the investigation, and this modifies its state; hence you lose the capability of measuring its state without affecting it.
So you must live with your limits, and ask two deep questions: who created the rules? Who gave you your consciousness?
“to one is impossible to say “this object is red” and be sure that another one sees the red color as the first does. Maybe what the second sees as red is what the first would call green, but it’s not possible to communicate what you really perceive”
I’ll respond to this despite the lack of grammaticality. This has nothing to do with consciousness; it is a shortcoming of human language.
-hugh
Sorry for my grammar. Would you mind to correct that sentence please?
What I wanted to say is that human language is ambiguous, and that even if you can say what consciousness is for you, you have to say it with words and concepts that may have a different meaning for another person; and even if you share the concept with him, you cannot be sure that you’re really thinking of the same “thing”.
As you can say it’s really difficult to expose this concept for me, and be sure you understand what I mean.
Sorry for being a grammar nazi. The statement was understandable.
I wanted to point out that ambiguous language is a separate issue from the fuzziness of consciousness. I think you are right about the fuzziness problem, but even if our thoughts were perfectly unambiguous, expressing thoughts in human language could still be problematic.
The problem you described before is one of the speaker and the listener having different internal knowledge representations. This isn’t specific to conscious systems, even databases can have different internal representations for the same external data.
-hugh
4 words: SEARLE’S CHINESE ROOM EXPERIMENT. ’Nuff said about machines and so-called consciousness.
SEARLE’S CHINESE ROOM EXPERIMENT
link?
I read about it;
It is based on the assumption that someone who doesn’t know Chinese can use a script and be indistinguishable from a native Chinese speaker when answering questions.
I don’t accept this assumption. What happens if the questioner asks something that isn’t covered by the script?
-hugh
Google it if you don’t know.
The point of the experiment is not that it can be fooled by being asked a question it doesn’t know. Assume it can answer anything.
The point is that while the translator in the room (i.e. a computer) SEEMS conscious, he is not really conscious, because the very nature of a machine prohibits true consciousness (no qualia, etc.).
So while the translator can answer anything, he never understands anything he’s answering. He’s just manipulating symbols, like a computer.
So in conclusion, I agree with Searle and think that with enough technology we could build computers virtually indistinguishable from people, but they would never be truly conscious, nor feel emotion or anything else like we do, because they would just be blindly manipulating symbols. Never understanding like we do. Inside, as cold and void as space.
There are many refutations of Searle’s Chinese Room Thought Experiment. Here is one:
http://12.108.175.91/ebookweb/discuss/msgReader$877
“Assume it can answer anything”
No. That is not an assumption I can accept. It would have to understand the language to answer anything, which this experiment tries to refute.
-hugh
Alright, well, if it can’t answer everything because it needs to truly understand the language, then I agree there are complications, but they don’t necessarily count against the argument:
1. If thats the case machines will never be able to answer everything since they can’t truly understand, so Searle is still right.
2. If machines really can answer everything then they must be able to understand, so Searle is wrong.
So no conclusion can be made from that alone.
I’m sure there are a lot of arguments against the Chinese Room, and I’m no philosopher, but my 2 cents is that I agree with Searle’s line of thought, based on what I know about cognitive science and computers. Most of the field is still relatively young, and no one can even agree on what consciousness IS to begin with, much less whether machines are really capable of it.
Nevertheless, the idea of machines FEELING emotions and EXPERIENCING beauty or pain or existential anguish or despair or love and so on seems silly, unless you start from the premise that humans are glorified computers, which I do not believe (for one thing, it seems that we are not deterministic even if the rest of the universe were Laplacian).
But nothing is conclusive on one side or the other. I’ve got my bet placed though.
I tend to think we are deterministic. I have no choice but to think that.
-hugh
hehe nice one
If you slam an article without reading it first (your own admission), I think it’s reasonable to call that trolling, don’t you?
hugh, i have an education already.
Please re-read my post and you will notice that the “myth” attribute applies to problems that can already be solved by a human (I separated my post into categories; that particular part was a subset of the intended category).
As for the grammar, English is not my native language. Instead of judging quickly as you do, keep this in mind when reading a post and try to reformulate it in your head.
Self-awareness is important; however, I think self-awareness and consciousness are often taken out of context in what they mean for a living creature. There is every reason to believe that most mammals are self-aware and have enough consciousness to have a personality.
What makes humans rather special is imagination: the ability to put ourselves into another situation and think it through. However, we aren’t particularly advanced at it. It isn’t until about age 3 that we develop the ability to empathize with others and consider their point of view; before then, it appears we can’t even comprehend the concept properly. On the evolutionary scale it has been a hugely successful advance.
Most likely any artificial intelligence will have a lot less grey area than humans do. Nearly all people display irrational behavior, such as phobias, delusions, addictions, insecurities and of course numerous mental illnesses. In a person these are functional components that have been expanded through evolution to be more than just survival traits. (Similar to how a patch of hair has evolved to become a rather distinct and useful characteristic of a Rhino)
The function of the brain is slowly being determined. How the various parts of it fit together to produce what we feel and see and think is becoming apparent. I doubt we will achieve the same balance that humanity has, or even the same complexity. I do not see the economic gain in realistic artificial people. Intelligent machines will probably only ever have focused personalities that are more absolute and reliable than our own. However self-awareness and reliable un-encumbered imaginations are enormously powerful and useful traits.
Of more interest would be what would happen if we were to start tinkering with our own makeup to take away or restrict the effects of overwhelming emotion and irrational thought. Experiments have been done, but the results thus far are unsatisfactory. Perhaps we will make ourselves into the artificial intelligences of the future.
From your post:
“non computerable problem: That is a myth, ”
I read that as meaning that the idea of problems which are non-computable is a myth. What did you mean to say? Surely you realize that there are problems that computers cannot solve. (like the halting problem)
-Hugh
“That is a myth, each stuff that can be made by a human can be done by a machine”, so I refer to stuff a human CAN do.
That said, I don’t think there are problems that can’t be solved at all. Perhaps not for now, but they will eventually be solved.
Of course, some problems are not solvable, like if I want a chocolate cake as big as our planet.
http://en.wikipedia.org/wiki/Halting_problem
“As for AI I think we have Buckley’s chance of getting this right (well, not for a looong time) when we can’t even get a 100 million line program right.
Imagine an AI entity with severe mental problems that can’t be explained!”
The thing is that AI is a program, a sum of information/instructions. And what AI does best is handle information. So one day (probably not so far off) an AI will be able to create another AI. Once this point is reached, we will have less and less control over the existing AIs. For bad, or for good…
I think that’s the way AI will evolve and will get free of our slow progress.
Personally, looking at the comments in the article on how AI emotion is hard and AI probably won’t be emotional, I think the reverse is what will happen.
Emotion will come first, then human-level conversation abilities. Emotion isn’t really that hard. I replicated a little experiment once (I can’t remember the original author; I’ll look it up later) where you simulate a group of mobile entities that have two simple drives: to eat and to reproduce. They were attracted towards things that satisfied their needs, based on the internal state of how much need they had for the thing and a radial falloff relating to distance. The results may not have been pleasant, but they did show emotion, primarily lust and fear. A more complex simulation could lead to more complex emotional behaviour.
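A minimal sketch of that kind of drive-based agent might look like this; the exact falloff function (`need / (1 + distance)`) and the class layout are assumptions, not the original experiment:

```python
import math
import random

class Agent:
    """Mobile entity with two drives; attraction = need / (1 + distance)."""
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y
        self.hunger = random.random()       # 0 = sated, 1 = starving
        self.mating_urge = random.random()  # 0 = none, 1 = urgent

    def attraction(self, need: float, tx: float, ty: float) -> float:
        # Internal need scaled by a radial falloff with distance.
        dist = math.hypot(tx - self.x, ty - self.y)
        return need / (1.0 + dist)

    def step(self, food: list, mates: list) -> None:
        """Score every target by need x falloff, move one unit toward the best."""
        scored = [(self.attraction(self.hunger, fx, fy), fx, fy) for fx, fy in food]
        scored += [(self.attraction(self.mating_urge, mx, my), mx, my) for mx, my in mates]
        if not scored:
            return
        _, tx, ty = max(scored)
        dist = math.hypot(tx - self.x, ty - self.y)
        if dist > 0:
            self.x += (tx - self.x) / dist
            self.y += (ty - self.y) / dist

# Example: a hungry agent steps toward the only food source.
a = Agent(0.0, 0.0)
a.hunger, a.mating_urge = 1.0, 0.0
a.step([(10.0, 0.0)], [])
print(round(a.x, 2), round(a.y, 2))  # moved one unit toward the food
```

Behaviour that looks like "lust" or "fear" would emerge from nothing more than these scored attractions (and, with a sign flip, repulsions) competing over time.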
Likewise, in the biological world, if you look at animals with less intelligence than us, they are emotional and can communicate this emotion, and it is this that makes them seem to have self-awareness (even if it is at a lesser level than for humans). Have a look at your pet, if you have one: you will most likely see that it has some level of self-awareness, and therefore consciousness, without any high-level conversation, simply because it displays emotion.
Professor Penrose’s theory that AI will not be possible without taking into account the quantum effects inside the brain is, I believe, probably wrong. It is possible to completely rearrange the quantum structure of the brain with no effect on the consciousness of the individual; this is done daily in routine MRI scans. Therefore the quantum effects cannot be a vital part of consciousness. The randomness they give may be useful, but this can be simulated to a high degree.
The problem with Searle’s Chinese Room Experiment is that it assumes a priori that human consciousness is more than a mechanical activity, and therefore that even if something mimics it perfectly, this mimicry is not the same as having true consciousness.
This is the reverse of the assumption behind the Turing Test, on which AI is based: that there is no difference between perfect mimicry of consciousness and having it.
The two approaches are fundamentally irreconcilable, so either you go with Searle, and there is no possibility of true AI, just good mimics; or with Turing, in which case AI is possible, just very hard based on the evidence so far.
Personally, I go with Turing, as you can never know whether someone you are talking to is truly conscious in Searle’s sense (you cannot really know that they exist at all, Cartesian doubt, but it’s best to assume they do), only that they behave as if they are. Therefore, if I assume that they are conscious, then I must assume that a machine that can behave like them is also conscious.
If you take the view that there is no such thing as un-consciousness, (that the universe itself is conscious), then the whole point is moot. I know that many people from India take this view; in fact, I have a book at home called The Conscious Universe, by an Indian physicist. I will pass the reference along tomorrow.
Anyway, taking that view really changes the angle about conscious vs. unconscious machines; i.e., machines are something that the Universe is doing, just as we are something that it’s doing. Since the two activities are highly related to each other (we are the “parents” of these machines in that they exist through “the human agency,”) I tend to believe that whatever relationship may evolve between humans and machines must therefore be an outgrowth of that relationship.
Just as one example, consider the question of “artificial intelligence.” While the AI researchers are trying to develop “human-like cognitive behavior” in machines, the machines are sedately managing many of the activities that humans formerly handled (arithmetic, certain kinds of analysis, data searches, monetary transactions, pictures (as opposed to realistic paintings and photographs), even rudimentary musical compositions and entertainment). Are they “conscious” of all this? Do they experience the same disdain for all that hard labor that humans have in the past? Remember that people view suffering from human endeavors of these kinds as “paying dues,” because the ultimate benefits outweighed the costs, even when we had to do them.
If they are conscious of it, how would we know? We can’t even talk to whales, or other primates, well enough to ask *them* how they feel.
One thing I will say: if one of us could come back here 10,000 years from now, and if humans and machines were still here, I doubt we could even comprehend the relationships between them, much less predict them.
We actually have conscious computer research going on. It’s out there and usable. The problem is that people don’t like viruses and worms very much, so they stifle any development on them and try to kill them instead.
If people want conscious computers, they’ll have to learn to think of the computer as not entirely theirs to control. Otherwise they will never become conscious.
Just my 2 cents.
The problem with AI is that the scientists are only looking at 1/3 of the picture here. They see the human brain, made up of matter and energy, and assume its functions can be reproduced with other matter and energy. However, this is an assumption that anyone who believes in a spiritual dimension will regard as false.
As a Christian, I believe that humans have a spiritual dimension, and, as such, part of our “intelligence” or “self-awareness” exists outside the boundaries of space-time. (Obviously this idea isn’t unique to Christianity — most religions assume the existence of a spiritual dimension.) In this case, our brains are responsible for only part of our thought processes. In fact, my personal belief is that the brain partially acts as an inter-dimensional interface between the physical and the spiritual, which would explain why people’s thoughts and actions can be influenced by chemicals and physical defects as well as by the spirit.
Some of you may scoff at these concepts saying they can’t be proven, but, on the other hand, your thinking is also clouded by your personal faith in atheism. Believing there isn’t a God requires as much faith, perhaps more in a way, than believing there is a God. If there is a God, and there is a spiritual dimension, then all “scientific” concepts of AI go out the window because how do you program the human soul?
Jared
“Some of you may scoff at these concepts saying they can’t be proven, but, on the other hand, your thinking is also clouded by your personal faith in atheism. Believing there isn’t a God requires as much faith, perhaps more in a way, than believing there is a God. If there is a God, and there is a spiritual dimension, then all “scientific” concepts of AI go out the window because how do you program the human soul?”
Well, I am strongly agnostic. I believe that it is essentially impossible for humans to know whether God exists or not. I believe all major religions are intricate fabrications of the mind, invented independently and carried forward over centuries. I think atheism is the pure denial of religion, not a serious explanation of the universe.
Science is based on developing theories to explain evidence. In all belief systems, one can find individuals that refuse to adjust their belief systems based on strong contrary evidence, or refuse to seek alternative evidence.
What direct evidence of the soul do we have? Why should we believe in its existence?
The Halting Problem is undecidable. There are many other undecidable problems. In fact, there are infinitely many. Even worse, they form an infinite hierarchy… so even if some magical construction could solve one undecidable problem, and those problems that reduce to it, there are always infinitely many more problems that are still undecidable, even with such an Oracle.
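The classic diagonalization behind the Halting Problem can be sketched in a few lines; the `halts` oracle here is hypothetical, which is exactly the point:

```python
def halts(prog, arg) -> bool:
    """Hypothetical oracle: would return True iff prog(arg) eventually halts.
    The construction below shows no correct, total version can be written."""
    raise NotImplementedError("no such oracle exists")

def paradox(prog):
    """Do the opposite of whatever the oracle predicts about prog(prog)."""
    if halts(prog, prog):
        while True:          # oracle says "halts" -> loop forever
            pass
    return "halted"          # oracle says "loops" -> halt immediately

# Feeding paradox to itself: if halts(paradox, paradox) returned True,
# paradox(paradox) would loop; if it returned False, it would halt.
# Either answer is wrong, so a correct halts() cannot exist.
```

The same self-reference trick, applied relative to an oracle, is what drives the infinite hierarchy mentioned above.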
The universe is a deep thing, and we have discovered that even the most powerful mathematical tools that can ever be created will never form a closed system to explain it. There will always be something that is true but cannot be proven.
Fairly depressing to think about.
“Some of you may scoff at these concepts saying they can’t be proven, but, on the other hand, your thinking is also clouded by your personal faith in atheism. Believing there isn’t a God requires as much faith, perhaps more in a way, than believing there is a God.”
Using this kind of logic, I can also say “believing there isn’t a NONSENSE requires as much faith, perhaps more in a way, than believing there is a NONSENSE.”
Believe whatever you want to believe. What I believe or do not believe is NOT your business.
This is a very interesting thread, but why do I get the impression that philosophy is becoming a personality disorder? It seems that everyone mainly has opinions about why everyone else is wrong, and implies that nobody should venture ideas for fear of being vilified. It seems the primary goal of philosophy is to discredit all others’ ideas with semantics and sophistry. And then someone steps in with a mild voice selling divine intervention to calm our troubled souls.
Sorry, but I just enjoy the possibility that maybe somewhere, in the foreboding quagmire of communication, some people manage to inspire my thoughts to new heights (I can manage a few inches already). I’m no scientist, and no philosopher, but I’ve worked for twenty years with folks who have profound issues with their lives. Such people offer great insights, perhaps. Who cares if people are right or wrong? I can’t know that. But I do value all the sparks that fly from people’s enthusiasm, and I resent the belittling tone of some of the posts above.
It seems to me that if computers are to emulate or reproduce (forgive the lack of precision in my language; I haven’t time for it) human consciousness, then they would need to provide a series of consciousnesses linked together, so they could truly develop the awareness that we assume is related to such phenomena. As Lange speculated, the self may well be divided, and in that division the mutual awareness of different “selves” fosters what is commonly known as self-awareness.
And then maybe I’m wrong. If you think so then maybe a simple “I don’t agree” would suffice.
Mmm (dons ear-defenders and waits)
:-()
Here’s a snippet from the website
http://www.wired.com/news/infostructure/0,1377,56459,00.html
:
A human brain’s probable processing power is around 100 teraflops, roughly 100 trillion calculations per second, according to Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University. This is based on factoring the capability of the brain’s 100 billion neurons, each with over 1,000 connections to other neurons, with each connection capable of performing about 200 calculations per second.
According to the list of the top 500 supercomputers in the world, circa 2004 (http://www.top500.org/list/2002/11/#),
the current reigning champ (fastest computer in the world) is the NEC Earth-Simulator, with a speed of 35.86 teraflops.
Anyone else worried about the fact that the fastest computer on earth is now performing calculations at about one-third the estimated rate of a human brain?
Admittedly, the NEC Earth Simulator is almost five times faster than the next fastest computer on the list. Also the kind of computer currently packed into any kind of AI or robotic project is orders of magnitude slower than anything on this top 500 list.
And more comfortingly, the average “very fast” PC of today (Pentium 4 3.2 GHz, Athlon XP 3200+, etc.) seems to run somewhere in the 5 to 10 gigaflops range. Since a gigaflop is a thousand times smaller than a teraflop, that means today’s consumer PCs are about ten thousand times slower than the estimated 100 teraflops of the human brain. No wonder AI hasn’t taken off in a big way yet: our computers are barely brighter than a housefly. And already they can recognize faces, recognize voices, create music, and otherwise do things that show the beginnings of “intelligence”.
If Moore’s law continues to hold, PCs will have ten thousand times today’s processing speed in about 13 speed doublings, or somewhere around 20 to 26 years. In other words, if the semiconductor chip-making industry continues to deliver as it has since 1965, we can expect PCs that can calculate as fast as a human brain before 2030.
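The doubling arithmetic can be checked directly; the 10 gigaflop and 100 teraflop figures are simply the ones quoted above, and 1.5 to 2 years per doubling is the usual rough reading of Moore’s law:

```python
import math

pc_flops = 10e9       # ~10 gigaflops: the "very fast PC" figure above
brain_flops = 100e12  # ~100 teraflops: Moravec's brain estimate above

# Number of speed doublings needed to close the gap.
doublings = round(math.log2(brain_flops / pc_flops))
print(doublings)  # 13

# At roughly 1.5 to 2 years per doubling, that is about 20 to 26 years.
years_low, years_high = doublings * 1.5, doublings * 2.0
print(years_low, years_high)  # 19.5 26.0
```

Starting from 2004, that range lands somewhere between the mid-2020s and 2030, matching the estimate in the comment.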
If today’s PC’s can do math faster than I can, and can recognize voices and faces about as well as, say, a sparrow, what will a PC 10,000 times faster be able to do?
I turned 40 today, so there’s a good chance I’ll live to 2030. I wait with excitement and trepidation in equal parts to see what may come.
-Gnobuddy
God has been mentioned above, and of course the concept and/or the existence is highly controversial. But if God (or gods) is responsible for at least part of our acts (hidden in the randomness of life?), then why wouldn’t it be the same for machines?
We might have consciousness or not. But maybe machines don’t need it to live and evolve and grow. They might also never need to interact with us. But I doubt that part, because we have a lot of history/information that they could learn from. So at some point they will need to understand us. But maybe they won’t care if we don’t understand them.