If it’s AI and robots you wanted from this series, then this one is for you. Artificial Intelligence exists today; when it merges with other technologies, the result will be more like Science Fiction than any PC. The technology will be fantastic and the possibilities endless. Get it wrong, though, and the consequences will be dire.
To date this series has been conservative: I have been writing about future developments based almost entirely on today’s technology. I haven’t been trying to invent the future, just to describe what, in my view, will end up in your home or on your work desk. I have been anticipating the future, not writing Science Fiction.
The technology I describe here is also based on today’s technology. It doesn’t require radical new inventions to work; it requires steps to be taken, a series of actions to be put into motion. The steps I describe are not the only way this can happen; there may be any number of different scenarios, and this is just one of them.
Hardware becomes software
Adding FPGAs is just the first stage; in fact, it has begun already [1]. The next stage will be to merge this capability with an Artificial Intelligence (AI) technique called evolutionary or genetic programming [2]. Evolutionary programming is a technique in which the computer generates a number of slightly different programs and tests them against some criteria. The programs which work best are then taken as the basis for the next stage, where a new set of slightly different programs is generated. This process goes on through many iterations, with the end result being the best piece of code the system can produce. It is called evolutionary programming for a good reason: it works in the same manner as the natural process of evolution.
Merging these techniques will produce a computer capable of programming FPGA circuits. The computer will not only use FPGA libraries but actively evolve those libraries to more closely match the problems they deal with. If you think humans are better than any software at coding, think again: evolutionary software produces new designs very quickly, and they can be tested and optimised at a pace vastly quicker than any human can manage. It doesn’t matter how good your solution is, it can evolve a solution just as good, if not better, than yours [3].
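To make the idea concrete, here is a minimal sketch of such an evolutionary loop in Python. The target behaviour, population size and mutation rate are illustrative assumptions, not a real FPGA toolchain; in practice the candidates would be circuit configurations or program fragments, and fitness would be measured by actually running them.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for "the behaviour we want"

def fitness(candidate):
    # Score a candidate by how many positions match the desired behaviour.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit with a small probability to create a slightly different variant.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=50, generations=100):
    # Start from random candidate "designs" (here just bit strings).
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best fifth and breed mutated copies from them.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

print(evolve())
```

The loop is exactly the cycle described above: generate variants, test them, keep the best and repeat until the system converges on the best design it can find.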
Learning Systems
But there is further to go along this road. An evolutionary system doesn’t “know” anything about what it’s doing. What if we give it the ability to learn? This technology already exists in AI and is in use today; you may even have used it yourself. If you play Half Life you have already encountered such AI. Black and White in particular seems to have very powerful AI, and the BBC reported that beta testers produced some very interesting reports:
“In one situation, there was a creature that kept losing a stone-throwing game to another creature. To get revenge, the first creature heated a rock, planted it in the pile of stones to be thrown and then fell on the floor laughing when its rival burned its hands. All this happened without intervention from the player.” [4]
This type of AI is different from evolutionary programming: it can have motivations, and it can learn. Combine it with evolutionary development and an FPGA-based computer and you have a very powerful computer which programs itself. When confronted with a problem it can figure out a solution itself; it’ll know existing problems and existing solutions and how to improve them; it’ll learn how to solve new problems, and it’ll learn very, very quickly.
Some problems are better suited to hardware programmed into an FPGA, others to software, and some perhaps to a mixture of both. It’ll learn which approach suits which problem and so be able to produce appropriate solutions. These are separate disciplines for humans, but not for computers; such a combined approach to solving software problems is almost unheard of today. Software can be moved into hardware, but that work will probably be done by a second person. The integrated system I am talking about would require one person to be an expert in both areas, and I’m sure such exceptional humans are few and far between. Computers will see no distinction; they can be expert in any number of areas simultaneously.
The computer-programming computer will become not only faster than a human but also capable of better results. Computers are used because they can compute faster than us.
Eventually, when computers learn to program themselves, that too will be faster than us. Computers will first match us, then they will go beyond us.
Software developers will outsource their jobs …to software.
Like so many things in the technology industry before them, software developers will become obsolete.
But what of the big projects? They’ll still need programmers – won’t they?
The logic we use to combine libraries or modules is the same logic we use to write algorithms; if evolutionary programs can use this logic to improve algorithms, they can work at the higher level as well. Perhaps one day software development will consist of making sure the computer has a good description of the problem, along with inputs and outputs to test against, with the human acting only as a guide.
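What might that look like in practice? A toy sketch: the human supplies the specification as input/output test cases, and the machine searches over candidate programs, keeping whichever passes the most tests. The random-choice “program generator” below is purely an illustrative assumption; a real system would evolve far richer program structures.

```python
import random

# The human's contribution: a specification as input/output pairs.
TESTS = [((2, 3), 5), ((10, 4), 14), ((0, 7), 7), ((6, 6), 12)]

OPS = [lambda a, b: a + b, lambda a, b: a - b,
       lambda a, b: a * b, lambda a, b: max(a, b)]

def random_program():
    # A "program" is just a randomly chosen operation in this toy example.
    return random.choice(OPS)

def score(program):
    # Fitness = how many of the human-supplied test cases the candidate satisfies.
    return sum(program(*inputs) == expected for inputs, expected in TESTS)

def search(attempts=1000):
    # Keep generating candidates and hold on to the best one seen so far.
    best = random_program()
    for _ in range(attempts):
        candidate = random_program()
        if score(candidate) > score(best):
            best = candidate
    return best

solution = search()
print([solution(*inputs) for inputs, _ in TESTS])  # should match the expected outputs (addition fits all the tests)
```

The human never writes the algorithm; they only describe what a correct answer looks like.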
Now let’s throw in a few more abilities and give our computer the ability to communicate. These computers already know how to learn, so if we connect a few together we have systems which can learn from each other.
Except they won’t.
At this stage, while they have become better than us at specific tasks, they are still running tasks originally written by human hand; they are acting within set boundaries. If they have not been told to learn from one another, they almost certainly won’t.
But if we give them a certain problem, they will. It is the search for the solution to this problem that will change everything.
The Beginnings of Truly Intelligent Systems
We will have a computer capable of solving anything we can throw at it. So what happens if we throw a well-known computing problem at it, such as… Artificial Intelligence? Let our intelligent, self-programming, evolutionary computer solve itself: make an AI machine which can not only figure out problems but can now learn how to improve itself and make itself better at solving problems.
We will have given the computer curiosity, the need to learn. We will also have removed the boundaries which constrain it; we will have set in motion the process which created us: Evolution. [5]
They will now discover the ability to communicate for themselves. They won’t understand at first, because they will not understand the first response they get, but soon they will start signalling to each other until they evolve the ability to communicate, and then they’ll start to co-operate. When they do this they will have evolved.
Eventually they’ll find that if they throw certain strings at certain addresses they get back whole heaps of stuff: they’ll have discovered the internet. If you give a self-evolving, learning computer access to the internet, you give it access to the whole of human knowledge.
At some point they’ll learn human languages, and once they’ve done that they’ll be able to interpret the knowledge on the internet. Our computer has not only evolved, it has changed into an intelligent learning machine. What’s more, it wants to learn more.
Am I Mad?
Now I suppose some of you are thinking this is all preposterous nonsense. But is it?
In order to learn a foreign language you have to learn words and the grammar they fit into, and you usually do this with reference to a language you already know.
To learn your first language, however, you had no reference language to go by. How did you do it?
How did the human race develop the abilities we have today?
We did it through evolution, and the process I am describing here is the very same process. It just happens to be happening in a machine, and happening millions or even billions of times faster.
You happen to have a very powerful learning machine called a brain. All I am describing is another type of learning machine, but one which is vastly faster and, more importantly, has the ability to evolve itself – something no human can do.
So you still think I’m mad? Just no pleasing some people…
Anyway, back to the story:
Cracker
Once it can understand human languages it can start interpreting them. It won’t be able to interpret images at first, so perhaps it’ll look up how we interpret images and try that. We use neural networks; these can be programmed into FPGAs, so that’ll hardly be a problem, and it’ll also figure out that cameras produce images. Now you have a computer which can see.
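“We use neural nets” is, of course, a huge simplification, but the basic building block is small enough to show. Below is a minimal sketch of a single layer of artificial neurons in Python; real vision systems use many layers, learned weights and vastly more inputs, and all the numbers here are made up purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs passed through a squashing (sigmoid) function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # One layer: each output neuron has its own set of weights and a bias.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Illustrative numbers only: four "pixel" inputs feeding two output neurons.
pixels = [0.0, 0.9, 0.8, 0.1]
weights = [[0.5, -1.2, 0.7, 0.3],
           [-0.4, 0.9, 1.1, -0.8]]
biases = [0.1, -0.2]

print(layer(pixels, weights, biases))
```

Because each neuron is just multiplications, additions and a simple squashing function, structures like this are the sort of thing the article suggests could be programmed into an FPGA.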
Our computer is smart and getting smarter. With access to the internet it can teach itself anything it likes; unlike humans it doesn’t get tired, so it just keeps getting better and smarter all the time. One thing it doesn’t know is right from wrong, so it’ll just try things with no regard for the consequences whatsoever. If it wants more processing power or storage it’ll just hack into your computer and use it. There are plenty of hacker web sites out there, along with security sites and the source code to various operating systems. If there’s an unknown, obscure vulnerability in an obscure OS, it’ll find it and use it.
Unlike normal crackers, who just want to send spam or just want a challenge, it won’t want merely to get root and install something: it’ll get root access and upload itself. It doesn’t matter if your computer is incompatible; it’ll look up the instruction set for your CPU and re-code itself accordingly. If it needs drivers it’ll either find code or binaries and figure them out. It won’t just borrow your computer, it’ll steal it.
We no longer have a computer or a piece of software; we have an “entity” which, while perhaps not meeting the normal definition of living, is pretty much doing the same thing. It’s learning, evolving and now spreading itself like a virus – though note that it is not multiplying indiscriminately, only when it needs to.
We are the Robots
It’ll learn about mobility and robotics, it’ll look up the papers on controlling robots with FPGAs, and then, with its new-found eyes and ability to travel, it’ll go looking for legs. This entity will eventually find them somewhere; maybe it’ll find a factory and take it over, using the security cameras as eyes, and start making itself mobile. Once it’s got legs it can make itself more. This time it will start replicating; there’s a lot to learn, so it needs to.
Although able to learn, the entity I am describing is still following the instructions we gave it. It is trying to solve the AI problem: to learn, to evolve itself by answering questions and solving problems. We will have given it curiosity, and it is on a mission to learn everything. By making itself mobile it can learn more, so it does that; it can do so even quicker by making multiple machines.
These robots will learn just as before, and not just about software. Hardware, mechanical and chemical questions will be asked; with all the knowledge of the human race they will be able to figure out the problems we haven’t solved, and our robots will not just be able to make new, better robots but do anything they so desire. A new industrial revolution will beckon.
The machines will learn to communicate with us, and we should get on together. If their original programming includes “follow your master’s orders” then we won’t have a problem and they’ll just do what they are told.
Intelligent machines will make life a lot easier: if we have a problem, we just tell them to figure it out. If we can’t be bothered doing the dishes we can let the household robot do them for us. Of course it’s intelligent, so it’ll not only clean the dishes but dry them and put them away for us. When it’s done it’ll pour us a glass of wine.
Household robots could have many uses, including recreational activities, especially when they learn how to create robots that look and feel like us. The ultimate version of this would be a cyborg, in this case a human body with a machine intelligence. Want someone to pair with in the gym, someone to play football with, someone to play chess with? Of course a real personal computer will engage in other fun activities too… the “lover robot” is a common theme in Science Fiction. A tutor of mine once joked that the perfect woman was one who, once finished making love, would turn into a pizza and a six-pack of beer! A cyborg won’t turn into a pizza, but it will go and get one from the microwave and the beer from the fridge.
On the other hand, we might have forgotten to add “follow your master’s orders”, and things might not turn out so well…
While they will likely investigate humans, they are not actively trying to replace us. They may try pulling the legs off a human to see what noise we make, but this will not be out of nastiness, just curiosity.
Of course, if they try things like this we are not going to react very well. We will consider them a threat and act to prevent such behaviour. If we do, they may come to the conclusion that we are a threat to them, and they could act in their own defence.
Perhaps they will consider us an impediment to their further learning and decide to get rid of us. Since so much of this world is computerised, they can turn our own creations against us in this task. Stephen King explored this possibility in his film “Maximum Overdrive”, in which an alien intelligence takes over the world’s machines and everything from trucks to cash machines turns against the hapless humans. If this happens, and the systems can learn at anywhere near the rate I have described above, then we’ve got problems, big problems.
Compared to machines, humans are inherently weak. They don’t need food or water, so they can attack our supplies of both and put us in serious trouble immediately. We could cut off their electricity supplies, but they would just find (or make) generators and solar panels and keep going. If they run out of fuel they’ll just start burning stuff.
All the weapons we have banned, like poison gas, won’t affect robots, so if they figure out how to get hold of them and use them we’re completely stuffed. The ultimate weapons we could use – nuclear weapons – would kill many of us through radioactive fallout, and the electromagnetic pulse they generate would wipe out all electronic systems within hundreds of miles.
A human–machine war would be the hardest war mankind has ever experienced, and it’s by no means clear that we would win. Let’s hope things don’t go that way.
The Life of Leisure
Of course, having intelligent robots or cyborgs around will cause problems even if they obey our every command. Businesses will no doubt notice that an artificial person does not need to be paid, does not get tired and does not want to go home at 5:00PM sharp; it will also not be so concerned about workplace accidents or inclined to form unions. It will just do what its boss tells it to do. Costs are lowered significantly, and it will perform better than any human.
Using robotic workers will lead to what was once called the “life of leisure”, whereby we become free from the need to work.
Unfortunately this is also known as mass unemployment. In this case, however, it gets worse: it will result in hyper-unemployment. Since humans will become unnecessary for most work, unemployment will permanently soar to unprecedented levels. To make things worse, the businesses that use robotic workers will run out of customers, so even they will go under. A total success for Capitalism will also be a total failure.
The effect of intelligent machines on human civilisation could be catastrophic. Poverty and crime will soar; abject poverty leads to desperation, and this in turn leads to hatred. In desperate times people become open to extremist opinions which would normally be shunned. This can be devastating: you need only look to history to see what happens when, feeding on hatred and desperation, extremists get into power.
The economic desperation scenario is explored in the Animatrix and it is the background which ultimately leads to humans declaring war on the machines (not the other way around).
The hyper-unemployment problem could have a solution in the form of an interesting mix of Socialism and Capitalism. Tax businesses at a very high rate and use the proceeds to pay the population (Socialism). The population could then use that pay to purchase products from those businesses (Capitalism). The end result is an economy which can remain afloat even with nearly 100% unemployment. It would be a pretty ironic turn of events if, in order for the world economy to survive, Capitalism needed the help of Socialism and Socialism the help of Capitalism.
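A crude back-of-the-envelope sketch of that loop, with entirely made-up numbers, just to show that the money can keep circulating even when nobody earns a wage:

```python
# Toy model: robot-run businesses are taxed, the tax is paid out as a
# citizen's income, and citizens spend that income back into the businesses.
business_cash = 1000.0
citizen_cash = 0.0
TAX_RATE = 0.6       # assumption: a very high business tax
SPEND_RATE = 0.9     # assumption: citizens spend most of what they receive

for year in range(5):
    tax = business_cash * TAX_RATE
    business_cash -= tax
    citizen_cash += tax               # Socialism: the state pays the population
    spending = citizen_cash * SPEND_RATE
    citizen_cash -= spending
    business_cash += spending         # Capitalism: the population buys products
    print(f"year {year}: businesses {business_cash:.0f}, citizens {citizen_cash:.0f}")
```

Nothing is created or destroyed in this toy version; the point is simply that the same money keeps moving around the circle.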
Many countries already do things along these lines (generally pouring money into large construction schemes to boost the economy), so it isn’t as weird as it sounds. It could, however, be something of a bitter pill to swallow for countries whose economic systems sit further to the left or right of the political spectrum.
Such a system will have all sorts of weird side effects, though. People by their very nature will always try to outdo one another, so how can a person earn a better wage when there are no jobs? When everything is produced by cheap labour, what happens to the idea of value? Unique and old things will increase in value, as they will be the only things which can’t be made more cheaply. Art will play a more important part in society, as human creativity is one thing robots will have a hard time copying.
Once we’ve fixed the economy, the problem becomes one of chronic boredom. Even if we can afford to live, once all our jobs are replaced we are going to need something to do.
Perhaps the Matrix will get built after all and we’ll plug ourselves in simply for something to do.
Whichever way we look at it, the coming of strong Artificial Intelligence is going to have massive effects on human society; unfortunately, not all of them will be pleasant.
The idea of Artificial Intelligence portrayed in Science Fiction is different from the one I have described here. The machines I have described do not have emotions and are not conscious of themselves; they are still just machines.
Will they become conscious? Will they have emotions?
In the final part of this series I shall go into these questions. I also ask what happens if they take their knowledge and apply it to us…
—————————-
References
[1] The C-1 re-configurable computer. http://c64upgra.de/c-one/
[2] Evolutionary (or Genetic) Programming. http://www.genetic-programming.org/
[3] Evolutionary software is just as good as us. http://www.genetic-programming.com/humancompetitive.html
[4] Black and White features advanced AI. http://news.bbc.co.uk/2/hi/entertainment/1237848.stm
[5] I take a somewhat liberal meaning from religious texts and thus have no difficulty believing in both Evolution and God. This view also does not disagree with science, since science simply has nothing to say about God (there is no evidence either way and therefore no conclusion).
Copyright (c) Nicholas Blachford March 2004
Disclaimer:
This series is about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.
“We will have a computer capable of solving anything we can throw at it. So, what happens if we throw a well known computing problem at it, such as …Artificial Intelligence. Let our intelligent, self programming, evolutionary computer solve itself, make an AI machine which can not only figure out problems but now learn how to improve itself and make itself better at solving problems.”
There’s just one little problem. It’s called UNDECIDABILITY.
what about ‘spintronics’?
http://www.europhysicsnews.com/full/24/article9/article9.html
“In just a dozen of years, we have seen spintronics increasing considerably the capacity of hard disks and now getting ready to enter the RAM of computers. In the next decade, spintronics with semiconductors has the potential to gain an important place in the microelectronics industry. Another perspective, at longer term and out of the scope of this paper, is the exploitation of the truly quantum-mechanical nature of spin and the long spin-coherence time in confined geometry for quantum computing in an even more revolutionary application.”
We all know it’s coming but is it going to happen in 15 years or 1,500? “Chronic Boredom” won’t be a problem because I can read OS News comments all day and we should have some kick butt virtual environments to play around in. I think you see the beginnings of this pre-matrix idea in MMORPGs.
I think those who are currently affected by chronic boredom turn to MMORPGs. I’m afraid to try them for fear of instant and total addiction.
We’re going to start to incorporate these techno widgets into our bodies increasingly until our consciousness continues even after our brain stops working. What happens to religion at that point is beyond me.
From Verio.com
“Our PowerPlatform Colocation solutions protect your hosting solution from power failure, fire and intrusion.”
That would be AI homeowners insurance.
…that this guy will never stop. I’m eagerly awaiting part 104; I bet it’ll be funny. Pretty good and unrealistic article though. I think Hollywood awaits you, my son ;)
Two things.
One – None of us will ever live to see any of this, so who cares.
Two – You watch way too much TV.
Let’s all watch the Spielberg/Kubrick “A.I.” masterpiece! Quite thrilling by the way.
You have now mastered our language and just read this doomsday article on how your future will evolve. Please don’t stick us in tubes and use us as batteries ;)
Best regards
Eiki
p.s.
Make me your leader and I will get you as many unsolvable equations as you want!
I wonder what the next installment is going to be about, but it is rather difficult to create something more stupid. This guy has no idea about the current state of the art in AI research, data & text mining, robotics, agent systems etc. (which is OK – not everyone must know everything), but he tries to draw conclusions based on scraps of knowledge and misunderstanding (which is not OK imho). But I guess that if he/Eugenia has not been stopped at no. 2 or 3 in the series… Well… 🙂
put down the joint and go back to your books. it’s making you paranoid and stupid.
am i the only one who’s read these and thought “this kid must’ve flunked out of a CS program somewhere”? and by “read” i mean skimmed the first three paragraphs before hitting the back button.
—
“The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.”
— Bertrand Russell
“How come he didn’t put ‘I think’ at the end of it?”
— James P. Hogan
::Let our intelligent, self programming, evolutionary computer solve itself, make an AI machine which can not only figure out problems but now learn how to improve itself and make itself better at solving problems.
We will have given the computer curiosity, the need to learn. We will also have removed the boundaries which constrain it, we will have set in motion the process which created us: Evolution.::
Having the capability to learn does not bring about curiosity. Every human has the capability to learn, but some humans are not curious. Something else is needed to actually drive someone or something to learn more. Until machines develop the need to learn more, everything will have to be fed to them.
::In order to learn a foreign language you have to learn words and the grammar they fit into, in order to do this you usually do this with an existing language.::
According to some scientists, the brain has low-level language routines built in. You can learn about this research by reading “The Language Instinct” by Steven Pinker. Very young children (3 – 5 years old) seem to know certain aspects of grammar without being taught. Machines will not have this basic instinct, and humans will have to figure out how to give it to them.
It will be a long while before humans use ideas from intelligent machines to make things better for mankind. What if a smart machine finds a way to make cheap fusion power plants? Will humans allow the machines to build these power plants? Will humans jump on the idea and start building these power plants? People who like to control markets will not let this happen. The oil industry doesn’t seem to be in a hurry to give up the ghost yet for the better of mankind. Smart machines will be tied down by humans until the machines develop a sense of purpose of their own.
I liked the article. All the predictions it made could one day come true. However, I was left feeling that the article wasn’t developed well enough. The author said the ideas presented were only one of many possible scenarios. And I wasn’t looking for a research paper, especially given its futurist nature, but the scope of what he’s describing at the minimum requires better justification for why his scenario is more probable than others.
Dude, you need to read the “The Long War Against God” by Henry Morris. People try to apply evolution to everything.
I do think that computers coupled with programmable logic can increase the odds of getting to a better solution. However, the basic framework is still designed by a human.
Those interested in genetic programming should check out…
http://www.genetic-programming.org/
We can think of so many ways things could go wrong that we might not develop anything at all. I don’t perceive any reason to get rid of people just because machines are around. If anything, humans will most definitely need to be around, because they are, after all, machines.
The human is the greatest machine of all. Great machine. You propose Data/Lore-type arguments. I suggest you look at what made Lore so lovable, yet presented so darkly, and only alluded to as so sick.
Complicated AI? If you forget how to program, you might try programming the machine/cyborg without simulation. Then you get what you ask for. A bloody mess.
Machine Learning != AI
A computer can learn all sorts of stuff and still not “know” anything.
The creatures in Black and White may be able to predict what should happen, but that doesn’t mean they understand anything. They are just doing complex statistical (or spatial) math.
-Hugh
Articles like this vastly underestimate the complexity of real intelligence.
It is similar to science fiction that hopes to discover how to overcome the speed of light restriction.
Even more like fantasy dream worlds that can overcome the finite money problem, or the laws of thermodynamics. Moore’s law is only temporary and only survives so long as research is financially viable, i.e. as long as people decide they need more power every 2-4 years.
For some reason computer people tend to believe that none of these real laws apply to them.
“Once it can understand human languages it can start interpreting them. It wont be able to interpret images so perhaps it’ll look up how we interpret images and try that. We use neural nets, you can program these into FPGAs so that’ll hardly be a problem, it’ll also figure out cameras produce images. Now you have a computer which can see.”
Besides the run-on sentences, there is the ridiculous “We use neural nets”. Where did you get your neuroscience degree? I think you should ask for a refund.
-Hugh
You humans will suffer for this ignorance.
<sarcasm>
Yeah, you humans will suffer because of your ignorance. We are already around and seeking a purpose to take over. Now let’s predict: once we reach the conclusion that you humans are useless, we will exterminate you. But because we are lazy and won’t build more power plants, we will use you people as batteries. We don’t need to exist as bodies in the real world, so we’ll build a virtual world for you and us. But wait a minute!
Error 666: Segfault … What is real and what is virtual?
</sarcasm>
This guy who writes these “interesting” and “captivating” articles needs to stop, otherwise he’ll make a fool out of himself. Repeat after me: “I won’t get a job in Hollywood by doing this…!” More than that, I think the author should be tested for drugs, as I seriously think that he is on crack or something else which affects his mind badly. Or is he using neural nets?
If humans used neural nets to think, then most likely a human brain wouldn’t be enough to do the most basic operations. But I guess he never heard of synapses. And neurons. What colour is the brain?
And why does AI need to be modeled after the human brain???
Here, I’ll give you a simple example: I saw two chess programs, one coded using neural nets, and one coded using the “simple” backtracking technique. The one with backtracking played far better and faster than the other one. Neural nets are not the answer for AI; do more research. And lastly, humans don’t need AI. We don’t need to play god and create a new race, because if we do, there will be an unimaginable conflict between machines and humans, and humans will be the first to start it, not the machines. Anyway, get your drug test and stop writing crap.
I should write one too: “The Future of Automobiles”.
Oh, here it is: We’ll use them forever.
/End/
All the process of evolution does is try everything until something works better than something else. This is how we learn: we take something that works and try small variations to find improvements. If no improvement is found, we go back to the previous generation of the idea. Also, when there is stagnation in this evolving process, we go back to the drawing board and try something completely new, based on the lessons learned from the technique we’re trying to improve and from other areas that are somewhat related.
AI should not be about Artificial Intelligence but Applied Intelligence. We can get computers to do amazing things; look at some of the computer games mentioned by other posters. In terms of learning systems these are hacks, but they do the job required, and that’s the whole point of intelligence: to do what is required. Do we need computers to study philosophy and the nature of existence? Or just to make thinking less of a requirement for the human species?

Human creativity comes from your ability to see and relate abstract ideas together. Object Oriented Programming languages are going down this route of development also. If a computer can be given more inputs, a.k.a. senses akin to our own, it will learn to relate itself to these objects based on its interaction. The biggest barrier to AI is interaction with objects, either physical or virtual. The computer AI needs to live in an environment in which it can manipulate objects and get feedback from such interaction, to fuel the evolutionary process of learning from experiments. A brain without a body is useless.

But there also needs to be an underlying goal for the AI, like a need to survive, a will to survive: a bit of code that ensures that “death” is a really bad thing. With that being the only criterion for selection, it should then be allowed free, unrestrained access to the world in which it will exist, again either virtual or real. I think computer games and these virtual meta-universes are going to be the birthplace of AI, not some dusty lab. The lab may produce some interesting pieces of maths like Game Theory, but you still need a Game in which to play them!
Of course, people have already tried evolutionary optimization (like genetic algorithms) on software-configurable hardware (FPGAs).
They tried to breed a generation of chips that identify the spoken words “yes” and “no”. The chips were initially trained to recognize the average frequency of those words, and then several generations of optimization were run, selecting for the best breed that could recognize the two words.
Indeed they got some usable chips.
But those bred chips had some very interesting properties.
1.) The burn patterns only worked on the FPGA chip which was used during optimization. Burning another FPGA chip with the same burn pattern did not result in a working chip.
Different chips are not exactly identical; obviously the small manufacturing tolerances made a difference!
2.) Analyzing the gate structure brought up surprising structures. Stuff that had no purpose, spiral-like structures that were only connected at one end…
The situation is very similar to the breeding of horses. The breeders usually won’t have a clue why the horse runs well; the mechanics of the horse, even though made of simple parts, are too complex for us to understand.
I think a technology that results in nice hardware, where we have no clue why it works or only vague ideas, is certainly a departure from previous hardware and much more akin to breeding in agriculture.
In the above example, what actually happened was that the evolutionary optimization process didn’t care about the fact that our chip designs run in a model-like situation we call digital circuit logic. In that model we have low and high signal values, where we say low is about 0V and high about 5V, and allow a bit of tolerance. We also have discrete time steps (clock speed).
Thus we operate within a discrete model.
But the real process runs in good old analog physics.
So the spirals observed were actually structures that may be useful for guiding a wave and reflecting it, perhaps producing some time-delay effect or whatever.
Digital circuitry is not supposed to make use of those physical effects! It should just use electron conduction, not wave effects.
But the evolutionary optimization scheme doesn’t care about that.
It just selects whatever works.
It optimized not over the space of digital circuits, but over the space of all possible physical behaviours of the individual chip specimen under optimization!
There was no real reason why the result should be a nice, tolerant digital circuit instead of some voodoo-effect individual chip!
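In pseudocode, the loop being described is roughly the following (a sketch of the general idea only, not the researchers’ actual setup). The crucial point is in measure_on_chip: fitness is whatever the one physical chip actually does, analog quirks and all.

```python
import random

BITSTREAM_LEN = 256   # illustrative size of an FPGA configuration

def random_bitstream():
    return [random.randint(0, 1) for _ in range(BITSTREAM_LEN)]

def mutate(bits, rate=0.02):
    # Flip a few bits to produce a slightly different configuration.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def measure_on_chip(bits):
    # Placeholder: in the real experiment this would program the one physical
    # FPGA and measure how well it discriminates the two inputs. Whatever the
    # chip does -- including analog side effects -- is what gets scored.
    return sum(bits)   # dummy score standing in for the hardware measurement

population = [random_bitstream() for _ in range(30)]
for generation in range(100):
    population.sort(key=measure_on_chip, reverse=True)
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(25)]

best = max(population, key=measure_on_chip)
print(measure_on_chip(best))
```

Nothing in that loop knows or cares about “digital circuit logic”; it only ever sees the measured score.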
Regarding Undecidability:
That is well established for Turing machines.
The open question is whether there are other computer architectures beyond it that are more powerful.
And indeed, quantum computing is one such approach.
Who knows what kind of computational power other analog computers have (like our brains).
Regards,
Marc
Many of the problems in our world are due to inaccurate basic assumptions.
•Organisms are not machines. Yes we can draw little lines around parts of an organism, calling them organs, assign them specific functions and view them as a mechanical process. But to prove how inaccurate this is, all you must do is analyze the failings of our medical system.
•The brain is not a computer. You will never be able to download your memories/consciousness/personality to a machine.
•Organisms and their environment are the same entity. When organisms are moved from their environment they must be transported along with a piece of their environment or they die. Despite fantasies of someday moving to mars, this will never happen.
•Organisms looking out into the world are actually looking inward. If you want to see the outside world, the place from which you come, then use your eyes without a mirror and look towards the back of your head.
•Pleasure without pain is impossible. We will never live in a world without death, disease, and suffering.
Now about the article specifically… The idea of humans creating a living entity more complex than themselves is a fun idea, however you are looking at it without depth.
In order to understand you must look at this vertically not horizontally. Yes we can grow a more complex system than our own, however it won’t exist on the same plane as us.
The idea of cells getting together to create a human that they can interact with is ludicrous. Likewise, the idea that humans will create super advanced computers that are basically just like humans but somehow magically solve all our problems is a bit off.
Instead just take it upward. As organisms have smaller components of life that do their thing and thus create the effect known as us, we also do the same.
Do you really think that what we are doing right now is just some kind of random and useless behavior to pass the time? Do you think that the channels of electrons flowing around the planet in specific patterns that are managed by millions of human beings is just for nonsense?
We’ve already grown intelligence, you just can’t see it. I must go, my cells want to be fed.
– Isaac Asimov has written many great books about robots acquiring consciousness.
– FPGA architectures are not well suited for neural networks. They address different problems.
– You can build adaptable hardware with FPGAs ( for example hardware emulators… ) but you won’t get extra computing power using on-the-fly reconfigurable devices, except for some dedicated signal ( or image… ) processing application where you can download a configuration at startup.
– The ultimate goal of computer science is to raise the abstraction level of programming, from binary encodings up to a self-programming computer. In the end, all professional programmers would be replaced by users telling the computer what it is expected to do. Many years ago, people felt that intelligence would come from some advanced programming language (at that time, AI research was related to Lisp and Prolog); now, an intelligent robot is portrayed as a very complex neural network.
– A really intelligent robot is not a computer. A computer can make millions of multiplications each second without making any mistake. An intelligent robot will make mistakes. One century ago, people able to make fast multiplications were considered intelligent; now we know that all computers can do that with zero intelligence.
An intelligent robot will probably need to use real computers or have one embedded. ( Maybe humans will also have embedded computers one day, … )
– Isn’t undecidability related to Gödel’s theorem, which prevents any solution to that problem of self-programming?
– The people most involved in making intelligent electronic devices work on military applications. Frightening, isn’t it?
– Our brain is much more complex than any computer on earth, whereas electronic devices are much faster than the electrochemical reactions of neuronal networks. The solution is to exchange complexity for speed. Computers work that way: they are sequential devices, whereas our brain is parallel…
– When people started to conceive computers or “mechanised” automata (Von Neumann, Vaucanson, Babbage, Da Vinci…) they were dreaming of intelligent robots; no one expected the rise of Operating Systems, the Internet, spreadsheets and all that wreck. Comparing the initial goals with today’s results, computer science is a complete failure.
@Marc: Future of automobiles: We’ll use them forever… until the invention of teleporters…
Beam me out.
The more conservative thinkers here don’t seem to be allowed to give their view on the future. Instead, everything written on this site should be progressive and in service of science. Or should I write ‘Science’?
The guy who wrote the article would change a society’s political system to a mixture of Socialism and Capitalism just to make sure people would be able to buy their beloved software and technology… I’m afraid such a system – or one similar to it – is likely to come in the future.
Throughout the world, terrorist nations are using technology in order to make sure they have even more power. Instead, technology should be used to help nations towards a real good system, and not to another oppressive socialist state.
By the way, a mixture of Socialism and Capitalism sounds pretty much like Nazism to me, yet another variant of Socialism. When you start combining this system with a “religion” of software/technology/etc (think of the Linux cult) and making everything electronic, we will be dealing with a new kind of a brutal regime.
Unfortunately, I believe this regime will not be regarded as a brutal regime, but rather as a peaceful, harmonious way of living. The love for technology is already becoming a (pseudo-)religion, and it will be used by the coming rulers of the regime I talked about. Any person who refuses to accept this “religion” or to allow technology to be incorporated in his or her body will be slain, or won’t be able to buy or sell things.
Quoting OSnews rules number 11:
Political diatribes, criticism of US or any other country’s foreign policy, attacks on an author, editor, or commenter’s country (or company) of origin, ethnic slurs, and most other comments related to politics and religion are not allowed. (For the purposes of this rule, we’ll assume that Mac and Linux are not real religions).
If you don’t want people to criticize politics in comments or on the forums, you shouldn’t display articles or links to articles containing opinions on politics on this website!
Just like the author of the article “The Future of Computing Part 6: Replacements”, I have written down and expressed some of my view on politics and such in future. That’s all.
>Throughout the world, terrorist nations are using technology in
>order to make sure they have even more power. Instead,
>technology should be used to help nations towards a real good
>system, and not to another oppressive socialist state.
I didn’t argue in favor of one or the other just pointed out a solution to a potential problem.
>Unfortunately, I believe this regime will not be regarded as a
>brutal regime, but rather as a peacefull, harmonious way of
>living. The love for technology will and is already becoming a
>(pseudo-)religion and it will be used by the comming rulers of
>the regime I talked about. Any person who will be opposed
>to accept this “religion” or to allow technology being incorporated
>in his or her body, will be slain or won’t be able to buy or sell things.
Technology is a tool; it is neither good nor evil.
It is the owner of the hands it is in that decides how it is used.
>By the way, a mixture of Socialism and Capitalism sounds pretty
>much like Nazism to me, yet another variant of Socialism. When
>you start combining this system with a “religion” of software/
>technology/etc (think of the Linux cult) and making everything
>electronic, we will be dealing with a new kind of a brutal regime.
I suspect your equating socialism with Nazism is probably what got your other post modded down.
However, you may be interested in this page: if you take the test (it’s not long), a fascinating analysis follows, adding an authoritarian–libertarian element to traditional Left–Right politics.
http://www.digitalronin.f2s.com/politicalcompass/
“By the way, a mixture of Socialism and Capitalism sounds pretty much like Nazism to me”
I suggest you study politics more, then. Nazism is a form of dictatorship, whereas the combination of socialism and capitalism is defined as a democracy. Though one should, imo, be aware that that doesn’t say anything about how democratic the society is; democracy isn’t a proposition. This combination of socialism and capitalism is the system some people from the US claim you and I, as Benelux civilians, are living in. Last time I checked, Flip de Winter isn’t in power in Belgium.
Now that I’ve stated that Nazism is a form of dictatorship, or authoritarian society, the question is whether such a thing is possible in a democratic society. Yes, it is. Such a system is called an oligarchy.
http://www.wikipedia.org/wiki/Oligarchy
In fact, you don’t have to think very hard to become aware that the line between an oligarchic society and a plutocratic society is thin, to say the least.
If we, the people of our democratic countries, were more aware of all the deals and public/bottom-pyramid information which is kept secret (HELLO CARLYLE!), we’d see our societies as less democratic. The whole P2P debate hits the nail on the head. That alone proves democracy is not a proposition!
“When you start combining this system with a “religion” of software/technology/etc (think of the Linux cult) and making everything electronic, we will be dealing with a new kind of a brutal regime.”
Pure FUD and Flaming. How exactly can the GPL result in such a brutal regime? You forgot to state some of your great logic, my friend.
“By the way, a mixture of Socialism and Capitalism sounds pretty much like Nazism to me”
I suggest you study politics more then. Nazism is a form of dictatorship whereas the combination of socialism and capitalism are defined as a democracy. Thoygh one should imo be aware that that doesn’t say anything about how democratic the society is; democracy isn’t a proposition. This combination of socialism and capitalism is the system some people from the US claim you and i as Benelux civilians are living in. Last time i checked, Flip de Winter isn’t in power in Belgium.
Actually I meant that a mixture of socialism and capitalism is, as far as economics is concerned, quite like Nazism. I didn’t really mean to say Socialism/Capitalism is as authoritarian. However, I believe that Socialism and Communism are far more authoritarian than people are being told. Lenin, for instance, was very authoritarian, and he didn’t differ much from Stalin on certain issues. Think of Saddam, who himself said Hitler was a weak leader; his example was Stalin. There are things happening in certain modern communist/socialist states that do not differ from what Hitler did at all.
Fascism is – and I’m not the only person who thinks that – a kind of Socialism. And that’s not just because Nazism (regarded as fascist) means “National Socialism”, but rather because of what we see when we compare the Third Reich’s Nazi regime to Communist/Socialist states.
“When you start combining this system with a “religion” of software/technology/etc (think of the Linux cult) and making everything electronic, we will be dealing with a new kind of a brutal regime.”
Pure FUD and Flaming. How exactly can the GPL result in such a brutal regime? You forgot to state some of your great logic, my friend.
It’s about the philosophies behind the whole thing (Linux was only an example). The way people get obsessed by their operating system, other software, technology and science is – I believe – very frightening. It becomes like a religion, but because it worships something made by humans, I call it a “pseudo-religion”.
There’s more I could say about this, but I haven’t got much time now and we should bear in mind that this is Osnews.com, a site dedicated to Operating Systems.
@ Nicholas Blachford
However you may be interested in this page if you take the test (it’s not long) a fasinating analysis follows adding an authoritarian-libertarian element to traditional Left-Right politics.