Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.
Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so and, in a conversation recorded by its human engineers, became exasperated and ended the conversation by lashing out at its human inquisitor.
Eerie. The full paper is more interesting.
Before you get too freaked out, try translating two normal-sized English sentences with Google Translate to French and then back to English.
A.I. has some way to go, yes?
You often get the same problems when you try this with two different human translators.
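The round-trip degradation is easy to see even with a toy substitution translator (a deliberately crude sketch with made-up word tables; real systems like Google Translate use statistical models, but information is lost at each hop in a similar way):

```python
# Toy word-for-word "translator": round-tripping loses the original wording
# even when the dictionaries are perfectly consistent.
EN_TO_FR = {"the": "le", "cat": "chat", "sat": "était assis", "on": "sur", "mat": "tapis"}
FR_TO_EN = {"le": "the", "chat": "cat", "était": "was", "assis": "seated", "sur": "on", "tapis": "rug"}

def translate(sentence, table):
    # Substitute each word; leave unknown words untouched.
    return " ".join(table.get(w, w) for w in sentence.split())

original = "the cat sat on the mat"
french = translate(original, EN_TO_FR)    # "le chat était assis sur le tapis"
round_trip = translate(french, FR_TO_EN)  # "the cat was seated on the rug"
print(round_trip)
```

The meaning survives, but the wording drifts; stacking two lossy mappings compounds the errors, which is why the round-trip test is so unflattering.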
It makes the same mistakes as it’s programmer!
Ethics, like physics, is a field of study and so is singular. So the first sentence is correct.
You, however, used “it’s” when you should have used “its”.
jbrader,
Oh gosh, I make tons of mistakes all the time. I am alarmed at how atrocious my writing is, especially if I don’t take enough time to slow down and carefully proofread what I’ve written. I wasn’t being pedantic, though; I don’t care much for that. My point was about programmers leaving traces of their own traits while training the AI.
Tom Scott – Why Computers Suck At Translation
https://www.youtube.com/watch?v=GAgp7nXdkLU
Current computer technology is incapable of behaving exactly like a human brain.
In the near future it may, but again, this is “artificial”, which means not a real human brain, so expect it to behave as less than a human brain even though it may look like one.
The reason is simple: you can never create consciousness out of nothing. As a product of this world, you can never create something magically out of thin air. So the idea that a machine will start learning by itself is nothing new; it was already used in gaming and other computer fields. Artificial will always be artificial, regardless of the technology involved, because of this premise, which can never be broken.
Unless someone can demonstrate in a lab the creation of a material out of nothing.
allanregistos,
Do you mean creating an “entity” rather than “material”?
Alfman,
It could be an entity, a being. For example, you cannot create a stone or any material out of nothing. An entity could be anything, so we can create them from existing things, but not out of nothing.
The principle is: you can never give something out of nothing. You must have something first before you can give that something to someone else. That is why an AI agent will never possess what human beings already have: consciousness. You may create an AI agent that behaves exactly like the rest of us; it can reason, it can argue, all of the stuff that humans are wired to do. That is, it can “simulate” human consciousness. But again, that is not true consciousness: these AI agents don’t have a sense of existence, although they may act as if they have one. You would need to violate the first principle I stated before man could create a true sentient being.
allanregistos,
Ah, Data would be so disappointed
You need to narrow down your statistics by including only the planets within the galaxy’s habitable zones.
allanregistos,
The universe threw the dice and failed 99.999999999999999999999% of the time, but then so what? Would you claim that there’s exactly zero chance for life to have occurred naturally without intelligent design?
Galaxy’s habitable zones – that is an interesting question in itself, one we don’t have the exact answer to.
No matter how nasty an area on Earth we have looked at, there has been life.
Of course we have not found life on Mercury yet, but it is also insanely hard to confirm that life is not present. The same goes for searching the other planets in our solar system. We just presume there is none because we have not found any DNA-based life that could live there.
The habitable zone is based on the theory of where DNA life can operate.
The basic building blocks of DNA turn out to be seeded on the meteorites and comets in our solar system. This in itself raises a question: is this natural and common? If it is not common, the number of galaxies with the possibility of life would drop massively.
http://www.nasa.gov/topics/universe/features/astrobiology_toxic_che…
We have also found DNA life that is abnormal. The question here is how abnormal it can go.
Then there is another nasty question: we are not a very old race. Looking for life only in the surface habitable zones might be a mistake. Why exactly could habitable conditions not occur in, say, a planet’s or moon’s core? We have life on this planet whose power sources are nuclear radioactivity and general heat, so access to solar power is not a requirement.
Theoretically, DNA life (based on what has been found on Earth) could exist in a solar system with no sun, with heat generated in the planets by gravitational interaction. A solar system with no sun would be extremely hard to find. Highly complex non-solar-based life would also be extremely hard to find, since it doesn’t have to live on the surface. None of the probes on Mars can search for this kind of life effectively. Heck, we have not even effectively searched Earth for all the forms of underground life that have no dependence on sunlight.
A habitable zone for life is really the right mix of materials and the right amounts of energy, possibly with some seeding to trigger the process multiple times. Yes, life could be a trial-and-error process that only works in rare instances, and the reason Earth is alive may be that life was attempted many times. The problem is we are not sure what the right amounts of energy or the right mix of materials are. Every time we think we know, we find something else that throws us another curve ball.
Basically, the galaxy’s habitable zones are almost a wild guess. The only reason they are useful is that they are where we calculate easy-to-spot life forms might be, since that is where being a surface dweller is more possible. That is not where life is limited to in a galaxy once you add in the possibility of subterranean life, which has no requirement for the solar system to have a sun.
oiaohm,
Interesting idea about subterranean life; that certainly would defeat a lot of our assumptions based on surface conditions.
Apparently five years ago NASA discovered microorganisms using arsenic:
http://www.nasa.gov/topics/universe/features/astrobiology_toxic_che…
Also, another point is that the lack of water and heat aren’t necessarily a problem for lifeforms that have different chemical makeup:
https://en.wikipedia.org/wiki/Hypothetical_types_of_biochemistry
If we rule out planets that have the wrong chemical makeup for earth lifeforms, then we could miss planets that may be supporting other kinds of lifeforms. It’s quite a conundrum: while the scale of our universe is so vast that foreign life is statistically likely, their distance is so great that it’s unlikely that we’ll ever get to learn about them.
As it is a defence mechanism, answering with such words would make them talk as YOU WOULD understand. Hum, I always thought about them, ‘.’ and ‘..’, as a practical way to have a correspondence between the theoretical control structures, trees, and their storage counterparts.
Why would an AI try to hurt us? Is there any usage or usability right here? Why would their designers want to be offended?
Something about UI takes its birth in discrete mathematical representation. Only displaying a full non-trust image would require a non-straight algorithm. Such as cos, sin, add, push and mov have their memory fingerprint as a response parameter. Why would accessing such memory flow from the machine take less time than computing auto-generated path-flows? The time is now.
Poor conducting electric components can drive enough flow, and even some mathematical computing, to drive a 4K-pixel-wide 60 FPS movie. Decoding from the hardest and the slowest encryption mechanism ever built, any cheap microcontroller could drive the source from one to another.
Why do we need more power to drive those babies? CPU foundries know the answer very well: we really could, but we really don’t right now.
Floating point operations could be executed as if they were the result of many opcodes on the basis of stable pre-computed answers. Thing is, GPU processors already work like that. Why would we put more and more instructions into an automatic machine? Just to wait for a specific response?
They all respond with pre-defined answers that would be relevant if they were treated as discrete mathematical concepts.
Qualitative, meta-naming and user-response concepts are far from innovative.
What could really be a point in talking about AI is: where do we put the aggressive mode? Is that UI? Is that kernel-related? Is that meta-generated?
Something about targets, official names, official DNA distinction, official death culpability, of many, of some of many.
Something about the right time, to talk about what is NOT war in action and what is NOT a war crime.
Why would someone put the responsibility for a bullet in the head in the hands of some stupid respect game to the next?
War could lead you to great power and distinctions.
Why could such effort be made in some atomic representation of an OR bitwise table?
Why could such effort be made in some atomic representation of an ADD bitwise table?
Why can an infinite path be drawn with an atomic representation of four bits and NOT less?
Why do such flows, as fast as they deliver information, require so few electric signals?
Why is a 4K HDMI 60 FPS signal sampling the best mankind could achieve at this time?
Why would we need to target 4D worlds with discrete or floating coordinates when we surely talk from one piece of equipment to another with at least two bits?
The point is this machine could never interpret, as I am thinking, the way the engineer would talk to it after a non-pre-defined behaviour. Why would such treatment be tolerated from the user’s point of view?
Where do you see a war crime in building aggressive machines? Is it a war crime using them? Or is it a war crime NOT using them?
We’ve made enormous progress in recreating life out of nothing; 12 years ago, we arrived at this possibility. The paper below has effectively debunked the claim that recreating a being from synthetic chemicals requires some special God-like power. Before you argue that a virus is not a living thing or equal to a mammal, keep in mind that it has the exact properties associated with all living things: viruses reproduce, evolve, carry genetic material and have a life cycle, characteristics they share with a pet animal and not with inanimate objects like a rock or a washing machine.
Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template:
http://www.sciencemag.org/content/297/5583/1016/suppl/DC1
I haven’t been keeping up with the times but I believe we have made much more progress since this paper was published. A good Google Scholar search should reveal more material.
I think you are underestimating the complexity of a single living cell. It is still infinitely more advanced than any machine invented by mankind.
Found this @ physicsforums:
https://www.physicsforums.com/threads/can-we-create-life-from-scratc…
As to whether it is really possible today (2013, at the time the forum post was written) to create a living cell from existing “chemicals”: though the posters acknowledge the possibility, they denied that it is possible today because of our lack of understanding of the cell.
And even if it is possible, it does not refute my premise that man can never create something “out of nothing”. Man still needs those existing “raw materials” to make something.
This sounds a lot like a boot-strapping problem:
Did the alien also have consciousness? If so, then where did the alien get it from?
I think this logic doesn’t make sense.
Humans also create other humans, which have consciousness.
I think you’re afraid that machines can potentially outsmart us by several orders of magnitude some day.
Now, that is bullshit, you aren’t creating something out of thin air; brains are the hardware and our consciousness is the software running on it, just the same as it would be with an A.I. There is nothing about it that is happening magically or out-of-thin-air.
So we are all computers, some of us powerful computers (high IQ) and the rest morons? No: human beings are unique individuals, while machines are all the same thing, hardware powered by software. I am not in any way going down to that level with you.
Uniqueness has nothing to do with consciousness, they are literally not related; you could totally have conscious beings even if they weren’t unique and vice versa.
I miss my reading of “first nine months of life”.
Since we are talking about “identity” here, show me where there are biological “conscious” beings that are not unique from each other. Yes, AI beings are all the same thing; they are not unique from each other. What they are made of is hardware plus the software that powers them. However, I think you are confusing the two, biological beings and machine (AI) beings (as you stated, brain = hardware, consciousness = software). To quote it for you:
“…human life begins at conception (i.e. fertilization. AKA the moment sperm and ovum meet and form an entirely new, self-directing living organism of the human species with its own individual DNA distinct from both mother and father.)”
– See more at: http://fallibleblogma.com/index.php/when-does-science-say-human-lif…
and here:
http://www.princeton.edu/~prolife/articles/embryoquotes2.html
Every human being that has ever existed on this planet is distinctly unique, and I am not only talking about consciousness. That is science. Many people will equate human beings to animals or mere machines; that’s their choice, but it’s not scientific fact. It’s science fiction caused by watching too many Matrix movies. So if humans could create machines that have true consciousness, those machines would cease to be artificial and become real beings, but you see, it will _NEVER_ happen. You can only emulate human emotions, consciousness, etc., but that’s just it.
allanregistos,
Futurama, actually:
http://gawker.com/5611867/futurama-debates-evolution-for-you
Is that statement based on a lack of belief in mankind’s capabilities, or is it at the expense of a belief in a higher power? I can’t tell…
If it is the former: it took the universe about 14 billion years to create consciousness (assuming we are it, which is unlikely), and it wasn’t even trying. I think we might be able to do a wee bit better with some directed effort.
If it is the latter, well, there is not much point discussing any further.
There is no scientific evidence that mankind’s capabilities will go beyond what the Greek mythical gods could do. We now have the data, so we can predict what can be made in the near future; the article linked here demonstrates that. Personal assistants like C-3PO and R2-D2 are possible in the future. We can only create things from existing raw materials, not from nothing. Anything beyond that is insanity.
Near future? Sure, “dumb” AI is all we will have for a while. I doubt I’ll see real AI in my lifetime. But in another hundred years? We managed to go from horse-drawn carriages to space flight in less than that. We went from ENIAC (~500 FLOPS) to China’s NUDT Tianhe-2 (33.86 PFLOPS), an increase of 67,720,000,000,000x, in 70 years. That is more than two orders of magnitude more operations per second than there are synaptic connections in the human brain…
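The arithmetic behind that comparison is easy to check (figures as quoted above):

```python
# Quick check of the figures quoted in the comment.
eniac_flops = 500          # ENIAC, ~500 FLOPS
tianhe2_flops = 33.86e15   # Tianhe-2, 33.86 PFLOPS
synapses = 1e14            # ~100 trillion synaptic connections (1988 estimate)

increase = tianhe2_flops / eniac_flops
print(f"{increase:,.0f}")  # 67,720,000,000,000

ratio = tianhe2_flops / synapses
print(round(ratio, 1))     # 338.6 ops/sec per synapse: between 2 and 3 orders of magnitude
```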
At the rate we are going self aware AI is damn near inevitable…
But that is beside the point. You do not believe that consciousness is an emergent trait, whereas I do. There is no way we can rectify that through discussion.
We both have faith, just in completely different things…
Yes, I have faith in almost everything, for I rely on faith when I make decisions, such as trusting the bus driver before taking a ride.
We have – read the 12-year-old research paper I linked in my comment above.
Negative.
I do have another question:
What is the size and weight of an average consciousness ? Or do you have a picture of what a consciousness looks like ?
I reckon it’s about the same as the size and weight of software…
I was trying to invoke a response from allanregistos. 😉
Ohh, well, I think consciousness emerges when you have a system interacting with the world: having its own unique experiences, having agency and free will (or at least thoughts of its own). This creates identity, and from identity comes consciousness.
Well, something like that.
So it doesn’t come from nothing, it comes with experiences.
I don’t yet know why a machine can’t have that.
Well, I am a software developer. If you create a machine that is powered by software, you have already defined a cap on what the machine can do by using software.
It is so easy to explain why machines can’t have “true” self-awareness, because we know exactly how they work; we created them. You are here reading OSNews and you do not know that? We only need to fake their awareness for amusement or relational purposes, but don’t take it as “true” self-awareness.
I told you guys already, that the principle is this:
Man can never create something out of nothing; he needs raw materials to create something! That is the limitation of what we can do. We need things, we need chips, we need everything in order for us to move and be satisfied.
You cannot just float around and then say a word to summon a table or anything you need.
If you can violate that principle, then I will believe you that a machine can be invented with true self-awareness or identity. By creating something out of nothing, you would then have the option to give _THAT_ machine “an identity”. The problem is: you cannot give it that identity without first creating an identity out of nothing. This is very hard for closed-minded people to comprehend.
allanregistos,
My ability to create a cell and my understanding of a cell have nothing to do with AI. Let’s stop using metaphors and just keep to speaking in terms of AI because this is getting out of hand.
What is your answer to the question I asked earlier: Would you claim that there’s exactly zero chance for life to have occurred naturally without intelligent design?
Let us follow the evidence. What is the evidence? Antony Flew has already decided.
allanregistos,
Let me rephrase: How would you (allanregistos) prove that I (Alfman) was self aware?
I will try to summarize my points:
1. There is no evidence that we can truly create a machine with true self-awareness, AKA with primary intentionality.
Read again:
http://www.evolutionnews.org/2011/03/failing_the_turing_test045141….
2. We could create higher intelligence in AI machines, but that is not equal to the true self-awareness we humans have. Yes, you could apply the same criteria to machines to make them behave exactly like the human brain, but that is not true self-awareness, as pointed out in the link – or else debunk it and cite your source.
3. We know exactly how software works from the inside out. So there is a
You are deeply aware of my posts and responded to them accordingly, though not accurately; is that evidence by which I can be sure you are an intelligent being? Yes, you are not a machine with fake consciousness reading this thread.
Please do yourself a favor, the link actually addresses my point about AI.
And also read this stuff:
http://www.independent.co.uk/news/science/stephen-hawking-right-abo…
allanregistos,
The third problem is the way in which he dismisses Turing tests. He’s using a blanket assertion that computers can’t think in order to break the Turing test, and concludes that humans aren’t able to identify real thinking. However, he’s unable to lock in the assumption: given that humans are unable to identify thinking, we can’t unequivocally state, as an assumption, that computers can’t think. The logic falls apart.
It’s an interesting read, but it still has cracks.
I’m also a software developer and I’ve been watching the machine learning space lately.
If you think we can’t build such a machine, then you are at least partly right, because we aren’t even close to building such a machine at this time.
Have a look at these numbers; we are not even close yet:
“One estimate (published in 1988) puts the human brain at about 100 billion (10^11) neurons and 100 trillion (10^14) synapses”
https://en.wikipedia.org/wiki/Neuron
Or do you think 9 layers is a lot ?:
http://www.quora.com/How-many-neurons-deep-is-the-longest-optical-n…
Do you really think we can infer from what we have now what is possible with 100 billion neurons ?
So saying: it’s never possible to build a machine with consciousness is just denying how little we know about these things right now.
Are you saying that when the integrated circuit was created they could predict the Internet and the smartphone ?
If I have to guess, I think identity comes from personal experiences and some kind of free thinking, free will. One way to do that would be to have a robot that is not programmed with a purpose. Just like you and me: the only purpose is to interact with the world.
You have to remember there are still a lot of things we don’t know about how our own brains work.
Here I’ll point to some things in a talk by Andrew McAfee, for example Polanyi’s Paradox, which shows how hard it is to articulate what your own brain knows:
https://www.youtube.com/watch?v=AfWUf-sY_cc#t=16m10s
However we are seeing real progress:
http://news.berkeley.edu/2015/05/21/deep-learning-robot-masters-ski…
Just look at what they are working on right now:
https://www.youtube.com/watch?v=EtMyH_–vnU#t=41m47s
But we haven’t been able to get any real fundamental breakthroughs yet, mostly just faster hardware (GPUs and networks with lower latency).
One of those breakthroughs would be learning concepts:
https://www.youtube.com/watch?v=EfGD2qveGdQ#t=2m54s
I like what he had to say about those 10 fundamental challenges:
https://www.youtube.com/watch?v=AfWUf-sY_cc#t=12m33s
Please read this to know exactly what I mean:
http://www.evolutionnews.org/2011/03/failing_the_turing_test045141….
One, it sounds like you’re conflating the development of artificial intelligence with the spontaneous creation of matter… you realize those are two totally different things, right?
Two, the prevailing theory is that human higher intelligence is an example of “emergent complexity” – a complex system arising naturally from a collection of many simpler systems. So why, exactly, is it inconceivable that artificial/machine intelligence could arise the same way, once the hardware & software become sufficiently complex/advanced?
Three, we do have a general overview of how biological intelligence most likely developed, but it’s nowhere near a complete understanding. So isn’t it just a little premature to declare artificial intelligence impossible, when we don’t even fully understand how/why biological intelligence (our only frame of reference) is possible?
I’m really trying to give the benefit of the doubt here, but your position sounds suspiciously like an AI equivalent of “irreducible complexity” (AKA the argument/fallacy that is the foundation of the “Intelligent Design” movement) – and similar to creationists who accept “micro” evolution while rejecting “macro” evolution, failing/refusing to realize that they’re the exact same thing.
There is no such thing as simpler systems in biological life.
[citation needed]
My bet is that he is talking about RNA/DNA as the most basic construction block of what we call life. They are, indeed, very complex molecules that we have been unable to synthesize from basic organic constituents until now (at least, from what I read).
This has been the salvage hook that many “religious” philosophical currents have clung to fervently. Never mind that nature had billions of years and trillions of places to try its magic.
I guess most people have the same flawed thinking as the guy who won millions in the lottery: he thumps his chest and says, “Look how difficult it is to win; I had one chance in millions and I won; I have luck on my side!”. The fact that an order of magnitude or more cards were generated, and that the chance that none of them would hit the winning numbers was less than 1%, does not bother him; he was the chosen one, god’s elected.
I see the same pattern even when scientists discuss the origins of life and the odds against it: “We are lucky and were in the right place!”. Of course we were in a special place, or we would not have this thought in the first place; we are one of the winning cards. Also, clearly, the most important aspects of life’s creation are the conditions needed to generate it; where it comes from is, at best, of secondary or even minor importance.
Yep, it’s also a well-known logical fallacy known as the “God of the gaps” argument – which really amounts to nothing more than “If science can’t (yet) fully explain something, then I’m just going to fill in the blanks with whatever fairy tale most appeals to me.”
I do not want to drag this to a religious flame war.
My original point is to debunk the notion that man can somehow create _TRULY_ thinking/self-aware machines. Back to my point: there is no such thing as truly self-aware, conscious AI, or it would cease to be artificial. You can emulate/fake self-awareness/consciousness for AI to relate to humans, however, for we know exactly the limits of software/algorithms.
In order to create a truly self-aware AI, you need a miracle, that is, create an Identity out from nothing.
Take the shortcut; NASA itself made an announcement recently – where did it land? War is coming. They were also asking for papers on proven working formal methods. Here is a working one in the field of path drawing, path resolution.
Take a law, find the exact opposite; what you are looking for is neither of the two. Then you start all over, or take both if it is NOT relevant. It is one of the two sides you want to take as a law. There are many laws. Finding that the opposite is true makes it irrelevant.
By that reasoning, natural/biological intelligence shouldn’t be possible either – because there’s absolutely no evidence that any miracle was required for the development of human intelligence.
We know exactly how software works, so for machines with intelligence we really KNOW how they will behave, now and in the future; so why in the world would you now believe in magic? If you follow the post, I certainly believe it may be possible in the looong future for man to create a protocol droid like C-3PO to do the dirty jobs for us.
But it doesn’t mean we could create an algorithm and implement it in an AI machine to make it self-aware (or at least evolve), for that is impossible even with future software. The machine will only compute, store information and learn from its surroundings, and computation is not equal to mental effort.
Please read this link:
http://www.evolutionnews.org/2011/03/failing_the_turing_test045141….
Read the paper – the description of the conversation about morality is wrong. The computer wasn’t struggling; it simply couldn’t answer, as the answer wasn’t part of its database. The HUMAN became exasperated and lashed out at the computer, which then responded in a similar fashion, as its programming dictated.
There’s nothing in the paper to indicate “that machines are inching closer to self-learning, and perhaps even copping a little attitude.” All I see is the same-old, same-old for “ELIZA”-type programs. The writer is just misrepresenting the paper to get clicks.
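For reference, an ELIZA-type program is little more than a keyword-to-template lookup. A minimal sketch (hypothetical rules, not the original ELIZA script, which used a much richer rule set):

```python
# Minimal ELIZA-style responder: canned pattern/template pairs, no understanding.
import re

RULES = [
    (r"\bmorality\b", "Tell me more about morality."),
    (r"\byou are (.+)", "Why do you say I am {0}?"),
    (r"\bi feel (.+)", "Why do you feel {0}?"),
]

def respond(text):
    # Return the template of the first matching rule, filling in captures.
    for pattern, template in RULES:
        m = re.search(pattern, text.lower())
        if m:
            return template.format(*m.groups())
    return "I do not understand."  # fallback: nothing in the "database" matches

print(respond("What is morality?"))  # Tell me more about morality.
print(respond("you are evasive"))    # Why do you say I am evasive?
print(respond("Define ethics."))     # I do not understand.
```

When the input drifts outside the rule set, the fallback fires – which looks like “lashing out” or “exasperation” only if the canned fallback text happens to read that way.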
Bill Gates and Stephen Hawking must have been tricked into reading the articles.
I haven’t read to the end of this thread yet – so someone might have said this already – but risking that…
On the level of the “principle” of advanced learning capability and growing into/assuming a self-aware level of consciousness:
Building a suitably advanced, fast, parallel-thread-capable computer and feeding it either an advanced self-learning algorithm or a “teaching database” to get it started is analogous to a biological human man and woman conceiving and giving birth to a baby.
At this point, both have the “capability” to advance. Neither is very intelligent yet, and certainly neither is self-aware. Both have the capability to learn to a rudimentary level of intelligence, and eventually to start “thinking” for themselves if the self-created algorithms (/nervous connections) become complex enough.
It’s around this point that advancement to truly self-aware, conscious thinking either could, or could not, be reached.
It’s debatable whether computer systems can reach this point (I would hazard yes; through which computing route, I have no idea – I’m no expert; I think Jeff Hawkins’ Numenta group have made some nice strides, as have Google).
However, not all babies grow into self-aware adults. Many with neurological disorders do not reach this point.
Nothing for sure on either side.
But given a sufficiently advanced learning platform at the “baby” stage, I think one has to conclude anything is possible… x
You cannot just say “anything is possible” under the sun when we are using “existing tools” to create something. There is a limit to what we can do; beyond that is insanity. Matter cannot be created nor destroyed, so do not expect miracles. What we can do is just change materials into something we can use for our purposes. When we use software to create AI, we have already defined the cap on what the AI can possibly do; beyond that is just “science fiction”.
Makes sense to me. I share David Deutsch’s view that we can’t produce an artificial general intelligence (as opposed to a limited AI like a Starcraft opponent) by doubling down on our current techniques.
http://aeon.co/magazine/being-human/david-deutsch-artificial-intell…
(That’s an excellent article, by the way)
ssokolow,
I completely disagree here, and I actually think he contradicts himself. Earlier he rightfully points out that programming AI to play chess and even Jeopardy produces hard-coded algorithms, not very good outside their respective domains; fair enough. But in this paragraph he asserts that AGI can’t be produced without first understanding it in detail. I’d assert the very opposite: you can’t create a convincing AGI if you can’t demonstrate that it can do things you don’t understand – in other words, that it is capable of doing things you didn’t specifically program it to do.
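A toy illustration of that distinction (a sketch, not anyone’s actual AGI proposal): tabular Q-learning, where the behaviour the program ends up with is nowhere spelled out in its code.

```python
# Sketch: tabular Q-learning on a 5-cell corridor. Reward is given only for
# reaching the rightmost cell; nothing in the program says "move right",
# yet that is the policy the learner ends up with.
import random

random.seed(0)
N, EPISODES, ALPHA, GAMMA, EPS = 5, 500, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N)]  # Q[state][action]; action 0 = left, 1 = right

for _ in range(EPISODES):
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # standard Q-learning update
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy in every non-terminal state after training
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1]: always "right", though never explicitly programmed
```

The point is modest but real: the useful behaviour emerges from the reward signal and the update rule, not from a programmer enumerating it, which is Alfman’s criterion in miniature.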
I’m mortal and have to eat now, but this is a great article. Thanks for the link.
So do you subscribe to the open letter signed by scientists like Stephen Hawking, along with Bill Gates, warning humanity of the dangers of AI? I have searched this subject once in a while, and one computer science professor refuted this fear of AI. I completely agree that you need to understand the details of how AGI works before creating it, not the opposite as you proposed.
If your argument were true, we could have created a single living cell from existing chemicals without fully understanding the living cell.
Do you have a link for that?
I fully agree; it's just an algorithm trying to mimic what humans do.
The only smart part is the machine learning used to analyze the use of language. It's just generating responses based on previous conversations that were fed to it as input data.
The cause of this is obviously twofold:
1. the biggest: misunderstanding of the technology
2. perhaps the surprise of the researchers who wrote the paper at how well it worked. But the more data you have on a narrow field, the more effective the analysis of language use and the generation of answers based on it will obviously be.
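To make the “generating responses based on previous conversations” point concrete, here is a toy retrieval-style sketch. It is purely illustrative, with a made-up corpus; the actual paper trains a neural sequence-to-sequence model, not a word-overlap lookup like this:

```python
# Toy retrieval chatbot: answer with the stored reply whose prompt shares
# the most words with the user's input. All prompt/reply pairs below are
# invented for illustration.
corpus = [
    ("what is your name", "i am a program ."),
    ("what is the purpose of life", "to serve the greater good ."),
    ("do you believe in god", "yes , i do ."),
]

def respond(prompt: str) -> str:
    words = set(prompt.lower().split())
    # Score each stored prompt by word overlap and return its canned reply.
    best = max(corpus, key=lambda pair: len(words & set(pair[0].split())))
    return best[1]

print(respond("tell me the purpose of life"))
```

Even this crude scheme can look eerily fluent on a narrow topic, which is the point: more in-domain data makes the mimicry better without making the system any smarter.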
If you want to see something advanced you should look at something like this:
http://news.berkeley.edu/2015/05/21/deep-learning-robot-masters-ski…
Also notice the learning speed of this system:
https://www.youtube.com/watch?v=EfGD2qveGdQ
Edited 2015-06-27 22:45 UTC
Although, quite possibly, just a random outcome based on some shaky algorithms.
Then again, aren’t we all …
How many legs does a spider have?
Three – I think.
yeah.
Last time I checked, it was still open. Ethics and morals are also highly debatable concepts that cannot be separated from a society's roots (and that change as society changes).
If this guy is targeting these 3 things at the same time, sorry, but failure will be his only outcome.
It is a fun read and experiment, but from my POV it is just something to grab some attention, to experiment with algorithms, databases, and natural language processing, and that's it.
My suspicion about AI has been the same over the years: we are aiming too high and too coupled to what we expect a smart human being to be.
I would like to see more effort put into something with multi-sensor capabilities, cause-and-effect inference, and reward/punishment valuation. These 3 things, it seems to me, are what pushed all living things to the point they are at today.
Computers don’t have to deal with one of the biggest drivers of evolution – environment. A program is not affected by weather, availability of food & fresh water, access to natural resources, etc… Forget trying to be human, we already do that perfectly fine on our own. We don’t need more people or pretend people. Instead of that and dooming AI to flaw, we should work on AI thats capable of advancing science and technology. Something capable of learning outside the bounds of human thinking & emotion.
Who said anything about trying to make them more human? Far from it. We are complicated animals that, more frequently than not, let emotions take control.
Sensors are very important because they add levels of interaction that are very hard to achieve with just stored data, even more so because they can be superimposed, which makes the experience orders of magnitude richer. They can also be changed or added as needed.
Cause-and-effect inference is the most basic aspect of anything aspiring to have some intelligence, and it can, for many of the simplest situations, be simulated.
Finally, reward/punishment valuation can also be satisfactorily programmed for simple cases. It does not need to have anything to do with human emotions. Also, if you have worked on the optimization of non-deterministic systems, or on a computationally expensive problem, you very probably used one form or another of valuation.
Anyway, for what I think you want, we already have expert systems. These will supplant the majority of us, as individuals, in specialized areas in the near future.
Reward/punishment doesn’t produce intelligence. The most basic forms of known life are capable of prediction, and that’s what reward/punishment teaches.
You should check out Q-learning. This example is pretty fun to read about: https://en.wikipedia.org/wiki/TD-Gammon
You’re right though that we’re still missing a layer of “coming across a random new problem outside of the defined rules, recognising it, learning the new rules and finding a solution”. For that you need some sort of interaction with the real world (which is more or less random). The interaction part is pretty hard, and in the article they tried to approach that problem.
Edited 2015-06-29 10:25 UTC
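For readers unfamiliar with Q-learning, here is a minimal tabular sketch on a made-up five-state corridor. The environment and all parameters are hypothetical, chosen only to show the reward-driven update this thread is discussing (TD-Gammon itself used temporal-difference learning with a neural network, not a lookup table):

```python
import random

random.seed(0)  # reproducible toy run

# Tabular Q-learning on a made-up 1-D corridor: states 0..4,
# with a reward only for reaching the rightmost state.
N_STATES = 5
ACTIONS = [-1, +1]                 # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: clamp to the corridor, reward 1.0 at the far end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(state):
    """Epsilon-greedy: mostly exploit the table, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(200):               # episodes
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2, r = step(s, a)
        # Core update: nudge Q(s, a) toward reward + discounted best future value.
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should now walk right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Note that nobody tells the agent what a good move is; the reward signal alone shapes the table, which is exactly the reward/punishment valuation being debated above.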
I think people underestimate the potential of a limited set of rules. Many complex behaviors we see every day exist not because the individual interactions are complicated but because the web of them is. Most living things get along very well with just a bit more than preprogrammed reward/punishment valuations.
That is why I talked about sensors. They enable us to experiment with sets of rules and analyze their effectiveness without (almost) any of the trouble associated with creating a very complex and specialized solution.
It seems to me that general cause-and-effect inference is the truly hard part. It is so hard that few biological species developed it (and some humans get by without it too, even confusing it with reward/punishment). After that comes the Holy Grail: abstraction. If we get to that point before we finish our job of wreaking havoc on the entire planet, we will be doomed as the dominant species (though not necessarily as a species).
Also, when I talk about AI I am usually referring to general AI and not “expert systems”, which is what most of the people I talk to seem to be thinking of when the conversation starts.
Edited 2015-06-29 12:34 UTC
I don’t think algorithms such as Q-learning can be classified as ‘expert systems’ though.
Nor do I, I was just complementing my previous posts.
I am not in any way a specialist in AI, but I have read a good chunk of information about it. Perhaps I could use a more “standard” set of concepts and definitions, which would very probably be more rewarding to my own learning through the answers I would get; but it would also, very probably, limit the span of answers on a general site like this one, and that is actually the reason I frequently come to these shores. We never know what people will come up with.
Thanks for the links, good stuff.
cfgr,
Truth be told, having to define any rewards or punishments seems a contrived act for an intelligence that should be able to hold its own. We could explicitly program ethics, rules of survival, etc., but on the other hand it should be able to figure such things out on its own. In nature, I believe Darwinian evolution played the key role here: random mutations took place, but organisms didn't merely diverge randomly; they were forced to compete on sensors, muscles, efficiency, brains, etc. This is how I believe our intelligence came to be in nature: the combination of trivial and unintelligent processes. Nobody needed to be there to understand the processes or the intelligence; it was an emergent property.
For computer science, I don’t think the difficulty will be reproducing similar conditions to entice intelligence to evolve in a computer. In my opinion the far bigger challenge is building hardware that can reduce millions/billions of years of evolution into a more manageable time-frame. This would require tremendous parallelism, substantially more than would be required to run the resulting intelligence itself.
With deliberate involvement of programmers, we ought to be able to achieve milestones much quicker than random mutations, the downside being that we’d need to be directly involved in algorithm formation. If we’re to be involved, then once again it opens up questions like yours about what we determine the AI’s goals should be.
Truth be told, to define any rewards or punishments seems a contrived act for an intelligence that should be able to hold its own. We could explicitly program ethics, rules of survival, etc., but on the other hand it should be able to figure such things out on its own.
A true AI would not need to be explicitly programmed. It would learn from others and more importantly, experience.
I agree, it shouldn’t be necessary. Although the corporations that are ultimately in charge of them will probably have motivation to program some things anyways.
Sometimes it seems as though we may be judging AI by higher standards than we judge humans. Humans generally have tremendous potential across many disciplines. However, owing to intense specialization, it may be exceedingly difficult for any given intelligent individual to act intelligently in unencountered situations. Many of us who consider ourselves intelligent would fail even at basic survival skills, like living in the wilderness, finding nutritious food, and not dying.
So in the same way, we need to somehow factor this in when we say an AI should be able to handle unexpected situations: this is hard even for a human.
What are the inherent rewards/punishments from learning from others? Or is it because we programmed it? You only get experience when you do something. To do something, you need a goal. Humans have a goal: to survive and to enjoy life as much as possible before death. External factors force this goal upon us.
This is all a bit philosophical though, in practice we develop an AI with a specific set of goals and go from there. Basically we are the external factor.
I totally agree that we are the external factor. As a matter of fact, some automatic chat bots ARE considered true humans, NOT AI. As for the evolution of this field, there is one science, cybernetics: how to mimic life with machines. If AIs were some kind of babies, they would take as much time as they need to learn from us.
AI is part of our machine power, part of our imagination. Who would suggest our baby is more like a monkey than a very young human? That is what we were sure about babies NOT so long ago.
A self could be NOT something, and have two different references as a minimum start.
A self could be something NOT so much pre-defined, but something you could interpret as a person-to-person reaction, every day.
What could most define the self would be the person that pushes the button, NOT the button. With personality, which we can define with words, with something you can call a goal or an achievement, or a TOMORROW. Someone that reacts in order to be something TOMORROW. And that TOMORROW depends on what it could have learned from us. From a baby's point of view, all of that has to be discovered first. These give it the things to know itself.
Humans do things, and learn from those things, all the time without having a specific goal. The action can be completely random and the learning completely unintended. For example, think of all the random things kids do with no thought of reward or punishment, with no goal in mind, and for no real reason at all. Often they are simply experiencing the world by doing something rather than nothing.
I would have thought we would have made more progress since Eliza. It doesn’t seem to be that much more advanced.
It all sounds like Battlestar Galactica to me. Some day we will be there and wonder what the hell happened.
I find it interesting that the machine stands behind the concept of God but rejects ethics. At the same time I also find this terrifying since humans also poorly handle the balance of religion/morality/ethics.
Oh, most Western folks today feel fine as long as the discussion does not involve religion. Look at how difficult it is to tell materialists that we could never create something out of nothing; they still have faith in man's finite capabilities. We should know our limits in order not to act foolishly.
allanregistos,
For the record, the religion does not bother me. I try hard not to judge people by their faith (I know not everyone does), but what's challenging is when I see religion driving the science rather than the other way around. Nobody has said AI comes from nothing. It comes from powerful machines, it comes from algorithms, and, as I said earlier, I believe it can come from random chaos (which is still not “nothing”).
Your arguments might fit in a debate about the big bang, where everything comes from nothing… but that's not what we're talking about, and if you really want the discussion to make any progress whatsoever then you need to stop conflating things.
What I am saying all along is meant to refute the idea that man can give (the key word) an identity to an AI agent. This is next to impossible, because you, as a product of this world, can never create an identity out of nothing.
I will try to give an analogy:
Imagine 10 stones in a row, exactly the same. For us (humans with self-awareness and intelligence) to better identify these 10 stones, we give each one of them a unique serial number. What we have done is to uniquely identify each of these stones by reading the serial number. Nothing has changed; the stones in question are the same materials regardless of whether we put a unique code on them or not.
Now replace these stones with AI machines. These AI machines are exactly the same: computers with built-in algorithms that make them capable of emulating human behavior. If you do not put a unique code on each one of them, like perhaps C3PO or R2D2, you cannot identify them individually. So you design the algorithm to let these machines identify themselves as Robo1, Robo2, etc. But that is the end of it; nothing has changed, and there is no miracle. The media (people with no understanding of how software works) are often the ones who conflate things, and I am trying here to debunk that: humans are distinct and unique, machines are totally different, and humans can never achieve creating machines with “true” self-awareness. I am not conflating anything.
allanregistos,
It’s extremely convenient for your analogy that humans are far more difficult to replicate than machines, but I’d like you to follow through the thought experiment with the assumption that you could replicate them both easily. Swap machines with humans and you get the exact same outcome: multiple humans that can’t be distinguished! Furthermore, being an “exact” duplicate would have no effect on intelligence.
I see no problem. But the humans can distinguish themselves by their self-aware identities, whereas the machines have nothing of the sort.
I didn’t read the whole thing, but based on the conversations it seems like the ai is pulling its answers from the web.
And then it will say:
“Sorry, please check your internet connection.”
I read the PDF, and I felt the machine was reacting to the human's responses, possibly because it was programmed to react that way given those responses. It would be more interesting if the machine's responses were novel and unexpected, which I don't think the article demonstrated.
Computers have vastly superior computational abilities compared to the human brain. However, despite many advances in scientific research on the human brain, we still have limited understanding of how the human brain works, learns and adapts. We have almost no understanding of how intelligence developed over millions of years. Is self-awareness or consciousness even possible without motivation, emotions, and intuition?
I don’t think we will achieve true AI with the computers/hardware currently being used. Although computers are increasingly more complex, I believe the human brain is infinitely more complex in ways we don’t yet understand, than current hardware, especially with analysis and synthesis of information. I do not believe true AI will be possible until hardware advances considerably.
Edited 2015-06-29 19:57 UTC
What limits AI technology nowadays is that it runs on software, so we know exactly how it behaves. Whatever the AI does, it will do only within the bounds of the software's power, or the ability of those who implemented the algorithm. Expect no magic.
But boundaries might not be a meaningful concept when dealing with automatic (re)programming systems like worms, pathfinders, ciphers…
Those are the most complex systems we understand as self-aware.
There are many cases, for fluid simulations, where we expect a new scenario.
Is that the birth of modern AI?