Sundar Pichai has outlined the rules Google will follow when it comes to the development and application of AI.
We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.
We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.
It honestly blows my mind that we've already reached the point where we need to set rules for the development of artificial intelligence, and it blows my mind even more that we seem to have to rely on corporations self-regulating – which effectively means there are no rules at all. For now it feels like "artificial intelligence" isn't really intelligence in the sense of what humans and some other animals display, but once algorithms and computers start learning about more than just identifying dog pictures or mimicking human voice inflections, things might snowball a lot quicker than we expect.
AI is clearly way beyond my comfort zone, and I find it very difficult to properly ascertain the risks involved. For once, I’d like society and governments to be on top of a technological development instead of discovering after the fact that we let it all go horribly wrong.
There's no "for once" about it, since that's literally never happened before. It's a staple of sci-fi that we somehow kill ourselves with technology, but historical evidence – the fact that we're still here – shows that doesn't happen.
It's never technological development that goes horribly wrong. It has ALWAYS been economic development, a.k.a. cutting corners, that screws things up.
The “for once” statement really should be “for once I’d like people to figure out how much things really cost* and not try to hide it through accounting or soothsaying**.”
* R&D. Or rather, R&D&C = research and development and cleanup.
** Free market optimism.
We'll probably manage one way or another… a quote from "Głos Pana"/"His Master's Voice" by Stanisław Lem, which I'm currently reading, seems somewhat fitting: (forgive the poor quality of my PL->EN translation)
Perhaps that’s the best we deserve…
A calculator today would have been the AI of the 1800s (think Ada & Babbage). "Artificial intelligence" seems to only include what we cannot understand, but once we do understand it, it is no longer considered "intelligent".
But we do understand the algorithms of machine learning (at least those who designed them do), so I do not get the irrational fear behind it all. Sure, some jobs will be replaced, but that has always been the way. One cannot look back and deny technological progress. We don't live in a sci-fi film where there is always a disaster with every technological breakthrough.
I agree, it seems strange to single out AI here. In fact, looking through Google’s principles, the only one that’s arguably specific to AI is the second one “Avoid creating or reinforcing unfair bias”. Apart from the fact that some AI algorithms are hard to rationalise, they’re otherwise just like any other algorithm. It’s great Google have spent the time to think about this, but why don’t they (and everyone else) just apply these objectives to everything they do?
Some morons watch too many tinfoil-hat movies with yellow subtitles, flat earth, mind death rays, etc. In "AI" there is neither an A nor an I part. "AI" is just a bunch of integrals and derivatives. It will do nothing more than the authors programmed it to do. Computers can't change their own code unless they are programmed to do so.
Neither can humans. Have you ever changed your own code, or are you taking credit for a biological process you have no control over?
We’re delving into philosophy here, but when you learn new skills, you’re effectively rewiring your brain.
Neural networks in effect do the same thing.
I guess we can choose, or at least have the appearance of choosing, what subjects to study and how intensely to apply ourselves towards that endeavor.
Yes. Either they both count as reprogramming, or they both don’t count as reprogramming.
The statement I was responding to argues that it doesn't count if the reprogramming was just part of its program, in which case the exact same argument applies to humans, since the brain rewires itself because it was programmed by natural selection to do so.
We can even extend that to our drive to learn itself. A person "deciding" to learn things and rewiring the brain is merely doing what it was evolutionarily programmed to do.
cool argument.
A calculator is not an AI. There's nothing intelligent about what a calculator does, whether it's the 1800s or 2018.
I think the lay audience uses itself as a model rather than some theoretical definition of what counts as intelligence. And it sees in itself plenty of things that don’t seem to show up in AI. (Consciousness, for one thing.) So while each attempt at advanced automation eventually does plenty of interesting and useful work, they find it lacking when it comes to grand promises about understanding ourselves.
Of course, this has no relation to Google's involvement in image processing of military drone footage, all the negative press they got recently, and the opposition of many Google employees.
Now, suddenly, they think about ethics and AI.
For now, I’m more afraid of evil corporations than evil robots.
Hi,
If these guidelines are actually followed, Google will have to build weapons and surveillance systems without using AI, so their weapons and surveillance systems will probably just be twice as efficient at half the cost.
If they actually cared about ethics, they wouldn’t restrict the guidelines to AI only.
– Brendan
Their rules 1 and 2 are both incredibly subjective and political.
Also, Google's own AIs and bots already break those rules – they actively discriminate based on political belief. They've said so.
I disagree; they are written specifically to be subjective and therefore impossible to break.
It's like the old joke: "This food is so terrible… and such small portions too!"
I, as of this moment, am trying to determine for myself what the right line is for both of those.
I used to be a free speech absolutist. Now, I’m not certain that is always the best approach, but not sure how to describe a policy that is fair and makes sense.
"…but once algorithms and computers start learning about more than just identifying dog pictures or mimicking human voice inflections, things might snowball a lot quicker than we expect."
AI can already diagnose diseases more effectively than any human physician. AI can already analyse a stock market financial statement and execute a trade within milliseconds.
Hi,
Neither of these things requires any intelligence. For diagnosing diseases you only need to search through a database of diseases and give each disease a score representing how closely the symptoms match, then spit out a list of probabilities. For high frequency trading it's even simpler – just look for a sharp increase in demand.
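Roughly like this (a toy Python sketch with a made-up three-entry disease table, nowhere near a real diagnostic system, just to show the scoring idea):

# Score each disease by how many of the patient's symptoms it matches,
# then return the list sorted by that score. The disease data is invented.
DISEASES = {
    "flu":         {"fever", "cough", "fatigue", "aches"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "allergies":   {"sneezing", "itchy eyes", "runny nose"},
}

def rank_diagnoses(symptoms):
    symptoms = set(symptoms)
    scores = {}
    for disease, known in DISEASES.items():
        scores[disease] = len(symptoms & known) / len(known)  # crude match ratio
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses(["fever", "cough", "fatigue"]))
# e.g. [('flu', 0.75), ('common cold', 0.33...), ('allergies', 0.0)]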
– Brendan
Most of the diagnostic programmes use deep learning to find disease patterns. E.g. they are shown 10,000 mammograms and are trained to identify tumours. How the software actually detects the tumour is a black box.
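For anyone curious what "shown 10,000 mammograms" looks like in code, here is a minimal sketch along those lines (PyTorch, with random tensors standing in for real scans and a tiny invented network, so purely illustrative and nothing like the actual diagnostic software):

import torch
import torch.nn as nn

# Hypothetical stand-in data: 256 grayscale "scans", label 1 = tumour, 0 = clear.
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,)).float()

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 1),  # a single logit: "tumour present"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()

# The trained weights are the "black box": every number is readable,
# but it is hard to say *why* a particular scan gets flagged.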
Try using Google Translate to translate Dutch to Chinese and back, and you'll fall asleep like a well-fed infant. No danger of general AI yet.
If you used 10 different human interpreters you'd get 10 different results.
For the main language pairs Google Translate is approaching human accuracy and is orders of magnitude faster.
The thing is, AI is still just code. Everyone is talking about death machines and artificial sexbots. The reality is that the gulf between that and where we are now is VAST. The AI demos people see are impressive, but still aren't a fraction of a percent of the technology and computing power we would need for the scary stuff to occur… if it's even possible. It's still just code and data.
The AIs we see today are just the shadows on Plato's cave wall. They look impressive, but there is a LOT of complex stuff hiding behind them, and that stuff took decades to build. To get anywhere near real artificial intelligence in the next hundred years we would need to discover magic.
Seriously, the complexity of that task is overwhelming. Not tough linear algebra problem overwhelming, but comparing a speck of dust to the majesty of the galaxy overwhelming.
Enjoy what convenience this facsimile of AI brings to your life and move on. It's still just code and data, and it will be for some time.
Dude, death machines can be real if we want them to be. There are algorithms written to target and aim weapons. They can fly themselves, run, crawl, roll, swim, hop, mosey, saunter – whatever shape and locomotion method we choose. The technology for those is here; they don't need much more intelligence than what they have. Plato's cave of AI is sufficient to kill.
I'm kinda surprised we don't yet see a wave of ~assassinations of prominent figures, using quadcopters/drones with, say, just a knife and a targeting camera attached to the bottom, simply falling on their victim. With the inexpensive resources available today, this could be done by an almost unskilled "lone wolf" …and with virtually no risk, without the need to sacrifice yourself (historically the only working tactic of an ~amateur was a suicide attack; not anymore, now you can sacrifice a fairly inexpensive ~robot).
Matter of time, IMHO. This is why things like requiring drones to be registered aren't an entirely bad idea.
Until someone argues that gun-toting drones are protected by the Second Amendment…
SHHH! Thom delete this thread before it goes into the dumb web.
rhetoric.sendmemoney,
You seem to be implying that the "scary stuff" (or sexbots, for some reason) depends upon tons more computing power, but why?
Military bootcamps are all about turning naturally intelligent humans into robots that obey orders, higher level thinking and feelings are actually discouraged. I don’t see any reason whatsoever that an AI has to show any signs of sentience in order to pose significant danger in a military context.
You may have been referring to machines taking over by themselves, which is a possibility but IMHO the far more realistic scenario in the near term is that the people who control the machines could take over simply because they can and being a demagogue appeals to them.
I presume nobody today has a substantially large army of military robots, but someday someone will. It could be a government, a private corporation, a group of billionaires, it doesn’t really matter: we should not underestimate these machines. I think we should talk about the risk of becoming enslaved to machines even when it’s not machines giving the orders.
Additionally, sexbots demonstrably require approximately zero computing power, since the very popular example of them, vibrators, have exactly that: zero…
(funny how it is mainly women, supposedly in more need of “feelings” etc., who first adopted en masse a love machine, one simplified to the very basics …but I’ll shut up now, I’m in trouble / I’ll get yelled at already )
PS. I must also note that THE most popular “vibrators” for men are entirely different in purpose – as joypads for videogames (though there’s always, as a kind of crossover, gamegirladvance Rez vibrator: http://www.gamegirladvance.com/2002/10/sex-in-games-rezvibrator.htm… & https://www.wired.com/2017/01/rez-vibrator-womens-sexuality/ ) …and joypads typically do have some processing power nowadays, I think (at least some embedded ARM CPU for governing all wireless communication…)
I still find it incredibly funny when people talk about “real artificial intelligence”. Artificial intelligence that is not artificial.
And funnily enough, an artificial intelligence that is smarter than humans will still be artificial.
People only seem to associate "intelligence" with what they do not understand. The more knowledge one has, the smaller the definition of "intelligent" gets…
It is like searching for "real magic". But of course all magic is fake, and "fake" magic is the only real magic we can experience.
Perhaps intelligence is the same. Layers upon layers of complexity create the illusion of intelligence, which is actually fake to the eyes of a theoretical all-knowing entity. The more advanced our technology gets, the more layers are peeled off, only to reveal the next illusion.
I get the excitement of the AI researchers. They are smart people, the intentions are good, and they work their *beeep* off spending billions in the process. I just hope it doesn’t go “Oppenheimer” where they suddenly realize what they have actually invented, and now the genie can’t be put back into the bottle. We are still dealing with that decades later, and that is despite the fact that such bombs are super expensive and hard to build.
AI? Once the algorithms and processing power are there, no amount of "ethics" really suffices, because then it comes down to downloading the algorithms and executing them on cheap hardware. Every man, woman and her dog knows damn well that somebody will program their drones to go rogue.
I don't mean to be a fear monger. I have degrees in both science and engineering and am not simply scared of things I don't understand. I also realise that humanity has been through these situations before. People used to think that phones (the old landline ones) would be the end of the world.
But, we’re playing with some serious stuff here; even if true AI (not the impressive chatbots) is a bit out in the distance.
It seems this is a difficult subject to debate because people don’t agree on what “artificial” means and what “intelligence” means.
I don't see a vast difference between what "AI" does and what humans do. The process of getting from input to decision is basically the same. The only real difference is that in one case the mechanism doing the processing, the human, is a natural occurrence, whereas with AI the mechanism, the computer for example, does not occur naturally. I know humans like to think they're special. As a species we like to believe we're the peak of existence & intelligence. I'm inclined to say "not by a long shot".
I don’t regard AI as an illusion of intelligence. By definition it certainly isn’t. We also don’t need exponential leaps in power & speed to arrive at dangerous AI. The technology most people carry around in their pocket, their cellphone, can easily outperform the brain in countless tasks and in many ways is far more advanced.
Life as we know it can change dramatically because of AI. We may not need to worry about robot factories pumping out human-killing robots, but we certainly need to handle it with care. GNMT, the AI behind Google Translate, created its own language, which it was not programmed to do, to make translating more efficient. Google stumbled across this and then had to reverse engineer what it was doing. This happened in less than a month of operation. This is unexpected proof that AI can evolve beyond the bounds of its original programming.
ilovebeer,
I'm in agreement with just about everything you've said. It may be hard to come up with a definition of AI that we'll all agree on, which is one reason we should explicitly distinguish between replication, knowledge & problem solving, and finally self-awareness/sentience. Once we do this, it's much easier to set the goalposts independently of each other.
In principle, we can have unintelligent systems that replicate. Self-replication can be achieved using unintelligent processes, which is the point of Conway's Game of Life. In the physical world, this happens with DNA and I assume other processes as well. I wouldn't say we've mastered self-replication yet, but I do believe it is within reach.
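As a toy illustration of that point, a few lines of Python implementing the standard Life rules (a glider only carries itself across the grid rather than truly replicating, and genuinely self-replicating Life patterns are vastly larger, but the principle of a dumb rule producing self-perpetuating structure is the same):

from collections import Counter

def step(live_cells):
    """live_cells is a set of (x, y) tuples; returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives if it has 3 neighbours, or 2 neighbours and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: the same fixed rule, applied over and over, keeps it moving forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))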
With knowledge & problem solving, we're already being beaten by our computers with self-learning algorithms in many areas. This has been demonstrated in numerous competitions between humans and computers. I think it's fair to say that today's computers have the "intelligence" to beat humans at finding solutions without being gifted human knowledge/algorithms (other than the rules of the game, obviously).
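A minimal sketch of that "only the rules" idea (a tabular, Monte Carlo-style learner playing the take-1-2-or-3 counting game against itself; the game, constants and names here are invented purely for illustration):

import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(stones_left, move)] -> learned value of that move
ALPHA, EPSILON = 0.5, 0.1

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)                  # explore
    return max(moves, key=lambda m: Q[(stones, m)])  # exploit

for game in range(20000):
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0  # whoever took the last stone wins; the other player gets -1
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# Nothing above encodes strategy, yet after enough self-play the table usually
# ends up preferring to take 1 from 21 (leaving a multiple of four, the known winning move).
print(max(legal_moves(21), key=lambda m: Q[(21, m)]))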
With self-awareness/sentience, IMHO this is the hardest one, in part because it's so difficult to understand and gauge even in human terms. It's natural to ask if computers can ever be self-aware, but I don't even know how to prove that biological lifeforms are self-aware. For all I know, everyone around me may be purely biological processes following the rules of physics, and their complexity merely an expression of Darwin's theory of evolution and survival of the fittest as applied to dumb replicators over countless generations.