Linked by Thom Holwerda on Thu 7th Jun 2018 23:58 UTC
Google

Sundar Pichai has outlined the rules the company will follow when it comes to the development and application of AI.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

It honestly blows my mind that we've already reached the point where we need to set rules for the development of artificial intelligence, and it blows my mind even more that we seem to have to rely on corporations self-regulating - which effectively means there are no rules at all. For now, it feels like "artificial intelligence" isn't really intelligence in the sense of what humans and some other animals display, but once algorithms and computers start learning about more than just identifying dog pictures or mimicking human voice inflections, things might snowball a lot quicker than we expect.

AI is clearly way beyond my comfort zone, and I find it very difficult to properly ascertain the risks involved. For once, I'd like society and governments to be on top of a technological development instead of discovering after the fact that we let it all go horribly wrong.

Exponential
by kwan_e on Fri 8th Jun 2018 00:30 UTC
kwan_e
Member since:
2007-02-18

things might snowball a lot quicker than we expect.


I find it interesting how unprepared the human mind is for exponential change. We tend to mentally collect data and approximate it linearly, then extrapolate linearly, then be surprised when the new linear approximation is steeper than the old. Then the cycle continues - and it's like déjà vu all over again.

And we never seem to learn not to assume, approximate or extrapolate linearly.
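
To put a toy number on it (my own made-up example, in Python): fit a straight line to the most recent points of an exponential trend and extrapolate one step ahead, and the linear guess falls short every single time.

```python
# Toy illustration (made-up numbers): a least-squares line fitted to the
# last few points of an exponential trend under-predicts the next value.
xs = list(range(10))
ys = [2 ** x for x in xs]               # the real process is exponential

n, tx, ty = 5, xs[-5:], ys[-5:]         # fit a line to the last 5 points
mx, my = sum(tx) / n, sum(ty) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(tx, ty))
         / sum((x - mx) ** 2 for x in tx))

linear_guess = my + slope * (10 - mx)   # extrapolate linearly to x = 10
print(linear_guess, 2 ** 10)            # 544.0 vs 1024: the line falls short
```

And refitting after each surprise just produces a steeper line that is wrong again, which is exactly the cycle described above.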

For once, I'd like society and governments to be on top of a technological development instead of discovering after the fact that we let it all go horribly wrong.


There's no "for once" about it, since that's literally never happened before. It's a staple of sci-fi that we somehow kill ourselves with technology, but historical evidence - the fact that we're still here - shows that doesn't happen.

It's never technological development that goes horribly wrong. It's ALWAYS been economic development, a.k.a. cutting corners, that screws things up.

The "for once" statement really should be "for once I'd like people to figure out how much things really cost* and not try to hide it through accounting or soothsaying**."

* R&D. Or rather, R&D&C = research and development and cleanup.
** Free market optimism.

Edited 2018-06-08 00:41 UTC

Reply Score: 3

RE: Exponential
by zima on Sat 9th Jun 2018 21:21 UTC in reply to "Exponential"
zima Member since:
2005-07-06

We'll probably manage one way or another... A quote from "Głos Pana"/"His Master's Voice" by Stanisław Lem, which I'm currently reading, seems somewhat fitting (forgive my rough PL->EN translation):

If someone had told Madame Curie that fifty years on, her radioactivity would give rise to gigatons and "overkill", perhaps she wouldn't have dared to keep working - and she certainly wouldn't have regained her former calm after living through the terror of such news. But we have gotten used to it, and nobody takes for madmen the people who count kilodeaths and megacorpses. Our ability to adapt, and the acceptance of everything that comes with it, is one of our greatest threats. Beings that are perfectly plastic in their adaptability cannot have an inelastic morality.

Perhaps that's the best we deserve...

Reply Score: 2

too many scifi films
by razor on Fri 8th Jun 2018 00:55 UTC
razor
Member since:
2010-01-13

A calculator today would have been the AI of the 1800s (think Ada & Babbage). "Artificial intelligence" seems to only include what we cannot understand; once we do understand it, it is no longer considered "intelligent".

But we do understand the algorithms of machine learning (at least those who designed them do), so I do not get the irrational fear behind it all. Sure, some jobs will be replaced, but that has always been the way. One cannot look back and deny technological progress. We don't live in a sci-fi film where there is always a disaster with every technological breakthrough.

Reply Score: 2

RE: too many scifi films
by flypig on Fri 8th Jun 2018 01:04 UTC in reply to "too many scifi films"
flypig Member since:
2005-07-13

I agree, it seems strange to single out AI here. In fact, looking through Google's principles, the only one that's arguably specific to AI is the second one, "Avoid creating or reinforcing unfair bias". Apart from the fact that some AI algorithms are hard to rationalise, they're otherwise just like any other algorithm. It's great Google have spent the time to think about this, but why don't they (and everyone else) just apply these objectives to everything they do?

Reply Score: 2

RE[2]: too many scifi films
by agentj on Fri 8th Jun 2018 04:14 UTC in reply to "RE: too many scifi films"
RE[3]: too many scifi films
by kwan_e on Fri 8th Jun 2018 05:29 UTC in reply to "RE[2]: too many scifi films"
kwan_e Member since:
2007-02-18

In "AI" there is neither A nor I part.


What?

Of course there is the "A" part. Is it "real" intelligence? No? Then it's "Artificial".

Computers can't change their own code unless they are programmed to do so.


Neither can humans. Have you ever changed your own code, or are you taking credit for biological processes you have no control over?

Reply Score: 5

RE[4]: too many scifi films
by Bill Shooter of Bul on Fri 8th Jun 2018 in reply to "RE[3]: too many scifi films"
Bill Shooter of Bul Member since:
2006-07-14

We're delving into philosophy here, but when you learn new skills, you're effectively rewiring your brain.

Neural networks in effect do the same thing.

I guess we can choose - or have the appearance of choosing - what subjects to study and how intensely to apply ourselves towards that endeavor.
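
A minimal sketch of that idea (illustrative toy code, not any real system): a single artificial neuron whose weights are adjusted by experience. The source code never changes, but the behaviour does - which is the sense in which both brains and neural networks "rewire".

```python
# Toy sketch: one artificial neuron learning AND from examples.
# The program text never changes; only the weights ("wiring") do.
def train(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1.0 if w[0] * x1 + w[1] * x2 + b > 0 else 0.0
            err = target - out              # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Four labelled examples of logical AND; the weights end up encoding it.
print(train([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))
```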

Reply Score: 3

RE[5]: too many scifi films
by kwan_e on Fri 8th Jun 2018 15:15 UTC in reply to "RE[4]: too many scifi films"
kwan_e Member since:
2007-02-18

We're delving into philosophy here, but when you learn new skills, you're effectively rewiring your brain.

Neural networks in effect do the same thing.


Yes. Either they both count as reprogramming, or they both don't count as reprogramming.

The statement I was responding to argues that it doesn't count if the reprogramming was just part of its program, in which case the exact same argument applies to humans, since the brain rewires itself because it was programmed by natural selection to do so.

We can even extend that to our drive to learn itself. A person "deciding" to learn things and rewiring the brain is merely doing what it was evolutionarily programmed to do as well.

Reply Score: 3

RE[6]: too many scifi films
by razor on Sat 9th Jun 2018 00:02 UTC in reply to "RE[5]: too many scifi films"
razor Member since:
2010-01-13

cool argument.

Reply Score: 1

RE: too many scifi films
by ilovebeer on Sat 9th Jun 2018 15:01 UTC in reply to "too many scifi films"
ilovebeer Member since:
2011-08-08

A calculator is not an AI. There's nothing intelligent about what a calculator does, whether it's the 1800s or 2018.

Reply Score: 2

RE: too many scifi films
by toothbrush_linux on Tue 12th Jun 2018 11:39 UTC in reply to "too many scifi films"
toothbrush_linux Member since:
2018-06-12

A calculator today would have been the AI of the 1800s.

A good calculator today would be the AI of the 1960s! The community surrounding John McCarthy took high level symbol manipulation to be the essence of what thinking is, and they developed some of the early computer algebra systems as AI projects with that idea in mind.

"artificial intelligence" seems to only include what we cannot understand

I think the lay audience uses itself as a model rather than some theoretical definition of what counts as intelligence. And it sees in itself plenty of things that don't seem to show up in AI. (Consciousness, for one thing.) So while each attempt at advanced automation eventually does plenty of interesting and useful work, they find it lacking when it comes to grand promises about understanding ourselves.

Reply Score: 1

Conscience
by Treza on Fri 8th Jun 2018 00:58 UTC
Treza
Member since:
2006-01-11

Of course, this has no relation to Google's involvement in the image processing of military drone footage, all the negative press they got recently, and the opposition of many Google employees.

Now, suddenly, they think about ethics and AI.

For now, I'm more afraid of evil corporations than evil robots.

Edited 2018-06-08 01:00 UTC

Reply Score: 4

RE: Conscience
by Brendan on Fri 8th Jun 2018 02:11 UTC in reply to "Conscience"
Brendan Member since:
2005-11-16

Hi,

Of course, this has no relation to Google's involvement in the image processing of military drone footage, all the negative press they got recently, and the opposition of many Google employees.

Now, suddenly, they think about ethics and AI.

For now, I'm more afraid of evil corporations than evil robots.


If these guidelines are actually followed, Google will have to build weapons and surveillance systems without using AI, so their weapons and surveillance systems will probably just be twice as efficient at half the cost.

If they actually cared about ethics, they wouldn't restrict the guidelines to AI only.

- Brendan

Reply Score: 2

Ugh
by The1stImmortal on Fri 8th Jun 2018 02:14 UTC
The1stImmortal
Member since:
2005-10-20

Their rules 1 and 2 are both incredibly subjective and political.
Also, Google's own AIs and bots already break those rules - they actively discriminate based on political belief. They've said so.

Reply Score: 1

RE: Ugh
by Bill Shooter of Bul on Fri 8th Jun 2018 20:04 UTC in reply to "Ugh"
Bill Shooter of Bul Member since:
2006-07-14

I disagree; they are written specifically to be subjective and therefore impossible to break.

It's like the old joke: "This food is so terrible... and such small portions too!"

I, as of this moment, am trying to determine for myself where the right line is for both of those.

I used to be a free speech absolutist. Now I'm not certain that is always the best approach, but I'm not sure how to describe a policy that is fair and makes sense.

Reply Score: 2

way past that
by unclefester on Fri 8th Jun 2018 02:18 UTC
unclefester
Member since:
2007-01-13

"...but once algorithms and computers start learning about more than jut identifying dog pictures or mimicking human voice inflections, things might snowball a lot quicker than we expect."

AI can already diagnose diseases more effectively than any human physician. AI can already analyse a stock market finacial statement and execute a trade within milliseconds.

Reply Score: 6

RE: way past that
by Brendan on Fri 8th Jun 2018 05:34 UTC in reply to "way past that"
RE[2]: way past that
by unclefester on Fri 8th Jun 2018 11:35 UTC in reply to "RE: way past that"
unclefester Member since:
2007-01-13

Most of the diagnostic programmes use deep learning to find disease patterns. E.g., they are shown 10,000 mammograms and are trained to identify tumours. How the software actually detects the tumour is a black box.
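
For a sense of what that looks like in practice, here's a minimal Keras-style sketch (purely illustrative, not any real diagnostic system; `scans` and `labels` are hypothetical data). The "detector" ends up encoded in thousands of learned weights, which is why nobody can point at the line of code that finds the tumour.

```python
# Illustrative sketch only: a tiny convolutional classifier of the kind
# used for image-based diagnosis. After training, the decision lives in
# the learned weights, not in any human-readable rule.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # estimated P(tumour)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(scans, labels, epochs=10)   # scans/labels: hypothetical dataset
```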

Reply Score: 3

No danger
by andywoe on Fri 8th Jun 2018 08:10 UTC
andywoe
Member since:
2018-05-18

AI is clearly way beyond my comfort zone, and I find it very difficult to properly ascertain the risks involved


Try using Google Translate to translate Dutch to Chinese and back, and you'll fall asleep like a well-fed infant. No danger of general AI yet.

Reply Score: 1

RE: No danger
by unclefester on Fri 8th Jun 2018 11:40 UTC in reply to "No danger"
unclefester Member since:
2007-01-13

If you used 10 different human interpreters, you'd get 10 different results.

For the main language pairs, Google Translate is approaching human accuracy and is orders of magnitude faster.

Reply Score: 3

Comment by jmorgannz
by jmorgannz on Fri 8th Jun 2018 11:53 UTC
jmorgannz
Member since:
2017-11-05

-

Edited 2018-06-08 11:54 UTC

Reply Score: 1

Pragmatism
by rhetoric.sendmemoney on Fri 8th Jun 2018 13:26 UTC
RE: Pragmatism
by Bill Shooter of Bul on Fri 8th Jun 2018 13:59 UTC in reply to "Pragmatism"
Bill Shooter of Bul Member since:
2006-07-14

Dude, death machines can be real if we want them to be. There are algorithms written to target and aim weapons. They can fly themselves, run, crawl, roll, swim, hop, mosey, saunter - whatever shape and locomotion method we choose. The technology for those is here; they don't need much more intelligence than what they have. Plato's cave of AI is sufficient to kill.

Edited 2018-06-08 13:59 UTC

Reply Score: 4

RE[2]: Pragmatism
by zima on Sat 9th Jun 2018 23:35 UTC in reply to "RE: Pragmatism"
zima Member since:
2005-07-06

I'm kinda surprised we don't yet see a wave of ~assassinations of prominent figures using quadcopters/drones with, say, just a knife and a targeting camera attached to the bottom, simply falling on their victim. With the inexpensive resources available today, this could be done by an almost unskilled "lone wolf" ...and with virtually no risk, without the need to sacrifice yourself (historically, the only working tactic of an ~amateur was a suicide attack; not anymore, now you can sacrifice a fairly inexpensive ~robot).

Reply Score: 2

RE[3]: Pragmatism
by Bill Shooter of Bul on Mon 11th Jun 2018 15:31 UTC in reply to "RE[2]: Pragmatism"
Bill Shooter of Bul Member since:
2006-07-14

Matter of time, IMHO. This is why things like requiring drones to be registered aren't an entirely bad idea.

Reply Score: 2

RE[4]: Pragmatism
by kwan_e on Tue 12th Jun 2018 00:32 UTC in reply to "RE[3]: Pragmatism"
kwan_e Member since:
2007-02-18

Until someone argues that gun-toting drones are protected by the Second Amendment...

Reply Score: 2

RE[5]: Pragmatism
by Bill Shooter of Bul on Tue 12th Jun 2018 17:59 UTC in reply to "RE[4]: Pragmatism"
Bill Shooter of Bul Member since:
2006-07-14

SHHH! Thom, delete this thread before it goes into the dumb web.

Reply Score: 2

RE: Pragmatism
by Alfman on Fri 8th Jun 2018 14:28 UTC in reply to "Pragmatism"
Alfman Member since:
2011-01-28

rhetoric.sendmemoney,

The thing is, AI is still just code. Everyone is talking about death machines and artificial sexbots. The reality is the gulf between that and where we are now is VAST. The AI demos people see are impressive but still aren't a fraction of a percent of the technology and computing power we would need for the scary stuff to occur... if it's even possible. It's still just code and data.


You seem to be implying that the "scary stuff" (or sexbots for some reason) is dependent upon tons more computing power, but why?

Military bootcamps are all about turning naturally intelligent humans into robots that obey orders; higher-level thinking and feelings are actually discouraged. I don't see any reason whatsoever that an AI has to show any signs of sentience in order to pose a significant danger in a military context.

You may have been referring to machines taking over by themselves, which is a possibility, but IMHO the far more realistic scenario in the near term is that the people who control the machines could take over simply because they can and being a demagogue appeals to them.

I presume nobody today has a substantially large army of military robots, but someday someone will. It could be a government, a private corporation, a group of billionaires, it doesn't really matter: we should not underestimate these machines. I think we should talk about the risk of becoming enslaved to machines even when it's not machines giving the orders.

Edited 2018-06-08 14:32 UTC

Reply Score: 4

RE[2]: Pragmatism
by zima on Sat 9th Jun 2018 19:39 UTC in reply to "RE: Pragmatism"
zima Member since:
2005-07-06

You seem to be implying that the "scary stuff" (or sexbots for some reason) is dependent upon tons more computing power, but why?

Additionally, sexbots demonstrably require approximately zero computing power, since the very popular example of them, the vibrator, has exactly that: 0... ;)
(Funny how it is mainly women, supposedly more in need of "feelings" etc., who first adopted en masse a love machine, one simplified to the very basics ...but I'll shut up now, I'm in trouble / I'll get yelled at already ;) )

Reply Score: 2

RE[2]: Pragmatism
by zima on Wed 13th Jun 2018 19:32 UTC in reply to "RE: Pragmatism"
zima Member since:
2005-07-06

PS. I must also note that THE most popular "vibrators" for men are entirely different in purpose - as joypads for videogames ;) (though there's always, as a kind of crossover, the gamegirladvance Rez vibrator: http://www.gamegirladvance.com/2002/10/sex-in-games-rezvibrator.htm... & https://www.wired.com/2017/01/rez-vibrator-womens-sexuality/ ) ...and joypads typically do have some processing power nowadays, I think (at least some embedded ARM CPU governing all the wireless communication...)

Reply Score: 2

RE: Pragmatism
by kwan_e on Fri 8th Jun 2018 15:27 UTC in reply to "Pragmatism"
kwan_e Member since:
2007-02-18

To get anywhere near real artificial intelligence


I still find it incredibly funny when people talk about "real artificial intelligence". Artificial intelligence that is not artificial.

And funnily enough, an artificial intelligence that is smarter than humans will still be artificial.

Reply Score: 3

RE[2]: Pragmatism
by razor on Sat 9th Jun 2018 00:17 UTC in reply to "RE: Pragmatism"
razor Member since:
2010-01-13

People only seem to associate "intelligence" with what they do not understand. The more knowledge one has, the smaller the definition of "intelligent" gets....

It is like searching for "real magic": of course all magic is fake, and "fake" magic is the only real magic we can experience.

Perhaps intelligence is the same. Layers upon layers of complexity create the illusion of intelligence, which is actually fake to the eyes of a theoretical all-knowing entity. The more advanced our technology gets, the more layers are peeled off, only to reveal the next illusion.

Reply Score: 2

Dejavu
by asgerms on Fri 8th Jun 2018 15:20 UTC
asgerms
Member since:
2018-05-07

I get the excitement of the AI researchers. They are smart people, the intentions are good, and they work their *beeep* off, spending billions in the process. I just hope it doesn't go "Oppenheimer", where they suddenly realize what they have actually invented and the genie can't be put back into the bottle. We are still dealing with that decades later, and that is despite the fact that such bombs are super expensive and hard to build.

AI? Once the algorithms and processing power are there, no amount of "ethics" really suffices, because then it comes down to downloading the algorithms and executing them on cheap hardware. Every man, woman and her dog knows damn well that somebody will program their drones to go rogue.

I don't mean to be a fearmonger. I have degrees in both science and engineering and am not simply scared of things I don't understand. I also realise that humanity has been through these situations before. People used to think that phones (the old landline ones) would be the end of the world.

But we're playing with some serious stuff here, even if true AI (not the impressive chatbots) is a bit out in the distance.

Reply Score: 2

Comment by ilovebeer
by ilovebeer on Sat 9th Jun 2018 14:59 UTC
ilovebeer
Member since:
2011-08-08

It seems this is a difficult subject to debate because people don't agree on what "artificial" means and what "intelligence" means.

I don't see a vast difference between what "AI" does and what humans do. The process of getting from input to decision is basically the same. The only real difference is that in one case the mechanism doing the processing, the human, is a natural occurrence, whereas with AI the mechanism, the computer for example, does not occur naturally. I know humans like to think they're special. As a species we like to believe we're the peak of existence & intelligence. I'm inclined to say "not by a long shot".

I don't regard AI as an illusion of intelligence. By definition it certainly isn't. We also don't need exponential leaps in power & speed to arrive at dangerous AI. The technology most people carry around in their pocket, their cellphone, can easily outperform the brain in countless tasks and in many ways is far more advanced.

Life as we know it can change dramatically because of AI. We may not need to worry about robot factories pumping out human-killing robots, but we certainly need to handle this with care. GNMT, the AI behind Google Translate, created its own language, which it was not programmed to do, to make translating more efficient. Google stumbled across this and then had to reverse engineer what it was doing. This happened in less than a month of operation. It is unexpected proof that AI can evolve beyond the bounds of its original programming.

Reply Score: 3

RE: Comment by ilovebeer
by Alfman on Sat 9th Jun 2018 16:37 UTC in reply to "Comment by ilovebeer"
Alfman Member since:
2011-01-28

ilovebeer,

It seems this is a difficult subject to debate because people don't agree on what "artificial" means and what "intelligence" means.

I don't see a vast difference between what "AI" does and what humans do. The process of getting from input to decision is basically the same. The only real difference is that in one case the mechanism doing the processing, the human, is a natural occurrence, whereas with AI the mechanism, the computer for example, does not occur naturally. I know humans like to think they're special. As a species we like to believe we're the peak of existence & intelligence. I'm inclined to say "not by a long shot".


I'm in agreement with just about everything you've said. It may be hard to come up with a definition of AI that we'll all agree on, which is one reason we should explicitly distinguish between replicators, knowledge & problem solving, and finally self-awareness/sentience. Once we do this, it's much easier to set the goalposts independently of each other.

In principle, we can have unintelligent systems that replicate. Self-replication can be achieved using unintelligent processes, which is the point of Conway's Game of Life. In the physical world, this happens with DNA and, I assume, other processes as well. I wouldn't say we've mastered self-replication yet, but I do believe it is within reach.
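
As a small illustration (standard Game of Life rules, my own toy code): each cell obeys one fixed local rule, with no intelligence anywhere, yet structured behaviour emerges. The glider below reassembles itself, shifted diagonally, every four generations.

```python
# Toy sketch of Conway's Game of Life: a dumb local rule, applied to a set
# of live (x, y) cells, is enough to produce self-sustaining structures.
from collections import Counter

def step(live):
    # Count live neighbours of every candidate cell.
    neigh = Counter((x + dx, y + dy)
                    for x, y in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in neigh.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # the same glider, translated one cell diagonally
```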

With knowledge & problem solving, we're already being beaten by our computers with self-learning algorithms in many areas. This has been demonstrated in numerous competitions between humans and computers. I think it's fair to say that today's computers have the "intelligence" to beat humans at finding solutions without being gifted human knowledge/algorithms (other than the rules of the game, obviously).
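
A hedged sketch of the kind of self-learning meant here (tabular Q-learning; the `env` object and its `reset`/`actions`/`step` interface are hypothetical): the program is given only the legal moves and the rewards, and improves entirely from its own trial and error.

```python
# Illustrative tabular Q-learning: learns to play from rewards alone.
# `env` is a hypothetical environment with reset(), actions(s), step(a).
import random
from collections import defaultdict

def q_learn(env, episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(float)                 # (state, action) -> value estimate
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)          # legal moves in state s
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda x: Q[(s, x)]))
            s2, reward, done = env.step(a) # assumed: returns (state, reward, done)
            best_next = 0.0 if done else max(Q[(s2, x)] for x in env.actions(s2))
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```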


With self-awareness/sentience, IMHO this is the hardest one, in part because it's so difficult to understand and gauge even in human terms. It's natural to ask if computers can ever be self-aware, but I don't even know how to prove that biological lifeforms are self-aware. For all I know, everyone around me may be purely biological processes following the rules of physics, and their complexity merely an expression of Darwin's theory of evolution and survival of the fittest as applied to dumb replicators over countless generations.

Edited 2018-06-09 16:37 UTC

Reply Score: 3