Linked by Thom Holwerda on Thu 7th Jun 2018 23:58 UTC
Google

Sundar Pichai has outlined the rules the company will follow when it comes to the development and application of AI.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

It honestly blows my mind that we've already reached the point where we need to set rules for the development of artificial intelligence, and it blows my mind even more that we seem to have to rely on corporations self-regulating - which effectively means there are no rules at all. For now, "artificial intelligence" doesn't feel like intelligence in the sense of what humans and some other animals display, but once algorithms and computers start learning about more than just identifying dog pictures or mimicking human voice inflections, things might snowball a lot quicker than we expect.

AI is clearly way beyond my comfort zone, and I find it very difficult to properly ascertain the risks involved. For once, I'd like society and governments to be on top of a technological development instead of discovering after the fact that we let it all go horribly wrong.

Permalink for comment 658181
Comment by ilovebeer on Sat 9th Jun 2018 14:59 UTC

It seems this is a difficult subject to debate because people don't agree on what "artificial" means and what "intelligence" means.

I don't see a vast difference between what "AI" does and what humans do. The process of getting from input to decision is basically the same. The only real difference is the mechanism doing the processing: in humans it's a natural occurrence, the brain, whereas with AI the mechanism, the computer for example, does not occur naturally. I know humans like to think they're special. As a species we like to believe we're the peak of existence & intelligence. I'm inclined to say "not by a long shot".
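To make that "input to decision" parallel concrete, here is a minimal sketch of a single artificial neuron in Python. The features, weights, and threshold are made up for illustration; the structural point is just that both a brain and a program take inputs, weigh the evidence, and commit to an output.

```python
# A toy "input -> weighted evidence -> decision" pipeline: one
# artificial neuron. All numbers here are invented for illustration.

def decide(inputs, weights, threshold=0.5):
    """Return True if the weighted evidence crosses the threshold."""
    evidence = sum(x * w for x, w in zip(inputs, weights))
    return evidence > threshold

# e.g. deciding "is this a dog picture?" from two hypothetical features
print(decide(inputs=[0.9, 0.2], weights=[0.8, 0.1]))  # True
```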

I don't regard AI as an illusion of intelligence. By definition it certainly isn't. We also don't need exponential leaps in power & speed to arrive at dangerous AI. The technology most people carry around in their pocket, their cellphone, can easily outperform the brain in countless tasks and in many ways is far more advanced.
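On the raw-speed point, a quick sketch of the gap on one narrow task, arithmetic: even a phone-class CPU sums a million integers in milliseconds, something no human could approach. The timing is whatever your machine produces; this is an illustration, not a benchmark.

```python
# Summing a million integers: finishes in milliseconds on commodity
# hardware. Output is machine-dependent; illustrative only.
import time

nums = list(range(1_000_000))

start = time.perf_counter()
total = sum(nums)
elapsed = time.perf_counter() - start

print(f"sum={total} in {elapsed * 1000:.2f} ms")
```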

Life as we know it can change dramatically because of AI. We may not need to worry about robot factories pumping out human-killing robots, but we certainly need to handle it with care. GNMT, the AI behind Google Translate, created its own language, which it was not explicitly programmed to do, to make translating more efficient. Google stumbled across this and then had to reverse engineer what it was doing. This happened in less than a month of operation. This is unexpected proof that AI can evolve beyond the bounds of its original programming.
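For anyone unfamiliar with that story: the finding was that GNMT appeared to develop a shared internal representation (an "interlingua") that let it translate between language pairs it was never directly trained on. Here is a deliberately tiny toy sketch of that idea; the lexicons and function names are hypothetical stand-ins, not Google's actual model.

```python
# Toy illustration of the "interlingua" idea: language-specific
# encoders map words into one shared concept space, and any decoder
# can read from that space -- so a pair never trained on directly
# can still be bridged. All data here is hypothetical.

EN_TO_CONCEPT = {"dog": 1, "cat": 2}   # English -> shared concept ID
JA_TO_CONCEPT = {"inu": 1, "neko": 2}  # Japanese -> shared concept ID
KO_FROM_CONCEPT = {1: "gae", 2: "goyangi"}  # shared concept ID -> Korean

def encode(word, lexicon):
    """Map a surface word into the shared concept space."""
    return lexicon[word]

def decode(concept, lexicon):
    """Realize a shared concept in some target language."""
    return lexicon[concept]

# English -> Korean works even though no en->ko table exists:
# both sides only ever talk to the shared concept space.
print(decode(encode("dog", EN_TO_CONCEPT), KO_FROM_CONCEPT))  # gae
```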

Reply Score: 3