Linked by Eugenia Loli on Mon 28th May 2012 03:53 UTC
General Development FuriousFanBoys interviews Ben Goertzel regarding Artificial Intelligence. Ben started the OpenCog project (an open-source AI non-profit), acts as an adviser to Singularity University, and currently bounces between Hong Kong and Maryland building in-game AI.
Quote from Russell and Norvig
by jrincayc on Wed 30th May 2012 12:15 UTC

From Artificial Intelligence: A Modern Approach, 2nd ed., by Stuart Russell and Peter Norvig, p. 964:

One threat in particular is worthy of further consideration: that ultraintelligent machines might lead to a future that is very different from today--we may not like it, and at that point we may not have a choice. Such considerations lead inevitably to the conclusion that we must weigh carefully, and soon, the possible consequences of AI research for the future of the human race.


Reply by zima:

Well, we* certainly won't "like" the eventual future of the human race anyway, whatever it turns out to be. Our subspecies, Homo sapiens sapiens, has existed for a blink of an eye in the grander picture - and we will be extinct extremely soon compared with the time remaining until the heat death of the universe.

That's just how it is... we don't care much about Homo heidelbergensis, our likely ancestor, except for carrying his lineage further. In a future with "super AI" it will probably be similar, to varying degrees - maybe more like the Neanderthals, who did contribute a small part of our DNA; there was some interbreeding, according to recent research.

* Except that "we" in the strictest sense will not care about anything, being dead. And we hardly have the future in mind anyway - for example, http://en.wikipedia.org/wiki/File:Human_welfare_and_ecological_foot...
