Linked by Eugenia Loli on Sun 9th Sep 2007 21:06 UTC
Geek stuff, sci-fi... Technologists and investors gather at the two-day Singularity Summit in San Francisco to discuss the benefits and risks of advancing artificial intelligence--and what to do in the event that machines one day out-think humans.
Gullible Jones
Member since:
2006-05-23

Oh yeah, the only thing that can happen is for humanity to be rendered obsolete. No way us apes can improve ourselves.

Honestly: remember the long, long list of geniuses in our history? The human brain does scale up. And not to sound excessively optimistic, but why does everyone seem to think that self-aware AIs will be either completely hostile and psychopathic or subservient to us? It's getting a little ridiculous.

On the subject of AI: I have very, very strong doubts about anything bootstrapping itself to sentience à la "Dial F for Frankenstein". Just think about how many millions of years it took for selective forces to produce something sapient. From that perspective, the machines we've got now are on about the same level as amoebas, or maybe worms at best.

Supposing we went and tried to design a strong AI, rather than letting it happen randomly (and what reason would we have to do so, anyway?), we'd still be working almost blind. We don't know half of what's going on in our own brains, never mind the brains of other sapient creatures, and we probably won't for a long time. Finding out whether there are other brain structures that can produce sapience would take even longer. We could probably be uploading ourselves to digital storage and sending copies of ourselves on vacations to Pluto long before we figured out how to create a self-aware intelligence from scratch, even given hardware with enough processing power.

Meanwhile, this Singularity stuff has struck me for a while as having an unpleasant air of rapturism about it. It's either something wonderful or a doomsday scenario or both, sometimes for the same reason (figure that one out). It makes a great concept for science fiction. Beyond that... Well, you can only get so far trying to predict the future.

(Of course, wasn't that originally the whole point of the concept? Damn, this stuff is almost as annoying as the Weak Anthropic Principle.)

At any rate, we've got to stay rational about this stuff. If some AI decides to go rogue on us, we can't go the genocide route - and I can hardly believe I have to say this to a bunch of people who are mostly liberal technophiles, but the Ken MacLeod school of thinking has made it clear that I do. And I know it's good to think ahead, but IMHO this stuff is about as useful as a bunch of tacticians drawing up advance plans for World War III.

Right now, this is all science fiction. It's not like that doesn't serve a purpose, but that doesn't make it any less fictional.

Please excuse my incoherent rant.

- GJ
