Technologists and investors gather at the two-day Singularity Summit in San Francisco to discuss the benefits and risks of advancing artificial intelligence–and what to do in the event that machines one day out-think humans.
The answer, of course, is SAC — Sentience Access Controls.
“Your computer is trying to become self-aware, cancel or allow?”
Being the curious monkey I am, I would probably hit Allow just to see what happens…
I think it’s a bit early to be discussing “what to do in the event that machines one day outthink humans.”
It might be early, but it’s important to think about.
Such a thing would have a huge impact on society and the economy.
With computers smarter than humans, a large part of society would be out of a job. The economy can’t function like that, because people need something to trade for the things they need.
It requires a change in how we view resources.
>>With computers smarter than humans a large amount of society would be out of a job.<<
Sure, but the price of producing goods made only by robots would be close to zero (except for the energy needed; but intelligent robots would probably require Drexler-style nanotech, which would also allow very low-cost energy production).
>> The economy can’t function like that because people need something to trade for the things they need.<<
Then we would invent a new economy..
Or we would just own a few androids and send them to work for us so we get cash.
Why not? Talk is cheap, and it gets people’s imaginations flared up, and when smart people get fired up, good things tend to happen.
I think it’s a bit early to be discussing “what to do in the event that machines one day outthink humans.”
I’d say it’s already too late.
A Japanese company is working on an Internet-controlled duplicate of its owner; the goal is that he controls his android copy via the Internet.
For people who feel somewhat unwell, that’s great.
How about reading Bill Joy’s seven-year-old Wired article before reading this “new” stuff on CNET?
http://www.wired.com/wired/archive/8.04/joy.html
Oh yeah, the only thing that can happen is for humanity to be rendered obsolete. No way us apes can improve ourselves.
Honestly: remember the long, long list of geniuses in our history? The human brain does scale up. And not to sound excessively optimistic, but why does everyone seem to think that self-aware AIs will either be completely hostile and psychopathic or subservient to us? It’s getting a little ridiculous.
On the subject of AI: I have very, very strong doubts about anything bootstrapping to sentience a la “Dial F for Frankenstein”. Just think about how many millions of years it took for selective forces to produce something sapient. In that kind of perspective, the machines we’ve got now are on about the same level as amoebas, or maybe worms at best. Supposing we went and tried to design a strong AI, rather than letting it happen randomly (and what reason would we have to do so, anyway?), we’d still be working almost blind. We don’t know half of what’s going on in our own brains, never mind the brains of other sapient creatures. We probably won’t for a long time. Finding out if there are other brain structures that can produce sapience would take even longer. We could probably be uploading ourselves to digital storage and sending copies of ourselves on vacations to Pluto long before we figured out how to create a self-aware intelligence from scratch, even given hardware with enough processing power.
Meanwhile, this Singularity stuff has struck me for a while as having an unpleasant air of rapturism about it. It’s either something wonderful or a doomsday scenario or both, sometimes for the same reason (figure that one out). It makes a great concept for science fiction. Beyond that… Well, you can only get so far trying to predict the future.
(Of course, wasn’t that originally the whole point of the concept? Damn, this stuff is almost as annoying as the Weak Anthropic Principle.)
At any rate, we’ve got to stay rational about this stuff. If some AI decides to go rogue on us, we can’t go the genocide route – and I can hardly believe I have to say this to a bunch of people who are mostly liberal technophiles, but Ken MacLeod’s school of thinking made it kind of clear that I do. And I know it’s good to think ahead, but IMHO this stuff is about as useful as a bunch of tacticians making advance plans for World War 3.
Right now, this is all science fiction. It’s not like that doesn’t serve a purpose, but that doesn’t make it any less fictional.
Please excuse my incoherent rant.
– GJ
It’s funny that, with the imminent disaster of greenhouse warming (and the counteracting sun-dimming problem masking it), we are even talking about replacing the human race with AI minds. If humanity survives another hundred years with the Earth in the same shape as today, that would be truly far more amazing than seeing AI start to become human-like (I don’t mean Kismet-like).
Also, I can’t quite equate cochlear or other electronic implants with AI; it’s mostly DSP cleaning up signals.
The Stanford story about Stanley was, from my point of view, another mostly-DSP story: crunching camera images down to find a path with ever better hardware available.
Yeah, I used to worry about this (well, actually I worried more about grey goo) until I realized the oil crash would happen before true AI (or grey goo) ever could. yay?
Agent Smith: I’d like to share a revelation that I’ve had during my time here. It came to me when I tried to classify your species. I realized that you’re not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.
Stephen Byerley for President!
🙂
Robo-ethics!!! The average John Doe on the street can’t even tell the difference between right and wrong. We can’t expect to computationally represent morality when we don’t even have a universal understanding of what it is. A fully working robot society, fully capable of collaboration and of collectively solving problems that arise — which might happen to include constraining the actions of humans in some capacity to achieve their own individual and collective goals — is soooooooooooo not going to happen. Just as Robin Williams, playing the Genie in Aladdin, said: “Wake up and smell the hummus!!!”
“The average John Doe on the street can’t even tell the difference between right and wrong.”
I beg to differ.
The average John Doe on the street can’t even tell the difference between right and wrong.
It always annoys me when people state that. What they really mean is: other people’s concepts of right and wrong differ from mine, and thus they must be wrong.
It always annoys me when people state that. What they really mean is: other people’s concepts of right and wrong differ from mine, and thus they must be wrong.
And how does that detract from the OP’s point? If people can’t even agree on what’s right and what’s wrong, I think that counts as being unable to tell the difference between right and wrong.
Right and wrong are largely subjective terms. I can tell right from wrong, and I don’t doubt that you are capable of the same, but it doesn’t mean we’ll agree on all points. Just because two people disagree as to whether something is right or wrong doesn’t mean one of them is unable to tell right from wrong. You cannot take objective measures of subjective terms.
Take the whole shebang out.
Human life, or nothing!
We’ll have commercial fusion reactors long, long before there is a technological singularity. I think the BBC Horizon programme had an episode about this (IIRC), and many people, including myself, thought it was junk science/total rubbish.
Funny thing, I was just catching up on IEC fusion.
I have seen some pretty junk-science TV reporting on AI, with some serious UK academic pundits, and the TV presenters made the case look even worse than it already was.
If some prof wants to fix his head with a chip, that’s his choice, but pretty soon the political idiots that govern us (UK/US) will get the idea that we should all get chipped for security reasons, right after a DNA submission. In a very real sense, having cameras everywhere and getting your whereabouts tracked makes me wish to stay as far away from this sort of futuristic technology as possible.
As for fusion, I would bet that the tokamak will never, ever fly; it’s a forever research project. Perhaps the Bussard or a similar IEC project will work well, but it seems to have its own critics.
More or less, the Singularity is the point in history beyond which the future is not only unpredictable but also vastly different from how we live today.
AI, nano, etc. all play into it.
-Nex6
Haven’t we already been through one then? Haven’t we already been through a dozen, for that matter?
(The electronic revolution. The industrial revolution. The agricultural revolution. They’ve said the next one will be biotech; I’m not sure, but it sounds like a decent bet.)
Yes, because I am sure a scientist from 1800 could not have foreseen in any way the technological society we have.
No, they probably couldn’t. I’m pretty sure no one before Tesla could have predicted the Internet. Hell, 1800 was before Maxwell; they probably couldn’t have come up with radio unless they were an unparalleled genius.
Do you think that, in pre-agrarian society, tribesmen in their tents dreamed up the ziggurats of Sumer and Babylon? It’s damn easy to look back and say, “Hey, anyone could have predicted that!” when you have the advantage of living after the event has occurred.
>I’m pretty sure no one before Tesla could have predicted the internet
Well the telegraph was a pretty useful invention, and they were laying transatlantic cables in 1857 (a year after Tesla’s birth). In fact the telegraph system is sometimes referred to as “the Victorian Internet” (Tom Standage’s book is a fascinating read).
With production costs at zero and an ever-diminishing need for human labour, I see two futures, and neither has much to do with popping a red or blue pill.
In one, capitalism as we know it lives on, because however cheap production gets, there will still be a margin on which to make a profit. Trading will continue, research will continue, and wealth accumulation will continue. However, there will be no need for human workers, and all classes other than the most privileged will be wiped out through a race to the bottom in wages and living standards. With only a few mega-rich left, we might even see a world “democracy” born, since few enough voices need to be heard.
In the other, people outside the richest class see the threat in time, while they still hold political power, and do something about it. Instead of letting wealth accumulate in the hands of a very few, a regime is established in which the resources and means of production come under direct democratic governance, thus making sure that everyone can be provided with an equal share of the wealth created. In this future even a “useless” person, who has only his manual labour to sell, can have children some day.
The choice is and has always been with us the people.
Have a good day.
This is just an example of how undereducated modern man really is. These people don’t know, and will never agree on:
1. what is consciousness
2. what drives a human being
3. how society works
and they will never construct a spineless slave. As a matter of fact, a very close approximation of such a thing has been walking the streets for thousands of years: just look around. It can’t get more perfect and cheaper than that.
And the final laugh: in what way can machines outsmart people? In winning video games? Predicting economic trends? Composing music? Erotic day-dreaming? Inventing better recipes? Getting better chicks? Being more humane than humans while treating patients, and thus winning some wicked prize? It’s a paradox parading as a smart question.
Anyway… I bow before the miracles of science, and hope my son keeps quiet about the emperor’s new clothes. Yeah, and sorry for being bitter.
Note the large corpus of Sci-Fi and speculative literature that’s set in the far-flung future where AI and SI are present: a lot of it continues, rightly, to focus on the fact that humans still haven’t got the major social issues right. I am all for scientific research but let’s get the overly-territorial ape in us sorted out properly before we seek to enhance its brain, or make its still remarkable intelligence apparently superfluous.
True human-like AI will never be achieved, because
a) we don’t even understand the basics of how a brain functions, let alone the details; and
b) the brain’s capacity may be huge, especially if it is proven that it employs quantum computation.
Stop spreading FUD about FOSS; the problem is clearly MICRO$HAFT!!!!
Finally, someone posted on-topic!
We do have an idea of how to implement general AI. The theory is well defined by Marcus Hutter:
Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability
The AI model AIXI converges quickly towards the best solution there is for a problem. The only problem with the approach is that it requires infinite resources, so corners have to be cut. There are approaches that can do this, and they show behaviour that is expected of intelligent systems. The systems that will be implemented in the future will be able to redesign themselves.
Basically the theory is this:
1) Make many random monkeys (programs), and give more time to the monkeys that best approximate your program in the simplest way (Occam’s razor and Kolmogorov complexity).
2) Select the monkey that writes a word well and give him a banana (reward).
3) Breed this monkey into variants that write lines well.
4) Breed the successful monkey (give it more probability) and give him a banana once he writes Shakespeare.
In the meanwhile, data-mining-like techniques are used to build a world model, plus a model that checks what consequences the robot’s own actions have on the environment. This self-improving model is then used to prove which strategies (programs) best approach the goal.
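The monkey-breeding recipe in steps 1–4 can be sketched as a generic mutate-and-select search. To be clear, this is only a toy illustration of that recipe, not AIXI itself (AIXI is incomputable); the target string, alphabet, and parameters below are invented for the example.

```python
import random

TARGET = "to be or not to be"   # invented stand-in for "writes Shakespeare"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # The "banana": one point of reward per character matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Breed a variant monkey: each character may randomly change.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

def evolve(pop_size=100, generations=5000):
    # Start from a random monkey (program)...
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        # ...breed variants, and keep giving the best monkey more chances.
        challenger = max((mutate(best) for _ in range(pop_size)), key=fitness)
        if fitness(challenger) >= fitness(best):
            best = challenger
        if best == TARGET:
            break
    return best
```

Selection plus small random variation is enough to reach the target in a few hundred generations, which is the intuition behind giving the better-performing programs more probability mass.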
These techniques have been used by people who have successfully taught robots to tie knots, or, when given the order “fetch ball”, to run to the ball, take it, and bring it back to the one who gave the command. The techniques can also be used for driving or stocks.
Look, for example, at Novamente ( http://www.novamente.net ) or NARS (the Non-Axiomatic Reasoning System).
A book I am reading now by Goertzel, called Artificial General Intelligence, is very good; it contains an overview of much of the research in the field of strong general AI.
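The reward-driven “fetch ball” learning mentioned above can be sketched with tabular Q-learning. This is not the algorithm those particular robot projects used; it is a minimal, made-up one-dimensional world showing how an agent learns the consequences of its actions purely from reward.

```python
import random

# Toy 1-D world: the robot starts at cell 0, the ball sits at cell BALL.
# Actions: 0 = step left, 1 = step right. Reaching the ball ends the
# episode and pays reward 1; every other step pays 0.
N_CELLS = 6
BALL = 4
ACTIONS = (0, 1)

def step(pos, action):
    """Apply an action, clamping the robot inside the world."""
    pos = max(0, min(N_CELLS - 1, pos + (1 if action == 1 else -1)))
    reward = 1.0 if pos == BALL else 0.0
    return pos, reward, pos == BALL

def choose(q_row, epsilon):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    best = max(q_row)
    return random.choice([a for a in ACTIONS if q_row[a] == best])

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn action values from reward alone."""
    q = [[0.0, 0.0] for _ in range(N_CELLS)]
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            a = choose(q[pos], epsilon)
            nxt, r, done = step(pos, a)
            # Move the action value toward reward plus discounted lookahead.
            q[pos][a] += alpha * (r + gamma * max(q[nxt]) - q[pos][a])
            pos = nxt
    return q
```

After training, the greedy policy in every cell left of the ball is “step right” — the robot has learned to fetch without ever being told what the actions mean.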
Call me back when an android like Star Trek’s Lt. Cmdr. Data exists. Until then, talk is cheap.
swords are coming!
No artificial intelligence will ever replace the human mind.
That, and God is the creator of the Universe, so a ‘dumb’ computer programmed by a man is in no way, shape, form or fashion going to go beyond the limitations…
We’ll start listening when you start backing up your arguments and using solid logic.
BTW, you might want to change your profile lest you be banned for racism… Troll.
No artificial intelligence needs to replace the human mind. The human mind is a collection of memories and past things that happened to us, burdened by genes from our forefathers, unstable and prone to errors, and so on… Also, what is intelligence anyway? The ability to solve logical problems? The ability to react to unforeseen events? To dream and fantasize?
PS. If there were a god and he/she created the universe, that god would be utterly incompetent to create a race as volatile as humans…
PS. If there were a god and he/she created the universe, that god would be utterly incompetent to create a race as volatile as humans…
I disagree; man has a brain to make choices, and if one chooses to do wrong, one does so by free will.
A sovereign power is in control; mere man cannot comprehend or accept this, and that is by their choice yet again.
True enough, although our will is not completely free – there are chemical constraints on it. Just how large those constraints are is a good question, but they do exist.
Either I’m reading the above sentence wrong or it contradicts itself…
Rubbish. Man has no choice over the world God creates by his actions. His so-called free will is limited by what God intends and allows. And when God is unbelievably evil, as is his wont on countless occasions, free will counts for nothing.
Hmm. Out of curiosity, are you being sarcastic, or are you a genuine mistheist?
Not necessarily, a godlike entity might create such a species as us as an experiment, or part of an experiment. It could be the godly equivalent of AI research: trying to find a species with the right characteristics to bootstrap itself to godhood. There’s also no accounting for impulsiveness, perversity, or simple lack of morality.
Keep in mind as well that a truly godlike being wouldn’t have to create us, but merely simulate us in its mind, along with a more or less limited universe. We wouldn’t be able to tell the difference; some would say that there is no fundamental difference, when you get down to it, which leads to interesting questions about universes running on top of universes running on top of universes like a Rube Goldberg experiment in virtualization… But these things all involve multiplying entities far, far beyond necessity. There, I’ve invoked Occam’s Razor. Hurray.
(And don’t take the bit about bootstrapping to godhood seriously. Sin or not, hubris is a dangerous thing. )
Cyc is a good boy:
http://www.flickr.com/photos/13175437@N02/1355515431/