Linked by Eugenia Loli on Tue 28th Oct 2014 04:40 UTC
Geek stuff, sci-fi... We've highlighted the dire warnings of Tesla and SpaceX founder Elon Musk in recent months regarding the perils of artificial intelligence, but this week he actually managed to raise the bar in terms of making A.I. seem scary. More at Mashable. My take: I worked on AI 20 years ago (wow, time flies). I don't believe that we will ever create anything truly sentient. Intelligent and useful for our needs, yes. But truly sentient, no. For something to become evil, it must be sentient. Anything else that becomes problematic would just be software bugs, not evil.
Never underestimate your opponent...
by techweenie1 on Tue 28th Oct 2014 05:33 UTC
techweenie1
Member since:
2008-10-15

I disagree. I don't know if we're as close as some say we are, but I do believe we will eventually create artificial sentient life forms, and when we do, all hell will break loose...

Edited 2014-10-28 05:34 UTC

Reply Score: 0

allanregistos Member since:
2011-02-10

I disagree. I don't know if we're as close as some say we are, but I do believe we will eventually create artificial sentient life forms, and when we do, all hell will break loose...

If you knew programming, you would certainly agree with the poster. AI is only as evil as the *one* who designed it and the motives of the creator behind it. It cannot have free will; it has no consciousness of its own. Humans may create AI beings that appear sentient, but as you noted, it is artificial, meaning it is fake rather than the real thing, so it has no rights except those it inherits from its creator.

Reply Score: 1

abstraction Member since:
2008-11-27

If you knew programming, then you would know that not all algorithms are something you can control. Given an unsupervised neural network that processes inputs and produces outputs, there is no telling what the result will be.

Edited 2014-10-28 18:10 UTC

Reply Score: 5

Eugenia Member since:
2005-06-28

I believe that humans can't create true consciousness. I'm spiritual (not religious in the traditional sense), so I see consciousness as something pre-existing. I see our bodies as vessels for pieces of that consciousness. My claims aren't scientifically provable, but they're self-evident if you practice meditation for some time. As such, I don't believe that we, humans, can create consciousness out of nothing. Eventually we might be able to create vessels (both biological and mechanical), but I think that's how far we can go at this stage regarding our own evolution as "creators".

As I said above though, bad programming could create problems. Bugs and logical errors might create bad situations with our "rather smart" machines. But that won't be because they got "evil". You can't be good or evil if you aren't sentient and can't evolve.

Reply Score: 3

ssokolow Member since:
2010-01-21

Given that I subscribe to the physicalist viewpoint and you seem to be a very rational and intelligent person, I'd be fascinated to hear what you think of this Yale open courseware lecture series.

https://www.youtube.com/watch?v=gh-6HyTRNNY

(The course is named "PHIL 176: Death" and it's an excellent introduction to rational thought and discourse on the nature of conscious existence and its many aspects.)

Edited 2014-10-28 07:06 UTC

Reply Score: 3

gedmurphy Member since:
2005-12-23

As someone who's worked on AI much more recently, I believe it's only a matter of time before true AI is a reality.

It's not about how we as programmers create the AI being; it's about software becoming advanced enough to create self-modifying code which advances itself for its own benefit. The more advanced this self-modifying aspect gets, the more 'aware' the code will become and the more likely it is to try to think for itself and ultimately try to protect itself from threats (us).

We are all biological computers which in principle work the same as mechanical computers.

Reply Score: 6

ddjones Member since:
2014-10-28

It's not about how we as programmers create the AI being; it's about software becoming advanced enough to create self-modifying code which advances itself for its own benefit. The more advanced this self-modifying aspect gets, the more 'aware' the code will become and the more likely it is to try to think for itself and ultimately try to protect itself from threats (us).

We are all biological computers which in principle work the same as mechanical computers.


This is true only in a VERY general sense. I'm a physicalist, so I'm a firm believer that our consciousness arises from the functioning of our brains and that we are thinking machines. I'm less convinced that we should be called "computers." Our brain appears to function on very different principles from the devices we normally use that term to describe, and I'm not at all convinced that consciousness and intelligence can be achieved by a device which consists of a series of logic gates. That's not code for a secret claim of dualism. There are many phenomena in nature which are poorly represented by discrete logic. I think it's quite possible that, assuming we don't destroy ourselves first, we'll eventually create artificial consciousnesses. But if we do, I suspect that the "brains" of such a creation will not significantly resemble modern CPUs.

More importantly, even if we assume that we are biological computers operating a complex algorithm that a finite-state machine is capable of emulating, that algorithm was created by millions of years of evolution. Humans are almost certainly the smartest creatures on this planet, but we're not particularly intelligent in an abstract sense. We do not act intelligently. We react emotionally. We act based upon drives, biological imperatives honed by countless generations competing for resources and survival. The urge to conquer is driven by the need to compete and reproduce. The ability to love is very likely a result of a competitive advantage experienced by offspring with nurturing parents. Why should we assume that an AI which is not the result of such an extended process would share any of our drives or emotional prejudices? The assumption that an AI would think like us is as absurd as thinking that they would look like us. The only way that would happen is if we deliberately designed them that way. And if AIs arise from generations of self-modifying code, then we'll have very little input into their final design.

***edit: spelling

Edited 2014-10-28 11:40 UTC

Reply Score: 3

blcbob Member since:
2014-10-28

Did a fair bit of AI work myself - and I have to agree with Musk and Hawking.

If you understand neural networks and have just a little bit of imagination, it's obvious that we have to be very careful in this field.

You simply don't know what you are talking about if you are not worried about the potential danger of AI.

Reply Score: 3

cfgr Member since:
2009-07-18

Software is always restricted by hardware. If anything, just pull the plug. No body, no harm. Everything else can be targeted by human hackers as well so there's nothing new there.

And I doubt anyone would actually link critical systems to software (AI) that cannot be controlled, predicted, or reliably tested. Heh, it's hard enough to pass all those bureaucratic tests as it is.

Also a fun read:
http://what-if.xkcd.com/5/

Reply Score: 4

Alfman Member since:
2011-01-28

cfgr,

Software is always restricted by hardware. If anything, just pull the plug. No body, no harm. Everything else can be targeted by human hackers as well so there's nothing new there.


...Unless we put software in charge of building the hardware. Then we get a feedback loop where it can improve upon itself. It actually makes sense to put computers in charge of designing computers, since they can design far more complex circuits than humans can. Right now we're still designing CPU subsystems and instruction sets, etc. But it's likely, assuming CPUs are allowed to continue evolving, that at some point a CPU's inner workings will become too complex for humans to comprehend. One day we could end up with a trillion-transistor CPU; engineers will be able to show that it passes their engineering fitness functions. However, unless they've specifically restricted the generator to suboptimal "human" designs, they will have absolutely no idea what all the circuits do. It will consistently beat human designs, yet we won't understand how.

And I doubt anyone would actually link critical systems to software (AI) that cannot be controlled, predicted, or reliably tested. Heh, it's hard enough to pass all those bureaucratic tests as it is.


Assuming these computers become commonplace and every company has its own Watson (and Watson itself has evolved a couple of generations), somebody's bound to "succeed" eventually.

Reply Score: 4

cfgr Member since:
2009-07-18

...Unless we put software in charge of building the hardware.

Yes, that's true, but that would be very specific software designed to optimise this particular problem. It would be a waste of developer time and CPU cycles to add more intelligence than necessary. So yeah, you can deploy a general AI that does a lot more than just this task, but why would anyone do that?

One day we could end up with a trillion-transistor CPU; engineers will be able to show that it passes their engineering fitness functions. However, unless they've specifically restricted the generator to suboptimal "human" designs, they will have absolutely no idea what all the circuits do. It will consistently beat human designs, yet we won't understand how.

Yeah, this is already the case for many problems, in fact. During my final years at university, I developed a piece of machine learning software that essentially optimises a training algorithm, which in turn optimises itself to solve a given learning problem (supervised, unsupervised or reinforcement). Creating an optimal solution is one thing; actually understanding why this solution is optimal and what implications it has is an entirely different matter.

Basically, modern AI is statistics rather than logic: the results are there, but what goes on inside is pretty much a black box that neither a human nor the computer algorithm itself actually understands. We just know that the output is a statistical conclusion based on the training input and that, if trained properly, it can be extremely accurate. You can have a perfect understanding of why the learning algorithm works, but not necessarily why "this input leads to this output", especially not when "this input" is made up of high-dimensional data and first goes through some preprocessors such as PCA. It's simply impossible to tell how each component interacts with the others.

So, in short: computers already beat humans at problem solving; everyone knows how they reached a given solution, but no one knows why it's better.
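
For illustration, here's a minimal sketch of that kind of pipeline in Python with scikit-learn (the data and parameters below are made up, not the actual software I worked on):

    # Accuracy of the trained pipeline is easy to measure, but explaining why
    # one particular input maps to one particular output is not: the sample
    # first goes through a PCA projection and then through hundreds of trees.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Synthetic, high-dimensional training data (hypothetical stand-in).
    X, y = make_classification(n_samples=2000, n_features=100,
                               n_informative=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(PCA(n_components=10),
                          RandomForestClassifier(n_estimators=200, random_state=0))
    model.fit(X_train, y_train)

    # We can state *that* it works...
    print("test accuracy:", model.score(X_test, y_test))
    # ...but tracing *why* this sample got this label is effectively a black box.
    print("prediction for one sample:", model.predict(X_test[:1]))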

Edited 2014-10-28 16:10 UTC

Reply Score: 3

Alfman Member since:
2011-01-28

cfgr,

Yes, that's true, but that would be very specific software designed to optimise this particular problem. It would be a waste of developer time and CPU cycles to add more intelligence than necessary.



I'd say that depends on how software will be built in the future. Relatively simple "genetic algorithms" can solve tremendously complex problems. We provide a relatively simple fitness function, and the algorithms use evolutionary processes to find suitable solutions. Given enough parallelism and enough generations, almost anything should be possible. After all, that is the predominant scientific theory of how humans came to be. I predict these methods of designing software will become commonplace across all domains, and they will be the motivation for the immensely powerful computers of the future, even if they're "over-provisioned" by today's standards.
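
To make the idea concrete, here's a minimal sketch of such a loop in Python (a toy problem with made-up parameters; a real hardware or software search would be vastly larger):

    # Toy genetic algorithm: a simple fitness function plus selection,
    # crossover and mutation. Here the "design" is just a bit string and
    # fitness counts the 1 bits, standing in for a real engineering metric.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 64, 50, 200, 0.02

    def fitness(genome):
        return sum(genome)  # the "engineering fitness function"

    def mutate(genome):
        return [(1 - g) if random.random() < MUTATION_RATE else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]          # selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    print("best fitness:", max(fitness(g) for g in population))

We only specify what counts as "good"; the candidate designs themselves come out of the selection loop.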

Yeah, this is already the case for many problems, in fact. During my final years at university, I developed a piece of machine learning software that essentially optimises a training algorithm, which in turn optimises itself to solve a given learning problem (supervised, unsupervised or reinforcement). Creating an optimal solution is one thing; actually understanding why this solution is optimal and what implications it has is an entirely different matter.


Sounds fun; they didn't offer such a course at my university. I did take a Genetic Algorithms class, but honestly I found the undergrad classes too basic for me.

Basically, modern AI is statistics rather than logic: the results are there, but what goes on inside is pretty much a black box that neither a human nor the computer algorithm itself actually understands.


Yep, I think this is the only way to build an AI that isn't limited by the rules/algorithms its designer comes up with. I wonder what hard problems we're going to throw at strong AI in the future?

Medicine is a good one. A computer will be able to make an "intelligent" medical diagnosis by observing the data and "understanding" the patient's condition. Of course, a computer cannot become a master just by existing; experience is important too. Also, lacking physical senses places it at an obvious disadvantage: there are subtle context clues that won't necessarily be documented.

Edited 2014-10-28 17:33 UTC

Reply Score: 2

jrincayc Member since:
2007-07-24

"If anything, just pull the plug. No body, no harm. "

In a strict sense, if you are arguing that, today, AI would be incapable of causing humanity to go extinct, you are correct. However, the amount of hardware that computers control is increasing rapidly. For example, from the what-if you mention: "They might want to run us down, Futurama-style, but they’d have no way to find us." All the AI controlling the car would need is to find a camera pointing at the car (cellphone or CCTV), and suddenly it could see whether the car could hit anything by accelerating.

So computers could kill a lot of people today, and in the future the number of people that a dangerous AI could kill will increase.

Reply Score: 2

abstraction Member since:
2008-11-27

Yes, but I don't think it necessarily has to be self-modifying code in order to achieve that.

Reply Score: 3

Lennie Member since:
2007-09-22

I'm no expert, but recently I've seen some videos on AI and I think it is interesting. Intelligence in general is interesting.

While we haven't been able to build a full simulation of a human brain, or even of much smaller brains, that doesn't change the fact that even the human brain is limited in resources. It has to be conservative with its use of energy.

So instead of self-modifying code, I think you should consider temporary re-configuration/re-purposing of parts of the brain for the task at hand.

Also see the videos I linked in the other comment:

http://www.osnews.com/permalink?598605

Reply Score: 3

abstraction Member since:
2008-11-27

Perhaps we can't produce consciousness because we can't scientifically say what consciousness is, and therefore the question of what is conscious and what is not falls under philosophy and not science.

Reply Score: 4

Alfman Member since:
2011-01-28

abstraction,

Perhaps we can't produce consciousness because we can't scientifically say what consciousness is, and therefore the question of what is conscious and what is not falls under philosophy and not science.


I don't know what consciousness is. Many people believe that consciousness is special and cannot be replicated by machines. We say machines cannot feel emotions, pain, sadness, etc. It is conceivable that we could create a machine that mimics these behaviors (i.e. pain, emotion), but most of us would say it doesn't truly experience them as we do.

The thing that gets me is that I cannot even prove that other human beings are conscious. Sure, they exhibit behaviors that are reminiscent of consciousness, but in scientific terms, I'd be at a loss to categorically distinguish between the consciousness behind similar behaviors of a human being and a machine mimicking us. If we set up a controlled experiment where we prod a human and a computer, and they both say "ouch", how can we scientifically conclude that only one experiences pain? We say the machine is only mimicking conscious behaviors, but how do we know scientifically that other human beings aren't also mimicking them, like an automaton programmed to say ouch at the right moment?

I guess this is why consciousness falls under philosophy and not science.

Edited 2014-10-28 19:24 UTC

Reply Score: 3

Lennie Member since:
2007-09-22

What do you fine folks who worked on AI think about there being another layer of intelligence that AI doesn't seem to have worked on:

https://www.youtube.com/watch?v=RZ3ahBm3dCk
https://www.youtube.com/watch?v=1DeNSE9RH8A

My simple explanation is:

The process of learning how to automatically select the best learning algorithm for handling a specific situation.
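
As a toy illustration of that idea (Python with scikit-learn, made-up data; real meta-learning is far more involved than this), selecting a learner could look like:

    # "Learning which learner to use": try a few candidate algorithms and
    # keep whichever cross-validates best on the problem at hand.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    candidates = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "k-nearest neighbours": KNeighborsClassifier(),
        "decision tree": DecisionTreeClassifier(random_state=0),
    }

    scores = {name: cross_val_score(model, X, y, cv=5).mean()
              for name, model in candidates.items()}
    print(scores)
    print("selected learner:", max(scores, key=scores.get))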

Reply Score: 2

zima Member since:
2005-07-06

Eugenia, first prove to me that you are conscious... ;)

Reply Score: 2

WOPR
by danny on Tue 28th Oct 2014 06:11 UTC
danny
Member since:
2005-07-10

[typing] Love to. How about Global Thermonuclear War?

(Sigh) My old Commodore 64 BASIC ELIZA program barely understood "which witch is which?" As far as emulating a nervous system goes, it is difficult to get beyond half a dozen neurons. We have ~100 billion. It will be a while, in my opinion, before we get a WOPR or a HAL 9000. On the other hand, Elon Musk does not have a belly button!

Reply Score: 0

RE: WOPR
by Kochise on Tue 28th Oct 2014 06:23 UTC in reply to "WOPR"
Kochise Member since:
2006-03-03

Wrong: we use serialized instruction execution speed to emulate parallel processing, hence multitasking. Now we have computers with 512+ cores running at 3 GHz+. There are computing farms out there. Look at IBM's computer that won Jeopardy. With the right algorithm, it could run free.

Yet, even while sentient, it wouldn't necessarily have mankind's flaws like power-mongering; it could be pragmatic and come quickly (in about 1 or 2 seconds) to the conclusion that an Earth without an over-(re)producing humankind wouldn't be a bad alternative, and thus eradicate us all.

After all, we have driven many species extinct in the last couple of centuries without feeling much concern.

Kochise

Reply Score: 3

RE[2]: WOPR
by ssokolow on Tue 28th Oct 2014 07:09 UTC in reply to "RE: WOPR"
ssokolow Member since:
2010-01-21

Except that David Deutsch makes a very persuasive argument that "Artificial General Intelligence" (since the term "AI" has become so diluted) is an algorithm so different from what we currently implement that not only can we not stumble on it accidentally, we may require a whole new way of thinking about the problem to achieve it intentionally.

http://www.aeonmagazine.com/being-human/david-deutsch-artificial-in...

Edited 2014-10-28 07:13 UTC

Reply Score: 3

RE[3]: WOPR
by abstraction on Tue 28th Oct 2014 18:26 UTC in reply to "RE[2]: WOPR"
abstraction Member since:
2008-11-27

If this were true, we would probably still be able to simulate it, but the algorithm would be so painfully slow it would not show any results worth recognising. For instance, the algorithm might have to take into account not only the functionality of the brain itself but also simulate the entire surrounding world as well, or else the information might not be enough to reach any sort of intelligent behaviour.

Reply Score: 3

RE[4]: WOPR
by Kochise on Wed 29th Oct 2014 05:40 UTC in reply to "RE[3]: WOPR"
Kochise Member since:
2006-03-03

"Game of life" doesn't requires much of surrounding simulation.

Kochise

Reply Score: 2

Agreed..
by sheokand on Tue 28th Oct 2014 06:11 UTC
sheokand
Member since:
2013-04-23

Just imagine if AI is used in the wrong places or by the wrong people (terrorists, North Korea). It can be very dangerous, because AI bots are just programs, and these people just have to change the final goal.

It's like using a knife to cut vegetables or to kill someone. It just depends on the person who has the knife.

Reply Score: 1

RE: Agreed..
by Kochise on Tue 28th Oct 2014 07:50 UTC in reply to "Agreed.."
Kochise Member since:
2006-03-03

Not the wrong people; we are always the wrong people for somebody else. While we are one mankind that should behave like a global brotherhood, we strike each other for dubious reasons. Hence we'll never reach a consensus and live in peace, so we'll use the technology to "defend" ourselves from... the others (the not-like-us, the aliens, the wild animals, the ...)

Fearful little humans.

Kochise

Reply Score: 2

pd1011
Member since:
2010-12-08

Imagine a spreadsheet used to enrich a few, fire many, and hire H-1Bs; that's worse than any AI. A spreadsheet in the wrong hands can be harmful to society. We don't live in a global society: you have to pay top dollar for medicine, health records and movies. Why should you be able to hire from the third world at bottom dollar and then blame robotics and AI? As long as the rich are allowed to get richer and the poor poorer, you have a problem. We also use computers to control access instead of to educate, and to spy on people.

Reply Score: 0

Musky
by Carewolf on Tue 28th Oct 2014 08:35 UTC
Carewolf
Member since:
2005-09-08

I always thought Musk was a pretty smart guy; now I am not so sure. Maybe he has just read TOO much bad sci-fi.

Reply Score: 2

Sentient?
by immanos on Tue 28th Oct 2014 08:52 UTC
immanos
Member since:
2014-10-28

Of course it will happen. Replicating the brain is just a sweet technical problem. Or rather, digitally replicating evolution, with a new generation every two microseconds... at that point we're screwed, if the thing is attached to a Boston Dynamics WildCat.

Edited 2014-10-28 08:53 UTC

Reply Score: 2

no evil necessary
by unclefester on Tue 28th Oct 2014 09:03 UTC
unclefester
Member since:
2007-01-13

"For something to become evil, it must be sentient. Anything else, if it ever becomes problematic, it would just be software bugs, not evilness."

Wrong. AI can destroy us by acting in a purely rational manner.

Imagine that AI one day decides (after objectively analysing changing environmental parameters) that humans are destroying the environment too quickly. Logically, without thinking, it neutralises 99.99% of us humanely. A few thousand survivors are left to live as hunter-gatherers in a game reserve. No evil involved, but the outcome is the same.

Edited 2014-10-28 09:07 UTC

Reply Score: 6

Artificial Stupidity
by M.Onty on Tue 28th Oct 2014 10:18 UTC
M.Onty
Member since:
2009-10-23

I've often thought that we're trying to fly before we can crawl with Artificial Intelligence. We've not made a computer as intelligent as a vole yet, and we want to make it as intelligent as Mr Hoskins from down the road?

In any case, what would we want it for? We have a massive glut of human intelligence at present which we struggle to put to good use, insisting that the most powerful and expressive machine in the known universe --- the human brain --- is largely employed in stacking shelves, digging ditches and designing Angry Birds knock-offs.

Why not aim for Artificial Stupidity, or Artificial Animalism? A computer with the intelligence of an ants' nest would be very useful. A computer with the intelligence of a Border Collie would be even more so.

Right now I don't want a computer that's my intellectual equal or superior; I'll settle for one that understands when I'm getting angry with it, learns which of its habits are irritating and which are useful, learns my patterns of work and predicts them.

Like a Border Collie sheepdog, in fact: a computer that understands that a random mashing of keys is the equivalent of the user shouting, "Stop that right now you little bastard!"

Those who want a computer that's perfectly tuned to their requirements would start with a 'puppy' and shape it to their 'shepherding' needs. Most of us with less demanding requirements would prefer to skip the shitting on the digital carpet phase and download a fully developed 'worker dog'. Might not understand you so well, but it would probably do.

Even were it achievable in the foreseeable future, why would Artificial Intelligence be more useful and desirable than Artificial Stupidity?

Reply Score: 3

Comment by randy7376
by randy7376 on Tue 28th Oct 2014 16:25 UTC
randy7376
Member since:
2005-08-08

I'm surprised no-one has mentioned this yet.

As an example of one possibility as to what might happen with a "rogue artificial intelligence", check out the movie "Colossus: The Forbin Project".

http://www.imdb.com/title/tt0064177/?ref_=nv_sr_1

Reply Score: 3

preposterous
by CaptainN- on Tue 28th Oct 2014 19:07 UTC
CaptainN-
Member since:
2005-07-07

We absolutely will create something sentient. It's not as if the human brain machine is impossible to understand. And once we understand it, someone will program a model to match it (then mess with it). It WILL happen (unless we destroy ourselves first) - I'm not sure how people believe it can't or won't.

Also, sentient computer intelligence is real, actual intelligence. That's not the same as artificial intelligence, which only simulates intelligence - artificially. I always wonder why this isn't more obvious. Artificial intelligence just does whatever it was programmed to do, without any thought, and given the right circumstances things can go wrong for sure.

Reply Score: 4

definitions
by Janvl on Wed 29th Oct 2014 11:46 UTC
Janvl
Member since:
2007-02-20

I read about consciousness and intelligence here.

But no one defines what we consider intelligence.
Is a species that destroys its own habitat intelligent?

The big danger in the near future is not AI but genetic manipulation, where we manipulate biological "machinery" without fully understanding it.

Reply Score: 3

AI is really interesting
by Jam61 on Sat 1st Nov 2014 18:58 UTC
Jam61
Member since:
2014-11-01

AI is really interesting and at the same time frightening if it becomes a real threat to our human existence, a Terminator-like scenario. But corporations may be ready to go to any lengths if they profit by employing machines instead of human beings; we will have to wait and see...

Reply Score: 1