Linked by Eugenia Loli on Tue 28th Oct 2014 04:40 UTC
Geek stuff, sci-fi... We've highlighted the dire warnings of Tesla and SpaceX founder Elon Musk in recent months regarding the perils of artificial intelligence, but this week he actually managed to raise the bar in terms of making A.I. seem scary. More at Mashable. My take: I worked on AI 20 years ago (wow, time flies). I don't believe that we will ever create anything truly sentient. Intelligent and useful for our needs, yes. But truly sentient, no. For something to become evil, it must be sentient. Anything else that ever becomes problematic would just be suffering from software bugs, not evil.
Thread beginning with comment 598549
Eugenia Member since:
2005-06-28

I believe that humans can't create true consciousness. I'm spiritual (not religious in the traditional sense), so I see consciousness as something pre-existing. I see our bodies as vessels for pieces of that consciousness. My claims aren't scientifically provable, but they're self-evident if you do meditation for some time. As such, I don't believe that we, humans, can create consciousness out of nothing. Eventually we might be able to create vessels (both biological and mechanical), but I think that's how far we can go at this stage regarding our own evolution as "creators".

As I said above though, bad programming could create problems. Bugs and logical errors might create bad situations with our "rather smart" machines. But that won't be because they turned "evil". You can't be good or evil if you aren't sentient and can't evolve.

Reply Parent Score: 3

ssokolow Member since:
2010-01-21

Given that I subscribe to the physicalist viewpoint and you seem to be a very rational and intelligent person, I'd be fascinated to hear what you think of this Yale open courseware lecture series.

https://www.youtube.com/watch?v=gh-6HyTRNNY

(The course is named "PHIL 176: Death" and it's an excellent introduction to rational thought and discourse on the nature of conscious existence and its many aspects.)

Edited 2014-10-28 07:06 UTC

Reply Parent Score: 3

gedmurphy Member since:
2005-12-23

As someone who's worked on AI much more recently, I believe it's only a matter of time before true AI is a reality.

It's not about how we as programmers create the AI being, it's about software becoming advanced enough to create self-modifying code which advances itself for its own worth. The more advanced this self-modifying aspect gets, the more 'aware' the code will become and the more likely it is to try to think for itself and ultimately try to protect itself from threats (us).
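
A crude sketch of the principle in Python (everything here is invented for illustration; a real system would be vastly more sophisticated):

import random

# Toy "self-improving" program: a candidate solution that rewrites
# its own parameters and keeps any rewrite that scores better.
# The fitness function is arbitrary (peak at x = 3).
def fitness(params):
    x = params[0]
    return -(x - 3.0) ** 2

params = [0.0]  # the program's current "code"
for generation in range(1000):
    mutant = [p + random.gauss(0, 0.1) for p in params]
    if fitness(mutant) > fitness(params):
        params = mutant  # the program replaces itself with the better version

print(params)  # converges near [3.0]

The point is that the loop, not the programmer, decides what the final version looks like.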

We are all biological computers which in principle work the same as mechanical computers.

Reply Parent Score: 6

ddjones Member since:
2014-10-28

"It's not about how we as programmers create the AI being, it's about software becoming advanced enough to create self-modifying code which advances itself for its own worth. The more advanced this self-modifying aspect gets, the more 'aware' the code will become and the more likely it is to try to think for itself and ultimately try to protect itself from threats (us).

We are all biological computers which in principle work the same as mechanical computers."


This is true only in a VERY general sense. I'm a physicalist, so I'm a firm believer that our consciousness arises from the functioning of our brains and that we are thinking machines. I'm less convinced that we should be called "computers." Our brain appears to function on very different principles from the devices we normally use that term to describe, and I'm not at all convinced that consciousness and intelligence can be achieved by a device which consists of a series of logic gates. That's not code for a secret claim of dualism; there are many phenomena in nature which are poorly represented by discrete logic. I think it's quite possible that, assuming we don't destroy ourselves first, we'll eventually create artificial consciousnesses. But if we do, I suspect that the "brains" of such a creation will not significantly resemble modern CPUs.

More importantly, even if we assume that we are biological computers operating a complex algorithm that a finite-state machine is capable of emulating, that algorithm was created by millions of years of evolution. Humans are almost certainly the smartest creatures on this planet, but we're not particularly intelligent in an abstract sense. We do not act intelligently. We react emotionally. We act based upon drives, biological imperatives honed by countless generations competing for resources and survival. The urge to conquer is driven by the need to compete and reproduce. The ability to love is very likely the result of a competitive advantage experienced by offspring with nurturing parents. Why should we assume that an AI which is not the result of such an extended process would share any of our drives or emotional prejudices? The assumption that an AI would think like us is as absurd as thinking that it would look like us. The only way that would happen is if we deliberately designed it that way. And if AIs arise from generations of self-modifying code, then we'll have very little input into their final design.
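
A toy illustration of that last point (an invented simulation, not real research): run the same mutation process under two different selection pressures and you get two completely different "drives".

import random

# The same hill-climbing evolution, under two invented fitness
# functions standing in for different selection pressures.
def evolve(fitness, generations=500):
    trait = 0.0  # e.g. an "aggression" level
    for _ in range(generations):
        mutant = trait + random.gauss(0, 0.05)
        if fitness(mutant) > fitness(trait):
            trait = mutant
    return trait

compete = lambda t: t                # scarcity rewards ever more aggression
cooperate = lambda t: -abs(t - 0.1)  # kin groups reward near-passivity

print(evolve(compete))    # keeps climbing
print(evolve(cooperate))  # settles near 0.1

Whatever "emotions" such a process produces are artifacts of the fitness function, not of anything we intended.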

***edit: spelling

Edited 2014-10-28 11:40 UTC

Reply Parent Score: 3

blcbob Member since:
2014-10-28

Did a fair bit of AI work myself - and I have to agree with Musk and Hawking.

If you understand neural networks and have just a little bit of imagination, it's obvious that we have to be very careful in this field.
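
For anyone who hasn't looked inside one, here is roughly all a single artificial neuron does (a minimal sketch, learning the AND function with the classic perceptron rule):

import random

# One neuron learning AND: weighted sums nudged by an error signal.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

for _ in range(100):
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - out
        w[0] += 0.1 * error * x1  # perceptron learning rule
        w[1] += 0.1 * error * x2
        b += 0.1 * error

for (x1, x2), _ in data:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)

Stack millions of these and let them rewire, and nobody can tell you exactly what the network has learned. That is where the worry comes from.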

You simply don't know what you are talking about if you are not worried about the potential danger of AI.

Reply Parent Score: 3

abstraction Member since:
2008-11-27

Yes, but I don't think it necessarily has to be self-modifying code in order to achieve that.

Reply Parent Score: 3

Lennie Member since:
2007-09-22

I'm no expert, but I've recently seen some videos on AI and found them interesting. Intelligence in general is interesting.

While we haven't been able to build a full simulation of a human brain, or even of much smaller brains, it's worth remembering that even the human brain is limited in resources. It has to be conservative with its use of energy.

So instead of self-modifying code, I think you should consider temporary re-configuration/re-purposing of parts of the brain for the task at hand.
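
Something like this, in very hand-wavy Python (all the "regions" and the wiring are invented; this is just to show the contrast with self-modifying code):

# A fixed pool of "brain regions", temporarily wired together per
# task and then released; the regions themselves never change.
REGIONS = {
    "visual": lambda x: "features(%s)" % x,
    "language": lambda x: "words(%s)" % x,
    "motor": lambda x: "movement(%s)" % x,
}

def run_task(wiring, stimulus):
    result = stimulus
    for region in wiring:  # temporary composition for this task only
        result = REGIONS[region](result)
    return result  # the wiring is discarded afterwards

print(run_task(["visual", "language", "motor"], "text"))  # reading aloud
print(run_task(["visual", "motor"], "ball"))              # catching a ball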

Also see the videos I linked in the other comment:

http://www.osnews.com/permalink?598605

Reply Parent Score: 3

abstraction Member since:
2008-11-27

Perhaps we can't produce consciousness because we can't scientifically say what consciousness is, and therefore the question of what is and isn't conscious falls under philosophy and not science.

Reply Parent Score: 4

Alfman Member since:
2011-01-28

abstraction,

"Perhaps we can't produce consciousness because we can't scientifically say what consciousness is, and therefore the question of what is and isn't conscious falls under philosophy and not science."


I don't know what consciousness is. Many people believe that consciousness is special and cannot be replicated by machines. We say machines cannot feel emotions, pain, sadness, etc. It is conceivable that we could create a machine that mimics these behaviors (i.e. pain, emotion), but most of us would say it doesn't truly experience them as we do.

The thing that gets me is that I cannot even prove that other human beings are conscious. Sure, they exhibit behaviors that are reminiscent of consciousness, but in scientific terms, I'd be at a loss to categorically distinguish between the consciousness behind similar behaviors of a human being and a machine mimicking us. If we set up a controlled experiment where we prod a human and a computer, and they both say "ouch", how can we scientifically conclude that only one experiences pain? We say the machine is only mimicking conscious behaviors, but how do we know scientifically that other human beings aren't also mimicking them, like an automaton programmed to say ouch at the right moment?
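
The automaton half of that experiment is trivial to build, which is exactly the problem (a deliberately silly sketch):

# From the outside, the experimenter only ever sees behavior,
# never the experience (if any) behind it.
class Automaton:
    def prod(self):
        return "ouch"  # pure mimicry, no inner experience (we assume)

class Human:
    def prod(self):
        return "ouch"  # presumably backed by real pain, but how do you test that?

for subject in (Automaton(), Human()):
    print(subject.prod())  # identical observable output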

I guess this is why consciousness falls under philosophy and not science.

Edited 2014-10-28 19:24 UTC

Reply Parent Score: 3

Lennie Member since:
2007-09-22

What do you fine folks who have worked on AI think about there being another layer of intelligence that AI research doesn't seem to have worked on:

https://www.youtube.com/watch?v=RZ3ahBm3dCk
https://www.youtube.com/watch?v=1DeNSE9RH8A

My simple explanation is:

The process of learning how to automatically select the best learning algorithm for handling a specific situation.
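
In toy form, it might look like an epsilon-greedy bandit choosing between learners (everything below is a made-up stub, just to make the idea concrete):

import random

# Tracks which of several (fake) learning algorithms performs best
# and learns to pick the winner while still exploring occasionally.
true_skill = {"decision_tree": 0.6, "neural_net": 0.8, "nearest_k": 0.5}
scores = {name: [0.0, 0] for name in true_skill}  # [total reward, tries]

def try_learner(name):
    # Stub: succeeds with the learner's hidden probability.
    return 1.0 if random.random() < true_skill[name] else 0.0

for step in range(2000):
    if random.random() < 0.1:  # explore
        choice = random.choice(list(true_skill))
    else:                      # exploit the best average so far
        choice = max(scores, key=lambda n: scores[n][0] / max(scores[n][1], 1))
    scores[choice][0] += try_learner(choice)
    scores[choice][1] += 1

print(max(scores, key=lambda n: scores[n][1]))  # usually "neural_net"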

Reply Parent Score: 2

zima Member since:
2005-07-06

Eugenia, first prove to me that you are conscious... ;)

Reply Parent Score: 2