Linked by Nicholas Blachford on Thu 11th Mar 2004 19:36 UTC
Editorial Conscious, emotional machines: will we ever see them? How far can technology go, and can technology be applied to us? In this final part I wander into the realm of science fiction. Then, to conclude the series, I come back down to Earth to speculate on the features we'll see in any radical new platform that appears. Update: Never let it be said that I ignore my errors; in the interests of clarity, and with apologies to Extreme Programmers, I have revised Part 1.
no machines in bladerunner
by n00b on Thu 11th Mar 2004 20:05 UTC

they were genetic creations

Artificial but Organic
by Best on Thu 11th Mar 2004 20:20 UTC

The Replicants were still artificial intelligences, though, in that they were created; the original Robots weren't machines either.

My guess is that we'll see brain-in-a-box computers like the Magi before we see anything like an intellect born of a complex system.

I know that I'll never have my own personal AI. Not because I don't think I'll live to see one (I plan to live at least another 60 years), but rather because I'm terrified of the idea of coming home and having the machine say:

"Look at you... A pathetic creature of meat and bone..."

Refuses to Read
by Chris on Thu 11th Mar 2004 20:54 UTC

The title alone....
I think you meant to ask if they can become sentient. The two have similar meanings, but it's beside the point anyway. We have yet to understand the most basic method by which we think and how we make the connections we make. We've got a long way to go in terms of computational power before a computer can run a program as quickly as we think.
I don't see a true need for machines that think at a sentient level. It would simply create ethical questions and leave us with useless emotional computers that slow down our work even more.

Maybe we should solve heat and energy usage issues before we worry about making our computers sentient. After that we can solve the security issues. Then we will probably be directly interacting with the human mind. And after all that, maybe someone will come up with an algorithm to simulate true learning.

links
by atom on Thu 11th Mar 2004 21:40 UTC

I think it would be helpful to have links to the other articles in this series.

Ok this is ridiculous
by Anonymous on Thu 11th Mar 2004 21:41 UTC
Consciousness Not Really Special
by linux_baby on Thu 11th Mar 2004 21:43 UTC

>> Will machines become conscious?
There is no answer to this question since we do not at this point even understand our own consciousness or how it works.
>>

Well, actually, consciousness is really nothing special. It is simply the sum total of our mental life, and to that extent, it isn't anything particular to our species. Chimps and cows and cockroaches all have consciousness. Obviously, human consciousness is more advanced than the chimp's, the same way the chimp's consciousness is more advanced than an earthworm's, and an earthworm's more advanced than an apple's. But we are talking about "degrees" of complexity here, not about any special qualitative difference. To that extent, there is no reason why machines would not eventually evolve to have a consciousness more, shall we say, "advanced" than that enjoyed by humans.

computer - funny and ironic
by tech_user on Thu 11th Mar 2004 22:13 UTC

When a computer appreciates something funny or ironic, then I will be impressed.

Well done
by AnonaMoose on Thu 11th Mar 2004 22:31 UTC

Howdy

Good to see you fixed that earlier part; incremental development isn't without its own problems, but the way you worded the earlier article was kind of inflammatory ;)

As for AI, I think we have Buckley's chance of getting this right (well, not for a looong time) when we can't even get a 100-million-line program right.
Imagine an AI entity with severe mental problems that can't be explained!

I For One
by Sphinx on Thu 11th Mar 2004 22:32 UTC

Welcome our new robot overlords.

RE: Ok this is ridiculous
by Eugenia on Thu 11th Mar 2004 22:44 UTC
ATTN : All Billy No Mates @ OSNews
by You Can Call me Al on Thu 11th Mar 2004 22:58 UTC

This is where I was gonna slam individual lamers, but you know who you are so there's little point.

------------------------

You know what? I really admire this guy. Since his first article he's tolerated rude comments from trolls here, but he still has the balls to come back the following week with a new, fascinating installment.

You know what they say, Nicholas: "Don't let the B's grind you down."

Re intelligent machines and the future
by clr on Thu 11th Mar 2004 23:21 UTC

If anyone is interested in all this stuff (like me), then you should read Ray Kurzweil's book "The Age of Spiritual Machines". He goes into greater detail about the specifics than this interesting article does.
He extrapolates from existing and developing technology and leads on to some very stimulating ideas indeed.... er, sod it, I'm off to the library to read it again.
This is a good series of articles. Thanks for the read.

Re: clr
by Bascule on Thu 11th Mar 2004 23:50 UTC

If anyone is interested in all this stuff (like me), then you should read Ray Kurzweil's book "The Age of Spiritual Machines"

That's an excellent suggestion. Some other excellent reading on the matter is Daniel Dennett's books/essays Brainchildren, Consciousness Explained, and Freedom Evolves. In Freedom Evolves, Dennett attempts to explain how free will can manifest itself on top of a structured system that retains some chaotic operating properties.

I think within our lifetimes we will build a complete mathematical model of the operation of the human brain (especially if we can use the information from the Human Genome Project to model the physical development of the brain, then analyze the operation of the physical brain model in order to construct a mathematical model) at which point we can simply tweak and improve this model to fit our liking. As for creating a conscious computer program from scratch, I'm certainly not holding my breath. Artificial intelligence is the most depressing field of computer science, showing relatively little progress compared to virtually every other area.

RE: Troll eh?
by Chris on Thu 11th Mar 2004 23:57 UTC

"You know what? I really admire this guy, since his first article he's tolerated rude comments from troll's here, but he still has the balls to come back the following week with a new fascinating installment. "
Oh please, we've complained because his articles are over-sensationalized and they are a waste of OSNews frontpage space. We are not trolls but are dedicated readers of the site. We have been rude, people do that when they want someone to discontinue their current line of behavior.

The only thing worse than trolls is people who accuse everyone who disagrees with them of being trolls.

The Brain
by Johny Pneumatic on Fri 12th Mar 2004 00:22 UTC

I don't think we'll ever completely understand the human brain, at least not within the next 100 years, let alone have a mathematical model of it. Computers will never have emotions or genuine intelligence. Why would you want them to? We have enough war; creating another life form to fight with is not a good idea. I say, make computers fast, but keep them dumb.

interesting
by aliensoldier on Fri 12th Mar 2004 00:22 UTC

I'm a so-so fan of this article series, but this one and the previous one on FPGAs are quite relevant if you come here for real news (not just sponsored M$ PR).

Some comments.

Consciousness:
This is a pure illusion. Consciousness emerges from re-entrant systems. Of course you can have philosophy mixed in. Perhaps the real thing (the kind we see only in humans and some animals) could be extended by adding the concepts of empathy and self-observation.

The analog/digital debate:
First, the analog world is not really analog: you never have a truly real-time sense of it, since it always has delays.

Those familiar with real-time microcontrollers will immediately think of the Z-transform, which carries the Laplace analog-world "s" data stream into the numerical "z" domain; the two are equivalent. You could also say that, just as complete three-axis sound positioning exists within normal stereo sound and can be extracted, quantum information could make it into the numerical stream (I have no idea how to get it back out, but a neural simulation probably could).
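For anyone who hasn't met it, here is the mapping being referred to, in standard signal-processing notation (this is textbook material, not anything from the article itself):

```latex
% Sampling a continuous-time signal with period T ties the Laplace
% ("s") domain to the Z ("z") domain exactly via z = e^{sT}; the
% bilinear transform is the usual rational approximation in practice.
\[
  z = e^{sT}
  \qquad\Longleftrightarrow\qquad
  s = \frac{1}{T}\,\ln z \;\approx\; \frac{2}{T}\,\frac{1 - z^{-1}}{1 + z^{-1}} .
\]
```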

Non-computable problems:
That is a myth: anything that can be done by a human can be done by a machine. Logic doesn't exist only in humans; it's a rule of the universe. The best proof of that is formal math software. Sure, brute-force computation can't reach every solution, but you can have both approaches on a computer.

Roger Penrose:
I like the guy and hate him at the same time. He is the best example that a genius can be stupid at the same time. Geniuses have no problem solving problems, which means they are not aware of how their genius works. Someone who became intelligent over time (many are like that, but take Einstein for this example) is far more able to form an opinion on how the mind works. That is why Penrose searches for quantum effects in the mind, just as some search for God: as a reason to explain things they don't understand yet.

A comment on the Blade Runner comment: yes, the Replicants are robots and not genetically modified humans. Of course, the closer you come to nanotechnology, the more it becomes biology.

Complexity?
by Stray on Fri 12th Mar 2004 00:24 UTC

Well, actually, consciousness is really nothing special.....Chimps and cows and cockroaches all have consciousness.....But we are talking about "degrees" of complexity here, not about any special qualitative difference. To that extent, there is no reason why machines would not eventually evolve to have a consciousness more, shall we say, "advanced" than that enjoyed by humans.

Great. You're trying to prove that consciousness can be replicated by pointing to more "organic" examples -- as if getting machines conscious at a "human" level were the only problem. But the truth is that even replicating what an earthworm has is just as difficult.

We are not talking about "degrees" of complexity. Consciousness is not measurable like intelligence, nor is it a sum of parts -- if it were, we'd be just like computers. We haven't even gotten to the point of understanding the nature of consciousness, so how can you think it's just a matter of implementation? Or is that some kind of "top-to-bottom" design? May as well write a program that says "I am conscious" instead of "Hello World".
Because all you're doing is fooling yourself by saying "consciousness is really nothing special".

We are talking about something completely specific to organic creatures, in which the "complexity" just might be the same across the board. The nature of consciousness is unknown. It's a mystery, as much as "whether God exists" is a mystery.

conjecture!
by hugh jeego on Fri 12th Mar 2004 01:02 UTC

INSERT INTO article (content) VALUES ('conjecture');
DELETE FROM article WHERE content = 'facts';
DELETE FROM article WHERE content = 'evidence';

"consciousness is really nothing special"
"Computers will never have emotions or genuine intelligence."
"Chimps and cows and cockroaches all have consciousness."

Really pointless...
None of you know anything about consciousness; you are all just rambling.
Most of you can't even manage grammatical sentences.

Back this stuff up, people, or don't waste our time!

-Hugh

COMPUTABLE
by hugh jeego on Fri 12th Mar 2004 01:07 UTC

"One thing I think fiction gets wrong is the idea of emotions in artificial entities"

How does fiction get something wrong???

And to the clueless guy who mentioned "non-computable problems" and called them a myth: how does AI help you solve the halting problem, when even humans CANNOT do it?! Seriously, get an education before you shoot your mouth off.

-hugh

Flame on!
by James Dorn on Fri 12th Mar 2004 01:07 UTC

Hrm? Conscious, emotional machines? I think I had one once... It SKERD me, so I bought a Mac.

Main Point
by Marco on Fri 12th Mar 2004 01:17 UTC

I don't think you're getting the point.
First, one must show some humility when speaking about consciousness, and must understand one's limited nature.
Second, even if you could define what consciousness is with infinite precision, you must understand that language is intrinsically ambiguous, and even if it weren't, you could not be sure of communicating a concept to someone and having him understand that concept as you do. Your vision of nature is not sharable with certainty. (As a trivial example, consider this: it is impossible to say “this object is red” and be sure that another person sees the red color as you do. Maybe what the second person sees as red is what the first would call green, but it's not possible to communicate what you really perceive.)
Third, if you understood what consciousness is (which IMHO is a property of human beings only), you would automatically understand many, many things at the level of existential questions; and if you understood things like the meaning of life, its nature, etc., I don't think you would go ahead with constructing conscious machines.
Fourth, the main point is that the ideas exposed in this article are certainly imaginative, but in the end you should admit that a topic like this involves deep investigation, an open-minded view, and religious questions.
Fifth, even if you don't accept God (I hope you do), you can understand by using your intelligence that the deepest mystery of life is its mysterious nature; and with that same intelligence you can realize that living inside the universe means being constrained by its rules. You can investigate them, but investigating nature from the inside means always interacting with the universe during the investigation; this means modifying its state, so you lose the ability to measure its state without affecting it.
So you must live with your limits, and ask two deep questions: who created the rules? Who gave consciousness to you?

close, but no cigar
by hugh jeego on Fri 12th Mar 2004 01:21 UTC

"to one is impossible to say “this object is red” and be sure that another one sees the red color as the first does. Maybe what the second sees as red is what the first would call green, but it's not possible to communicate what you really perceive"

I'll respond to this despite the lack of grammaticality. This has nothing to do with consciousness; it is a shortcoming of human language.

-hugh

RE: close, but no cigar
by Marco on Fri 12th Mar 2004 01:34 UTC

Sorry for my grammar. Would you mind correcting that sentence, please?
What I wanted to say is that human language is ambiguous, and that even if you can say what consciousness is for you, you have to say it with words and concepts that may have a different meaning for another person; and even if you share the concept with him, you cannot be sure that you're really thinking of the same "thing".
As you can see, it's really difficult for me to express this concept and be sure you understand what I mean.

language ambiguity
by hugh jeego on Fri 12th Mar 2004 01:53 UTC

Sorry for being a grammar nazi. The statement was understandable.

I wanted to point out that ambiguous language is a separate issue from the fuzziness of consciousness. I think you are right about the fuzziness problem, but even if our thoughts were perfectly unambiguous, expressing thoughts in human language could still be problematic.

The problem you described before is one of the speaker and the listener having different internal knowledge representations. This isn't specific to conscious systems, even databases can have different internal representations for the same external data.
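To make that last point concrete, here is a toy sketch (the records and values are hypothetical, not anything from the thread) of two stores holding the same external fact under different internal representations:

```python
from datetime import datetime, timezone

# Two hypothetical "databases" recording the same instant differently:
# one as an ISO 8601 string, the other as Unix epoch seconds.
record_a = {"created": "2004-03-12T01:53:00+00:00"}  # textual form
record_b = {"created": 1079056380}                   # numeric form

# Externally they denote the same moment in time...
t_a = datetime.fromisoformat(record_a["created"])
t_b = datetime.fromtimestamp(record_b["created"], tz=timezone.utc)
assert t_a == t_b

# ...yet internally they share nothing, much like two speakers whose
# private representations of "red" differ while their talk agrees.
```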

-hugh

Re:
by Anonymous on Fri 12th Mar 2004 01:53 UTC

Four words: SEARLE'S CHINESE ROOM EXPERIMENT. 'Nuff said about machines and so-called consciousness.

Re: Re:
by hugh jeego on Fri 12th Mar 2004 01:55 UTC

SEARLE'S CHINESE ROOM EXPERIMENT

link?

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by hugh jeego on Fri 12th Mar 2004 02:03 UTC

I read about it.
It is based on the assumption that someone who doesn't know Chinese can use a script and be indistinguishable from a native Chinese speaker when answering questions.

I don't accept this assumption. What happens if the questioner asks something that isn't covered by the script???

-hugh

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by Anonymous on Fri 12th Mar 2004 02:17 UTC

Google it if you don't know.

The point of the experiment is not that it can be fooled by being asked a question it doesn't know. Assume it can answer anything.

The point is that while the translator in the room (i.e. a computer) SEEMS conscious, he is not really conscious, because the very nature of a machine prohibits true consciousness (no qualia, etc.).

So while the translator can answer anything, he never understands anything he's answering. He's just manipulating symbols, like a computer.

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by Anonymous on Fri 12th Mar 2004 02:25 UTC

So in conclusion I agree with Searle: I think that with enough technology we could build computers virtually indistinguishable from people, but they would never be truly conscious, nor feel emotion or anything else like we do, because they would just be blindly manipulating symbols. Never understanding as we do. Inside, as cold and void as space.

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by Anonymous on Fri 12th Mar 2004 02:33 UTC

There are many refutations of Searle's Chinese Room Thought Experiment. Here is one:

http://12.108.175.91/ebookweb/discuss/msgReader$877

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by hugh jeego on Fri 12th Mar 2004 02:42 UTC

"Assume it can answer anything"

No. That is not an assumption I can accept. It would have to understand the language to answer anything, which is exactly what this experiment tries to refute.

-hugh

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by Anonymous on Fri 12th Mar 2004 02:57 UTC

Alright, if it can't answer everything because it would need to truly understand the language, then I agree there are complications, but not necessarily ones that count against the argument:

1. If that's the case, machines will never be able to answer everything, since they can't truly understand, so Searle is still right.

2. If machines really can answer everything, then they must be able to understand, so Searle is wrong.

So no conclusion can be made from that alone.

I'm sure there are a lot of arguments against the Chinese Room, and I'm no philosopher, but my 2 cents is that I agree with Searle's line of thought, based on what I know about cognitive science and computers. Most of the field is still relatively young, and no one can even agree on what consciousness IS to begin with, much less whether machines are really capable of it.

Nevertheless, the idea of machines FEELING emotions and EXPERIENCING beauty or pain or existential anguish or despair or love and so on seems silly, unless you start from the premise that humans are glorified computers, which I do not believe (for one thing, it seems that we are not deterministic even if the rest of the universe were Laplacian).

But nothing is conclusive on one side or the other. I've got my bet placed though. ;)

determinism
by hugh jeego on Fri 12th Mar 2004 03:09 UTC

I tend to think we are deterministic. I have no choice but to think that.

-hugh

Re: determinism
by Anonymous on Fri 12th Mar 2004 03:28 UTC

hehe nice one ;)

Re : Chris
by You Can Call me Al on Fri 12th Mar 2004 03:42 UTC

If you slam an article without reading it first (your own admission), I think it's reasonable to call that trolling, don't you?

get an education
by aliensoldier on Fri 12th Mar 2004 04:58 UTC

Hugh, I have an education already.

Please re-read my post and you will notice that the "myth" label is applied only to problems that can already be solved by a human (I separated my post into categories; that particular part was a subset of the intended category).

As for the grammar, English is not my native language. Instead of judging so quickly, assume this when reading a post and try to reformulate it in your head.

Self-awareness
by Someone on Fri 12th Mar 2004 05:11 UTC

Self-awareness is important; however, I think self-awareness and consciousness are often taken out of context in what they mean for a living creature. There is every reason to believe that most mammals are self-aware and have enough consciousness to have a personality.

What makes humans rather special is imagination: the ability to put ourselves into another situation and think it through. However, we aren't particularly advanced at it. It isn't until about age 3 that we develop the ability to empathize with others and consider their point of view. Before then, it appears we can't even comprehend the concept properly. On the evolutionary scale it has been a hugely successful advance.

Most likely any artificial intelligence will have a lot less grey area than humans do. Nearly all people display irrational behavior, such as phobias, delusions, addictions, insecurities, and of course numerous mental illnesses. In a person these are functional components that have been expanded through evolution to be more than just survival traits (similar to how a patch of hair has evolved to become a rather distinct and useful characteristic of the rhino).

The function of the brain is slowly being determined. How its various parts fit together to produce what we feel and see and think is becoming apparent. I doubt we will achieve the same balance that humanity has, or even the same complexity. I do not see the economic gain in realistic artificial people. Intelligent machines will probably only ever have focused personalities that are more absolute and reliable than our own. However, self-awareness and reliable, unencumbered imaginations are enormously powerful and useful traits.

Of more interest is what would happen if we were to start tinkering with our own makeup to take away or restrict the effects of overwhelming emotion and irrational thought. Experiments have been done, but results thus far are unsatisfactory. Perhaps we will make ourselves into the artificial intelligences of the future.

Re: get an education
by hugh jeego on Fri 12th Mar 2004 05:36 UTC

From your post:
"Non-computable problems: That is a myth,"

I read that as meaning that the idea of non-computable problems is a myth. What did you mean to say? Surely you realize that there are problems that computers cannot solve (like the halting problem).

-Hugh

I think the context was showing it, sorry
by aliensoldier on Fri 12th Mar 2004 05:43 UTC

"That is a myth, each stuff that can be made by a human can be done by a machine", so i refer to stuff a human CAN do.

That said i don't think problems can't be solved at all. Perhaps for now, but they will eventually be solved.

Of course some problem are not solvable, like if i want chocolate cake as big as our planet.

Machines creating machines
by robUx4 on Fri 12th Mar 2004 10:38 UTC

"As for AI i think we have bucklies chance of getting this right (well not for a looong time) when we can`t even get a 100 million line program right.
Imagine an AI entity with severe mental problems that can`t be explained!"

The thing is that AI is a program, a sum of informations/instruction. And what AI do best is handle informations. So one day (probably not so far) an AI will be able to create another AI. Once this point is reached, we will have less and less control on the existing AI. For bad, or for good...
I think that's the way AI will evolve and will get free of our slow progress.

emotionless AI?
by anon on Fri 12th Mar 2004 10:57 UTC

Personally, looking at the comments in the article about how AI emotion is hard and how AI probably won't be emotional, I think the reverse is what will happen.

Emotion will come first, then human-level conversation abilities. Emotion isn't really that hard. I once replicated a little experiment (I can't remember the original author; I'll look it up later) in which you simulate a group of mobile entities that have two simple drives: to eat and to reproduce. They were attracted toward things that satisfied their needs, based on the internal state of how much they needed the thing and a radial falloff with distance. The results may not have been pleasant, but the entities did show emotion, primarily lust and fear. A more complex simulation could lead to more complex emotional behaviour.
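For the curious, here is a minimal sketch of that kind of drive-based simulation. Since the original experiment isn't cited, all names, constants, and the exact update rule are my own guesses:

```python
import math
import random

class Agent:
    """A mobile entity with a single 'hunger' drive."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.hunger = random.random()          # internal need, 0..1

    def step(self, foods):
        # Attraction to each food scales with the internal drive and
        # falls off radially with distance, as described above.
        vx = vy = 0.0
        for fx, fy in foods:
            dx, dy = fx - self.x, fy - self.y
            dist = math.hypot(dx, dy) + 1e-6
            pull = self.hunger / dist**2       # drive * radial falloff
            vx += pull * dx / dist
            vy += pull * dy / dist
        self.x += vx
        self.y += vy
        self.hunger = min(1.0, self.hunger + 0.01)   # need grows over time

foods = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(5)]
agents = [Agent(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10)]
for _ in range(200):
    for agent in agents:
        agent.step(foods)
```

A repulsive term of the same form would give something that looks like fear; a second drive would give the mating behaviour.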

Likewise, in the biological world, if you look at animals with less intelligence than us, they are emotional and can communicate this emotion, and it is this that makes them seem to have self-awareness (even if at a lower level than in humans). If you have a look at your pet, if you have one, you will most likely see that it has some level of self-awareness, and therefore consciousness; this is without any high-level conversation, but simply because it displays emotion.

Professor Penrose's theory, that AI will not be possible without taking into account the quantum effects inside the brain, is I believe probably wrong. It is possible to completely rearrange the quantum structure of the brain with no effect on the consciousness of the individual; this is done daily in routine MRI scans. Therefore the quantum effects cannot be a vital part of consciousness. The randomness they give may be useful, but this can be simulated to a high degree.

Re: SEARLE'S CHINESE ROOM EXPERIMENT
by anon on Fri 12th Mar 2004 11:42 UTC

The problem with Searle's Chinese Room experiment is that it assumes a priori that human consciousness is more than a mechanical activity, and that therefore even if something mimics it perfectly, this mimicry is not the same as having true consciousness.

This is the reverse of the assumption behind the Turing Test, on which AI is based: that there is no difference between perfect mimicry of consciousness and having it.

The two approaches are fundamentally irreconcilable, so either you go with Searle, and there is no possibility of true AI, just good mimics; or with Turing, in which case AI is possible, just very hard based on the evidence so far.

Personally I go with Turing, as you can never know whether someone you are talking to is truly conscious in the Searle sense (you cannot really know that they exist at all, per Cartesian doubt, but it's best to assume they do), only that they behave as if they are. Therefore if I assume that they are conscious, then I must assume that a machine that can behave like them is also conscious.

Another way of looking at this
by Chuck Bermingham on Fri 12th Mar 2004 16:17 UTC

If you take the view that there is no such thing as un-consciousness (that the universe itself is conscious), then the whole point is moot. I know that many people from India take this view; in fact, I have a book at home called The Conscious Universe, by an Indian physicist. I will pass the reference along tomorrow.

Anyway, taking that view really changes the angle on conscious vs. unconscious machines; i.e., machines are something that the Universe is doing, just as we are something that it's doing. Since the two activities are highly related (we are the "parents" of these machines in that they exist through "the human agency"), I tend to believe that whatever relationship may evolve between humans and machines must be an outgrowth of that relationship.

Just as one example, consider the question of "artificial intelligence." While AI researchers are trying to develop "human-like cognitive behavior" in machines, the machines are sedately managing many of the activities that humans formerly handled: arithmetic, certain kinds of analysis, data searches, monetary transactions, pictures (as opposed to realistic paintings and photographs), even rudimentary musical compositions and entertainment. Are they "conscious" of all this? Do they experience the same disdain from all that hard labor that humans have in the past? Remember that people viewed suffering from human endeavors of these kinds as "paying dues," because the ultimate benefits outweighed the costs, even when we had to do them.

If they are conscious of it, how would we know? We can't even talk to whales, or other primates, well enough to ask *them* how they feel.

One thing I will say: if one of us could come back here 10,000 years from now, and if humans and machines were still here, I doubt we could even comprehend the relationships between them, much less predict them.

We actually have conscious-computer research going on. It's out there and usable. The problem is that people don't like viruses and worms very much, so they stifle any development of them and try to kill them instead.

If people want conscious computers, they'll have to learn to think of the computer as not entirely theirs to control. Otherwise the machines will never become conscious.

Just my 2 cents.

Only looking at 1/3 of the picture
by Jared White on Fri 12th Mar 2004 17:08 UTC

The problem with AI is that the scientists are only looking at 1/3 of the picture here. They see the human brain, made up of matter and energy, and assume its functions can be reproduced with other matter and energy. However, this is an assumption that anyone who believes in a spiritual dimension will regard as false.

As a Christian, I believe that humans have a spiritual dimension and, as such, that part of our "intelligence" or "self-awareness" exists outside the boundaries of space-time. (Obviously this idea isn't unique to Christianity -- most religions assume the existence of a spiritual dimension.) In this case, our brains are responsible for only part of our thought processes. In fact, my personal belief is that the brain partially acts as an inter-dimensional interface between the physical and the spiritual, which would explain why people's thoughts and actions can be influenced by chemicals and physical defects as well as by the spirit.

Some of you may scoff at these concepts saying they can't be proven, but, on the other hand, your thinking is also clouded by your personal faith in atheism. Believing there isn't a God requires as much faith, perhaps more in a way, than believing there is a God. If there is a God, and there is a spiritual dimension, then all "scientific" concepts of AI go out the window because how do you program the human soul?

Jared

@Jared
by jizzles on Fri 12th Mar 2004 18:41 UTC

"Some of you may scoff at these concepts saying they can't be proven, but, on the other hand, your thinking is also clouded by your personal faith in atheism. Believing there isn't a God requires as much faith, perhaps more in a way, than believing there is a God. If there is a God, and there is a spiritual dimension, then all "scientific" concepts of AI go out the window because how do you program the human soul?"

Well, I am strongly agnostic. I believe that it is essentially impossible for humans to know whether God exists or not. I believe all major religions are intricate fabrications of the mind, carried forward and elaborated over centuries. I think atheism is the pure denial of religion, not a serious explanation of the universe.

Science is based on developing theories to explain evidence. In all belief systems, one can find individuals that refuse to adjust their belief systems based on strong contrary evidence, or refuse to seek alternative evidence.

What direct evidence of the soul do we have? Why should we believe in its existence?

@hugh
by jizzles on Fri 12th Mar 2004 18:45 UTC

The Halting Problem is undecidable. There are many other undecidable problems; in fact, there are infinitely many. Even worse, they form an infinite hierarchy: even if some magical construction could solve one undecidable problem and all the problems that reduce to it, there would always be infinitely many more problems that are still undecidable, even with such an oracle.
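For readers who haven't seen why the Halting Problem is undecidable, here is the classic diagonalization argument as a Python sketch (the function names are illustrative only; no such `halts` can actually be written):

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("Turing proved no such total function exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running a
    # program on its own source code.
    if halts(program, program):
        while True:          # loop forever if the oracle says "halts"
            pass
    return "done"            # halt if the oracle says "loops forever"

# Now consider paradox(paradox). If halts(paradox, paradox) returned
# True, paradox(paradox) would loop forever; if it returned False,
# paradox(paradox) would halt. Either answer is wrong, so no total
# halts() can exist.
```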

The universe is a deep thing, and we have discovered that even the most powerful mathematical tools that can ever be created will never form a closed system to explain it. There will always be something that is true but cannot be proven.

Fairly depressing to think about.

@Jared
by GodWho on Sat 13th Mar 2004 04:58 UTC

"Some of you may scoff at these concepts saying they can't be proven, but, on the other hand, your thinking is also clouded by your personal faith in atheism. Believing there isn't a God requires as much faith, perhaps more in a way, than believing there is a God."

Using this kind of logic, I can also say "believing there isn't NONSENSE requires as much faith, perhaps more in a way, than believing there is NONSENSE."
Believe whatever you want to believe. What I believe or don't believe is NOT your business.

Re A bad feeling.
by clr on Sat 13th Mar 2004 08:08 UTC

This is a very interesting thread, but why do I get the impression that philosophy is becoming a personality disorder? It seems that everyone mainly has opinions about why everyone else is wrong, and implies that nobody should venture ideas for fear of being vilified. It seems the primary goal of philosophy is to discredit all others' ideas with semantics and sophistry. And then someone steps in with a mild voice selling divine intervention to calm our troubled souls.

Sorry, but I just enjoy the possibility that maybe somewhere, in the foreboding quagmire of communication, some people manage to inspire my thoughts to new heights (I can manage a few inches already). I'm no scientist, and no philosopher, but I've worked for twenty years with folks who have profound issues with their lives. Such people offer great insights, perhaps. Who cares if people are right or wrong? I can't know that. But I do value all the sparks that fly from people's enthusiasm, and I resent the belittling tone of some of the posts above.

It seems to me that if computers are to emulate or reproduce (forgive the lack of precision in my language; I haven't time for it) human consciousness, then they would need to provide a series of consciousnesses linked together, so they could truly develop the awareness that we assume is related to such a phenomenon. As Lange speculated, the self may well be divided, and in that division the mutual awareness of different "selves" fosters what is commonly known as self-awareness.

And then maybe I'm wrong. If you think so then maybe a simple "I don't agree" would suffice.

Mmm (dons ear-defenders and waits)

:-()

Here's a snippet from the website
http://www.wired.com/news/infostructure/0,1377,56459,00.html
:
A human brain's probable processing power is around 100 teraflops, roughly 100 trillion calculations per second, according to Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University. This is based on factoring the capability of the brain's 100 billion neurons, each with over 1,000 connections to other neurons, with each connection capable of performing about 200 calculations per second.

According to the list of the top 500 supercomputers in the world, circa 2004 ( http://www.top500.org/list/2002/11/# ),
the current reigning champ (fastest computer in the world) is the NEC Earth Simulator, with a speed of 35.86 teraflops.

Anyone else worried about the fact that the fastest computer on earth is now performing calculations at about one-third the estimated rate of a human brain?

Admittedly, the NEC Earth Simulator is almost five times faster than the next fastest computer on the list. Also, the kind of computer currently packed into any AI or robotics project is orders of magnitude slower than anything on this top 500 list.
More comfortingly, the average "very fast" PC of today (Pentium 4 3.2 GHz, Athlon XP 3200+, etc.) seems to run somewhere in the 5 to 10 gigaflops range. Since a gigaflop is a thousandth of a teraflop, that means today's consumer PCs are about ten thousand times slower than the estimated 100 teraflops of the human brain. No wonder AI hasn't taken off in a big way yet: our computers are barely brighter than a housefly. And already they can recognize faces, recognize voices, create music, and otherwise do things that show the beginnings of "intelligence".

If Moore's law continues to hold, PCs will have ten thousand times today's processing speeds in about 13 speed doublings, or somewhere around 20 to 26 years. In other words, if the semiconductor industry continues to deliver as it has since 1965, we can expect PCs that can calculate as fast as a human brain before 2030.
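The back-of-the-envelope arithmetic, for anyone who wants to check it (the inputs are the estimates quoted above, not measurements):

```python
import math

brain_flops = 100e12   # Moravec's ~100-teraflop estimate for the brain
pc_flops = 10e9        # a fast consumer PC of 2004, ~10 gigaflops

gap = brain_flops / pc_flops        # -> 10,000x shortfall
doublings = math.log2(gap)          # -> ~13.3 doublings needed

# Assuming one doubling every 18 to 24 months:
years = (doublings * 1.5, doublings * 2.0)
print(f"{gap:,.0f}x gap, {doublings:.1f} doublings, "
      f"{years[0]:.0f}-{years[1]:.0f} years from 2004")
# Roughly 20-27 years, i.e. parity somewhere around 2024-2031.
```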

If today's PCs can do math faster than I can, and can recognize voices and faces about as well as, say, a sparrow, what will a PC 10,000 times faster be able to do?

I turned 40 today, so there's a good chance I'll live to 2030. I wait with excitement and trepidation in equal parts to see what may come.

-Gnobuddy

God?
by robUx4 on Mon 15th Mar 2004 09:29 UTC

God has been mentioned above, and of course the concept and/or the existence is highly controversial. But if God (or gods) is responsible for a part of our actions (hidden in the randomness of life?), then why wouldn't the same be true for machines?

We might have a consciousness or not. But maybe machines don't need one to live and evolve and grow. They might also never need to interact with us. But I doubt that part, because we have a lot of history/information that they could learn from. So at some point they will need to understand us. But maybe they won't care if we don't understand them.