A detailed simulation of a small region of a brain built molecule by molecule has been constructed and has recreated experimental results from real brains. The “Blue Brain” has been put in a virtual body, and observing it gives the first indications of the molecular and neural basis of thought and memory. Scaling the simulation to the human brain is only a matter of money, says the project’s head. The work was presented at the European Future Technologies meeting in Prague.
Did anyone tell the scarecrow about this yet?
From what I could gather, the scarecrow IS behind this research.
-Ad
…scaling it to a human brain.
It would be rather cruel to tell a simulated human being, one with emotions and a personality, that it is only simulated. Or how would you feel about it?
By the way, did you read the Otherland Saga? There you might get some idea of what simulated humans could become.
For those who did not read the books: They could become YOU.
Hmm, if you could exactly simulate a brain with a computer, would it develop a consciousness? That’s the part that gets me… if it did, you’d have to treat it as a human being, right?
Why? Large numbers of human beings in this world are not treated like human beings. Unless the computer is powerful enough to defend its own rights, I wouldn’t expect it to do any better than real human beings who lack the power to defend their own rights.
Sad but true.
Consciousness?? What we call ‘emotion’, ‘consciousness’, and so on are just names people have given to the ways people think. In simple terms, they are all interactions between neurons and other cells, mediated by chemicals. When we say ‘consciousness’, it refers to a way of behaving/thinking. It doesn’t mean there is actually a thing called ‘consciousness’ in our brains. So why should it develop ‘consciousness’?
** WARNING, SPIRITUALITY AHEAD **
That raises the interesting question of whether one’s self lies in the conscious mind, or whether there is something transcendent and unmeasurable (not saying there is a deity or anything like that).
But those emotions and personality would be real. The only thing that would be simulated are the molecules.
It would be rather cruel to tell a simulated human being, one with emotions and a personality, that it is only simulated.
But those emotions and personality would be real. The only thing that would be simulated are the molecules.
And that would be the crux of the matter. Humans have a sense of self and of independence. That sense of independence comes from our physical form. As long as we can get food and water in moderate-temperature surroundings, our basic needs for existence are met and aren’t dependent on further outside influence.
A simulated entity doesn’t have an independent body. What that being has for a physical representation is an electro-mechanical vessel controlled by whomever created the simulation. The simulation (most likely) has no control over the power supply to, or the location of the device that houses its consciousness.
All our laws are based on the assumption that they govern the interactions of born, biological beings. Since the simulation was created rather than born, and isn’t biological, there are no laws against terminating it at will. If the simulation is self-aware and knows there is no legal framework preventing its random termination, I reckon that simulation would get pretty miserable knowing its existence depends on the interest its creators have in keeping it around.
I think it would be pretty cruel to just traumatize a sentient being like that.
Ghost in the Shell (1 and 2) is probably the best example/movie about this situation. It deals with what makes humans human in a near future when cyborgs start to appear (pacemakers actually make people cyborgs, even contact lenses do, but the movie deals with less natural cases), a massive spying botnet starting to gain consciousness, hacker ethics, the dangers of interconnected brains, and other incoming threats like that.
Really good movie for sci-fi, film noir, ethics and philosophy fans.
9/10
(About the title: it’s not a Ghostbusters remake; ghost = soul and shell = body.)
I would rather recommend Time of Eve:
http://www.crunchyroll.com/library/Time_of_Eve
Ghost in the Shell rather focuses on the moral complexity of transferring souls (or human minds) into machines and prosthetic bodies.
-Ad
Edited 2009-04-23 07:43 UTC
AI development is an interesting topic, and a real tough nut to crack. As such, simulated brains are fascinating. However, when/if scientists eventually develop virtual brains with enough complexity to allow independent, unaided thought processes and emotions, they will have created actual human-made virtual life.
The problem is, are you killing someone if you turn off such a virtual personality? Where do you draw the line between a life and non-life? If you have no biological body but can feel emotions, can think for yourself, can even discuss with others does that still mean that others should be allowed to decide whether you live or die, on a whim, without consequences?
I personally lean towards the idea that once the virtual personality has reached a certain point it should gain some rights and not be allowed to be dismissed without consequences.
Oh people and their questionable ethics… First they create something and then start to question whether it’s okay to kill it. Well, if you have a problem with that, don’t create it in the first place, all right?
All of that is the reason why I’m against AI. There are things creepier than them coming to attack us…
but what if we can fool the creepier things into killing the virtual us instead of the real ones?
In other words, your logic is:
I’ve created it, I can kill it?
Stop for a second and think of where that line of logic might lead you, or how it could be misused… No, we don’t want to go there.
Why don’t we want to go there? Surely it’s better to just march ahead without thinking of the consequences! Let’s create stuff and let others deal with it!
Easy. Create a virtually really painful disease and the thing will beg you to unplug it.
Just suspend the software until the technology matures enough to create a body for it. Technically, the “being” will remain alive.
We can stop and resume software executing inside a conventional computing environment, unlike a human brain or even biological functions in general (yet).
But shutting down a self-aware virtual personality, with the loss of all its information, should be a crime.
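For what it’s worth, the “suspend” part is technically trivial for ordinary software: if the whole state lives in data structures, you can serialize it and resume later. A toy sketch (everything here is made up; a real brain simulator’s state would be vastly larger, but the principle is the same):

```python
import pickle

# A toy "simulation" whose entire state is held in plain attributes.
# Suspending it means serializing that state; resuming means
# deserializing and continuing the loop, possibly years later.

class ToySimulation:
    def __init__(self):
        self.tick = 0
        self.state = {"potential": 0.0}

    def step(self):
        self.tick += 1
        self.state["potential"] += 0.1

sim = ToySimulation()
for _ in range(5):
    sim.step()

snapshot = pickle.dumps(sim)      # "suspend": capture the full state
resumed = pickle.loads(snapshot)  # "resume": restore and continue
resumed.step()

print(resumed.tick)  # → 6, picking up exactly where the original left off
```

Nothing is lost across the suspend/resume boundary, which is exactly why pausing a simulated being is so different from shutting it down and discarding the snapshot.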
All this has happened before, and all this will happen again.
That’s not the point. CREEPY THOUGHTS — that’s the point!!
Nevertheless; is it really *that* easy to plot out all the …lots of molecules with just mon-ay?
On the internet, you never know who’s on the other side of the keyboard… It could be a virtual brain using their virtual body to type on a virtual keyboard.
So in other words it will now be a virtual brain pretending to be a 50-year-old guy pretending to be an 18-year-old girl? Great, that’s all I needed now.
…when I see this:
I just see someone asking for a very large research grant.
As long as Asimov’s laws can be built into them I have no problem. But I would add two more laws:
You shall not create another robot in any way, nor shall you take control of any form of robot creation,
AND you may repair yourself, but you shall never, ever modify yourself in any way.
This way we’re in control of robotic life creation and its evolution.
They are not making an AI, although a better understanding of how the brain works may aid (inspire?) AI research. They are developing cellular- and molecular-level simulations of sections of a mammal brain in order to aid medical experiments related to the brain. With a working simulation of a brain, they (1) know that they have gotten enough about the brain right to make a working simulation, which is saying a lot, (2) can run many simulated experiments and confirm the interesting ones with real brain tissue, saving a lot of lab time (assuming the processing time is actually faster/cheaper than lab time, which, with Moore’s law, it probably will be at some point in the future if not already), and (3) can get detailed imaging of brain tissue in action at a level that may be impossible in real life.
I once heard an interesting argument about intelligent AI’s and evil:
Intelligence is understanding things. Which means empathy. If you’re smarter, you’re more empathic. A real AI will thus be very empathic and have a higher moral standard than we humans.
All these questions you people have: Will it have feelings? Will it have a consciousness?
The big question is: How could you tell? I don’t think we can answer that!
I read an article just a few weeks back that described how, by 2020, the U.S. military wants autonomous drones to replace the current remote-controlled UAVs. Now, it did make a lot of sense; the reaction time from the controller in Arizona to the Predator flying in Afghanistan does create problems. But it does raise a disturbing question about having an AI determine who the enemy is and when it should lock in for a kill. I can’t remember which sci-fi author said this, but it was: “The science fiction of today is the science fact of tomorrow.”
Now think of Terminator and the autonomous drone killing in Afghanistan, or think of Blade Runner and this “Brain”.
I wonder if some scientist in the future, developing some AI, will repeat “I am become Death, the destroyer of worlds”. The point is that humans are still immature: we harness power yet are incapable of wielding it. We are, after all, talking about a species that even in the 21st century still clings to the beliefs of ignorance and fear.
The most popular theme of today’s generation in Scifi really is the idea of AI’s and such going beyond the control of humans; Matrix, Blade Runner, Battlestar Galactica, I Robot, Terminator, etc.. I just wonder if we are now at the dawn of something great….or something very bad
Actually, the US used computers to automatically determine attack targets in the Vietnam War. That went “well”, as we all know from the pictures we have of that war.
Yeah, but there’s still a difference between thinking up some stupid algorithm which decides something but is inherently deterministic, and creating an AI which is non-deterministic and not necessarily equipped with social behaviour, and letting that thing decide what to do with its power to kill.
If those machines were built not to kill, but to catch enemy targets (some sort of intelligent, walking prison cell), their power would not be as easy to misuse, even if they went on the rampage.
I call it bs, human thought cannot be duplicated, period.
Really? And why not? Is there some physical principle you’d like to cite here, or are you arguing that human thought is magical?
If human consciousness arises from understandable physical processes, then, at least in theory, those processes can be replicated (either physically or virtually). If it does not, then it is, pretty much by definition, supernatural. And I’m not comfortable referencing a belief in the supernatural to avoid having to confront a difficult philosophical problem.
To emulate human thought they would also have to emulate human consciousness, and human consciousness is beyond rationality; we humans have spirituality. I’m sure they can emulate the rational part, but not consciousness or spirit. It’s just not possible. It can be programmed to behave in a certain way (the way the programmer decides), but not to have self-consciousness. No such thing.
First, they are not programming an AI, they are running a physical simulation of the cells which make up a small piece (and later larger pieces) of a mammal brain, which they believe that with large software and hardware upgrades they could scale up to physically simulating an entire human brain.
That said, it is obviously unknown how much such a simulation would act like a human as no one has tried it yet. From a purely materialist point of view, it should be indistinguishable from a real human. The people working on the simulation say (in the FAQ at http://bluebrain.epfl.ch/page18924.html#16 ):
“Will consciousness emerge?
We really do not know. If consciousness arises because of some critical mass of interactions, then it may be possible. But we really do not understand what consciousness actually is, so it is difficult to say.”
They are making no claims about being able to successfully simulate intelligence or consciousness. Even if they fail to do so, research in that direction would probably still give great insight into how the brain works and advance medical science.
A simulation does not work that way. The brain is simulated, not programmed. It is the simulated “universe” that is programmed, not the things in it. Those develop (so the theory goes) just as they would in the real universe. (Yes, it’s not a whole universe but only a tiny part.)
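To make that distinction concrete, here is a minimal sketch in the spirit of such simulations (not the Blue Brain code; the model is a textbook leaky integrate-and-fire neuron and every parameter value is made up). Notice that only the physical update rule is programmed; whether and when the neuron fires emerges from the simulated dynamics, not from any line of code that decides “fire now”:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Leaky integrate-and-fire neuron (toy units: drive folded into volts).

    Returns the membrane-voltage trace and the spike times.
    """
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Programmed rule: leak back toward rest, pushed by the input drive.
        v += dt / tau * (v_rest - v + i_in)
        if v >= v_thresh:          # the threshold crossing is emergent
            spikes.append(step * dt)
            v = v_reset            # reset after a spike
        voltages.append(v)
    return np.array(voltages), spikes

# One second of simulated time under a constant drive.
volts, spike_times = simulate_lif(np.full(1000, 0.02))
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```

The programmer wrote a leak equation and a threshold, nothing more; the regular spike train is a consequence of those rules, which is the sense in which the “universe” is programmed but the behaviour of the things inside it is not.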
After reading the comment threads, I had an incredible feeling of Deja Vu; that I had read this discussion before.
I then realized that the discussion eerily parallels Stanislaw Lem’s “The Seventh Sally or How Trurl’s Own Perfection Led to No Good”:
http://home.sandiego.edu/~baber/analytic/Lem1979.html
Here is an interesting video on training Neural Networks by introducing noise in the system and how this may lead to idea creation.
http://www.youtube.com/watch?v=N5wY5wm2ufI
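The gist, as I understand it (hedged sketch, not the method from the video: the tiny network and all its parameters are made up), is that injecting noise into a network’s hidden activations makes it produce varied outputs from the same input, which you can read as a crude model of “idea creation”:

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))   # input -> hidden weights (random toy values)
w2 = rng.normal(size=(8, 3))   # hidden -> output weights

def forward(x, noise_scale=0.0):
    """Forward pass with optional noise injected into the hidden layer."""
    hidden = np.tanh(x @ w1)
    # Noise injection: perturb the hidden representation.
    hidden += noise_scale * rng.normal(size=hidden.shape)
    return np.tanh(hidden @ w2)

x = np.ones(4)
clean = forward(x)                                        # deterministic answer
ideas = [forward(x, noise_scale=0.5) for _ in range(3)]   # varied "ideas"

print(np.allclose(clean, forward(x)))  # → True: without noise, same input, same output
```

With the noise turned off the network is a pure function; with it turned on, the same stimulus yields a spread of different responses to explore, which is the intuition behind using noise as a source of novelty.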
…There is hope for me. Soon I’ll be able to replace my brain with something that works!