“The most famous engineering brain models are “Neural Networks” and “Parallel Distributed Processing.” Unfortunately both have failed as engineering models and as brain models, because they make certain assumptions about what a brain should look like.” Read the article at TheRegister.
I once read that neurons can actually migrate from place to place in your brain and nobody even knows why.
“I once read that neurons can actually migrate from place to place in your brain and nobody even knows why.”
I once saw a video of it. It was a fetal mouse neuron, and it looked much like an amoeba. I (obviously) don’t know much about this sort of thing, but I believe that the neurons cease to be mobile after leaving the fetal stage. Sure new dendrites and whatnot will grow, but the whole amoeba-like thing goes away early.
If they want to understand the human brain, they should start by studying hamsters, wheels, cog-like gears, and spider webs. Those are the building blocks of the brain.
I can’t believe you think that; if it is true, someone knows why, I assure you. If you still think it is true, you should read this URL:
http://www.snopes.com/critters/wild/duckecho.htm
It explains the “no one knows why” thing pretty well.
“I once read that neurons can actually migrate from place to place in your brain and nobody even knows why.”
They move south for the winter?
Seriously, I think the theories developed at the Singularity Institute ( http://www.singinst.org ) and at AGIRI ( www.agiri.org ) explain things better. As the Register article points out, engineers pursuing AI were too often looking for simple, elegant models the way physicists do, but the truth is that a) a bunch of stuff is in place courtesy of evolution, not learning (language, vision, and lots of other more subtle things are built in), and b) it is all a very complex system of interdependent parts, not something that can be explained away with, e.g., Lisp recursion.
I think I read it in an AI book some years ago at university; I can’t remember the book’s name.
But googling for it:
http://web.sfn.org/content/Publications/BrainBriefings/neuron.html
You have to be careful with AI. Assumptions should always err on the side of caution, and you have to think about that on so many deep levels.
I didn’t understand a whole lot of the article. But it’s a very interesting read.
I’m afraid that in the future man will make machines that are conscious on some level, so would that make them alive? To be truly smart, machines need to be able to make decisions on their own, which could be dangerous.
In my book they won’t be biologically “alive”, but any conscious being should have rights, which the machines will not have.
If scientists could reproduce the most powerful computers in the world (our brains), they could do some amazing things with that much computing power. Like, say, figure out how to travel faster than the speed of light, if that is possible. Or create wormholes. The possibilities are endless.
Some scientists have been able to grow neurons on top of silicon, and detect the electrical signals flowing through the cells. Hopefully, they’ll be able to tease some more secrets out.
“I’m afraid in the future man will make machines that are conscious on some level, so would that make them alive? ”
It is possible to be alive but not conscious – for example, a bacterium or a yeast cell.
To decide whether it is possible to be conscious but not alive, you would need very careful definitions of both words.
One problem with AI is that people are trying to jump straight to making a human being. It makes more sense to try to reverse engineer something simpler such as a Paramecium first.
J Z Young did some very successful research using the brains of octopuses rather than mammals, because these are somewhat simpler, so there is a bit more chance of understanding them. I suspect several as yet unthought of concepts will be needed before we can understand the brain and behaviour of a mouse.
(I mean concepts comparable to “information” or “energy”.)
On just what this article is talking about, you could do worse than to start by reading “The Man Who Mistook His Wife for a Hat” by Oliver Sacks (ever happen to you, Eugenia? ;^); then “The Working Brain” by A.R. Luria; then “Basic Neuroscience: Anatomy and Physiology” by Guyton; there are others which I don’t doubt you’ll find for yourselves.
Other than that, you can just waffle on and pretend you actually know something.
Now that I’ve got that out of my hair: I think the point the article made was necessary. At the moment our neuroscience instruments are rather blunt; we can only see the broad outline of the brain’s activities, and nothing of the finer details, so we have no idea what the basic algorithms of the brain are or how they work.
A little humility goes a long way in scientific research; it’s about time AI took it on.
To be precise: the basic activity of the brain involves excitation and inhibition of neurons via the synapses, where the algorithms are in effect stored, in the form of:
if x, fire chemicals; if anything else, dampen down activity.
The difficulty lies in the possibility that any given neuron may be simultaneously firing some synapses and inhibiting the firing of others – and the dendrites (connecting “wires”) of those neurons may spread over a wide region.
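For what it’s worth, that fire-or-dampen rule is easy to sketch in code. A toy model in Python (the weights and threshold below are made-up illustrations, not real synaptic parameters):

# Toy threshold neuron: excitatory synapses (positive weights) push the
# sum up, inhibitory synapses (negative weights) pull it down, and the
# cell only fires when the total crosses a threshold.
def neuron_step(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # fire, or stay quiet

weights = [0.8, 0.6, -1.0]              # two excitatory synapses, one inhibitory
print(neuron_step([1, 1, 0], weights))  # enough excitation -> fires (1)
print(neuron_step([1, 1, 1], weights))  # inhibition dampens it -> 0

The hard part, as noted above, is that billions of these run simultaneously, each one exciting some targets and inhibiting others over a wide region.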
And when one can only be certain of a few such cycles of activity at a time, it is impossible to work out just what is happening in the average, everyday world. Most neuroscience is the direct result of tragedy such as war or accident: Broca’s Area was discovered to have something to do with language because of the patient “Tan”, so named because that was the only word he could speak.
Imagine how far along we’d be in computer science if the major part of our knowledge came from dissecting crashed computers!
>>If scientists could reproduce the most powerful computers in the world (our brains), they could do some amazing things with that much computing power. Like, say, figure out how to travel faster than the speed of light, if that is possible. Or create wormholes. The possibilities are endless.<<
Without wanting to flame you: this is the usual fallacy. A computer can reproduce something that outwardly resembles a small part of what your average brain is doing, namely calculating numbers.
The brain isn’t understood at all; only very precisely defined tests have produced somewhat usable data, and under completely unrealistic premises, like asking the participants not to think anything! Yeah, right.
In this sense I am with Wesley Parish (IP: —.auckland.clix.net.nz): brain research is basically poking in the dark. Guess what that makes AI research?
Please explain the color blue. Thank you.
This Register article appeals to people because it is about AI, and everyone has an opinion on how AI is, should, or shouldn’t be. Fact is, I’m betting most of the people who thought or wrote something like “that’s the problem with AI, trying to make humans, blah blah blah” really don’t have a background in it.
Essentially, any AI research that is taken seriously has nothing to do with creating a “brain”. Once you’ve taken that first AI course you know it is damn near impossible to get a computer to play checkers well, much less anything useful.
Neural nets are used not to mimic brains, but to learn functions. It’s math, not brain. No one is trying to mimic the brain.
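That “it’s math, not brain” point is easy to make concrete. Here is a minimal Python sketch of a single artificial neuron learning the OR function with the classic perceptron rule (the learning rate and iteration count are arbitrary choices for illustration):

import random

# Learn the OR function from examples by nudging weights until the
# weighted sum classifies every row of the truth table correctly.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
bias, rate = random.uniform(-1, 1), 0.1

for _ in range(100):                               # a few passes over the data
    for x, target in data:
        out = 1 if w[0]*x[0] + w[1]*x[1] + bias > 0 else 0
        err = target - out                         # perceptron update rule
        w = [wi + rate * err * xi for wi, xi in zip(w, x)]
        bias += rate * err

print([(x, 1 if w[0]*x[0] + w[1]*x[1] + bias > 0 else 0) for x, _ in data])

Nothing in there mimics a brain; it is just curve fitting with a loop.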
Why don’t people see that a human, when born, is like a computer with a basic OS in it? It runs, it does some basic stuff, but not much else.
As it grows, it gains information from its surrounding environment. It adds things to its database and looks for relationships. At first it can do some of this on its own, but later it relies on parents and teachers.
After a while, the child grows up and can call on its vast database of information to take in other data and make decisions.
So how is this not possible to create with a computer? What is the obsession with reconstructing the physical build of the brain?
I believe quantum effects are a significant factor in the human mind’s thought processes.
“So how is this not possible to create with a computer? What is the obsession with reconstructing the physical build of the brain?”
Because even the most stupid human being can do a lot of things a computer won’t do for years to come.
Some numbers: a brain is around 30 000 000 000 neurons, and a CPU is somewhat below 1 000 000 000 transistors (a PIV is around 70 000 000, I think, but I’m not sure). Generally, one neuron can do a lot more than a transistor, as far as I know.
But more significant: we have fantastic input/output (though I really don’t like the comparison between a computer and a human being) and self-adaptation capabilities. One failure in a computer and it is almost dead; a human brain has very good capabilities to “repair itself”.
An eye, and the neural system behind it, can recognize forms, colors, etc. far better than any computer. Raw power means nothing: we have such little understanding of the human brain that it is somewhat pathetic. Reading a book is very easy for most people, even if they don’t know the writer; recognizing handwriting with a computer is primitive. And I don’t even speak about speech recognition, etc.
Another impressive example: everybody knows about computers that can beat any chess player. What people don’t know is that for other games which cannot be won by raw power, like Go (an ancient “chess-like” board game), computers are useless. The only way to beat a human brain is to use raw power, because we cannot “speak” to a computer in a high-level language. A lot of things that are trivial for us are horrible for computers.
The only thing a computer can do is count numbers. There is no such low-level abstraction for the human brain.
Well, if someone wants a small peek, look at http://www.ai-junkie.com/
The “smart sweepers” application there is a rather cool demonstration of neural networks (and genetic algorithms).
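For anyone who doesn’t want to download the demo, the core trick behind it (evolving a network’s weights with a genetic algorithm rather than training them) fits in a few lines of Python. A rough sketch, with a made-up fitness function standing in for the mine-sweeping score:

import random

INPUT = [0.5, -0.2, 0.8]

def fitness(weights):
    # Stand-in fitness: reward weight vectors whose weighted sum of a
    # fixed input lands near 1.0; the real demo scores mines swept.
    output = sum(w * x for w, x in zip(weights, INPUT))
    return -abs(output - 1.0)

def mutate(weights, amount=0.1):
    return [w + random.gauss(0, amount) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)   # best individuals first
    survivors = population[:10]                  # keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(fitness(population[0]))   # the best score found so far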
One block to research into the human thought process is the *myth* that it cannot be understood and that there is something supremely magical and sacred about the human brain/mind.
It is only human ego and arrogance that leads even the most adept academics to believe that this [the brain] is anything more than a group of evolutionary biochemical processes interacting with each other.
The only really big mistake in the field of AI is that the people doing it are approaching AI from a linear programming point of view and not an organic point of view.
Psychology had the same problem 100 years ago.
The only problem here is about HOW it evolves. We know some of the ways we learn things, from a psychological point of view; one way of seeing things (one complementary way, I think).
The problem here (and in most biological science) is that not everything is mathematical… Or seems not to be, but that’s another question.
Computers abstract things with numbers; we know that, so to abstract our brain we must learn how to abstract ourselves in numbers. Programming languages are a beginning in this direction.
About the comparison with other sciences: erm, biology has a lot more facts to learn than physics or chemistry. For example, theoretical chemistry is a dead end; chemistry doesn’t evolve anymore: physics has taken over manipulating atoms at another level, with concepts pretty well defined, without infinite variation.
Just for fun: wormholes, antimatter, and so on are, to my knowledge, no longer dreams but laboratory reality, at a small scale. Give me a lever and I will move the world…
I know AI will never truly occur. We will make things that seem intelligent but really won’t be. Human intelligence relies on a soul; otherwise you just have animals.
Can biology explain a lot about the brain? Yes. But it will never explain everything, because it is missing the whole reason it was built (yes, made).
Something missing from the biology perspective in all this research, as another poster already said, is quantum effects. I have a hunch they play a large role in many aspects.
AI will always be born brain dead.
“One block to research into the human thought process is the *myth* that it cannot be understood and that there is something supremely magical and sacred about the human brain/mind.”
It is not understood yet, and it is the hardest thing there is to understand. Why? Because this is the brain trying to understand itself, which is not what it is designed for.
Bootstrapping knowledge is very difficult and you easily get tangled up in metaphysical problems.
But see, you are comparing the biological components of a human; I am just concerned with AI capabilities. A computer might be less fault tolerant than a human, but you could at least make a computer as “smart” as a very mentally handicapped human if there is a linear relationship between transistors and neurons. Which I don’t believe there is; I think an SMP system could be raised to be as sentient as a 7 or 8 year old child.
The nature of how neurons interact with each other makes them more fault tolerant, but in everything you have a trade-off: more fault tolerance, less power. Which is why I think a fault-intolerant computer could be taught the ways of the world and have the capabilities of a child, minus emotion.
You have to take these things a step at a time. You can not go from nothing and begin work on a perfect human android. First we must learn how to build a human reasoning system, a “rational relational database” if you will.
Then we can work on making it more powerful, more fault tolerant, smaller, etc.
A super-brain computer: what an astonishingly bad idea.
Let’s use computers to solve real problems instead of playing UT2003.
We have an awfully long way to go before anything good or non-military comes out of AI.
“But see, you are comparing the biological components of a human; I am just concerned with AI capabilities. A computer might be less fault tolerant than a human, but you could at least make a computer as “smart” as a very mentally handicapped human if there is a linear relationship between transistors and neurons. Which I don’t believe there is; I think an SMP system could be raised to be as sentient as a 7 or 8 year old child.”
Computers still take days to do things that the brain manages in a split second. Computers won’t do what our brains can do for two reasons.
Speed: One neuron is probably more like a processor than like a single transistor. Neurons are slower than CPUs, but we have billions of neurons.
Structure: The design of the brain is completely different, which makes it much better at certain tasks than a computer is.
If anything is going to be as “smart” as a human being, or even my cat, it’s probably neural networks. Perhaps neural networks are not as much like our brains as some connectionists would like to believe, but they are certainly good at many of the same tasks (eg: pattern recognition/learning) as a computer. I’m not sure if neural networks have ever been built outside of a computer simulation, though.
I wrote:
“Perhaps neural networks are not as much like our brains as some connectionists would like to believe, but they are certainly good at many of the same tasks (eg: pattern recognition/learning) as a computer.”
Oops, I meant to say that neural networks are good at many of the same things as brains, of course.
Debman wrote:
“first we must learn how to build a human reasoning system, a “rational relational database” if you will.”
I believe systems like this already exist; they are called “Expert Systems”. They are already being used as diagnostic tools for doctors. It’s one hell of a job to fill one of these systems with all the knowledge a human has, though.
I remember reading something about a group of researchers who have already been working for decades on filling a computer with common knowledge. Unfortunately, I forgot the name of the project and website.
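A toy illustration of how such a rule-based system works (the rules and symptoms below are invented for the example; real medical expert systems hold thousands of hand-entered rules, which is exactly the “one hell of a job” part):

# Forward-chaining toy expert system: keep firing rules whose conditions
# are all present until no new conclusions can be derived.
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "stiff neck"}, "see a doctor urgently"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "stiff neck"}))
# includes both 'possible flu' and 'see a doctor urgently'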
Hmm, I think perhaps a different approach is called for here if they truly want a computer that mimics intelligence.
The human brain (depending on your beliefs, substitute evolution for the Creator designing the human mind LAST out of all the animals… perhaps they wanted to perfect the design :>) didn’t just appear out of thin air. It is the result of millions of years of evolution.
So why not take this into a computerised form? Instead of trying to design an intelligent program in one fell swoop, why not design a very, very simple intelligence, but give it the ability to create new (and varied) versions of itself? Given the correct evolutionary stimuli (namely that any child program which displays less intelligence than the parent is eliminated), I think genuine intelligence could be reached quite rapidly, since there could be a new “generation” of programs every few minutes.
No doubt someone is already doing it, it’s too simple an idea not to have been thought of. There’s a problem here though. Since the programs are writing their children it probably wouldn’t take too long for the programmers overseeing the work to be unable to understand exactly how the programs are displaying intelligence. Which leads us around in circles. The programs might be intelligent, but nobody will be smart enough to tell us how they are doing it :>.
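People do work on exactly this, under names like evolutionary strategies and genetic programming. A bare-bones Python sketch of the scheme described above (the “intelligence” score is a stand-in fitness function; real systems evolve programs or network weights, not a list of numbers):

import random

def intelligence(genome):
    # Stand-in score: how close the genome's values are to an arbitrary target.
    return -sum((g - 0.7) ** 2 for g in genome)

def make_child(parent):
    return [g + random.gauss(0, 0.05) for g in parent]   # small variations

parent = [random.uniform(0, 1) for _ in range(5)]
for generation in range(1000):
    child = make_child(parent)
    if intelligence(child) >= intelligence(parent):   # less intelligent children are eliminated
        parent = child

print(intelligence(parent))   # climbs toward 0, the best possible score

And the worry in the comment above is real: once the “genome” is an actual program, working out why the surviving version behaves intelligently can be harder than writing it by hand.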
“The problem here (and in most biological science) is that not everything is mathematical… Or seems not to be, but that’s another question.”
Biology is the product of chemistry and the interaction of organics in creating systems that are stable and naturally idle at a much slower rate of change than the chemical reactions they are based on (i.e., DNA is a chemical).
Chemistry is the result of physical phenomena that arise when atoms interact with each other (or not, as the case may be).
Since physics can be defined mathematically, chemistry is a product of physics, and biology in turn is a product of chemistry, it follows that biology can be defined by numerical systems.
Your comment about biology smacks of the same philosophical mysticism that I described. That is what I meant. People think that there is something *holy* about organisms as opposed to chemicals or marbles. There isn’t.
“I know AI will never truly occur. We will make things that seem intelligent but really won’t be. Human intelligence relies on a soul; otherwise you just have animals.”
Well, that’s a load of philosophical ego crap. Who gave human beings the corner on self-awareness? The *soul* is a completely false construct of tribal religion and doesn’t even exist. The human social ego once required this “soul-delusion” to validate the existence and development of social hierarchies. This was done purely for ancient economic and breeding reasons and has nothing to do with sentience at all. More about power and wealth distribution than anything.
” It is not understood yet, and it is the hardest thing there is to understand. Why? Because this is the brain trying to understand itself, which is not what it is designed for. ”
Horse pucks.
You’re placing a purely arbitrary, personal psychological barrier on research by placing some kind of *metaphysical* value on human thought processes. Can a mechanic figure out how a car works? Of course they can. Can a rat figure out which switch drops the food pellet? Of course it can. Can we figure out how the brain works? Of course we can.
At some point our machines will become self-sufficient, self-reproducing, and will probably exceed us in *intelligence*. The problem is that our egos won’t let us admit that we don’t live at the center of the intellectual universe. We’re really only large rodents that talk. Deal with it.
“The *soul* is a completely false construct of tribal religion and doesn’t even exist.”
An earlier poster asked us to “please explain the color blue.” Can you? To a person born blind? What I’m trying to say is, we’re all human, but we don’t always relate to each other about everything. It’s not my business to tell people they can’t have their own opinion. But that doesn’t mean that other people cannot have experiences of a spiritual nature that you do not.
“An earlier poster asked us to ‘please explain the color blue.’ Can you? to a person born blind?”
Blue has been explained by Helmholtz etc. Blue is a response to a stimulus.
Philosophy, on the other hand, is a business that propagates myths, and even more philosophers to cash in on them.
There is only the *now*. Enjoy it and forget about the color blue.