Linked by Thom Holwerda on Tue 23rd Nov 2010 22:14 UTC, submitted by ARUmar
In the News "If you've ever been interested in artificial intelligence, you've seen that promise broken countless times. Way back in the 1960s, the relatively recent invention of the transistor prompted breathless predictions that machines would outsmart their human handlers within 20 years. Now, 50 years later, it seems the best we can do is automated tech support, intoned with a preternatural calm that may or may not send callers into a murderous rage. To build a brain, you need to throw away the conceit of separate hardware and software because the brain doesn't work that way. In the brain it's all just wetware. If you really wanted to replicate a mammalian brain, software and hardware would need to be inextricable. We have no idea how to build such a system at the moment, but the memristor has allowed us to take a big step closer by approximating the biological form factor: hardware that can be both small and ultralow power."
Permalink for comment 451008
by oelewapperke on Wed 24th Nov 2010 17:06 UTC

There is no such thing as inseparably coupled hardware and software, and the human brain does have a clear distinction between the two.

There is the problem of irreducible complexity when you're talking about human minds, but that has simply been politicked out of existence; it's not considered proper to talk about it, since it's the same problem that prevents climate simulations from working (notice how the IPCC has now failed to predict El Niño for the third time, resulting in predictions wildly off the mark after only a few years. But don't worry, the predictions for 100 years ahead are still accurate).

You cannot create a model of a human mind that is, by any measure, significantly simpler than an actual human mind. This means that while you can make AIs that match or even exceed human intelligence, you just can't do it much more cheaply than in an actual human skull. And that implies quite a few things: such an AI will spend the first two years of its life like any other human baby: crying, eating, sleeping, crying, eating... You cannot shorten this time, as learning to be human requires actual interaction with real, live human beings. And it will take some 20 years of education to become a real, valuable, productive member of society.

Just as with climate science, the process cannot be shortened significantly, and all attempts to cheat will result in massive differences in the "end product"; any and all information about the future may affect the outcome.

That's called "chaos theory", and it means that anyone who cannot tell you next month's winning lottery numbers cannot make any useful prediction about the climate, or about what any human mind will do.

This limitation is independent of the amount of information available to the researchers, and independent of the accuracy of the scientific theories used (anything less than 100% accuracy will lead to the same problems), so it's a problem that is not just unsolved "for the moment" but eternally unsolvable by any method.
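To make the sensitivity argument concrete, here is a minimal sketch of what "chaos" means computationally. The logistic map is my illustrative choice, not something the comment itself mentions; it's a standard textbook example of a chaotic system in which two starting points that differ by one part in ten billion stay close for a while and then diverge completely.

```python
# Illustrative sketch (not from the original comment): the logistic map
# x -> r * x * (1 - x) with r = 4.0 is a classic chaotic system.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-10)  # perturbed by one part in 10 billion

# Early on the two trajectories are indistinguishable; after enough
# iterations the tiny initial error has been amplified to order one.
early_gap = abs(a[5] - b[5])
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

No amount of extra decimal places fixes this in kind: shrinking the initial error only delays the divergence by a handful of iterations, which is the sense in which imperfect measurement makes long-range prediction impossible rather than merely hard.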

Even that simple fact - no matter how solidly it stands upon mathematical theory - that science is thoroughly locked out of finding answers to quite a number of very relevant questions is controversial in the extreme. That psychology and climate prediction are among those unsolvable problems is, for obvious reasons, a straight ticket to "persona non grata" status in most "civilized" circles.

But as anyone can look up, early attempts to predict the climate resulted in massive failures. These failures are what led to the discovery of one of the major limits of science: chaos theory.

But we're massively politically invested in this being untrue. How do we get out?
