Apart from "super AI", sci-fi in the 20th century also envisioned colonies on the Moon, and flying cars / aircraft for "our" times: http://goo.gl/9TLhg (a Wiki Unicode URL, which tends to behave oddly). We could even build them - basically take a Harrier and remove the wings and canopy - but it would still be a horrible idea compared to the "boring" reality: http://commons.wikimedia.org/wiki/File:Ryanair_Boeing_737-800_appro... It also predicted that video calls would be the mode of distant communication, while in fact we largely went "back" to text.
OTOH it didn't really envision the ubiquity of computers, mobile phones, or digital capture and storage of images and audio.
Or consider the difference between Rosey the Robot and the Roomba.
Maybe works of popular fiction tend to be no better than background noise at predicting the future, at least as far as what's commonly depicted in them goes.
Besides, the idea of "human intelligence in robot form" is far older than it seems - the golem, for example, is one of the old tricks of myths and fairy tales. These are tools of storytelling, and sort of cargo cults overall - modern mythologies, really. In them we always wished for something silly to be true, often naively extrapolating "known" things or observed rates of progress (like those aircraft above, envisioned during a time of rapid advances in marine tech; or the "spaceplanes" of ~1940s sci-fi, during rapid advances in airplane tech - worse, possibly inspiring some later dead-end projects, large and expensive enough to suck funding away from more sensible paths)
A closer term for sci-fi would probably be tech fantasy ...after all, there's usually not much place for science in it (as in depicting an actual scientific process, or having a minimum of respect for the conclusions it has already reached about our world)
Overall, we sort of had this topic not long ago... http://www.osnews.com/comments/26004
PS. And perhaps our universe already shows us that "thinking machines" are at least unlikely, maybe even impractical. After all, something like this should have an insane evolutionary advantage, hence it would probably have shown up and taken over already - if not within our biosphere (obviously not, for now), then at least within the likely billions of other biospheres in the universe, spreading and massively transforming it, enough to be visible (possibly even reaching and ~consuming us by now? ;p )
The world already has thinking machines, or even replicators: they're called "civilisation" or "life" ...it's an open question whether much more efficient ones are feasible. Edited 2012-06-23 21:18 UTC
It will arrive when the first machine replies to its master by itself - "not now, I have a 'chipache'" - out of pure laziness.
I studied artificial intelligence at university, but neither I nor my teacher, nor my colleagues, nor any of the authors we studied had the slightest idea of how or when the first true artificial intelligence will be produced. Right now it's science fiction. Computer science can't predict when, or even if, artificial intelligence may become real.
It wouldn't surprise me if some smart guy produces a proof showing that true artificial intelligence is not possible in practice. I have the feeling that might be the case. I can't prove it, but right now nobody has any idea of a computer model of a "real" or "true" AI. More than that, there are actual proofs that most models and AI algorithms in research today won't materialize into a "real" AI.
And Turing isn't right. For him, if during a Q&A session someone can't tell whether an answer was given by a human or a machine, the machine may be considered "intelligent". Right now no machine has passed the Turing test, but even if some machine passes it in the future, that won't be an indication of its intelligence but of the passion and art of its programmers.
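The imitation game described above can be caricatured in a tiny harness - a hypothetical sketch, with `human_reply`, `machine_reply`, and `judge` being made-up stand-ins for real participants, not anything Turing specified:

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Run one round of a Turing-style test: the judge sees answers
    from an unknown participant and must guess human or machine."""
    # Secretly pick which participant answers this round.
    is_machine = random.choice([True, False])
    responder = machine_reply if is_machine else human_reply
    answers = [responder(q) for q in questions]
    verdict = judge(questions, answers)  # returns "machine" or "human"
    return verdict == ("machine" if is_machine else "human")

# Toy participants: the machine just repeats one canned phrase.
def machine_reply(q):
    return "That is an interesting question."

def human_reply(q):
    return "Hmm, let me think about " + q

# A naive judge that flags repetitive answers as machine-like.
def judge(questions, answers):
    return "machine" if len(set(answers)) == 1 else "human"

print(imitation_game(judge, human_reply, machine_reply,
                     ["What is love?", "Why is the sky blue?"]))
```

With participants this crude the judge always wins, which is exactly the objection above: a machine only "passes" once its programmers make its answers indistinguishable from a human's.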
Unless there is some sort of unknown magic juice that allows for bio-intelligence (humans), it should be mathematically provable that intelligence is the result of a chaotic system of massively parallel signal interconnects (that is what the human brain is).
What Penrose proposes is highly speculative at best - there's no evidence for his hypothesis, and even some serious evidence against it (starting with the totally different scales of neural processing and of whatever quantum effects are likely possible in the brain)
Besides, intelligence != consciousness; it's also not even really clear whether the latter (together with free will) actually exists in the first place.
And nobody really argued above that brains and present computers are alike - quite the contrary - but they most likely ultimately can be, by virtue of working in and exploiting the same physical reality.
BTW, in the last few decades tons of pseudoscience has invoked quantum mechanics to support ~"mystical" subjects.
If your statement is "Brains and Computers we have today are not the same" then I would agree.
If you are saying "Brains and all computing devices that humans can build will never be the same" then you are wrong.
A brain is simply a massively parallel mesh network of signaling nodes. At their most basic level, the nodes pass digital signals using chemical vectors. Unless you can show that it is impossible to construct such a computer outside the reproductive process of humans, your position is unsupportable.
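That "mesh network of signaling nodes" view can be sketched in a few lines - a toy caricature in the spirit of a McCulloch-Pitts threshold unit, not a neuroscience model, where each node fires when enough weighted input arrives:

```python
class Node:
    """A toy signaling node: outputs 1 when the weighted sum of
    incoming signals reaches its threshold, else 0."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.inputs = []  # list of (source Node, weight) pairs

    def connect(self, source, weight):
        self.inputs.append((source, weight))

    def fire(self, state):
        # `state` maps each node to its 0/1 output from the previous tick.
        total = sum(w * state[src] for src, w in self.inputs)
        return 1 if total >= self.threshold else 0

# Wire up a tiny network: two sensor nodes feeding one output node.
a, b, out = Node(0), Node(0), Node(2)
out.connect(a, 1)
out.connect(b, 1)

# With both sensors active, the output node fires (behaving as an AND gate).
print(out.fire({a: 1, b: 1, out: 0}))  # -> 1
print(out.fire({a: 1, b: 0, out: 0}))  # -> 0
```

The point of the sketch is only that "nodes passing digital signals" is trivially constructible in silicon; the open question is scale and wiring, not physical possibility.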
(quasiquote (While allanregistos cannot think of himself as conscious, though he can be programmed to act as one, it is stupidity to think that he is conscious.))
There, I fixed your statement. The only reason I have for believing that other humans are conscious is that they act conscious. Therefore, if a robot or computer acts conscious, I will assume that it is conscious.
You don't have free will or consciousness; you're only wired to stupidly think you do ...prove this wrong.
And/or: you were forced into this world without your consent, and the most reliable indicator of your life is when/where/to whom you were born and raised - oh, and if one would want to opt out ...well, that just happens to be a very bad thing under the schema which participated in wiring you, forbidden under the threat of eternal punishment.
(that IBM chip is just an early effort on a looong road of future efforts, nothing "done" about it)
I have the impression that it can happen within the next 1000 years, which would be extremely fast from an evolutionary standpoint.
Of course, all sorts of things could happen first - a global ecological catastrophe, say, or complete exhaustion of natural resources, which would duly send our descendants 1000 years backwards.
They have found that proteomics plays a huge role in how fast new evolutionary traits express themselves.
In a stable environment, the proteins responsible for maintaining a stable expression of genes are able to keep up with the system; under stress, however, this falls apart and all sorts of stuff happens. The shorter the reproductive cycle of an organism, the sooner these modifications to its gene expression can show, but it can happen in just a few generations under the right conditions.
Whatever its exact mechanisms (and not yet having a clear picture of them - or of prenatal development in general - doesn't change anything), homosexuality appears to be evolutionarily selected for, to a degree - for example, sisters of homosexual men are statistically more fertile (WRT homosexual women... well, let's be honest, they'd be more or less forced into childbearing anyway, such traits hardly selected either way - perhaps why women are supposedly a bit more "undetermined")
But you just prefer to make up "data" or "evidence" (in other nearby posts), wish them into existence out of thin air (or, rather, from your ~Christian mythology, it seems - which BTW at some point had a problem even with the moons of Jupiter - but this is not the kind of fiction under discussion)...
...while there appears to be nothing which would block "true AI" as far as our understanding of the universe is concerned (an understanding brought, BTW, by ~science, not mythologies - in fact, what you can be pretty damn sure of is that mythologies concede more and more; virtually everything was once explained by myths and divine intervention, and virtually everything they claimed turns out wrong over time, gradually chipped away; no reason to go back now)
At the very least, we should be able to do a "brute force" approach of running a full human brain in software ...and over time streamlining it, optimising, modifying - until, from some point on, it won't be human any more. It will be more.
What are you talking about? I keep mine in my garage, next to my flying car.
... if it is not equipped with a "moral code" or something that resembles a conscience. Sure, you have Asimov's Laws of Robotics, but I don't believe three rules are enough to accommodate the decision-making process of a thinking computer.
Being able to think and to organize in societies implies some sort of "moral code". Even relatively primitive animals have it.
It will likely be different from the human "moral code", but provided that machines have some interest in keeping us around (either for safety, as an effort to "preserve an environment", or simply as farm stock), that doesn't automatically mean our extinction. Just as we didn't kill off all species of animals.
In my mind, we already achieved artificial intelligence many years ago. Everything from everyday automatic sliding doors to computer fingerprint analysis to artificial aircraft pilots really falls in the realm of "Artificial Intelligence" - that is, intelligence of artificial origin.
Some technology may already be more intelligent than average humans, especially within speciality domains.
I think the reasons people are disappointed with AI today are threefold:
1. It's artificial.
This may seem dubious, but many people don't consider computers intelligent BECAUSE their intelligence was programmed by a human. They want to see intelligence from a self learning computer. And I think we're starting to see more progress on that as computers get more powerful.
2. It's virtual.
A computer game clearly can exhibit intelligence, but it's less realistic because it's on screen. Developers are accustomed to abstracting concepts, and I believe we, as developers, can appreciate abstracted intelligence more than a typical person can. If the exact same intelligence found in sophisticated AI could be projected into the real world, it suddenly feels less "artificial".
3. It's not conscious.
The role of consciousness as it relates to AI is poorly understood. The truth is we don't know if any AI can truly be conscious. Sure, it could act as though it were conscious - it may even have learned how to act conscious by learning to emulate humans on its own - but even then I'd have trouble overlooking the fact that it's just a bunch of sophisticated deterministic algorithms - it can't "feel" anything, can it?
On the other hand, if an alien creature came to earth and claimed to be conscious, most of us wouldn't even second guess that, but how would we know it wasn't lying? If it was sufficiently intelligent, it could easily fool any of us into believing it were conscious.
Perhaps this is what people are looking for with AI, an intelligence that can fool us into believing a computer is conscious.
How exactly would it be possible for an AI to prove its own consciousness? Sceptics like myself would always point to the code and say that it's *emulating* consciousness without *being* conscious.
Our morality comes from being a primate, a social animal, from the evolutionary advantages it gave (to a point...) - roots of it can be easily found at the very least (but not only) in chimpanzees for example.
Also, later, from evolution of societies - moderately stable (~"moral") large ones were more successful, able to absorb or annihilate others.
But that you would even think about such "experiment" shows what your morality is worth...
But why do you have such insanely high criteria for AI? Again, not even humans entirely measure up to all three of those* ...NVM intelligent(?) animals.
* Plus, the first criterion arbitrarily demands that an AI follow the same process of ~knowledge acquisition as humans - missing that the whole point of AI is inexpensive mass production and distribution of it.
WRT the 2nd - doesn't a mobile phone that automatically switches between situation-based states understand what you're doing? (within its area of expertise)
"I am simply saying what I would expect from an intelligent AI. I am not talking about other animals but I think birds act smarter than the smartest computer characters(and birds aren't that smart)."
I think you're underestimating how accurately computers can simulate things - even to the point where you couldn't differentiate between the real and artificial intelligences. But the problem is that the computer lacks a natural physical form, and that's a dead giveaway for the AI. Normal people aren't accustomed to abstracting intelligent actions from their physical actors, but once you get used to doing that, as we often do in CS, you'll realise that most AIs are actually within reach.
Unfortunately technology isn't at a state where we can conceal supercomputers and their energy source within a natural body. While that's surely a disappointment to enthusiasts, the opposite is theoretically possible: taking real animal brains and wiring them up to a virtual, albeit limited, environment. You could end up with real animals and AI animals interacting together and never suspecting that the other is different. We might even set up a scenario where a real animal has AI offspring, or vice versa.
"I would like to see a company that makes AI. Game companies could license the AI. You could tune the AI with parameters like: want to live, scared, passive, language,hungry etc."
Games like Globulation already do things like that. Not that I'd promote it as a prime example of good AI, but just saying... with the exception of "language", those things seem to be pretty basic.
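Parameter-tuned behavior of the kind the quoted comment asks for is commonly done with a simple utility scheme: each drive ("scared", "hungry", ...) scores the possible actions and the highest total wins. A minimal sketch, with made-up drive names, actions, and weights:

```python
def choose_action(drives):
    """Pick the action whose utility, weighted by the character's
    current drives, is highest. `drives` maps drive name -> 0..1."""
    # How strongly each drive favors each action (a hand-tuned table;
    # these numbers are illustrative, not from any real game).
    utility = {
        "flee":   {"scared": 1.0, "hungry": 0.0, "want_to_live": 0.8},
        "eat":    {"scared": 0.1, "hungry": 1.0, "want_to_live": 0.3},
        "wander": {"scared": 0.0, "hungry": 0.2, "want_to_live": 0.1},
    }
    def score(action):
        return sum(utility[action][d] * level for d, level in drives.items())
    return max(utility, key=score)

# A terrified, well-fed character runs away...
print(choose_action({"scared": 0.9, "hungry": 0.1, "want_to_live": 0.5}))
# ...while a calm, starving one goes looking for food.
print(choose_action({"scared": 0.1, "hungry": 0.9, "want_to_live": 0.5}))
```

Tuning the character then really is just adjusting the drive levels and the weight table, which is why games can ship one AI and get many "personalities" out of it.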
Language is rather different though, since it's highly correlated with one's environment. Every species has its own ways of communicating. Consider elephants using seismic communication, bees using physical gestures, whales using whalesong, birds chirping, etc. These things are unrecognisable to us, yet they are languages for those who use them. With proper training (programming) some animals can learn to understand human languages. Even a human being needs years of continual language input to be trained, and we've built scholarly institutions just for this purpose.
Why should we hold computers to a different standard?
Here is a group that is working on more intelligent game AI:
http://opencog.org/2010/10/opencog-based-game-characters-at-hong-ko...
"Actors in a movie don't come across as dumb."
Interesting, but the actors in a movie are just following a script, and to that extent I would argue they ARE dumb in this respect since following a script does NOT require intelligence.
You can nitpick and say they need to be able to read the scripts and interact with other actors in order to do their jobs (which requires some intelligence). However actors don't technically have to understand a script, so that's setting a very low bar for "intelligence" in my opinion. One which computers could probably achieve in the short term if it weren't for the virtual/physical barrier I spoke about earlier.
"Hey, I've got an idea. Let's create an artificial brain."
"Yeah, that sounds cool. Let's get started."
Scientists start playing around.
"Hmm... pray, tell me. How does the human brain work?"
When we don't fully understand how our brain and mind work, how are we to recreate them successfully?
Talking about artificial intelligence means that there supposedly is a non-artificial "intelligence".
Seeing the difficulties in defining "intelligence" and measuring it (IQ), the whole discussion gets to be very odd. Look at the IQ tests on the internet - they are good for a laugh only, mostly being very language-specific or very specific to people of a certain social status. Those designing IQ tests know that it is extremely difficult to measure what you cannot define clearly.
We will see "machines" that can "learn", I am sure, since we build more and more systems that adapt to the behaviour of the owner; but gaining consciousness will need a giant step in understanding what it is. And we, the human beings, have still not reached the point where we understand consciousness enough to simulate it.
Similar issues to the ones you mention with intelligence also apply to consciousness - recognizing it in the first place might be problematic (does the mirror test do it? Or maybe other tests, or perhaps observations hinting at some theory of mind in a few animals?)
Meanwhile, we have quite a poor grip on our own minds... (go through a list of cognitive biases; or consider that split-brain patients are virtually unchanged, basically just with some "glitches", while we very much believe in a monolithic me; or how modern neuroscience, equipped with tools like fMRI, casts some doubt on "free" will; or placebo, and how adamantly people can defend its results)
Perhaps we usually essentially believe in the strong consciousness of other people (and limit it to people) also because of how we like to perceive ourselves (and contrast it with "lesser" life forms) ...however, who knows how many philosophical zombies are around us each day ;p
But seriously, most of the time we are in a somewhat "mindless", automatism-driven state anyway (I suspect that's what being an animal largely feels like - just with rarer and smaller "awakenings", if any)
PS. And why does the ad in this OSNews story direct me to a local classified-ads website, using a banner which mentions that you can find kittens there?... (and depicting some - not sure if it'll work, but: http://pagead2.googlesyndication.com/simgad/1656137121673980848 )
They became tory politicians.
I watched Tron in the early '80s (I still love it - not the new one, however; what was that???).
There is a scene where Alan (the guy that writes Tron) mentions that computers will be thinking "soon".
It's now beyond the future that Marty McFly stepped into, and we still have no "thinking" machines.
I personally believe that like time travel, both technologies will never truly exist.
I believe we can mimic intelligence, but I don't believe computers will be able to grasp "thought" as we do - not now, not in another 30 years, not ever. Again, this is just based on observations over the past 30 years of living in the industry.
I think the problem is that we underestimate the brain somewhat. I think we are only now beginning to get an idea of what we can actually do.
"I personally believe that like time travel, both technologies will never truly exist."
Well there's a pretty big difference between the two. Technology for time travel cannot exist because the rules of nature as we understand them don't permit it. I don't think anybody would claim that physics rules out artificial intelligence in the same way.
"I believe we can mimic intelligence, but I don't believe computers will be able to grasp 'thought' as we do, not now, not in another 30 years, not ever."
I'd like you to define precisely what you consider to be "intelligent". It seems there's a strong tendency to shift goalposts in the field of AI.
Like zima already said, there's a risk of setting the bar so high as to rule out animals and humans. If we're to be objective, our litmus tests shouldn't focus on human proficiencies but instead be inclusive of any intelligent life in the universe.
Here's a challenge: come up with a satisfactory litmus test that animals and humans can pass but ultimately computers cannot.
While it is fairly easy to find theories (or, more precisely, interpretations, thought experiments, and applications of some established theories) which appear to permit "time travel" as understood in popular fiction*, it usually comes with strings attached, such as "having a very localised supply of energy greater than that produced by a large galaxy" or "assuming an object of infinite length rotating at nearly the speed of light"...
Overall, it's quite safe to assume that nothing will ever attain fiction-type time travel, simply because we would most likely have observed it by "now" (one of the more sensible moves would be, say, to send your ~civilisation "back" as early as you can, when the universe was more dense)
*because, really, we do it all the time, just within the confines of what this universe appears to be - the entirety of it can be seen as travelling at that one speed, just trading between the space and time aspects of the motion
The question is, do we even really WANT truly thinking and conscious machines?
Could we please lock down topics more often when someone doesn't like to lose an argument?
Ouch... I despise locks too, but could we please not get *this* topic locked down too! As you can see I'm quite enjoying it
Edit: go bugger some microsoft threads
Well, I can't really reply or bring it up in the locked one though
Hmm...and now comments are disappearing.
Seriously, users can't mod up and down in a topic they have commented in but this kind of immature nonsense is fine?
I assume you refer to the story with digital collages?
Now the whole story is just... gone. Yay for promotion of real art(tm)!
Humanity is still so stupid anyway. It was in the latter half of the 20th century that Turing was charged with being homosexual and chemically castrated - still quite hard to believe; not the 1500s, but less than two decades before I was born, during my parents' lifetime.
I think that, thanks to the internet, the world has become an enormously better place, where these kinds of stupid attitudes by governments, media, or other groups can be condemned publicly.
Personally I think that AI in the vein of a thinking machine will become real. I think that most of the work towards AI will be almost accidental, and that our demand for a better standard of living will create AI accidentally.
What I mean by this is that a lot of companies are creating very basic AI to power games, traffic systems and, more recently, mobile/smart devices, e.g. Siri. I believe that the evolution of these products will have the secondary effect of consciousness. The other big factor in this is the internet, which is having more and more information pumped into it, with more logic and AI being built into it too (Google search). Now these two entities (smart devices and the internet) are already connected, allowing them to interact more and more with each other and, more importantly, with humans - allowing us to rub more and more off onto them.
I currently believe we are a couple of steps away from AI; the next step will be a massive overhaul of the way we interact with computers. We are reaching that point, but we need to get to where we interact with computers of all types the way they do in Star Trek, i.e. natural voice with understanding of complex commands and language syntax. I believe this is important, as it again allows us to rub a little more off onto computers.
As for creating an AI system from scratch that meets Turing's requirements, the problem we have is that although the electrical circuits of the human brain run slower, the brain does massively parallel processing, so we will need something akin - something with not just 256 cores but thousands of cores per chip. Watson demonstrated this with its thousands of processing units.
From a software point of view, we need an operating system or software that has the ability to learn - not loading everything the AI needs to know straight away, but a system which helps the AI learn the information itself. For example, with language, instead of loading a dictionary we need software that would allow the system to learn the language step by step.
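Learning language "step by step" instead of loading a dictionary could, in its most stripped-down form, look like a system that builds its vocabulary and word-transition statistics from whatever text it is fed. A hypothetical toy, nothing like a real language learner:

```python
from collections import defaultdict

class LanguageLearner:
    """Accumulates a vocabulary and simple word-bigram counts
    incrementally, one sentence at a time."""
    def __init__(self):
        self.vocabulary = set()
        self.bigrams = defaultdict(int)  # (word, next_word) -> count

    def hear(self, sentence):
        words = sentence.lower().split()
        self.vocabulary.update(words)
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[(prev, nxt)] += 1

    def likely_next(self, word):
        # Best guess at what follows `word`, based on everything heard so far.
        candidates = {n: c for (w, n), c in self.bigrams.items() if w == word}
        return max(candidates, key=candidates.get) if candidates else None

learner = LanguageLearner()
learner.hear("the cat sat on the mat")
learner.hear("the cat ate the fish")
print(sorted(learner.vocabulary))
print(learner.likely_next("the"))  # "cat" is the most frequent follower
```

Nothing is preloaded; every word and association the learner "knows" came in through `hear`, which is the step-by-step acquisition the paragraph argues for (just on a vastly smaller scale).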
I suppose once we get to a point with AI, the real question will be how we use it - set it free, integrate it into our society? For that I would recommend reading Asimov's series of books; he's a better 'thinker' than me.