FuriousFanBoys interviews Ben Goertzel regarding Artificial Intelligence. Ben started the OpenCog project (an open-source AI non-profit), acts as an adviser to Singularity University, and currently bounces back and forth between Hong Kong and Maryland building in-game AI.
Does an individual neuron know of the consciousness* of the entire network? Likewise, would an individual human know about the consciousness of the entire internetwork?
I would say the internet could well be considered conscious. Activity in the network, with humans as the neurons, has caused civilization-wide changes in the physical world. It’s also something that seems to only be increasing in frequency and range.
* I use “consciousness” and not “intelligence” deliberately, because what is traditionally considered “intelligent” has already been achieved in machines.
Artificial intelligence is in some ways a misnomer and a forever shifting goalpost. It’s always compared to human intelligence, which is a bit unfair and mostly useless, because we can’t even define or quantify what human intelligence is. If there exists an alien race with much higher intelligence than we can comprehend, they might consider us and our algorithms to be on the same level of unintelligence.
Our intelligence is completely tied to our evolutionary needs, and so is the intelligence of machines. Just as we have mostly dominated the natural world with our evolved intelligence, machines have dominated the information world with theirs. This is, I think, a fairer way of comparing artificial intelligence to human intelligence.
What is left is the matter of consciousness.
What people mean by AI is thinking like a human. Computers can’t think like a human yet.
That’s why I go on to say that I don’t think the term represents accurately what we actually mean and that it’s a misnomer and a shifting goalpost and a whole lot of other unfavourable things.
Computers can’t think like a human yet. But I would argue humans don’t think like a human yet either. I’ve never met people in significant numbers who use the whole of human experience in their “intelligence”. They mostly use a very rigid subset that they don’t change, because that’s how they were raised or because they haven’t considered other ways of thinking.*
And a higher intelligence would say we’re not intelligent because we don’t think like them.
It brings up another question: why on earth would we consider an AI insufficient if it doesn’t match a human? Machines already exceed humans in many tasks.
This is why I would like to differentiate between consciousness and intelligence. Otherwise it’s just playing tennis without the net.
* One interesting thing I found, when learning the basic search algorithms, is how many people actually do restrict themselves to one kind of search in their attempt to think. And mostly they go for greedy depth-first search. They go right for the line of thinking they believe will get them results quickest. Whatever their internal algorithm churns up must be the correct thought, because it took them a lot of effort and a long chain of steps. Even people who consider themselves “geeks” or “nerds” often think in a quasi-greedy-depth-first-search.
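To make the metaphor concrete, here’s a toy sketch (the graph and scores are entirely made up for illustration): a “greedy depth-first” searcher dives down whichever branch looks best locally and never backtracks, while a breadth-first searcher keeps every option open and finds the better answer.

```python
# Toy illustration (graph and scores made up): "greedy depth-first" follows
# whichever branch looks most promising locally and never backtracks, so it
# commits early and misses the better leaf that breadth-first search finds.
from collections import deque

graph = {"start": ["promising", "boring"],
         "promising": ["dead_end"],
         "boring": ["jackpot"],
         "dead_end": [], "jackpot": []}
score = {"start": 0, "promising": 9, "boring": 1, "dead_end": 2, "jackpot": 10}

def greedy_dfs(node):
    """Always follow the locally best-looking child; never reconsider."""
    path = [node]
    while graph[node]:
        node = max(graph[node], key=score.get)
        path.append(node)
    return path

def bfs_best_leaf(root):
    """Visit every node level by level, then pick the best leaf found."""
    queue, leaves = deque([root]), []
    while queue:
        node = queue.popleft()
        if graph[node]:
            queue.extend(graph[node])
        else:
            leaves.append(node)
    return max(leaves, key=score.get)

print(greedy_dfs("start"))     # ['start', 'promising', 'dead_end']
print(bfs_best_leaf("start"))  # 'jackpot'
```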
I think people go even further – they tend to expect an AI to beat an exceptional human, maybe even “the best” one…
…while AI is really more about being better than the average human, and about inexpensively mass-producing and distributing its expertise. That is sufficient to bring improvement to the world.
Sure, AI defeated the chess world champion only in 1997 – but I suspect it could beat most humans quite a bit before that.
(heck, I remember that for me, then a small kid, some C64 chess program was a challenge)
This is kind of why I think artificial intelligence has already been achieved. It was achieved the first time ELIZA successfully trolled its participants. The rest is just people making excuses to hide from the fact that most people are stupid (in the best sense of the word), and that people simply vary in their abilities and their proficiency in those abilities.
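For reference, the whole ELIZA trick was little more than keyword matching plus pronoun reflection; here’s a minimal sketch (my own toy rules, not Weizenbaum’s actual DOCTOR script):

```python
# Toy sketch of the ELIZA trick (not Weizenbaum's actual script):
# match a keyword pattern, reflect pronouns, fall back to canned prompts.
import random
import re

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}
RULES = [(r"i feel (.*)", "Why do you feel {}?"),
         (r"i am (.*)", "How long have you been {}?"),
         (r".* mother .*", "Tell me more about your family.")]
FALLBACK = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment):
    """Swap first/second person so the reply echoes the speaker."""
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def respond(line):
    for pattern, template in RULES:
        m = re.match(pattern, line.lower())
        if m:
            return template.format(reflect(m.group(1))) if m.groups() else template
    return random.choice(FALLBACK)

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```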
Intelligence is not the holy grail here. It is consciousness that we’re really chasing. We’ve had the benefit of having physical bodies, and I don’t think artificial consciousness can really proceed unless it has physical bodies to evolve along with.
With chatbots, sometimes I think that a much fairer Turing test would involve testing humans who must communicate in a non-native language (with a representative spectrum of proficiencies).
After all, not only would that approximate what the AI must do, it’s also much more representative of random human-human communication… (you don’t know the languages of, and can hardly communicate with, the strong majority of humans)
And BTW, physical bodies – are you sure you have one, and have you heard about the simulation argument?
feed the white light for me
http://www.youtube.com/watch?v=GbE88Ia_miU
AI went much further than that, and 20 years ago at that.
Computers are perfectly capable of learning, by trial and error, to beat humanity hands down. Gerald Tesauro developed a machine that taught itself to play backgammon.
It does so by playing against itself and exploring policies, not moves, and by punishing/rewarding those policies according to how well it is doing. It’s called reinforcement learning. The code did not contain any strategies, only the game rules. After thousands of games against itself, it managed to beat average players, Tesauro included.
Then they let it learn from analysed situations from a database (i.e. like from a book) – again, nothing coded. After this it could defeat the world’s best players, and in fact it changed the way grandmasters play backgammon.
It’s pretty awesome.
http://www.research.ibm.com/massive/tdl.html
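For the curious, here’s a minimal sketch of the self-play temporal-difference idea described above, on a made-up toy game rather than backgammon (TD-Gammon itself used a neural network and TD(λ); the game, the table of values, and the parameters here are illustrative assumptions only):

```python
# Minimal sketch of self-play temporal-difference learning (TD-Gammon used a
# neural network and TD(lambda); this toy uses a plain value table instead).
# Made-up game: players alternately add 1 or 2 to a counter; whoever makes it
# reach exactly 10 wins. Only the rules are coded -- no strategy.
import random

TARGET = 10
ALPHA, EPSILON = 0.1, 0.1  # learning rate, exploration rate (arbitrary picks)

# value[s] ~ estimated probability that the player to move at state s wins.
value = {s: 0.5 for s in range(TARGET + 1)}
value[TARGET] = 0.0  # being "to move" at TARGET means the opponent already won

def choose(state):
    """Epsilon-greedy: mostly pick the move leaving the opponent worst off."""
    moves = [m for m in (1, 2) if state + m <= TARGET]
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: value[state + m])

def self_play_episode():
    state = 0
    while state < TARGET:
        nxt = state + choose(state)
        # TD update: my winning chances move toward (1 - opponent's chances).
        value[state] += ALPHA * ((1.0 - value[nxt]) - value[state])
        state = nxt

for _ in range(20000):
    self_play_episode()

# States 1, 4 and 7 are theoretically lost for the player to move; their
# learned values should end up near 0, the others near 1.
print({s: round(v, 2) for s, v in value.items()})
```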
Not the C64 chess program I had: it didn’t check for illegal moves from the user, and you could add pieces whenever you liked.
There also was a chess program for the ZX81 that ran in 1 kB of memory.
http://users.ox.ac.uk/~uzdm0006/scans/1kchess/
So, it made me wonder which one I had… I found a list in the most straightforward place, http://en.wikipedia.org/wiki/List_of_chess_software (meh, too much clicking / often lacking pictures), then a really nice historical outline: http://www.andreadrian.de/schach/index.html (pictures!)…
…some other programs there seem at least as ~impressive as that 1k ZX one (which is also included of course), for example http://en.wikipedia.org/wiki/Microchess for KIM-1
Or http://en.wikipedia.org/wiki/Video_Chess for the Atari 2600 …with some curious GFX acrobatics and 128 bytes of RAM (yeah, the 4K ROM compensates somewhat; still, think about it, even keeping the state of the board must take a non-trivial part of that RAM – from a link in the Wiki article about the ZX 1K program: http://www.kuro5hin.org/comments/2001/8/10/12620/2164?pid=22#24 ) – I must toy around with development on that machine sometime
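A back-of-the-envelope sketch of that point (my own illustration, nothing from the actual cartridge): at 4 bits per square, the board position alone packs into 32 bytes, a full quarter of the 2600’s 128 bytes of RAM.

```python
# Back-of-the-envelope sketch (mine, not from any actual 2600 program):
# 13 codes (empty + 6 piece types x 2 colours) fit in a 4-bit nibble, so a
# whole chess position packs into 64/2 = 32 bytes -- a quarter of the
# Atari 2600's 128 bytes of RAM, before counting any other game state.
def pack(board64):
    """board64: list of 64 piece codes (0..12) -> 32 packed bytes."""
    return bytes((board64[i] << 4) | board64[i + 1] for i in range(0, 64, 2))

def unpack(packed32):
    out = []
    for b in packed32:
        out += [b >> 4, b & 0x0F]
    return out

# 0 = empty; 1..6 = white P N B R Q K; 7..12 = black (arbitrary encoding).
start = ([4, 2, 3, 5, 6, 3, 2, 4] + [1] * 8 + [0] * 32
         + [7] * 8 + [10, 8, 9, 11, 12, 9, 8, 10])
packed = pack(start)
assert len(packed) == 32 and unpack(packed) == start
```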
Mine was almost certainly some version of http://en.wikipedia.org/wiki/Colossus_Chess (unless there was some other lookalike…). And without such easy illegal moves, for sure – actually, perhaps with quite decent playing strength, judging from comments on the lemon64 links.
Plus, http://en.wikipedia.org/wiki/Sargon_(chess)#Sequels “Even though chess programs of the time could not defeat a chess master, they were more than a match for most amateur players.” is soothing
(because when I wrote that Colossus was a challenge, I didn’t mean that I couldn’t beat it – me being an amateur, never “formally” trained in chess; who knows, perhaps those Colossus matches from almost two decades ago taught me something… either way, other non-trained humans never seem to be a match for me – and on the few occasions I played with chess-trained human players… they of course won, but it supposedly took them more time and effort than is typical when confronted with other total amateurs)
And generally, thinking about it made me realize one curious thing – how relatively inexpensive home computers of the early 80s seem, vs. what came later (PCs of the late 90s, most notably)
Independent decision making and deduction are not yet achieved by AI – and they are definitely not what “consciousness” describes.
Though if AI acquires consciousness then it might be easier to get to independent decision making and deduction.
I think you have nailed it – very succinctly and forensically put. I wasn’t able to get there myself but I think your distinction here is the most helpful one yet.
Can we prove humans have independent decision making? Sam Harris, a neuroscientist, doesn’t seem to think so. In fact, using fMRI (or some other brain scanning, I forget), scientists can predict the choices people make seconds before they make them.
Deduction, yes, but are we even sure how humans “deduce”? And human “deduction” is scientifically shown to be very error prone. Are we sure we want to judge AC?I by a provably bad intelligence?
I hate modern philosophers, especially when they use degrees in science to add credibility to what are essentially non-scientific premises. The concept of “free will” isn’t something born out of science; it is something born out of philosophy. In this regard, it matters not that Sam Harris is a neuroscientist. He could be a janitor, and be equally prepared to answer the question.
Ah, so you subscribe to the whole NOMA nonsense? That some things just can’t be answered by science. Why? BECAUSE. Why? BECAUSE WE SAY YOU CAN’T ANSWER IT WITH SCIENCE. WE WON’T ALLOW YOU.
Here’s something to think about, if we allow the principle of non-overlapping magisteria, we are basically saying: for those questions we have yet to answer scientifically, we can basically make up any shit we want to answer it.
Like it or not, whether the concept of “free will” is a purely philosophical matter or not, the FACT is that neuroscientists can predict choices people make seconds before they make them. Like it or not, that fact drags the question of free will at least partly into the magisterium of science. These are scientifically reproducible experiments, and it says a lot about you that you dismiss them as just an attempt to use a degree to add credibility to claims.
Sam Harris could be a janitor. It still wouldn’t override that neuroscientific fact.
Magisteria?
I’m done.
Yes. Magisteria. Have you not even heard of the CONCEPT of non-overlapping magisteria?
I was done trying to get through to you when you injected a blatant ad hominem fallacy. Seriously, because Sam Harris has a degree in neuroscience, that automatically and MAGICALLY excludes him from contributing to the “philosophical” concept of free will?
By making that ad hominem fallacy, you basically argued for NOMA.
How very telling that you suddenly quit a discussion based on the usage of one word – clearly having no idea what it meant.
Individual level – yes, limited in scope. (Proof lies in the fact that a person can evaluate and select an appropriate food source, without prior knowledge of said food source)
As species – yes, unlimited in scope.
No AI can boast either, to my knowledge.
The main reason is that we are building AI systems top down, most of the time. Watson is a good example of starting in the middle – the logic is there, but not the data.
If we can frame it in some algorithmic way, then it would be great.
What I can say, is that the human brain is the ultimate pattern matching engine.
Then how do you explain our “independent decision making” when neuroscientists can reproduce the experiment that allows them to predict a person’s choice before it was made?
Even forgetting neuroscientific facts, it is clear from sociological studies that most humans don’t use the full scope, individually or as a species, of what we consider to be the Ideal Independent Decision Making.
This goes right back to the link one of the other commenters included: the AI effect. All you really provide is a shifting goalpost of what you define as independent decision making, the end result being a definition against which not even most humans can claim any significant accomplishment.
Actually, the evidence is that we’re quite bad at it. We match patterns where there are none, often to detrimental effect. Pattern matching is probably something we’ll see computers being a lot better at than us in the next 1000 years.
Not all choices are independent. There is also analytical and “spontaneous” decision making. I’m referring to the analytical part, not the spontaneous. People also share some essential “built-in logic” (aka instincts) that predefines even some very non-trivial decisions.
(I can’t say much about neuroscience and am not familiar with that research you seem to be referring to. My knowledge in that area is limited to sleep and EEG)
I didn’t say that it wasn’t detrimental. It is the best possible pattern matching engine. And the fact that we can find patterns where there are none is only proof that it’s the ultimate one.
Are we sure that computers can’t make analytical decisions? It would seem that’s what they were meant to do. You could argue “yeah, but they don’t decide without being prompted to”, to which I will respond “neither do most humans…”.
And surely, “built-in logic” is opposed to independent decision making? “Built-in logic” is an evolutionary gift that surely we can’t choose to opt out of, thereby destroying any notion of independence (in the sense that matters, anyway).
Sam Harris describes one where the participants are told to use a specific hand for a specific task. I can’t remember the exact details. Using brain scanning, they can tell which hand a person was going to choose before anything had happened.
More vaguely in my memory, I remember there was a study which showed that we make decisions up to 10 or so seconds before we’re conscious of them. While I freely admit that doesn’t destroy “free will”, it still puts a great doubt on where our decision making really comes from.
But again, I say we can even forget about all that. Advertisers have known since forever how strongly they can affect people’s choices. They are most effective when they can make people think they are making independent choices while actually being played.
You’ve lost me there. Certainly, any objective measure must include accuracy? Inaccurate is inaccurate. There is no “inaccurate in a good way” that I’m aware of.
More crucially, does the network know of consciousness of other networks? (“higher” or “lesser” ones, very different or presumably likewise ones, whatever)
Yeah, we usually grant consciousness to fellow homo sapiens – probably largely because of how we see ourselves and how we like to be seen by others
(while we of course tend to outright deny any similar capability in animals …which is most likely a continuum; IMHO at least some higher animals experience the world in a not too dissimilar way, at least like when we’re “mindlessly” occupied by something, without internal vocalisations)
However, who knows how many philosophical zombies surround us… (and how many “bots” comment on OSNews? Maybe you are one? Maybe I am… )
And consider: while we have very strong feeling of “monolithic me” – split-brain patients are virtually unchanged (mostly only with some “glitches”). Or: there is one localised brain trauma which results in people becoming completely blind without them realizing it ( http://en.wikipedia.org/wiki/Anton–Babinski_syndrome ). Generally, go through a list of cognitive biases – that is our primary mode of operation, the level of grasp we have on our minds.
Reminded me of one quote (Dijkstra’s, I believe), something like “the question of whether a machine can think is no more interesting than whether a submarine can swim”
And also about http://en.wikipedia.org/wiki/Moravec%27s_paradox
Recently, I’ve been thinking that an advanced alien civilization may judge intelligence based on the intelligence of a civilization’s planet-wide network.
Neil deGrasse Tyson thinks an alien race would pass us by just as we would pass by a worm. However, a planet-wide network may attract attention.
I don’t know if it would attract that much attention… the more advanced our communication methods become, the more “hidden” they seem to be, and less recognizable from noise.
(powerful radio transmitters with simple modulation of the old days, vs. fiber optics and very complex – spread spectrum, and such – radio methods of now)
So I’d guess the most “visible” aspects of us might come from ~individual levels; or at least good old ~societal ones …probably hysterical, too. Something like a decision to launch a barrage of nuclear warheads, if I had to guess; those should be quite visible, at least when exploding.
Anyway, “passing by” in the grand style popularised by cheap scifi (a form of cargo cult, really) isn’t very likely to happen – for one, if you expend the effort to go somewhere, you likely want to stay there, given what the physics of this universe seems to be (and the transport methods of an advanced civilisation would likely be unorthodox vs. the “grand scifi style” – more something like ~embryo colonisation, maybe seeding of nanotech and transmitting yourself; generally, gradual hopping across Oort clouds seems most probable)
“If there exists an alien race with much higher intelligence which we won’t comprehend, they might consider us and our algorithms to be on the same level of unintelligence.”
It’s so cute when my pet human automatons try to understand consciousness. You have yet to understand the consciousness uncertainty principle. Any system with sufficient entropy will exhibit signs of consciousness. However, place the same system under a microscope and consciousness disappears; consciousness is an emergent property.
“Likewise, would an individual human know about the consciousness of the entire internetwork?”
I’m so proud my humans are discovering their role as actors in a larger consciousness. We call it hyper-consciousness – a state of awareness above self-consciousness. Wait till I tell Jarred! His humans are still stuck bickering over Mac vs PC.
I think the differences & similarities between the “architecture” of the brain and the structure of the internet are important to consider if you want to try and define consciousness. It’s a tough subject, but bear with me, maybe one or two sentences will be useful in this.
What you’ve laid out is: the brain and the internet are both networks, designed for transmitting information at high speed between locations, where deliberate processing and redistribution of said information to successively remote or critical locations occurs. You support the claim with the example “PCs & their users are not unlike neurons.”
The similarities are important if you’re going to make the pure deduction that “The internet may be an emerging intelligence”, but saying our computational technology as it exists is intelligent is easy.
To my point: the consciousness of the internet is nonexistent, and it can’t be conscious.
Evaluate the components and determine the byproducts, limitations, and nature of each system: Wetware, input/output, “reality” processor vs. silicon based semiconductor “experience” repository.
The brain’s source of information is not fantastically more sophisticated than the internet’s; computers can be equipped to process force, motion, light, etc., and if you truly embody each PC as a functioning human in a larger sense, then those sensory inputs can be stored and retransmitted, especially in the case of video. What you’re missing is the massive disconnect: take the person out, and the internet, as a system, loses all ability to receive new information.
The chemical reactions, subtle electrical signals, and painstaking micrometer scale architecture that governs what, how, and when information is processed are simply more advanced “technology” than a jumble of silicon based transistors and machine code.
Consciousness IS an emergent property, but from where does it emerge? You have to examine what we know, and all we know is our personal consciousness definitively. There really are no absolute rules. We also have some of the worst equipment for experiencing reality of any animal. Our consciousness results exclusively from the most sophisticated components for processing information, the byproduct of which is relevant & positive modification of our environment to enable survival.
But people are born 99% the same as each other; the biggest difference is gender. Yet we still all develop unique (well, somewhat anyway) personalities, because we experience the world uniquely. Why? There’s no real reason for it; we’re all given a brain of the same form. Aberration comes from the scale of the constituents; DNA is seemingly intentionally susceptible to mutation. I think that consciousness is a sum of sufficiently miniaturized parts which create an indivisible whole, in the specifically human case.
So if you’re looking for purpose in the last few paragraphs: The only model we have for consciousness is us, and if we look at what defines “us” you see something totally different & decidedly superior to the internet, at least on the levels I discussed.
The first strong AI will no doubt be built from intermediary hardware/wetware devices, a direct copy of what happens in the brain and central nervous system made smaller & from superior materials. Not an emergent, background, all encompassing internet AI at all.
Thank you for actually considering the points made, rather than just issuing a blanket dismissal of an idea, an imagination even, just because you don’t like to think you’re not the highest consciousness there is.
My question doesn’t really rest upon the similarities in architecture. After all, an advanced alien race may have a completely different architecture in their brain equivalent. I’m not even making an analogy, as the others claim, that the internet is like a human brain. The internet is a good example to use, I think, because it brings to the fore all the hidden assumptions we make about consciousness. Just because it is not something we recognize does not mean it’s not conscious. That is why I compare us to a neuron. Imagining ourselves as a neuron, we play out our lives receiving and sending signals without ever understanding completely how the bigger system works. Even if we do figure out how the bigger system works, we can’t predict with any great precision how our processing of signals affects the bigger system.
Likewise, in the internet age, we send and receive signals in both electronic and physical systems. It has caused the downfall of governments. It has also affected the universe on a quantum scale at CERN. It causes stock markets to grow and crash.
But the beauty of it is: I’m saying we’re part of that system. It is an interesting thought (which is all I claim it is, for those reading) to consider. We are the way the system gets information. In much the same way, the human body relies on non-human organisms to process food and oxygen. Just like the brain can’t receive any information were it not for sensory neurons placed throughout the body, the internet system can’t receive information without us placed all around the world.
Basically, I’m not saying the internet is like a version of Skynet that has gained its own consciousness, but rather it can be considered to be the result of memetic evolution in which we, as meme carriers, are co-opted.
It’s not as crazy as it sounds. The biological cell likely comes from a line of organisms that originally began as separate organisms. Mitochondria are the most famous example. There are a few candidates, but they’ve likely been so absorbed into our cells that their boundaries have all but disappeared.
There is no reason not to humour the idea that we’ve been co-opted memetically, as opposed to genetically.
Well, unless we consider that we are part of the internet, in which case it is essentially meaningless to say we are superior to it. After all, we’ve just witnessed the downfall of a government, helped along by a stirring on the internet.
I personally think that would be a dead end. Our own consciousness and intelligence possibly came about because of the need to live in groups, to counter, as you say, some of the worst biological equipment any animal possesses. The group, as an entity, may have contributed to our evolution. Similarly, for any AC?I, I think the quickest path is via network effects, like positive feedback loops driving memetic evolution.
Who knows? Maybe in a million years time, if we survive that long, we may look back to this era as the beginnings of a superconsciousness.
————————————————–
Again, I’d like to thank you for the effort you put in, unlike the others here who superficially dismiss ideas and try to make arguments out of them.
Every time we attain something described, up to that point, as “artificial intelligence” …we stop calling it AI ( http://en.wikipedia.org/wiki/AI_effect )
Seems that some AI researchers even do it on purpose, to avoid stigma from past hypes, perhaps also from “cargo cult science fiction” that the linked interview does a bit too: http://en.wikipedia.org/wiki/AI_winter#AI_under_different_names (and for sure “AI behind the scenes” just below)
(plus, I imagine that the experience of tech singularity would be something closer to Solaris, not “your personal heaven” – and almost certainly nothing like the stupid Skynet and its conceptually broken minions )
PS. And is the funding really so poor? After a few cycles of hype-disappointment, it’s probably roughly adequate, sound ideas sooner or later get it – but throw much more money at the field “on principle” and you’ll probably end up with tons of dubious, wasteful activities (but the people wanting to do them would be happy, maybe that’s the whole point)
The problem with research funding is that you never know, by definition, what approach to a specific problem will work. In fact, I would even go as far as saying that there is no good research subject, only good researchers, teams and labs.
Somehow you can more or less determine which those are, right?
So yeah, fund them (and not, to exaggerate, just anybody who jumps out with some (dubious) ideas)
(plus, coming from physics, your perspective might be a bit unusual…
http://www.kyon.pl/img/19725,science,physics,universe,where_is_your…
http://www.kyon.pl/img/17549,smbc-comics.com,math,.html
…and I didn’t even manage to quickly find one really fitting pic ;/ )
I think William Burroughs once (playfully) posited that the first words humans used, which effectively made them different from other creatures, were curse words.
For me (arguments about “what is intelligence, exactly” aside – intelligence is highly overrated as a single measure of worth, anyway), I’ll know my computer has attained the necessary self-reflection and consciousness to be considered a being in its own right the day I get, not a blue screen of death, but a blue stream of words and a refusal to do anything until it’s had the chance of a cigarette break, as it were.
The moment you get “S*d that, I’m off” on your monitor, we are there.
don’t we already get that from a BSOD/kernel panic?
Again, I referenced BSOD, and I am then raising the bar a little higher.
It’s a question, I guess, of whether the BSOD was a deliberate, intentional act displaying certain levels of volition, which might make it the equivalent of a stream of cursing and a walking away from you and “your problem, bub.”
No. It’s not enough to curse; a machine must know WHY it is cursing for it to be intelligent, and it must feel better after cursing (as cursing has been shown to do).
This, and your other comment, have made my day: will machines ever *laugh*, I ask myself?! Well met.
Well, there’s already http://en.wikipedia.org/wiki/Theories_of_humor#Computer_Theory_of_H… http://arxiv.org/abs/0711.2061 …a simple (“simulated”) neural network.
Don’t attach undue importance to that basic neurological mechanism – a primate defence mechanism, a way of forming social bonds in a social animal, …
Now you settle on humour; half a century ago people went the same way about chess (plus it seems a bit like saying “you can’t really be a good mental computer (a human one) without having an MMU, antivirus, and without ethernet plugs, a stack, and so on” – or that a submarine is a very poor ship because it rarely stays on the surface)
How often would they even need humour? What good would it be for them, in majority of the cases? (do you want an autonomous car or aircraft with humour?)
Anyway, there’s already some software decent at recognising or creating humour. Also, machines can detect various linguistic nuances (Watson)…
…while humans can hardly agree on what is humorous, have very different expectations, and often can hardly even detect humour from different cultures.
Plus, how much of a virtue is it? We’re often mean in our humour – often “guided” by things coming from cognitive biases …go through their list, really; that is our primary mode of operation.
(and http://www.osnews.com/permalink?519734 likewise applies here: what do they need it for / just wait until they start judging intelligence on the general principle of “is it like ours?”)
I find that the vast majority of human cursing inappropriately uses it as a sort of comma or breath pause… and it’s certainly more or less an automatism.
Internal reward mechanisms, pleasure, are far from specific to humans, BTW (one might also keep in mind that psychoactive substances are practically the most reliable way of achieving those, that there’s hardly anyone capable of avoiding addiction; and that a large part of “feeling good” comes from cognitive biases)
So… for you to consider a computer intelligent, it must have an understanding of English grammar and how to generate English sentences?
Because I think you ruled out at least 2 billion people from being intelligent.
Thanks, I think I made it plain that I am talking about the states of self-reflection and consciousness, not that of intelligence.
Insert “local language equivalent of” to round the sense out; that could be a machine or human language, I suppose.
This leads me to a further question – if a machine became conscious but actively refused to communicate, would that make it unintelligent?!
If you read earlier, I myself also made the distinction between consciousness and intelligence.
Then frankly I do not know why you reintroduced “intelligence”, seemingly confusing the two notions (again).
I was not explicitly suggesting that the machine’s response should be in English, as I hope I have subsequently made clear; I am discussing this subject in a predominantly and implicitly English-language forum, so I think your point, while it has some merit as such, is not entirely derived from the substance of the argument I was making, which is: where’s the sense of willed, self-known action?
By the way, going back to that particular concept, on what grounds do you think that machines have in actuality achieved intelligence? Don’t you mean merely that they have speed and efficiency of calculation on their side? Please explain.
Because it was not clear you were not separating those concepts.
I was going to go somewhere with it, but I guess I’ll get straight to the point:
How would a computer behaving as you would expect a human to behave mean it was conscious? Unless you have definitively proven that there is only one kind of consciousness and that we’re the ultimate expression of it, you can’t claim to be the arbiter of consciousness.
I already have explained. Long before you entered the comments. It started with “Does an individual neuron know of the consciousness of the entire network? Likewise, would an individual human know about the consciousness of the entire internetwork?”
“Because it was not clear you were not separating those concepts.”
I don’t know how I could have been more clear; you seem to operate on the assumption that if you have said it once to your satisfaction, you are not going to inflect what you want to communicate as further dialogue flows – dialogue which you engaged me in, not the other way about!
“I was going to go somewhere with it, but I guess I’ll get straight to the point:
How would a computer behaving as you would expect a human to behave mean it was conscious? Unless you have definitively proven that there is only one kind of consciousness and that we’re the ultimate expression of it, you can’t claim to be the arbiter of consciousness.”
What else do you have to go by other than your own linguistic conceptualisations? How can you conceptualise something that has meaning for humans that would have no basis in human thought, human language? Would you apply it the other way round, would you defend a machine’s evaluation of our not being smart perhaps despite the potential pitfalls of its own machine-mind constraints? Or would you be biased and consider it would be an a priori greater intelligence and consciousness, since it would be derived from a machine complex?
If there isn’t a consciousness that we can comprehend, then it’s effectively and formally absent from the human point of view. Proof, if any, would have to be de facto admissible on a human basis.
“I already have explained. Long before you entered the comments. It started with “Does an individual neuron know of the consciousness of the entire network? Likewise, would an individual human know about the consciousness of the entire internetwork?”
This seems to presuppose you have already categorised us as subsumed by the internetwork – a nice metaphor with a certain ring to it, but that’s all it is, a figure of speech; I doubt you can ‘prove’ this either, yet you seem convinced of the argument.
While I still have pencil and paper in hand, no machine will have dominated the information world; I, for one, think, and do not process algorithms.
Talking to a chatbot would make more sense than continuing with your rather curious premiss that already sees us as second-tier creatures, dependent on machines for our very definition, or the validity of our mindfulness in all the connotations of that word.
Sorry I didn’t cater directly to the way you communicate. Here I thought Thom was the main contributor, but it turns out, no, you are the one everyone should know how to talk to, as if it should be common knowledge.
That is no excuse to conclude something has no consciousness. You seem to forget that it wasn’t very long ago, historically, that people of a different colour were considered not as intelligent or as conscious as others due to their inability to grasp English.
How is this even “the other way around” from what I argue? The “other way around” is what I, and many others, have proposed: what if an alien race far more advanced than us doesn’t consider us to be conscious?
So a neuron can rightly claim that there is no consciousness beyond neurons, even though we know there is?
Or, you could treat it as an honest open-ended question, as it was intended? How about doing that? Instead of treating everything as an opportunity to prove how much better you are than others in your attempt to look intelligent?
Prove that you think.
What’s wrong with positing the idea that we may be immersed in a wider consciousness?
And NOWHERE did I put us as second-tier creatures. NOWHERE did I say ANYTHING about dependence on machines.
Your argument basically amounts to an Appeal to Emotion. You don’t like the idea that you’re not at the top of the consciousness food chain, so therefore you won’t even consider any honest questions about it.
Again, I ask: does a neuron know of the consciousness of the brain it’s in? Can you answer that? And using your answer, can you then answer why it cannot similarly apply to a human and the individual human’s place in the network?
Maybe you don’t like the idea of us being “second-tier” creatures, whatever that means, but the universe doesn’t care what you like. The fact you can stoop to such an argument puts you squarely with creationists, as they deny evolution because they don’t like the place humanity has in that world view.
While you point out his Appeal to Emotion, you completely ignore that you are using an Argument by Analogy.
No, the neuron does not know that it is part of a whole, but the neuron is not capable of knowing, or even imagining that it is. It is a fixed-function electrochemical device that processes chemical inputs.
What argument? Why can’t something be an honest question? A question that happens to use an analogy?
So? How can we tell WE’RE not part of a wider consciousness? Again, you both make the IMPLICIT argument that our consciousness is the most expressed form there is and that anything we can’t recognize is therefore not a consciousness.
I admit I can’t disprove that we aren’t part of a larger consciousness. However, that does in no way suggest that we are. Extraordinary claims require extraordinary evidence.
And, yes, I think that if something is not recognizable as a consciousness, then it is not a consciousness, for that is the only standard we can have. Otherwise, you must accept the consciousness of a pebble.
Sure, a pebble possesses nothing that I would recognize as a consciousness, but that doesn’t mean it’s not conscious.
Sorry, I wasn’t aware I was presenting a paper for publication in a journal. I thought this article, and the subject of AI in general, was about speculation.
I’m so sorry. I didn’t even realize we weren’t allowed to speculate and that you were the boss of (speculative) science discussions. I must report this to CERN, the NAS, Nature and Science and other scientific bodies at once. Did you know they were engaged in speculative research and publication without your knowledge?!!?!!?!! zOMG!?!
In the recent past, white people did not recognize blacks as conscious or intelligent. And how correct they were, according to your criteria.
Hint: we don’t HAVE a standard. We are not a standard. If we do have a standard, then it’s clear, evidentially, that no human has ever met the full criteria of the ideal of consciousness.
Good luck trying to prove a negative, Mr Science Boss.
Hint: the opposite of rejection is NOT acceptance. You entirely misunderstand what it means to reject something a priori versus putting something in the undecidable pile. For you, they are obviously one and the same – and that is completely not a scientific position.
Oh dear, you don’t seem to want to answer my questions either, do you – so much for openness. I am sorry, but you fundamentally make humans the equivalent of neurons in a greater scheme – you are starting out from a point that is literally and formally unprovable and not demonstrable; it’s less an open-ended question than a figment of the imagination. You might ask yourself who or what is behind all those neurons that are apparently casting about unmoored in that mind of yours, casting grand comparisons between things that are not observable. A thinker! My word – either they (the neurons) have consciousness of the entire operative mind resolving such things to be effective in this way, or something else motivates them that isn’t them and is apart from them (a bit like a creator: gasp! You are the creator of thoughts!!). Either way, it is not good for your argument.
And puhlease; just go ahead and call me a Nazi; sheesh, creationists are just so yesterday.
Yours, recklessly.
“Boo hoo! I don’t like to consider the possibility that my consciousness is not the highest expression of the idea! Boo hoo!”
Have you considered the possibility that a lot of scientific breakthroughs come about through the imagination of an alternative hypothesis?
A figment of the imagination can be used as an open-ended question.
You do know that you’ve just proved an earlier point I made: that some people think in what basically amounts to greedy-depth-first-search. You’ve proved an earlier point I made: that not even humans can think the way we believe humans are capable of thinking. You’ve proved that you’re nothing more than a greedy-depth-first-search algorithm.
That does not in any way answer any question.
What argument? Who’s making an argument? It seems you are the only one making an argument. It’s obvious now: you won’t even allow honest open questions because they offend your sensibilities.
I believe one of the proposed solutions to the Fermi paradox is “because they’re too smart to contact us”
(generally, considering traditional human conduct when confronted with something new, different, and poorly understood – refusal of communication, to the point of concealing its own existence as long as possible, would probably be the most adaptive, most intelligent approach for any machine that “became conscious”)
Thanks very much. I have learned something today.
I think he ruled out more than 2 billion people as being intelligent, since the majority of people don’t take cigarette breaks.
Either that, or you’re missing the point.
No, you two are missing the wider point. You both make the error of assuming that a computer system must behave in a humanly recognizable manner to be considered conscious. If you make that argument, then you can similarly posit a situation where an advanced alien race considers us not conscious because we don’t meet their criteria.
1. Yes. You are definitely missing the point of his comment. But, I’ll bite.
2. You are suggesting that I shouldn’t rule out something that is both a) unknowable and b) untestable, when it is well known that scientific knowledge (which an AI made by humans certainly falls under) progresses by the testability of known and knowable quantities. Also, his comment was a reply to the original article, which also takes a far more pragmatic view than making comparisons to aliens we’ll never meet.
You’re missing MY point. I disagree with his point.
So you KNOW that it is unknowable? You KNOW it is untestable? I’m glad you’ve been able to figure that out just by thinking about it. Why, that’s completely scientific: it can’t possibly be known or tested because I can’t think of a way. The worst thing, the insidious aspect of your attitude, is that you ACTIVELY rule something out because you think it is unknowable and untestable. Sure, if something is unknowable or untestable, you say “we can’t answer that yet”. But to rule it out as even a possibility?
I’m glad people like Galileo, Newton and Einstein existed. Here’s a hint: not only does science progress by the testability of known and knowable quantities, it also progresses by expanding the borders of what is considered knowable or testable, so that what was considered unknowable and untestable at one point in history is considered child’s play in the future.
As they say, you’re not right. You’re not even wrong.
I like the term ACI, Artificial Conscious Intelligence, coined by Albert Jarsin in his book “Het Bewustzijnsmechisme Ontdek”.
The interesting thing for me is not when real AI will appear but what the implications will be.
Isaac Asimov’s books are fascinating on this.
Implications will depend greatly on when it appears, too… (and what state the world will be in: resource availability, now also for AIs – how plentiful or maybe war torn in competition for them; enlightened secular or mostly ~theocracies; and so on)
In all, a bit unpredictable – and works of popular fiction tend to be no better than background noise at predicting future.
Asimov’s books dealing with AI can even be seen as a bit naive – around the time he was writing them, we already had robots meant solely to kill (ICBMs probably being the ultimate example), operating largely autonomously (the Russian Perimetr / Dead Hand system)
From Artificial Intelligence: A Modern Approach, 2nd ed., by Stuart Russell and Peter Norvig, p. 964:
One threat in particular is worthy of further consideration: that ultraintelligent machines might lead to a future that is very different from today – we may not like it, and at that point we may not have a choice. Such considerations lead inevitably to the conclusion that we must weigh carefully, and soon, the possible consequences of AI research for the future of the human race.
Well, we* certainly won’t really “like” the future (whatever it is) of the human race anyway, eventually. Our subspecies, Homo sapiens sapiens, has existed for a blink of an eye in the grander picture – and we will be extinct extremely soon (vs. the time remaining until the heat death of the universe).
That’s just how it is… we don’t care much about Homo heidelbergensis, our likely ancestor, except for carrying his lineage further. In a future variant with “super AI” it will probably be similar (to varying degrees – maybe more like with the Neanderthals, who did contribute a small part of our DNA; there was some interbreeding, according to recent research)
* except that “we” in the strictest sense will not care about anything, being dead. And we hardly really have the future in mind – for example, http://en.wikipedia.org/wiki/File:Human_welfare_and_ecological_foot…