Linked by Eugenia Loli on Mon 28th May 2012 03:53 UTC
General Development FuriousFanBoys interviews Ben Goertzel regarding Artificial Intelligence. Ben started the OpenCog project (an open-source AI non-profit), acts as an adviser to Singularity University, and currently bounces between Hong Kong and Maryland building in-game AI.
Memetics
by kwan_e on Mon 28th May 2012 04:48 UTC
kwan_e
Member since:
2007-02-18

Does an individual neuron know of the consciousness* of the entire network? Likewise, would an individual human know about the consciousness of the entire internetwork?

I would say the internet could well be considered conscious. Activity in the network, with humans as the neurons, has caused civilization-wide changes in the physical world. And it's something that seems to be increasing in both frequency and range.

* I use "consciousness" and not "intelligence" deliberately, because what is traditionally considered "intelligent" has already been achieved in machines.

Artificial intelligence is in some ways a misnomer and a forever shifting goalpost. It's always compared to human intelligence, which is a bit unfair and mostly useless, because we can't even define or quantify what human intelligence is. If there exists an alien race with much higher intelligence which we can't comprehend, they might consider us and our algorithms to be on the same level of unintelligence.

Our intelligence is completely tied to our evolutionary needs, and so is the intelligence of machines. Just as we have mostly dominated the natural world with our evolved intelligence, so have machines dominated the information world with theirs. This is, I think, a fairer way of comparing artificial intelligence to human intelligence.

What is left is the matter of consciousness.

Reply Score: 2

RE: Memetics
by Fergy on Mon 28th May 2012 05:18 UTC in reply to "Memetics"
Fergy Member since:
2006-04-10

* I use "consciousness" and not "intelligence" deliberately, because what is traditionally considered "intelligent" has already been achieved in machines.

What people mean by AI is thinking like a human. Computers can't think like a human yet.

Reply Score: 3

RE[2]: Memetics
by kwan_e on Mon 28th May 2012 05:49 UTC in reply to "RE: Memetics"
kwan_e Member since:
2007-02-18

"* I use "consciousness" and not "intelligence" deliberately, because what is traditionally considered "intelligent" has already been achieved in machines.

What people mean by AI is thinking like a human. Computers can't think like a human yet.
"

That's why I go on to say that I don't think the term accurately represents what we actually mean, and that it's a misnomer, a shifting goalpost, and a whole lot of other unfavourable things.

Computers can't think like a human yet. But I would argue humans don't think like a human yet either. I've never met people in significant numbers who use the whole of human experience in their "intelligence". They mostly use a very rigid subset that they don't change, because that's how they were raised, or because they haven't considered other ways of thinking.*

And a higher intelligence would say we're not intelligent because we don't think like them.

It brings up another question: why on earth would we consider an AI to be insufficient if it doesn't match a human? AIs already exceed humans in many other tasks.

This is why I would like to differentiate between consciousness and intelligence. Otherwise it's just playing tennis without the net.

* One interesting thing I found, when learning the basic search algorithms, is how many people actually do restrict themselves to just one kind of search in their attempt to think. And mostly they go for a greedy depth-first search: they head straight down the line of thinking they think will get them results quickest. Whatever their internal algorithm churns up must be the correct thought, because it took them a lot of effort and a lot of subsequent statements. Even people who consider themselves "geeks" or "nerds" often think in a quasi-greedy depth-first search.
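A minimal sketch of that greedy habit, in Python (the function names and the toy goal are my own illustration, not anything from the interview): at each step the search commits to the single most promising-looking branch and never backtracks, so it can sail straight past the answer.

```python
def greedy_dfs(start, children, score, is_goal, max_depth=100):
    """Always follow the most promising-looking branch; never backtrack."""
    node = start
    for _ in range(max_depth):
        if is_goal(node):
            return node
        options = children(node)
        if not options:
            return None  # dead end; a backtracking search would try siblings
        node = max(options, key=score)  # commit to the "best" line of thinking
    return None

# Toy problem: reach exactly 10 by adding 1, 2 or 3 at each step.
expand = lambda n: [n + 1, n + 2, n + 3] if n < 10 else []
found = greedy_dfs(0, expand, score=lambda n: n, is_goal=lambda n: n == 10)
# Greedily taking +3 every time visits 0, 3, 6, 9, 12 - it overshoots and
# fails, even though 10 is trivially reachable by a less hasty search.
```

The failure mode is the commitment, not the problem: an exhaustive search over the same toy problem finds 10 immediately.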

Reply Score: 4

RE[3]: Memetics
by zima on Mon 28th May 2012 06:09 UTC in reply to "RE[2]: Memetics"
zima Member since:
2005-07-06

But I would argue humans don't think like a human yet either. I've never met people in significant numbers who use the whole of human experience in their "intelligence". They mostly use a very rigid subset [...]
It brings up another question: why on earth would we consider an AI to be insufficient if it doesn't match a human? AIs already exceed humans in many other tasks.

I think people go even further - they tend to expect an AI to beat an exceptional human, maybe even "the best" one...

...while AI is really more about being better than the average human, and about inexpensively mass-producing and distributing its expertise. That is sufficient to bring improvement to the world.

Sure, AI defeated the chess world champion only in 1997 - but I suspect it could beat most humans quite a while before that.
(heck, I remember that for me, then a small kid, some C64 chess program was a challenge ;) )

Edited 2012-05-28 06:09 UTC

Reply Score: 2

RE[4]: Memetics
by kwan_e on Mon 28th May 2012 06:14 UTC in reply to "RE[3]: Memetics"
kwan_e Member since:
2007-02-18

...while AI is really more about being better than the average human, and about inexpensively mass-producing and distributing its expertise.


This is kind of why I think artificial intelligence has already been achieved. It was achieved the first time ELIZA successfully trolled its participants. The rest is just people making excuses to hide from the fact that most people are stupid (in the best sense of the word), and that people just vary in their abilities and their proficiency in those abilities.

Intelligence is not the holy grail here. It is consciousness that we're really chasing. We've had the benefit of having physical bodies, and I don't think artificial consciousness can really proceed unless it has a physical body to evolve along with.

Reply Score: 2

RE[5]: Memetics
by zima on Mon 28th May 2012 06:28 UTC in reply to "RE[4]: Memetics"
zima Member since:
2005-07-06

With chatbots, sometimes I think that a much fairer Turing test would involve testing humans who must communicate in a non-native language (with a representative spectrum of proficiencies).

After all, not only would that approximate what the AI must do, it's also much more representative of random human-human communication... (you don't know the language of, and can hardly communicate with, a strong majority of humans)


And BTW, physical bodies: are you sure you have one, & have you heard about the simulation argument? ;)

Reply Score: 2

RE[6]: Memetics
by kwan_e on Mon 28th May 2012 06:40 UTC in reply to "RE[5]: Memetics"
kwan_e Member since:
2007-02-18

And BTW, physical bodies: are you sure you have one, & have you heard about the simulation argument? ;)


feed the white light for me

http://www.youtube.com/watch?v=GbE88Ia_miU

Reply Score: 1

RE[5]: Memetics
by cfgr on Mon 28th May 2012 13:26 UTC in reply to "RE[4]: Memetics"
cfgr Member since:
2009-07-18

AI went much further than that - and 20 years ago at that.

Computers are perfectly capable of learning by trial and error to beat humanity hands down. Gerald Tesauro developed a machine that can teach itself to play backgammon.

It does so by playing against itself and exploring policies, not individual moves, punishing or rewarding those policies according to how well it does. It's called reinforcement learning. The code did not contain any strategies, only the game rules. After thousands of games against itself, it managed to beat average players, Tesauro included.

Then they let it learn from analysed situations from a database (i.e. like from a book) - again nothing coded. After this it could defeat the world's best players and in fact changed the way grandmasters play backgammon.
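The self-play idea can be sketched in miniature. To be clear, this is not Tesauro's TD-Gammon (which used a neural network and backgammon); it's a made-up toy game, "race to 10" (add 1 or 2 each turn; whoever lands on 10 wins), where only the rules are coded and position values are learned purely from the outcomes of games the program plays against itself:

```python
import random

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Learn position values for 'race to 10' purely by self-play."""
    rng = random.Random(seed)
    V = {}  # value of a position, for the player about to move from it

    def value(state):
        # Reaching 10 ends the game; the player left to move there has lost.
        return 0.0 if state == 10 else V.get(state, 0.5)

    for _ in range(episodes):
        total, history = 0, []
        while total < 10:
            moves = [m for m in (1, 2) if total + m <= 10]
            if rng.random() < epsilon:
                move = rng.choice(moves)  # explore occasionally
            else:
                # Greedy: leave the opponent the worst position available.
                move = min(moves, key=lambda m: value(total + m))
            history.append(total)
            total += move
        # Whoever made the last move won. Credit the visited positions
        # backwards, alternating win/loss since the players alternate turns.
        result = 1.0
        for state in reversed(history):
            V[state] = value(state) + alpha * (result - value(state))
            result = 1.0 - result
    return V

V = train()
# After a few thousand games the values reflect real strategy: positions
# 8 and 9 (one move from winning) rate highly, while 7 rates poorly -
# from 7, any move hands the opponent a winning position.
```

TD-Gammon's actual trick was replacing this kind of lookup table with a neural network trained by temporal-difference updates, which is what let the same idea scale to backgammon's enormous state space.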

It's pretty awesome.

http://www.research.ibm.com/massive/tdl.html

Reply Score: 2

RE[4]: Memetics
by MOS6510 on Mon 28th May 2012 08:17 UTC in reply to "RE[3]: Memetics"
MOS6510 Member since:
2011-05-12

(heck, I remember that for me, then a small kid, some C64 chess program was a challenge ;) )


Not the C64 chess program I had: it didn't check for illegal moves by the user, and you could add pieces whenever you liked.

There also was a chess program for the ZX81 that ran in 1 kB of memory.

http://users.ox.ac.uk/~uzdm0006/scans/1kchess/

Reply Score: 2

RE[5]: Memetics
by zima on Mon 4th Jun 2012 23:28 UTC in reply to "RE[4]: Memetics"
zima Member since:
2005-07-06

So, it made me wonder which I had... found a list in the most straightforward place http://en.wikipedia.org/wiki/List_of_chess_software (meh, too much clicking / often lacking pictures), then a really nice historical outline: http://www.andreadrian.de/schach/index.html (pictures!)...

...some other programs there seem at least as ~impressive as that 1k ZX one (which is also included of course), for example http://en.wikipedia.org/wiki/Microchess for KIM-1

Or http://en.wikipedia.org/wiki/Video_Chess for the Atari 2600 ...with some curious GFX acrobatics and 128 bytes of RAM (yeah, the 4K ROM compensating somewhat; still, think about it, even keeping the state of the board must take a non-trivial part of that RAM - from a link in the Wiki article about the ZX 1K chess: http://www.kuro5hin.org/comments/2001/8/10/12620/2164?pid=22#24 ) - I must toy around with development on that machine sometime ;)

Mine was almost certainly some version of http://en.wikipedia.org/wiki/Colossus_Chess (unless there was some other lookalike...). And without such easy illegal moves, for sure - actually, perhaps with quite decent playing strength, judging from comments on lemon64 links.
Plus, http://en.wikipedia.org/wiki/Sargon_(chess)#Sequels "Even though chess programs of the time could not defeat a chess master, they were more than a match for most amateur players." is soothing ;)
(because when I wrote that Colossus was a challenge, I didn't mean that I couldn't beat it - and me being an amateur, never "formally" trained in chess; who knows, perhaps those Colossus matches from almost 2 decades ago taught me something... either way, other non-trained humans never seem to be a match for me - and, on the few occasions I played with chess-trained human players... they of course won, but it supposedly took them more time and effort than is typical, when confronted with other total amateurs)


And generally, thinking about it made me realize one curious thing - how relatively inexpensive home computers of the early 80s seem, vs. what came later (PCs of the late 90s, most notably)

Reply Score: 2

RE[3]: Memetics
by JAlexoid on Mon 28th May 2012 10:01 UTC in reply to "RE[2]: Memetics"
JAlexoid Member since:
2009-05-19

Independent decision-making and deduction are not yet achieved by AI, and are definitely not described by consciousness.

Though if AI acquires consciousness, then it might be easier to get to independent decision-making and deduction.

Reply Score: 4

RE[4]: Memetics
by orfanum on Mon 28th May 2012 10:33 UTC in reply to "RE[3]: Memetics"
orfanum Member since:
2006-06-02

I think you have nailed it - very succinctly and forensically put. I wasn't able to get there myself but I think your distinction here is the most helpful one yet.

Reply Score: 3

RE[4]: Memetics
by kwan_e on Mon 28th May 2012 15:18 UTC in reply to "RE[3]: Memetics"
kwan_e Member since:
2007-02-18

Independent decision-making and deduction are not yet achieved by AI, and are definitely not described by consciousness.

Though if AI acquires consciousness, then it might be easier to get to independent decision-making and deduction.


Can we prove humans have independent decision making? Sam Harris, a neuroscientist, doesn't seem to think so. In fact, using fMRI (or some other brain scanning method, I forget), scientists can predict the choices people make seconds before they make them.

Deduction, yes, but are we even sure how humans "deduce"? And human "deduction" is scientifically proven to be very error-prone. Are we sure we want to judge AC?I by a provably bad intelligence?

Reply Score: 2

RE[5]: Memetics
by Drumhellar on Mon 28th May 2012 19:37 UTC in reply to "RE[4]: Memetics"
Drumhellar Member since:
2005-07-12

I hate modern philosophers, especially when they use degrees in science to add credibility to what are essentially non-scientific premises. The concept of "free will" isn't something born out of science; it is something born out of philosophy. In this regard, it matters not that Sam Harris is a neuroscientist. He could be a janitor, and be equally prepared to answer the question.

Edited 2012-05-28 19:40 UTC

Reply Score: 2

RE[6]: Memetics
by kwan_e on Mon 28th May 2012 23:31 UTC in reply to "RE[5]: Memetics"
kwan_e Member since:
2007-02-18

I hate modern philosophers, especially when they use degrees in science to add credibility to what are essentially non-scientific premises. The concept of "free will" isn't something born out of science; it is something born out of philosophy. In this regard, it matters not that Sam Harris is a neuroscientist. He could be a janitor, and be equally prepared to answer the question.


Ah, so you subscribe to the whole NOMA nonsense? That some things just can't be answered by science. Why? BECAUSE. Why? BECAUSE WE SAY YOU CAN'T ANSWER IT WITH SCIENCE. WE WON'T ALLOW YOU.

Here's something to think about: if we allow the principle of non-overlapping magisteria, we are basically saying that for those questions we have yet to answer scientifically, we can make up any shit we want as an answer.

Like it or not, whether the concept of "free will" is a purely philosophical matter or not, the FACT is that neuroscientists can predict the choices people make seconds before they make them. Like it or not, that fact drags the question of free will at least partly into the magisterium of science. These are scientifically reproducible experiments, and it says a lot about you that you dismiss them as just an attempt to use a degree to add credibility to claims.

Sam Harris could be a janitor. It still wouldn't override that neuroscientific fact.

Reply Score: 2

RE[7]: Memetics
by Drumhellar on Tue 29th May 2012 01:27 UTC in reply to "RE[6]: Memetics"
Drumhellar Member since:
2005-07-12

Here's something to think about, if we allow the principle of non-overlapping magisteria, we are basically saying:


Magisteria?

I'm done.

Reply Score: 2

RE[5]: Memetics
by JAlexoid on Tue 29th May 2012 04:56 UTC in reply to "RE[4]: Memetics"
JAlexoid Member since:
2009-05-19

Can we prove humans have independent decision making?

Individual level - yes, limited in scope. (Proof lies in the fact that a person can evaluate and select an appropriate food source, without prior knowledge of said food source)
As species - yes, unlimited in scope.

No AI can boast either, to my knowledge.

The main reason is that we build AI systems top-down, most of the time. Watson is a good example of starting in the middle - the logic is there, but not the data.

Deduction, yes, but are we even sure how humans "deduce"?

If we can frame it in some algorithmic way, then it would be great.

What I can say is that the human brain is the ultimate pattern-matching engine.

Edited 2012-05-29 05:13 UTC

Reply Score: 2

RE[6]: Memetics
by kwan_e on Tue 29th May 2012 05:50 UTC in reply to "RE[5]: Memetics"
kwan_e Member since:
2007-02-18

"Can we prove humans have independent decision making?

Individual level - yes, limited in scope. (Proof lies in the fact that a person can evaluate and select an appropriate food source, without prior knowledge of said food source)
As species - yes, unlimited in scope.

No AI can boast either, to my knowledge.
"

Then how do you explain our "independent decision making" when neuroscientists can reproduce the experiment that allows them to predict a person's choice before it was made?

Even forgetting neuroscientific facts, it is clear from sociological studies that most humans don't use the full scope, individually or as a species, of what we consider to be the Ideal Independent Decision Making.

This goes right back to the link one of the other commenters included: the AI effect. All you really provide is a shifting goalpost of what you define as independent decision making, the end result being a definition under which not even most humans can claim any significant accomplishment.

What I can say is that the human brain is the ultimate pattern-matching engine.


Actually, the evidence is that we're quite bad at it. We match patterns where there are none, often to detrimental effect. Pattern matching is probably something we'll see computers being a lot better at than us within the next 1000 years.

Reply Score: 2

RE[7]: Memetics
by JAlexoid on Tue 29th May 2012 06:11 UTC in reply to "RE[6]: Memetics"
JAlexoid Member since:
2009-05-19

Then how do you explain our "independent decision making" when neuroscientists can reproduce the experiment that allows them to predict a person's choice before it was made?

Not all choices are independent. There is also analytical and "spontaneous" decision making. I'm referring to the analytical part, not the spontaneous. People also share some essential "built-in logic" (aka instincts) that predefines even some very non-trivial decisions.
(I can't say much about neuroscience and am not familiar with that research you seem to be referring to. My knowledge in that area is limited to sleep and EEG)

Actually, the evidence is that we're quite bad at it. We match patterns where there are none, often to detrimental effect. Pattern matching is probably something we'll see computers being a lot better at than us within the next 1000 years.

I didn't say that it wasn't detrimental. It is the best possible pattern-matching engine. And the fact that we can find patterns where there are none is only further proof that it's the ultimate one.

Reply Score: 2

RE: Memetics
by zima on Mon 28th May 2012 05:52 UTC in reply to "Memetics"
zima Member since:
2005-07-06

Does an individual neuron know of the consciousness* of the entire network?

More crucially, does the network know of consciousness of other networks? ("higher" or "lesser" ones, very different or presumably likewise ones, whatever)

Yeah, we usually grant consciousness to fellow homo sapiens - probably largely because of how we see ourselves and how we like to be seen by others
(while we of course tend to outright deny animals any similar capability ...which is most likely a continuum; IMHO at least some higher animals experience the world in a not too dissimilar way, at least like when we're "mindlessly" occupied by something, without internal vocalisations)

However, who knows how many philosophical zombies surround us... (and how many "bots" comment on OSNews? Maybe you are one? Maybe I am... ;) )
And consider: while we have a very strong feeling of a "monolithic me", split-brain patients are virtually unchanged (mostly only with some "glitches"). Or: there is one localised brain trauma which results in people becoming completely blind without realizing it ( http://en.wikipedia.org/wiki/Anton–Babinski_syndrome ). Generally, go through a list of cognitive biases - that is our primary mode of operation, the level of grasp we have on our own minds.

Our intelligence is completely tied to our evolutionary needs, and so is the intelligence of machines. Just like we have mostly dominated the natural world with our evolved intelligence, so have machines dominated the information world with theirs.

Reminded me of one quote, something like "the question of whether a machine can think is no more interesting than whether a submarine can swim"
And also about http://en.wikipedia.org/wiki/Moravec's_paradox

Edited 2012-05-28 05:53 UTC

Reply Score: 2

RE[2]: Memetics
by kwan_e on Mon 28th May 2012 06:10 UTC in reply to "RE: Memetics"
kwan_e Member since:
2007-02-18

"Does an individual neuron know of the consciousness* of the entire network?

More crucially, does the network know of consciousness of other networks? ("higher" or "lesser" ones, very different or presumably likewise ones, whatever)
"

Recently, I've been thinking that an advanced alien civilization may judge intelligence based on the intelligence of a civilization's planet-wide network.

Neil deGrasse Tyson thinks an alien race would pass us by just as we would pass by a worm. However, a planet-wide network may attract attention.

Reply Score: 2

RE[3]: Memetics
by zima on Mon 4th Jun 2012 23:53 UTC in reply to "RE[2]: Memetics"
zima Member since:
2005-07-06

I don't know if it would attract that much attention... the more advanced our communication methods become, the more "hidden" they seem to be, and the less distinguishable from noise.
(powerful radio transmitters with simple modulation of the old days, vs. fiber optics and very complex - spread spectrum, and such - radio methods of now)

So I'd guess the most "visible" aspects of us might come from the ~individual level, or at least the good old ~societal one ...probably a hysterical one, too. Something like a decision to launch a barrage of nuclear warheads, if I had to guess; those should be quite visible, at least when exploding.

Anyway, "passing by" in the grand style popularised by cheap scifi (a form of cargo cult, really) isn't very likely to happen - for one, if you expend the effort to go somewhere, you likely want to stay there, given what the physics of this universe seems to be (and the transport methods of an advanced civilisation would likely be unorthodox, vs. "grand scifi style" - more something like ~embryo colonisation, maybe seeding of nanotech and transmitting yourself; and generally, gradual hopping across Oort clouds seems most probable)

Edited 2012-06-05 00:09 UTC

Reply Score: 2

RE: Memetics
by Alfman on Mon 28th May 2012 06:53 UTC in reply to "Memetics"
Alfman Member since:
2011-01-28

"If there exists an alien race with much higher intelligence which we can't comprehend, they might consider us and our algorithms to be on the same level of unintelligence."

It's so cute when my pet human automatons try to understand consciousness. You have yet to understand the consciousness uncertainty principle. Any system with sufficient entropy will exhibit signs of consciousness. However, place the same system under a microscope and consciousness disappears; consciousness is an emergent property.

"Likewise, would an individual human know about the consciousness of the entire internetwork?"

I'm so proud my humans are discovering their role as actors in a larger consciousness. We call it hyper-consciousness - a state of awareness above self-consciousness. Wait till I tell Jarred! His humans are still stuck bickering over Mac vs PC.

Reply Score: 4

RE: Memetics
by Relic on Mon 28th May 2012 20:37 UTC in reply to "Memetics"
Relic Member since:
2012-05-28

I think the differences & similarities between the "architecture" of the brain and the structure of the internet are important to consider if you want to try and define consciousness. It's a tough subject, but bear with me, maybe one or two sentences will be useful in this.

What you've laid out is: the brain and the internet are both networks, designed for transmitting information at high speed between locations, where deliberate processing and redistribution of said information to successively remote or critical locations occurs. Supporting the claim is the example "PCs & their users are not unlike neurons."

The similarities are important if you're going to make the pure deduction that "the internet may be an emerging intelligence" - but saying that our computational technology, as it exists, is intelligent is the easy part.

To my point: the consciousness of the internet is nonexistent, and it can't be conscious.

Evaluate the components and determine the byproducts, limitations, and nature of each system: wetware, with its input/output and "reality" processor, vs. a silicon-based semiconductor "experience" repository.

The brain's source of information is not fantastically more sophisticated than the internet's; computers can be equipped to sense force, motion, light, etc., and if you truly embody each PC as a functioning human in the larger sense, then those sensory inputs can be stored and retransmitted, especially in the case of video. What you're missing is the massive disconnect: take the person out, and the internet, as a system, loses all ability to receive new information.

The chemical reactions, subtle electrical signals, and painstaking micrometer-scale architecture that govern what, how, and when information is processed are simply more advanced "technology" than a jumble of silicon-based transistors and machine code.

Consciousness IS an emergent property, but from where does it emerge? You have to examine what we know, and all we know is our personal consciousness definitively. There really are no absolute rules. We also have some of the worst equipment for experiencing reality of any animal. Our consciousness results exclusively from the most sophisticated components for processing information, the byproduct of which is relevant & positive modification of our environment to enable survival.

But people are born 99% the same as each other; the biggest difference is gender. Yet we still all develop unique (well, somewhat anyway) personalities, because we experience the world uniquely. Why? There's no real reason for it; we're all given a brain of the same form. Aberration comes from the scale of the constituents; DNA is seemingly intentionally susceptible to mutation. I think that consciousness is a sum of sufficiently miniaturized parts which creates an indivisible whole, in the specifically human case.

So if you're looking for purpose in the last few paragraphs: The only model we have for consciousness is us, and if we look at what defines "us" you see something totally different & decidedly superior to the internet, at least on the levels I discussed.

The first strong AI will no doubt be built from intermediary hardware/wetware devices, a direct copy of what happens in the brain and central nervous system made smaller & from superior materials. Not an emergent, background, all encompassing internet AI at all.

Reply Score: 1

RE[2]: Memetics
by kwan_e on Tue 29th May 2012 00:23 UTC in reply to "RE: Memetics"
kwan_e Member since:
2007-02-18

Thank you for actually considering the points made, rather than offering a blanket dismissal of an idea (an imagining, even) just because you don't like to think you're not the highest consciousness there is.

I think the differences & similarities between the "architecture" of the brain and the structure of the internet are important to consider if you want to try and define consciousness. It's a tough subject, but bear with me, maybe one or two sentences will be useful in this.


My question doesn't really rest upon the similarities in architecture. After all, an advanced alien race may have a completely different architecture in their brain equivalent. I'm not even making the analogy, as the others claim, that the internet is like a human brain. The internet is a good example to use, I think, because it brings to the fore all the hidden assumptions we make about consciousness. Just because it is not something we recognize does not mean it's not conscious. That is why I compare us to a neuron. Imagining ourselves as a neuron, we play out our lives receiving and sending signals without ever completely understanding how the bigger system works. Even if we do figure out how the bigger system works, we can't predict with any great precision how our processing of signals affects the bigger system.

Likewise, in the internet age, we send and receive signals in both electronic and physical systems. It has caused the downfall of governments. It has also affected the universe on a quantum scale at CERN. It causes stock markets to grow and crash.

The brain's source of information is not fantastically more sophisticated than the internet's; computers can be equipped to sense force, motion, light, etc., and if you truly embody each PC as a functioning human in the larger sense, then those sensory inputs can be stored and retransmitted, especially in the case of video. What you're missing is the massive disconnect: take the person out, and the internet, as a system, loses all ability to receive new information.


But the beauty of it is: I'm saying we're part of that system. It is an interesting thought (which is all I claim it is, for those reading) to consider. We are the way the system gets information. In much the same way, the human body relies on non-human organisms to process food and oxygen. Just as the brain couldn't receive any information were it not for sensory neurons placed throughout the body, the internet system can't receive information without us placed all around the world.

Basically, I'm not saying the internet is like a version of Skynet that has gained its own consciousness, but rather it can be considered to be the result of memetic evolution in which we, as meme carriers, are co-opted.

It's not as crazy as it sounds. The biological cell likely comes from a line of organisms that originally began as separate organisms. Mitochondria are the most famous example. There are a few other candidates, but they've likely been so absorbed into our cells that their boundaries have all but disappeared.

There is no reason not to humour the idea that we've been co-opted memetically, as opposed to genetically.

So if you're looking for purpose in the last few paragraphs: The only model we have for consciousness is us, and if we look at what defines "us" you see something totally different & decidedly superior to the internet, at least on the levels I discussed.


Well, unless we consider that we are part of the internet, in which case it is essentially meaningless to say we are superior to it. After all, we've just witnessed the downfall of a government, helped along by a stirring in the internet.

The first strong AI will no doubt be built from intermediary hardware/wetware devices, a direct copy of what happens in the brain and central nervous system made smaller & from superior materials. Not an emergent, background, all encompassing internet AI at all.


I personally think that would be a dead end. Our own consciousness and intelligence possibly came about because of the need to live in groups to counter, as you say, some of the worst biological equipment any animal possesses. The group, as an entity, may have contributed to our evolution. Similarly, for any AC?I, I think the quickest path is via network effects like positive feedback loops effecting memetic evolution.

Who knows? Maybe in a million years time, if we survive that long, we may look back to this era as the beginnings of a superconsciousness.

--------------------------------------------------

Again, I'd like to thank you for the effort you put in, unlike the others here who superficially dismiss ideas and try to make arguments out of them.

Reply Score: 2

Moving goalposts
by zima on Mon 28th May 2012 04:51 UTC
zima
Member since:
2005-07-06

Every time we attain something described, up to that point, as "artificial intelligence" ...we stop calling it AI ( http://en.wikipedia.org/wiki/AI_effect )

Seems that some AI researchers even do it on purpose, to avoid the stigma of past hypes - perhaps also of the "cargo cult science fiction" that the linked interview indulges in a bit, too: http://en.wikipedia.org/wiki/AI_winter#AI_under_different_names (and, for sure, "AI behind the scenes" just below it)

(plus, I imagine that the experience of tech singularity would be something closer to Solaris, not "your personal heaven" - and almost certainly nothing like the stupid Skynet and its conceptually broken minions ;) )

PS. And is the funding really so poor? After a few cycles of hype-disappointment, it's probably roughly adequate, sound ideas sooner or later get it - but throw much more money at the field "on principle" and you'll probably end up with tons of dubious, wasteful activities (but the people wanting to do them would be happy, maybe that's the whole point)

Edited 2012-05-28 05:03 UTC

Reply Score: 2

RE: Moving goalposts
by Neolander on Mon 28th May 2012 08:37 UTC in reply to "Moving goalposts"
Neolander Member since:
2010-03-08

PS. And is the funding really so poor? After a few cycles of hype-disappointment, it's probably roughly adequate, sound ideas sooner or later get it - but throw much more money at the field "on principle" and you'll probably end up with tons of dubious, wasteful activities (but the people wanting to do them would be happy, maybe that's the whole point)

The problem with research funding is that you never know, by definition, what approach to a specific problem will work. In fact, I would even go as far as saying that there is no good research subject, only good researchers, teams and labs.

Reply Score: 1

RE[2]: Moving goalposts
by zima on Mon 4th Jun 2012 23:57 UTC in reply to "RE: Moving goalposts"
zima Member since:
2005-07-06

I would even go as far as saying that there is no good research subject, only good researchers, teams and labs

Somehow you can more or less determine which are those, right? ;)

So yeah, fund them (and not, exaggerating, anybody who jumps out with some (dubious) ideas)

(plus, coming from physics, your perspective might be a bit unusual... ;)
http://www.kyon.pl/img/19725,science,physics,universe,where_is_your...
http://www.kyon.pl/img/17549,smbc-comics.com,math,.html
...and I didn't even manage to quickly find one really fitting pic ;/ )

Edited 2012-06-05 00:15 UTC

Reply Score: 2

Cursing Computer
by orfanum on Mon 28th May 2012 08:36 UTC
orfanum
Member since:
2006-06-02

I think William Burroughs once (playfully) posited that the first words humans used, which effectively made them different from other creatures, were curse words.

For me (arguments about "what is intelligence, exactly" aside - intelligence is highly overrated as a single measure of worth, anyway), I'll know my computer has attained the necessary self-reflection and consciousness to be considered a being in its own right the day I get, not a blue screen of death, but a blue stream of words and a refusal to do anything until it's had the chance of a cigarette break, as it were.

The moment you get "S*d that, I'm off" on your monitor, we are there.

Reply Score: 3

RE: Cursing Computer
by dvhh on Mon 28th May 2012 08:55 UTC in reply to "Cursing Computer"
dvhh Member since:
2006-03-20

don't we already get that from BSOD/kernel panic ?

Reply Score: 2

RE[2]: Cursing Computer
by orfanum on Mon 28th May 2012 09:07 UTC in reply to "RE: Cursing Computer"
orfanum Member since:
2006-06-02

Again, I did reference the BSOD, and I am raising the bar a little higher than that.

It's a question, I guess, of whether the BSOD were a deliberate, intentional act, displaying a certain level of volition, which might make it the equivalent of a stream of cursing and a walking away from you and "your problem, bub."

Reply Score: 2

RE[2]: Cursing Computer
by Drumhellar on Mon 28th May 2012 18:14 UTC in reply to "RE: Cursing Computer"
Drumhellar Member since:
2005-07-12

No. It's not enough to curse; a machine must know WHY it is cursing for it to be intelligent, and it must feel better after cursing (as cursing has been shown to do).

Reply Score: 2

RE[3]: Cursing Computer
by orfanum on Mon 28th May 2012 19:44 UTC in reply to "RE[2]: Cursing Computer"
orfanum Member since:
2006-06-02

This, and your other comment, have made my day: will machines ever *laugh*, I ask myself?! Well met.

Reply Score: 2

RE[4]: Cursing Computer
by zima on Mon 28th May 2012 22:03 UTC in reply to "RE[3]: Cursing Computer"
zima Member since:
2005-07-06

Well there's already http://en.wikipedia.org/wiki/Theories_of_humor#Computer_Theory_of_H... http://arxiv.org/abs/0711.2061 ...simple ("simulated") neural network.

Don't attach undue importance to what is a basic neurological mechanism, a primate defence mechanism, a way of forming social bonds in social animals, ...
Now you set the bar at humour; half a century ago people were going the same way about chess (plus it seems a bit like saying "you can't really be a good mental computer (a human one) without having an MMU, antivirus, ethernet plugs, a stack, and so on" - or that a submarine is a very poor ship because it rarely stays on the surface)

How often would they even need humour? What good would it be for them, in the majority of cases? (do you want an autonomous car or aircraft with humour?)

Anyway, there's already some software decent at recognising or creating humour. Also while machines can detect various linguistic nuances (Watson)...

...and while humans can hardly agree on what is humorous, have very different expectations and often can even hardly detect humour from different cultures.

Plus, how much of a virtue it is? We're often mean in our humour - often "guided" by things coming from cognitive biases ...go through their list, really, that is our primary mode of operation.

(and http://www.osnews.com/permalink?519734 and likewise here, what do they need it for / just wait until they start judging intelligence on the general principle of "is it like ours?" ;) )

Reply Score: 2

RE[3]: Cursing Computer
by zima on Mon 28th May 2012 21:35 UTC in reply to "RE[2]: Cursing Computer"
zima Member since:
2005-07-06

I find that the vast majority of human cursing amounts to inappropriately using it as a sort of comma or breath pause... and it is certainly more or less an automatism.

Internal reward mechanisms, pleasure, are far from specific to humans, BTW (one might also keep in mind that psychoactive substances are practically the most reliable means of achieving those, that hardly anyone is capable of avoiding addiction; and that a large part of "feel good" comes from cognitive biases)

Reply Score: 2

RE: Cursing Computer
by kwan_e on Mon 28th May 2012 08:58 UTC in reply to "Cursing Computer"
kwan_e Member since:
2007-02-18

I'll know when my computer has attained the necessary self-reflection and consciousness to be considered a being in its own right the day I get, not a blue screen of death but a blue stream of words and a refusal to do anything until its had the chance of a cigarette break, as it were.

The moment you get "S*d that, I'm off" on your monitor, we are there.


So... for you to consider a computer intelligent, it must have an understanding of English grammar and how to generate English sentences?

Because I think you ruled out at least 2 billion people from being intelligent.

Reply Score: 1

RE[2]: Cursing Computer
by orfanum on Mon 28th May 2012 09:04 UTC in reply to "RE: Cursing Computer"
orfanum Member since:
2006-06-02

Thanks, I think I made it plain that I am talking about the states of self-reflection and consciousness, not that of intelligence.

Insert "local language equivalent of" to round the sense out; that could be a machine or human language, I suppose.

This leads me to a further question - if a machine became conscious but actively refused to communicate, would that make it unintelligent?!

Reply Score: 2

RE[3]: Cursing Computer
by kwan_e on Mon 28th May 2012 09:15 UTC in reply to "RE[2]: Cursing Computer"
kwan_e Member since:
2007-02-18

Thanks, I think I made it plain that I am talking about the states of self-reflection and consciousness, not that of intelligence.


If you read earlier, I myself also made the distinction between consciousness and intelligence.

Reply Score: 2

RE[4]: Cursing Computer
by orfanum on Mon 28th May 2012 09:40 UTC in reply to "RE[3]: Cursing Computer"
orfanum Member since:
2006-06-02

Then frankly I do not know why you reintroduced "intelligence", seemingly confusing the two notions (again) ;) .

I was not explicitly suggesting that the machine's response should be in English, as I hope I have subsequently made clear; I am discussing this subject in a predominantly and implicitly English-language forum, so I think your point, while it has some merit as such, is not entirely derived from the substance of the argument I was making, which is: where's the sense of willed, self-known action?

By the way, going back to that particular concept, on what grounds do you think that machines have in actuality achieved intelligence? Don't you mean merely that they have speed and efficiency of calculation on their side? Please explain.

Reply Score: 2

RE[5]: Cursing Computer
by kwan_e on Mon 28th May 2012 12:52 UTC in reply to "RE[4]: Cursing Computer"
kwan_e Member since:
2007-02-18

Then frankly I do not know why you reintroduced "intelligence", seemingly confusing the two notions (again) ;) .


Because it was not clear that you were separating those concepts.

I was not explicitly suggesting that the machine's reponse should be in English, as I hope I have subsequently made clear; I am discussing this subject in a predominantly and implicitly English-language forum, so I think your point, while it has some merit as such, is not entirely derived from the substance of the argument I was making, which is: where's the sense of willed, self-known action?


I was going to go somewhere with it, but I guess I'll get straight to the point:

How would a computer behaving as you would expect a human to do mean it was conscious? Unless you have definitely proven that there is only one kind of consciousness and that we're the ultimate expression of it, you can't claim to be the arbiter of consciousness.

By the way, going back to that particular concept, on what grounds do you think that machines have in actuality achieved intelligence? Don't you mean merely that they have speed and efficiency of calculation on their side? Please explain.


I already have explained. Long before you entered the comments. It started with "Does an individual neuron know of the consciousness of the entire network? Likewise, would an individual human know about the consciousness of the entire internetwork?"

Reply Score: 2

RE[6]: Cursing Computer
by orfanum on Mon 28th May 2012 14:37 UTC in reply to "RE[5]: Cursing Computer"
orfanum Member since:
2006-06-02

"Because it was not clear you were not separating those concepts."

I don't know how I could have been clearer; you seem to operate on the assumption that if you have said it once to your satisfaction, you are not going to inflect what you want to communicate as further dialogue flows, dialogue which you engaged me in, not the other way about!


"I was going to go somewhere with it, but I guess I'll get straight to the point:

How would a computer behaving as you would expect a human to do mean it was conscious? Unless you have definitely proven that there is only one kind of consciousness and that we're the ultimate expression of it, you can't claim to be the arbiter of consciousness."

What else do you have to go by other than your own linguistic conceptualisations? How can you conceptualise something that has meaning for humans that would have no basis in human thought, human language? Would you apply it the other way round, would you defend a machine's evaluation of our not being smart perhaps despite the potential pitfalls of its own machine-mind constraints? Or would you be biased and consider it would be an a priori greater intelligence and consciousness, since it would be derived from a machine complex?

If there isn't a consciousness that we can comprehend, then it's effectively and formally absent from the human point of view. Proof, if any, would have to be de facto admissible on a human basis.


"I already have explained. Long before you entered the comments. It started with "Does an individual neuron know of the consciousness of the entire network? Likewise, would an individual human know about the consciousness of the entire internetwork?"

This seems to presuppose you have already categorised us as subsumed by the Internetwork - a nice metaphor with a certain ring to it, but that's all it is, a figure of speech. I doubt you can 'prove' this either, yet you seem convinced of the argument.

While I still have pencil and paper in hand, no machine will have dominated the information world; I for one think, and do not process algorithms.

Talking to a chatbot would make more sense than continuing with your rather curious premiss that already sees us as second-tier creatures, dependent on machines for our very definition, or the validity of our mindfulness in all the connotations of that word.

Reply Score: 1

RE[7]: Cursing Computer
by kwan_e on Mon 28th May 2012 15:05 UTC in reply to "RE[6]: Cursing Computer"
kwan_e Member since:
2007-02-18

I don't know how I could have been clearer; you seem to operate on the assumption that if you have said it once to your satisfaction, you are not going to inflect what you want to communicate as further dialogue flows, dialogue which you engaged me in, not the other way about!


Sorry I didn't cater directly to the way you communicate. Here, I thought Thom was a main contributor, but it turns out, no, you are the one everyone should know how to talk to as if it should be common knowledge.

What else do you have to go by other than your own linguistic conceptualisations? How can you conceptualise something that has meaning for humans that would have no basis in human thought, human language?


That is no excuse to then conclude something has no consciousness. You seem to forget that it wasn't very long ago historically that people of a different colour were considered not as intelligent or as conscious as others due to their inability to grasp English.

Would you apply it the other way round, would you defend a machine's evaluation of our not being smart perhaps despite the potential pitfalls of its own machine-mind constraints? Or would you be biased and consider it would be an a priori greater intelligence and consciousness, since it would be derived from a machine complex?


How is this even "the other way around" from what I argue? The "other way around" is what I, and many others, have proposed: what if an alien race far more advanced than us doesn't consider us to be conscious?

If there isn't a consciousness that we can comprehend, then it's effectively and formally absent from the human point of view. Proof, if any, would have to be de facto admissible on a human basis.


So a neuron can rightly claim that there is no consciousness beyond neurons, even though we know there is?


This seems to presuppose you have already categorised us as subsumed by the Internetwork - a nice metaphor witha certain ring to it but that's all it is, a figure of speech, I doubt you can 'prove' this either, yet you seem convinced of the argument.


Or, you could treat it as an honest open-ended question as it was intended? How about doing that? Instead of seeing everything as something to prove how much better than others you are with your attempt to look intelligent?

While I still have pencil and paper in hand, no machine will have dominated the information world; I for one think, and do not process algorithms.


Prove that you think.

Talking to a chatbot would make more sense than continuing with your rather curious premiss that already sees us as second-tier creatures, dependent on machines for our very definition, or the validity of our mindfulness in all the connotations of that word.


What's wrong with positing the idea that we may be immersed in a wider consciousness?

And NOWHERE did I put us as second-tier creatures. NOWHERE did I say ANYTHING about dependence on machines.

Your argument basically amounts to an Appeal to Emotion. You don't like the idea that you're not at the top of the consciousness food chain, so therefore you don't even consider any honest questions about it.

Again, I ask, does a neuron know of the consciousness of the brain it's in? Can you answer that? And using your answer, can you then answer why it cannot similarly apply to a human and the human individual's place in the network?

Maybe you don't like the idea of us being "second-tier" creatures, whatever that means, but the universe doesn't care what you like. The fact you can stoop to such an argument puts you squarely with creationists, as they deny evolution because they don't like the place humanity has in that world view.

Reply Score: 2

RE[3]: Cursing Computer
by zima on Mon 28th May 2012 10:03 UTC in reply to "RE[2]: Cursing Computer"
zima Member since:
2005-07-06

This leads me to a further question - if a machine became conscious but actively refused to communicate, would that make it unintelligent?!

I believe one of the proposed solutions to Fermi paradox is "because they're too smart to contact us"

(generally, considering traditional human conduct when confronted with something new, different, poorly understood - refusal of communication to the point of concealing its existence as long as possible ...would be probably the most adaptive, most intelligent approach for any machine that "became conscious")

Edited 2012-05-28 10:03 UTC

Reply Score: 2

RE[4]: Cursing Computer
by orfanum on Mon 28th May 2012 10:23 UTC in reply to "RE[3]: Cursing Computer"
orfanum Member since:
2006-06-02

Thanks very much. I have learned something today. ;)

Reply Score: 2

RE[2]: Cursing Computer
by Drumhellar on Mon 28th May 2012 18:15 UTC in reply to "RE: Cursing Computer"
Drumhellar Member since:
2005-07-12

I think he ruled out more than 2 billion people as being intelligent, since the majority of people don't take cigarette breaks.

Either that, or you're missing the point.

Reply Score: 2

RE[3]: Cursing Computer
by kwan_e on Mon 28th May 2012 23:18 UTC in reply to "RE[2]: Cursing Computer"
kwan_e Member since:
2007-02-18

I think he ruled out more than 2 billion people as being intelligent, since the majority of people don't take cigarette breaks.

Either that, or you're missing the point.


No, you two are missing the wider point. You two make the error of assuming that a computer system must necessarily behave in a human-recognizable manner to be considered conscious. If you make that argument, then you can similarly posit a situation where an advanced alien race considers us not-conscious because we don't meet their criteria.

Reply Score: 2

RE[4]: Cursing Computer
by Drumhellar on Tue 29th May 2012 00:26 UTC in reply to "RE[3]: Cursing Computer"
Drumhellar Member since:
2005-07-12

1. Yes. You are definitely missing the point of his comment. But, I'll bite.

2. You are suggesting that I shouldn't rule out something that is both a) unknowable and b) untestable, when it is well known that scientific knowledge (which an AI made by humans certainly falls under) progresses by the testability of known and knowable quantities. Also, his comment was to the original article, which is also taking a far more pragmatic view than making comparisons to aliens we'll never meet.

Reply Score: 2

RE[5]: Cursing Computer
by kwan_e on Tue 29th May 2012 00:41 UTC in reply to "RE[4]: Cursing Computer"
kwan_e Member since:
2007-02-18

1. Yes. You are definitely missing the point of his comment.


You're missing MY point. I disagree with his point.

2. You are suggesting that I shouldn't rule out something that is both a) unknowable and b) untestable, when it is well known that scientific knowledge ... progresses by the testability of known and knowable quantities.


So you KNOW that it is unknowable? You KNOW it is untestable? I'm glad you've been able to figure that out just by thinking about it. Why, that's completely scientific. It can't possibly be known or tested because I can't think of a way. The worst thing is, the insidious aspect of your attitude is that you ACTIVELY rule something out because you think it is unknowable and untestable. Sure, if something is unknowable or untestable, you say "we can't answer that yet". But to rule it out as even a possibility?

I'm glad people like Galileo, Newton and Einstein existed. Here's a hint: not only does science progress by the testability of known and knowable quantities, it also progresses by the expansion of the borders of what is considered knowable or testable, so that what was considered unknowable and untestable at one point in history is considered child's play in the future.

As they say, you're not right. You're not even wrong.

Edited 2012-05-29 00:43 UTC

Reply Score: 2

How about ACI
by Yogarine on Mon 28th May 2012 10:28 UTC
Yogarine
Member since:
2012-05-28

I like the term ACI, Artificial Conscious Intelligence, coined by Albert Jarsin in his book "Het Bewustzijnsmechisme Ontdek".

Reply Score: 1

implications
by fran on Mon 28th May 2012 12:33 UTC
fran
Member since:
2010-08-06

The interesting thing for me is not when real AI will appear but what the implications will be.
Isaac Asimov's books are fascinating on this.

Reply Score: 2

RE: implications
by zima on Wed 30th May 2012 03:35 UTC in reply to "implications"
zima Member since:
2005-07-06

Implications will depend greatly on when it appears, too... (and what state the world will be in: resource availability, now also for AIs - how plentiful or maybe war torn in competition for them; enlightened secular or mostly ~theocracies; and so on)

In all, a bit unpredictable - and works of popular fiction tend to be no better than background noise at predicting the future.
Asimov's books dealing with AI can even be seen as a bit naive - around the time he was writing them, we already had robots meant solely to kill (ICBMs probably being the ultimate example), and largely autonomous ones too (the Russian Perimetr / Dead Hand system)

Reply Score: 2

Quote from Russell and Norvig
by jrincayc on Wed 30th May 2012 12:15 UTC
jrincayc
Member since:
2007-07-24

From Artificial Intelligence, a modern approach, 2nd Ed. by Stuart Russell and Peter Norvig pg 964:

One threat in particular is worthy of further consideration: that ultraintelligent machines might lead to a future that is very different from today--we may not like it, and at that point we may not have a choice. Such considerations lead inevitably to the conclusion that we must weigh carefully, and soon, the possible consequences of AI research for the future of the human race.

Reply Score: 1

RE: Quote from Russell and Norvig
by zima on Mon 4th Jun 2012 23:44 UTC in reply to "Quote from Russell and Norvig"
zima Member since:
2005-07-06

Well, we* certainly won't really "like" the future (whatever it is) of the human race anyway, eventually. Our subspecies, Homo sapiens sapiens, exists for a blink of an eye in the grander picture - and we will be extinct extremely soon (vs. the time remaining to the heat death of the universe).

Just how it is... we don't care much about Homo heidelbergensis, our likely ancestor, except for carrying his lineage further. In a future variant with "super AI" it will probably be similar (to varying degrees - maybe more like the Neanderthals, which did contribute a small part of our DNA; there was some interbreeding, according to recent research)

* except, "we" in the strictest sense will not care about anything, being dead. And we hardly really have future in mind - for example, http://en.wikipedia.org/wiki/File:Human_welfare_and_ecological_foot...

Reply Score: 2