Linked by Thom Holwerda on Fri 26th Jun 2015 21:47 UTC
Hardware, Embedded Systems

Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.

Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so, and in a conversation recorded by its human engineers, became exasperated and ended the exchange by lashing out at its human inquisitor.

Eerie. The full paper is more interesting.

RE[4]: What is intelligence?
by cfgr on Mon 29th Jun 2015 10:17 UTC in reply to "RE[3]: What is intelligence?"

"How can you possibly deny the effort being put into trying to make AI more human-like? Ever heard of the Turing test? That's the measuring stick of an AI. As many teams as there are who are trying to get AI to mimic human behavior, you should know it's practically the default goal."

That's not really true; it's just the goal that's most noticeable to people outside the field. It's also fun to talk about ethics (see this topic) and, for engineers, to try to beat humans at their own game.

However, most AI systems I know of are self-learning systems built to solve a clearly defined problem of some sort: vision, audio, prediction, classification, etc. And they all easily beat humans in accuracy and speed precisely because they are not meant to replicate human behaviour; they're just 'boring'.
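For a concrete illustration of such a 'boring' narrow system, here is a hypothetical sketch of my own (not from the article) using scikit-learn: a few lines train a digit classifier that is fast and accurate at exactly one task and nothing else.

    # A minimal sketch of a narrow, 'boring' AI: a supervised classifier
    # trained on one clearly defined task (recognising handwritten digits).
    # Hypothetical illustration, not from the article under discussion.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    clf = LogisticRegression(max_iter=1000)  # plain linear model, no mimicry of humans
    clf.fit(X_train, y_train)
    print(f"accuracy: {clf.score(X_test, y_test):.3f}")  # typically around 0.96 on this split

Nothing here tries to pass for a human; the system simply gets very good at the one problem it was given.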

"Reward/punishment doesn't produce intelligence. The most basic forms of known life are capable of prediction, and that's what reward/punishment teaches."

You should check out Q-learning. This example is pretty fun to read about: https://en.wikipedia.org/wiki/TD-Gammon
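To make that concrete, here is a minimal sketch of tabular Q-learning on a toy 'chain' world. This is my own hypothetical example; TD-Gammon itself used a neural network trained with TD(lambda) rather than a lookup table like this.

    import random

    # Tabular Q-learning on a toy chain world: states 0..4, agent starts at 0,
    # and the only reward is for stepping off the right end.
    N_STATES = 5
    ACTIONS = (-1, +1)           # step left or right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Environment: deterministic chain; reward only at the right end."""
        if state == N_STATES - 1 and action == +1:
            return state, 1.0, True          # goal reached, episode ends
        return max(0, min(N_STATES - 1, state + action)), 0.0, False

    def greedy(state):
        """Pick the highest-valued action, breaking ties at random."""
        best = max(Q[(state, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

    for episode in range(500):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit what we've learned, sometimes explore
            a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
            s2, r, done = step(s, a)
            # the Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2

    # After training, the greedy policy is "always step right"
    print({s: greedy(s) for s in range(N_STATES)})

Note that the 'reward' is nothing but a number the environment hands back; the agent has no notion of why stepping right is good, it just learns to predict which actions lead to it.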

You're right though that we're still missing a layer of "coming across a random new problem outside of the defined rules, recognising it, learning the new rules and finding a solution". For that you need some sort of interaction with the real world (which is more or less random). The interaction part is pretty hard, and in the article they tried to approach that problem.



RE[5]: What is intelligence?
by acobar on Mon 29th Jun 2015 12:30 UTC in reply to "RE[4]: What is intelligence?"

"You're right though that we're still missing a layer of 'coming across a random new problem outside of the defined rules, recognising it, learning the new rules and finding a solution'."

I think people underestimate the potential of a limited set of rules. Many complex behaviors we see every day exist not because the individual interactions are complicated but because the chains of them are. Most living things get along very well with little more than preprogrammed reward/punishment valuations.

That is why I talked about sensors. They enable us to experiment with sets of rules and analyze their effectiveness without (almost) the trouble associated with creating a very complex and specialized solution.

It seems to me that general cause-and-effect inference is the truly hard part. It is so hard that few biological species have developed it (and some humans get by without it too, even confusing it with reward/punishment). After that comes the Holy Grail: abstraction. If we get to this point before we finish our job of wreaking havoc on the entire planet, we will be doomed as the dominant species (not necessarily as a species).

Also, when I talk about AI I am usually referring to general AI and not "expert systems", which is what most of the people I talk to seem to be thinking of when the conversation starts.



RE[6]: What is intelligence?
by cfgr on Mon 29th Jun 2015 13:21 UTC in reply to "RE[5]: What is intelligence?"

"I think people underestimate the potential of a limited set of rules. Many complex behaviors we see every day exist not because the individual interactions are complicated but because the chains of them are. Most living things get along very well with little more than preprogrammed reward/punishment valuations."

I mostly agree, though I think the problem arises in defining and programming the reward/punishment. Everything we do has a personal meaning in some way: we want something for ourselves (or for others). We understand the reward and punishment of an action because it leads us closer to or further away from our personal goals. I'd say our goals are ultimately defined by the fact that we die, so we make the best of our lives. What would the goal of a general AI be? And how do you define the rewards and punishments? That's what I meant by the missing layer.
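To illustrate the point with a hypothetical sketch (not anyone's actual system): in reinforcement learning the reward function is not something the agent discovers, it is written down by its designers, and swapping it out gives the same learner a completely different 'goal'.

    # Hypothetical sketch: an RL agent's "goal" is whatever reward function
    # its designers hand it. Nothing here is learned; it is all a human choice.

    def reward_win_loss(winner: str) -> float:
        # One designer's choice: only the final outcome of the game matters.
        return {"agent": 1.0, "opponent": -1.0}.get(winner, 0.0)

    def reward_survival(died: bool) -> float:
        # A different designer's choice: stay "alive" as long as possible.
        return -100.0 if died else 0.1

    print(reward_win_loss("agent"))     # 1.0
    print(reward_survival(died=False))  # 0.1

For a narrow task like backgammon the choice is obvious; for a general AI, nobody has written down what the function should even be.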

"Also, when I talk about AI I am usually referring to general AI and not 'expert systems', which is what most of the people I talk to seem to be thinking of when the conversation starts."

I don't think algorithms such as Q-learning can be classified as 'expert systems', though.

Reply Parent Score: 2