Linked by Thom Holwerda on Fri 26th Jun 2015 21:47 UTC
Hardware, Embedded Systems

Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.

Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so, and in a conversation recorded by its human engineers, became exasperated and ended the conversation by lashing out at its human inquisitor.

Eerie. The full paper is more interesting.

RE[9]: What is intelligence ?
by cfgr on Tue 30th Jun 2015 09:28 UTC in reply to "RE[8]: What is intelligence ?"
cfgr Member since:
2009-07-18

A true AI would not need to be explicitly programmed. It would learn from others and more importantly, experience.

What are the inherent rewards/punishments from learning from others? Or is it because we programmed it? You only get experience when you do something. To do something, you need a goal. Humans have a goal: to survive and to enjoy life as much as possible before death. External factors force this goal upon us.

This is all a bit philosophical, though; in practice we develop an AI with a specific set of goals and go from there. Basically, we are the external factor.
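To make "we are the external factor" concrete, here is a minimal, hypothetical sketch (not from the paper, and the toy world and numbers are made up): a standard tabular Q-learning agent whose only "goal" is the reward function its programmer hand-wrote.

```python
# Minimal sketch (hypothetical): the agent's "goal" is whatever reward
# function its programmers supply -- the external factor is us.
import random

N_STATES = 5            # toy world: positions 0..4, target is position 4
ACTIONS = [-1, +1]      # step left or right

def reward(state):
    """Hand-written by the programmer: this *is* the agent's goal."""
    return 1.0 if state == N_STATES - 1 else 0.0

# Tabular Q-learning: all "experience" is filtered through the chosen reward.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        r = reward(next_state)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        state = next_state

# Greedy policy per position: the behaviour that emerges from *our* reward choice.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

Swap in a different reward function and the same learning code pursues a different "goal" entirely, which is the point: the goal is imposed from outside.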

Reply Parent Score: 2

RE[10]: What is intelligence ?
by gotocaca on Tue 30th Jun 2015 11:59 in reply to "RE[9]: What is intelligence ?"
gotocaca Member since:
2014-02-22

I totally agree that we are the external factor. As a matter of fact, some automated chat bots ARE considered true humans, NOT AI. As a matter of evolution in this field, there is one science: cybernetics, or how to mimic life with machines. If AI were some kind of poor baby, it would take as much time as it needs to learn from us.


AI is part of our machine power, part of our imagination. Who would suggest our baby is more like a monkey than a very young human? That is what we were sure about babies NOT so long ago.

A self could be NOT something, and have two different references as a minimum start.

A self could be something NOT so much pre-defined, but something you could interpret as a person-to-person reaction, every day.

What would most define a self would be the person that pushes the button, NOT the button. With personality, which we can define with words, with something you can call a goal or an achievement, or a TOMORROW. Someone that reacts in order to be something TOMORROW. And that TOMORROW depends on what it could have learned from us. From a baby's point of view, all of that has to be discovered first. They give it the things to know itself.

Reply Parent Score: 1

ilovebeer Member since:
2011-08-08

What are the inherent rewards/punishments from learning from others? Or is it because we programmed it? You only get experience when you do something. To do something, you need a goal. Humans have a goal: to survive and to enjoy life as much as possible before death. External factors force this goal upon us.

Humans do things, and learn from those things, all the time without having a specific goal. The action can be completely random and the learning completely unintended. For example, think of all the random things kids do with no thought of reward or punishment, with no goal in mind, and for no real reason at all. Oftentimes they are simply experiencing the world by doing something rather than nothing.
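The machine analogue of that goal-free "learning by doing" might look something like the hypothetical sketch below (world, names, and numbers invented for illustration): the agent acts completely at random, receives no reward at all, and still ends up with a usable model of how its actions affect the world.

```python
# Hypothetical sketch: no goal, no reward -- just random action and the
# unintended side effect of learning what those actions do.
import random

def world(position, action):
    """Dynamics the agent does not know: a 1-D corridor with walls at 0 and 9."""
    return min(max(position + action, 0), 9)

# Count-based model of "what happens if I do this here?"
model = {}   # (position, action) -> {next_position: count}

position = 5
for _ in range(10_000):
    action = random.choice([-1, 0, +1])        # completely aimless behaviour
    nxt = world(position, action)
    counts = model.setdefault((position, action), {})
    counts[nxt] = counts.get(nxt, 0) + 1
    position = nxt

# The agent now "knows", for instance, that pushing left against the wall
# does nothing -- even though it was never asked to learn anything.
print(model.get((0, -1), {}))
```

The learning here is a by-product of acting, not the pursuit of a goal, which is roughly the distinction being drawn with the kids.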

Reply Parent Score: 2