Linked by Thom Holwerda on Tue 9th Aug 2016 22:39 UTC
In the News

AlphaGo's surprising success points to just how much progress has been made in artificial intelligence over the last few years, after decades of frustration and setbacks often described as an "AI winter." Deep learning means that machines can increasingly teach themselves how to perform complex tasks that only a couple of years ago were thought to require the unique intelligence of humans. Self-driving cars are already a foreseeable possibility. In the near future, systems based on deep learning will help diagnose diseases and recommend treatments.

Yet despite these impressive advances, one fundamental capability remains elusive: language. Systems like Siri and IBM's Watson can follow simple spoken or typed commands and answer basic questions, but they can't hold a conversation and have no real understanding of the words they use. If AI is to be truly transformative, this must change.

Siri, Google Now, and Cortana are more like slow and cumbersome command line interfaces than actual AI or deep learning or whatever - they're just shells around a very limited set of commands, commands they can barely process as it is due to their terrible speech recognition.

Language is incredibly hard. I don't think most people fully realise just how complex language can be. Back when I still worked at a local hardware store and spent several days a week among people who spoke the local dialect, my friends from towns mere kilometres away couldn't understand me if I went full local on them. I didn't actually speak the full dialect - but growing up here and working with people in a store every day had a huge effect on the pronunciation of my Dutch, to the point where friends from out of town had no idea what I was talking about, even though we were speaking the same language and I wasn't using any special or strange words.

That breadth of pronunciation within the same language is incredibly hard for computers to deal with. Even though my town and the next town over are only about 1-2 kilometres apart, there's a distinct difference in the pronunciation of some words if you listen carefully to longtime residents of either town. It's relatively elementary to program a computer to recognise Standard Dutch with perfect AN (Algemeen Nederlands) pronunciation (which I can actually do if I try; my mother, who is from the area Standard Dutch comes from, speaks it naturally), but any minor variation in pronunciation or even voice can trip them all up - let alone accents, dialects, or local proverbs and fixed expressions.
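To make that concrete: a toy nearest-neighbour lookup over phoneme strings shows how a single shifted vowel can land an utterance on the wrong word entirely. This is a sketch with a made-up three-word lexicon and hypothetical phoneme transcriptions (not real IPA, and not how a production recogniser works), but the failure mode is the same in spirit:

```python
def edit_distance(a, b):
    # classic Levenshtein dynamic programming over phoneme sequences
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# Hypothetical phoneme transcriptions for a toy lexicon.
LEXICON = {
    "bad": ("b", "a", "d"),
    "bed": ("b", "e", "d"),
    "bead": ("b", "i", "d"),
}

def recognise(phonemes):
    # nearest-neighbour lookup: pick the lexicon entry at smallest distance
    return min(LEXICON, key=lambda w: edit_distance(phonemes, LEXICON[w]))

# A standard pronunciation of "bad" is recognised correctly...
print(recognise(("b", "a", "d")))   # bad
# ...but a dialect speaker whose /a/ drifts toward /e/ lands on the wrong word.
print(recognise(("b", "e", "d")))   # bed
```

A real system works on acoustic features rather than discrete phoneme symbols, but the underlying problem is the same: systematic dialect shifts move every utterance closer to some other entry in the model.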

The question is, then, one that we have discussed before in my article on Palm and Palm OS:

There are several key takeaways from Dimond's Stylator project, the most important of which is that it touches upon a crucial aspect of the implementation of handwriting recognition: do you create a system that tries to recognise handwriting, no matter whose handwriting it is - or, alternatively, do you ask that users learn a specific handwriting that is easier for the system to recognise? This would prove to be a question critical to Palm's success (but it'll be a while before we get to that!).

If speech recognition is going to keep sucking as much as it does, today's engineers either have to brute-force it - throw tons of power at the problem - or ask of their users that they speak Standard Dutch or whatever it's called for your language when talking to their computers.

I'm not optimistic for the coming 10-20 years.

Comment by flanque
by flanque on Tue 9th Aug 2016 23:16 UTC
Member since:
2005-12-15

It'll get there, and just for a bit of fun...

"Started out, J.A.R.V.I.S. was just a natural language UI. Now he runs the Iron Legion. He runs more of the business than anyone besides Pepper."

Reply Score: 3

It's not helped
by kwan_e on Tue 9th Aug 2016 23:48 UTC
Member since:
2007-02-18

I think a big problem with language processing is trying to be accurate in the first place. As is painfully obvious now, most people don't care whether what they or others express matches up with reality. But with things like IBM Watson, the only way they can test it is to verify that what Watson comes up with matches reality.

A lot of language is down to personality and the personality a lot of people evidently have is a Hitler supporting teenage girl.

Reply Score: 7

RE: It's not helped
by Kochise on Wed 10th Aug 2016 10:42 UTC in reply to "It's not helped"
Kochise Member since:
2006-03-03

If you already have understanding problems amongst humans living no farther than 2 km apart, then the problem does not lie in the AI's understanding capabilities.

Seriously.

Back in France, a king decided he was fed up with his subjects' discord over regional verbiage and over size and weight measurement standards. He decided to invent the kilogram and the meter, and that everyone should fucking speak French. For God's sake.

If a technical problem is highlighted by human inconsistencies, I think it's the best opportunity ever to address those not at the technical level, but by nudging the human side in the right direction - say, by keeping a distinctive and understandable accent, and leaving Texans to chew their words.

Remember, once upon a time, legions spoke Latin, America was discovered in Portuguese, then invaded in English. Remove the useless historical bits of a language and everyone would be able to understand each other.

Note to self: learn Lojban. I think there's something valuable there that will shine in the near future.

Reply Score: 2

RE[2]: It's not helped
by Carewolf on Wed 10th Aug 2016 17:52 UTC in reply to "RE: It's not helped"
Carewolf Member since:
2005-09-08

Kilogram, meter and the other metric units were not the invention of some king; on the contrary, they were inventions of the French Revolution and the revolutionaries' attempt to make everything modern and rational.

Reply Score: 3

RE[3]: It's not helped
by kwan_e on Thu 11th Aug 2016 03:29 UTC in reply to "RE[2]: It's not helped"
kwan_e Member since:
2007-02-18

Kilogram, meter and the other metric units were not the invention of some king


A better example to his point would be Emperor Qin Shi Huang. He united kingdoms into an empire and enforced a system of weights, measurements, written language and practical things like agricultural practices and the width of cart axles.

Reply Score: 3

RE[4]: It's not helped
by Carewolf on Thu 11th Aug 2016 09:28 UTC in reply to "RE[3]: It's not helped"
Carewolf Member since:
2005-09-08

All the European monarchies in the 18th century did that; you had to, to make trade work and build early machines. The problem was they were not consistent across borders, and they had been enforced by monarchs, which is why the revolutionaries wanted them gone.

Reply Score: 2

RE[5]: It's not helped
by Kochise on Thu 11th Aug 2016 09:42 UTC in reply to "RE[4]: It's not helped"
Kochise Member since:
2006-03-03
RE[5]: It's not helped
by kwan_e on Thu 11th Aug 2016 22:53 UTC in reply to "RE[4]: It's not helped"
kwan_e Member since:
2007-02-18

All the European monarchies in the 18th century did that; you had to, to make trade work and build early machines. The problem was they were not consistent across borders, and they had been enforced by monarchs, which is why the revolutionaries wanted them gone.


That's why I said Emperor Qin Shi Huang is a better example: all the different monarchies of China at the time did have their own standards. He came in, conquered all those monarchies, and standardized the measures across borders, and they pretty much remained in place (because of all the economic benefits) until the end of the imperial era.

Reply Score: 2

Not a question of computing power...
by sergio on Wed 10th Aug 2016 05:47 UTC
Member since:
2005-07-06

If speech recognition is going to keep sucking as much as it does, today's engineers either have to brute-force it - throw tons of power at the problem - or ask of their users that they speak Standard Dutch or whatever it's called for your language when talking to their computers.


I totally agree with you. But I don't think 'brute-force it' is a solution to the problem... I mean, IBM Watson already went that way (with virtually infinite computing power) and its voice recognition module sucks just like Cortana's or Siri's.

IMHO, with the current approach and tools we'll never solve the problem... human-like voice recognition implies super-complex AI; as you said, language is something truly hard. In fact, language is the most complex thing humans ever created (and we are not 100% sure we really created it... it seems it was already in our "firmware", ha!).

So... I'm not very optimistic about this... We are not even close to having something even barely useful...

Reply Score: 2

jgagnon Member since:
2008-06-24

I also think that any brute force attempt will not work. The biggest problem for computers concerning language, as I see it, is that language evolves locally and then migrates irregularly. A "common" way of speaking is truly an arbitrary assessment of the people of a location over a period of time. What is common for one generation in one location will not likely be common elsewhere by different people.

Slang speech is something I feel computers will have to master in order for speech recognition to become useful to the wider population. People deliberately use words out of context, changing their meaning. Some such changes spread rapidly while others remain part of a local dialect. To complicate matters further, some have taken "written slang" and adapted it to speech. Saying "LOL" out loud, while annoying to me even in its written form, is something I hear all too often in some circles. There's no way a computer will be able to keep up with these changes to dialect today, especially with how fleeting some of them are (thankfully).

Accents, enunciation, and inflection are all evil devices that will also be used against any attempts at speech recognition. They're enough to make a computer "/boggle".

English speech can even be frustratingly hard to decipher for a human. We have many words that sound the same or similar but have entirely different meanings. We also have words that change their meaning, sometimes in drastic ways, simply through a change of context. Consider the difference between "that's great news" and "well, that's just great"... Great can have very different meanings in each case and how it is interpreted will depend on the context of the rest of what was said.

A friend asks you, "Did you see the game last night?" Assuming you share a similar taste in "games" then you might immediately know what they are referring to. As a stranger listening in, I might not have any clue at all. But in some areas that are dominated by specific sports, "the game" might have a very limited range of interpretation. A computer not attuned to your location would have a difficult job getting that right.

On a related note, the use of the word "right" is a tough one. The phrases "turn, right?" and "turn right" are relatively easy to interpret when written, but could be difficult for a computer to figure out depending on how the two words were spoken. One is a question and the other is a statement or command.
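One plausible cue for telling those two apart is intonation: in many languages, questions end on a rising pitch. A minimal sketch, using hypothetical pitch values rather than features extracted from real audio:

```python
def is_question(pitch_contour, rise_threshold=1.1):
    """Guess question vs. command from a pitch contour.

    pitch_contour: fundamental-frequency samples in Hz across the utterance
    (hypothetical values here; a real system would extract them from audio).
    A final rise well above the utterance's running level suggests a question.
    """
    if len(pitch_contour) < 4:
        return False
    body = pitch_contour[:-2]            # everything but the last two samples
    tail = pitch_contour[-2:]            # the utterance's final stretch
    avg_body = sum(body) / len(body)
    avg_tail = sum(tail) / len(tail)
    return avg_tail > rise_threshold * avg_body

# Flat or falling contour: a command ("turn right").
print(is_question([120, 118, 115, 112, 108]))  # False
# Rising final contour: a question ("turn, right?").
print(is_question([120, 118, 115, 140, 155]))  # True
```

Real prosody modelling is far messier (speakers vary, some dialects use rising statements), which is exactly the point of the comment above: the words alone don't carry enough information.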

It's too late to digress, but yeah, this is no simple accomplishment for the human mind so it might be some time before a computer can get to our level.

Reply Score: 3

Alfman Member since:
2011-01-28

sergio,

I totally agree with you. But I don't think 'brute-force it' is a solution to the problem... I mean, IBM Watson already went that way (with virtually infinite computing power) and its voice recognition module sucks just like Cortana's or Siri's.


I don't think that's fair, because I don't think the Watson team ever set out to solve the voice recognition problem generically; it wouldn't surprise me if that voice recognition was completely outsourced. Their real achievement was coming up with specific answers to unstructured questions.

jgagnon,

I also think that any brute force attempt will not work. The biggest problem for computers concerning language, as I see it, is that language evolves locally and then migrates irregularly.



But isn't that how humans learn? I think brute forcing is exactly how it will be solved. The problem today is that they are trained under laboratory conditions, whereas a human learns at home, immersed in culture. I would wager that a hypothetical human who grew up in a lab environment with no more input than the computer would be just as developmentally challenged as the computer seems to be.

It's very challenging to match the parallelism of a human brain, but once that happens I believe it will be easy in principle for it to learn as a human would.


Also some people can be very difficult to understand because of cultural differences. I've noticed that children frequently make innocent mistakes, and we even have the expression "to err is human". But when it comes to computers we seem to think they should be able to understand everyone equally, which is a skill that many humans have difficulty with.


One topic that would be interesting to study is the identification of natural instincts and what effect they have on our development (i.e. reactions to pain, the need to eat, etc.). Would it be possible to ultimately become an intelligent being if we started out with no biological instincts, or do instincts determine how intelligent we can become?

Reply Score: 3

acobar Member since:
2005-11-15

Alfman,

The problem may very well be bigger than most of us think it is. There is a huge number of variations in intonation, inflection and sentence construction that we learn to decode for specific groups. I think most of us have already experienced the "displeasure" of having to get "accustomed" to a particular "dialect" until we finally start to make sense of it, be it because the phonemes or the grammar (or lack thereof) are not what we are used to. Yes, just as we do with orthography, our brains try to correct bad/unfamiliar constructions automatically; that is why we need "training" for particular groups.

The solution would be stricter control of language evolution and its phonemes, and a more rigorous education in [insert_your_language_here]. Good luck even trying to raise such proposals.

Reply Score: 2

Alfman Member since:
2011-01-28

acobar,

The solution would be stricter control of language evolution and its phonemes, and a more rigorous education in [insert_your_language_here]. Good luck even trying to raise such proposals.


Yeah, there's no denying it would make the problem much easier, but it's controversial because it takes away our unique cultural identities. Does culture have merit? I don't know, but ultimately I don't think it's up to us or technology to decide anyway; people should be free to choose for themselves. It may be difficult, but I think that technology should conform to human needs rather than the other way around.

Reply Score: 2

acobar Member since:
2005-11-15

It may be difficult, but I think that technology should conform to human needs rather than the other way around.

Even though I agree with you in general, I don't see how requiring a stricter education process for languages would impair culture. I actually think it would improve it, as we would be able to properly access the ancient texts/symbols of our own languages, something already lost to most of us. Perhaps, if you see culture as a set of knowledge, values, habits, beliefs and so on that we are able to pass on to others, one whose main form of dissemination is recorded registers, you may very well conclude we should pursue stricter language education instead of letting it slowly drift apart, because, as time goes by, we risk losing a lot.

Reply Score: 2

Alfman Member since:
2011-01-28

Even though I agree with you in general, I don't see how requiring a stricter education process for languages would impair culture. I actually think it would improve it, as we would be able to properly access the ancient texts/symbols of our own languages, something already lost to most of us. Perhaps, if you see culture as a set of knowledge, values, habits, beliefs and so on that we are able to pass on to others, one whose main form of dissemination is recorded registers, you may very well conclude we should pursue stricter language education instead of letting it slowly drift apart, because, as time goes by, we risk losing a lot.


Ah well fair enough then ;)

It's just that strict education guidelines tend to stifle original thinking. I am reminded of one professor who failed me on an exam when I had a unique solution that was technically correct but did not conform to his.

Many here may not be aware, but the US is currently undergoing a standardization of school curriculums under Common Core, a program spearheaded by Bill Gates' foundation. Many parents are protesting and boycotting it for numerous reasons, in part because the children aren't learning to do math the same way their parents did.

https://www.washingtonpost.com/politics/how-bill-gates-pulled-off-th...

Edited 2016-08-10 22:48 UTC

Reply Score: 2

Thom_Holwerda Member since:
2005-06-29

It's just that strict education guidelines tend to stifle original thinking.


Language is the number one way - by a huge margin - through which we, as humans, express thoughts. Any imposed constraints on language are, therefore, constraints on thoughts, and thus, the freedom of expression.

Think long and hard before you try to impose languages or specific variants of language currently deemed "standard" by a ruling class. It's led to horrendous things.

Reply Score: 2

jgagnon Member since:
2008-06-24


jgagnon,

"I also think that any brute force attempt will not work. The biggest problem for computers concerning language, as I see it, is that language evolves locally and then migrates irregularly.



But isn't that how humans learn? I think brute forcing is exactly how it will be solved. The problem today is that they are trained under laboratory conditions, whereas a human learns at home, immersed in culture. I would wager that a hypothetical human who grew up in a lab environment with no more input than the computer would be just as developmentally challenged as the computer seems to be.

It's very challenging to match the parallelism of a human brain, but once that happens I believe it will be easy in principle for it to learn as a human would.


Also some people can be very difficult to understand because of cultural differences. I've noticed that children frequently make innocent mistakes, and we even have the expression "to err is human". But when it comes to computers we seem to think they should be able to understand everyone equally, which is a skill that many humans have difficulty with.


One topic that would be interesting to study is the identification of natural instincts and what effect they have on our development (i.e. reactions to pain, the need to eat, etc.). Would it be possible to ultimately become an intelligent being if we started out with no biological instincts, or do instincts determine how intelligent we can become?
"

I think the ability for people and other animals to learn is mostly about environment, not instinct or DNA. You put an animal in the right circumstances and they will develop intelligence and problem solving skills (I've seen this first hand). You put a human in the wrong environment and they will develop very little intelligence, at least in the way we determine it socially.

Our brains are all about adapting to stimuli and linking bits of information together. A great deal of our early learning is through association, context, and imitation. We get far, far more input and feedback from our environment than a computer would with today's sensor technology. If a part of us is damaged, we compensate and adapt. We live our lives in a constant feedback loop.

Computers do not currently get nearly enough information and feedback from their environment, nor could they process all of the information if they did. Eventually, sure, I think we'll get them there. But in the meantime we have a huge advantage over voice recognition... we can see, hear, and feel the entire communicational package that conversation has to offer. Not just the words people say, but their tone, temperament, stance, gestures, facial expressions, etc. Voice recognition doesn't stand a chance of being "good enough" until a whole lot more environmental factors weigh in to the interpretation of what is said.

Reply Score: 3

acobar Member since:
2005-11-15

Very good points.

Wish I could mod your comment up but, for some unexplained reason, it is not possible on OSnews.

Reply Score: 2

Alfman Member since:
2011-01-28

I think the ability for people and other animals to learn is mostly about environment, not instinct or DNA. You put an animal in the right circumstances and they will develop intelligence and problem solving skills (I've seen this first hand). You put a human in the wrong environment and they will develop very little intelligence, at least in the way we determine it socially.


Ok, I agree with that. But there was a very specific reason I asked the question, and that's how critical the initial programming is to whether or not further learning would be successful. Assuming we already had the massive processing power, what is the role of other key ingredients, like instincts, towards intelligence?

This is getting highly theoretical, but I actually believe non-determinism may be an important ingredient, because perfectly deterministic algorithms can't increase their entropy and cannot learn. If everything is predetermined, then there's only one possibility, and that's copying intelligence from elsewhere. This may be enough to fake intelligence, even well, until there's a new test.
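A toy illustration of that exploration point: in a simple epsilon-greedy bandit learner, a fully deterministic policy (eps = 0) locks onto whatever it tried first and never discovers the better options, while a little injected randomness lets it learn. The three-armed bandit and its payout rates are made up for this sketch:

```python
import random

def run_bandit(eps, steps=3000, seed=0):
    """Epsilon-greedy play on a 3-armed bandit with payout rates 0.2/0.5/0.8.

    eps = 0 gives a deterministic greedy policy; eps > 0 injects random
    exploration. Returns how many times each arm was pulled.
    """
    rng = random.Random(seed)
    probs = [0.2, 0.5, 0.8]      # hypothetical true payout rates
    counts = [0, 0, 0]
    values = [0.0, 0.0, 0.0]     # running estimate of each arm's payout
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(3)           # explore: pick a random arm
        else:
            arm = values.index(max(values))  # exploit: current best estimate
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

# Deterministic greedy: ties break toward arm 0, so it never leaves it.
print(run_bandit(eps=0.0))   # [3000, 0, 0]
# With exploration, every arm gets sampled and the learner can find the best.
print(run_bandit(eps=0.1))
```

This is not a claim about how real AI systems work, just the smallest example I know of where removing randomness removes the ability to learn anything beyond the first thing tried.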



Computers do not currently get nearly enough information and feedback from their environment, nor could they process all of the information if they did. Eventually, sure, I think we'll get them there. But in the meantime we have a huge advantage over voice recognition... we can see, hear, and feel the entire communicational package that conversation has to offer. Not just the words people say, but their tone, temperament, stance, gestures, facial expressions, etc. Voice recognition doesn't stand a chance of being "good enough" until a whole lot more environmental factors weigh in to the interpretation of what is said.


Agreed.

If we pumped enough resources into it, I am confident we would reach the level of the human brain's parallelism pretty quickly, but where's the business sense? If humans cost $20-100k/yr and an equivalent parallel computer costs $50m to develop plus $5m to run over the same period, then what's the point? On the other hand, if that computer were equivalent to 10-100 people, you reach the tipping point where it's the humans who are no longer viable.

So I think today's boundaries to AI are mostly economic rather than technological. Once the costs come down it's going to be very hard to stop the transition. We should be careful what we wish for ;)

Reply Score: 3

allanregistos Member since:
2011-02-10

Computers do not currently get nearly enough information and feedback from their environment, nor could they process all of the information if they did.


Because computers lack the sensors needed to gather and store that information.

Human's basic five senses:
1. Seeing
2. Hearing
3. Touching
4. Smelling
5. Tasting

And there are other senses like Balance, Temperature, Kinesthetic sense, and Pain.

http://udel.edu/~bcarey/ART307/project1_4b/

And even so, there are less understood senses of man, such as being able to communicate remotely, or detecting changes in another human's emotions without using vision.

Our sense of sight is light-years ahead of any computer camera, even though our eyes can't zoom the way camera lenses do. We still have the best camera lenses nature ever invented. What progress have we made on computer vision? Can it fully determine and distinguish the objects around it? I hope we can in the future. Not crossing my fingers. So how can a computer hear, touch, smell or taste?
Machines fall short because they are powered by software algorithms, which set the limits of what machines can do. Any imagination beyond what software can do is magic.

Reply Score: 2

Alfman Member since:
2011-01-28

allanregistos,

So how can a computer hear, touch, smell or taste?


Many people are born blind, deaf, or tasteless (ha!). Having limited senses doesn't in itself imply someone can't be intelligent, so that shouldn't be a reason to dismiss AI.


Machines fall short because they are powered by software algorithms, which set the limits of what machines can do.


I could say the same thing about humans.
You could retort that humans have the capacity to adapt.
I could say the same thing about advanced AI.

The last time this topic came up, I took issue with what seemed to be double standards - criticizing AI for lacking traits that no one can prove about humans either.

http://www.osnews.com/comments/28658

So in the interest of getting further along in the discussion this time round, if it's acceptable to you, I'd like to start with the questions left at the end of last discussion. I asked:

It's a thought experiment. Obviously it has not happened, but it's not outside the realm of possibility that technology could make it happen. I want you to appreciate that the reason I'm asking this is because you have so many preconceptions about what "machines" can't do. So by using the same hardware as our brain, we can sidestep all these problems and talk just about the intelligence of software, or neurons as it would be.

Case 1: A real brain - intelligent!
Case 2: Configuration of neurons from real brain programmed into new artificial brain tissue in fabrication lab - intelligent?
Case 3: AI programmed into artificial brain tissue - intelligent?



Assume the fake brain is an exact neuron-for-neuron replica of a real brain. Its responses are identical and indistinguishable from the real brain's. It will do and say everything the original brain would have done and said, including claiming self-awareness. Is this replica self-aware?

Reply Score: 2

allanregistos Member since:
2011-02-10

allanregistos,

"So how can a computer hear, touch, smell or taste?


Many people are born blind, deaf, or tasteless (ha!). Having limited senses doesn't in itself imply someone can't be intelligent, so that shouldn't be a reason to dismiss AI.
"
I am not dismissing AI; I am dismissing the arguments of people who seem to overrate the current and future capabilities of AI.
Having accessibility software components is not a sign that your computer is any smarter than your pets. Demonstrate it if you can.

"Machines fall short because they are powered by software algorithms, which set the limits of what machines can do.


I could say the same thing about humans.
You could retort that humans have the capacity to adapt.
I could say the same thing about advanced AI.
"
The problem with your argument, Alfman, is you think that a piece of metal which is able to compute is comparable to man's intelligence. It is like saying a calculator is way smarter because it can perform arithmetic a hell of a lot faster than humans. Try getting your computer to adapt to a new language and culture; put a computer in a crowded street and let it live there in order to adapt. I am a software developer, and I know what limits software. Software tells the computer to perform a certain task = calculation, computation of something. And these are not equal to mental action! There is a chasm that separates mental action from computing, and your AI, Alfman, is about computing, because it is powered by software algorithms. A computer is a piece of metal designed to compute, not designed to do mental action! My goodness. There is absolutely NOTHING special in it. Wake me up when we can design a computer too different from what it is today - a computer that can perform mental operations the same as humans.
Ok, search Google for "neuron computers".

If you try to argue that somehow software and computers are complex enough to become aware of themselves, then that grants them the benefit of citizenship and other basic human rights. It is possible for a machine to become self-aware, but not in the sense a human is; since the machine itself is metal, it is not truly self-aware but only through emulation - emulating humans. You would have to experiment with a hybrid human robot becoming self-aware: you transfer a man's consciousness to it, if that is ever possible.

When you claim that software algorithms in the future will create true consciousness, I will sooner trust David Blaine that his magic is real. At least I have an object, in the person of David Blaine, to blame for his magic, rather than software algorithms creating "consciousness" out of thin air. In dealing with science, you should deal with the reality that you can't create something out of nothing, and consciousness is a something that you can't create but only emulate; since it is AI, there is a reason why we put "Artificial" in machine intelligence.


The last time this topic came up, I took issue with what seemed to be double standards - criticizing AI for lacking traits that no one can prove about humans either.

Yes, I remember.

http://www.osnews.com/comments/28658

So in the interest of getting further along in the discussion this time round, if it's acceptable to you, I'd like to start with the questions left at the end of last discussion. I asked:

"It's a thought experiment. Obviously it has not happened, but it's not outside the realm of possibility that technology could make it happen. I want you to appreciate that the reason I'm asking this is because you have so many preconceptions about what "machines" can't do.

It is because we know how software works; we know that it is not magic. It is by design that we know what machines can and can't do, based on the raw materials that we have = software and silicon. And it has been that way ever since algorithms were invented. Software is a tool; it's not a crystal ball to forecast something it was _not_ designed to do.

So by using the same hardware as our brain, we can sidestep all these problems and talk just about the intelligence of software, or neurons as it would be.


Case 1: A real brain - intelligent!
Case 2: Configuration of neurons from real brain programmed into new artificial brain tissue in fabrication lab - intelligent?
Case 3: AI programmed into artificial brain tissue - intelligent?
"

In your thought experiment, a real brain did not appear out of nowhere, did it? Please note that a human brain is only possible through biological reproduction. I doubt the capacity of your artificial brain.

Case 1: Not intelligent, a brain does not store knowledge, see:
https://aeon.co/essays/your-brain-does-not-process-information-and-i...
The biggest myth from tech is to ignorantly compare human brains/consciousness to computers.

Case 2: It might be called "consciousness" transfer; the memories are retained by the human whose brain was transferred to the identical but separately/artificially created brain. So the artificial brain becomes intelligent "thanks to the human" but "no thanks to the artificial brain". The artificial brain is again a tool, to prolong the life or serve whatever purpose of the human. This does not serve your argument well.

Case 3: AI programmed into an artificial brain - of course it might be intelligent, for there is "intelligence" in any AI machine. But consciousness? Absolutely nothing. I've found no evidence that this AI-programmed artificial brain is fully aware (it can be aware through emulation of human behavior, awareness, etc.) or has true consciousness, for you can't claim that until you explain how you created the conscious part of the brain. It won't - IT WON'T - magically appear in the brain. You have to define it. What you are saying here is: when we can create an identical clone of a human brain in a lab, we program it using AI techniques and VOILA, it is now fully aware and truly self-conscious - yes, it deserves basic human rights from now on! That's magic!!! We are dealing with science here, not magic. In software, you would have to write a function to create "consciousness", and even so, it would not be the same as human consciousness; it is fake, emulating human consciousness. It's only possible in your Case 2, which doesn't support your premise, because we have to give credit to the original human's consciousness being transferred.

Assume the fake brain is an exact neuron-for-neuron replica of a real brain. Its responses are identical and indistinguishable from the real brain's. It will do and say everything the original brain would have done and said, including claiming self-awareness. Is this replica self-aware?

When there is a "transfer" of consciousness/knowledge/awareness from A (the man) to B (the new, fake but identical brain), then there is awareness, consciousness, and memories retained.
But I have to thank the human for having the courage to transfer his/her brain to an artificial brain.
There is nothing special about your fake brain, Alfman, since in the context of consciousness we give credit to the original man for having the consciousness, not to your brilliant effort to make the transfer from the original brain to the artificial one. The artificial brain is now merely the medium of the man's consciousness.

Congratulations.

Reply Score: 2

Alfman Member since:
2011-01-28

allanregistos,

The problem of your argument alfman is you think that a piece of metal which is able to compute is comparable to man's intelligence.


I can do this too:
"The problem of your argument allanregistos is you think that AI will never be comparable to man's intelligence..."

The statement merely asserts a belief, but is absolutely meaningless in terms of validating it.


I am a software developer, and I know what limits the software.


I am a software developer too, so this appeal to authority does nothing to convince me.

Software will tell the computer to perform a certain task, = calculation, computation of something. And these are not equal to mental action!


I believe most of our reactions to stimuli aren't driven by intelligence at all, but are actually neural impulses that are ready to go. That's why being smart and strong isn't enough to play a sport, ride a bike or skateboard, go skiing, play an instrument, or drive a car, etc. We need to program the neurons in our brains via associations and experiences through practice. Being able to reprogram our own neural responses is a key part of advanced intelligence, but I can't think of any reason that computers can't fundamentally do that too.



you should deal with reality that you can't create something out from nothing, and consciousness is a something that you can't create, but only you can emulate, since it is AI, there is a reason why we put "Artificial" in machine intelligence.


Just because a man-made lake is artificial doesn't mean it can't have the qualities of a natural lake. Artificial just means that it's man-made; there's no reason that man-made things can't eventually outdo natural ones. Genetic engineering is already doing that for many species of plants (for our purposes it doesn't matter whether it's ethical, only that it's feasible). The difference between an intelligent life form and an unintelligent one lies in the complexity of its DNA, not in whether it's created artificially.


Case 1: Not intelligent, a brain does not store knowledge, see:
https://aeon.co/essays/your-brain-does-not-process-information-and-i.....
The biggest myth from tech is to ignorantly compare human brains/consciousness to computers.


Props for finding that; it makes the same assertions you do, but after reading about half of it I couldn't find evidence backing those assertions. Let me know if I missed it.


Case 2: It might be called "consciousness" transfer: the memories are retained from the human whose brain was transferred into the identical but separately/artificially created brain.


Ok, so what if we created a hundred of them from the same template, are those intelligent? Or just the first one? Why?



Case 3: AI programmed into an artificial brain. Of course it might be intelligent, for there is "intelligence" in any AI machine. But consciousness? Absolutely nothing. I've found no evidence that this AI-programmed artificial brain is fully aware (it can appear aware through emulation of human behavior, awareness, etc.), or has true consciousness, for you can't claim that until you explain how you created the conscious part of the brain.


It's interesting that you think that, at what point do you consider organic matter to be intelligent? Is the raw material intelligent? Is it intelligent in the form of DNA? A zygote? Is it intelligent once it's started cell division? Once it's born?

The reason I ask is that it will eventually be scientifically possible to synthesize DNA from a digital data source and begin cell division in a completely sterile environment devoid of other preexisting natural life. Would you give the resulting biological being the benefit of being alive and aware even though it was created artificially?

If you say yes, then you must admit that intelligent life can be decomposed into non-living elements and you must admit that non-living elements can be assembled artificially to produce intelligent life.

If you say no, then specifically why not?
Then you would be ethically ok with killing a creature with human DNA if it was created artificially?

Edited 2016-08-13 07:16 UTC

Reply Score: 2

mistersoft Member since:
2011-01-05

Alfman,


"I also think that any brute force attempt will not work. The biggest problem for computers concerning language, as I see it, is that language evolves locally and then migrates irregularly.



But isn't that how humans learn? I think brute forcing is exactly how it will be solved. The problem today is that they are trained under laboratory conditions, whereas a human learns at home surrounded by culture. I would wager that a hypothetical human who grew up in a lab environment with no more input than the computer gets would be just as developmentally challenged as the computer seems to be.

It's very challenging to match the parallelism of a human brain, but once that happens I believe it will be easy in principle for it to learn as a human would.
"

I agree. And while it might take a huge number of compute cycles to "brute force" a true natural language "understanding" capability - or it might in reality be almost impossible using nothing beyond a deep learning/neural net approach -

I think that's also missing (when playing carbon-vs-silicon top trumps) the reality that even developing human/animal brains don't actually start with a blank slate anyway. There's a lot of templating done, and on several levels. Even though the biological mechanisms at the neuronal-development level aren't understood yet, it's quite obvious that our genes confer some innate encoding for listening preferentially to, for instance, certain frequency ranges, for repeating patterns, rhythm, and so on. And on top of that everything builds: first word associations, then gradually concepts, subtlety in language, complex thought, self-awareness, irony, inference rather than direct point-making, poetry, and so on up the complexity scale. And it takes most of us on the order of 20+ years to achieve our full linguistic capability (geniuses and bookworms perhaps less).

True machine intelligence programming might have to take a (admittedly guided) evolutionary approach too - not brute-forced from scratch or via dictionaries or "taught" machine learning approaches, but iteratively adding layers of complexity such as those listed above. I think using instructed ML approaches (with large training sets) is probably the best way to get speech recognition off the ground, and the same for basic conceptual learning. They could/should of course be finessed with an unstructured, open neural net approach. Raw word meanings, for instance, might naturally be handled not with large training sets but via dictionary definitions (I'm not aware of what the standard is, btw); however, word meanings probably *should* be assigned via a training set of language-usage examples, to infer the meaning rather than provide a definition that would otherwise be overly precise and less flexible. (Dictionary definitions could still be employed secondarily.)
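The "infer meaning from usage examples rather than definitions" idea is essentially the distributional approach to word meaning: words that occur in similar contexts get similar representations. A toy sketch of that idea (the sentences and window size are illustrative, not any real training pipeline):

```python
from collections import Counter, defaultdict
from math import sqrt

def context_vectors(sentences, window=2):
    """Build a co-occurrence count vector for each word from usage examples."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            # Count every word within `window` positions as context for w.
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
    "the dog chased the ball",
]
vecs = context_vectors(sentences)
# "cat" and "dog" appear in near-identical contexts, so their similarity
# is close to 1.0, higher than that of "cat" and "mat".
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["mat"]))
```

Real systems replace raw counts with learned embeddings, but the flexibility argument above is the same: the meaning falls out of usage, not out of a fixed definition.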

And layer upon layer of complexity could therefore be built up. I think the KEY to creating "real" machine intelligence will necessarily be how you decide to encode and adequately cross-reference this almost unimaginably complex web of data/metadata patterns. It will, I believe, have to basically digitally represent something similar** to a brain - OR be too huge and power-hungry for any widespread deployment.

**When I use the word similar, I mean similar in how it polls a massive metadata web to achieve any focus or "thought", not literally how it's electrically wired. We process huge amounts of input through massive parallelism (though not highly structured, down to the level of machine code), yet we don't have super-high raw "transistor" speed. I'm sure for AI something in the right ballpark could be done with far less ultimate parallelism but much higher clock speeds: e.g. a multicore CPU running the "thought & language" centre, some GPU derivative running the raw visual processing, and a dedicated DSP/CPU combo running the speech recognition - eventually, even if the model it runs is first developed in the lab on a megaserver.

Reply Score: 3

Alfman Member since:
2011-01-28

And it takes most of us on the order of 20+ years to achieve our full linguistic capability (geniuses and bookworms perhaps less).



A bit off topic, but I've heard that our mental abilities peak in our early 20s. Here's a study suggesting ages 22-27.

http://www.findingdulcinea.com/news/health/2009/march/Mental-Abilit...


I don't know if this is biological, or if it coincides with the period when most adults leave their intellectual pursuits in college and transfer to everyday working life, making them stupider... ;)

Edited 2016-08-10 21:00 UTC

Reply Score: 2

dionicio Member since:
2006-07-12

"...It will i believe have to basically digitally represent something similar** to a brain."

...Something similar** to a "human". You can't understand even your family and friends without understanding their will and unwillingness, desires and fears, dreams and nightmares; what is signal, and what is noise, to them; what is faith, trust, or confidence. And all of that without even coming near to linguistics.

Computers shouldn't be taken seriously. Or at least shouldn't be left with decisions on serious issues.

Reply Score: 2

Ai~=AI
by l3v1 on Wed 10th Aug 2016 05:54 UTC
l3v1
Member since:
2005-07-06

Deep learning means that machines can increasingly teach themselves how to perform complex tasks


No, not exactly. What has been achieved is a step or two up in the quality and performance of machine learning - and even that only technologically (hardware capabilities), not theoretically, since the methods involved have been known for a very long time - which enables the creation of fairly high-performing (in relative terms, i.e. vs. "classical" approaches) solutions for very _specific_ tasks (i.e., not "complex tasks"). Think game playing, or classification of signal contents (which covers a wide variety of disciplines: image, audio, radar, lidar, whatever). But that is not really AI: it's artificial alright, but there's no real intelligence involved.

but they can't hold a conversation and have no real understanding of the words they use


Exactly. There's no cognition, no understanding, no transfer of knowledge - well, not even knowledge itself in the proper sense -, etc.

Huge steps have been made recently, no doubt about that, but even so, we're only just a tiny little bit closer to AI.

Edited 2016-08-10 05:55 UTC

Reply Score: 4

RE: Ai~=AI
by Lennie on Wed 10th Aug 2016 07:59 UTC in reply to "Ai~=AI"
Lennie Member since:
2007-09-22

"Deep learning means that machines can increasingly teach themselves how to perform complex tasks


No, not exactly. What has been achieved is a step or two up in the quality and performance of machine learning - and even that only technologically (hardware capabilities), not theoretically, since the methods involved have been known for a very long time - which enables the creation of fairly high-performing (in relative terms, i.e. vs. "classical" approaches) solutions for very _specific_ tasks (i.e., not "complex tasks"). Think game playing, or classification of signal contents (which covers a wide variety of disciplines: image, audio, radar, lidar, whatever). But that is not really AI: it's artificial alright, but there's no real intelligence involved.

but they can't hold a conversation and have no real understanding of the words they use


Exactly. There's no cognition, no understanding, no transfer of knowledge - well, not even knowledge itself in the proper sense -, etc.

Huge steps have been made recently, no doubt about that, but even so, we're only just a tiny little bit closer to AI.
"

There is no intelligence; that is correct. Machine learning is a better term.

What I think happened was: we already had lots of algorithms, but we didn't have the hardware for them. Now we do have the hardware and the data (!) for those algorithms, and as you mentioned we've gotten one step further.

As someone more knowledgeable on the subject once said:

There are at least 10 known big, difficult problems in the field of AI. We've now started to solve the first. It took us 50 years to get there. Part of the wait was on hardware, and hardware improvements are exponential. The question now is: will the improvements in the field of AI also come faster?

The answer so far: we don't know yet.

I think the problem with language, especially spoken language, is:

You need a lot of context to understand what somebody is talking about.

Image recognition, for example, where huge steps were made and which seems to be sort of a 'solved problem' right now, does not have this problem.

Edited 2016-08-10 08:17 UTC

Reply Score: 5

RE: Ai~=AI
by dionicio on Wed 10th Aug 2016 22:51 UTC in reply to "Ai~=AI"
dionicio Member since:
2006-07-12

"... - well, not even knowledge itself in the proper sense -, etc."

Would disagree a little on this. But indeed not -in any way- human transferable knowledge.

Reply Score: 2

RE: Ai~=AI
by allanregistos on Thu 11th Aug 2016 04:24 UTC in reply to "Ai~=AI"
allanregistos Member since:
2011-02-10

"Deep learning means that machines can increasingly teach themselves how to perform complex tasks


No, not exactly. What has been achieved is a step or two up in the quality and performance of machine learning - and even that only technologically (hardware capabilities), not theoretically, since the methods involved have been known for a very long time - which enables the creation of fairly high-performing (in relative terms, i.e. vs. "classical" approaches) solutions for very _specific_ tasks (i.e., not "complex tasks"). Think game playing, or classification of signal contents (which covers a wide variety of disciplines: image, audio, radar, lidar, whatever). But that is not really AI: it's artificial alright, but there's no real intelligence involved.

but they can't hold a conversation and have no real understanding of the words they use


Exactly. There's no cognition, no understanding, no transfer of knowledge - well, not even knowledge itself in the proper sense -, etc.

Huge steps have been made recently, no doubt about that, but even so, we're only just a tiny little bit closer to AI.
"

Correct, because a machine is simply not aware and is not human. It has no mental awareness; it is only able to compute, and computing is not equal to consciousness or mental action. There is a reason why the "intelligence" in computers is called "artificial". The word "artificial" explains a lot to the geeks who believe that one day a machine can be as intelligent as a man - the biggest tech myth today, a result of watching too many sci-fi movies.

And yes, there is no knowledge transfer, there is only "information" storage.

Edited 2016-08-11 04:26 UTC

Reply Score: 2

American English should do.
by ThomasFuhringer on Wed 10th Aug 2016 07:01 UTC
ThomasFuhringer
Member since:
2007-01-25

The approach should probably be to focus on one lingua franca and forget about local dialects or even accents for now.
Presumably in a generation or two everybody in the developed world will be able to speak standard American English. Siri understands that perfectly well as far as pronunciation is concerned.

Reply Score: 2

RE: American English should do.
by ssokolow on Wed 10th Aug 2016 11:08 UTC in reply to "American English should do."
ssokolow Member since:
2010-01-21

Presumably in a generation or two everybody in the developed world will be able to speak standard American English. Siri understands that perfectly well as far as pronunciation is concerned.


Doubtful. Outside of North America, people are at least as likely to trend toward a high-class UK accent instead.

Reply Score: 2

RE: American English should do.
by Thom_Holwerda on Wed 10th Aug 2016 11:23 UTC in reply to "American English should do."
Thom_Holwerda Member since:
2005-06-29

Siri understands that perfectly well as far as pronunciation is concerned.


I don't think you've ever really used Siri (or any of the others).

U n l e s s . Y o u . S p e a k . L i k e . T h i s ., uttering standard commands, it won't understand a thing you're saying.

Reply Score: 2

RE[2]: American English should do.
by kkarvine on Wed 10th Aug 2016 11:56 UTC in reply to "RE: American English should do."
kkarvine Member since:
2008-01-10

I'm from Finland and I've tried to use Siri in English. Even though most of the people I've spoken with say my pronunciation is really good, Siri still seems to understand only simple commands. If, for example, I try to call my wife, I never know whom it will call, since it doesn't seem to understand Finnish names. So I don't even bother trying to use it for anything.

Once my cat somehow managed to get Siri to listen, and by the time I had driven the cat away from my phone using only Finnish, Siri was already calling my uncle. So I can't say that Siri's understanding of pronunciation really works, if it mistakes spoken Finnish for English.

Reply Score: 1

RE: American English should do.
by darknexus on Wed 10th Aug 2016 12:03 UTC in reply to "American English should do."
darknexus Member since:
2008-07-15

Ok, so what's standard American English? I'm from the states, and have lived on pretty much all sides of the country. Never once have I heard anything that could be considered "standard." Some common elements, yes, but not enough. Heck, I'm a human and I have trouble understanding some Americans. Outside of the states... well, no one speaks American English at all except traveling Americans. So how in the world you think focusing on American English is a good idea is beyond me. If anything, they should focus on some British dialects as they are closer in many ways to what the rest of the English-speaking population is using save for the Canadians, who have their own accents.

Reply Score: 3

RE[2]: American English should do.
by pmac on Wed 10th Aug 2016 12:44 UTC in reply to "RE: American English should do."
pmac Member since:
2009-07-08

I can't tell a Canadian accent from an American accent unless I hear specific words that are known to be different (such as "about"). I think people generally think accents in their own country are more varied than outsiders do. I'm sure I'm guilty of this, too. Obviously, I know Canada and the US are different countries, but as an outsider they don't sound all that different - certainly less different than a standard American accent and an Alabama accent, for example.

Reply Score: 1

darknexus Member since:
2008-07-15

How many Canadians have you met? A lot of them have a great deal more accent than just some specific words. The ones that sound more American tend to be in the cities, but if you go outside the cities you most certainly will hear a Canadian accent that is unique to the part of Canada you're in.
The sad thing is that I can understand them, even Newfoundlanders (and they've got one heck of an accent), better than I understand certain types of American speech; the kind you hear in mainstream rap music, in particular, is almost incomprehensible to me, and more and more people are talking like that.

Reply Score: 2

RE[4]: American English should do.
by pmac on Wed 10th Aug 2016 14:19 UTC in reply to "RE[3]: American English should do."
pmac Member since:
2009-07-08

I've met quite a few Canadians - worked with three, and I've been to Toronto, Ottawa, and Montreal. Perhaps I'm thinking mostly of Toronto where they may have a more American accent, being so close to the border? You're right, I've heard people from more remote parts of Canada and they can sound almost Scottish or Irish. Or maybe I'm just really bad at accents.

Reply Score: 1

ssokolow Member since:
2010-01-21

I can't tell a Canadian accent from an American accent unless I hear specific words that are known to be different (such as "about"). I think people generally think accents in their own country are more varied than outsiders do. I'm sure I'm guilty of this, too. Obviously, I know Canada and the US are different countries, but as an outsider they don't sound all that different - certainly less different than a standard American accent and an Alabama accent, for example.


See the reply I just posted to darknexus prior to this. It's got a good 5:40 video that explains Canadian accent quirks and then ends by explaining "Standard American English".

Edited 2016-08-10 14:58 UTC

Reply Score: 2

RE[2]: American English should do.
by RJay75 on Wed 10th Aug 2016 14:46 UTC in reply to "RE: American English should do."
RJay75 Member since:
2010-05-18

I agree. I'm from the upper Midwest, where some say we have an almost neutral accent, and I have a difficult time understanding some people when you get out into rural areas. The speed at which people talk varies a lot too.

I lived in East TN for a while, in the Appalachia region. The deeper you get into the mountains and secluded areas there, the harder it is to understand people who, I think, were speaking English. A lot of it was completely incomprehensible much of the time. But they complained I was just as hard to understand, and I had to make an effort to slow my speech way down. In my home accent I spoke 2-3 times faster than they did, so communication was always an issue. I couldn't use a drive-through for more than a year because of it.

And that's just pronunciation. Not even getting into slang and what words actually mean. That's different in just going from the city to the suburbs.

If anything, I think they will have to train AIs on specific regional areas. How small a region, I don't know, but trying to train for a whole language is way too broad. Mobile devices could then look up where you are and use the dialect AI trained closest to you, the same as using regional human interpreters.

Reply Score: 2

darknexus Member since:
2008-07-15

Mobile devices could then look up where you are and use the dialect AI trained closest to you, the same as using regional human interpreters.


That wouldn't work. To use your example, just because I might be in TN at that given moment doesn't mean I speak with their accent. Trying to interpret what I say based on the way someone from part of TN sounds would probably not work so well, though the results may be rather amusing.

Reply Score: 2

Alfman Member since:
2011-01-28

darknexus,

That wouldn't work. To use your example, just because I might be in TN at that given moment doesn't mean I speak with their accent. Trying to interpret what I say based on the way someone from part of TN sounds would probably not work so well, though the results may be rather amusing.


I agree, location can be used to set a default, but ultimately you should be able to choose whatever dialect you want. You might even want to choose a language/dialect that doesn't match your own to become more fluent with it.

In monocultures, specialization always suffers. I really wish companies were less fixated on "one size fits all" and instead opened up the tech, allowing the community to specialize it. To push innovation, you have to open up the tech and allow many diverse groups of people to contribute, tweak, and improve. If only corporations were less averse to healthy competition and more encouraging of it, there would be an explosion of 3rd-party innovation. If Apple/Google could just build the frameworks and let users share & contribute the guts, the resulting AI would be far greater than what either of them could come up with on their own.
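The "location as default, user choice as override" idea could be sketched roughly like this (the model names, coordinates, and registry are purely hypothetical; no real assistant exposes such an API):

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical registry of regionally trained recognition models:
# (latitude, longitude) of the region's centre -> model identifier.
DIALECT_MODELS = {
    (36.0, -84.0): "en-US-appalachia",
    (44.9, -93.3): "en-US-upper-midwest",
    (40.7, -74.0): "en-US-northeast",
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def closest_dialect_model(lat, lon, override=None):
    """Default to the nearest regional model, unless the user chose one."""
    if override:
        return override
    return min(DIALECT_MODELS.items(),
               key=lambda kv: haversine_km(lat, lon, *kv[0]))[1]

# A phone in East Tennessee defaults to the Appalachian model...
print(closest_dialect_model(35.96, -83.92))
# ...but a visitor keeps their home dialect by overriding the default.
print(closest_dialect_model(35.96, -83.92, override="en-US-upper-midwest"))
```

The override parameter is the point darknexus raises: being *in* Tennessee doesn't mean you *speak* Tennessean, so location can only ever pick the default.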

Reply Score: 2

RJay75 Member since:
2010-05-18

It could be used as a default. Of course the users should be able to select what regional dialect they want to use for voice recognition.

Reply Score: 1

RE[2]: American English should do.
by ssokolow on Wed 10th Aug 2016 14:54 UTC in reply to "RE: American English should do."
ssokolow Member since:
2010-01-21

Ok, so what's standard American English? I'm from the states, and have lived on pretty much all sides of the country. Never once have I heard anything that could be considered "standard."


save for the Canadians, who have their own accents.


I'd been putting together a detailed response but, while searching YouTube for example videos, I found this, which explains both "Standard American English" and Canadian accents in a concise 5 minutes, 40 seconds.

https://www.youtube.com/watch?v=8YTGeIq4pSI

The other accents in Canada and the U.S. that people tend to forget about are the "remnants of a native language that the government tried to squash out of their grandparents" constellation of accents, as in this clip:

https://www.youtube.com/watch?v=771JklF4SAs

(Guy #1 speaks from the beginning, Guy #2 first speaks at 1:05)

Disclaimer: I'm a Canadian and, depending on where you live and who you interact with, it's very easy to never meet someone who speaks with a "Canadian" accent other than "Standard American English".

I'll now leave you with an interesting video about an accent you may have wondered about: The "Transatlantic Accent" which characterized 1930s and 1940s cinema:

https://www.youtube.com/watch?v=Gpv_IkO_ZBU

Reply Score: 3

No it isnt Member since:
2005-11-14

Compared to English dialects, all American English sounds very much the same.

Reply Score: 2

Comment by nojiz
by nojiz on Wed 10th Aug 2016 08:27 UTC
nojiz
Member since:
2016-08-09

Language is incredibly hard, yes, but I actually think people *DO* understand that. That's why their expectations of these technologies are low. I listened to my computer illiterate dad interact with Siri some time ago, and he automatically tried to simplify. Like talking to a small child or a Retriever.

Language requires not only general AI but sensory input, because human interaction is not only sentences but also body language, long contexts, and back-and-forth sequences to resolve ambiguities inherent in all human languages.

Reply Score: 2

Speech and Language
by avgalen on Wed 10th Aug 2016 08:27 UTC
avgalen
Member since:
2010-09-23

Sure, Cortana, Google Now, and Siri all have a voice interface, but that just converts speech to text. You can also bypass this step and just type in your commands. I actually prefer this for most tasks on my PC and only use the voice interface on mobile (in private).

After the Speech2Text step it becomes a matter of text parsing to produce a command that the computer can process. We have come a long way in this kind of text processing, but it is still extremely limited:
* Programming: an extremely strict syntax where any mistake results in compile-time or run-time errors. Basically only for programmers, with major IDEs to help them write commands the computer will understand
* Command line: a strict syntax with a bit more flexibility, but it still requires quite some skill and trial and error for an above-average user to get the wanted results
* Google Search-like interfaces: a very easy interface for a very limited task that just provides you with a list of things that might match what you wanted. Only very rarely do you get what you wanted immediately (hero results)
* WIMP-like interfaces: by far the most used interfaces between you and your machine. An entire interface is tailored to a very limited set of tasks, with lots of thought and code dedicated to helping you perform a specific task. Basically the only interface normal people can use to tell a computer directly what should be done

Human-Machine interfaces are hard. Expecting digital assistants to be able to
* perform every task
* from non-strict-syntax
* from speech
...is a very high expectation that simply isn't realistic, but it is indeed how this is sold to consumers. More realistic would be:
* perform an ever increasing list of basic tasks
* with a commonsense verb+keyword interface that keeps improving
* also from voice, if you speak a supported language well
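A "commonsense verb + keyword" interface of the kind described above can be sketched as a tiny parser: strip filler words, match the first known verb, and treat the rest as the argument. The verbs, filler words, and responses here are purely illustrative, not any real assistant's command set:

```python
import re

# Illustrative verb -> handler table; a real assistant would have many more.
HANDLERS = {
    "call": lambda arg: f"dialing {arg}",
    "play": lambda arg: f"playing {arg}",
    "remind": lambda arg: f"reminder set: {arg}",
}

# Filler words the parser tolerates, so the syntax stays non-strict.
FILLER = re.compile(r"\b(please|could you|hey|me|to)\b", re.IGNORECASE)

def parse_command(utterance):
    """Strip filler, take the first known verb, pass the rest as argument."""
    words = FILLER.sub(" ", utterance.lower()).split()
    for i, word in enumerate(words):
        if word in HANDLERS:
            return HANDLERS[word](" ".join(words[i + 1:]))
    return "sorry, I did not understand that"

print(parse_command("Hey, please call my wife"))   # dialing , my wife? no: dialing my wife
print(parse_command("Could you play some jazz"))
print(parse_command("What is the weather"))        # no known verb: fallback
```

This also shows why such assistants feel like "slow and cumbersome command line interfaces": anything outside the fixed verb table falls straight through to the fallback.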

Reply Score: 2

My Dutch
by cropr on Wed 10th Aug 2016 08:30 UTC
cropr
Member since:
2006-02-14

I am from West-Flanders in Belgium, a region with a very specific Dutch dialect. People from the rest of Belgium have a tough time understanding my dialect; people from the Netherlands don't understand it at all. Even when I speak Standard Dutch, people hear immediately that I am from West-Flanders.

Siri understands roughly 1 out of 5 of my commands, Google Now about 1 out of 3, so neither is really usable. If I set Siri to UK English, it understands my English much better, but the downside is that the UK Siri does not recognize the Dutch names of people and towns. So no improvement there.

When Apple launched the new Apple TV, which supports Siri, it had to delay the launch in Belgium because Siri failed to understand the "Belgian" Dutch commands to switch channels. The French-speaking part of Belgium was upset, because the French Siri worked quite well.

Edited 2016-08-10 08:35 UTC

Reply Score: 1

Accents
by pmac on Wed 10th Aug 2016 09:13 UTC
pmac
Member since:
2009-07-08

I don't think understanding accents is a problem with the current systems. They can convert speech to text very well; it's the next part that's the problem. If Siri et al. can't understand your accent, it's not because speech recognition doesn't work, it's because it hasn't been trained well enough on your accent. For my unusual English accent (Irish, from a part of Ireland with a completely different accent from the rest), Siri understands me perfectly.

I'm not a language expert so maybe there's something about Dutch accents that trip up the speech-to-text engine, but I'd guess it's the case that it just hasn't been trained sufficiently to deal with your accent. That's a data problem.

I agree that they're dumb command lines and are useless when treated as anything but that.

Reply Score: 2

RE: Accents
by jal_ on Wed 10th Aug 2016 10:59 UTC in reply to "Accents"
jal_ Member since:
2006-11-02

Indeed. Thom sketches the problem as if recognizing the speech itself were the hardest part, while of course language, even if spoken with a perfect standard accent, is still very difficult to decipher. To understand each other, people use a common frame of reference, and when that fails, common knowledge. The computer has neither.

Reply Score: 3

A strange commentary from Thom
by birdie on Wed 10th Aug 2016 14:04 UTC
birdie
Member since:
2014-07-15

The original article you mentioned was about computers trying to make sense of what we're saying, not trying to recognize what we're saying - computers are already good at the latter.

Reply Score: 1

RE: A strange commentary from Thom
by darknexus on Wed 10th Aug 2016 14:18 UTC in reply to "A strange commentary from Thom"
darknexus Member since:
2008-07-15

The original article you mentioned was about computers trying to make sense of what we're saying, not trying to recognize what we're saying - computers are already good at the latter.

You could have fooled me, judging by what Siri/Google Now have done with a lot of my requests. I have no idea how they get some of the dictation results they do, and I'm speaking clearly on purpose.

Reply Score: 2

whartung
Member since:
2005-07-06

Ask any married couple about the difficulties of communication.

Ask any person who struggles to get their burger, fries and a coke at a drive thru window. Heck, even a walk up window.

Ever try to talk with someone who constantly interrupts you? Or tries to complete your sentences (I'm guilty of both of these at times)? These are distinct examples of cognition firing on too many cylinders.

We can barely, as human beings, communicate over the printed word. You can ask anyone about misunderstandings over email. You can witness the utter breakdown of communication in any internet forum. Not just trolls, but well-meaning people enraged and simply talking past each other. And this isn't "heat of the moment" discussion over a table; it takes extra work and steps to type that information into a computer and hit send. It's, theoretically, "thoughtful discourse". (Some would certainly argue otherwise.)

Yet it happens routinely.

Communication, through any medium, is a problem "natural" intelligence has yet to solve, much less AI.

I appreciate being able to send simple commands to Siri on my phone. It mostly sorta works ok. It IS better (mostly) than typing the stuff in.

But my cat is a better listener than Siri is.

Reply Score: 3

dionicio
Member since:
2006-07-12

And I'm very optimistic it will not, near term.

With new energy guidelines becoming the norm in the coming years, you can take that a little further, Thom.

Beyond the brute-force, full-harvest, agency-curated approach used by the Leviathans of our time, the issue belongs to another kind of - more organic - computing.

Reply Score: 2

dionicio Member since:
2006-07-12

This doesn't mean we can be careless about soulless market forces:

IT and robotics have reached a threshold. Human work that depends heavily on Pavlovian loops and Turing-style algorithms is about to be wiped out of the world economy.

And that is going to hit Third World economies harder than anywhere else on the globe.

If the hegemonic powers dismiss that, the political map is going to descend into mayhem.

Reply Score: 2

AI is an overrated concept
by allanregistos on Thu 11th Aug 2016 00:41 UTC
allanregistos
Member since:
2011-02-10

When you power up a machine whose only capable function is computing, you are setting the machine's maximum limit.

When you boot a machine that is driven by a software algorithm, you are setting the maximum limit of the machine. It can never reach the intelligence of even a monkey.

Some will say "the IBM machine" defeated the best human player, so the IBM machine is now superior - that is the biggest tech myth you will ever encounter. It is like saying that because a calculator can perform arithmetic calculations magnitudes faster than a human, a piece of plastic is now smarter than any human being.

The mental capability of man is not equal or inferior to the 'calculations' per second performed by a machine.
Mental awareness is not machine awareness, and human perception is orders of magnitude more complex than a machine's perception of reality.

If you use software to power your AI machines, then you are lost. Do not deceive yourselves that your machine can be more intelligent than a monkey.

Reply Score: 2

RE: AI is an overrated concept
by sergio on Thu 11th Aug 2016 01:07 UTC in reply to "AI is an overrated concept "
sergio Member since:
2005-07-06

I totally agree with your POV.

Reply Score: 2

RE: AI is an overrated concept
by agentj on Thu 11th Aug 2016 08:52 UTC in reply to "AI is an overrated concept "
agentj Member since:
2005-08-19

Currently computers can't even match the capabilities of a simple single-cell organism, let alone a monkey.

Reply Score: 2

RE[2]: AI is an overrated concept
by Alfman on Thu 11th Aug 2016 12:37 UTC in reply to "RE: AI is an overrated concept "
Alfman Member since:
2011-01-28

agentj,

Currently computers can't even match the capabilities of a simple single-cell organism, let alone a monkey.


I assume you are talking about self-copying replicators? In the biological domain RNA/DNA serve as replicators, but are not intelligent. If we focus on the digital domain, software algorithms can actually be replicators too, using simple rules like Conway's Game of Life. Neither process is intelligent, though, and I don't see the connection between machine replication and intelligence - one doesn't imply the other.
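As a toy sketch of what I mean (hypothetical code, not taken from any real system): the Game of Life rules fit in a few lines, and a glider pattern effectively copies itself across the grid, one cell down-right every four generations, with no intelligence anywhere in the rules.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After four generations the same glider reappears shifted by (1, 1).
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The rules know nothing about gliders; the "replication" is purely emergent, which is the point.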

Reply Score: 2

allanregistos Member since:
2011-02-10

agentj,

"Currently computers can't even match the capabilities of a simple single-cell organism, let alone a monkey.


I assume you are talking about self-copying replicators? In the biological domain RNA/DNA serve as replicators, but are not intelligent. If we focus on the digital domain, software algorithms can actually be replicators too, using simple rules like Conway's Game of Life. Neither process is intelligent, though, and I don't see the connection between machine replication and intelligence - one doesn't imply the other.
"

He must be speaking of the complexity of a single cell; there is intelligence in how the cell behaves. There is a document somewhere that details the enormous complexity of the living cell: if you magnify it to the size of a city, you will be amazed that its complexity has never been matched by any laboratory built by men. Therefore you are wrong; there is intelligence in every aspect of the cell. For example, every single person in the world begins as a single cell.

Reply Score: 2

RE: AI is an overrated concept
by kwan_e on Thu 11th Aug 2016 23:07 UTC in reply to "AI is an overrated concept "
kwan_e Member since:
2007-02-18

Some will say "the IBM machine" defeated the best human player, so the IBM machine is now superior - that is the biggest tech myth you will ever encounter. It is like saying that because a calculator can perform arithmetic calculations magnitudes faster than a human, a piece of plastic is now smarter than any human being.


You must be living under a rock, because a computer has beaten a top Go player. Go cannot be brute-forced; AlphaGo relies on intuition and has been shown to play in a style that has caused top Go players to rethink how the game is played.

Reply Score: 2

allanregistos Member since:
2011-02-10

"Some will say "the IBM machine" defeated the best human player, so the IBM machine is now superior - that is the biggest tech myth you will ever encounter. It is like saying that because a calculator can perform arithmetic calculations magnitudes faster than a human, a piece of plastic is now smarter than any human being.


You must be living under a rock, because a computer has beaten a top Go player. Go cannot be brute-forced; AlphaGo relies on intuition and has been shown to play in a style that has caused top Go players to rethink how the game is played.
"

And? What's the point? The argument still stands: you can't say that AlphaGo is smarter than a monkey just because it can answer what you ask while the monkey obviously can't. Computers are tuned to perform a certain task; they can't climb a tree as fast as a monkey, for example.
Oh well, you can design a machine that can climb faster than a monkey, but that's all, nothing more.

I fail to see your point here. As I said, the IBM machine AlphaGo defeated the top Go player, and that doesn't mean it is now superior to humans with regard to intelligence. AlphaGo is superior at playing Go, but it is no more intelligent than your pets.

Reply Score: 2

kwan_e Member since:
2007-02-18

Computers are tuned to perform a certain task; they can't climb a tree as fast as a monkey, for example.


What does "climbing a tree" have to do with intelligence? I can't climb a tree as fast as a monkey either.

Oh well, you can design a machine that can climb faster than a monkey, but that's all, nothing more.

Here's a crazy idea: what if you combined many machines into one? Here's another crazy idea: what if that's what has always been done?

I fail to see your point here.


I fail to see your point. You brought up things completely unrelated to intelligence. Then you tried to argue that computers can't be general purpose, when they can, and ARE already.

As I said, the IBM machine AlphaGo defeated the top Go player


AlphaGo is from DeepMind, not IBM. It was developed before DeepMind was acquired by Google.

AlphaGo is superior at playing Go, but it is no more intelligent than your pets.


So what? What's your obsession with superiority? Not everything needs to be on a linear, single-variable scale of superiority.

What is YOUR point? That your mental model of how the world works is wholly unsuitable for contemplating artificial intelligence?

Reply Score: 2

people will change
by unclefester on Thu 11th Aug 2016 03:41 UTC
unclefester
Member since:
2007-01-13

People will eventually simply change their speaking patterns to fit the AI. End of problem.

Australia has essentially lost its regional and class-based accents and most of its local slang over the past 50 years. The old Paul Hogan/Steve Irwin accents have virtually disappeared.

Reply Score: 2

New surface of the digital divide.
by dsmogor on Thu 11th Aug 2016 10:38 UTC
dsmogor
Member since:
2005-09-01

The amount of work needed to support a new language/region in AI-based interfaces is just staggering. This has made me sceptical about the whole AI revolution in recent years.
And the outcome can only be twofold:
Either the technology will simply not evolve the way everyone (and VCs most of all) is expecting,
Or, due to diminishing returns, support for everybody not speaking one of a couple of mainstream languages will languish, creating a new form of digital divide.
Right after making the goods of the internet available to the last 2 billion, we'll create new walls.

Edited 2016-08-11 10:38 UTC

Reply Score: 2

Fer real?
by ezraz on Thu 11th Aug 2016 13:06 UTC
ezraz
Member since:
2012-06-20

Yall'z talkin crazy stuff chya
be like compruters can do my job better than i can goddamn do?
shit, pour me another drink
i feel ya

https://youtu.be/ET4S8MhiSAk

Reply Score: 2

RE: Fer real?
by dionicio on Thu 11th Aug 2016 14:26 UTC in reply to "Fer real? "
dionicio Member since:
2006-07-12

"...my job better than i can goddamn do? "

Someone is damning you, because she|he had to curate your goddamn text after it was screened by the 'system', ezraz ;)

Reply Score: 2

RE[2]: Fer real?
by ezraz on Fri 12th Aug 2016 21:04 UTC in reply to "RE: Fer real? "
ezraz Member since:
2012-06-20

damn straight yup yup the compruters will end up making slang the only human thang left.

don't shoot i'm human!

Reply Score: 2

fabrica64
Member since:
2013-09-19

AI is nothing more than learning from the past using some sort of adapted/adjusted/weighted statistics. A computer cannot hold a conversation and never will, because it cannot understand the meaning and reply with creativity. Computers are deterministic machines...

Edited 2016-08-14 02:40 UTC

Reply Score: 1

kwan_e Member since:
2007-02-18

AI is nothing more than learning from the past using some sort of adapted/adjusted/weighted statistics.


And this is different from humans... how?

A computer cannot hold a conversation and never will, because it cannot understand the meaning and reply with creativity.


Most people can't either. Most people just basically pick from a store of preconceived notions they have and repeat and assert them.

Computers are deterministic machines...


Even if so, the SOFTWARE that runs on computers is not always deterministic. Evolutionary algorithms and neural networks require training. There is nothing deterministic about that. In the case of AlphaGo, not even its creators can determine what their machine is doing.

Reply Score: 2

fabrica64 Member since:
2013-09-19

"AI is nothing more than learning from the past using some sort of adapted/adjusted/weighted statistics.

And this is different from humans... how?
"

A statistic engine may know what is better, but it doesn't know what is beautiful. Real-world example: just let an AI select music for you

"Computers are deterministic machines...

Even if so, the SOFTWARE that runs on computers is not always deterministic. Evolutionary algorithms and neural networks require training. There is nothing deterministic about that. In the case of AlphaGo, not even its creators can determine what their machine is doing.
"

It is: if two computers are provided with exactly the same set of inputs (and training), you'll get exactly the same output.
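A quick sketch of the point (toy code, not any real ML library): pin every source of randomness to a fixed seed and a "training" run becomes perfectly reproducible, bit for bit.

```python
import random

def train(seed, steps=1000):
    """Toy training loop: fit w in y = w*x by stochastic gradient steps."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)            # random initial weight
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)        # random training sample
        target = 3.0 * x                  # the function to learn: y = 3x
        w += 0.1 * (target - w * x) * x   # one gradient step
    return w

# Same seed, same inputs, same training -> exactly the same output.
print(train(42) == train(42))  # True
```

Seed from the clock or OS entropy instead, and runs start to differ, which is where the apparent non-determinism in real systems comes from.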

Some people think the real world never gives the same set of inputs, and that human creativity and diversity are just the result of the chaos in the inputs and the rationality and logic organizing them, so computers can emulate it. I am agnostic; I don't know. Maybe we are just very complex computers, or maybe there is something different in each one of us. But I know computers don't have it.

Edited 2016-08-14 11:11 UTC

Reply Score: 1

ssokolow Member since:
2010-01-21

Some people think the real world never gives the same set of inputs, and that human creativity and diversity are just the result of the chaos in the inputs and the rationality and logic organizing them, so computers can emulate it. I am agnostic; I don't know. Maybe we are just very complex computers, or maybe there is something different in each one of us. But I know computers don't have it.


Look up transposons (A.K.A. jumping genes) and RNA interference (among other things).

https://en.wikipedia.org/wiki/Transposable_element
https://en.wikipedia.org/wiki/RNA_interference

Combine transposons for the impetus with RNA interference as a force multiplier, and the human body has a ready supply of ways to non-deterministically modify what gets "hard-coded" after fertilization.

In fact, I suspect that such a mechanism is responsible for the predispositions we're born with (e.g. the risk aversion I've struggled with my entire life), since it would make sense as a counterpart to teenage rebelliousness for limiting how inflexible culture can make a population.

Reply Score: 2

kwan_e Member since:
2007-02-18

A statistic engine may know what is better, but it doesn't know what is beautiful.


First, what does knowing what is beautiful have ANYTHING to do with AI? Why do people keep shoving things into the category that has no relevance?

Second, do you know what else doesn't know what is beautiful? People. People can have vastly different ideas of beautiful. There is no universal agreed upon standard of beauty, and in fact great tragedies of humanity have come from people trying to enforce any standard of beauty.

Real-world example: just let an AI select music for you


And yet, companies like Amazon and Google have made massive amounts of money suggesting things to people that they may be interested in.

It is: if two computers are provided with exactly the same set of inputs (and training), you'll get exactly the same output.


Not true. Many AI algorithms rely on a randomizing step in an attempt to escape local maxima, and can find surprising results that may or may not be repeatable. In fact there has been a lot of research into new ways of computing that allow errors in order to find solutions that are harder to reach by conventional methods.
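A toy sketch of that randomizing step (hypothetical code, not from any particular system): plain hill climbing strands on a local maximum, while random restarts let the search escape it.

```python
import random

def f(x):
    # Two peaks: a local one at x=20 (f=0) and the global one at
    # x=80 (f=10), with a valley in between.
    return -abs(x - 20) if x < 50 else 10 - 0.5 * abs(x - 80)

def greedy(start):
    """Deterministic hill climbing: move to a better neighbour until stuck."""
    x = start
    while True:
        best = max((x - 1, x, x + 1), key=f)
        if best == x:
            return x
        x = best

print(greedy(0))   # 20 -- the deterministic climb strands on the local peak

# The randomizing step: restart the same climb from random points.
rng = random.Random(0)
best = max((greedy(rng.randint(0, 99)) for _ in range(10)), key=f)
# With enough restarts the search almost always reaches the global
# peak at x=80, a result the purely deterministic run can never find.
```

The climb itself stays deterministic; only the choice of starting point is random, and that alone changes which answer comes out.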

Reply Score: 2

fabrica64 Member since:
2013-09-19

"A statistic engine may know what is better, but it doesn't know what is beautiful.


First, what does knowing what is beautiful have ANYTHING to do with AI? Why do people keep shoving things into the category that has no relevance?
"
There are two ways to make money with AI. First, automating things humans do, but not so well, e.g. driving a car. Second, using AI to implement a semantic web, i.e. using computers to categorize web content (photos, videos, products) and make useful suggestions tailored to each of us - i.e. understanding what is beautiful to each of us and selling it. Most of the money will be in the latter, if anyone finds a way to do it. You fail to see where the money is :-)

Second, do you know what else doesn't know what is beautiful? People. People can have vastly different ideas of beautiful. There is no universal agreed upon standard of beauty, and in fact great tragedies of humanity have come from people trying to enforce any standard of beauty.

This is exactly the difference between a computer and a human, and that's why AI will never be like a human - you said it! This is why we create, argue, love and hate. None of this can be done by a deterministic machine that always has a deterministic output.

Are you a programmer? A computer instruction has always just one and only result; computers cannot create because they cannot escape this. OK, you can put randomicity in it. You say that randomicity + statistics add up to intelligence and consciousness?

And yet, companies like Amazon and Google have made massive amounts of money suggesting things to people that they may be interested in.

Their suggestions are based on statistics about the "kind" of music you listen to, but as the "kind" is subjective, it just happens that in hundreds of wrong (aka random) suggestions you end up liking one or two. Radio DJs do it much better, but cost more :-)

The way music is publicized nowadays is exactly the reason music quality is so low compared to just a couple of decades ago. Yet they make money, that's for sure...

Reply Score: 1

ssokolow Member since:
2010-01-21

There are two ways to make money with AI. First, automating things humans do, but not so well, e.g. driving a car. Second, using AI to implement a semantic web, i.e. using computers to categorize web content (photos, videos, products) and make useful suggestions tailored to each of us - i.e. understanding what is beautiful to each of us and selling it. Most of the money will be in the latter, if anyone finds a way to do it. You fail to see where the money is :-)


You're massively over-simplifying, because AI is such a fuzzy-edged concept. Monetizable computer tasks exist which can't be grouped into either of those two categories, and not everything that automates is AI.

I'd be more direct, but I'm honestly not sure what you're trying to get at.

Are you a programmer? A computer instruction has always just one and only result; computers cannot create because they cannot escape this. OK, you can put randomicity in it. You say that randomicity + statistics add up to intelligence and consciousness?


I'm a programmer and, if you're saying that, you have a far more shallow understanding of the discipline than I do.

(Also, "randomicity" isn't a word, it's "randomness". If you want to use the "-icity" ending, the word you're looking for is "stochasticity".)

Saying "a computer instruction has always just one and only result" is the same as saying "a neuron, with a given trained state and set of inputs, will always produce the same output".

Your "mis-argument from differences in scale" is like saying that living organisms are impossible because the universe contains only energy and subatomic particles.

Their suggestions are based on statistics about the "kind" of music you listen to, but as the "kind" is subjective, it just happens that in hundreds of wrong (aka random) suggestions you end up liking one or two. Radio DJs do it much better, but cost more :-)


False. You're just looking at bad examples.

For example:

1. Garbage-in, Garbage-out: Last.fm is built on the false assumption that it's meaningful to recommend a band or artist when there can be such a huge gulf between the song(s) you like from them and other songs they produced.

(ie. It's not very helpful to give a position down to the millimetre if the margin of error is give or take one kilometre.)

2. Anything (even a human) will recommend poorly if the data is too varied for patterns to be clear from the number of available data points.

3. Stop holding computers up to higher standards than humans. We've never had the resources to hire enough radio DJs (of sufficient skill) to cover the breadth and depth of music that we're throwing at modern collaborative filtering systems.

If you want a proper test case, try the U.S. NetFlix catalogue. Last I heard, they were at the cutting edge for collaborative filtering algorithms (the technical term for that kind of recommendation system).

See also "The NetFlix Prize". It was a competition for scientists that they used to run, where the first team to improve over their existing results by a certain amount got a million dollars.

(They were forced to stop after a judge had someone pull his video rental records out of a dumpster and got a law passed that, as a side-effect, required 100% certainty that nobody would ever be able to de-anonymize the test data used for NetFlix Prize runs.)

Edited 2016-08-14 21:06 UTC

Reply Score: 2

fabrica64 Member since:
2013-09-19

It's obvious I am simplifying. What I meant is that a computer only does what it has been programmed for; it never deviates. It may learn, but only in the way it was programmed to. Maybe we humans also have no real creativity and it's only randomness and trial and error. I feel that's not the case, but maybe my feelings are just programmed that way.

(Also, "randomicity" isn't a word, it's "randomness". If you want to use the "-icity" ending, the word you're looking for is "stochasticity".)

http://www.oxforddictionaries.com/us/definition/american_english/ra...
Well, I am not a native English speaker, so I trusted a dictionary, but they may be wrong...

Reply Score: 1

Alfman Member since:
2011-01-28

fabrica64,

It's obvious I am simplifying. What I meant is that a computer only does what it has been programmed for; it never deviates. It may learn, but only in the way it was programmed to. Maybe we humans also have no real creativity and it's only randomness and trial and error. I feel that's not the case, but maybe my feelings are just programmed that way.


Think about evolution - it's a very "dumb" process. Random mutations over millennia can enable life to adapt in ways it was never programmed for. Random imperfections are what make evolution possible, ultimately resulting in intelligent life.
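The classic "weasel" toy program (after Dawkins - a hypothetical sketch, not anyone's production code) shows this in a few lines: blind random mutation plus dumb selection, nothing else, yet the string reliably adapts toward a target no single step of the loop "knows" about.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
rng = random.Random(1)

def fitness(s):
    """Number of characters that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

parent = "".join(rng.choice(ALPHABET) for _ in TARGET)  # random start
generations = 0
while parent != TARGET:
    # Keep the fittest of the parent and 100 random mutants.
    parent = max([parent] + [mutate(parent) for _ in range(100)],
                 key=fitness)
    generations += 1

print(parent)  # METHINKS IT IS LIKE A WEASEL
```

Nothing in `mutate` is smart; selection alone turns random imperfections into adaptation, which is the whole point of the argument above.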

Reply Score: 2