Linked by Thom Holwerda on Thu 14th Jan 2016 00:00 UTC
Talk, Rumors, X Versus Y

Enter the message bots. As 2016 dawns, there's a sense in Silicon Valley that the decades-old fantasy of a true digital assistant is due to roar back into the mainstream. If the trend in past years has been assistants powered by voice - Siri, Alexa, Cortana - in 2016 the focus is shifting to text. And if the bots come, as industry insiders are betting they will, there will be casualties: with artificial intelligence doing the searching for us, Google may see fewer queries. Our AI-powered assistants will manage more and more of our digital activities, eventually diminishing the importance of individual, siloed apps, and the app stores that sell them. Many websites could come to feel as outdated as GeoCities pages - and some companies might ditch them entirely. Nearly all of the information they provide can be fed into a bot and delivered via messaging apps.

This seems a bit... overblown. Bots are going to revolutionise a lot over the coming decades, but messaging bots replacing the point-and-click interface we've been using ever since Xerox invented it?

Much like the death of the PC or of Apple, the end of our current GUI metaphor has been predicted more times than I can remember - I don't see how this one is any different.

AI
by agentj on Thu 14th Jan 2016 01:18 UTC
agentj
Member since:
2005-08-19

These "bot" assistants suck dick. According to predictions we should already have space ships, AI, hover boards, flying cars, cold fusion (never mind that such a reactor would kill everyone in the same room because of radiation), and evil robots (why a robot would want to kill humanity still baffles me). Unless it's connected directly to the brain and is an extension of the human body, it ain't gonna happen.

Reply Score: 1

RE: AI
by Kochise on Thu 14th Jan 2016 06:37 UTC in reply to "AI"
Kochise Member since:
2006-03-03

And what about the accent?

If predictions are as accurate as weather reports, there's no use asking where the distortion in space-time came from.

You cannot imagine how many people redo things from scratch, losing time and resources, just because of private companies and patents.

But even in FOSS, things are not much brighter: forks and systemd.

Imagine if everything were aggregated in a wiki-like fashion, with only one branch for cars, one for operating systems, one for space rockets - technological advancement and improvement would go ahead at light speed.

Like, no Blue Origin vs. SpaceX dicksize fighting. Pooling their knowledge to go forward, to infinity and beyond.

But I know, that's only a dream I had once upon a time.

Reply Score: 2

RE[2]: AI
by Andrius on Thu 14th Jan 2016 07:35 UTC in reply to "RE: AI"
Andrius Member since:
2016-01-11

Diversity and competition are always good. Having only one brand/model of car would mean having only shitty cars overall, because different people have different needs, and you can't make one perfect car for everything.
Also, we already had such a thing in the former Soviet Union - only 1-2 different car models accessible to the broad, non-privileged public, and both of them stank really badly. Probably some of the worst cars ever created in human history.

Reply Score: 3

RE[3]: AI
by RobG on Thu 14th Jan 2016 13:27 UTC in reply to "RE[2]: AI"
RobG Member since:
2012-10-17

I think you misread the comment - he was talking about sharing the knowledge and research costs, not about having only one manufacturer.

Consider all the money dedicated to cancer research. Many individuals donate money to research charities, but the benefits of such research accrue to large multinational pharma companies, who frequently then screw many of those same people over access to those medicines - to say nothing of their motivation to suppress negative results in drug trials (cf. Seroxat) and many other malpractices, such as price gouging.


If you really think "diversity and competition is always good", you need to do more reading.

Reply Score: 5

RE[2]: AI
by Bill Shooter of Bul on Thu 14th Jan 2016 18:51 UTC in reply to "RE: AI"
Bill Shooter of Bul Member since:
2006-07-14

systemd, regardless of your opinion of it, is an interesting case study in collaborative work in this day and age.

SysVinit sucks; it doesn't do what most people want.

Apple creates launchd, which is better, but the license is restrictive, so Ubuntu creates Upstart.

Upstart is good, but has some issues, and... the contribution requirements are too restrictive, so Red Hat devs create systemd, which is arguably better.

systemd has no license issues for most people (GPLv2), has no contribution restrictions, and has contributors from a variety of distros. Because of this, I think it will stick around for a while without a whole new init system taking mindshare.

So each version gained some insight from the previous design, but had to be created as a different project rather than a refactoring of the previous one.
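To make the contrast concrete: the declarative style launchd introduced, and Upstart and systemd adopted, replaces a page of SysVinit shell boilerplate with a short unit file. A minimal sketch (the service name and binary path here are hypothetical):

```ini
# /etc/systemd/system/example.service - hypothetical unit, for illustration only
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The dependency ordering, restart policy, and run level (target) that a SysVinit script handled imperatively are all declared here, and the init system does the rest.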

Reply Score: 3

RE[3]: AI
by Kochise on Thu 14th Jan 2016 22:57 UTC in reply to "RE[2]: AI"
Kochise Member since:
2006-03-03

The question remains: how many man-months have been spent trying to work around badly designed technology and/or stupid licenses, when all the people involved could have worked together on the main problem?

Just like Blue Origin and SpaceX basically doing exactly the same thing. Twice. If all that workload had been spent on only one project, where would it be by now? Already landing on Mars and coming back?

Reply Score: 2

RE[4]: AI
by stormcrow on Fri 15th Jan 2016 02:25 UTC in reply to "RE[3]: AI"
stormcrow Member since:
2015-03-10

I agree with what you're saying. I keep seeing study after study on money lost to patent trolls, but no one seems to be asking the question: how much money is being lost working around even ostensibly "good" patents in order to do something similar but better, in ways not envisioned by the patent holder? That's becoming a more and more acute problem the faster technology advances, while patent legal systems remain mired in the slower pace of technological development from a couple hundred years ago, when patent protections were envisaged as a way to enhance product development.

Reply Score: 2

RE[4]: AI
by Bill Shooter of Bul on Fri 15th Jan 2016 03:37 UTC in reply to "RE[3]: AI"
Bill Shooter of Bul Member since:
2006-07-14

That is a good question. Do you get better results from two independent groups working towards the same goal? Or from one group with twice as much resources?

Consider changing tires on race cars. Sure, one guy is slow. Two is better. If you graph the number of people helping against the time taken, you should initially see a linear improvement, but at some point each added person stops paying off. Maybe at five you still improve your time, but not by as much as adding number four did. By the time you get to nine, you may actually start losing time if there isn't enough space to move efficiently.

I think large projects can turn out that way too. Too many chefs and what not.
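The pit-crew intuition can be sketched as a toy model: the parallelizable work shrinks with crew size while coordination overhead grows with it, so total time bottoms out and then climbs. All constants below are made up purely for illustration.

```python
# Toy model: time to change tires as a function of crew size.
# fixed: serial setup that never parallelizes (seconds)
# work: parallelizable work, split evenly across the crew (seconds)
# overhead: coordination cost per extra person (seconds/person)
def tire_change_time(crew_size: int,
                     fixed: float = 5.0,
                     work: float = 60.0,
                     overhead: float = 1.0) -> float:
    return fixed + work / crew_size + overhead * crew_size

times = {n: round(tire_change_time(n), 1) for n in range(1, 10)}
best = min(times, key=times.get)
print(times)                     # time falls steeply at first, then flattens
print("best crew size:", best)   # past this point, extra people slow things down
```

With these invented constants the ninth crew member is slower than the eighth - the same shape as the "too many chefs" effect, and roughly Amdahl's law with an added coordination term.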

Reply Score: 3

RE[5]: AI
by Kochise on Fri 15th Jan 2016 06:13 UTC in reply to "RE[4]: AI"
Kochise Member since:
2006-03-03

Well, here one large team would be dedicated to landing, then the added resources could be dedicated to interplanetary cruising, another to... When you reach the state of the art, the only improvements left are cosmetic.

I would digress too much about chefs, so I'll pass on this one.

Reply Score: 2

RE: AI
by Andrius on Thu 14th Jan 2016 07:29 UTC in reply to "AI"
Andrius Member since:
2016-01-11

"These "bot" assistants suck dick."

Dick-sucking personal assistant? Sign me up for one of these!

"According to predictions we should already have space ships"

We do have those. A company called NASA makes them.

"AI"

We have this one, too.

"hover boards"

And this, also.

"flying cars"

OK, we don't have that yet.

"cold fusion (never mind that such reactor would kill everyone in same room because of radiation)"

Why would anyone be in the same room as a reactor core? Do we have unprotected people working inside reactor cores in current power plants? Is the radiation from today's nuclear reactors somehow less harmful than the radiation from cold fusion?

"evil robots"

You are confusing predictions with science fiction.

Reply Score: 4

RE[2]: AI
by signals on Thu 14th Jan 2016 13:26 UTC in reply to "RE: AI"
signals Member since:
2005-07-08

"flying cars

OK, we don't have that yet."

It's not that we don't have them. It's just that they aren't very good yet:

http://www.aeromobil.com/

Edited 2016-01-14 13:27 UTC

Reply Score: 4

RE[3]: AI
by stormcrow on Fri 15th Jan 2016 02:14 UTC in reply to "RE[2]: AI"
stormcrow Member since:
2015-03-10

It's not *just* that they aren't very good yet; it's also that no one in their right mind wants the average motor vehicle driver flying those things. That's why there's no serious work being done on flying cars.

Reply Score: 1

RE[2]: AI
by No it isnt on Fri 15th Jan 2016 06:28 UTC in reply to "RE: AI"
No it isnt Member since:
2005-11-14

"AI

We have this one, too.
"

No, you just redefined AI to include machine learning. We still don't have the "hard AI" prophesied in ancient scrolls. We have computer programs that can translate somewhat passably between English and Mandarin, but none that can offer an analysis of The Catcher in the Rye on an eighth grade level.

But that's OK. Machine learning is what we're going to get, and AI was never more than a pipe dream.

Reply Score: 3

RE[3]: AI
by kwan_e on Fri 15th Jan 2016 08:30 UTC in reply to "RE[2]: AI"
kwan_e Member since:
2007-02-18

We have computer programs that can translate somewhat passably between English and Mandarin, but none that can offer an analysis of The Catcher in the Rye on an eighth grade level.


That is not a good criterion for "hard AI", because there is no agreed standard on what constitutes an analysis of The Catcher in the Rye at an eighth-grade level. In fact, things like the Sokal affair show how easy it is for machines to make up nonsense that can trick humans.

Furthermore, the shifting goalposts of what is considered "hard AI" are just people trying to put human capabilities beyond computers - capabilities that most humans don't actually have either - in order to satisfy a deep-seated desire to feel untouchably special.

Most humans can't play chess, write symphonies, or solve outstanding conjectures in number theory, let alone break conventions and create whole new fields or paradigms. Most humans require other humans to teach them stuff too, so the argument that computers can only do what they're taught applies equally to humans. Modern AI is probably more intelligent than the average person in, say, the Roman Republic.

Edited 2016-01-15 08:32 UTC

Reply Score: 4

RE[4]: AI
by No it isnt on Mon 18th Jan 2016 18:38 UTC in reply to "RE[3]: AI"
No it isnt Member since:
2005-11-14

That is not a good criteria for "hard AI" because there is no agreed standard on what constitutes an analysis of The Catcher in the Rye on an eighth grade level. In fact, things like the Sokal affair shows how easy it is for machines to make up nonsense that can trick humans.


No, that's not at all what the Sokal affair shows. The Sokal hoax was done by a human, and the point was that some po-mo academic journals would accept texts that they admittedly didn't understand. Later on, other academic journals (even in maths) have been tricked into accepting computer-generated nonsense, for various reasons, without a Sokal-like effect. But all of that is still just nonsense! You can't discuss a book with a nonsense generator, but you can with an eighth-grader.

It doesn't matter that there's no standard for an analysis; what matters is that to the AI, reading The Catcher in the Rye isn't a meaningful experience. It doesn't understand language, it just performs linguistic operations.

Reply Score: 2

RE[5]: AI
by kwan_e on Mon 18th Jan 2016 23:32 UTC in reply to "RE[4]: AI"
kwan_e Member since:
2007-02-18

Later on, other academic journals (even in maths) have been tricked into accepting computer-generated nonsense, for various reasons without a Sokal-like effect. But all of that is still just nonsense!


I don't see how that proves your point at all. Computers have been able to trick humans.

It doesn't matter that there's no standard for an analysis, what matters is that to the AI, reading The Catcher in the Rye isn't a meaningful experience. It doesn't understand language, it just performs linguistic operations.


Again, for most people, reading The Catcher in the Rye isn't a meaningful experience. They're just performing linguistic operations. This is too arbitrary and subjective, and an example of what I was talking about. Things like The Catcher in the Rye are cultural accidents. It's not even a sensible way to characterize human intelligence.

Reply Score: 2

RE[6]: AI
by No it isnt on Tue 19th Jan 2016 17:54 UTC in reply to "RE[5]: AI"
No it isnt Member since:
2005-11-14

Now you're just bullshitting, exemplifying yet another specifically human form of intelligence that computers have yet to master.

And yes, it's all about subjectivity: computers lack that (and objectivity, too), and thus fail to replicate human thinking. Humans are socially situated animals. Looking for objectivity in replicating human behaviour is looking in all the wrong places.

Reply Score: 2

RE[7]: AI
by kwan_e on Tue 19th Jan 2016 23:09 UTC in reply to "RE[6]: AI"
kwan_e Member since:
2007-02-18

Now you're just bullshitting, exemplifying yet another specifically human form of intelligence that computers have yet to master.


You're the one doing the exemplifying. Seriously, "analyse The Catcher in the Rye at an eighth-grade level"? What do you call that, if not exemplifying another specifically human form of intelligence that computers have yet to master?

And yes, it's all about subjectivity: computers lack that (and objectivity, too), and thus fail to replicate human thinking.


We're talking about artificial intelligence. Who says it has to replicate human thinking? Who says human thinking is the exemplification of intelligence? That's a huge bait and switch, don't you think? And this is my problem with people who say AI isn't intelligence: they say computers will never be intelligent, then they bring up bullshit like yours that has nothing to do with intelligence at all.

There's a reason why it's called "Artificial Intelligence" and not "Artificial Human".

Looking for objectivity in replicating human behaviour is looking in all the wrong places.


I'm not looking for objectivity. I'm saying there is no objectivity. Maybe you want to reread the whole thread because you're basically arguing for what I said in my first comment.

Reply Score: 2

RE[3]: AI
by Andrius on Fri 15th Jan 2016 09:11 UTC in reply to "RE[2]: AI"
Andrius Member since:
2016-01-11

No, you just redefined AI to include machine learning.

OK, so define AI for me properly, then. And no, "can offer an analysis of The Catcher in the Rye on an eighth grade level" is not a proper definition. I think that "AI" at this point is more of an esoteric term without a single proper definition, much like "soul".

Reply Score: 2

AI-powered assistants
by l3v1 on Thu 14th Jan 2016 07:56 UTC
l3v1
Member since:
2005-07-06

Well, my problem with "AI-powered assistants" is the use of the term "AI" there, which is currently far, very far, from a point where it could actually be intelligent, and thus useful in the term's proper sense. In today's newspeak, "AI" is used for a lot of things which, at most, have a very, very distant relation to artificial intelligence. And until we reach a closer point, any such-powered assistants will only make everyone's life miserable and reduce, by orders of magnitude, both general usability - including the time it takes to obtain relevant information - and usefulness - i.e., whether what they help with is actually something we find practically useful.

Reply Score: 3

RE: AI-powered assistants
by kwan_e on Thu 14th Jan 2016 10:25 UTC in reply to "AI-powered assistants"
kwan_e Member since:
2007-02-18

Well, my problem with "AI-powered assistants" is the used "AI" term there, which is currently far, very far from a point where it could actually be intelligent,


I believe the word "artificial" covers it. If AI were intelligent, it would just be I.

In much the same way, artificial legs aren't really good at being legs, but they work well enough.

Reply Score: 2

Google Now
by Bill Shooter of Bul on Thu 14th Jan 2016 15:36 UTC
Bill Shooter of Bul
Member since:
2006-07-14

There are some things I used to have to do that I no longer do, thanks to Google Now notifications:

1) Do anything to see a sports score I'm interested in.
2) Figure out how to track a package, and do that.
3) Look at a weather app to see if significant storms were headed my way.

It's also trying to aggregate all of the stories I might be interested in, as well as real-world events around me that I'd like to explore.

And it's only getting better. So undoubtedly there is a lot of hyperbole in the story, but some apps have already fallen by the wayside due to Google Now. I'm sure more will follow.

Reply Score: 3

Death of the PC...
by leech on Thu 14th Jan 2016 20:44 UTC
leech
Member since:
2006-01-10

I always got angry at the 'death of the PC' stories as well, because they all use the wrong argument. Taking the term Personal Computer literally, though, I would say that Windows 10 and digital assistants, which all gather your personal information and make computing less personal, are what's actually killing the personal computing device...

Reply Score: 5