Enter the message bots. As 2016 dawns, there’s a sense in Silicon Valley that the decades-old fantasy of a true digital assistant is due to roar back into the mainstream. If the trend in past years has been assistants powered by voice – Siri, Alexa, Cortana – in 2016 the focus is shifting to text. And if the bots come, as industry insiders are betting they will, there will be casualties: with artificial intelligence doing the searching for us, Google may see fewer queries. Our AI-powered assistants will manage more and more of our digital activities, eventually diminishing the importance of individual, siloed apps, and the app stores that sell them. Many websites could come to feel as outdated as GeoCities pages – and some companies might ditch them entirely. Nearly all of the information they provide can be fed into a bot and delivered via messaging apps.
This seems a bit… overblown. Bots are going to revolutionise a lot over the coming decades, but messaging bots replacing the point-and-click interface we’ve been using ever since Xerox invented it?
Much like the death of the PC or of Apple, the end of our current GUI metaphor has been predicted more times than I can remember – I don’t see how this one is any different.
These “bot” assistants suck. According to predictions we should already have spaceships, AI, hoverboards, flying cars, cold fusion (never mind that such a reactor would kill everyone in the same room because of radiation), and evil robots (why a robot would want to kill humanity still baffles me). Unless it’s connected directly to the brain and is an extension of the human body, it ain’t gonna happen.
And what about the accent?
If predictions are as accurate as the weather reports, there’s no use asking where the distortion in space-time came from.
You cannot imagine how often people redo things from scratch, losing time and resources, just because of private companies and patents.
But even in FOSS things are no brighter: forks and systemd.
Imagine if everything were aggregated in a wiki-like fashion, with one branch for cars, one for operating systems, one for space rockets: technological advancement and improvement would go ahead at light speed.
Like, no Blue Origin vs. SpaceX dick-measuring contest, just pooling their knowledge to go forward, to infinity and beyond.
But I know, that’s only a dream I had once upon a time.
Diversity and competition are always good. Having only one brand/model of car would mean having only shitty cars overall, because different people have different needs, and you can’t make one perfect car for everything.
Also, we already had such a thing in the former Soviet Union: only one or two car models accessible to the broad, non-privileged public, and both of them stank really badly. Probably some of the worst cars ever created in human history.
I think you misread the comment – he was talking about sharing the knowledge and research costs, not about having only one manufacturer.
Consider all the money dedicated to cancer research. Many individuals donate money to research charities, but the benefits of such research accrue to large multinational pharma companies, who then frequently screw many of those same people on access to the resulting medicines, to say nothing of being motivated to suppress negative results in drug trials (cf. Seroxat) and of other malpractices such as price gouging.
If you really think “Diversity and Competition is always good”, you really need to do more reading.
systemd, regardless of your opinion of it, is an interesting case study in collaborative work in this day and age.
SysVinit sucks; it doesn’t do what most people want.
Apple creates launchd, which is better, but the license is restrictive, so Ubuntu creates Upstart.
Upstart is good, but has some issues, and… its contribution requirements are too restrictive, so Red Hat devs create systemd, which is arguably better.
systemd has no license issues for most people (LGPL v2.1), no contribution restrictions, and contributors from a variety of distros. Because of this, I think it will stick around for a while without a whole new init system taking mindshare.
So each successive project gained some insight from the previous design, but had to start as a different project rather than refactor the previous one.
The question remains: how many man-months’ worth of effort have been spent working around badly designed technology and/or stupid licenses, when all the people involved could have worked together on the main problem?
Just like Blue Origin and SpaceX basically doing exactly the same thing. Twice. If all that workload had been spent on only one project, where would it be by now? Landing on Mars and coming back already?
I agree with what you’re saying. I keep seeing study after study on the money lost to patent trolls, but no one seems to be asking the question: how much money is being lost working around even ostensibly “good” patents in order to do something similar but better that the patent holder never envisioned? That’s becoming an ever more acute problem as technology advances faster, while patent legal systems remain mired in the slower pace of technological development from a couple of hundred years ago, when patent protections were devised to enhance product development.
That is a good question. Do you get better results from two independent groups working towards the same goal, or from one group with twice the resources?
Consider changing tires on race cars. Sure, one guy is slow. Two is better. If you graph the number of people helping against the time taken, you’d hope for a linear improvement, but at some point you stop seeing that benefit from adding a single person. Maybe at five you still improve your time, but not as much as adding number four did. By the time you get to nine, you may actually start losing time if there isn’t enough space to move efficiently.
I think large projects can turn out that way too. Too many chefs and what not. (There’s a rough sketch of that curve below.)
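Just to put numbers on that curve: here is a tiny back-of-the-envelope model of the pit-crew analogy in Python. All the constants and the overhead formula are made up purely to illustrate the shape of the curve; this is a sketch, not data from anywhere.

def tire_change_time(crew_size, work=60.0, overhead=0.5):
    # Useful work is split across the crew, but coordination overhead
    # grows with the number of pairwise interactions (people getting
    # in each other's way). Both constants are invented for illustration.
    pairs = crew_size * (crew_size - 1) / 2
    return work / crew_size + overhead * pairs

for n in range(1, 10):
    print(f"{n} people: {tire_change_time(n):5.1f} s")

With these toy numbers the time keeps improving up to about five people, then flattens out and gets worse again by nine, which is exactly the kind of curve described above.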
Well, here one large team would be dedicated to landing, while the added resources could be dedicated to in-space cruising, another to… When you reach the state of the art, the remaining improvements can only be cosmetic.
I would digress too much about chefs, so I’ll pass on this one.
You are confusing predictions with science fiction.
It’s not that we don’t have them. It’s just that they aren’t very good yet:
http://www.aeromobil.com/
It’s not *just* that they aren’t very good yet; it’s also that no one in their right mind wants the average motor vehicle driver flying those things. That’s why there’s no serious work being done on flying cars.
No, you just redefined AI to include machine learning. We still don’t have the “hard AI” prophesied in ancient scrolls. We have computer programs that can translate somewhat passably between English and Mandarin, but none that can offer an analysis of The Catcher in the Rye on an eighth grade level.
But that’s OK. Machine learning is what we’re going to get, and AI was never more than a pipe dream.
That is not a good criterion for “hard AI”, because there is no agreed standard on what constitutes an analysis of The Catcher in the Rye at an eighth-grade level. In fact, things like the Sokal affair show how easy it is for machines to make up nonsense that can trick humans.
Furthermore, the shifting goalposts of what counts as “hard AI” are just people trying to put human capabilities beyond the reach of computers (capabilities that most humans don’t actually have either) in order to satisfy a deep-seated desire to feel untouchably special.
Most humans can’t play chess, write symphonies or solve outstanding conjectures of number theory, let alone break conventions and create whole new fields/paradigms. Most humans require other humans to teach them stuff too, so the argument that computers can only do what they’re taught equally applies to humans. Modern AI is probably more intelligent than the average person in, say, the Roman Republic.
No, that’s not at all what the Sokal affair shows. The Sokal hoax was done by a human, and the point was that some po-mo academic journals would accept texts they admittedly didn’t understand. Later on, other academic journals (even in maths) have been tricked into accepting computer-generated nonsense, for various reasons, though without a Sokal-like effect. But all of that is still just nonsense! You can’t discuss a book with a nonsense generator, but you can with an eighth-grader.
It doesn’t matter that there’s no standard for an analysis; what matters is that, to the AI, reading The Catcher in the Rye isn’t a meaningful experience. It doesn’t understand language, it just performs linguistic operations.
Again, for most people reading The Catcher in the Rye isn’t a meaningful experience either. They’re just performing linguistic operations. This is too arbitrary and subjective, and it’s an example of what I was talking about: things like The Catcher in the Rye are cultural accidents. It’s not even a sensible way to characterize human intelligence.
Now you’re just bullshitting, exemplifying yet another specifically human form of intelligence that computers have yet to master.
And yes, it’s all about subjectivity: computers lack that (and objectivity, too), and thus fail to replicate human thinking. Humans are socially situated animals. Looking for objectivity in replicating human behaviour is looking in all the wrong places.
I’m not looking for objectivity. I’m saying there is no objectivity. Maybe you want to reread the whole thread because you’re basically arguing for what I said in my first comment.
OK, so define AI for me properly, then. And no, “can offer an analysis of The Catcher in the Rye on an eighth grade level” is not a proper definition. I think that “AI” at this point is more of an esoteric term without a single proper definition, much like “soul”.
Well, my problem with “AI-powered assistants” is the “AI” in there: we are currently very, very far from the point where it could actually be intelligent, and thus useful in the term’s proper sense. In today’s newspeak, “AI” is used for a lot of things that have, at most, a very distant relation to artificial intelligence. And until we get closer to that point, any such-powered assistants will only make everyone’s life miserable, reducing both general usability (including the time it takes to obtain relevant information) and usefulness (i.e., whether what they help with is actually something we find practically useful) by orders of magnitude.
I believe the word “artificial” covers it. If AI were intelligent, it would just be I.
In much the same way, artificial legs aren’t really good at being legs, but they work well enough.
There are some things I used to have to do that I no longer do, thanks to Google Now notifications:
1) Do anything to see a sports score I’m interested in.
2) Figure out how to track a package, and do that.
3) Look at a weather app to see if significant storms were headed my way.
It’s also trying to aggregate all of the stories I might be interested in, as well as real-world events around me that I’d like to explore.
And it’s only getting better. So undoubtedly there is a lot of hyperbole in the story, but some apps have already fallen by the wayside due to Google Now. I’m sure more will follow.
I always got angry at the ‘death of the PC’ stories as well, because they all use the wrong argument. But taking the term “personal computer” literally, I would say that Windows 10 and digital assistants that gather all your personal information, making it less personal, are what is killing the personal computing device….