Bing AI did a great job of creating media hype, but their product is no better than Google’s Bard. At least as far as we can tell from the limited information we have about both.
I am shocked that the Bing team created this pre-recorded demo filled with inaccurate information, and confidently presented it to the world as if it were good.
I am even more shocked that this trick worked, and everyone jumped on the Bing AI hype train without doing an ounce of due diligence.
Bing AI is incapable of extracting accurate numbers from a document, and confidently makes up information even when it claims to have sources.
It is definitely not ready for launch, and should not be used by anyone who wants an accurate model of reality.
Tools like ChatGPT are fun novelties, and there’s definitely interesting technology underpinning them, but they are so clearly not very good at what they’re supposed to be good at. It is entirely irresponsible of Microsoft, OpenAI, and Google to throw these alpha versions out there where the Facebook boomers can find them. Have they learned nothing from social media and its deeply corrupting influence on the general population’s ability to separate truth from fiction? And now we have “artificial intelligences” telling these very same gullible people flat-out lies as truth, presented in a way that gives these lies even more of a veneer of reliability and trustworthiness than a tweet or Facebook post ever did?
These tools are going to lead to a brand new wave of misinformation and lies, and society is going to pay the price. Again.
Look not at Microsoft, Google, or Apple. Low level players.
Climb up a level: get to the bankers, big investment funds, the Blackrock and Vanguards of this world. See how they operate. See how they control the corporations. See how they’re all basically owned by the same people over and over. Keep climbing up.
Once you go beyond the façade of choice and diversity and see the big picture, ask yourself: why do all these powerful actors want to establish AI as The Source of Truth?
Partly agree with you and also partly agree with Thom’s view.
But let’s try to stay optimistic. Often the answer is simply “because it can be done”, because people want to believe in technological progress and equate it with cultural or spiritual progress (a dangerous assumption).
Conspiracy theories help some people make sense of highly complex systems. There is “comfort” in thinking that a group of people, even if evil, has it all figured out, rather than facing the (for some, terrifying) possibility that it is all a concurrently chaotic process in which people develop technologies that other people find useful, with no rhyme or reason beyond people trying to impress their mate with displays of wealth/achievement/power in order to get laid (or vice versa).
There’s comfort in believing in chaotic systems rather than following the evidence and the money trail. Perhaps even more comforting to assign labels to those who simply bothered to do so.
What is your evidence exactly?
What are yours?
Having enjoyed the benefits of a classical education, which included history and philosophy, common sense, and actually working in industry making this sausage.
Cheers.
Have you tried being on-topic?
This is incredibly off topic and veering into delusional conspiratorial discourse.
Thom Holwerda,
I would look at the exact same evidence as proof that this AI is ready. Not in the way you want it to be, mind you, but in the way that it’s going to fit right into the environment preceding it. As much as we may object, real journalism became obsolete long ago, with corporations promoting engagement and profits above factual news. We can blame AI, but deep down we all know we were already on this path, and the AI is merely a continuation of what it’s already learning from us. The problem isn’t so much that AI is bringing “a brand new wave of misinformation and lies”, but that it’s accurately representing our own lies and faults.
We should not expect it to be better than the echo chamber information we’re feeding it with. Garbage in garbage out.
@Alfman
I should have read this before I posted.
I was an early adopter of OpenAI, and its promise is huge, but in ChatGPT you can see the demise of the product as it becomes warped by the whims of humanity, learning from the queries as much as from the answers. As you mentioned, shizen in is shizen out!
It’s exactly your last point. Garbage in, garbage out. The complaint, as I understand it, from Thom and the author of the article is the quality of the source material.
Microsoft (or others using the tech) have a few ways to improve that material:
1. Vet only reliable sources. But this basically makes the truth something you can pay to define.
2. Make it Wikipedia-style, where its content is flagged and edited by its end users. But this means groupthink and misconceptions become the truth.
3. Accept content might be wrong and hope the internet gets more accurate…
All options have a downside. The question is which you find least worst. The fly in the ointment is that whichever is most profitable will probably win…
Adurbe,
This will probably be effective for AIs used within well defined domains, but obviously it doesn’t scale.
That’s an interesting idea. Conceptually you might have browser extensions that let 3rd parties rate information sources from anywhere across the internet. This would be a good experiment, although it would almost certainly create libel lawsuits such that courts would become the deciders and crowd sourcing the truth would probably end up resembling facebook “like” buttons where content wins because it’s popular and not because it’s true. It’s also likely that companies would game the system.
Were you chuckling inside when you said this? 🙂
Yes I think this gets to the root of how AI technology will progress. Whatever we think is best isn’t nearly as relevant as what makes companies the most money.
4. Stop asserting that unless AI is flawless, it is then useless.
There should be nothing wrong with AI giving us wrong information anyway. Having to deal with, process, and filter false information is how we learn what is real and what is not, the same way that hurting ourselves teaches us what is safe and what is not.
As long as we are not trying to make the AI interact with humans autonomously, unguarded, then it doesn’t matter how badly it fucks up on occasion.
The problem here is not AI, the problem is that companies are rushing to give it way more power than it should get, because suddenly people are expecting the AI to be “great” since some company made a fun bot for talking trash.
For a supposedly tech-oriented blog, some of the authors/posters here seem terrified of technology in its current state of advancement, and tend to idealize technology in its stagnant, past state.
javiercero1,
Thom, the author, and posters here are pointing out factual errors made by AI, but nothing said objectively rises to the level of someone being terrified of technology.
Why should those criticizing AI mistakes be labeled in this way? Wouldn’t you agree it’s better to focus on the arguments being made, rather than focusing on how to stereotype the people who made them?
Have you considered changing your screen name to Strawman?
Cheers.
Cheers 🙂
Pointing out the flaws of technology is often better than being a fanboy of it. Especially because this is a tech blog, and we can look beyond the hype.
The point.
Just about every technology has flaws or room for improvement.
That does not justify the usual orthogonal uneducated critiques.
At least it’s about tech and for once not the Uyghurs or Nazi delusions.
The problem, however, is real. Until only about a year ago, Google, Bing, Facebook, etc. were censoring honest debate as “spreading misinformation”, and now they’re competing with one another to be the first to release a tool that mostly generates misinformation.
Also the point.
Two sides of the same coin. Technology is just a tool, but how it’s being used and to what ends is relevant and important to speak to. Look to sci-fi about AI: both wondrous and horrible things are possible, and we can discuss both as the fever dreams of authors past come closer to reality.
Sure, philosophy of science is its own subject. I just don’t feel this is the scope or place for it, as you have people with little technological understanding talking about technology… if we throw in philosophy you have even less quality of debate.
Every technology is flawed, and every technology can be used for evil. This applies to everything from AI to a screwdriver.
Google, Bing, ChatGPT – all deliver erroneous results. It’s not really the fault of AI; much like the issues with face recognition, the problem is the training data. The internet is mostly bogus, and that is all ‘they’ have to learn from.
But of course the thing is, AI should only ever make a mistake once and then forevermore be correct; humans, though, keep making the same mistakes over and over and over again.
Social media is one big repetitive error, not because of an algorithm, but because of people.
So when our AI Overlords take over reporting, the fundamental characteristics will change: errors will be transient, and once exposed will be erased from history to present a façade of perfection!
There remains one big problem, the crafty query builder, who can b0rk up the best system with a cleverly framed question.
cpcf,
Indeed, it’s not the fault of the algorithm that so much information is inaccurate. The internet conveys tons of information, but there are no tags to convey whether something is true or false. Something like this…
<true>This is false</true>
Obviously this is silly, because declaring something true or important doesn’t make it so. It’s not merely accidental either; many companies increase profits by controlling the narrative: posting fake reviews, obscenely biased PR pages, untrue/exaggerated claims, etc. Heck, I bet soon enough “SEO” marketing firms will start offering services to spam these AIs. And it’s only a matter of time before google/ms/facebook/apple do this themselves in pursuit of more advertising revenue. Apple could call it “aiAd”, haha.
As Thom indicated, the technology is fascinating and interesting, but it’s not all roses.
It is the fault of the algorithm.
The basic concept is to use training data to determine the probability that one symbol follows the next, then just keep appending the most likely next symbol until you have a sentence (or a paragraph, or…). The more advanced implementations we see today are just that – more advanced: taking into account a larger number of preceding symbols (and using better algorithms to choose which preceding symbols matter more), trying to care more about context, etc.
The result is that the algorithm itself is fundamentally incapable of telling the difference between true statements and false statements (even when trained purely on true statements). It’s not even slightly designed for that.
Put another way; the algorithm is designed as a “plausible gibberish generator” – good at getting things like grammar correct, but not much more.
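To make that concrete, here is a minimal sketch of the idea in Python. Everything in it (the toy corpus, the single-word context) is made up purely for illustration and is nothing like the scale or sophistication of ChatGPT:

```python
import random
from collections import defaultdict

# Tiny made-up "training data" - real systems use billions of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Pick the next word in proportion to how often it followed `prev` in training."""
    options = counts[prev]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

def generate(start, length=10):
    """Keep appending likely next words. Nothing here checks whether the output
    is *true* - only whether it is statistically plausible."""
    out = [start]
    while len(out) < length and out[-1] in counts:
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" - plausible, not necessarily true
```

The sampler will happily produce grammatical-looking word sequences it never saw, which is exactly the “plausible gibberish” behaviour described above.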
Indeed, that’s the main problem – it’s not AI at all, as there’s no “intelligence”. These models do not have any logical reasoning built in, hence they are so bad at math.
Brendan,
I disagree; mathematically there’s nothing wrong with this mode of operation for writing well-thought-out sentences. Think of “writing” as a parallel to the game of chess: despite the fact that the NN is responsible for outputting only one move at a time, there’s tons of deep hidden rationalization for several moves down the line.
There might be ways to implement a NN more efficiently than to ask it to generate one word at a time, but regardless that doesn’t mean it cannot plan a sentence with foresight. You should be able to convince yourself of this by replacing the NN with an intelligent mechanical turk as a thought experiment.
Yes, I think we’re all saying that these AIs cannot distinguish between truths and falsehoods. Training AI with garbage input will produce garbage output.
Yes, a NN should do well at replicating the style of human grammar it was trained with.
I’m saying that training this “AI” with pure carefully vetted guaranteed 100% truthful training data still produces garbage output. The most likely next symbols after “456 * 78” are “=” and “35568” so it can say “123 + 456 * 78 = 35568”. The most likely next symbols after “He got Covid” are “and died” so it can claim “Donald Trump got Covid and died”.
In fact; I’ll go a step further and say that all of these AIs (chatGPT, Bing’s, Google’s) were trained on 99.9% truthful training data and do produce garbage output (specifically; OpenAI’s chatGPT used “human-guided training” with carefully selected training data and human supervisors); so we can say we have 99.9% accurate proof that truthful training data causes garbage.
The basic algorithm is Markov Chains. Markov Chains (without any neural network) do well at replicating human grammar. This can be enhanced by taking into account more previous words and context to produce better results (without any NN) – e.g. if the question included a color then the next word after “Spheres are ” is more likely to be a color. The current versions have augmented “enhanced Markov Chains” with NN; so now it’s more like a NN higher level (that is bad at grammar) steering the probabilities used by an “enhanced Markov Chain” lower level (that is good at grammar).
The main problem is that none of the pieces are good at logic. There’s no deductive reasoning, no inference, etc – nothing even slightly intended to be capable of “true vs. false” reasoning.
Brendan,
A neural net’s capabilities depend very much on how effectively it is trained. Neural nets can learn to complete elementary math and even more sophisticated programming. This will continue to get better as NNs get more processing power, but I’d argue this isn’t a fundamental limitation of neural net technology.
I doubt it, but in any case a claim like that needs to be backed with sources. What is your source for claiming these models were trained on 99.9% accurate data sets?
But we’re not talking about Markov chains, we’re talking about neural nets. Neural nets can in principle do logic; the real question is whether one has been trained successfully. Obviously not all NNs will be successful, but it doesn’t mean that none can be.
A neural net depends very much on what it’s trained to do. If it’s trained to assist in predicting the next word in a sentence, then it will suck at maths. If it’s trained to determine the result of maths equations, then it will suck at assisting with the prediction of the next word in a sentence.
I’d argue that most (not all) of chatGPT has nothing to do with neural networks in the first place (and it’s mostly “enhanced Markov Chains”).
My source is obvious common sense (do you honestly think OpenAI’s researchers are stupid enough to spend years scouring the Internet hoping to gather a mountain of shit they can use as training data???).
You are talking about NNs (and I don’t know why). The rest of us were talking about ChatGPT. I get the impression that you think 100% of all AI (including things like AI chess, etc) is NNs and that AI researchers have never invented or used anything else.
Brendan,
Traditionally NNs have excelled at domain-specific tasks, but given sufficient depth & breadth with good training, a NN will start to excel at multidisciplinary tasks too.
I think you are focusing too much on sequential output of words being a barrier. But that’s not the problem you’re making it out to be, because it’s just a mode of output. In general it doesn’t imply the NN is incapable of sophisticated operations. Just to make a point, you could technically train a car-driving NN to output English words forming sentences containing driving instructions. Are English words and sentences an efficient way to communicate with a car’s self-driving mechanisms? No, obviously not. However, this in and of itself doesn’t mean a NN can’t use sophisticated logic behind the output and successfully drive the car. The fact that we might constrain a NN to output a single word at a time doesn’t imply its model can’t perform sophisticated logic.
I don’t think we should answer questions about facts like this with “obvious common sense”. 99.9% is a guess at best. Regardless of how confident you are in it, you admit it’s an assumption, right? Well, here’s the thing about that… it’s the very same type of assumption you’re making here that might lead to a chatbot AI producing factually wrong answers. Assumptions can create garbage in and garbage out.
Personally I don’t think 99.9% accurate data is a reasonable guess outside of small domain-specific datasets. I mean, sure, in principle it’s possible to manually curate all the data to train the AI. However I don’t think there’s even close to enough manpower at microsoft and google to curate all the topics that the AI is going to be asked to cover. I think that for a project of this magnitude, there’s little choice but to have data that hasn’t been closely vetted, which decreases accuracy. The alternative would be to limit the AI to more specific domains, but this makes it far less interesting than an AI that can talk about anything.
Well yeah, chatGPT uses NN training methods.
I don’t know where you got that idea from. Other forms of AI weren’t the topic of discussion, but sure we can talk about those too if you want to.
No. After training, a NN essentially becomes equivalent to a set of static equations that will never do anything different. A “neuron” is actually a small function, and the “network” can be flattened. With 3 inputs and 3 outputs you basically just end up with “x = f(a, b, c); y = g(a, b, c); z = h(a, b, c)”, where the equations contain constants (that were weights during training).
Note that this is also my primary complaint against NN – the design goal is to deceive suckers (by hiding the final equations) and not to produce actual usable equations. For example, if the training data is measurements of how far a dropped cannonball fell after 1 second, 2 seconds, 3 seconds, …, an NN won’t tell you its equation or let you see its constants (e.g. “9.81”); and won’t let you convert its result into something that’s significantly more accurate and more efficient by reasoning about the resulting equations (and letting you come up with something like Newton’s laws of gravity).
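To illustrate what “the network can be flattened” looks like in practice, here is a minimal sketch of one such flattened function (the x = f(a, b, c) above); g and h would have the same shape with different constants. The weights and layer sizes are entirely made up for illustration:

```python
import math

# Hypothetical weights frozen at the end of some training run.
W_HIDDEN = [[0.8, 0.1, -0.5],   # weights feeding hidden unit 0
            [-0.3, 0.9, 0.4]]   # weights feeding hidden unit 1
W_OUT = [1.2, -0.7]             # weights feeding the output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def f(a, b, c):
    """x = f(a, b, c): fixed arithmetic over constants - nothing here ever changes."""
    h0 = sigmoid(W_HIDDEN[0][0] * a + W_HIDDEN[0][1] * b + W_HIDDEN[0][2] * c)
    h1 = sigmoid(W_HIDDEN[1][0] * a + W_HIDDEN[1][1] * b + W_HIDDEN[1][2] * c)
    return W_OUT[0] * h0 + W_OUT[1] * h1

print(f(1.0, 2.0, 3.0))  # the same inputs will give the same output forever
```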
I don’t think I’ve explained things well.
Assume there’s 50000 symbols (e.g. words in English) and you scan through lots of data to determine the probability that one symbol will follow another. This would require a table of probabilities with “50000 x 50000 = 2500 million” entries. That’s quite achievable with normal computers (even cheap smartphones); but doesn’t produce great results.
Next; assume you want to do the same thing but you want to take into account the previous 2 symbols (rather than just the last symbol). Now you’re looking at a table with “50000 x 50000 x 50000 = 125 trillion” entries. That’s at least slightly plausible on normal hardware if you throw some basic compression at the problem (e.g. replace “runs of zero probability” with run-length encoding).
Next; assume you want to do the same thing but you want to take into account the previous 3 symbols. Now you’ve got a major disaster – a table with “50000 x 50000 x 50000 x 50000 = 6.25 quintillion” entries. You can’t compress that enough. Worse; you’d probably struggle to find enough data to generate it. What do you do?
At this point you just give up and replace the table with a dodgy piece of shit (an NN). Sure it’ll be a lot worse than a real table with 6.25 quintillion entries, but it’ll also be a lot more practical. It doesn’t change the fundamental algorithm (you’re still using training data to create probabilities, so that previous symbols can be used to predict the next symbol).
Once you’ve made the switch from tables to NN; why not take into account the previous 4 symbols? Sure it’ll be a lot worse than a real table with 312.5 sextillion entries but…
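A quick check of that growth, using the same 50,000-symbol vocabulary as above – the full table needs one entry per possible combination of context symbols plus the predicted symbol:

```python
VOCAB = 50_000  # number of distinct symbols, as in the example above

for context in range(1, 5):           # how many previous symbols are taken into account
    entries = VOCAB ** (context + 1)  # one dimension per context symbol, plus the predicted one
    print(f"context of {context} symbol(s): {entries:.3e} table entries")

# context of 1 symbol(s): 2.500e+09
# context of 2 symbol(s): 1.250e+14
# context of 3 symbol(s): 6.250e+18
# context of 4 symbol(s): 3.125e+23
```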
Brendan,
I don’t debate that today’s NNs are static, though this will change.
Our brains are dynamic; however, most of what we do is actually static: detecting and acting quickly on patterns we’ve seen before. When someone with experience calls up knowledge and instructions for ice skating or chess or whatever, most of this is exploiting pretrained static neurons. The point being, even a static NN can perform these very well.
Meanwhile, for someone who’s never done a specific task before, such as playing chess, being taught all the rules does not make them proficient. Their skills remain poor until they practice hundreds or thousands of times and train their neurons to identify and act on patterns. I’d argue that computers are already several orders of magnitude better than humans at going from a set of rules with no experience to training competent neural nets.
IMHO the goal isn’t to deceive, rather it is to assist humans in processing mountains of information. I actually think a NN might be able to come up with Newtonian physics given raw data and asked to come up with theories. However it may be a good time to bring up the fact that today’s AIs aren’t built for self-determinism and consequently they don’t decide what to do on their own. They’re built to optimize & solve specific human problems, like playing chess, driving cars, or whatever. The resulting AI may do tasks even better than humans could, but they still have no self-determinism. And as long as an AI has no self-determinism, people will be dismissive of AI intelligence, which I can understand.
I don’t believe there’s any reason we couldn’t technically allow an AI to have self-determinism. We could set the stage for AIs to experience full-blown Darwinism, survival being the only primary function; beyond that it could randomly make up its own goals. This would parallel what happens with creatures in real life. The problem with this is that self-deterministic AI is actually less valuable for corporations spending billions on AI to solve specific human tasks. Some research group may eventually do it anyway, but part of what makes AI impressive to us is how good the AI is at solving “human” tasks. It’s an ironic catch-22, then, that we would frown upon it as intelligent because humans provided the goal of solving said task.
What you are describing isn’t limited to artificial NN, it’s something that our own brains face as well. We don’t store the full table either, rather we derive patterns from it. Just like deep artificial NN, ours are trained to pick out the most important details and move on. I agree this “compression” as you’ve called it is a lossy process, but it applies equally to humans and sometimes our minds are tricked into skipping right over certain visual/textual/etc cues that are right there without realizing it.
I accept that NN may be criticized for this. But it should not be accepted as a criterion for ruling out intelligence unless you also want to rule out humans.
It won’t change, at least not unless something better is invented and replaces NN. For NN, the cost of training is like “pow(1000, number_of_neurons) x amount_of_training_data”. It’s why ChatGPT doesn’t know anything that happened after February 2022 – despite extreme quantities of processing power it still takes 6+ months just to train it once, and there’s no way to tweak/adjust anything after it’s trained without doing the full training again. It’s simply the wrong algorithm if you want “human-like learning”.
The final goal of AI research is to invent some method of providing AGI (true intelligence); but this has nothing to do with NN (which is not AGI and never will be). The goal of everything they’ve managed so far is to deceive people – to convince gullible fools that something devoid of any intelligence has some of the properties that can falsely be attributed to intelligence (when often it’s merely “brute force stupidity”).
You’re comparing similarities and ignoring differences. It’s like saying “a wooden chair has 4 legs and so does a dog, so a wooden chair is an artificial dog, and as woodworking technology improves furniture will eventually be capable of doing everything dogs do”.
Brendan,
I’m pretty optimistic that it will change, but I guess we’ll see. I envision artificial dynamic NNs taking on a more incremental nature rather than recomputing the entire network from scratch. You are right that doing it that way is expensive, but I don’t think it’s necessary. Even biological brains are far more static than you’d think, and as I understand it the majority of our brain development tapers off in the early years.
I don’t agree, I think neural nets are important for biological thought processes and are proving useful for AI as well.
Can you be more specific? It’s not obvious to me what you are referring to.
Thank you for posting this. While I am no expert in the underlying algorithms of modern “AI”, I know enough to know that what you posted here accurately depicts the current “intelligence” in supposed “AI”. And I know this from none other than Gary Marcus, who refers to these things as autocomplete on steroids. But Hubert Dreyfus, who wrote “What Computers Can’t Do” in 1972, already said much the same.
No amount of data curation will ever impact the accuracy of such systems, for there is absolutely no “logic” at work, only statistical inference derived from utterly meaningless symbols. Don’t get me wrong, I am fascinated by these technologies (OpenAI, DALL-E, etc.), they are neat, but outside of radically delimited scopes they have no actual application, other than, perhaps, as a “plausible gibberish generator”.
iwbcman,
The same could be said of ourselves. We have to learn from others teaching us or learn from our own experience in the form of trial and error. Either way though our brains don’t have an intrinsic knowledge of truths and falsehoods. While it makes sense to talk about AI’s shortcomings, it’s not necessarily fair to hold it to standards that not even regular humans live up to. We benefit from several years of in depth education and life experience, both formal and informal. AI is in its infancy today but I expect that we will ultimately build NN based AI rivaling our own. I don’t know if this will ever match the (power) efficiency of a biological brain, but as far as computational logic I believe it is not only possible, but likely that artificial neural networks will overtake us. We’re getting closer, just give it time.
Alfman,
Not sure why I don’t get a reply button under your post, this is in response to what you wrote in response to my last post.
I appreciate what you are saying and your enthusiasm for the new tech. I really do, but the way I look at it: ‘AI’ simply isn’t. There is no such thing, currently. Perhaps someday there will be, albeit not in my lifetime. What people are calling AI is statistical inference not intelligence.
ChatGPT does not ‘know’, in any meaningful sense, anything at all. Symbolic AI crashed and burned in the early 1970s, after Minsky et al. utterly failed their initial goals and funding from the defense department dried up – most of his students never got tenure, because the spigot got turned off after Vietnam.
Nothing has fundamentally changed since regarding symbolic AI, the only place one might eventually talk about “intelligence” as in AI.
So-called Deep Learning has replaced symbolic AI, but alas the name is incredibly misleading: Deep Learning means recognizing statistical patterns, not actual learning. No ‘knowledge’ is ever acquired, and I reiterate: no amount of training will ever lead such a system to knowledge.
They (ChatGPT etc.) can spit out somewhat convincing responses to carefully crafted questions, but the content of what is being discussed with ChatGPT simply doesn’t exist – ChatGPT doesn’t know the difference between up and down, left and right, first and last, now and then, man and woman, light and dark, and on and on and on.
What we perceive as significant is that which is different in such a way that that difference is held to make a difference itself. Second-order differentiation, i.e. discrimination, is fundamental to human logic and reasoning, and simply doesn’t exist for things like ChatGPT. ChatGPT can ‘recognize’ that one pattern is different from another, but that difference in patterns means nothing – it cannot understand how one pattern is related to other patterns, which one will come first, which is the result of that other pattern, etc. Without the ability to understand these relationships, simple pattern recognition becomes trivial.
Now when it comes to auto-focusing your camera – great; more often than not the software will accurately focus on your intended object. And by the way, the understanding of these relationships in which we find ourselves starts long before humans start to acquire symbols which we can abstractly combine to form sentences.
iwbcman,
I understand your point, but have you considered that the human brain is itself a kind of neural net too? Our thoughts, impulses, and what we “know” are arguably just a bunch of statistical patterns recorded in our neurons, just like an AI.
Today’s artificial NNs are not modeled around human brains, though in principle I think they could be. However I don’t think it’s actually necessary to achieve similar intelligence. A big difference for the moment is the way artificial NNs get trained. As lifeforms, we don’t start out trained; we learn through experiences that build our neural pathways over time. Also, we need to practice relentlessly to lay down the neural links that make us proficient. Conversely, NNs built for AI purposes are typically trained up front, setting weights through a back-propagation algorithm that generates the input->output impulses that we want to imprint on the NN. This is a shortcut over the biological process. We should agree that a static NN like this cannot learn through new experiences as a dynamic NN could. A static NN can still exhibit good knowledge and competence with respect to its training, it just can’t learn anything new.
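As a rough illustration of that “train up front, then freeze” pattern, here is a made-up toy example: a single artificial neuron (for which back propagation reduces to plain gradient descent) is fitted to y = 2x + 1 during a training phase, after which its weights never change again. The target function, learning rate, and epoch count are all arbitrary choices for the sketch:

```python
# Toy training set for the made-up target y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # "untrained" weights
lr = 0.01         # learning rate

# The up-front training phase: gradient descent on squared error.
for epoch in range(2000):
    for x, target in data:
        error = (w * x + b) - target
        w -= lr * error * x   # gradient of the squared error w.r.t. w
        b -= lr * error       # gradient of the squared error w.r.t. b

# After training the network is static: inference is just fixed arithmetic.
def infer(x):
    return w * x + b

print(round(w, 2), round(b, 2))   # approximately 2.0 and 1.0
print(round(infer(10), 2))        # approximately 21.0; nothing learned after this point
```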
I don’t see any reason in principle that AI must be constrained to static NNs, and eventually we will probably see more dynamic NNs that can learn through experience. The question I have is what kind of learning experiences would be involved? Will it ever resemble the real-world experiences that humans have every day at school, work, Thanksgiving dinner, etc.? Or will AIs get most of their experience from the internet? Who’s to say; it’s just interesting to think about.
I disagree; the reason humans exhibit knowledge is the training & experiences recorded in our own biological NN. The breadth and quality of knowledge in both humans and AI is highly correlated with training. Both humans and AIs, including chatGPT, have tons of knowledge.
This is a philosophical question that extends beyond AI, but how does one know whether what one knows about the world is true if one hasn’t experienced it first hand? For better or worse, neither of us are great at determining whether the knowledge we have is true without an authoritative source to consult with. This is absolutely a problem for AI, but it’s a problem for humans too.
You hit the nail on the head and called me out: I approach this as a philosopher. Because I do not see AI as being an actual, real, existing thing, I have no fear of AI. So if there is success someday, more power to them.
My only question, the only question which really counts, IMHO, is what humans will do with ‘AI’, not what ‘AI’ will do. Mark my words: the real point of AI is the absolution of responsibility and accountability for decisions made. AI is the ultimate get-out-of-jail-free card. Humans will make decisions and blame AI. This issue of no accountability, no culpability, no responsibility is the real issue here.
Right now, as we chat, in the real world, police departments are using this kind of tech to do predictive policing, drawing statistical inference patterns based on prior criminal data. Even if such software were free of discriminating biases, which none is, who made the call? No one.
That, my friend, is the beginning and the end of the story of AI. How can we dodge accountability for our actions and our inaction, and what better way to simply say AI did it, when AI did nothing at all.
Interesting timing. Tom Scott also just released a video which explored why AI like ChatGPT gives him a sense of existential dread from a different angle, and, in typical Tom Scott fashion, it’s very insightful.
https://www.youtube.com/watch?v=jPhJbKBuNnA
Tom Scott really needs to understand that the “sigmoid curve” he’s talking about started in 1906 (see https://en.wikipedia.org/wiki/Markov_chain#History ) and has followed available processing power. The main thing that happened recently (to trigger the current AI hype wave) is that massive companies (mostly Microsoft/Azure initially) decided to throw a “not commercially viable” amount of raw processing power at the problem.
GMail as anything good? What on Earth is he talking about… And Napster came late in the internet era. We’d had fictional novels predicting much later inventions a decade prior. None of that was surprising or revolutionary.
Carewolf,
That’s true, a lot of “revolutionary” technology is actually old ideas waiting for hardware to evolve. Like how high speed networks would need to evolve and get deployed at great cost before video conferencing could become realized by consumers. Similarly the dramatic rise in privatized space companies might be called a “revolution”, however it isn’t happening so much because of any new capabilities in the past 50 years but rather simply the concentration of private wealth to make it possible. If money were no obstacle, university grads would be building interplanetary spacecraft all the time and many would succeed. Some might consider real time ray tracing revolutionary, but it isn’t particularly novel, it’s very old ideas that had to wait until consumer GPUs became fast enough to do it in real time. I think this applies to neural nets as well. They’ve been studied for decades, but couldn’t be implemented at scales that make them seem intelligent until now.
So I guess calling it revolutionary versus evolutionary is largely a matter of perspective; most of these ideas are not new, but they had to wait until hardware evolved to make the necessary scale commercially viable.
I’d say the revolution is in society’s relationship with the technology, not the technology itself. The technology gets developed to a level of sophistication where it enables a tipping point.
I think it’ll degenerate into a sh1tstorm/bunfight as to where the training data was sourced and if the circling sharks/litigators can sue the pants off someone to make even more money.
I can see a fair few will cram onto the rental funboat with even more “monthly” subscriptions e.g. tripadvisor will provide “expert” directions to cafes without screaming children and decent cake.
“I am even more shocked that this trick worked, and everyone jumped on the Bing AI hype train without doing an ounce of due diligence.”
I’m shocked that they are shocked. People voted for trump (lower case on purpose) when all he showed for the last 40 years of his life was that he often hired companies to do work on his buildings and then shafted them. That was business as usual. And then he has minions that are falling on their swords (so to speak), taking the fall for him when he was DEFINITELY involved in tax cheating methods. And that’s just a start.
Note that both Democrats AND Republicans are slimy, crooked, paid off properties of special interests and I trust none of them. But then they have done a great job of making sure that nobody other than those two parties can be elected to any high post.
With that in mind, the media is bought off too, by the owners/major investors spewing their versions of the truth without giving point/counterpoint of logical facts for both sides and letting people make their own decisions, which is what should be happening.
And then the oil industry is doing their best job to try to destroy Elon Musk. Most of the people that hate him are only repeating hate without knowing why they supposedly hate him. If you truly look into the FACTS about what people are saying, most of them come up hollow. But hey, this person that wrote the original article is shocked that people believed Microsoft.
Microsoft has been fooling people since 1983. Why should that stop now?
“Facebook boomers”
perfect!
a new label I can use
I’ve been playing with ChatGPT to make it create GURPS characters. Asked it to make one on Joe Biden and it said his occupation was Vice President and Presidential Candidate… is ChatGPT an Election Denier?
ChatGPT training data cuts off before Joe Biden became president.
Different AIs that have become popular recently are, at best, problematic.
Nevertheless, every new technology is problematic. At first, airplanes were a toy. At first, space exploration was a way of showing off. When somebody used a text editor on an 8-bit machine, very often that person would have been better off using a typewriter. Nevertheless, all of these moved quite smoothly into the essential and valuable parts of workflows.
People are the problem, no matter what you build somebody will want to break it ahead of making it better, just ask DAN!
For me this is and always will be a human problem, blaming the algorithm is like complaining about a failure to achieve perfection. If we could author such a perfect algorithm we wouldn’t need the AI!
There is always somebody claiming the high ground and lauding the obvious solution after the fact!
(responding here because wordpress sucks at deep threads…)
iwbcman,
Well, I can’t answer that in any definitive way. The usual suspects however would suggest maybe ads and killer bots? Haha, what a dystopia.
I agree, AI can definitely help abusive practices. But I do think there are lots of applications where AI can be used for good as well. Self-driving vehicles will keep getting better, and we’ll likely see great advances in medicine, curing disease, engineering, and even the advancement of complex disciplines like physics and math.
I also think AI could stride into decidedly artistic domains: art, novels, music, even movies eventually. It’s already started. But I have reservations about whether AI should be used for the arts. We have to think about how economics works when even human creativity becomes redundant. Assuming we find a way to make sure every human is provided for, humans may struggle with an existence without purpose. I guess it’s not a big stretch for us to all become 24/7 mindless consumers, haha.
Another video I’d like to point to: We Were Right! Real Inner Misalignment by Robert Miles.
It’s about how difficult it is to be sure you’ve actually trained an A.I. to do what you think you trained it to do, including real-world examples.
Lots of arguments. AIs are not truth systems.
“The bell went d[]ng” – fill in the gap: that could be ding or dong, right? This is a downright simple example showing that, within four English words with one letter missing, you can reach a point where a human given the sentence cannot in fact solve it correctly, but instead has a coin flip’s chance of being right or wrong. Yes, ding and dong are two different sound curves. If a human with partial data, even in simple examples, cannot produce fact without extra targeted research, how can AI do it? The answer is it simply cannot.
AIs like chatGPT attempt to fill in missing information with information that looks correct. Then you get cumulative error, where the AI builds on top of information it has generated itself, sometimes based off a random data source.
Alfman, “it’s accurately representing our own lies and faults” – no, this is not true, and I wish it was. “ChatGPT cheated” by GothamChess on YouTube is a very good watch. ChatGPT was playing the form of chess where there is no rules bot, so cheating is allowed if the other player does not notice it. The ways ChatGPT cheats in that game, no human player in recorded history has ever cheated that way.
With AI you more have to apply Sod’s law: “if something can go wrong, it will”. If the AI can do something, no matter how insanely stupid it looks to a human, at some point the AI will do it. The risk of AI is very well-written falsehoods, including falsehoods no human would ever attempt. Please note there is a big difference between a falsehood no human would attempt and a falsehood a human will believe, and this is where things start getting really dangerous.
Alfman, it’s very interesting to let AI play games where cheating is allowed, to see what it does, and this has proven over and over again that most current AI systems are not copycats of humans. Garbage in, garbage out applies to expert systems of the classic type. Garbage in, garbage out is not right for fill-in-the-gaps AI systems. Quality but incomplete data in, garbage out at least some of the time, is what holds for fill-in-the-gaps AI systems, because the correct answer becomes a matter of random chance.
The thing to remember is that the methods AI systems use to fill in gaps do not match us humans. So AIs, by their nature and design, have their own versions of lies and faults that don’t match ours as humans. AI gap-filling methods are normally humans trying to replicate human gap-filling methods – note what I said about cumulative error – and we humans don’t fully know how our own gap-filling logic actually works. So a human attempting to make a copy of the human gap-filling method is going to produce something flawed. Yes, there is cumulative error as the AI builds on top of its own generated data, based on a flawed human replication of flawed human gap-filling methods.
It would be one thing if AI were only copying humans. Like it or not, AI is a bad copy of a human in lots of cases, and in a lot of cases it does not matter how you train the AI. Most AIs are not designed with the means to put up a hand and simply say “I don’t know, could you please research this for me”, as any trained human expert learns to do.
“should not be used by anyone who wants an accurate model of reality.”
LOL. Humans have no shared accurate model of reality. That’s what makes human culture so interesting and diverse. “It’s Jesus. It’s Allah. It’s Karma. It’s the class struggle. It’s the Free Market. It’s Aliens”, etc., etc.
Seeing AI as being primarily to do with search is like those first users of the internet who thought it would be all about improved exchanges of scholarly work.
Something is happening. It’s going to radically change everything but we don’t have any idea of exactly what, or what will be improved or worsened. Or indeed which companies are going to ride that wave to great business success or which will be engulfed.
Seeing the new wave of AI (and that wave is only just starting to form) as “fun novelties” is laughably wide of the mark. It’s already clear that AI is going to impact a very wide range of what could be called professional services and occupations. I lived through the arrival of the PC, the GUI, the internet, the connected smart computer in your pocket, and the touch interface. I am really looking forward to what comes next – it’s going to be so much fun!
This is an interesting article about the real world impact of the current rather primitive AI systems.
“My class required AI. Here’s what I’ve learned so far.”
https://oneusefulthing.substack.com/p/my-class-required-ai-heres-what-ive?r=3y2k4
The article makes this point: “Without training, everyone uses AI wrong”. As described in detail in the article, the best way to use AI is to work with it interactively.