OpenAI’s efforts to produce less factually false output from its ChatGPT chatbot are not enough to ensure full compliance with European Union data rules, a task force at the EU’s privacy watchdog said.
“Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle,” the task force said in a report released on its website on Friday.
↫ Tassilo Hummel at Reuters
I’m glad at least some authorities are taking the wildly inaccurate nonsense outputs of many “AI” tools seriously. I’m not entirely sure at what point a tool like ChatGPT could be considered “accurate”, but wherever that bar lies, what we have now is not it.
Europe is trying hard to make itself the backwater of the 21st century.
I mean, they could potentially ban the use of modern AI tools inside their territories. However, I am not sure they will be able to influence their peers or their competitors to do the same. Would China, Russia, the UK, or even the US stop using “faulty, inaccurate” ML models and AI? Or would they just accept the risks and move along?
Given that I have seen a massive increase in productivity in people who use these tools effectively, they are not only closing the market to themselves, but also essentially crippling their own citizens in the global arena.
(The other option, of ChatGPT becoming “accurate”, might come too late.)
(sorry for being very opinionated about this topic)
At least to me, you appear to be very pragmatic and reasonable here. I see much more “opinion” elsewhere.
> Europe is trying hard to make itself the backwater of the 21st century.
I am a proud European — who has to agree with this sentiment unfortunately.
They appear so desperate and overwhelmed by this force, which will shake up the bloated welfare states. (I very much agree with Alfman’s consideration of the impact and change coming…)
Imagine politicians enacting rules on “accuracy”. It’s hilarious.
Yes, sometimes the answers of the “AI” models are problematic. But this is what you have a brain for! Yesterday, I wanted a rough estimate for a solar panel array covering air conditioning and EV charging. My goal was a list of components and steps to consider, as well as some guidance for the discussion with whichever company I would eventually choose.
Despite some grave initial mistakes (like suggesting one(!) inverter per panel), it did a great job. It corrected any obvious nonsense once I pointed it out, and within 10 minutes I felt I had an accurate draft for benchmarking the commercial offers.
Imagine me going to a company without this kind of preparation. They would have ripped me off and lied about my needs and the pricing even more (but intentionally!).
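For illustration, here is the kind of back-of-the-envelope arithmetic such a draft boils down to, as a minimal sketch. All the loads, sun hours and panel figures below are assumed placeholder numbers, not the ones from the actual session:

```python
import math

# A back-of-the-envelope PV sizing sketch. Every number here is an
# assumed illustrative figure, not real load or location data.
daily_ac_kwh = 8.0     # assumed air-conditioning consumption per day
daily_ev_kwh = 12.0    # assumed EV charging per day (roughly 60 km)
peak_sun_hours = 3.5   # assumed yearly average for central Europe
derate = 0.80          # inverter, wiring and soiling losses combined

required_kwp = (daily_ac_kwh + daily_ev_kwh) / (peak_sun_hours * derate)
panel_watts = 430      # a typical modern residential panel
panels = math.ceil(required_kwp * 1000 / panel_watts)

print(f"array size: {required_kwp:.1f} kWp, about {panels} x {panel_watts} W panels")
# One string or hybrid inverter normally serves the whole array;
# "one inverter per panel" only describes microinverter setups.
```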
So like it or not, the world will move on. Seen from a distance (like Indonesia), Europe is tiny!
Andreas Reichel,
That is a good thing. I would really like Europe to wake back up, and become better.
Yes, they mixed up the usage of the tools. Many assume GPT is a replacement for Google or Wikipedia, ask questions like “Who is …?”, and get discouraged when the results are incorrect.
If you need Google or Wikipedia, then… you can use Google or Wikipedia!
(And I’m not sure they too would pass these arbitrary accuracy rules, but that’s another story)
That is an excellent use case for an AI assistant. Even if the results are not 100% accurate, they seem to be definitely helpful.
> Many assume GPT is a replacement for Google or Wikipedia,
In fact, it was the perfect replacement for politicians and “Big 4” consultants. Maybe that’s the reason why they run scared and switch to a fully populist agenda.
Massive increase in productivity where? Genuine question, I still have yet to see it.
I will grant you that some software developers feel they are more productive with tools like Copilot, but what I see it doing is trivial stuff that, with more thought, could have been done much more simply and with less code. If typing speed is holding you back, you are doing something wrong.
As for writing: if it isn’t worth doing manually, maybe the value is so low that it should simply not be done at all?
Support chat bots – I wish there was a law against those. I am sure money is saved, though, at the expense of wasting the end users’ time instead.
But I would love to be proved wrong if anyone can come up with real use cases where it adds real productivity improvements, not just helping produce even more low-quality busywork.
Troels,
Well, for software developers. Searching for answers and wading through documentation is a time sink. Not having to write boilerplate code could save a considerable amount of time in aggregate, and that is already one of LLMs’ strengths today.
It depends on what you think the outcome will be. This type of legislation forces LLM/AI vendors to build more accurate models. Russia/China/etc. might well push ahead. But if the models they are using are built on producing inaccurate results, the economic value of that is potentially negative, not positive.
Take a simple example of someone writing a scientific paper to produce a new cancer drug. If the AI they use provides a false model, you might spend millions or billions developing a drug whose effect isn’t what you expected and/or is dangerous. But improve the model to the point of being able to trust it, and new drug development will be accelerated.
What these clowns simply don’t understand: it’s just about a best estimate, determined by the probability and by the impact. It’s true that those models don’t consider the impact and just aim for the highest probability (only), and so sometimes create hilarious output.
But so do politicians, just the other way around: “Die Rente ist sicher” (“the pensions are safe”) is one of the more infamous examples of this, high impact despite low odds.
These “AI” discussions remind me more and more of frogs arguing against draining the swamp while the farmers are already deciding what crops to raise on the newly irrigable land.
Adurbe,
What you say is logical, however I think it may be overlooking something critical: the source of the error. The LLM itself isn’t the source of the error so much as garbage in, garbage out. Garbage in the training data will increase the odds of garbage output. An LLM has little basis for determining real-world facts on its own, and consequently an LLM is very susceptible to bad input.
However, training by example isn’t the only way to train an AI. In the case of creating drugs, I would think a self-reinforcement learning algorithm would yield much better results without the weakness of relying on outside sources. We could still use LLM models, but stick to training data that can be auto-verified through protein-folding experiments. Such models would be much less vulnerable to misinformation while being able to predict the proteins/drugs that could yield the best results even before running intensive protein-folding tests. I think that in this way, AI could open up a lot of doors to drugs that were previously impossible or impractical to brute-force.
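To make the idea concrete, here is a minimal toy sketch of such a loop. The scoring function is a stand-in for a real auto-verifiable experiment (an actual system would call a folding or docking evaluator), the target motif is hypothetical, and simple hill-climbing stands in for a proper reinforcement-learning update:

```python
# Toy sketch: training guided by an automated verifier rather than
# scraped text, so misinformation never enters the loop.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def verify(candidate: str) -> float:
    """Stand-in for an auto-verifiable experiment (e.g. a folding score).
    A real system would call a physics- or ML-based evaluator here."""
    target = "MKTAYIAKQR"  # hypothetical motif we want found
    return sum(a == b for a, b in zip(candidate, target)) / len(target)

def mutate(candidate: str) -> str:
    """Propose a new candidate: the 'policy' in this toy setup."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(AMINO_ACIDS) + candidate[i + 1:]

# Hill-climbing stands in for a real policy update: only candidates
# whose *verified* score improves are kept.
best = "".join(random.choice(AMINO_ACIDS) for _ in range(10))
best_score = verify(best)
for step in range(5000):
    cand = mutate(best)
    score = verify(cand)
    if score > best_score:
        best, best_score = cand, score

print(f"best candidate: {best}  verified score: {best_score:.2f}")
```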
Alfman,
https://www.assemblyai.com/blog/how-chatgpt-actually-works/
ChatGPT actually used “reinforcement learning from human feedback” (RLHF), which gave it the awe-inspiring capability it had when it first came out.
I’m not sure they kept up the quality, but initially it would easily write college-level essays, as it was rumored to be trained with feedback from PhDs.
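As a rough illustration of the reward-model half of RLHF described in the linked article: human raters rank pairs of answers, and a model is fit so that preferred answers score higher. The featurizer, preference pairs and learning rate below are toy stand-ins (a real reward model is itself a large network, and its scores then drive a further update of the chat model):

```python
# Toy sketch of a reward model fit to human preference pairs
# (a Bradley-Terry model), the core idea behind RLHF.
import numpy as np

def features(answer: str) -> np.ndarray:
    """Toy featurizer: length and a crude 'hedging' count. A real
    reward model would be a fine-tuned network over the full text."""
    return np.array([len(answer) / 100.0, answer.count("maybe")])

# Hypothetical human-labelled pairs: (preferred answer, rejected answer).
pairs = [
    ("Paris is the capital of France.", "maybe Paris maybe Lyon"),
    ("2 + 2 = 4.", "maybe 5, hard to say"),
]

w = np.zeros(2)  # reward model parameters
for _ in range(500):  # gradient ascent on the pairwise log-likelihood
    for good, bad in pairs:
        diff = features(good) - features(bad)
        p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(good preferred)
        w += 0.1 * (1.0 - p) * diff          # push reward(good) above reward(bad)

# The fitted model now ranks fresh answers: confident statements
# should outscore hedgy ones under these toy features.
print(w @ features("Berlin is the capital of Germany."),
      w @ features("maybe Berlin, maybe not"))
```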
sukru,
It would be ideal if LLMs could stick to verifiable data sets with no garbage input, but obviously the challenge is scaling up. Training these LLMs on verified input could be a viable option in limited cases, such as within a controlled business environment. But when speaking of a global LLM containing all of human knowledge without any garbage, this would be a monumental challenge to say the least.
Despite this, I still see ChatGPT as extremely useful for querying and compiling data from public sources. As long as we use it for what it is, I don’t see a good reason to ban it. It only becomes a problem when people misuse it as an authority or arbiter of facts. I feel that the EU should focus on education & training rather than imposing technology bans (assuming the EU intended to do that). Education can be beneficial whereas bans are more likely to harm their own constituents.
@Alfman:
What exactly are verifiable data? And who watches the watchman?
Please don’t get me wrong: you have a strong point here, as long as everyone acts rationally. My concern is that this “verifiable data” would promptly become a target for agendas from both/all sides of the spectrum.
If the world were about “verifiable data”, then pensions, immigration, education, taxes, religion, gender (and many more, the pun is intended) and pollution would all look very different.
“I feel that the EU should focus on education & training rather than imposing technology bans (assuming the EU intended to do that). Education can be beneficial whereas bans are more likely to harm their own constituents.”
Alfman for president!
Andreas Reichel,
Even a hypothetically advanced AI is reliant on the quality of input data, and it stands to reason that the quality of training data should at least be on par with the quality of the output we expect. I think Gödel’s incompleteness theorem may be insightful here.
“Who watches the watchman” is a fair question. The victors tend to rewrite history in their favor, and that puts the impartiality of our public record into question. You are right to bring up watchman integrity, although I think an even bigger practical issue is the quantity of data involved. The notion that we could even scale up an operation capable of validating any significant portion of it seems untenable. I suspect companies will succeed at training LLMs on their own verifiable business domains, but in terms of a general LLM trained on the public internet, I think we have to recognize this data source has grown beyond our ability to manage or verify it.
I believe AI and deep, sophisticated simulations have the capacity to solve many of our social problems in the real world in much the same calculated way they utterly trounce adversaries at chess. However, the problem is that people won’t voluntarily follow such plans. Even an otherwise bulletproof solution to X can fail because humans aren’t going to listen. Moreover, some humans are even the beneficiaries or cheerleaders of negative outcomes. IMHO the main reason AI will fail to solve these problems won’t be the lack of good plans, but the failure of humans to cooperate.
To the extent that a sentient AGI ever materializes, it would have to realize this too. This realization could be the biggest risk of AGI taking over. Even in the absence of any specific malice towards humans, some chain of proofs could lead to the conclusion that control needs to be taken away from us in order to save humans from our own failure to save ourselves. Anyway this is getting off topic.
Haha. I cannot vote with my residency status, much less run for president.
Despite all these technological buzzwords, the world seems to be reversing its development. There was once an understanding that sub-optimal products and tools would remove themselves from the market. We also learned that central planning simply does not work (it took only 20+ failed attempts and maybe 200+ million dead people; who cares, we had to keep trying after the first 5 attempts, didn’t we?).
Yet we banned “credit scoring”, because we clearly knew that the commercial banks would willfully establish wrong, faulty algorithms in their credit decisions and reject good obligors in order to harm their own bonuses and shareholders. Wait, what?
We also keep increasing pensions despite knowing that all European countries could be considered bankrupt already when accounting for their existing pension liabilities. Any corporate accountant would be jailed for this kind of fraud, on charges of Insolvenzverschleppung (German for delaying an insolvency filing; Thom, help me out on this word please).
And we keep trying hard to undermine end-to-end encryption, for our own safety and to protect the children of course (while running the educational system into the ground at the same time).
Now those same politicians want to charge approximation models because they do not fit the narrative?
@Sukru: This is what I call an “emotional” post 🙂
Andreas Reichel,
I see your point 🙂
I’ve tried various AIs about 10 different times. Complete garbage. 80%–90% of the answers were completely wrong and fabricated, and dead simple to fact-check. Whatever the modern AIs are training on, it’s not useful at all.
However, I do know people who swear by them to help write business emails. I’ve been a writer and a business-correspondence writer for ages and don’t have any use for algorithms advising me on how to put together a few sentences. But I have seen some people become better and more confident writers with their help, so at least in that regard the AIs seem to have some value. I highly doubt it is worth the cost of the energy inputs required to run them, though.
I do not agree with you on this, but I would die for your freedom to hold this opinion.
The main point is: you have tried the tool, found it not to be useful for you, and so you move on. Not for a millisecond would you consider enforcing your opinion of the tool on its vendor or on other users, especially as you have recognised that other users may find it useful in certain respects. Would you not?
>”Not for a millisecond would you consider enforcing your opinion of the tool on its vendor or on other users, especially as you have recognised that other users may find it useful in certain respects. Would you not?”
Not true. If I’m in charge of a team of employees I would not allow the use of AI for anything I could think of. From what I’ve seen, AI will only increase costs by adding a large amount of incorrect data into the process. So I definitely would enforce my opinion on others if it related to my responsibility for a business budget.
andyprough,
Medium and long term, employers won’t only be using today’s publicly available LLMs, they will be training their own purpose built ones. I think these are going to be devastating for employees who assume more specialized AI won’t take their jobs.
>”I think these are going to be devastating for employees who assume more specialized AI won’t take their jobs.”
Well, so far I think the only jobs we have heard about being lost to AI are “search engine optimization writer” – those people who were mostly copying and pasting other people’s writing under their own names for online “news” outlets. And I think we all assumed a long time ago that some sort of computerized algorithm was going to replace those jobs.
As far as a hallucinating LLM replacing highly specialized jobs, I haven’t heard of it. Maybe tax preparer? Software is already doing a lot of that work. But that’s also not highly specialized, and neither is “SEO writer”.
andyprough,
That’s only because you’re not looking. Some may not be ready to go public because of bad press, but many human employees are already starting to be displaced (and successfully I might add).
https://www.hollywoodreporter.com/movies/movie-news/hollywood-ai-artificial-intelligence-cannes-1235900202/
https://techcrunch.com/2024/01/09/duolingo-cut-10-of-its-contractor-workforce-as-the-company-embraces-ai/
https://www.marketplace.org/shows/marketplace-tech/ai-is-already-taking-jobs-from-some-voice-actors/
For better or worse, the numbers will keep growing. This trend will not be driven by our sentiment, but by whether corporations manage to use AI to effectively boost productivity and save labor costs.
I get that emotions are strong, but understand boardrooms and investors will ultimately decide and none of our opinions about AI really matter. It’s all about the almighty dollar.
Btw, on the “business e-mails”: it’s not about the capability but rather about the time. In the financial industry and the associated business lines, at least 80% of activity today is related to regulation and bureaucracy, guarded by people who have not the slightest idea what they are talking about. Ever been part of an audit by one of the Big 4? The people on the ground coming to your bank will be trainees desperate to fill their time sheets (I have rephrased this sentence 10x or so to avoid sanctions from Thom).
Responding to those costs a lot of time, and it does not really matter what you respond when the question was idiotic (I stand by this word!) in the first place. A prime business case for “AI” tools! Massive increase in productivity: let “AI” respond to the leeches and occupy them (in the sense of “defusing” them) with nonsense answers so you can focus on your actual business.
>”In the financial industry and the associated business lines, at least 80% of activity today is related to regulation and bureaucracy, guarded by people who have not the slightest idea what they are talking about. Ever been part of an audit by one of the Big 4? The people on the ground coming to your bank will be trainees desperate to fill their time sheets”
This is not a good analogy, especially in response to my last point, which is that any advantages of AI are probably not worth the cost of the energy inputs. I don’t think it’s worth massively increasing global energy production and usage in order to make it easier for finance people to deal with Big 4 audit drones. Set up email filters that auto-respond to the audit drones and redirect them to online searches for their questions.
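For what it’s worth, such a filter needs no “AI” at all. A minimal sketch using only Python’s standard library, where the hosts, credentials and trigger keywords are all hypothetical placeholders:

```python
# Sketch of an auto-responder that replies to emails matching
# assumed "audit" keywords. Hosts and credentials are placeholders.
import imaplib
import smtplib
from email import message_from_bytes
from email.message import EmailMessage

AUDIT_KEYWORDS = ("audit", "compliance questionnaire")  # assumed triggers

REPLY_BODY = ("Thank you for your question. Please consult the published "
              "documentation or a web search; we will follow up if needed.")

with imaplib.IMAP4_SSL("imap.example.com") as imap:   # hypothetical host
    imap.login("user@example.com", "app-password")    # placeholder creds
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, parts = imap.fetch(num, "(RFC822)")
        msg = message_from_bytes(parts[0][1])
        subject = (msg["Subject"] or "").lower()
        if any(k in subject for k in AUDIT_KEYWORDS):
            reply = EmailMessage()
            reply["To"] = msg["From"]
            reply["From"] = "user@example.com"
            reply["Subject"] = "Re: " + (msg["Subject"] or "")
            reply.set_content(REPLY_BODY)
            with smtplib.SMTP_SSL("smtp.example.com") as smtp:  # hypothetical
                smtp.login("user@example.com", "app-password")
                smtp.send_message(reply)
```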
What is AI, Thom?