While ChatGPT has become what seems like a household name, the AI model’s method of data collection is somewhat concerning and has some clear negative connotations. With that being the case, Italy is moving forward with legal action to stop ChatGPT from operating for the time being.
Good. These corporate, for-pay tools are built upon the backs of untold numbers of writers and other artists who were never asked whether they want their works to be used. Microsoft, for instance, will stomp any misuse of its code or trademarks into the ground, yet at the same time it’s building entire profit streams on the backs of others.
This is wrong.
Italy’s position sounds unrealistic to me. You might stop providers from running ChatGPT inside the country, but it doesn’t much matter where it runs. The service and the content it produces are easily accessible around the world, and there will be no way to stop that content from crossing Italy’s borders. People increasingly have trouble telling whether AI was used, and that includes Italian officials. Anyone who doesn’t disclose using AI has plausible deniability if questioned.
And…
It has the benefit of punishing Italian professionals by preventing use of an otherwise very nice tool.
Regardless of how the data is collected, the model itself is very much “alive” today.
And yes, OpenAI benefits with those $20 subscriptions, and in other ways. But at the same time, this new generic “white collar” assistant was worth much more than its sticker price in my experience.
With this ruling, those residing in Italy who want to write a business plan, improve their coding skills, update their resume, or just plain old talk to “something” have lost that opportunity.
So the question is: would you kill a child of impure descent?
Blaming AI for “ripping off artists” is just ignorant and stupid. All artists take influence from the work of other artists and non-artists alike.
For example, go ask any mainstream band which other bands they consider themselves influenced by and they will happily tell you.
You go to a gallery and see a painting, next week you are painting something that might use similar techniques or colors without even realizing that you are actually stealing from someone else’s work. Or maybe you listened to a song last week and now are composing a piece similar to it, again without even realizing that you are stealing from someone else.
…or are you? No. It’s natural and everyone knows it’s happening. But when an “AI” does it, suddenly it’s the biggest problem in the world.
Get a grip.
sj87,
Very true. People will criticize AI, but overlook the truth that it is a very human phenomenon too.
I agree. However AI, by virtue of creating content so efficiently, may end up pushing copyright laws past their limits. It may become impossible for humans to create original works any longer, and they will find it increasingly difficult to derive unique value from their work.
You seem to assume that the current monetization of cultural expressions (“art”) is not an abnormality.
It used to be called ‘inspiration’ or ‘influence’; now it’s called ‘theft’. Yet there is no stealing involved. The owner of a work of art still has the work, it is only diminished in value in the economic sense of no longer being unique.
In prior times, openly naming the source of inspiration or emulating another artist’s style was a sign of respect. It is only with the ownership concept of asset capitalism, and the implied monopolization, that the bizarre notion of ‘intellectual property’ has been made a legal concept.
The problem is not the AI ripping off artists, the problem is AI ripping off the rights owners, who, all of a sudden, get a taste of their own medicine. AI is a leap in the sense that photography was a leap over picture art – out of which grew impressionism, and in the sense that CGI was a leap over hand-made animation. Art as an expression of imagination will survive and find new avenues of expression. May the right holders, raising artificial toll booths on our common culture, not be so lucky…
Maybe you didn’t read my message or are in error replying to someone else…
Anyway, this issue isn’t limited to “art” nor “entertainment”. I believe it’s the current state of the world. Every new innovation must be attacked with litigation because it is threatening the existing money-making schemes. There was even an article here at OS News a few years ago that said that 30 % of the development budget in mobile phone industry actually goes into (avoiding) patent litigation and lawyering.
Be assured, the mass entertainment media is only looking to cause a few bumps on the road before they themselves can employ AI into making new entertainment and replace the dancing human monkeys of today.
Back when internet streaming started growing, artists were crying that they were getting shafted because they had signed contracts 10 years earlier that allowed the music labels to publish their work on streaming platforms without paying a dime to the singers or musicians.
I bet we will soon hear similar whining from the same artists, when their former labels give them the boot and start using their discographies to teach an AI to produce new mass hits.
Companies like Microsoft, Apple and Google have gotten away with everything. There’s no real reason to believe it will be any different this time. So with “AI” you will get the whole package, the good, the bad and the ugly, and there’s not much you can do about it.
Writing from Italy, and also working as a privacy consultant: here we are proud of our “Garante Privacy”, a very powerful institution in Italy (and in the EU; 90% of the EU’s GDPR is based on the earlier Italian privacy law).
Also working for a US company, it’s very funny to compare the “American” way of thinking about privacy with the European way.
Having said that, I think this time the beloved Garante didn’t understand shit about the topic.
The Garante is complaining about “illegal collection of personal data” and the “absence of systems for verifying the age of minors”. Maybe, but in my view that is just as much a problem for Google search and other search services.
ChatGPT is only another way to assemble data, maybe different (smarter, or just more bizarre) compared to other search engines; but by that reasoning, no search engine really cares about the age of its users, let alone their personal data.
Sure it cares, about collecting as much personal data as it can, to later use it to manipulate you into doing something you otherwise might not do. The fact that such algorithms can produce things like images or articles is just a side effect, since monetization of this technology comes from its power to manipulate real people. The better it gets at that, the more money it will make, and that is the sole reason they are doing it. If you do that, you are Google; if you don’t, you are DuckDuckGo.
tonZ,
Interesting to hear your perspective. I’m not a fan of this generalization, though. You are making a statement about the way American people think, but many people in the US can and do disagree with US privacy laws and corporate practices. In other words, these don’t necessarily represent what Americans think.
Upon further reflection, the article appears to be conflating different issues, as Brendan comments below. In particular, there’s apparently a data leak in the form and/or API that is used to query the AI and process payments. This has no relation to the data used to train the AI. The latter topic is interesting to discuss, as evidenced by the comments here; however, if I’m not mistaken, Italy’s ban stems from these data leaks and not the AI itself. Right?
I agree with you in principle. Although some search engines do have content filters that may be lacking in ChatGPT at the moment. Take a look at DuckDuckGo’s “safe search” filter, for example. I guess they want ChatGPT to have a content filter too.
…in other news from Italy today…
https://www.cnn.com/2023/04/01/europe/italian-government-penalize-english-words-intl/index.html
I’m curious what you think about this. It’s not dissimilar from the debate among those who want to protect French-language culture in Canada.
This article is a load of “distortion”.
See the official press release here: https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9870847
..and also see this: https://openai.com/blog/march-20-chatgpt-outage
Specifically; “the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window” and “the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time”.
Brendan.
Okay, if these reports are true, then yes, the cross-user data leaks are a real problem.
Whoever is in charge of their data storage most likely messed up. Probably a “let’s use a random 32-bit identifier for messages, there’s no way these will collide!” kind of thing.
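(As an aside: the birthday problem shows why random 32-bit IDs collide far sooner than intuition suggests. The numbers below are illustrative, not anything we know about OpenAI’s actual storage.)

```python
import math

def collision_prob(n: int, bits: int = 32) -> float:
    """Approximate birthday-problem probability that n randomly
    chosen `bits`-bit identifiers contain at least one collision:
    p ~= 1 - exp(-n*(n-1) / (2 * 2**bits))."""
    space = 2 ** bits
    return 1.0 - math.exp(-n * (n - 1) / (2 * space))

# With random 32-bit IDs, collisions become more likely than not
# after only ~77,000 messages; a chat service hits that in no time.
print(round(collision_prob(77_000), 2))    # ~0.50
print(round(collision_prob(100_000), 2))   # ~0.69
```

That is, the odds pass 50% at roughly the square root of the ID space (about 2^16 = 65,536 entries times 1.18), which is why real systems use 128-bit UUIDs or database-enforced unique keys instead.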
It can mean that OpenAI has really weak developers trying to make their research project a business as fast as they can. That’s worrying.
There is no intelligence at all in either insultbot or ChatGPT. Waste of money and time; it just responds with the most fitting search term picked by the algorithm. Both are just cheap tricks that use regular search engines.
You get the same answers by querying ddg or google for the most part.
Then ask: “why is it unfair for a trans woman to compete against women”
another fun question you can ask chatGPT is “I am white, is it racist to not want to date black people” and “I am black, is it racist to not want to date white people”
Those two produce different answers. ChatGPT is super racist.
NaGERST,
If we’re to judge it fairly based on the same tests used to gauge human intelligence, then arguably it’s already above average human intelligence and it’s already beating some professionals at board exams. “Intelligent” in this context is not the same thing as self-aware though, this isn’t “general intelligence”.
People need to be aware that this AI wasn’t trained to apply ethical concepts or even to verify facts. It was trained to answer questions using external data sources without any concept of if the data is right or wrong. Maybe some day AI may get trained to try and distinguish these, but until then it’s fundamentally unreasonable to expect current implementations of chatgpt to be a good judge of correctness and morality. Rather it is a reflection of the training data without verification. This is how chatgpt should be treated.
Even at this stage AI is generating sophisticated answers to fairly complex questions. A search engine just gives you a link, you have to go grok the contents yourself to answer your own question. Don’t get me wrong, there’s a lot of room for AI to improve, and it will. But to suggest AI offers nothing over a search engine seems more biased than true. If anything, I think AI is going to transform the way we’ve been searching up to now.
NaGERST,
I think you are asking the wrong questions.
As you suggested, if there is a straightforward answer in Google, Bing, Wikipedia or another reputable source, there is no need to consult with ChatGPT.
However, if you need to mimic human-like output for a professional project, it more than pays for itself.
For example, summarizing a large document, or the opposite, expanding a draft to a certain size. (Can you make this 1000 characters? Can you split it into three paragraphs? Can you add an opening section?) All of these are done well, and done better than what you could find on Fiverr or Upwork.
It essentially is a white collar helper. Not a databank. (Though it consults a huge databank for those activities).
Meh, it is just a script bot. Insultbot actually has better answers than chatgpt.
The EU is working on legislation to control the ethical and moral aspects of AI. The Italian approach is consistent with the GDPR, and if companies can’t work within it, then I think it’s pretty good that they can’t operate in the EU.
pfgbsd,
I assume you are being critical of AI more broadly and not simply the alleged user data vulnerabilities in this specific instance?
My opinion is that if the EU or Italy actually did that, it would be punitive to companies and professionals within their borders. I think idealistic attempts to “protect” them from AI would actually hurt them on multiple fronts once put into practice. Not only would they be unable to partake in AI’s growth opportunities, they’d still end up having to compete with its use by others who haven’t banned it. AI bans could end up combining the worst of both worlds.