Daniel Stenberg, creator and maintainer of curl, has had enough of the never-ending torrent of “AI”-generated security reports the curl project has to deal with.
That’s it. I’ve had it. I’m putting my foot down on this craziness.
1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: “Did you use an AI to find the problem or generate this submission?” (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)
2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.
We still have not seen a single valid security report done with AI help.
↫ Daniel Stenberg
This is the real impact of “AI”: streams of digital trash real humans have to clean up. While proponents of “AI” keep claiming it will increase productivity, actual studies show this not to be the case. Instead, what “AI” is really doing is creating more work for others to deal with by barfing useless garbage into other people’s backyards. It’s like the digital version of the western world sending its trash to third-world countries to deal with.
The best possible sign that “AI” is a toxic trash heap you wouldn’t want to have anything to do with is the people fighting for team “AI”.
In Zuckerberg’s vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances.
↫ Meghan Bobrowsky at the WSJ
Mark Zuckerberg, who built his empire by using people’s photos without permission so he could rank who was hotter, who used Facebook logins to break into journalists’ email accounts because they were about to publish a negative story about him, who called Facebook users “dumb fucks” for entrusting their personal information to him, is at the forefront of the fight for “AI”. If that isn’t the ultimate proof there’s something deeply wrong and ethically unsound about “AI”, I don’t know what is.
It may just be me, but it seems AI’s main feature is mass surveillance and ideological enforcement. If the majority of our friends are programmed to think a certain way, so will we.
Good or bad, it is right around the corner. We have finally arrived at techno-feudalism. As long as we keep critical information in central institutions, we are vulnerable to manipulation and blackmail.
One need not be a proponent of AI to predict the rise of AI. There is no doubt a lot to complain about. I’d rather be around humans than AI machines, scammers have been quick to embrace AI, AI leads to the loss of jobs that feed families, AI isn’t good at distinguishing between truth and fiction, etc. There is a lot to criticize. However, I don’t think any of these criticisms are going to override corporate interests. Some of us have been warning about this for years, and too often the response was “no it won’t, AI isn’t good enough and will never be good enough”. The short-term challenges aren’t going to stop the progression of AI in the long term, though. AI doesn’t need to achieve AGI to threaten billions of human jobs. I can’t hammer this home enough: the majority of jobs genuinely don’t require AGI, just a well-trained custom model will do. As training improves and AI costs go down, more corporations will switch.
It may not be the best outcome for human society, but honestly, since when have corporations done what’s best for us? We are only a means to an end, cogs in a giant machine. Humans are practically irrelevant to the extent that we can be replaced in the machine. This is why, as job-specific training improves and costs come down, we can expect more humans to be replaced. I understand many people are strongly opposed to this, but I don’t see how they are going to stop it.
AI capabilities will only grow. The sooner we dig our heads out of the sand and think about how to leverage the capabilities that are emerging, the better off we’ll be.
It will not stop, and it is not “AI” in quotes. It is a real form of intelligence that is emerging. Deal with it.
AI, like any automated tool, just narrows things down, and then a human has to double-check and weed out the false positives. It’s the same with all code-checking/scanning tools.
The problem curl are having is that someone’s dumping the raw feed on them rather than going through it first.
If you’re debugging your own code then these tools can be very useful, and you can quickly spot false positives because you know the codebase and understand what it’s doing.
That’s one of the big problems I see with the current crop of AI zealots. I keep hearing how AI is the future, which I’m not contesting, but they seem to forget that we do not live in the future; we live in the present, and at present, AI is only able to do a fraction of what people say it will be able to do. Overzealously trying to sell AI’s capabilities is only going to cause an eventual backlash and set its usefulness back years, possibly decades.
“The problem curl are having is that someone’s dumping the raw feed on them rather than going through it first.”
That’s the problem: It’s not narrowing things down. It’s adding a ton of noise to the signal.
I take it Zuck is using the “royal we” there…