“AI” assistants misrepresent news content 45% of the time

An extensive study by the European Broadcasting Union and the BBC highlights just how deeply inaccurate and untrustworthy “AI” news results really are.

  • 45% of all AI answers had at least one significant issue.
  • 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
  • 20% contained major accuracy issues, including hallucinated details and outdated information.
  • Gemini performed worst, with significant issues in 76% of responses – more than double the rate of the other assistants – largely due to its poor sourcing performance.
  • A comparison between the BBC’s results from earlier this year and this study shows some improvements, but error levels remain high.
↫ BBC press release about the study

“AI” sucks even at its most basic function. It’s incredible how much money is being pumped into this scam, and how many people are wholeheartedly defending these bullshit generators as if their lives depended on it. If these tools can’t even summarise a text – a basic skill you learn in early primary school – how on earth are they supposed to handle more complex tasks like writing code, making medical assessments, or telling a bag of chips apart from a gun?

Maybe we deserve it.
