Let’s check in on TrueNAS, who apparently employ “AI” to handle customer service tickets. Kyle Kingsbury had to have dealings with TrueNAS’ customer support, and it was a complete trashfire of irrelevance and obviously wrong answers, spiraling all the way into utter lies. The “AI” couldn’t generate its way out of a paper bag, and for a paying customer who is entitled to support, that’s not a great experience.
Kingsbury concludes:
I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.
Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.
↫ Kyle Kingsbury
This time, it’s just about an upgrade process for a NAS, and the worst possible outcome “AI” generated bullshit could lead to is a few lost files. Potentially disastrous on a personal level for the customer involved, but not exactly a massive problem. However, once we’re talking support for medical devices, medication, dangerous power tools, and worse, this could – and trust me, will – lead to injury and death.
TrueNAS, for its part, contacted Kingsbury after his blog post blew up, assured him that “their support process does not normally incorporate LLMs”, and promised to investigate internally what, exactly, happened. I hope the popularity of Kingsbury’s post has jolted whoever is responsible for customer service at TrueNAS into realizing that farming out customer service to text generators is a surefire way to damage your reputation.
It’s got the telltale signs of an LLM, and an uninformed one at that. This is one of the problems with using generic LLMs that aren’t trained on the subject matter. They certainly need to do better. I think a highly specialized AI could actually work very well for tech support, but these generic AIs are too generic, and it’s quite evident when you use them that they don’t know what they’re talking about.
From my perspective, customer support was already a dumpster fire, and has been for many years. I needed support from GoDaddy this week and they were telling us things that were plainly wrong, though at least the ticket did get escalated successfully. I have a client who is significantly invested in Google, and oh boy, Google has some of the worst support I’ve ever encountered – agents literally sending us around in circles. The issue went unresolved because none of Google’s agents knew how the platform works, and furthermore, none of them knew how to reach anyone who could find out. It’s insane to receive such incompetent support from a trillion-dollar company; I genuinely think Google may have laid off the critical employees who would have been able to help. Another company, Twilio, which I had to deal with this year, doesn’t bother providing human support at all – just a useless AI service. Ironically, their product is a messaging platform that helps businesses manage customer contacts, something Twilio itself can’t even be bothered to do. I lost the job because of Twilio’s lack of support.
In theory, companies should worry about damaging their reputations, but in practice they seem willing to explore just how far they can take bad service. Providing “normal” support doesn’t bring in bonuses, but cutting costs does.
I clicked the link and got “Unavailable Due to the UK Online Safety Act”. Love it. I think many, many websites should do this and maybe normal people will start to think – or at least research VPNs.
j0scher,
Can you clarify which link was blocked, and by whom?
I looked up the error you quoted, albeit from another site, and this link suggests that website operators are geofencing their websites in the UK.
https://helpforum.sky.com/t5/Broadband/quot-Unavailable-Due-to-the-UK-Online-Safety-Act-quot/m-p/4995719
If this is indeed what’s happening, I’m not sure what there is to love about the policy. The idea behind the law is presumably to get websites to comply by changing how they handle user privacy, but such messages suggest that websites are instead complying by denying UK residents access altogether. If you respond, do you mind clarifying why this is a good thing?
On further thought, I can’t tell if your post was sarcastically suggesting it’s a bad policy.
I am not familiar with the Safety Act and had to look it up…
https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer
I wonder how legislators would respond if too many websites end up blocking UK users to avoid the risk.