If you’re eating a bag of chips in an area where “AI” software is being used to monitor people’s behaviour, you might want to reconsider. Some high school kid in the US was hanging out with his friends when, all of a sudden, he was swarmed by police officers with guns drawn. Held at gunpoint, he was told to lie down, after which he was detained. Obviously, this is a rather unpleasant experience, to say the least, especially considering the kid in question is a person of colour. In the US.
Anyway, the “AI” software used by the police department to monitor citizens’ behaviour mistook an empty bag of chips in his pocket for a gun. US police officers, who only receive a few weeks of training, didn’t question what the computer told them and pointed guns at a teenager.
In a statement, Omnilert expressed regret over the incident, acknowledging that the image “closely resembled a gun being held.” The company called it a “false positive,” but defended the system’s response, stating it “functioned as intended: to prioritize safety and awareness through rapid human verification.”
↫ Alexa Dikos and Rebecca Pryor at FOX45 News
I’ve been warning that the implementation of “AI” was going to lead to people dying, and while this poor kid got lucky this time, you know it’s only a matter of time before people start getting shot by US police because they’re too stupid to question their computer overlords. Add in the fact that “AI” is well-known to be deeply racist, and we have a very deadly cocktail of failures.

“rapid human verification” is a fun little euphemism for racial profiling and frivolous arrest.
Unfortunately, without seeing the evidence, none of us are in a position to judge it, or the AI for that matter. Humans who reviewed it say it did look like a gun. I don’t want to take humans with a conflict of interest at their word; however, nothing reported seems to indicate the AI didn’t do its job correctly. It alerted humans, who looked at the same images and then called police.
I get this is newsworthy because AI is attached, but humans make these same mistakes all of the time.
“Paranoid Cop Mistakes Gas Pump For a Pistol”
https://www.youtube.com/watch?v=O5JwWf_Q5qA
Without seeing the data we need to be open to the possibility that the AI actually performs better than humans. In hyperbole speak: more AI could save lives. I have to concede that I’m not privy to the data and don’t necessarily trust insiders behind closed doors at face value. It’s not right to ask the public to blindly trust technology. In principle, though, I think it’s important to approach these things scientifically and not to use animus against AI to assert a predetermined conclusion.
I also think the story lacks a lot of context about the state of affairs with guns in US schools. Parents are desperate to protect their kids at school, and we know for a fact that over a 180-day school year there will be an average of 1-2 shootings per day. This is the reality.
https://echomovement.org/school-shootings/
Even though I am a parent and this incident is scary, it’s really hard to agree that we shouldn’t use technology to look for guns in schools. I think police training to come out with guns blazing is often not appropriate, but it can be hard to judge what’s right to do at any given moment without the benefit of hindsight. These are all serious problems and I don’t think anyone has a complete solution.
The problem is that this technology puts us at the precipice of a steep and very slippery slope. Once the “AI” has been “proven” to be correct, say, 90% of the time, human review will be removed from the equation and officers will be authorized to shoot on sight anyone carrying a bag of chips or a handkerchief or whatever else in that 10% window the “AI” hallucinates as a weapon. No room for threat evaluation, no chance of talking to the alleged perpetrator, just shoot to stop the perceived threat.
And I’m not just saying that as some nerd on the Internet; I worked adjacent to law enforcement full time for the first twelve years of my adult life, and part time for several years after that while I was getting established in my current IT role. I went through a lot of the same training as them, including active shooter scenarios, and I spent A LOT of time around officers. I came away from that experience extremely distrusting and fearful of the American police community. This technology gives them all the excuse they need to be able to shoot “suspects” with no oversight and no fear of reprimand from the courts. They will absolutely eat this up and they cannot wait for Robocop to issue kill orders on anyone outside of a very narrow profile.
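To put rough numbers on why a detector that’s “right 90% of the time” is still a recipe for disaster once human review is removed, here’s a back-of-the-envelope Bayes sketch. The rates below are made up purely for illustration; nobody outside Omnilert knows the real figures:

```python
# Back-of-the-envelope calculation with made-up numbers:
# how often is an alert actually a gun, if real guns are rare?

true_positive_rate = 0.90    # assume the detector flags 90% of real guns
false_positive_rate = 0.10   # assume it flags 10% of harmless objects (the "10% window")
gun_base_rate = 0.001        # assume 1 in 1,000 monitored objects is actually a gun

p_alert = (true_positive_rate * gun_base_rate
           + false_positive_rate * (1 - gun_base_rate))
p_gun_given_alert = (true_positive_rate * gun_base_rate) / p_alert

print(f"P(actual gun | alert) = {p_gun_given_alert:.1%}")  # roughly 0.9%
```

Even with those generous made-up assumptions, the overwhelming majority of alerts would be chip bags and handkerchiefs, not guns, which is exactly why pulling the human out of the loop is so dangerous.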
Morgan,
I agree the alerts can be improperly responded to. And not only that, the introduction of phones/dash cams/body cams has shed new light on police abuse that likely always existed but was going on under the radar forever. Once again, I’d like to quantify things scientifically rather than rely on knee-jerk reactions, but I don’t deny you’re right about it happening. I’m tempted to suggest we should change the environment/procedures/training/screening/temperament/etc. to help cull the bad outcomes…but I’m totally aware that this is far from an original idea and things must be a lot more complicated on the ground. I also worry that behind the scenes (and out of public view) police may be socially engineered into believing the rules are an annoyance that needs to be navigated around rather than actually internalizing them.
The police in this country (that’s the freedom-loving USA) don’t need much of a reason to point a gun at kids. This happens all the time, those kids are just usually brown, so it doesn’t make the news. ‘merica!
CaptainN-,
I think the AI is right to alert for a possible gun. It’s humans who have discretion over what to do next. Sometimes the presence of police in and of itself endangers people more than anything likely to happen in their absence. Clearly in this scenario it would have been a better outcome for a school resource officer to investigate the situation before bringing the police in. However, the crux of the problem is that we don’t know the scenario we are in until after the fact. That’s what makes it so hard to prescribe a formulaic fix.
I think it could make sense for the police to take a strictly “backup” role to assist the school resource officer rather than committing the situation to police tactics right away. This could be a better fit for most scenarios, but I realize things are messy in reality and that the sensible course of action doesn’t always prevail.
Sorry to keep replying to you, I promise I’m not picking on you! Just some things that stand out, again, due to my experience in this field.
This is absolutely true, no matter the intent of the police presence.
This is what forensic investigation is for. When you have a car accident, especially in the days before cameras were everywhere, including on the car’s dashboard, the police who arrive on the scene and investigate will take notes on what all drivers involved have to say about it, but they don’t base their outcome on that. They take photos, draw diagrams, and study the entire scene — even measuring traffic light timing — to paint an overall picture of what happened. There’s a lot of science and math involved, and a lot of training surrounding accident investigation (it’s unfortunate that their other training isn’t this detailed and complete, but that’s another topic altogether).
With that said, there is absolutely a shit-ton of investigation done after events like this, or at least *there should be*. Unfortunately as I said in my other comment, this is one of those areas where police leadership will see an opportunity to wash their hands of any responsibility by passing it off as a “computer problem” and the false arrests and shootings will continue to escalate.
I absolutely agree with this sentiment, but the unfortunate fact is that school resource officers by and large are regular cops who couldn’t hack it on patrol or in an investigative role so they are stuck doing school security. They are usually just as itchy to pull their gun to solve a problem as any other cop, so they will often be the point of escalation for a given scenario. That’s of course not true of all resource officers; one of my oldest friends took the role because he was dedicated to keeping schools safe and educating the kids on drug abuse and such (D.A.R.E. program for those familiar). But he is a hidden gem in a mountain of dirty gravel.
Morgan,
There’s all the time in the world to investigate things after the fact, but not when you need to decide what actions to take in the moment. So I think we need to solve the latter.
I concede that I don’t know if AI had a different psychological impact on the police in this case. But to the extent that there may be a psychological component in justifying actions on account of AI, in principle this is pretty easy to fix by not telling police the image was flagged by AI at all. Whether it was AI or staff who initially noticed it shouldn’t really change the nature of the police investigation, should it?
I find it very difficult to fault the AI for doing its job and flagging images that look like guns. Given that this (allegedly) did look like a gun, the next logical question is whether the response was appropriate given the totality of circumstances.
I completely see your point that police will rationalize their escalations by claiming the danger is greater than it actually was. In this specific scenario, it looks excessive. But you have to pick a response before you can determine the scenario you are in. Was having police show up with their guns drawn absolutely necessary or is there a better alternative?
I hear you, but I can think of other examples where this doesn’t happen. For the TSA, finding guns is everyday routine, and they don’t respond this way. Most people aren’t even arrested.
“TSA screeners still find guns in carry-on luggage”
https://www.youtube.com/watch?v=S1HFf6GeN-4
So maybe we need to make the case for better training.