AI detection software mistakes a Doritos bag for a gun

Police handcuffed a 16-year-old student after an AI gun-detection system flagged a “possible weapon” that turned out to be a bag of Doritos. The episode has reignited debate about the place of AI in everyday life, especially in schools, and it demands Pakistan’s attention: we should let international incidents like this inform our own implementation of AI.

The incident took place on Monday outside Kenwood High School in Maryland, US. International news outlets report that local officers responded to a call about a “suspicious person with a weapon”; on arrival, police ordered the teenager to the ground, handcuffed and searched him, and found no firearm.

The student told local outlets he had eaten some crisps and put the empty packet in his pocket shortly before several police cars arrived with guns drawn. “They made me get on my knees, put my hands behind my back, and cuffed me,” he said. According to the police and Omnilert, the company whose AI tool analyses video for potential threats, the way he had placed his hands in his pockets “closely resembled a gun”.

Human error or AI failure?

In a letter to families, the school’s principal said the alert was reviewed “at approximately 7 pm” by the district’s safety team and cancelled once the team found there was no weapon. According to international reports, school officials described the episode as a communications failure rather than a software malfunction: the police were called even though the alert had already been cancelled.

News outlets report that school superintendent Myriam Rogers said the programme “worked as designed” by raising an alert for human verification, but a spokesperson said the principal “didn’t realise” the alert had already been cancelled when police were requested.

Omnilert told international outlets that its detection software is just “one layer” in a wider safety process that relies on human oversight. The company has made news before: in January 2025, its software failed to detect a gun before a school shooting in Nashville, Tennessee. On Monday the error ran the other way: the software flagged a threat that did not exist, and although human review caught the mistake, a student still faced down the barrel of a gun.

This incident is an example of a false positive: the software flagged a harmless object as a threat, and the human review layer caught the error exactly as designed, but a lapse in communication let the police response go ahead anyway.
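To make that communication gap concrete, here is a minimal sketch in Python of a human-in-the-loop alert pipeline. Everything in it is hypothetical: the Alert class, the review and dispatch_police functions, and the status values are illustrative assumptions, not Omnilert’s actual software. The point is the status check at the moment of dispatch, so that a cancellation recorded by the safety team automatically blocks the police call.

from dataclasses import dataclass
from enum import Enum, auto


class AlertStatus(Enum):
    PENDING_REVIEW = auto()   # AI has flagged a possible weapon; no action yet
    CONFIRMED = auto()        # a human reviewer verified the threat
    CANCELLED = auto()        # a human reviewer found no weapon (false positive)


@dataclass
class Alert:
    camera_id: str
    label: str                # e.g. "possible weapon"
    confidence: float         # model score in [0, 1]
    status: AlertStatus = AlertStatus.PENDING_REVIEW


def review(alert: Alert, weapon_found: bool) -> None:
    """Human safety-team review: the only step allowed to confirm an alert."""
    alert.status = AlertStatus.CONFIRMED if weapon_found else AlertStatus.CANCELLED


def dispatch_police(alert: Alert) -> bool:
    """Dispatch only on a human-confirmed alert; a pending or cancelled
    alert must never trigger a police response."""
    if alert.status is AlertStatus.CONFIRMED:
        print(f"Dispatching to camera {alert.camera_id}: {alert.label}")
        return True
    print(f"No dispatch: alert is {alert.status.name}")
    return False


# Hypothetical replay of the Kenwood sequence: the safety team cancels
# the alert, and the status check blocks the police call.
alert = Alert(camera_id="ext-03", label="possible weapon", confidence=0.71)
review(alert, weapon_found=False)   # reviewer finds a crisp packet, not a gun
dispatch_police(alert)              # prints "No dispatch: alert is CANCELLED"

In the Kenwood case, the cancellation and the call to police evidently travelled through separate channels. Encoding the check at the single point of dispatch, as above, is the kind of safeguard the checks and balances discussed below would mandate.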

How should AI be implemented?

With this year’s policy push to deploy AI at scale across different sectors, the question Pakistan should ask itself is how to implement AI detection software in a way that is both accurate and beneficial for all.

A sensible line to draw is one set by evidence, not promises. The US Federal Trade Commission, for example, acted against Evolv Technologies, a company that designs AI-driven security screening systems, over misleading claims about its products’ performance.

This serves as a reminder that hype can influence policy. The bar should therefore be raised: an alert should pass human oversight before it can trigger a police deployment, and policy should be written to enforce those checks and balances between AI systems and human judgment.

Even as recent tensions with Afghanistan abate, Pakistan still faces major concerns over security and citizens’ safety. It is up to policymakers and officials to study other countries’ experience and bring in AI, for security systems and otherwise, in a way that does the most good and prevents the most harm.
