Artificial Intelligence: Stopping Violent Attackers in Their Tracks
“Guns don’t kill people, people kill people.” For decades, the NRA and other pro-gun special interest groups have used this line of logic to justify their exorbitant lobbying spending – over $250 million in total. However, this deceptively straightforward reasoning neglects the reality that the skyrocketing accessibility of firearms has made it significantly easier for people to carry out acts of violence. The majority of weapons are sold legally to buyers who pass background checks, but loopholes remain unregulated. In fact, the dark web has gained notoriety as a platform for lone-wolf terrorists planning attacks like the 2017 Las Vegas shooting, which left sixty people dead. The United States itself is responsible for 60% of illegal weapon sales conducted on this virtual black market, leaving attackers nearly untraceable until it is too late. The last frontier of resistance appears to be artificial intelligence; as technology grows more sophisticated, it will have to keep pace with society’s most sinister individuals.
Existing Surveillance Mechanisms
In the aftermath of 9/11, the Bush administration signed the PATRIOT Act, at the time the most dramatic expansion of the federal government’s surveillance capabilities. Discreet wiretapping to identify and apprehend suspected terrorists has become a mainstay of American law enforcement, especially because it does not require proving probable cause – a direct violation of the Fourth Amendment. However, this expanded power has not meaningfully prevented crime: the Justice Department found that “FBI agents did not identify any major case developments that resulted from records obtained.” The inaccuracies of these expanded surveillance measures were most prominently exposed in 2004, when the FBI arrested Brandon Mayfield, a recent convert to Islam, in connection with the Madrid train bombings. After detaining him for two weeks, the Bureau admitted that its data and records were inaccurate. In conjunction with warrants, wiretapping remains the foremost mechanism for law enforcement officials to crack down on violent crime, with nearly 3,000 wiretaps authorized in 2018 alone. Fundamentally, existing measures operate on a preemptive or retroactive basis: either they run the risk of punishing the wrong people (like Mr. Mayfield), or they only help dole out convictions after the crime has been committed.
Use Cases for AI in Weapons Surveillance
Artificial intelligence’s primary advantage is its ability to respond to threats in real time, providing the most accurate assessment of a violent crime as it unfolds. Specifically, algorithms can be “taught” to detect weapons by examining every frame of a scene and matching preloaded images of guns against the firearms being carried. Firms like ZeroEyes and Athena Security that have made advances in this space liken the technology to a personal computer’s graphics processing unit (GPU) and claim that the algorithm can also provide law enforcement officials with a description of the perpetrator and the level of threat they pose. Lisa Falzone, the CEO of Athena Security, has stated that her firm’s computer vision technology detects weapons with more than 99% accuracy. Such technology would give law enforcement a significant advantage before officers arrive on the scene and help prevent the misidentification of suspects.
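The frame-by-frame matching loop described above can be sketched in a few lines of Python. This is an illustrative simplification, not any vendor’s actual system: the detector itself is stubbed out (ZeroEyes and Athena Security use proprietary computer-vision models), and the confidence threshold is an assumed parameter.

```python
# Minimal sketch of a frame-by-frame weapon-detection loop.
# The detector is a stand-in; real systems run trained models.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "handgun", "rifle"
    confidence: float  # model confidence, 0.0 to 1.0
    bbox: tuple        # (x, y, width, height) in pixels

def detect_weapons(frame) -> List[Detection]:
    """Stand-in for a model trained on preloaded images of firearms.
    Here a 'frame' is just a dict carrying pre-computed detections."""
    return frame.get("detections", [])

def scan_stream(frames, threshold: float = 0.9) -> List[Detection]:
    """Examine every frame and keep only high-confidence matches."""
    alerts = []
    for frame in frames:
        for det in detect_weapons(frame):
            if det.confidence >= threshold:
                alerts.append(det)
    return alerts

# Example: two frames; only the confident handgun match survives.
frames = [
    {"detections": [Detection("umbrella", 0.40, (10, 10, 5, 5))]},
    {"detections": [Detection("handgun", 0.97, (120, 80, 40, 30))]},
]
print(len(scan_stream(frames)))  # 1
```

The threshold is the key tuning knob: set too low, the system floods responders with false alarms; set too high, it misses partially occluded weapons, which is one reason the low-light limitations discussed below matter.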
Omnilert, the company that popularized the emergency mass notification system, has even conducted tests of its technology. The results were reassuring: the AI system recognized a suspect with a gun and immediately notified security via text, along with a picture of the suspect. However, the most significant drawback of existing AI algorithms for weapons detection is their imperfect performance in low-light environments. Without the clear lines of sight used in trials, current algorithms cannot achieve the near-perfect accuracy rates these firms have repeatedly touted. As tech firms continue to funnel money and resources into research and development, Omnilert in particular has expressed interest in rolling out its product on college campuses to test its efficacy. However, skeptics have been quick to point out the potential Orwellian concerns associated with technology that operates by constantly scanning its surroundings.
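The detect-then-notify flow from Omnilert’s trial can be sketched as a simple fan-out. Everything here is hypothetical scaffolding, not Omnilert’s API: `send_text` stands in for whatever SMS/MMS gateway a deployment would actually use, and the detection record’s fields are assumed for illustration.

```python
# Sketch of the alert path: a flagged weapon triggers a text message,
# with a snapshot attached, to every on-duty security responder.
# `send_text` is a hypothetical stand-in for a real messaging gateway.

import time

def send_text(recipient: str, message: str, attachment: str) -> dict:
    """Stand-in for an SMS/MMS gateway call; returns the payload."""
    return {"to": recipient, "body": message, "attachment": attachment}

def raise_alert(detection: dict, snapshot_path: str, security_team: list) -> list:
    """Compose one alert and fan it out to every responder."""
    body = (f"WEAPON DETECTED: {detection['label']} "
            f"({detection['confidence']:.0%} confidence) "
            f"at camera {detection['camera_id']}, "
            f"{time.strftime('%H:%M:%S', time.gmtime(detection['timestamp']))} UTC")
    return [send_text(member, body, snapshot_path) for member in security_team]

detection = {"label": "handgun", "confidence": 0.97,
             "camera_id": "lobby-2", "timestamp": 0}
alerts = raise_alert(detection, "snapshots/frame_0412.jpg",
                     ["+15551230001", "+15551230002"])
print(len(alerts))  # 2 — one text per responder
```

Even in this toy form, the design choice is visible: the alert carries a description and a snapshot rather than an identity, which is exactly the trade-off the ethics section below turns on.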
Ethics of AI in Weapons Detection
This year, the one billionth surveillance camera will be installed somewhere on our planet. Traditional forms of camera-based detection like CCTV have long been criticized for playing into law enforcement’s racial biases. Specifically, facial recognition software from leading companies like IBM and Microsoft has repeatedly misidentified Black women at the highest rate. Training datasets composed primarily of white individuals, combined with mugshot images of predominantly people of color, heavily skew the accuracy of predictive policing technology. In turn, these inaccuracies continue to bolster resistance to adopting more AI-centric surveillance mechanisms.
However, AI firms in the weapons detection industry have attempted to quell these ethical concerns. For example, Athena Security’s cameras intentionally blur the faces of individuals in the frame and do not immediately reveal their identities. Instead, they provide only a description of the individual, enough for law enforcement to be adequately prepared upon arrival at the scene. The company likens its product to a smoke detector: instead of profiling individuals, it simply searches for potentially dangerous objects and notifies the appropriate law enforcement officials to respond. But this “description” still needs to carry enough information for a quick and accurate response, creating a dilemma for companies that do not want to feed into existing biases. In the coming years, federal legislation will define the line between privacy overreach and necessary surveillance, and in the process, the safety of potentially every American.
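The face-blurring step itself is mechanically simple, and a toy version makes the privacy guarantee concrete. The sketch below is an assumption-laden simplification of what a system like Athena Security’s might do: a frame is just a 2-D list of grayscale pixel values, and “blurring” is block averaging over each reported face bounding box.

```python
# Sketch of the privacy step: blur every detected face region
# before the frame is forwarded to an operator. Block averaging
# stands in for a production-grade Gaussian blur or pixelation.

def blur_region(frame, bbox):
    """Replace pixels inside bbox=(x, y, width, height) with their
    average value, destroying identifying detail in that region."""
    x, y, w, h = bbox
    avg = sum(frame[r][c] for r in range(y, y + h)
                          for c in range(x, x + w)) // (w * h)
    for r in range(y, y + h):
        for c in range(x, x + w):
            frame[r][c] = avg
    return frame

def anonymize(frame, face_boxes):
    """Blur every detected face before the frame leaves the camera."""
    for bbox in face_boxes:
        blur_region(frame, bbox)
    return frame

# 4x4 test frame with distinct pixel values; blur the 2x2 patch at (1, 1).
frame = [[4 * r + c for c in range(4)] for r in range(4)]
anonymize(frame, [(1, 1, 2, 2)])
print(frame[1][1], frame[2][2])  # 7 7 — the region collapsed to its average
```

Note that the blur happens before the frame is forwarded, so identity never leaves the camera; only the weapon detection and a generic description do, which is the crux of the “smoke detector” analogy.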
Looking Ahead
Artificial intelligence has impacted nearly every facet of our lives, whether we realize it or not. It has made our commutes quicker, shopping more convenient, and communication easier. However, in a country where there are more civilian-owned guns than civilians, there’s one essential question that AI has yet to answer: will it make us safer?