Author’s note: this is a report of someone else’s findings. A report on a report. A summary. I have not verified these claims, nor can I vouch for the accuracy of Motherboard’s findings. For the full reporting, read Motherboard’s article.
According to an investigation from Vice/Motherboard, ShotSpotter has modified data from its gunshot-detecting AI at police request. AI makes mistakes, so re-examining initial findings wouldn’t be so bad, right? Except, according to Motherboard’s investigation, this can sometimes mean redefining the place, time, and type of an event, turning a firecracker a mile away from a crime scene into a gunshot that implicates a suspect police happen to like for a shooting. The findings suggest police could be asking companies to modify evidence to increase arrest rates, even if that means arresting the wrong person. It’s a hefty accusation, but Motherboard seems to have the receipts, and they’re particularly damning.
What if the evidence used against you in a courtroom had literally been modified to implicate you? It would be like police putting your fingerprints on a murder weapon, or digitally adding you to security footage. Or, in the case of evidence supplied by acoustic gunshot detection technology, potentially claiming that a firecracker a mile away was actually a gunshot from your parked car at the time of a murder.
ShotSpotter
ShotSpotter is a sci-fi-like tool that is, on its surface, pretty simple. An array of microphones hidden across a city listens for loud noises, AI differentiates between a firecracker, a car backfire, and a gunshot, and software triangulates where the sound came from. This allows police to respond faster than someone can report a gunshot and, ShotSpotter claims, with a lower chance of false positives. It’s like something out of science fiction, police looking at a map showing the time and place of a gunshot, but it’s very real. The recorded data and audio can later be used in a case against a suspect.
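The triangulation step is conceptually straightforward: sound travels at a roughly known speed, so the differences in when each sensor hears the same bang constrain where it happened. Below is a minimal sketch of that idea, time-difference-of-arrival multilateration, in Python. To be clear, this is not ShotSpotter’s actual algorithm or code; the sensor coordinates, arrival times, and solver choice are illustrative assumptions only.

```python
# Minimal sketch of time-difference-of-arrival (TDOA) multilateration.
# NOT ShotSpotter's algorithm: sensor positions, arrival times, and the
# least-squares solver are illustrative assumptions for this example.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # metres per second, assumed constant

# Hypothetical sensor positions (x, y) in metres.
sensors = np.array([
    [0.0, 0.0],
    [800.0, 0.0],
    [0.0, 600.0],
    [800.0, 600.0],
])

# Times (seconds) at which each sensor heard the same impulsive sound,
# chosen to be roughly consistent with a source near (200, 150) metres.
arrival_times = np.array([1.729, 2.803, 2.436, 3.187])

def residuals(source):
    # Predicted vs. measured arrival-time differences, sensor 0 as reference.
    dists = np.linalg.norm(sensors - source, axis=1)
    predicted = (dists - dists[0]) / SPEED_OF_SOUND
    measured = arrival_times - arrival_times[0]
    return predicted - measured

# Start the solver at the centroid of the sensor array.
result = least_squares(residuals, x0=sensors.mean(axis=0))
print("Estimated source location (metres):", result.x)
```

A real deployment would also have to cope with echoes off buildings, temperature-dependent sound speed, imprecise clocks, and, crucially, the classification question of whether the sound was a gunshot at all, which is where Motherboard’s reporting says the human edits come in.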
The technology is especially interesting because, in a world where surveillance and privacy are serious concerns, a piece of tech that collects the most limited amount of information possible, sound, is capable of telling people where gunshots occurred. Privacy advocates generally find this preferable to video surveillance or facial recognition.
That is, if the data isn’t manipulated with intent to implicate a person.
Motherboard’s Investigation
Obviously I don’t have this data myself, so I highly recommend reading Motherboard’s full article on the matter. I can only report on what they’ve reported; this is a summary. According to Motherboard, ShotSpotter changed data it had stored on incidents at the request of police, seemingly to implicate particular suspects when other suspects couldn’t be found. Motherboard highlighted a few cases, including that of 64-year-old Michael Williams. He brought a man, 25-year-old Safarain Herring, to a Chicago hospital with a gunshot wound to the head. Herring died two days later. Williams claims the man was shot in a drive-by shooting. ShotSpotter’s data didn’t initially implicate Williams directly. However, edits to that data “corrected” what was originally classified as a firework into a gunshot, and moved the location a mile away, to where Williams’ car was parked. Months passed between the first edit, reclassifying the firework as a gunshot, and the later one, relocating the incident to where Williams’ car was parked.
A public defender made the claim that ShotSpotter’s output was changed specifically to target his client.
“Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams.”
– Mr. Williams’ lawyer
Police Involvement?
“[ShotSpotter’s] analysts frequently modify alerts at the request of police departments—some of which appear to be grasping for evidence that supports their narrative of events.”
– From Motherboard’s report
This alone doesn’t prove beyond any doubt that it was specifically police who changed the data in this case. The months between edits, however, could suggest that ShotSpotter changed this data specifically to implicate an individual. Since ShotSpotter couldn’t have known details about the case, like the location of Williams’ parked car, without police involvement, it’s possible police were involved in these edits. That, or the location was simply wrong. In other cases, police reportedly asked ShotSpotter directly to re-examine evidence, looking for specific new outcomes.
The lawyer defending Williams filed a Frye motion, a request that the judge examine the scientific validity of a particular piece of evidence. Rather than defend ShotSpotter’s provided data, prosecutors withdrew the evidence.
According to Motherboard, Williams’ case wasn’t an isolated one: ShotSpotter frequently modifies alerts identified by its AI, edits the company says are made to correct the AI’s mistakes.
“The [Motherboard] article referenced is misleading and inaccurate about the use of ShotSpotter in criminal cases. We categorically deny any allegations that ShotSpotter manipulates any details of an incident at the request of the police. We respond to requests to further investigate an incident but only to provide the facts that we can determine and not to fit a pre-determined narrative.”
– From ShotSpotter’s response to Gizmodo regarding Motherboard’s investigation
In some cases, data from the incident is all that survives; the actual audio recordings are deleted, so a jury or judge cannot examine the original evidence. If ShotSpotter’s techniques haven’t been verified, that’s almost like digital hearsay: an unverified technology and process claiming that an audio recording contained damning evidence, rather than providing both the audio and the analysis. In one case, a jury cited ShotSpotter’s unreliable analysis and the missing audio of a gunshot as reason to acquit the accused. Motherboard’s reporting mentions numerous other suspicious cases.
Unverified Technology
ShotSpotter’s technology has allegedly never been scrutinized in a court of law. Had prosecutors continued using ShotSpotter’s evidence against Williams, it would have been the first time a court examined and vetted ShotSpotter’s techniques and code, according to Jonathan Manes, an attorney at the MacArthur Justice Center, who spoke with Vice/Motherboard.
“Right now, nobody outside of ShotSpotter has ever been able to look under the hood and audit this technology. We wouldn’t let forensic crime labs use a DNA test that hadn’t been vetted and audited.”
– Jonathan Manes
ShotSpotter’s Senior Vice President defended the technology, stating in an email to Motherboard that “ShotSpotter has no reason to believe that these decisions are based on a judgment about the ShotSpotter technology.”
Racial Bias
Like many aspects of policing, deployment of technologies like ShotSpotter’s can carry racial bias. By ignoring historical racial bias in policing, technology can perpetuate those biases rather than eliminate them. Vice’s Motherboard has data showing a large disparity in where Chicago deployed ShotSpotter’s sensors. According to Motherboard’s reporting, cities place more ShotSpotter sensors in predominantly Black and brown neighborhoods. Considering that police and researchers say tools like ShotSpotter often increase the number of reported gunshots, and that these reports are frequently false positives, the technology could be unnecessarily sending police into predominantly non-white neighborhoods. They go in expecting a violent confrontation, and that sets the tone for their interactions.
Squandered Potential
Acoustic gunshot detection technology, like ShotSpotter’s, could, in theory, help neighborhoods stay safe. After all, militaries use similar technology in war zones to identify the location of shots so their soldiers can quickly find cover and respond. However, cities and researchers have questioned the technology’s accuracy and ability to differentiate between a false alert and an emergency. Gunshot detection could help identify where people need immediate help, and help implicate criminals. But it hasn’t been properly vetted by courts or independent third parties, and some studies claim acoustic gunshot detection systems don’t help their cities at all, actually increasing strain on police resources. They suggest human reports through calls to 911 are sufficient and more accurate. Deployment of gunshot detection could also potentially increase racial bias, leading to more violence against racial minorities at the hands of police.
ShotSpotter stands by the accuracy of its technology, claiming its auditors agree with the AI in 90% of cases. However, if a technology is shaped by anyone whose objective differs from the technology’s stated purpose, identifying gunshots rather than assembling evidence against a chosen suspect, it can become a tool for misdirection. According to Motherboard’s reporting, that’s exactly what happened between police and ShotSpotter.
Possibly as a result of both previous analyses and the details in Motherboard’s most recent report, activists have rallied against deployment of ShotSpotter in their neighborhoods. In San Diego, a decision on a ShotSpotter contract has been delayed. In Chicago, activists have pushed the city not to renew its contract with ShotSpotter, which ends this month. Communities could turn against these tools due to the potential for corrupt and dangerous application of them.
Sources and Further Reading:
- Abbie Alford, CBS8
- Todd Feathers, Vice/Motherboard, [2]
- Andy Grimm, Chicago Sun-Times
- Dennis Mares and Emily Blackburn, via ResearchGate
- Chris Mills Rodrigo, The Hill
- Lucas Ropek, Gizmodo