Facial Recognition Led to an Arrest. It Was the Wrong Person.

Facial recognition hides many biases. Image: Microsoft

We’ve long known that facial recognition in the United States is most accurate for white men and far less accurate for everyone else. In fact, AI in general tends toward racism and sexism, because humanity holds those biases, and the tech industry especially so. This comes down to training sets and the fact that the programmers writing these algorithms don’t realize that if they don’t account for bias, bias will creep in. It’s a far more complicated issue than that, but the outcome is the same: AI is racist and sexist.

Basically, if you’re not thinking about how AI could misinterpret something, it almost certainly will. Since issues like racism and sexism aren’t front of mind in the predominantly white and male engineering departments at most tech companies, their software tends to carry exactly the biases you’d expect. Facial recognition makes far more mistakes for Black, Indigenous, and people of color (BIPOC), as well as for women.

This has led to problems before, but amid ongoing protests for racial justice and for defunding or reforming the police in the United States, a false arrest can finally lead to change. It can serve as the spark that helps people realize what a widespread issue this really is.

Police arrested Robert Williams in Detroit in January. They did so based on facial recognition that claimed he had committed a crime. Police treated him poorly and failed to follow protocol. But Williams was innocent. After a night in jail and hours of interrogation, police released him. His case may be the first time a department has admitted that facial recognition led to a false arrest, but it certainly wasn’t the first time this has happened. In fact, without drastic changes to policing, it won’t be the last time, either.

Robert Williams’ False Arrest

In October of 2018, a Shinola store reported a shoplifter. Bad faith, bad AI, and bad policing led to police arresting an innocent man in January of 2020. Their only piece of evidence? A facial recognition match between Robert Williams’ driver’s license photo and the blurry, unreliable still pulled from the security footage. The fact that Williams is a Black man clearly led to his arrest.

Williams wasn’t the man who stole from the Shinola, a fact that seemed obvious to him even when looking objectively at the still from the security camera. He asked detectives, “You think all black men look alike?” Their only response was, “I guess the computer got it wrong,” as if there weren’t two humans in the room who could have made the comparison themselves.

Police arrested Robert Williams, an innocent man with an alibi, based solely on the output of a facial recognition program they didn’t understand.

Bad Police Work From Start to Finish

Normally, you need evidence to arrest someone: some proof of a crime, enough to tie a person to it and justify an arrest and trial. In the Shinola case, police had just one piece of evidence: a blurry video. They ran it through facial recognition from DataWorks Plus, a company the Detroit police department has a $5.5 million deal with. DataWorks Plus doesn’t have its own algorithm; it runs images through a variety of vendors’ services and returns the results to police. It doesn’t test the accuracy or validity of those results, either. There’s nothing scientific in the results it hands out. In fact, Todd Pastorini, general manager at DataWorks, said, “We’ve become a pseudo-expert in the technology,” which leaves much to be desired.

Now police had a name: DataWorks Plus matched the still to Williams’ driver’s license photo. Proper police work would treat that result as a clue rather than a smoking gun, and investigate Robert Williams from there. They should have shown his photo to an eyewitness, as the police were aware of one. The Detroit police should have checked his social media profiles for the clothing seen in the video, or for the stolen merchandise. They could have interviewed him, questioned him without arresting him. As a final measure, the detectives could have gotten a search warrant to look for the stolen goods. They did none of these things.

Police did take one step to verify the facial recognition, but see if you can spot the problem. They went to Katherine Johnston, an investigator at Mackinac Partners, the loss prevention firm Shinola hired. She had only ever seen the video she provided to police; she was not an eyewitness. They showed her a lineup of six photos, and she picked out Williams.

Can you see the problem?

The police were still relying on a single piece of evidence: a blurry still from a video. They tried to tie it to Robert Williams without ever asking an eyewitness or doing any of the necessary police work. They just figured they’d arrest him and see how it played out.

For BIPOC, the answer is usually the same: not well. Bias in policing, sentencing, and jury selection works against them, sending innocent people behind bars.

In January of 2020, Robert Williams was arrested on his front lawn, in front of his wife and young daughter, for a crime he didn’t commit, by police officers who didn’t want to do their jobs. He was held for 30 hours, released on a $1,000 personal bond, and has had to miss work multiple times since. All because police had a single piece of evidence, trusted often-racist facial recognition software, and didn’t care to verify that Robert Williams was actually the shoplifter.

They let their own biases supplement the bias of the facial recognition.

Had They Checked…

Had police in this case done anything other than assume a Black man was guilty at the mere suggestion of it, they would have found that he had never possessed the stolen goods. He did not own the clothing seen in the video. An eyewitness, who was never interviewed, could have stated that the man in their photo was not the thief. Finally, they could have found the Instagram video Williams posted at the time of the robbery. He was heading home from work, and a song he loved came on the radio. It was “We Are One” by Maze featuring Frankie Beverly.

“I can’t understand

Why we treat each other in this way

Taking up time

With the silly games we play.”

– “We Are One” lyrics

Mr. Williams with his wife, Melissa, and their daughters at home in Farmington Hills, Mich. Photo: Sylvia Jarrus/The New York Times


Had police not profiled him and leaned on a lone piece of evidence, they would have discovered a caring family man with an alibi. They didn’t care; they just wanted to close a case.

Facial Recognition and AI Flaws

Diagram: a feedback loop in which humans create data, the data is fed into algorithms, the algorithms discover patterns, and those patterns are returned to humans.

If you tell a kid that 1 + 1 = 3 all their life, they’re going to believe you. Eventually, as they get older, they may call you out on it, but with that faulty dataset, they’ll continue to repeat the falsehood that 1 + 1 = 3. AI algorithms aren’t much different. If you feed them data full of bias, you’re going to get bias out. If you write algorithms that don’t account for that bias, you can actually amplify the effect. And if the people asking those algorithms for results carry their own biases, a small amount of bias gets compounded until it ends up targeting someone.
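To make “bias in, bias out” concrete, here is a minimal, hypothetical sketch in Python. It does not model any real facial recognition system; it simply fits a single decision threshold to data where one group vastly outnumbers the other, then shows the under-represented group ending up with a higher error rate. The group sizes and score distributions are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one "match score" feature, two groups.
# Group A dominates the training data; group B's scores are distributed
# differently, so a threshold tuned mostly on A misclassifies B more often.

def sample(group, label, n):
    # Made-up distributions: group A negatives ~ N(0, 1), positives ~ N(2, 1);
    # group B is shifted by +1 in both classes.
    shift = 0.0 if group == "A" else 1.0
    mean = shift + (2.0 if label == 1 else 0.0)
    return rng.normal(mean, 1.0, n)

# Imbalanced training set: 1,000 examples from group A, only 50 from group B.
scores, labels = [], []
for group, n in [("A", 1000), ("B", 50)]:
    for label in (0, 1):
        s = sample(group, label, n // 2)
        scores.extend(s)
        labels.extend([label] * len(s))
scores, labels = np.array(scores), np.array(labels)

# "Training": pick the threshold that minimizes error on the pooled data.
# Because group A dominates, the threshold ends up tuned to group A.
candidates = np.linspace(scores.min(), scores.max(), 500)
errors = [np.mean((scores > t).astype(int) != labels) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

# Evaluate on fresh, equally sized test sets for each group.
for group in ("A", "B"):
    test_scores = np.concatenate([sample(group, 0, 500), sample(group, 1, 500)])
    test_labels = np.array([0] * 500 + [1] * 500)
    err = np.mean((test_scores > threshold).astype(int) != test_labels)
    print(f"Group {group} error rate: {err:.1%}")
```

Nothing in that sketch sets out to treat group B differently; the disparity falls out of the data imbalance alone, which is exactly the point: the people who assemble the training data and tune the thresholds decide who the system works for.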

This is how limited datasets, algorithms designed by people who don’t have to think about bias in their everyday lives, and police who just want an easier job turn AI and facial recognition into a nightmare for BIPOC and women. In fact, facial recognition can be 100 times more likely to misidentify a Black woman than a white man. To quote The New York Times, it’s as though BIPOC are in “a perpetual lineup.” With AI combing through photos from social networks, security cameras, driver’s licenses, and passports, you’re always one step away from being arrested for a crime you did not commit. Like the existing biases in policing, this disproportionately affects BIPOC, thanks to those same biases existing in the tech industry.

Tech is still dominated by white men. Managerial and senior engineering roles, especially, are held almost exclusively by white men. I’ve joked before that the difference between a senior engineer and an engineering manager is usually whether or not he’s married a nice woman and had a kid yet. Yes, I’m also implying they’re straight. I’ve been a senior engineer for years now, yet people still assume I’m a tester, an associate, or an entry-level engineer. That’s just one of many microaggressions women face, and there are many others that make moving up in the field extremely difficult for BIPOC. As a result, AI and decisions about AI are made by white men who think they lack bias, while they continue to perpetuate and benefit from systemic bias.

We’re Using AI Wrong

This risk assessment was wrong, and it damaged lives as a result: Prater went on to commit another crime; Borden did not.

It’s not just cases like Mr. Williams’. Facial recognition is only one concern. Even if we could remove the bias from facial recognition, it would still mean surveilling people at every step of their lives. It’s an end to privacy. But let’s pretend, for a moment, that there is a right way to use facial recognition. It would be only to augment human police work: to help narrow a case down, not to hand police their only suspect. Instead, officials consistently rely on AI alone to make decisions, rather than treating it as a tool that gives them more information about the decisions they should be making.

Chart: risk scores for Black defendants are spread evenly across the range, while white defendants are less likely to be rated as dangerous.

Take, for example, parole and sentencing. Judges and parole panels today often use risk assessment tools to help them decide how long a person’s sentence should be, based on the likelihood that they’ll commit another crime. The problem, as you may have guessed, is that these tools carry extreme bias. They consistently overestimate the likelihood of BIPOC reoffending while underestimating the likelihood of white people reoffending. The system is flawed, and numerous studies have shown it, but judges and parole boards keep using these tools, in part because leaning on a score spares them the discomfort of appearing racist themselves. Knowing the outcome is racist is just as bad as making a racist assertion yourself.
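To see what “overestimate” and “underestimate” mean in practice, here is a small, hedged sketch of the error rates usually cited in these analyses: the false positive rate (people flagged high risk who never reoffend) and the false negative rate (people rated low risk who do reoffend), computed per group. The counts below are invented purely to illustrate the calculation; they are not taken from any real study.

```python
# Hypothetical counts of defendants by group, risk label, and outcome.
# "flagged" means the tool rated them high risk; "reoffense" means they
# actually committed another crime. All numbers are made up for illustration.
data = {
    "Group 1": {"flagged_no_reoffense": 45, "not_flagged_no_reoffense": 55,
                "flagged_reoffense": 60, "not_flagged_reoffense": 40},
    "Group 2": {"flagged_no_reoffense": 23, "not_flagged_no_reoffense": 77,
                "flagged_reoffense": 52, "not_flagged_reoffense": 48},
}

for group, c in data.items():
    # False positive rate: of those who never reoffended,
    # what fraction were still labelled high risk?
    fpr = c["flagged_no_reoffense"] / (c["flagged_no_reoffense"] + c["not_flagged_no_reoffense"])
    # False negative rate: of those who did reoffend,
    # what fraction were labelled low risk?
    fnr = c["not_flagged_reoffense"] / (c["flagged_reoffense"] + c["not_flagged_reoffense"])
    print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

A tool can look accurate on average and still fail one group far more often in the direction that costs them their freedom; that asymmetry is what studies of these sentencing tools keep finding.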

Finally, Change?

This year has not been an easy one. But if any good can come out of it, it’s that people have become more aware of the racial bias in policing, the disproportionate use of force, the lack of oversight and training, and the overuse of the police; failures that, frankly, they should have been aware of a long time ago. Finally, enough people are asking for change that we may actually see it.

Detroit has taken a small step back from AI: police there will only use facial recognition for violent crimes now. Hopefully they’ll also add in actual police work, because without it this will just increase the average severity of the false positives. Rank One, maker of the software DataWorks Plus used to identify Williams, is changing its contracts so it can cut ties with customers that misuse the technology. Amazon is putting a one-year moratorium on police use of its facial recognition, hoping Congress enacts laws in that time to protect against bias, though a stronger measure would be cutting police off entirely, or at least until laws and demonstrated results actually protect people. Microsoft has called for regulation of AI and facial recognition in the past. IBM recently announced it is abandoning facial recognition and is also calling on Congress to regulate facial recognition and AI.

These are small steps forward, but what we need is ground-up reform. For those who have had their lives turned upside down by racism and bias, these measures aren’t close to enough.

Mr. Williams is free, but how many others falsely accused like him are not?


Sources: