Microsoft Calls for Laws Regulating AI Facial Recognition

A group of people with points on their faces identified and connected by lines, as used in facial recognition.

Soon, a simple camera could ID you anywhere. Image: Microsoft

A woman walks into a hospital. She closed her car door on her pinky, and now it’s swollen larger than her thumb. She fills out her forms in a sloppy scrawl and, after a wait that seems like forever, gets to see a doctor. They set the bone, fit her finger with a brace, and hand her a gigantic bill that will keep her in debt for years. But on her way out, an ICE agent grabs her, handcuffs her, and throws her in the back of a car. He says facial recognition identified her as an undocumented immigrant, and she’s going to be deported to Mexico.

But she swears they’ve misidentified her, that she was born here in the United States. She’s processed for deportation anyway.

Sound crazy? It’s not. Our overzealous immigration officers nearly deported a U.S. citizen, born in Philadelphia, PA, to Jamaica. The decision was based largely on his race.

Amazon’s Rekognition AI isn’t much better than overzealous, racially biased ICE officers. It’s more likely to produce a false positive on dark-skinned people and women than on anyone else. That’s due, in part, to Amazon’s own biases in its primarily white and male engineering department: they just didn’t think to train their software on faces with features or skin tones different from their own. In fact, Amazon’s Rekognition recently matched 28 members of Congress to mugshots. Despite the small number of people of color and women in Congress, those falsely flagged as former criminals by Amazon’s AI were disproportionately people of color and women.

It’s in this setting that Microsoft, one of the leaders in AI in the U.S., is pushing for greater regulation of our AI efforts. If we’re not careful, we could create systems that amplify existing racial biases. It looks like that has already happened.

What Regulation Would Look Like

There are two key ways we could improve our lives and processes with AI without making our systems worse for women and people of color. First, we need to properly regulate how the software is made, taking input from machine learning professionals and turning it into regulation. Second, we have to regulate how that AI is put into use. If using AI could put someone in danger, such as causing people to avoid hospitals and die in their homes instead, then that use is wrong.

Documentation

A Java method with unknown functionality and no comments explaining what it does.

It’s tough to get novice programmers to put comments in their code, let alone write documentation. But it’s necessary.

Microsoft’s president Brad Smith called for more ethical AI implementation. This was likely in response to China’s growing and questionable uses of the software, as well as the U.S. government’s unethical use of it to identify undocumented immigrants. Interestingly, he didn’t call for regulation of how AI is developed; he only said companies should have to provide documentation. Documentation would show others how a company developed its AI and allow them to critique it for flaws or bias. However, it doesn’t set a standard.

Documentation is a minimum requirement. Without it, the government could implement a system that is racially biased or biased against women without realizing what it has done. This is what’s happening now.

It's a joke. It's actually an image of Stimpy, from Ren and Stimpy. He's pressing a big red button before they all get zapped into nothingness.

Pictured: U.S. representative interacting with AI

Documentation allows for criticism and peer review. Furthermore, it allows other researchers to learn from a company’s successes and mistakes. Microsoft and Apple have already taken some of these steps, releasing white papers on their AI techniques and data anonymization methods. Documentation is vital. Not knowing how a system works is like wandering around a dangerous factory pushing random buttons. It’s like pushing a giant red button that says “Extremely dangerous, do not push,” then standing on a target. We must know what we’re implementing.
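What might that documentation look like in practice? Here’s a minimal sketch, loosely in the spirit of the “model card” idea researchers have proposed: a machine-readable record of what a model is for, what it was trained on, and how it performs for each demographic group. Every field name and value below is an illustrative assumption, not any company’s actual format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Documentation shipped alongside a trained model so outsiders
    can critique it. All field names here are hypothetical."""
    name: str
    intended_use: str           # what the model should (and shouldn't) be used for
    training_data: str          # where the data came from and who is in it
    groups_evaluated: list = field(default_factory=list)
    false_positive_rate_by_group: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

# A hypothetical card for a face matcher. The numbers are invented
# purely to show the shape of the record.
card = ModelCard(
    name="face-matcher-v1",
    intended_use="1:1 identity verification with human review; not law enforcement",
    training_data="Consented enrollment photos; demographic makeup documented",
    groups_evaluated=["darker-skinned women", "darker-skinned men",
                      "lighter-skinned women", "lighter-skinned men"],
    false_positive_rate_by_group={"darker-skinned women": 0.31,
                                  "lighter-skinned men": 0.01},
    known_limitations=["error rates differ sharply by skin tone and gender"],
)
print(card.false_positive_rate_by_group)
```

A card like this wouldn’t fix a biased model, but it would make the bias visible to legislators and peer reviewers before anyone deploys it.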

Restricting How AI is Made

Internet Police bursting in on a woman on her computer

No, not like this. Artist: Matt Bors + Lubchansky

However, I think we need to take it a step further. I believe we need to regulate how we write AI and, more specifically, how we train our machine learning models. Now, obviously, I’m not advocating that writing AI be made illegal unless it’s done under strict circumstances. No “AI Police” here. Instead, our government should release guidelines for ethical AI model design and training, working closely with a diverse group of researchers. They would come together and write guidelines for ensuring that AI is unbiased and safe. The resulting regulation would include steps like:

  • Training the model with a diverse group of people
  • Testing the model’s inputs and results with a diverse group of people
  • Properly documenting how the AI works
  • Employing a diverse group of developers, or diverse reviewers to oversee and test their work
  • Requiring peer review

Professionals and legislators should be able to understand exactly how the technology was made so they can be sure it doesn’t include bias. If we skip any steps in that process, we leave gaps where human bias can slip in. AI magnifies our own biases; it doesn’t remove them. That’s why we need to be extremely careful when creating it. A sketch of what one of these tests could look like follows.
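To make the second bullet concrete, here’s a minimal sketch of a per-group error audit for a face matcher. The threshold, group labels, and data are all assumptions invented for illustration; the point is only that measuring false positive rates separately for each group is a simple, checkable requirement.

```python
from collections import defaultdict

def false_positive_rate_by_group(scores, is_same_person, groups, threshold=0.8):
    """For each demographic group, what fraction of truly-different
    photo pairs did the matcher wrongly declare a match?"""
    false_pos = defaultdict(int)  # wrongly declared matches, per group
    negatives = defaultdict(int)  # pairs that really are different people, per group
    for score, same, group in zip(scores, is_same_person, groups):
        if not same:
            negatives[group] += 1
            if score >= threshold:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Hypothetical audit data: every pair below is two different people,
# so any declared match is a false positive.
rates = false_positive_rate_by_group(
    scores=[0.91, 0.42, 0.88, 0.85, 0.30, 0.55],
    is_same_person=[False] * 6,
    groups=["A", "A", "A", "B", "B", "B"],
)
for group, rate in sorted(rates.items()):
    print(f"group {group}: false positive rate {rate:.0%}")
# group A: 67%, group B: 33% -- a gap like this should block deployment.
```

An auditor or regulator doesn’t need to understand the model’s internals to run a test like this; they only need labeled test data that actually includes everyone.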

These restrictions would only come into play when the AI is implemented. If it’s for a government contract, it would require peer review, proper documentation, and the steps above to ensure its creators had diversity in mind when writing the software. Other companies would likely follow the guidelines even if we don’t require it by law. But this only ensures that the AI we create doesn’t have racial or gender biases; it doesn’t ensure that our government would use AI responsibly. That’s why we need to regulate both how we make AI and how we use it.

Regulating AI’s Use

Bender from Futurama. He's in a court room, admitting to committing crimes.

Will I use Bender in every AI article? If I can help it!

In science fiction, AI is given rules to follow. For example, Asimov laid out three famous laws that, while not without their faults, lay the groundwork for what AI should and should not do.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Obviously, AI doesn’t yet have free will and isn’t at the point where it could violate any of those laws on its own. We, however, as far as we can tell, do have free will. We can ensure that the AI we create won’t hurt people or allow harm to come to them, and that it functions only as the user intends. Yet we’re already misusing AI.

New Rules

The laws we make for AI have to govern people and how we use AI. A set of laws for human use of AI might look like this:

  1. AI cannot be used to harm someone or put them in danger. This includes using AI to influence human behavior toward self-harm.
  2. A person cannot use AI to subvert someone’s free will or the democratic process.
  3. AI, if it’s capable of alerting a person to a dangerous situation for themselves or someone else, must do so.
  4. A person or group cannot use AI to discriminate.
  5. People must understand the AI they use.

Take the hospital situation I mentioned at the beginning of this post. If you knew that ICE would deport or detain you, even as a U.S. citizen, would you be less likely to go to the hospital? That use of AI would violate the first rule: it puts people in danger by scaring them away from vital services. In some ways, it violates the second rule as well.

The third rule would mean that something like a self-driving car should not automatically make a decision when faced with a trolley-problem scenario. The software should alert its user to the danger, especially since the user may be able to avert it.

The fourth rule is obvious. We cannot allow people to use AI to make racist, sexist, or homophobic decisions. AI could, for example, scan a large stack of résumés and flag whether each one most likely came from a man or a woman, for the sole purpose of discrimination.

Finally, people need to know how the AI they use makes decisions. They need to know how and when it could make mistakes so they can avoid those situations or question the AI when it makes a decision in one of those gray areas.
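Here’s one toy illustration of what “understanding the AI you use” can mean. For a simple linear model, a user can read off exactly how much each input pushed the decision; the features, weights, and numbers below are all invented for the example and don’t reflect any real risk tool.

```python
# A toy linear "risk" model whose decision can be fully explained:
# each feature contributes (weight x value) to the final score.
feature_names = ["prior_arrests", "age", "missed_court_dates"]
weights = [0.9, -0.05, 0.7]    # hypothetical learned coefficients
bias = -1.0

person = [3, 25, 2]            # one individual's (invented) feature values

contributions = {name: w * x
                 for name, w, x in zip(feature_names, weights, person)}
score = bias + sum(contributions.values())

print(f"raw score: {score:.2f} -> flagged: {score > 0}")
for name, c in contributions.items():
    print(f"  {name:>20}: {c:+.2f}")
```

A person shown this breakdown can see which factors drove a decision and challenge one that leans on a proxy for race or gender; a black-box score offers no such recourse.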

Why We Need Regulation

Risk assessment for white and black people. AI often contains racial biases

This risk assessment was wrong, and damaged lives as a result

We’re already not careful enough with our AI, yet we’re using it anyway. Facial recognition is especially dangerous. Because it’s far more likely to misidentify women and people of color, groups already at a disadvantage, it harms the very people already in danger of profiling and discrimination. Another example is predictive policing, which falls into the trap of sending officers to traditionally black neighborhoods more often than traditionally white ones, echoing the patterns left behind by redlining. Amazon is using its AI to decide where to offer one-day shipping; unsurprisingly, it carries the same racial biases of the past, compounded. People are spending more time in jail simply because of the bias in software used to predict whether or not they’ll commit another crime.

We’re setting ourselves up for failure by developing AI without thinking about how it functions. We’re not taking the necessary steps to ensure it remains morally sound. Developers aren’t doing enough to ensure it doesn’t discriminate, in part because there’s a dearth of women and people of color working on these projects. Regulation around AI needs to cover both how we make it and how we use it. Otherwise, we’re going to compound and multiply the racism, sexism, and discrimination of our past, and engrave it into our future.


Source: Liam Tung, ZDNet