
Google Fired Yet Another Ethics in AI Researcher

Reading Time: 7 minutes.

In December, Google fired AI ethics researcher Dr. Timnit Gebru. She had questioned racial bias in Google’s applications of AI, as well as bias against poorer communities and the extreme electricity usage that comes with inefficient data gathering. Coworkers stated that Google didn’t handle the situation well and relied on racial stereotypes when attempting to justify her firing. Since then, Google has admitted that they need to make changes, but they have not admitted fault or apologized to Dr. Gebru.

“For the record: Dr. Gebru has been treated completely inappropriately, with intense disrespect, and she deserves an apology.”

– Margaret Mitchell, former Google employee

Dr. Gebru wasn’t alone at Google. She co-led the ethical AI team with Margaret Mitchell. Mitchell didn’t believe Google had rightfully fired her coworker, so she went through her email to compile evidence of Gebru’s mistreatment. Google locked her out of her accounts for a month while they completed a review. They’ve now fired her. She didn’t “strike a positive tone,” I suppose.

It seems glaringly obvious that Google’s ethical AI department isn’t about ethics, but about trying to whitewash Google’s unethical behavior. It’s not working out well.

Compiling Evidence

“Please know I tried to use my position to raise concerns to Google about race and gender inequity, and to speak up about Google’s deeply problematic firing of Dr. Gebru. To now be fired has been devastating. It is my hope that speaking out will lead to one more step on the path of ethical AI.”

– Margaret Mitchell, on her firing

Office Politicking

If a coworker from another department is pushing you to do something that may be bad for the consumer, or for the security or reliability of your product, CC your manager on communications. Whoever is bothering you will quickly change their tune when they realize they can’t shift the blame onto you if something goes wrong. Always make sure these questionable discussions happen in written form, for proof. That goes double if you’re a woman or a minority in a field like engineering. Studies show women catch the blame for mistakes more often than men, so be sure to protect yourself.

Women know to do this. They know that they’ll be the last to receive praise and the first to be blamed. That’s why having proof of communications is so important. By roping in a manager or HR, they ensure there are witnesses. This reduces the chance of problems and stockpiles evidence in case there is one.

If the problem is coming from your manager, try to rope relevant parties into the discussion. If it’s a people issue, talk to HR.

But what if the problem is coming from HR and your manager? Then you compile evidence to defend against firing and to support a wrongful termination lawsuit. That’s what Margaret Mitchell was doing. If her coworker’s head was on the chopping block for doing her job, then she had a personal obligation to help her colleague and potentially defend herself.

But then Google fired her for compiling information.

The irony of Google firing someone for collecting information is not lost on me.

Data Policies

Here’s a catch-22. You need evidence for a lawsuit, or at least to help someone get severance and an apology. You need that evidence to protect yourself. But the party offering to protect your data is the same party you need to protect yourself against. They offer to store all your evidence in their safe, a safe they can lock you out of at any time.

Do you take the offer?

No. Obviously.

But that’s often what data policies at companies do. They state that you cannot forward any communication from your corporate email to your personal email or other forms of personal storage. If you don’t preserve copies, though, it can be difficult to build a case. It can also be difficult to get those emails later via subpoena, especially if the company has deleted them.

It’s a catch-22; there’s no way out. If you compile evidence, your employer may terminate you for doing so. If you don’t, you may have no defense later.

Filtering Emails

“After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”

– Google on the firing of Margaret Mitchell

Have you ever had a problem at work and known an old email had the answer you needed? Maybe it was proof that someone asked you to do something against your better judgement, or in spite of your protests. Maybe it was just an important link, or the details of an API response. Then you’ve come dangerously close to what Margaret Mitchell supposedly did.

Mitchell used a script to compile emails that were potentially relevant to Dr. Gebru’s firing. She compiled these emails to use as proof of Google’s wrongdoing, something Google itself was also supposedly investigating. According to Google—in contradiction of Mitchell’s statements—Mitchell sent copies of those emails to herself for storage. It’s the catch-22. Do you rely on the company that you know is guilty of wrongdoing and hiding evidence to hold the evidence you compiled, or do you back it up?

According to Google, Mitchell preserved those emails, so they fired her.
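To make the dilemma concrete: compiling emails like this doesn’t take anything exotic. Below is a minimal sketch of the kind of keyword filtering such a script might do. It’s purely illustrative, not Mitchell’s actual script, and it assumes you have a local mbox export of your mailbox and some hypothetical search terms.

```python
# Illustrative sketch only; not Mitchell's actual script.
# Assumes a local mbox export of your mailbox (e.g., from a takeout/export tool).
import mailbox

KEYWORDS = {"gebru", "review", "termination"}  # hypothetical search terms


def find_relevant(mbox_path):
    """Yield (date, sender, subject) for messages whose subject matches a keyword."""
    for msg in mailbox.mbox(mbox_path):
        subject = (msg["subject"] or "").lower()
        if any(keyword in subject for keyword in KEYWORDS):
            yield msg["date"], msg["from"], msg["subject"]


if __name__ == "__main__":
    for date, sender, subject in find_relevant("export.mbox"):
        print(f"{date} | {sender} | {subject}")
```

The hard part isn’t the script; it’s what your employer’s data policy lets you do with the results.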

Google Acts as Though It Knows It Was Wrong

After firing Timnit Gebru, Google did their own internal investigation. Surprisingly, they decided to change policies, though they stopped short of admitting any wrongdoing or apologizing to Dr. Gebru. They still went on to fire Margaret Mitchell for compiling her own evidence of wrongdoing.

In the future, Google will tie the pay of VPs to their departments’ progress on diversity and inclusion goals. If they can’t increase diversity, their pay will suffer. Though, knowing tech companies, this will likely apply only to bonuses. Google hasn’t said how much of those salaries or bonuses will depend on diversity.

Google will also streamline their process for publishing research. The paper Dr. Gebru was involved with had been submitted months in advance and had seemingly received approval. A streamlined process wouldn’t help employees do better research, but at least they’d find out they can’t publish something before it’s already been published, rather than after.

Finally, Google will increase staffing for employee retention, with new procedures around “potentially sensitive employee exits.” This could prevent some exits, but it likely won’t free employees up to do their own research unimpeded either. Google’s changes will mostly help the company cover its bases, but they still could have prevented the situation that led to Dr. Gebru’s firing.

We Need a New AI Approach

“…the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor—those who’ve borne the brunt of structural discrimination. Here we have a clear racialized divide between those benefiting—the corporations and the primarily white male researchers and developers—and those most likely to be harmed.”

– Alex Hanna and Meredith Whittaker, for Wired

AI is already broken. Much of what you hear from scientists takes the form, “We have to do X or something else will get worse.” “We must wear masks and socially distance ourselves, or this pandemic will last for over a year and claim millions of lives!” “We must cut carbon emissions, or climate change will send humanity into wars, food scarcity, poverty, and destruction like we have never witnessed before!” You know, that sort of warning. Well, computer scientists have a different message: the bad AI is already here.

Fired, Hired, Arrested, Killed… by AI

HireVue used AI to comb through videos of job applicants. They dropped that feature after a third-party audit found problems. What kind of problems? Well, facial recognition isn’t very good at detecting the faces of people with darker skin, or of women. Autistic people don’t always display the same facial expressions of emotion as neurotypical people. People with accents may not be heard properly. You, a human (presumably), can likely extrapolate the issues that come from an AI with those flaws. You can see why it shouldn’t be used in hiring. However, the engineers who made it weren’t aware of the flaws in the software they were creating, or of how those flaws could exacerbate existing societal issues.

It’s hardly the only case. Facial recognition has led to the arrest, and even deportation, of wrongly accused suspects. People have faced longer sentences without parole because an AI with racial bias determined that to be fair.

The bad AI is here. Now we have to figure out how to roll it back.

Audits and Regulations

Before we put drugs out in the world, they’re tested by a government body and vetted by peer-reviewed papers, trials, and studies. Food is tested to maintain basic safety levels. Cars have to pass basic crash safety requirements before going on sale. We have multiple regulatory bodies to ensure the safety of products, drugs, food, and more. For AI, despite the fact that it can be far more dangerous, we do nothing.

If you’re a vegetarian, you can avoid problematic practices in the meat industry. Don’t drive? The airbags on a new car mean nothing to you. But with AI, you can’t escape. You could be arrested and nearly convicted for a crime you didn’t commit or get turned away from a job. Your cancer could be misdiagnosed or your doses administered improperly. You can’t escape bad AI. That’s why we must regulate AI.

The government could require auditing companies to review any AI used for facial recognition, employment, medicine, criminal justice, financial services, and other purposes. This auditing would be funded by taxes on the companies creating the software and carried out by independent parties who would have to test certain aspects and meet certain standards. They’d have to measure the output for defined inputs and ensure the outputs do not carry bias. They’d have to interview the people making the software, docking points if they did not account for potential biases or did not have a diverse staff.
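As a toy illustration of what “measure the output for defined inputs” could look like, here’s a minimal sketch, entirely my own and not drawn from any proposed standard, that computes per-group selection rates from a model’s decisions and checks the ratio between the lowest and highest rate (the informal “four-fifths” heuristic used in employment contexts).

```python
# Toy illustration of one check an auditor might run; not from any real audit
# standard. Computes per-group selection rates and the ratio between the
# lowest and highest rate (the informal "four-fifths rule" heuristic).
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}


def disparate_impact_ratio(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())


# Hypothetical model decisions: (demographic group, did the applicant advance?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))         # roughly {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(sample))  # 0.5, which fails the 4/5 heuristic
```

A real audit would need far more than one ratio, but even this much requires access to the model’s decisions and demographic data, which is exactly what independent auditors would need the authority to demand.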

AI experts like Dr. Timnit Gebru, Meredith Whittaker, and Margaret Mitchell would be responsible for drafting the legislation. The exact kind of critical experts that Google’s afraid of because of what they find.

Unions

Meredith Whittaker was forced out of her role at Google for her AI ethics work outside the company and for being one of the organizers of the Google Walkout, which protested Google’s rewarding of sexual abusers. Google’s been harsh toward any employees who criticize their practices, especially those in AI, and especially, it would seem, toward women. Now three of the forced departures from AI ethics work have been women.

Unions allow employees to speak up without worrying about whether they’ll lose their jobs for doing so. This not only protects employees, it helps ensure the products they create won’t do harm. If employees do not fear telling managers or other employees about their concerns over the ramifications of a piece of tech, they can prevent the release of software that could do harm, like reinforcing racism and sexism.

Tech employees have also wanted unions to protect themselves. The tech industry often engages in “crunch time,” forcing employees to work long hours for weeks on end. The game industry is especially guilty of this. Unions protect employees. They allow employees to speak up about problems they want to solve without undue repercussions. In turn, they can protect the public.

Social Consciousness

We can’t act unless lawmakers know what needs to be done and how important it is to do it. They’re not going to get that urgency from experts alone. Experts have warned that COVID restrictions were too light, that we’re not doing enough to combat climate change, that we need universal healthcare, and that AI carries racial and sexist bias, among other biases. Politicians have done little to nothing on these fronts, because they respond to public outcry.

What we need is public awareness. We need a social movement. We need to make the problems in AI easy to understand and digest so non-technical people will call up their senators and ask why they haven’t introduced legislation to audit and regulate AI.

I just don’t know how to spread the word, do you?

