Google’s Researchers Forced to Make Google Look Good

Google employees protest with raised fists and signs reading “Not OK Google #DontBeEvil,” “My red lipstick at work isn’t for you to ‘sexy’ comment on!,” “Equal rights at Google!,” “Time’s Up Tech,” and “Don’t Be Evil.” Photo: Noah Berger/AP Photo

“Don’t shoot the messenger.” It’s a saying as old as, well, messengers. Let’s consider why. Pretend you’re the Queen, King, or Themperor of your empire. A messenger comes from the front line with news: if you don’t retreat, your army will be crushed and you’ll have no way of regrouping. You kill the messenger for bringing you bad news. The next messenger who comes back from the front line, remembering the fate of the last one, tells you everything is perfect. We defeated the invading army. Mars is ours (oh, did you not realize this was a space fantasy?). Well, that messenger then packed their bags and threw on the enemy’s uniform because, whoops, they lied to save their own neck. Mars is not yours. You’re about to die.

If you refuse to hear bad news, it’ll come to you eventually. Usually fatally.

This is where Google finds itself now.

They continue to fire researchers like Dr. Timnit Gebru, who bring Google bad news. It’s now official policy: don’t talk about Google’s AI problems, even when those problems could spell disaster for all of humanity.

You can likely guess where that’s going, right?

Researchers at Google have to choose between doing important research that could push Google to change course and cost the company money, or losing their jobs, often in a public way that makes the next job search harder.

Once all the messengers have been killed, we’re next.

“Strike a Positive Tone”

Employees walking out of Google’s campus in California, protesting in front of the large Google sign on the building. Photo: Mason Trinca/Getty Images

Google engineers spoke with Reuters about a new policy Google started enforcing in June. The policy is vague and sporadically enforced, but the gist is simple: if a paper touches a “sensitive topic” or could cast a technology or industry that Google’s involved in, in a negative light, the paper may undergo Google-forced revisions. The company may also ask that the paper not be published at all.

At least three researchers claim Google told them not to paint Google’s tech in a negative light. One paper was forced to point out that recommendation algorithms could help increase “accurate information, fairness, and diversity of content.” The paper’s actual topic was how these technologies aren’t doing that in practice and are instead spreading “disinformation, discriminatory or otherwise unfair results,” as well as “insufficient diversity of content” and “political polarization.” YouTube, for example, frequently sends people down a rabbit hole of increasingly alt-right content, seeded by innocuous-seeming videos that hide a “red pill” meant to pull viewers in.

Open-Ended Censorship

A list of example “sensitive topics” includes “Oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms, and systems that recommend or personalize web content.” These are topics that could get Google in trouble with investors or, in the case of systems that recommend or personalize web content, implicate Google’s own search results, YouTube suggestions, and ads. They cast a wide net. The rule of thumb is this: if your research would ask Google to change a harmful practice, Google won’t approve it.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship.”

– Margaret Mitchell, senior scientist at Google

Margaret Mitchell was a member of Dr. Gebru’s team, whose role is to investigate ethical issues in AI. Since AI, as practiced today, carries racial, ideological, religious, sexist, and anti-LGBTQ bias, companies need to change. Instead, researchers say they could be facing “a serious problem of censorship” at Google. Those who did go through the review process said it was long and arduous, frequently taking over 100 emails to get a paper into a condition Google was willing to allow for publication. Often, that meant the core argument of the paper was dismantled, or Google employees had to remove their names from the paper, something that could get authors in legal trouble.

Google is not only censoring their own employees, but also the people who work with Google employees on research.

Losing Diverse Voices

A study found that diverse teams are less likely to create biased AI algorithms, because a diverse team brings a wider range of experience to the work. Minorities in tech, especially, understand the importance of eliminating bias from their code. The people who are most frequently the victims of bad systematic practices and bad AI are more likely to notice and address those issues. They’re also the researchers most likely to speak up to their employer about problematic applications of AI.

Google, by constantly playing whack-an-engineer with their research teams, is picking off the people who speak up. It’s a system that, once again, disproportionately hurts minorities in tech: women, people of color, immigrants, religious minorities, people whose first language isn’t English, and others. Simply put, it ensures that the people least able to find problems in AI are the only ones who get to keep working in AI.

After retaliating against the Walkout for Change organizers, and now with the firing of Dr. Gebru, Google has cleared female and Black voices out of a team that already had too few women and Black employees. They’ve narrowed their team’s views down to only those with “uncontroversial” findings.

Google’s ensuring only the most profitable ideas bubble to the top, without regard to who’s hurt by those decisions.

What’s The Worst That Could Happen?

Risk scores: Vernon Prater, white, rated low risk (3); Brisha Borden, Black, rated high risk (8). Brisha had four juvenile misdemeanors and never committed another crime. Vernon had two armed robberies and one attempted armed robbery, and later committed grand theft.

I am so glad you asked. Science fiction gives us plenty of examples. There’s Skynet, of Terminator fame: an AI that destroys humanity, forcing us to rely on time travel over and over, which only creates alternate realities that might be slightly better than the one each doomed Terminator came from. Really, it just pushes back the inevitable, as long as AI research doesn’t consider ethics. There are the dangerous machines in Horizon Zero Dawn (don’t worry, I won’t spoil it, but go buy it for PS4 or PC now). HAL 9000 in 2001: A Space Odyssey. GLaDOS in Portal. VIKI in I, Robot (sorry, spoilers). And many, many more.

But those are fiction, all set far in the future, where AI can reason about logical and ethical problems and has the power to act on its conclusions. That’s too far off to consider realistically right now. What could happen now, or by 2030?

Well, we’re already seeing problems today. Deploying facial recognition while ignoring its racial bias has led to false arrests and even false deportations. That alone could push people, mostly people of color and especially women of color, to spend less time in areas that use facial recognition, furthering systematic racism. Google fired Dr. Gebru over a paper discussing the energy usage of AI and, more importantly, the discriminatory harms that come from training large models on data gobbled up from all over the internet indiscriminately and without weighting. You know, exactly what Google does. It can disenfranchise some groups and empower others. It can also hand a greater voice to fascists, as Microsoft discovered with its Tay chatbot. Bad AI is often used in sentencing and parole decisions, giving white people shorter sentences and better odds of parole than any other group.
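To make that mechanism concrete, here’s a minimal, purely illustrative sketch in Python. It is not any real sentencing tool; the group names and numbers are invented. It shows how a “risk model” fit to historically over-policed data simply echoes the policing disparity back as an objective-looking score.

```python
# Hypothetical, minimal sketch -- not any real risk-assessment system.
# The records and numbers below are invented to show one mechanism:
# a model fit to skewed historical data reproduces the skew as "prediction."

from collections import Counter

# Invented historical records: (group, re_arrested). Group "B" is over-policed,
# so it is over-represented in re-arrest records regardless of behavior.
history = (
    [("A", False)] * 80 + [("A", True)] * 20 +
    [("B", False)] * 80 + [("B", True)] * 60
)

def fit_base_rates(records):
    """'Train' by memorizing the re-arrest rate per group -- effectively what a
    model does when group membership correlates strongly with the label."""
    totals, positives = Counter(), Counter()
    for group, rearrested in records:
        totals[group] += 1
        positives[group] += rearrested  # True counts as 1, False as 0
    return {group: positives[group] / totals[group] for group in totals}

def risk_score(model, group):
    # The "prediction" is just the biased base rate, scaled to a 1-10 style score.
    return round(10 * model[group])

model = fit_base_rates(history)
print(risk_score(model, "A"))  # 2 -- lower score, from the less-policed group
print(risk_score(model, "B"))  # 4 -- higher score, driven by policing, not behavior
```

The model never sees behavior, only records shaped by who got policed, so the bias in the records becomes the prediction.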

The Future is Now

Tay’s tweets calling for attacks on women and Jewish people. Tay turned to hate very quickly, and these were among the tamer examples.

The worst is already happening. We’re amplifying humanity’s biases, hate, and bigotry, and unchecked AI will make issues like sexism and systematic racism even worse. This is why we need AI researchers to document these problems and help us course-correct.

We’re shooting the messenger, and that messenger is trying to tell us that the economic and social disparity between privileged and less fortunate groups is growing as a result of our AI. If we do nothing, humanity will become less equal and more divided. Look around. Do you see that already happening? I wonder what’s to blame. Certainly not the algorithms that decide which news articles to show you on Facebook, Twitter, or Google, right?

If you want to see the future of unchecked AI, you can look at the present. Plague, war, fascist and authoritarian leaders, climate change and catastrophe, genocide, systematic racism, sexism, and more. The world is already being pulled apart by AI that hasn’t been properly studied. We require vetting and peer review for vaccines to save lives, but AI systems that control our lives, our politics, our well-being, our jobs, our safety, and our justice system? That’s all proprietary, unchecked, and secret.

AI needs to be open, peer-reviewed, and transparent. We can’t trust those making money off of division to have our best interests at heart. We’re watching our kings shoot the messengers, and if we don’t speak up now, no one will be left to save us from an already worsening future. AI isn’t necessarily bad when it’s well-researched, peer-reviewed, and developed by diverse teams. But the companies profiting from it don’t want to take the steps that would help humanity at the cost of their profits.

The future? Division, death, plague, unhinged leaders, and rich corporations. You know, exactly what we already have, but worse.

