A video uploaded by The Daily Mail showed a white man harassing and calling the police on a group of Black men. It captured a disturbing trend of white people using the known biases of police to escalate confrontations and threaten Black people.
The subject of the video is troubling enough, but Facebook made matters far worse. Facebook uses AI to identify what’s in photos and videos. For posts containing text, this is quite handy, powering optical character recognition and alt text for images. For most photos and videos, it lets Facebook tag the content for search and advertising purposes. Post a photo of some Lego? Expect ads for Lego. Facebook applies these tags using AI, and AI can be sorely mistaken. Too often, its mistakes take the form of racist assumptions.
Um. This “keep seeing” prompt is unacceptable, @Facebook. And despite the video being more than a year old, a friend got this prompt yesterday. Friends at FB, please escalate. This is egregious. pic.twitter.com/vEHdnvF8ui
— Darci Groves (@tweetsbydarci) September 2, 2021
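For context, here’s roughly how this kind of automated image tagging works in general: a classifier turns an image into a ranked list of labels with confidence scores. This is a minimal sketch using an off-the-shelf pretrained model (torchvision’s ResNet-50 trained on ImageNet); the file name is hypothetical, and none of this is Facebook’s actual system.

```python
# Illustrative sketch of generic image auto-tagging.
# Not Facebook's actual tagging system.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# The weights bundle the preprocessing the model expects (resize, crop, normalize).
preprocess = weights.transforms()

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
probs = torch.nn.functional.softmax(logits[0], dim=0)

# The top-ranked labels become the photo's "tags."
top5 = torch.topk(probs, 5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx]}: {score:.2%}")
```

A tag like “Lego” for ad targeting comes out of exactly this kind of ranked label list; so, in the worst case, does a dehumanizing label applied to a photo of a person.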
In the case of The Daily Mail’s video, Facebook applied a “primate” label to the Black men. This isn’t the first time something like this has happened, either: Google previously did the same, labeling photos of Black people as “gorillas.” In fact, rather than actually fix its racist algorithm, Google simply blocked searches for gorillas. It’s not certain how many other videos of Black people on Facebook have been labeled this way, but if the label was applied automatically here, it has likely been applied to many other videos featuring Black people.
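To be clear about how shallow that “fix” is, here’s a sketch of what a label blocklist amounts to. This is purely illustrative, not Google’s or Facebook’s actual code; the label set and function are invented for the example.

```python
# Illustrative sketch of a label-blocklist "fix": suppress the offending
# output instead of retraining the model. Not any company's real code.
SUPPRESSED_LABELS = {"gorilla", "chimpanzee", "monkey", "primate"}

def filter_labels(predictions: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Drop blocklisted labels from a model's ranked predictions."""
    return [(label, score) for label, score in predictions
            if label.lower() not in SUPPRESSED_LABELS]

print(filter_labels([("gorilla", 0.91), ("person", 0.72)]))
# -> [('person', 0.72)]
```

The model still makes the same misidentification internally; the blocklist only hides the symptom.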
AI extrapolates from a racist data set: human behavior, produced within a racist system. But that’s not the only issue at play here. The real problem is that no one at Facebook, despite Google making the exact same blunder years before, thought to prevent this. Perhaps that’s because only around 2% of Facebook’s technical employees are Black, and even fewer hold technical leadership roles.
The Real Pipeline Problem
When confronted with abysmal diversity numbers at a company, recruiters and hiring managers often whine about the so-called “pipeline problem”: the idea that there just aren’t enough candidates from certain demographics. However, most companies making this complaint are using the same tired old recruitment tactics. If you always start your pipeline in the same place, it will keep turning up the same results.
But the real pipeline problem doesn’t stop at hiring. It runs from a workforce made up of the same demographics, to software that doesn’t account for every demographic, to a product that ends up racist or sexist.
Racist culture leads to racist software. That’s the real pipeline problem.
Facebook’s Race Issue
Black employees have complained before about their treatment at the company, as well as about its lack of diversity. As of 2021, only 2.1% of Facebook’s technical employees are Black. That’s abysmal even by the industry’s low standards. Even Apple, which has its own diversity problems, reports that 6% of its employees are Black. Facebook has an office in NYC, where nearly a quarter of the population is Black or African American. The pipeline problem isn’t the issue; Facebook is.
Even in leadership organization-wide, not just technical leadership, only 4.7% of roles are held by Black employees. And over the past seven years, Facebook has increased its share of Black employees by just 1.1 percentage points.
The problem is that the only people with a say in AI development end up being those who don’t understand the problems AI creates for marginalized communities. Someone who never has to worry about being misidentified by AI isn’t going to realize that it’s a potential problem for other people.
Facebook has allowed racism to take hold on its platform. The January 6th insurrection grew on Facebook. Islamophobic hate, spread through Facebook posts and groups, fueled genocide in Myanmar. The platform is rife with issues like these, and they stem from a lousy culture.
What Facebook and Everyone Else in AI Has to Figure Out
AI. Is. Racist.
Not inherently, no. But if you train AI on human patterns, and those human patterns carry bias rooted in historic and systemic racism, guess what? The AI is going to be racist. The only way to stop that from happening is to have a diverse team that can recognize and call out those patterns, stopping them from harming others. The problem? Facebook has a bit too much of that systemic racism built into its own system. Most tech companies do.
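As a toy illustration of that point, consider training a model on historical decisions that carry a built-in penalty against one group. The data, feature names, and scenario below are entirely synthetic, invented for the example:

```python
# Toy demonstration with synthetic data: a model trained on biased
# historical decisions reproduces that bias. All names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

qualification = rng.normal(size=n)   # equally distributed across groups
group = rng.integers(0, 2, size=n)   # 0 = majority, 1 = marginalized

# Historical "hiring" decisions: driven by qualification, but with a
# built-in penalty against group 1 -- this is the bias in the data.
hired = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on the biased labels.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates, differing only in group:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate scores far lower: the model learned the
# historical penalty, not anything about merit.
```

Note that simply dropping the group column rarely solves this in practice, because other features often act as proxies for group membership; recognizing that takes people who know to look for it.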
Facebook joins the shameful ranks of companies like Google, which made the same racist misidentification. People in the software industry like to paint themselves as smart people and quick learners. The data suggests otherwise.
Sources:
- Apple Diversity and Inclusion
- Facebook Diversity and Inclusion
- Dustin Jones, NPR
- M. Moon, Engadget
- James Vincent, The Verge
- World Population Review