So what do you do when it isn’t one death, but hundreds, perhaps even thousands? What do you do when it’s 700,000 displaced or murdered people? What’s the negligent homicide version of genocide?
Asking for a Meta employee.
Since 2012, civil society groups have warned Meta, then known simply as Facebook, that the company's platform was pushing Myanmar toward violence, with genocide as the likely end point. Facebook didn't do enough to limit hateful content. In 2014, violence broke out between Buddhists and Muslims in Mandalay, driven by Islamophobic hate speech on Facebook. The government temporarily shut Facebook down, but it couldn't be shut down permanently: Facebook is the internet in Myanmar. When it came back, Facebook made small changes that actually made the situation worse. According to the U.N., and now Amnesty International, a constant flood of dehumanizing content on Facebook contributed to the 2017 genocide of the Rohingya people.
Facebook knew their content, suggestions, and lack of moderation would lead to more violence. Employees within Facebook desperately tried to change the company's direction, but leadership refused to make the necessary changes. In 2017, partially as a result of that negligence, hate overtook Myanmar, and 700,000 people were displaced or killed.
What do you do when you find out you've contributed to genocide? Change the policies that led to it? Compensate survivors and help them rebuild their lives? If you're Meta (a.k.a. Facebook), you pretend you owe the victims nothing.
Thanks to the Facebook Papers, Amnesty International was able to take a closer look at Facebook's role in the Rohingya genocide in Myanmar. The picture they paint is of a company that had ample time to change but chose not to. Now 700,000 people are displaced or dead, and, five years after the brutal military and vigilante action that brought pain, suffering, and death to the Rohingya people, Amnesty International is pushing for change. Meta has to change. Meta owes a debt to humanity, and they're not going to pay it unless someone makes them.
Facebook’s Role
The U.N.'s Independent International Fact-Finding Mission on Myanmar (IIFFMM) has suggested that senior military officials in Myanmar should be investigated for "war crimes, crimes against humanity, and genocide." The IIFFMM also accused Meta, then just "Facebook," of playing a role in those crimes, adding that "[t]he extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined." It was in response to that call for further investigation that Amnesty International produced its report on Facebook, titled "The Social Atrocity: Meta and the Right to Remedy for the Rohingya." Amnesty found that not only did Facebook play a role in the genocide, it had ample warning to stop boosting the very hate that led to those crimes against humanity.
Shaping the Narrative
“Meta’s content-shaping algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya”
– Amnesty International Report
Terrible people exist. They just do. There will always be people who are motivated by fear, hate, "othering," and the violence they believe will protect them from their fears. Hate is, unfortunately, part of the human condition. While tribalism may have helped early humans protect resources, it was community-building and trust that helped us thrive. Unfortunately, that vestigial distrust and hate remains. We don't need to boost it. Yet that's exactly what Facebook does.
“Ultimately, this happened because Meta’s business model, based on invasive profiling and targeted advertising, fuels the spread of harmful content, including incitement to violence.”
– Amnesty International Report
Facebook boosts posts that are more likely to get engagement. If a user engages with a post, they spend more time on Facebook, and more time on Facebook means more ad views, longer ad views, more repeat visits, and more revenue. So when Facebook's algorithm found that angry reactions led to more engagement, it boosted the posts that drew them. Those posts were mostly inflammatory, hateful content. People hit the angry react because the post confirmed their hate and fears, or because they were pissed to see hate on their news feed. Either way, it counted as engagement, and Facebook boosted it.
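To make that mechanic concrete, here is a minimal, purely illustrative sketch in Python. It is not Meta's actual ranking code; the weights, the scoring function, and the sample posts are all hypothetical, though reporting on the leaked documents did describe anger reactions being weighted several times more heavily than likes.

```python
# Purely illustrative sketch -- not Meta's actual ranking code.
# Assumption: a scoring rule in which an "angry" reaction counts for
# several times more than a like, and nothing about the post's actual
# content is considered.

REACTION_WEIGHTS = {
    "like": 1,
    "love": 1,
    "angry": 5,  # hypothetical weight: anger counted ~5x a like
}

def engagement_score(post):
    """Score a post purely by weighted reactions, ignoring what it says."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in post["reactions"].items())

posts = [
    {"id": "community_event",    "reactions": {"like": 200, "love": 50}},
    {"id": "inflammatory_rumor", "reactions": {"like": 40, "angry": 120}},
]

# Rank for the news feed: the inflammatory post takes the top slot
# even though fewer people reacted to it overall.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # ['inflammatory_rumor', 'community_event']
```

Even in this toy version, a rumor that draws 120 angry reactions outranks a post with 250 likes and loves; scale that up to a national news feed and the tilt toward inflammatory content is built in.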
In 2014, after hate on Facebook led to violence in Myanmar, Facebook backed a set of "flower speech" stickers meant to label hate speech, acknowledging it so it could be ignored. Instead, every sticker counted as engagement, giving Facebook's algorithm yet another signal to use to boost hateful content. That spread of hate drives engagement and profit, and focusing on engagement alone will usually spread the most inflammatory content. Facebook's algorithms allowed them to profit from genocide. They made money from this. If a company has no responsibility to humanity, then Facebook actually succeeded here. They made money. If that's all that matters, what more do they need to do?
Suggesting Hate
“This evidence shows that the core content-shaping algorithms which power the Facebook platform — including its news feed, ranking, and recommendation features — all actively amplify and distribute content which incites violence and discrimination, and deliver this content directly to the people most likely to act upon such incitement”
– Amnesty International Report
Facebook personalizes recommendations. They track users across the web, logging nearly every interaction, to feed their content-shaping algorithms. These are what say, "Hey, you've expressed an interest in skateboarding, here's an ad for a skateboard," or, "You recently bought a guitar, here are more guitar videos" on Instagram. Meta's bread and butter is keeping people engaged with their products, and they use those content-shaping algorithms to make sure your news feed and Instagram feed are tailored to keep you scrolling.
But what if your interests aren’t guitars, dogs, journalism, or other harmless hobbies? What if your interests lie with hate?
In 2019, researchers set up a fake account for a "conservative mother." Within just five days, Facebook was suggesting pages and posts from Q-Anon, the conspiracy theory cult the FBI has called a "domestic terrorism threat." Facebook was recommending it to run-of-the-mill conservatives, pushing them toward the alt-right. In fact, despite the violence stemming from Q-Anon conspiracy theories, Facebook allowed it on the platform for 13 months.
Facebook's content-shaping algorithms found the people most likely to engage with hateful, violent content and showed them more of it. They took the people most likely to commit acts of violence and showed them inspiration. They normalized hate to the very people who only needed to feel their hatred was justified before going out and acting on it.
Allowing the Spread
“While the Myanmar military was committing crimes against humanity against the Rohingya, Meta was profiting from the echo chamber of hatred created by its hate-spiralling algorithms.”
– Agnès Callamard, Amnesty International Secretary General
Publicly, Mark Zuckerberg has claimed that Facebook responds to hateful content before it's reported 94% of the time. What he failed to specify is that this isn't 94% of the hateful content on Facebook, but 94% of the hateful content the company identifies. According to internal documents, Facebook may identify only 5% of the hateful content on its site; Amnesty International suggests the number may be as low as 2%. Put those figures together and even the "proactive" removals cover only a tiny sliver of the hate actually circulating on the platform. Meta just doesn't make finding and removing hate a priority. Why would they? Those who spread hateful content are among their most active, and therefore most profitable, users.
The IIFFMM found this out firsthand. Four times, its investigators reported posts calling for violence against a person who had spoken with them as part of their investigation, and four times Facebook ignored the reports. Eventually, they had to go through a personal contact at Facebook to get the original post removed. By that point, it had been reposted and shared so widely that removal was meaningless; the hate had become part of the network.
Facebook had these reports. They knew this person's life was in danger because of the hate spreading on their platform. They just didn't care enough to step in until the U.N. itself asked a representative to do so. What hope does anyone else have?
Ignoring Desperate Warnings and Pleas
According to Amnesty International’s reading, the Facebook Papers show that Meta’s employees have not been blind to the damage they’re doing. Internal chats, emails, and study results confirm that Facebook knew they were creating potentially violent and deadly situations.
When Meta introduced Facebook to Myanmar, they did so as part of a program to bring the internet to the nation. Facebook is the internet in Myanmar. It's the universal homepage, the internet's first node. When people in Myanmar buy a new phone, shop employees often help them set up a Facebook account to get started.
Because Facebook is ubiquitous in the country, the hate it spreads is amplified. It's one thing when only half a nation's population is exposed to that hate; what about when it's nearly the entire nation? Facebook first started receiving warnings about hateful content in 2012. That content led to violence in 2014 and genocide in 2017. At every step of the way, Facebook was warned that the situation was worsening and that they were at the heart of it.
Can we say the genocide wouldn't have happened without Facebook? Not with certainty. But not only do the U.N. and Amnesty International believe the company contributed to the violence, even Facebook agrees it had a role in it. Without the widespread dehumanization of the Rohingya, often initiated by the military through posts on Facebook and then spread by a radicalized public, the perpetrators would not have had the numbers to commit atrocities on that scale.
Amnesty’s Advice for Meta and Legislative Bodies
“Despite its partial acknowledgement that it played a role in the 2017 violence against the Rohingya, Meta has to date failed to provide an effective remedy to affected Rohingya communities.”
– Amnesty International
Amnesty International wants change. Some of it could come from within Meta, but they acknowledge that "[i]t is abundantly clear, however, that Meta will not solve these problems of its own accord." Still, they've asked Meta to make changes, and asked legislators to enforce them.
For Meta, Amnesty International expects "human rights due diligence"; otherwise, the company risks "contributing to serious human rights abuses again." They also believe Facebook should pay reparations for "physical and mental harm, lost opportunities, including employment, education and social benefits, material damages and loss of earnings, including loss of earning potential, and moral damage." The Rohingya people asked for just US$1 million to fund education in the refugee camp Facebook helped force them into. Meta refused. It would have been roughly 0.002% of Meta's 2021 profits, but it was still too much for Meta.
Meta's algorithms were a large, if not primary, part of Facebook's role in the genocide. Amnesty wants Meta to rebuild its content-shaping algorithms around context rather than invasive tracking, which would reduce how much harmful content reaches the people most likely to act on it. Amnesty also wants all content shaping to be opt-in rather than opt-out. Users should know what they're getting into.
As Americans, we're familiar with the hateful content that can be found on Facebook. But as bad as it is here, it's far worse in the rest of the world, especially the Global South. In 2020, 84% of Meta's anti-disinformation resources went to the United States, with far less making its way to the developing world. Amnesty wants Facebook to change that, focusing more on the regions where it has allowed hate to spread.
Advice for Lawmakers
We know Facebook has admitted some culpability. Still, they haven't taken responsibility in a way that helps anyone, or in a way that would prevent this from happening again. That's why legislators need to step in. Lawmakers need to oversee Meta's recommendation algorithms and data tracking. They need to ban targeted advertising and require human rights due diligence. We need laws that force companies like Meta to put people first and profit second. Despite knowing their website was fomenting hate, Meta didn't work to fix the problem with Facebook in Myanmar. Regulators need to make human rights abuses unprofitable, and companies like Meta need to be responsible for the content they carry and recommend.
Meta is currently being sued for their role in the genocide, in an effort to force them to help the victims they helped hurt. Amnesty International has called out similar patterns in Ethiopia, India, and beyond, where Facebook is still driving suffering. Meta knows what's at stake for these people, but they won't take drastic action unless they believe they'll be held responsible. So far, the pressure hasn't led to the significant change that marginalized groups around the world desperately need from the American corporation.
Sources:
- Amnesty International
- Dell Cameron and Mack DeGeurin, Gizmodo
- Dell Cameron, Shoshana Wodinsky, and Mack DeGeurin, Gizmodo
- Cristiano Lima, The Washington Post
- Natasha Lomas, TechCrunch, [2]