Facebook Sued for $150 Billion for Role in Rohingya Genocide


After the genocide of the Rohingya people in Myanmar began in 2017, killing or displacing much of the population and driving some 730,000 people from the country, the U.N. placed at least partial blame on Facebook (now Meta). On Facebook, hate speech and calls for violence spread largely unchecked. Now, years later, documents from Facebook whistleblowers and a report from AP News point to a company that still doesn’t take the nation seriously enough. Hate spreads rampantly, and Facebook employees have raised concerns that the company isn’t doing enough and has even abandoned some of the measures put in place after the Rohingya genocide.

An anonymous plaintiff has brought a class-action lawsuit on behalf of all Rohingya. The suit, filed in two parts, applies to Rohingya wherever they live. For spreading the hate that fueled genocide and violence, the lawsuit asks for $150 billion.

Facebook, now Meta, collected nearly $86 billion in revenue in 2020 alone. However, its expenses are growing faster than its profits, and a $150 billion judgment could seriously damage the company. Facebook profited from the hate that killed or displaced hundreds of thousands of people. Does a company like that even deserve to exist? After this lawsuit, can companies like Meta still exist?

The Lawsuit

A woman brought two class-action suits against Meta (Facebook). One, filed in the U.K., covers the entire world outside the United States; the other, as you may have guessed, covers the U.S. The case is unprecedented, not only in the damages requested, but in the very idea of a lawsuit of this magnitude against a social network. Until now, social networks have been mostly immune to such lawsuits, in part due to laws like Section 230 in the U.S. But the situation behind the lawsuit is also unprecedented: according to the U.N. and activists, a company directly contributed to genocide. That is, fortunately, exceedingly uncommon, and, just perhaps, it’s not something Facebook will get away with.

Plaintiff’s Claims

According to the plaintiff, Facebook knew they were contributing to hate on the platform. It’s something David Madden, a tech entrepreneur who worked in Myanmar, confirmed to Reuters: “They were warned so many times. It couldn’t have been presented to them more clearly, and they didn’t take the necessary steps.” Two years after Madden says Facebook became aware of the hate spreading on their platform, the coordinated violence against the Rohingya began.

Hate spread on Facebook, and continues to spread, all while Facebook has allegedly been aware of it. According to the lawsuit’s claims, Facebook, now Meta, made that worse. Despite knowing about the problem, Facebook reportedly did what they’ve been accused of doing all over the world: favoring hateful and violent speech. Fake stories made to incite violence and hatred see more interactions, which keeps people on Facebook longer. Rather than hide these stories, Facebook reportedly boosts them and rewards the people posting them with reactions and greater interaction.

“While the Rohingya have long been the victims of discrimination and persecution, the scope and violent nature of that persecution changed dramatically in the last decade, turning from human rights abuses and sporadic violence into terrorism and mass genocide. A key inflection point for that change was the introduction of Facebook into Burma in 2011, which materially contributed to the development and widespread dissemination of anti-Rohingya hate speech, misinformation, and incitement of violence.”

– From the lawsuit

Facebook, now Meta, fostered technical illiteracy in the nation. By offering free internet access, Facebook essentially became the internet in Myanmar. As a result, they created an environment where people trusted what they read on Facebook without fact-checking it. The sheer volume of misinformation and hate overwhelmed users. According to the claims of this lawsuit, it was Facebook that drove hate in Myanmar, even after they knew they were doing it. The Facebook Papers Frances Haugen gave to Congress and the press largely support these claims.

The Lawsuit’s Potential


In the United States, Section 230 of the Communications Decency Act protects networks, meaning they can’t be held liable for the things people share on their platforms. However, Facebook’s post recommendation algorithm for its News Feed raises some interesting questions. In a way, Facebook is editorializing these posts, especially since the algorithm promotes them based on specific attributes, negative ones, in this case. The lawsuit instead makes a “defective product” argument: Facebook could be “reasonably expected” to provide a service that reduces harm, not one that introduces or amplifies it. If spreading hate and violence isn’t Facebook’s goal, then the product is defective, because it has most certainly done exactly that. Facebook has been negligent with the safety of its users, as well as of people who never even agreed to Facebook’s terms and conditions, the people around those whom Facebook posts radicalized.

It’s a tough case, but there is a chance here. Meta, formerly Facebook, has a lot of influence and money, however, and, legal arguments aside, that certainly gives it an advantage.

The Future of Liability and Social Networks?


Facebook is desperately trying to rebrand with Meta. You can make your own assumptions as to why.

Meta (come on, you know, Facebook) claims they want revisions to Section 230. They say they want more regulation for social networks. However, it’s possible, if not likely, that Facebook simply wants an active role in writing legislation they know is coming anyway. It’s my bet they want to write it so networks only have to “prove they tried to limit dangerous, hateful, or violent speech.” Facebook could argue they already meet that bar with their inadequate efforts. But if these provisions are written to ensure networks don’t introduce anything that increases the spread of hate speech or violence, then Facebook could be in trouble. The News Feed sort of relies on promoting whatever gets the most interaction, and that, as it turns out, is hateful content.

If this lawsuit is successful, Meta will be liable for the hate they spread. Other networks, especially those with suggested posts or a non-sequential feed, like Twitter or TikTok, could face similar liability. Suggested feeds could prove too dangerous to have at all, and we may go back to our much-preferred sequential feeds. That, or, you know… Meta and other companies could simply ban the bigots instead of boosting them.

You’d think that after contributing to the displacement or death of hundreds of thousands of people, Meta would do that anyway, but it seems nothing can motivate Meta/Facebook to do the right thing.

