Mozilla Asks Social Networks to Do More than Deplatforming


Photo: Leah Millis/Reuters

Deplatforming is the term we use for when social networks ban a person. Alex Jones knows all about it: networks banned him after he repeatedly harassed survivors and families of mass shooting victims, specifically targeting victims of the Sandy Hook massacre. Laura Loomer, an Islamophobic activist, was banned from social networks after complaining that she couldn’t find an Uber with a non-Muslim driver. Donald Trump’s deplatforming came after far worse. Twitter, Facebook, and others banned Trump after he encouraged an insurrection that killed five people, in which a mob took the Capitol for the first time in over 200 years.

But this wasn’t the first time social networks were the vehicle of violence.

Facebook contributed to genocide. They’ve helped wealthy billionaires steal personal information for political campaigns. Twitter has led to doxxing, offline harassment, and death threats, completely uprooting people’s lives. Pinterest has… okay, Pinterest, we’re cool. Instagram has hosted bullying. WhatsApp, Telegram, and other chat services have allowed the spread of misinformation that led to lynchings and other hate crimes all over the world.

Our social networks have elevated the worst in humanity because humanity’s worst is profitable: it drives traffic and engagement. Facebook communities like the “Stop the Steal” and “Storm the Capitol” groups helped people organize last week’s insurrection. Parler hosted the worst of it, though other companies helped shut Parler down in the wake of these riots. Twitter had to shut down some of the more violent hashtags, like those calling to hang Mike Pence.

Mozilla is calling for social networks to do more. Banning people after they incite violence isn’t enough. We need to be able to prevent this violence from spreading in the future. Mozilla has a few ideas on how to do that.

Mozilla’s Plan

Gallows erected in front of the Capitol building

Trump supporters chanted “Hang Mike Pence” and set up this crude gallows for murdering politicians and press. Photo: AFP via Getty

“But as reprehensible as the actions of Donald Trump are, the rampant use of the internet to foment violence and hate, and reinforce white supremacy is about more than any one personality. Donald Trump is certainly not the first politician to exploit the architecture of the internet in this way, and he won’t be the last. We need solutions that don’t start after untold damage has been done.”

– Mitchell Baker, writing for Mozilla

Mozilla doesn’t want social networks simply scrambling to ban people after they’ve incited violence. It wants them to be proactive: to prevent such violence in the first place and, when it does happen, to block its cause, not just the person. That means transparency, fact checking, and studies of how algorithms are influencing people’s habits on these platforms. In short, it’s letting the public see what networks are doing so we can hold them accountable when they refuse to moderate or actively suggest violence. Many people who joined alt-right pages, including those organizing the siege at the Capitol, did so at Facebook’s own recommendation.

Mozilla has four steps they want to see social networks take:

  1. Reveal who is paying for advertisements, how much they are paying and who is being targeted.
  2. Commit to meaningful transparency of platform algorithms so we know how and what content is being amplified, to whom, and the associated impact.
  3. Turn on by default the tools to amplify factual voices over disinformation.
  4. Work with independent researchers to facilitate in-depth studies of the platforms’ impact on people and our societies, and what we can do to improve things.

Will it Work?

A man in the Capitol carrying a Confederate Flag

Photo: Jim Lo Scalzo/EPA-EFE/Shutterstock

This is a relatively small ask: transparency in ads, especially politically motivated ads, along with studies and transparency around news feed and recommendation algorithms. Researchers have called for both since Facebook, Twitter, and others switched from chronological news feeds to algorithmic black boxes. It seems these feeds polarize and divide us while promoting extreme views. With transparency, we can hold Facebook accountable when it does something like suggesting a user join a group of white supremacists planning an insurrection at the U.S. Capitol.

Social networks aren’t going to go along with this willingly. They know that, if they have to change their algorithms to reduce violence, they’re going to lose money. For them, violence is profit. It’s likely, therefore, that this is an appeal more to lawmakers, activists within these companies, and users, who may want to engage in their own activism.

One thing’s for certain: our social networks have done untold damage to the fabric of our society and democracy, all over the world. We’re starting to see some of that violence come to a head now, but it won’t be the last or the worst of it. We need to make drastic changes now to ensure this doesn’t happen again.
