Mozilla Wants Facebook and Twitter to Stem Hate and Disinformation Better


Fake and intentionally misleading news is spreading on Facebook in greater volume than it did in 2016. There’s more fake news on Facebook than real news! Mozilla has been an ally to an open, free, and private internet since it was formed. To date, Mozilla’s Firefox browser is the most private, fast, and feature-rich browser around. Users can easily block trackers and ads through plugins, enable a VPN directly in the browser, and use “Tab Containers,” which are tabs that act like a completely separate browser, so no one can track you across the web.

Mozilla cares a lot about safety and security, but they also care about tech in general and, by extension, how tech can undermine democracy. That’s why they’re reaching out to Facebook and Twitter now to get them to change their platforms until January 2021. The goal is to limit disinformation about politicians and the election in the lead-up to the vote and in the time afterward. The last quarter of 2020 will be its toughest yet. We’ll likely have disputes around the election, a questionable and potentially illegitimate Supreme Court nomination, and the possibility of more federal investigations. It’ll be a tumultuous time. Our social media should help spread knowledge, not fear. To do that, Mozilla has a few small suggestions for each platform.

Ditch Suggestions

An image of the open letter Mozilla wrote to Facebook and Twitter. The full text of this is available on Mozilla's website, linked in this image and below.

Mozilla’s open letter to Facebook and Twitter can be boiled down to this: ditch the suggestions. Facebook suggests groups and pages to users in an effort to increase their time spent on Facebook. Twitter has suggestions in the form of trends and popular hashtags. Neither system is perfect, and both can lead to some dangerous uses of these platforms.

“64% of all extremist group joins are due to [Facebook] recommendation tools… [Facebook] recommendation systems grow the problem.”

– Jeff Horwitz and Deepa Seetharaman, The Wall Street Journal

As it turns out, Facebook’s suggestions drive a large number of users to extremist groups. Facebook has tried to filter out these groups, but its filters are imperfect. Hate groups find ways around censors and use memes that seemingly talk about innocent things. That’s part of the reason so many hate groups today don’t even sound like hate groups: Proud Boys, QAnon, 3 Percenters, Boogaloo Bois. They’re hate, conspiracy, and/or militia groups, and they pop up on Facebook in a variety of forms. While Facebook’s algorithms can’t seem to figure out that these are hate groups, they do know the type of person who might be interested in them. A few suggestions later, and someone is being radicalized by a hate group, off to bring an AR-15 to a peaceful protest.

“[Twitter’s Trending] system has often been gamed by bots and internet trolls to spread false, hateful or misleading information.”

– Kate Conger and Nicole Perlroth, The New York Times

On Twitter, bad actors game hashtags in a similar way. They’ll use bots to get a particular hashtag trending, linking to fake stories and disinformation intended to polarize the electorate. These tactics are also used to “red pill” users (a term the Matrix creators hate), duping a person into extremist, often far-right beliefs.

This is why Mozilla is asking Facebook and Twitter to abandon their suggestion features, at least until after the election. This could dramatically reduce the amount of intentionally misleading or hateful content on their networks.

Baby Steps Too Small (And Not Enough)

Twitter has probably done the most to stem false and misleading content on the site. The new “For You” tab shows context for trends. This keeps users from discovering misleading stories, and can help Twitter recognize and block disinformation or other harmful content, like misinformation about COVID-19, before it reaches all Twitter users. Still, fake hashtags, bot-driven stories, and fake news often end up on Twitter’s “Trending” tab.

Facebook, on the other hand, seems to revel in its misinformation and hate. Misinformation on the platform is worse than ever, the company seems to intentionally sabotage some stories, and they don’t want anyone looking into how anything ends up on your feed. Worst of all, Facebook knows they have a serious problem. That’s why they’re suspending political advertising on Facebook… after the election. This is, of course, after they allowed politicians to share false information in political advertisements. Facebook doesn’t want to get better. However, they have caved in the past. Facebook stopped recommending health groups to people as a way to stamp out fake information about health, like COVID-19 and vaccines.

What does it say about Facebook that they have to ban entire categories of groups from automatic suggestions to cut down on false information?

Mozilla’s efforts are likely futile. Both Twitter and Facebook know their suggestions are a boost to user engagement and therefore revenue. Those suggestions aren’t going anywhere. However, Mozilla has long-term goals. They’re hoping that, by pointing out the problem now, they can still help save the 2024 presidential election. With the privacy features of Firefox, they’re already contributing a great deal. Hopefully legislators pick up the ball and run with it, because we’re going to need new laws if we want to end the incredibly profitable business of spreading fake news.

It’s about time lawmakers start talking about tech.
