
After #WomenBoycottTwitter, Twitter Vows to Make Changes (Again)


Twitter’s CEO Jack Dorsey has promised in the past that Twitter would become a platform where people can feel safe. Unfortunately, it hasn’t gotten a whole lot better. People, women and minorities especially, still face harassment on Twitter, and too little is done when they come forward with complaints, leaving them silenced and afraid to speak at all. Twitter was used to stage GamerGate’s targeted harassment of women in the gaming industry, it’s being used by white supremacists to spread hate and organize, and reports of this hate largely go unanswered until the problem becomes too large for Twitter to ignore. I myself, after a tweet of mine went mildly viral, found myself using Twitter’s report tool. The man-child who was targeting me was banned for a few days and automatically blocked from accessing my profile, but little more came of it. I’m sure he’s unbanned now, spreading his hate like nothing happened. This is all too common, and it does nothing to stop Twitter from becoming a toxic place.


The issue with being a platform for unfiltered free speech is that it always turns nasty. 4chan used to be a fun place to find humorous stories, discuss video games, and share memes. Believe it or not, it was popular among both men and women. However, it rapidly became toxic, its more level-headed users leaving nearly a decade ago. Now it’s a wasteland of the alt-right, immature young men doing hateful and hurtful things “for the lulz,” that is, just for kicks. It’s on sites like this that Nazi propaganda spreads and white supremacists talk and gather. It’s here that they rally behind what they consider “free speech” but is really nothing more than harassment and hate. These bastions of “free speech” often devolve into gathering places for the most deplorable members of our society. Their hate speech leads, inevitably, to violence.

From Charlottesville, Virginia, to Gainesville, Florida, and beyond, white supremacist, alt-right, neo-Nazi, and pro-Confederacy gatherings planned online have frequently led to real-world violence and harassment. Often, these organizations come together and spread their messages on Twitter before taking their intimidation into the streets. Friday night in Gainesville, neo-Nazis took to the streets after a Richard Spencer speech, threatened protesters, and even fired a gun at the crowd; fortunately, the shot hit only a building, missing the protesters. The harassment that starts online doesn’t end there, and Twitter has become the place where so much of it begins.

Just a small percentage of the tweets Anita Sarkeesian received for speaking out against sexism in gaming… in a single day.

GamerGate started after a man ranted online about his ex-girlfriend, a game developer, who allegedly cheated on him with several men, including a few video game journalists. These journalists weren’t involved in the reviews of her game, nor did the timelines match up, but it was enough for angry gamers, most of them men, to cry foul. You can likely see where this is going. Their real anger was directed at a woman: someone they saw as untrustworthy, someone they believed cheated, someone they thought would use her sex appeal to get good reviews of her game. These are the sexist accusations women in any industry have to deal with when they become successful, but they’re especially prevalent in tech, a male-dominated industry. GamerGate grew. It was organized on websites like 4chan and 8chan, and the harassment was carried out on Twitter. Women in the gaming industry were targeted, doxxed, and forced out of their homes. Brianna Wu, a developer GamerGate targeted, still receives harassment to this day. Twitter made the situation worse, allowing people to make threats against these women and doing little to stop them. Since then, Twitter has begun its first waves of crackdowns to improve the service, but, obviously, it hasn’t done enough.

“I want to say unequivocally, Gamergate did this to me. If I’m saying this less on Twitter it’s because I feel fanning the flames will endanger my life even more. I know that many #gamergate supporters are trying their best to distance themselves from these events, but I agree with many others that feel the movement is inexorably linked with misogyny and sexism.”

-Brianna Wu on the harassment she faced on Twitter and in her life thanks to GamerGate

Every day, people face harassment online and on Twitter. Women are targets of misogynists: sent lewd remarks and obscene photos, doxxed, and harassed for speaking out. Women are frequently the most affected targets of this harassment, but they aren’t the only ones. People of color, Muslims, Jews, LGBTQ people and, well, basically everyone who isn’t white, straight, cisgender, male, and Christian (or at least atheist) is sent vile harassment through Twitter. I’ve been on the receiving end of harassment myself after a tweet got some attention. In my case, the harasser was quickly suspended until they deleted the tweet. I blocked them and haven’t heard from them since. It seems few people are so lucky.

Twitter hasn’t done an excellent job of dealing with harassment in the past. Each time, they attempt to get better, and somehow don’t improve enough. To their credit, Twitter is definitely a better place to be than it used to be; however, the rules aren’t strictly enforced. In fact, it seems as though the only way to get your complaint heard is to call out Twitter for it and have your tweet get retweeted a few hundred, perhaps thousand, times. This was made clear after one woman found herself doxxed, with her harasser posting personal details about her. Twitter did not find it to be against the rules until her story had been retweeted enough. Nothing Twitter has announced leads me to believe this lack of enforcement will change.

As Jack Dorsey announced, Twitter is adding rules to help prevent this kind of behavior, but will it be enough? What does it change, and what won’t be protected? First, Dorsey says Twitter will go after unwanted sexual advances. This is a huge problem women face, and it includes sexual messages, “dick pics,” and other sexually explicit harassment. I actually learned something about Twitter’s rules here: pornographic content is “generally permitted” on Twitter. Up until now, the only way Twitter would go after someone for distributing sexually explicit content was if one of the parties reported the other. However, with so many harassment reports going nowhere, many women simply block the user and otherwise ignore it. The harassment still happens, though. To help combat this, Twitter will make it clear in the rules that this is unacceptable behavior. They’ll still require a party to report it, but they’ll use past interactions, such as a user muting this sort of content, to decide how to proceed more quickly. This sounds like Twitter won’t be aggressive enough to put a halt to this behavior.

Twitter does not allow non-consensual nudity on the site, that is, photos taken of people (nearly always women) and shared without their permission. This will soon include “creep shots,” revenge porn, and “upskirt” photos. Twitter currently locks people who start the spread of this content out of their accounts until they delete the tweets in question, and permanently suspends them if they do it again. However, this currently applies only to the original poster, not to anyone who retweets it afterwards. In the future, Twitter will permanently suspend users who do this on their first infraction, as well as anyone else who intentionally or maliciously shares non-consensual nudity. They won’t require the person in the photos to report them either, as that person may not even know the photos exist.

Twitter will also be banning hate symbols, violent groups, and tweets that glorify violence. This will allow them to go after neo-Nazis, the KKK, the alt-right, GamerGaters, and anyone else looking to inspire violence against a person or a group of people. These users will likely only get temporary bans if the account belongs to an individual, but groups may be banned permanently if their only purpose is to stir up hate or violence. Twitter will also be clearer when suspending accounts, to better deter the behavior and clear up any confusion ahead of time.


These measures mostly sound good, so what else should Twitter be doing? To Twitter’s credit, hundreds of millions of tweets are posted every day, which is difficult to sift through, but they could be scanned better. Twitter could scan tweets client-side, that is, on users’ devices, to free up server processing if doing it on their own servers proved too burdensome. They could warn the user about harsh language, detecting phrases or words often used in threats, and flag the tweet for manual moderation after it’s submitted. If it’s done client-side, this could be secured using a shared-key system, so someone couldn’t sneak nasty tweets past the filter without blocking their own tweet. By doing it on the user’s device, Twitter could even prioritize messages tweeted at someone, or private direct messages, to ensure targeted harassment is dealt with first and foremost, though stopping the spread of hate speech is also vital. Twitter has access to millions of tweets and reported tweets; they could easily use machine learning to train a model to recognize abusive patterns and flag them for a human to review.
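To make that idea concrete, here’s a minimal sketch of the kind of flagging pipeline I have in mind, written in Python with scikit-learn. Everything in it is hypothetical: the tiny training set, the flag_for_review function, and the 0.5 threshold are stand-ins of my own, not Twitter’s system, and a real version would train on millions of resolved reports and still route flagged tweets to human moderators rather than acting on its own.

```python
# Minimal sketch: train a text classifier on previously reported tweets,
# then use it to flag new tweets for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweets paired with whether moderators
# upheld a report against them (1 = abusive, 0 = not abusive).
tweets = [
    "you should be driven out of the industry",
    "great write-up, thanks for sharing",
    "we know where you live",
    "looking forward to the next patch notes",
]
labels = [1, 0, 1, 0]

# TF-IDF features over words and word pairs, fed into a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

def flag_for_review(tweet: str, threshold: float = 0.5) -> bool:
    """Return True if the tweet should be queued for a human moderator."""
    prob_abusive = model.predict_proba([tweet])[0][1]
    return prob_abusive >= threshold

print(flag_for_review("we know where you live and work"))   # likely True
print(flag_for_review("thanks for the kind words"))          # likely False
```

The point isn’t that this toy model would catch real abuse; it’s that flagging for human review, rather than auto-banning, is the kind of first pass Twitter could run at scale.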

Twitter also needs to manually look into every complaint, and to remain cautious. If someone reports a person, chances are the tweet greatly offended them or was potentially dangerous; reporting a person is a serious matter. That’s too grave a situation for a bot to handle alone. It should be dealt with quickly, with the person who sent it either suspended or, if they’ve become known for this behavior, banned permanently. With automatic warnings and the flagging of tweets before they’re even posted, along with manual checks of complaints, Twitter could fix a serious issue they have right now: you’ve got to be famous or go viral to have your harassment complaints dealt with seriously.

There’s one final idea I had: the blue checkmark. Twitter has to reconsider how users verify their accounts. Verification shouldn’t be reserved for celebrities or large organizations. As of this writing, I’m not verified and neither is Leaf and Core, and that shouldn’t be the case. I’ve provided more than enough information to Twitter to verify my actual identity, but Twitter saves the blue checkmark for users who have a lot of followers. Instead, they should treat it as a safety measure. They should allow anyone to become verified by submitting enough information to prove their identity, then let users filter their experience: only be seen by other verified users, only see other verified users, or leave things the way they are. This would allow Twitter users to quickly cut off troll accounts, accounts spun up by bigots to dole out harassment without consequences. All too often, an account is banned and the user behind it simply creates another a few minutes later. Bans mean nothing to these trolls. If Twitter really wants to stem their harassment problem, they’re going to have to bring real verification into the process. If they still want to give celebrities and other large accounts the same recognition, they could introduce another checkmark or symbol for them: perhaps a gold checkmark for a celebrity or large influencer, while the blue checkmark would be for users who have verified their identity with Twitter. This measure alone would cut down on a large share of the harassment on the platform, as harassers are far less likely to act when they’re not anonymous. The fact that Twitter hasn’t considered something like this shows, definitively, that they don’t take harassment on their platform seriously enough. The lack of this measure is exactly why women are still unhappy with Twitter’s response to #WomenBoycottTwitter. Twitter has an obvious solution in their back pocket; they just don’t think the issue is important enough to deploy it.
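As a rough illustration of that filtering option, here’s a hypothetical sketch in Python. The User and Tweet structures and the verified-only setting are my own invention for the example, not anything Twitter actually offers:

```python
from dataclasses import dataclass

@dataclass
class User:
    handle: str
    identity_verified: bool  # has proven their identity, per the proposal above

@dataclass
class Tweet:
    author: User
    text: str

def visible_timeline(tweets, verified_only: bool):
    """Return the tweets a user would see with the proposed filter enabled."""
    if not verified_only:
        return list(tweets)
    return [t for t in tweets if t.author.identity_verified]

timeline = [
    Tweet(User("@identified_writer", True), "New post is up."),
    Tweet(User("@burner_account_47", False), "You'll regret writing that."),
]
print([t.text for t in visible_timeline(timeline, verified_only=True)])
# ['New post is up.']
```

The design choice that matters here is that anonymity stays optional: anonymous accounts aren’t banned, but people who are being targeted can simply opt out of hearing from them.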

Free speech must be for all, but it’s not if we allow harassment and intimidation. Photo Credit: Mobilus In Mobili

Should we defend free speech? Absolutely. In fact, I still think it may be our most important right as Americans, and I’ve written about it frequently on my personal Medium blog. Our founding fathers thought so too, when they put freedom of speech in our first amendment. But there are three reasons the first amendment doesn’t apply to hate speech and harassment on Twitter.

First, our right to free speech, like all of our rights, cannot go completely unchecked. The second amendment doesn’t mean you can buy a Predator drone at Walmart along with your large-capacity semi-automatic high-caliber rifle (oh, you can still get one of those? Why?). You can’t use your freedom to drink as license to drink and drive. You can’t use free speech to shout “fire” in a crowded theater. Hate speech, like these other freedoms taken to extremes that infringe upon the rights of others, can also be limited. Just as someone can be arrested for telling people to harm a public official, so, too, must people be punished when they call for ethnic or religious genocide or the “cleansing” of a country. Hate, bigotry, ignorance, and violence are the only things that dirty the personified hands of a country, and the only things that should be cleansed from it.

The second reason is that Twitter is a private company, and it can control what’s said within the confines of its business. What’s said and allowed on Twitter reflects on the company, which could lead to lower engagement, fewer investors, and, eventually, a failed company. Twitter can ban users who spread hate, just as A&E could have fired members of the Duck Dynasty cast when they spoke out against the equal rights of LGBTQ people. It’s very different from, for example, the government banning companies from refusing service to LGBTQ people, because the act of providing a service is not speech.

Finally, Twitter is not a government institution. This isn’t the president trying to limit voting rights (oh, he is?) or threatening to take away broadcast rights from news organizations that point out his mistakes and flaws (oh, he’s doing that too?); this is a private company deciding what is appropriate conduct on its own platform. White supremacists, Nazis, and those threatening nuclear annihilation of the entire planet can build their own social network like Twitter, if they so choose. There is no law preventing their hate speech, no matter how vile, in the United States, but that doesn’t mean Twitter is forced to be a platform for it or that we have to listen. They can take their right to free speech anywhere else.

By allowing hate speech and harassment, Twitter has silenced their most vulnerable, blocking the free speech of all in favor of the speech of hateful bigots.

The line we must walk to defend free speech while stemming the tide of hate speech is a tricky one. Twitter once called itself the “free speech wing of the free speech party.” But that was an easy title to hold when Twitter was being credited with assisting in the pro-democracy revolutions of the Arab Spring. Now, free speech is under real attack in the United States from a president who has said he wants “guilty until proven innocent” laws for libel and has threatened to pull the broadcasting rights of organizations that have written critical pieces about his presidency and personal life. At the same time, perhaps to lessen the impact of those attacks, people on the alt-right, misogynists, and neo-Nazis (how often they all overlap!) have set out to claim that their hate speech and harassment should be covered under free speech in all its forms. Richard Spencer’s appearances have twice led to violence; Milo Yiannopoulos has attacked transgender people personally, singling them out in speeches meant to whip up hatred within a crowd. Men on Twitter harass women daily. These platforms haven’t become bastions of free speech, because hate speech is being used to drown out the speech of others.

I’ve been afraid of speaking out, as a woman in technology, because I don’t want my life uprooted as others’ have been. I’ve been partially doxxed before as a result of writing about a woman who was targeted by GamerGate. This can’t be the norm; we can’t allow violent, hateful bigots to silence everyone else. For Twitter to become a true platform of free speech, they have to rid themselves of bullies. To do this, they’ll need new rules, they’ll need to take complaints of harassment more seriously, and they’ll have to remove the veil of anonymity to make their rules truly impactful, limiting the kind of hate that can only be spread from behind a mask. Twitter tried to respond to #WomenBoycottTwitter, but they haven’t done enough. As women find their voices silenced by hate, we may leave the platform, and in what way is that defending free speech? Twitter could be doing a lot more. The first thing they need to do is take victims more seriously and set out to make large changes, or their platform will become as toxic as other alt-right corners of the internet. What we have now is the equivalent of shouting “fire” in a packed building where women are trying to speak. Twitter needs to step up.
