Meta and Pinterest Implicated in Young Teen’s Death


Content Warning: This story involves suicide. If you’re depressed, suicidal, or need help, reach out to someone.

[Image: Instagram and Pinterest logos]

Molly Russell would have been 19 years old this year. She never made it. Five years ago, when Molly was just 14, she took her own life. The coroner did not rule her death a suicide. In the U.K., coroners are judicial officers with the power to rule on a cause of death, and instead of suicide, the coroner, Andrew Walker, concluded that she “died from an act of self-harm while suffering from depression and the negative effects of online content.” That harmful content came from Instagram and Pinterest, where he placed the blame. Meta’s Instagram flooded Russell with posts related to suicide, self-harm, and depression; she interacted with over 2,000 of them before her death. Pinterest even sent her marketing emails urging her to engage with “10 depression pins you might like.”

In 2012, Facebook ran a highly unethical study on its users, altering the content they saw to test whether it produced discernible changes in their mood. It did. Despite this, in 2017, social networks were still flooding depressed users with content that could drive them deeper into depression. According to Walker’s ruling, that led to a young girl taking her own life.

Instagram and Pinterest’s Role

In the six months before her death, Molly Russell viewed some 16,000 posts on Instagram. That sounds like an extreme amount, but social media is designed to be addictive, to trap people in a cycle of engagement. Of those posts, 2,100 related to suicide, self-harm, or depression. On Pinterest, she kept a pinboard of 469 images on the same topics. Pinterest rewarded her with a marketing email: “10 depression pins you might like.” An angler uses different bait to catch different fish; with the right setup and the right hook, the prey never gets off the line. For Molly, the algorithm found its hook: depression. It didn’t let her go.

“These binge periods are likely to have had a negative effect on Molly. Some of this content romanticised acts of self-harm by young people on themselves. Other content sought to isolate and discourage discussion with those who may have been able to help.”

“It is likely that the above material viewed by Molly, already suffering with a depressive illness and vulnerable due to her age, affected her in a negative way and contributed to her death in a more than minimal way.”

– Andrew Walker, Coroner

Meta (Facebook, Instagram) requested content redactions as part of the inquiry, which is why the process took an arduous five years to reach a resolution. Pinterest’s representative, Judson Hoffman, admitted that Pinterest was “not safe” when Molly was using the service, and that harmful content “likely exists” on the site today. Meta’s representative maintained that the posts Molly saw were safe, but later apologized for content that broke Instagram’s rules slipping past moderation. She still insisted the service was safe, though internal documents show Meta knows it is not.

This is the first time a ruling has implicated social media companies in a child’s death. The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) called it a “big tobacco moment,” implying this could be the beginning of finally holding these companies responsible for the harm they cause. In the U.S., a family is suing Meta and Snapchat over the death of their 17-year-old child. A 10-year-old’s mother is suing TikTok over the “Blackout Challenge,” which she claims killed her daughter. In California, two parents are using evidence from the Facebook Papers to sue Meta over their child’s eating disorder. Meta’s own research shows that children report Instagram makes them feel worse about their bodies and that some trace suicidal thoughts back to the app.

Social media has had a stranglehold on the world. Its victims are fighting back.

 

“There were periods where I was not able to sleep well for a few weeks, so bearing in mind that the child saw this over a period of months I can only say that she was [affected] – especially bearing in mind that she was a depressed 14-year-old.”

– Dr Navin Venugopal, testifying on the content Instagram and Pinterest had shown Molly

 

Caring for Users in a World of Infinite Growth

Russell family lawyer Oliver Sanders KC: “Do you agree with us that this type of material is not safe for children?”

Elizabeth Lagone, Meta’s head of health and wellbeing, responded by describing the content as a “cry for help.”

Sanders: “Do you think this type of material is safe for children?”

Lagone, Meta: “I think it is safe for people to be able to express themselves.”

Sanders then repeated his question a third time.

Lagone, Meta: “Respectfully, I don’t find it a binary question.”

Andrew Walker, Coroner (Judge): “So you are saying yes, it is safe or no, it isn’t safe?”

Lagone, Meta: “Yes, it is safe.”

The coroner would later rule that it was not.

“If this demented trail of life-sucking content was safe, my daughter Molly would probably still be alive and instead of being a bereaved family of four, there would be five of us looking forward to a life full of purpose and promise that lay ahead for our adorable Molly.”

– Ian Russell, Molly’s father

Meta has long known its platforms can affect users’ moods. In 2012, the company put nearly 700,000 Facebook users into an unethical mood-manipulation study. It showed users either sadder or happier posts, then measured whether they were more likely to post sad or happy content in return. The study had flaws, such as relying on the limited sentiment-analysis tools of the time, but the results were statistically significant: Facebook could alter a user’s mood. Facebook conducted the study without any form of ethics review, which would have rejected it for failing to inform participants of potentially damaging side effects. Facebook also didn’t give users the option to opt in or out; they were chosen for the potentially harmful study at random.

Mood is Profit

Meta has since realized that certain moods drive engagement, and it tailors the news feed specifically to surface them. According to leaks and studies of the company’s behavior, the mood that drives engagement best appears to be anger. Have you noticed Facebook pushing anger? It would lead to things like division, hate crimes, coup attempts, or even genocide.

Have you noticed anything like that? Well, Facebook did.

Of course, anger may not drive every user; Molly was driven to engage by depressing content. Meta’s algorithms are a black box, but given what we know about how they recommend the posts a user is most likely to interact with, they likely recognized the patterns that increased engagement from Molly specifically and began suggesting more depressive content. We know this happened with Pinterest, which sent her a marketing email full of depression-related pins to pull her back into the app. It doesn’t get more blatant than that.
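
To make that mechanism concrete, here is a deliberately oversimplified sketch of a purely engagement-driven recommender. It is not Meta’s or Pinterest’s actual code; the function names, topic labels, and scoring are hypothetical. The point is that nothing in the objective asks whether a topic is harmful, only whether this particular user has engaged with it before.

```python
from collections import defaultdict

def update_profile(profile, post_topics, engaged):
    """Record one interaction: count engagements per topic for this user."""
    if engaged:
        for topic in post_topics:
            profile[topic] += 1

def score(profile, post_topics):
    """Score a candidate post by the user's past engagement with its topics."""
    return sum(profile[t] for t in post_topics)

def rank_feed(profile, candidates):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(candidates, key=lambda post: score(profile, post["topics"]), reverse=True)

# Toy run: a user who engages with depressive content gets shown more of it.
profile = defaultdict(int)
update_profile(profile, ["depression", "self-harm"], engaged=True)
update_profile(profile, ["puppies"], engaged=False)

candidates = [
    {"id": 1, "topics": ["puppies"]},
    {"id": 2, "topics": ["depression"]},
    {"id": 3, "topics": ["self-harm", "depression"]},
]
print([p["id"] for p in rank_feed(profile, candidates)])  # -> [3, 2, 1]
```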

Control a person’s mood and you can keep them engaged. That’s profitable. But what about the damage done by amplifying certain moods, like anger, hate, or depression? Would Facebook dial those back if doing so could hurt short-term profits? The company has reportedly dialed back recommending posts that draw angry reactions, but it still gives untrustworthy websites and public figures leeway while burying more legitimate sources of news.

Meta wants to project infinite growth to shareholders. Despite being a multi-billion-dollar company, Meta is freezing hiring and may conduct layoffs just to project an image of fiscal stability. It looks like a company interested only in short-term profits and shareholder sentiment. Protecting users just isn’t profitable this quarter. In big tech’s playbook, it never has been.

New Features Over Repairs

New features drive immediate engagement more than protecting users does. Software engineers are a limited resource, and companies choose where to allocate them: to user safety, security, and performance, or to building new features like the Metaverse, “Digital Collectables,” and all the other sidebar items you may never look at. Meta has to decide which matters more: making sure its product is safe for children, or driving engagement. A social network that is safe for children wouldn’t have hateful, bigoted, or bullying posts. It wouldn’t seek to trap them in the app with addictive little hits of dopamine. It wouldn’t host hateful groups, coup plots, or campaigns that spread negative emotions. Children aren’t experienced enough to realize when they’ve been sucked into something designed to addict them, and when that thing has a surprising amount of control over their mood, it becomes extremely dangerous.

We can’t say exactly how Meta, Pinterest, Snapchat, or TikTok make their internal decisions. But they all seem to have found their own ways to make their apps addictive, and they all let children use those apps despite that addictiveness and the potentially harmful content on them.

We’re All Frustrated. We’re All Victims.

How could I truly remain unbiased here? I have a Facebook account. I’ve seen how Meta’s algorithms have turned into an internet hate machine, dividing communities and spreading anger to drive up engagement. I’ve fallen victim to it myself, losing my cool more than once over the past few years. Can any of us who engage with content on these platforms really say we feel the same about the issues, or about the people holding opposing views, as we did a decade ago? We’ve seen these social networks’ ability to influence mood. We’re all victims of it. Our moods were changed to make us more profitable.

What if social networks, left unrestricted, were spreading depression and suicidal thoughts just as easily? An algorithm doesn’t know the moral difference between photos of puppies that keep you engaged and depression posts that keep a child engaged. Strong feelings drive engagement. Engagement is profit. That’s all it “knows.” A person overseeing that AI could teach it not to push content flagged as angry, depressing, or hateful, but where’s the profit in that? How do you increase profits forever if you stop the machine causing that growth? Companies have to decide where the line is between profitability and dialing back potentially harmful suggestions. Unfortunately, they may not draw that line where it causes no casualties. Victims like Molly just got caught up in Instagram’s infinite profit machine, and we’re supposed to just accept that.
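
If a platform did decide to dial such a system back, the simplest version would be a single penalty applied to flagged topics, layered over the earlier sketch. Again, this is a hypothetical illustration, not any company’s real system; where the penalty gets set is exactly the business decision described above.

```python
# Hypothetical extension of the earlier sketch: a tunable penalty on flagged topics.
FLAGGED_TOPICS = {"depression", "self-harm", "hate"}

def adjusted_score(profile, post_topics, penalty=0.0):
    """Engagement score, reduced for flagged topics.

    penalty=0.0 ranks purely on engagement (maximum profit);
    penalty=1.0 drops flagged content from recommendations entirely.
    """
    base = sum(profile[t] for t in post_topics)
    if FLAGGED_TOPICS & set(post_topics):
        base *= (1.0 - penalty)
    return base
```

Nothing in the math forces the penalty to be set high enough to protect a vulnerable 14-year-old; that choice is made by people weighing engagement against harm.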

Mr. Sanders pressed Ms. Lagone on the content Meta showed children, growing heated towards the end and shouting, “You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this.”

Perhaps he didn’t consider just how profitable showing that content to minors is. Or maybe, to him, to the Russell family, to—I hope—all people reading this: no amount of profit is ever more important than a child’s life.

It’s a shame big social networking corporations don’t seem to hold that view.

 

“It’s time the toxic corporate culture at the heart of the world’s biggest social media platform changed … It’s time to protect our innocent young people, instead of allowing platforms to prioritise their profits by monetising their misery.”

– Ian Russell


Sources: