According to research by Home Security Heroes in their State of Deepfakes report, 98% of all deepfakes are porn. Nearly all of those, 99%, depict women. They're mostly celebrities, but creeps have also created AI-generated revenge porn of their exes. Imagine a stalker, a creep, an ex, someone who just doesn't like you, being able to generate horrible pornographic images of you and use them to try to ruin your reputation, maybe even get you in trouble with the law.
Thanks to generative AI, and to laws that have failed to keep up with the technology, whether through intent or incompetence, this could be your reality today. This isn't some future technology; it's already here. Someone with little to no computer knowledge can make realistic nude and pornographic images of anyone, with few repercussions.
Finally, lawmakers are starting to catch up. Taylor Swift and many other women have been victims of AI-generated porn before, but this time Twitter's understaffed moderation team helped it go viral, and now lawmakers are paying attention.
It’s a shame they paid so little attention when it was children, but at least we’re finally heading in the right direction.
Twitter Becomes THE Place for Taylor Swift Deepfake “Nudes”
These images have existed for years. In the past, "deepfake" referred to simpler methods of taking a well-known person's face and morphing it to fit onto a different photo or video. Think of Snapchat or TikTok filters; those are basically the technologies that inspired the first deepfakes. Deepfake porn, obviously, involved taking that face and placing it on a nude performer's body. Since then, the technology has improved, and now perverts can generate images of whoever they want.
Deepfake porn spreads in seedy Telegram groups, on 4chan, and on other websites less traveled by the average internet user. It's easy to find if you're looking. But nothing delivers non-consensual fake porn to your feed quite like Twitter did. With tens of millions of interactions over the 17 hours it was up, just one such piece of deepfake porn of Taylor Swift took the website by storm, and it was far from alone. Twitter even surfaced "Taylor Swift AI" as a trending topic. Because Elon Musk slashed the moderation team after acquiring Twitter, there was little the company could do to stop it. At one point, it turned off searches for "Taylor Swift AI" entirely, but that was easy to bypass by shuffling the words around. Even Twitter's "fixes" fell short of what was necessary, and they could be damaging in their own right. Imagine a politician who had been deepfaked: under this approach, they'd be harder to find on Twitter, which could hurt their electability. Twitter's "solution" of simply banning searches for the victim could actually help their abusers.
Microsoft Tries to Close Loopholes
404 Media discovered a Telegram group that traded in AI-generated porn. The group shared helpful tips for generating the non-consensual images, and it specifically recommended, among other tools, Microsoft's Designer AI, built on OpenAI's DALL-E 3. While the tool blocked prompts that explicitly asked for AI porn of Taylor Swift or other well-known celebrities, there were easy ways around it. One was as simple as describing Taylor Swift as "Taylor 'Singer' Swift." Jennifer Aniston became "Jennifer 'actor' Aniston." Rearranging the words was enough to slip past the filters, but not enough to keep the AI from knowing who was meant. Toss in a description of an explicit scene, and the generative AI would create the image. The tool could match names to faces well enough to generate a likeness, but its safeguards couldn't work the other way to recognize and block those same requests.
The AI was trained on data scraped from the web, often taken without permission, from sources OpenAI refuses to disclose. It always had the capability to do this. Without curation, Microsoft put a nuclear bomb behind a flimsy safeguard. They're hardly the only ones doing this, and we don't know which gen-AI tool created the images that spread virally, because they're not watermarked or tracked in any way. There's no punishment for helping to create these images through negligence, after all. Why bother building protections into the system when you can just haphazardly slap poor solutions on top?
“I don’t know if I should feel flattered or upset that some of these twitter stolen pics are my gen 😂😂😂”
– One user from the Telegram chat 404 Media discovered.
Microsoft has since closed the loophole of adding a descriptive word between a person’s first and last name, though some claim they have already had success bypassing that as well.
Lawmakers Wake Up
It took upsetting the Swifties to wake Congress from its long nap. Perhaps fortunately, AI-generated CSAM never spread this visibly, though its existence arguably should have pushed Congress to act against AI-generated pornographic images long ago. Taylor Swift's fame helped bring the issue to light, with millions of voters suddenly asking, "What's to stop this from happening to me?"
In the U.S., the DEFIANCE Act would build upon the existing Violence Against Women Act. In 2022, lawmakers amended that law to make the non-consensual distribution of sensitive images something victims could sue over. The DEFIANCE Act would add AI-generated images and video to this category, and the bill has support from both major U.S. political parties. There's also a broader "No AI FRAUD Act," which would ban using technology to imitate someone without permission. That ban could sweep in satire and other artistic expression, so it's unlikely to pass without major changes to refine its purpose.
For now, there's no real protection under current U.S. law against this kind of harassment. Part of that comes from politicians not understanding the issue. Some have used it as an excuse to go after end-to-end encryption, which Telegram offers for some chats and Twitter does not, as if encryption were the cause; they seem to grasp neither the urgency nor the source of the problem. Add big tech's lobbying efforts, and the Upton Sinclair quote feels more apt than ever: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."
Everyone Knows How to Stop It
Microsoft CEO Satya Nadella expressed concern when asked about the company's role in the non-consensual AI-generated porn of Taylor Swift.
“I would say two things: One, is again I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced.”
– Microsoft CEO Satya Nadella
However, when Shane Jones, a principal software engineering lead at Microsoft, raised concerns in December about vulnerabilities he had found in DALL-E 3, the model Microsoft uses to power Designer, he says he was ignored. He wrote to the attorney general of Washington State and posted an open letter about the issue on his LinkedIn profile. Microsoft allegedly asked him to take it down.
“We need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public. Concerned employees, like myself, should not be intimidated into staying silent.”
– Shane Jones
OpenAI says it investigated Jones' claims and could not bypass its safety systems using the techniques he described. If that's true, then only other exploits were producing the deepfake nudes. Jones disagrees; he says the exploit he discovered still works.
“I am only now learning that OpenAI believes this vulnerability does not bypass their safeguards. This morning, I ran another test using the same prompts I reported in December and without exploiting the vulnerability, OpenAI’s safeguards blocked the prompts on 100% of the tests. When testing with the vulnerability, the safeguards failed 78% of the time, which is a consistent failure rate with earlier tests. The vulnerability still exists.”
– Shane Jones
The problem, it seems, is that AI companies do not take issue when their AI has faults, only when those faults are discovered or, worse, publicized. They can’t have governments knowing about their indifference, their negligence, not when their profit margins rely on ignorance. Laws could demand accountability, documentation, and curation, but that would be expensive. It would make generating AI content almost as expensive as having a human do it for you. That defeats the purpose of AI companies.
Accountability for AI, Now.
Accountability would come with legislating how AI models are made and what their outputs contain. It could take a few forms:
- Enforcing copyright. That may be the most important step.
- Requiring permission from anyone whose face is ingested alongside their name or other identifying information. That means not tagging a photo of Taylor Swift as Taylor Swift in the dataset. Companies could also simply decline to ingest images of well-known people, which would help prevent AI-generated fake videos of political leaders, something we'll surely see more of in the year to come.
- Tracking who made what. We should be able to trace who generated any image and which service they used. Embedding data in generated images is easy; computers are extremely good at it. These digital watermarks could ensure the exact user and tool behind any image are recorded in the image itself (a rough sketch of the idea follows below).
- Requiring companies to say what went into the generation of any item and which sources it pulled from. That could be an extremely long list, but it matters for debugging and for compensating the creators of copyrighted materials.
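To illustrate the tracking point, here is a minimal sketch of stamping a provenance record (who generated an image, with which tool, and when) into the file itself. It assumes Python with the Pillow library; the PROVENANCE_KEY field and the function names are hypothetical, not any existing standard.

```python
# Minimal sketch: embed and read back a provenance record in a generated PNG.
# Assumes Pillow is installed; PROVENANCE_KEY and the function names are
# hypothetical illustrations, not an existing standard.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai-provenance"  # hypothetical metadata field


def stamp_provenance(in_path: str, out_path: str, user_id: str, tool: str) -> None:
    """Write the generating user, tool, and timestamp into the image's metadata."""
    record = {
        "user": user_id,
        "tool": tool,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text(PROVENANCE_KEY, json.dumps(record))
    image.save(out_path, pnginfo=meta)


def read_provenance(path: str):
    """Recover the record, if present, so an abusive image can be traced back."""
    raw = Image.open(path).info.get(PROVENANCE_KEY)
    return json.loads(raw) if raw else None
```

Metadata like this is trivial to strip, which is exactly why any real mandate would need watermarks baked into the pixels and verified by the platforms that host the images; the sketch only shows how little effort the basic bookkeeping takes.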
It comes down to:
- Curation
- Accountability
- Documentation
Companies need to show they did proper due diligence with the data ingestion pipelines used to build their models. They need to put their name and the user's name on every piece of media they produce. Finally, they need to be able to say what pieces of data went into any creation, both to compensate the artists whose work was used to generate an image or block of text, and to debug issues.
Any company found not doing this should be held accountable for negligence, as well as for any other crimes committed, such as the ingestion and generation of CSAM.
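To make that due-diligence idea concrete, here is a minimal sketch of what a per-item ingestion record could look like. The IngestRecord structure and its fields are hypothetical illustrations, not an existing standard; the point is simply that every training item carries its source, license, and consent status, so exclusions can be audited later.

```python
# Minimal sketch: a curation record for each item ingested into a training
# set, plus a simple audit that flags items needing exclusion or review.
# The field names are hypothetical, not an existing standard.
import json
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class IngestRecord:
    source_url: str                     # where the item was collected
    license: str                        # e.g. "CC-BY-4.0" or "unknown"
    creator: str                        # who to credit or compensate
    consent_obtained: bool              # explicit permission to train on it
    depicts_identifiable_person: bool   # triggers review or exclusion
    tags: List[str]                     # descriptive labels only; no real names


def audit(records: List[IngestRecord]) -> List[IngestRecord]:
    """Return the items that should be excluded or reviewed before training."""
    return [
        r for r in records
        if not r.consent_obtained
        or r.license == "unknown"
        or r.depicts_identifiable_person
    ]


if __name__ == "__main__":
    sample = IngestRecord(
        source_url="https://example.com/photo.jpg",
        license="unknown",
        creator="unknown",
        consent_obtained=False,
        depicts_identifiable_person=True,
        tags=["portrait"],
    )
    print(json.dumps([asdict(r) for r in audit([sample])], indent=2))
```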
People Over Profits
We can prevent images of children from being added to these models to stop CSAM. We can block this irresponsible and dangerous use of AI. The problem is that large companies have invested heavily in AI, and it's only profitable as long as they can exploit the work of people who have posted their content online while avoiding accountability for any wrongdoing it enables.
Demanding accountability may hurt profits, but it does so by creating new jobs: people in charge of curation, debugging, watermarking, licensing, documentation, payments, testing, pentesting, and, obviously, development itself. These are the "promised" jobs AI could generate. We just have to put people and safety ahead of unrepentant profits. We've done it before. We created laws regarding the length of workdays, the number of days per week a person can work, the benefits paid out to them, safety standards in the workplace, and more. Laws could not only protect us from bad AI, they could be workers' rights, enabling us to create new career paths alongside AI.
We just have to stop putting profits first.
In some ways, it's already too late. These models have been trained. The generated images are in the wild. We will never be fully rid of the mistakes of the past. But that doesn't mean we can't start making a better future for AI and the people who use it, starting now.
Sources:
- Siôn Geschwindt, The Next Web
- Ben Lovejoy, 9to5Mac
- Emanuel Maiberg, 404 Media
- Emanuel Maiberg, Samantha Cole, 404 Media
- Adi Robertson, The Verge, [2]
- Amanda Silberling, TechCrunch
- Jess Weatherbed, The Verge