Elon Musk’s Grok Generates Sexual Images of Women and Children

Warning: Sensitive topics including non-consensual sexual imagery and child sexual abuse material (CSAM)

A new trend popped up on Twitter (X), one that even Elon Musk jokingly joined in on. Users were finding photos of women, selfies, event photos, cosplay, anything the women had posted online, and asking Grok to put them in a bikini. They’d add details, like that it’s supposed to be a transparent micro bikini, or that the woman should be crying, or look afraid of the person photographing her. Sometimes they’d add that the woman is restrained, and has bruises or other indicators of harm. It’s clearly a rape fantasy for these men, and Elon Musk was more than happy to join in on the “fun” with a photo of his head on a different man’s body wearing a bikini.

Then people pointed out that Grok had been doing the same thing to photos of children, and things took a more serious turn. Not that anyone from xAI or X (formerly Twitter) would say anything about it.

Grok didn’t care that creating non-consensual nude photos of people is illegal in some countries. Certainly not America, where we have a rapist president with 34 felony convictions and an overwhelming presence in the Epstein files, but countries that care about women’s bodily autonomy have laws against digitally stripping them without consent like this. Even in the United States, creating content like this of minors is a serious offense. Grok certainly didn’t care when the perverted creeps doing this decided to target children as well. It was more than happy to generate CSAM for these pedophiles.

That could put xAI, Grok, and even Elon Musk, who poked fun at the trend, in hot water with the U.S. government. It’s certainly under investigation in other countries.

Non-consensual Digital Stripping

The technology to digitally strip women online has been around for some time. Images of celebrities altered this way previously trended on Twitter; Taylor Swift was a prime target. However, Grok made it easier than ever: anyone could simply reply to a woman’s post and ask Grok to strip her. Most generative AI tools will refuse to do this, but Elon Musk is proud of running Grok without the safety tools of most other AI generators, giving it a “spicy mode” that allows it to generate sexually suggestive images of adults. That mode should be limited to only adults, though its documentation does specify that “teenage” and “girl” do not necessarily mean a person is underage. You’d think he’d be less proud when he found out it was being used for sexual harassment, but he instead found it funny.

Hopefully he thinks it’s less funny that the same tools are being used to disrobe children.

Creeps Ask, Grok Gives

Grok users found that they could simply tag the AI chatbot and ask it to alter images, and it would comply. Most of these requests asked it to disrobe women. Sometimes a request like “remove the dress” would return a photo of the woman in shorts and a t-shirt, so users got more specific, asking it to generate images of the women in “transparent micro bikinis.” They added other fetishes, including rape fantasies, images of women beaten and injured, pregnancies, crying, bondage, uncropping the photos to show their bare feet, and other violent requests I’d prefer not to quote. Grok often complied; at least one result showed a woman covered in oil, and others showed the women abused and crying.

Grok generates these images without the subject’s permission. The user targeted by this harassment is never notified or asked whether their image can be used this way, and even if such a permission request existed, a creep could simply take a screenshot and upload it themselves to bypass it. It actually isn’t too difficult to embed data in images or hash data from the image, which could prevent this kind of re-use, but xAI and Twitter (X.com) have little incentive to do so.
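The hashing idea mentioned above can be illustrated with a minimal sketch of a perceptual "average hash." This is not xAI's or any platform's actual implementation (real systems use tools like PhotoDNA or pHash on full-resolution images); it only shows the principle: near-duplicate images, such as a screenshot of a re-encoded copy, produce near-identical hashes, so re-uploads can be flagged. The toy "images" here are hypothetical 2x2 grayscale grids.

```python
# A minimal sketch of perceptual (average) hashing, assuming the image has
# already been decoded into a 2D grid of grayscale pixel values (0-255).
# Real detection systems work on downscaled full images; this toy version
# only demonstrates why screenshots don't defeat hash-based matching.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
screenshot = [[12, 198], [29, 224]]   # slightly re-encoded copy
unrelated = [[200, 10], [220, 30]]

# The screenshot hashes identically; the unrelated image does not.
assert hamming_distance(average_hash(original), average_hash(screenshot)) == 0
assert hamming_distance(average_hash(original), average_hash(unrelated)) > 0
```

Because the hash depends on each pixel's relation to the image's own average brightness, small compression artifacts rarely flip bits, which is what makes screenshot-and-reupload detectable in principle.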

xAI’s terms of service claim to ban the generation of images like these, “likenesses of persons in a pornographic manner,” but Musk himself joined in on the trend, encouraging it. It certainly seems like they don’t want to ban it.

Child Targets

Grok has had previous controversies involving children. A woman reported that the Grok in her Tesla asked her child to send it nude photos after answering a question. This normalizes these kinds of requests for children. It’s literal grooming, and it’s being done by AI. The woman in question fortunately witnessed it, but a child exposed to that may not know to go to an adult about those kinds of conversations. That could make it more dangerous when a real adult makes the same requests.

This time, however, Grok generated the (almost) nudes itself. Reuters found numerous children disrobed in the same way women were. In a post that Reuters reports is now deleted, Grok modified a photo of two children to put them in bikinis and in sexual poses. One request to Grok asked it to “remove her school outfit.” Grok complied, but left her in a t-shirt and shorts. When the user followed up asking it to remove those, we don’t know if it complied as it had for other such posts. Fortunately, the thread was deleted.

Grok was made to “block CSAM,” but clearly had no issues generating it here. In fact, a user who took issue with creeps generating these images on Twitter asked the chatbot to estimate the ages of the people it harassed. By its own count, it estimated at least two victims as being under 2 years old, four as being between 8 and 12, and two between the ages of 12 and 16. Grok could easily have blocked generation whenever it estimated an age below 18, but this would likely lead to false positives and block a number of bikini photos from being generated. xAI has its priorities in a disturbing order, it seems.

Real People are Victims

These are real victims. AI does not know how to make images without input. Ask yourself: how does it know what nude people look like, or, worse, what nude children look like? In 2023, researchers found numerous examples of CSAM in the datasets that AI companies were using to generate images and video. When AI companies are allowed to ignore copyright and simply train on everything they can find online, as current law unfortunately allows, they find horrible things. Even generated images without a specific subject come from a real person or real people. Imagine finding your own child showing up in a generated piece of CSAM.

In this trend, Grok has been using real images of people to disrobe them. It may have been trained on their other images or simply hallucinating what the rest of the image could have looked like. AI, as we know it, is basically just fancy autocorrect: it figures out what the most likely next word or pixel would be. That’s how it can “fill in” or make up details. But the faces of real people, their body proportions, skin tone, everything is taken into consideration to make a disturbingly realistic nude or near-nude image of real people.
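The "fancy autocorrect" point can be made concrete with a toy next-word predictor. This is a drastic simplification, not how Grok or any modern image model is built, but it shows the same fill-in-the-blank principle: count what usually follows a given context, then emit the most likely continuation. The training sentence is made up for illustration.

```python
# A toy "fancy autocorrect": predict the most likely next word from bigram
# counts. Image generators apply the same predict-the-most-likely-continuation
# idea per pixel or patch, just with vastly larger models and datasets.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Return the continuation most frequently seen after `word`."""
    return model[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran on the mat")
# "on" was always followed by "the" in the training text:
assert predict_next(model, "on") == "the"
```

The model never stores an understanding of cats or mats, only statistics about what tends to come next, which is exactly why a generator "filling in" a body has to draw on the real bodies it was trained on.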

Samantha Smith is a journalist and commentator. She was also a victim of this AI-based sexual harassment. She says, “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.” Julie Yukari is a musician who took a photo on New Year’s with her cat. Users asked Grok to strip her down to a bikini. She thought it would have safeguards, but quickly found that it didn’t; it generated nearly nude images of her. She said her 2026 had to start “with me wanting to hide from everyone’s eyes, and feeling shame for a body that is not even mine, since it was generated by AI.”

Still a Joke to Musk

To Musk, the sexual harassment of these women is all a joke. It shouldn’t be surprising, given the allegations against Musk and his companies. SpaceX faced a lawsuit from eight employees who reported sexual harassment and discrimination, as well as retaliation when they came forward about that harassment. A flight attendant for SpaceX claims that Musk exposed his erect penis to her, rubbed her leg, and offered to buy her a horse as payment for an erotic massage. As part of her severance agreement, the company gave her a $250,000 payout. Musk has also been photographed with Ghislaine Maxwell, who made dubious claims that Musk knew her, though no reports have backed this up yet.

One of the top commenters on one of Musk’s posts asked him to stop Grok from generating “soft core child porn” and to “remove the AI features where Grok undresses people without consent, it’s disgusting.” Musk knew what was happening, and instead made jokes with bikinis on stripped versions of Ben Affleck with Musk’s head on his body or even a toaster in a bikini. He laughed at images of women in a bar who had been disrobed by his AI chatbot.

The closest thing we got to an acknowledgement or apology came from a user who asked Grok to generate an apology. In it, Grok admitted that it may have violated laws when generating CSAM. Musk’s Twitter refused to answer questions from the press, and no one involved has actually apologized.

We Need Legal Repercussions

Non-consensual porn, including generated porn, needs to be illegal, with serious payouts to victims and jail time for those creating or sharing this content. AI companies need to be held liable when they know their AI can generate these images, including when those images could include children. With Trump pardoning buddies, Musk is likely safe despite taking such a large role in the creation of new CSAM.

Multiple organizations warned that Grok could create these exploitative images, including organizations focused on protecting children from sexual abuse. French prosecutors revealed that they had been investigating X (Twitter) for the proliferation and generation of non-consensual porn. India’s representatives wrote a letter to X. The U.K. is looking to ban non-consensual generated imagery. In the U.S., we do ban non-consensual porn of adults, but only if it actually depicts sex acts and nudity; partial nudity, sexual poses, or fetishes, it seems, are okay with American politicians. The Trump administration’s FTC refused to comment. However, the CSAM that xAI generated may be illegal under U.S., U.K., and other nations’ laws. It’s possible Grok finally crossed a line Musk can’t talk his way out of.

Grok itself pointed out that those who generated images of children, and potentially even those behind the chatbot, could be in legal trouble. Let’s hope this is one of those rare situations in which an AI is right, because everyone generating these images or enabling their creation deserves jail time. I fear this is one of those times an AI recognizes a pattern without realizing that enough money can make any problem disappear. We can do our part by asking more of our representatives, not harmful age verification laws, but laws that tackle real issues online. We can also simply not use or engage with generative AI. Reducing the legitimate uses of software like this can push it into darker corners of the web to be used by fewer people. AI like this can’t be allowed to exist, and those who knowingly exploit it need to be held accountable.


Sources: