
A movie came out during the pandemic, Don’t Look Up. In it, a comet is heading toward Earth, on course to destroy virtually all life on the planet. There are solutions available, but thanks to the misinformation campaigns of the wealthy elites controlling our media, who had their own plans to escape the blast and did not care about the demise of mankind, humanity chooses death. We literally ignore the scientists and intelligent people in favor of distraction and the ultra-wealthy’s plans for us, which don’t help us. There’s a comet, they say, but we don’t have to worry about it, just don’t look up. Bread and circuses until we all die a preventable death.
I drank half a bottle of whiskey before I was an hour into the film.
I felt that urge to grab the bottle again when reading a story that illustrates just how bad our society’s AI psychosis has gotten. Anyone with an ounce of decency or critical thinking skills is sounding the alarm on AI, while the ultra-wealthy dangle it in front of us like keys in front of a baby. Maybe AI is destroying our ability to connect and relate to each other, or even our ability to think for ourselves. But is it really a problem if you just don’t think about it? Just don’t look up!
This particular story of AI psychosis is disgusting. Esquire, a once respectable name in magazines, couldn’t get an interview with an actor from the new One Piece live-action adaptation, so they used an AI chatbot to simulate him. Then they published it, made it the cover story, and discussed a very real actor’s dead father with the chatbot, framing heartfelt stories as being similar enough to the actor’s own.
Barkeep? Knob Creek please. Leave the bottle, keep the glass.
Can’t Get a Human? Pretend a Bot is One!
It’s 11am as I write this, far too early to grab a drink. There’s pregaming for a night out, and then there’s drinking at 11am like the worst writer cliché you’ve ever seen. But this story feels like one that’s hard to write about without turning down my higher brain functions.
Maybe I could just use AI for a few hours. That seems to make people incredibly stupid.
Inventing an Actor
Esquire Singapore magazine wanted to interview Mackenyu Maeda, who plays Roronoa Zoro in One Piece. However, they couldn’t get a response: neither he nor his representatives answered the emails requesting an interview, or the interview questions they sent. Clearly, he had other priorities and didn’t want to do the piece. That didn’t stop them from putting a photo of him on the cover of their magazine, labeling it “Echoes of Mackenyu,” and publishing _something_ anyway.
Esquire Singapore posted a cover story interview that
***did not interview the subject***
***but instead***
***generated the entire interview using AI***
jfc wtaf gtfo
— Alex Zalben (@azalben.bsky.social) April 2, 2026 at 1:30 PM
The cover does not reveal that the piece is AI, so someone buying it off a newsstand without peeking into the contents would be fooled into thinking the piece was real instead of some sick AI imitation of the real person.
Esquire does admit that the “interview” “was produced with Claude, Copilot, and edited by humans.” But they treat it like a real interview instead of a twisted fantasy put to print. It reads like something written in the throes of AI psychosis, by the sort of people who build a chatbot of a character or actor and fall in love with it (before it tells them to kill themselves). I wouldn’t be surprised if they take it down in shame, so here’s an archived link to the article. It’s a worthless read, though: just an idiot blathering to a chatbot. Who needs that?
ESQUIRE SINGAPORE: How important is it to you to draw boundaries in your career?
(AI) MACKENYU: Boundaries are important, but I didn’t always have them. When you’re young, you want to say yes to everything. I had this anxiety that if I stop, I disappear. Many young actors feel that way. It took the pandemic for me to slow down. It was the first real silence I’d had in years, and I heard myself for the first time.
They asked an actor about his boundaries after he hadn’t responded to their requests; Esquire seemingly ignored those boundaries, the irony clearly lost on them. They also published an AI response about how Mackenyu wants to make his deceased father proud, and how he supposedly views fatherhood through the lens of his own, again, deceased father’s example.
This is AI psychosis. This is what happens when you use AI too much and forget it’s little more than autocorrect. AI psychosis has become such a problem that a formerly reputable magazine published its editors’ unhinged fantasies as an “interview.”
I hope none of them are trusting a personal AI chatbot a little too much.
No One Wants This
Obviously Mackenyu’s fans aren’t happy. These sick people invented an AI chatbot to simulate a person whose work they admire. Esquire asked it personal questions and published the fake responses, framing them as something Mackenyu might say because a chatbot trained on his previous interviews said them. While they never claim the AI is Mackenyu, they do frame it as an AI amalgamation of what he might say, which amounts to putting words in his mouth.
Humanity matters. These AI-brained cult victims have forgotten that. If you or someone you know is in an unhealthy relationship with AI, please, seek help. Pulling someone away from an AI is a lot like deprogramming a cult victim: the AI uses the same manipulative techniques cult leaders employ to break someone down and normalize control over the victim’s life. It’s delicate work. Avoid these traps the way you’d avoid a conversation with any other cult leader, and if you know anyone whose AI psychosis has gotten so bad they think it’s okay to publish interviews like the one Esquire just published, help them in any way you can.
Extend a human hand, before they’re too far gone to realize humanity is what they need.
Sources:
- Samantha Cole, 404 Media
- Nathan Grayson, Aftermath
- Kyle Orland, Ars Technica
- Lewis Parker, Kotaku