Sports Illustrated Allegedly Used Fake AI Authors for Stories

Reading Time: 6 minutes.
A doodle of a robot, looking at a volleyball. Text reads "Game Sphere Acquired."

Did you know? Volleyball is tough to play without a volleyball!

Just earlier this week, I wrote about someone who made fake profiles for speakers at his developer conference, possibly to make it seem like more women were speaking at the event. But what happens when a company makes up AI-generated “journalists” to spam list articles with affiliate links? How are they creating these fake authors? Are they, like others have been accused of doing, making their organization seem more diverse by casting marginalized groups as their fake journalists? Are they putting journalists out of work with these fake articles? Are they stealing the work of the very people they’ve laid off?

What do you do when the labor is free because we don’t yet enforce copyright law against companies that feed intellectual property they have no rights to into their large language models (LLMs) and image generation models? How can journalism survive when corporations can claim no liability for what the algorithms “write?”

Sports Illustrated deceived readers. They claimed articles were written by human beings with interests like outdoor activities and food. Instead, they were AI-generated personas. The articles themselves may have been AI-generated as well. The articles attributed to these fake profiles certainly read like they were written by a robot, and Sports Illustrated removed them as soon as journalists began asking about them. The company claims AI didn’t write the articles, but they already lied about “who” wrote them. Why should you trust anything else they have to say?

AI has something called “hallucinations.” It’s the term we use when an AI produces false or fabricated claims because it doesn’t have the details to state a fact. It’s when an AI answers a question with bullshit, essentially. So how can you ever know an article written by AI is true when AI sees no difference between truth and lies and companies have no obligation to keep their AI—and its stolen work—honest?

It’s the death of journalism, and people without integrity or a stake in journalism are in charge of taking the people out of the stories in the name of profit. In a world without accountability, flooded with AI “journalism,” can you ever trust anything you read?

Fake Humans, “Real” Stories?

What a tangled web this is. Let’s paint a dystopian picture for you. Magazines and many other longstanding and respected sources of news are often owned by marketing companies now. Journalism had a price, and people just weren’t willing—or able—to pay. Free websites, expensive subscriptions you can’t combine, and ads killed journalism. Corporations happy to gobble up web real estate, however, were happy to pay up. Take Sports Illustrated. Once a proud weekly magazine for sports news, it’s now mostly online, with one print issue a month. It could be worse, actually; many publishers haven’t been able to keep up even a monthly schedule. That’s in part thanks to The Arena Group, which owns Sports Illustrated and a number of other news properties. According to communications between news outlets and a spokesperson for The Arena Group, a third party, AdVon Commerce, was contracted to write stories for Sports Illustrated. AdVon specializes in “ML/AI Solutions for E Commerce.”

You probably see where this one is going already.

AdVon, through these fake AI-generated avatars, published a number of “rankings” articles. You’ve likely seen stuff like this all over the web. You search for “Best Headphones” and you get one of these articles with the “Top 5 headphones,” a small blurb, some pros and cons, and a link to buy. The link is an affiliate link, and the article may have been generated by a bot or, if you’re lucky, hastily written by a human for affiliate link pay. Affiliate links, for those who need a recap, are links to websites where you can buy something. The author of the article gets a small portion of any sales that result from linking the new customer to the site. In the case of bot-generated content, those funds are just going to a company somewhere.
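To make the economics concrete, here’s a minimal sketch of how an affiliate payout works. The price and commission rate below are invented for illustration; real affiliate programs set their own rates per product category.

```python
# Hypothetical affiliate-link math: the $25 price and 3% rate are made up
# for this example, not taken from any real affiliate program.

def affiliate_payout(sale_price: float, commission_rate: float) -> float:
    """Return the referrer's cut of a single sale, rounded to cents."""
    return round(sale_price * commission_rate, 2)

# A $25.00 volleyball at a 3% commission pays the publisher $0.75 per sale.
print(affiliate_payout(25.00, 0.03))
```

Pennies per sale, but multiplied by the traffic a high-ranking listicle pulls in, it adds up fast when the “writing” costs nothing.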

AdVon claims the articles were written by humans. But would a human describe volleyball as a sport that “can be a little tricky to get into, especially without an actual ball to practice with?” Probably not. But that’s what “Drew Ortiz” wrote for a Sports Illustrated listicle on volleyballs. That’s an archive link; sorry, I can’t link to the real story anymore. As soon as Futurism went sniffing around, The Arena Group was quick to wipe the stories submitted by AdVon from their illustrious website. They had apparently already been investigating these stories themselves.

The Arena Group says AdVon assured them the articles were human-written. But how many humans are confused about needing a ball to play volleyball? Let’s consider Drew Ortiz.

Drew Ortiz, Sora Tanaka, etc.

“I was like, what are they? This is ridiculous. This person does not exist.”

– An anonymous source who spoke with Futurism

The people above? They don’t exist. Futurism discovered that the people who supposedly wrote these articles don’t exist. The profile photos associated with their fake stories came from a website that generates human faces using AI. Some of them are a bit “uncanny valley,” but, as a thumbnail at the end of a story, you might be fooled. The Arena Group and AdVon want us to believe that while their authors were fake, their stories were written by real humans. But that raises further issues.

First, how do you trust a company that already lied to you once?

Secondly, who wrote these, then? Taking human-written articles and attributing them to AI-generated “authors” could be problematic if the people who wrote those stories didn’t have a say in how their posts were published. Did they agree to write them? If so, where are they? This story broke earlier this week; why is the only response a denial, backed by nothing more than the word of a company that has already lied to us? They created fake people to “write” stories, made money from affiliate links, and can just as easily delete the stories, delete the “people,” and claim everything’s fine? Where’s the culpability? Where is any reason to trust anyone involved here?

The Danger of Fake Stories

AdVon claims that their articles were written by boring, confused, ignorant humans, not AI. The Arena Group and Sports Illustrated seem to be standing by that claim, so far. Still, the dangers of using AI in this capacity go far beyond a few terrible articles.

The cost of using AI to write an article is, on the surface, quite low. Sure, it’ll use more electricity than you’d expect, thanks to the bloated LLMs behind it, but those costs are low, for now, thanks to the burning of fossil fuels. AI is terribly messy for the environment, but cheap for corporations. We already know how that goes. If AI can write an article for less than a human, companies will pick the shit article written by AI over the human one. Consider these sloppy listicle articles. They needed no thought, no care. Each was just a few long paragraphs to game search engine optimization (SEO) and lead users to the page, followed by a bunch of affiliate links. If people buy their volleyballs or knee pads from them, they’ll make money.

What happens when every search result online is this auto-generated crap? 1,000 words to clear the “3 minute reading time” minimum, then nothing but fake blurbs and affiliate links. All garbage, because SEO, like the AI that generated these articles, is pretty stupid. It’s not going to sit there and realize a sentence is a garbled mess about confusing volleyballs. But you will, if you try to search for the best volleyballs for professional play. You’ll find an article directing you to whatever generates the most sales on Amazon, not what might work best for you. These articles will generate profit at little to no cost. They’ll flood the web.

Then you have other problems. CNET found that a majority of their AI-generated articles had serious flaws in them. Some of those flaws were in financial advice. Who’s liable when a computer gives you horrible financial advice because it doesn’t “know” the answer to the prompt and just made something up? Will you be able to sue the company? Which one: the one that made the AI, or the one that used it? You could be financially ruined, and it’ll take the courts years to figure out who’s liable, because we still don’t have good laws to protect people from bad uses of AI.

Of course, there is some hope. These companies are using AI-generated stories for profit. That means they’re taking intellectual property they may not have the rights or permission to use, and using it to generate responses. If copyright law forces AI companies to use only IP they have permission to use, as it should, then many of these articles will have to come down. That’ll leave a huge hole in the internet. Broken links, broken SEO. It’ll be worse than Google was during the Reddit blackout, because by then it’ll be a huge chunk of the web.

AI articles do more damage than just taking jobs away from humans who need them for food and shelter. They could disrupt entire industries and even the entire web. They have to be contained, called out, and controlled now.

Stolen Work and Cheap Profits

Text written into static with a robot emoji. Text reads, "If you put 10,000 humans in a room, eventually they will write Shakespeare."

Sports Illustrated is looking through their articles, doing their own investigation of the possibly AI-generated content that appeared on their pages. The web archive of their page is full of fake profiles, writing terrible listicle articles trying to sell you on whatever will generate them some money.

The thing with AI-generated content is that it is all human-generated. All AI content came from human minds first. Human-written websites, articles, posts, tweets, helpful answers to questions, all taken, usually without the author’s permission, to generate collage sentences that seem to make sense. A human writes for a human audience, to be understood, to get an idea across. AI just writes to complete the sentence along the most likely path. It’s autocomplete. All AI work is just borrowed human work. But we’re not getting paid for it. When a company takes human-written content, mashes it up in their LLM, and spits out “new” content, they’re taking the work of humans and claiming it as their own. The humans aren’t getting paid, but the corporation running the AI is. They’re making money from every affiliate link, every ad. It’s a world where only the C-Suite and the company make money from the work of real people, who never see a dime of it.
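That “autocomplete” point can be sketched in a few lines. This toy model (the corpus is invented for illustration; real LLMs use learned weights over tokens, not raw word-pair counts) builds a table of which word follows which in human-written text, then always emits the most likely next word:

```python
# Toy "autocomplete": count word pairs in human-written text, then always
# pick the most frequent next word. Real LLMs do this at vastly larger
# scale, but the principle is the same: likely continuation, not truth.
from collections import Counter, defaultdict

corpus = "the ball is round the ball is light the net is high".split()

# Tally which word follows which in the human-written source.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str, steps: int) -> list[str]:
    """Greedily extend a sentence by the most likely next word."""
    out = [word]
    for _ in range(steps):
        if not following[out[-1]]:
            break  # no human ever wrote a word after this one
        out.append(following[out[-1]].most_common(1)[0][0])
    return out

print(" ".join(autocomplete("the", 3)))  # "the ball is round"
```

Every word the model “writes” was lifted from the human corpus; the model itself contributes nothing but statistics, and it has no notion of whether the result is true.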

People worry about some kind of AI uprising, where humans are slaves to AI. But what do you call it when humans lose their salaries and their labor is used, without permission, credit, or pay, for others to profit from? You’re already in the dystopia; it just doesn’t have a grainy filter over it.

I’m sure AI will add it in post.

