CNET Tried to Replace Journalists With AI. It Didn’t Go Well


[Screenshot of a CNET article, "What Is a Credit Card Charge-Off?", bylined "Written by CNET Money, Edited by Jaclyn DeJohn," with the disclosure: "This article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff."]

Who can watch the watchmen? It's a question dating back over two thousand years, likely further still. Who will speak truth to power? Who will keep power in check? Who will expose the corrupt and abusive? The answer has been the press. Journalists don't just report the facts; they give them context. Apple's marketing department may say they have the best smartphone of the year, but they're a bit biased, aren't they? A politician may claim that natural gas is the best way to power their state without revealing that they have investments in natural gas and that a natural gas company is financing their campaign. A police report may say officers had to use force against protestors, but it's a journalist who shows you it was used against teenagers with their hands up.

The press keeps power in check. It's vital to have a human at the helm who just wants to guide their readers to the truth. I'd trust an AI president or an AI CEO before I'd trust AI journalists. And yet we find ourselves handing the positions that most need human judgment over to AI.

CNET recently ran an experiment, using AI to write a number of articles. They ended up fixing more than half of them for factual errors. Who will watch the watchmen in a world where AI just lets the lies through to print?

A Missing Truth Filter

Police often send reports to journalists. If those reports contain errors or falsehoods, that doesn't matter to the police; most of these reports are written to paint the police in a good light. It's the job of journalists to find the inconsistencies, track down all sides of a story, and report the truth. It doesn't always go to plan. Journalists are, like all humans, fallible. However, a fallible filter is still better than no filter.

AI doesn't filter for the truth. In fact, AI doesn't have a concept of "true" or "false." That's ironic, because computers are still based on binary processors: 1s and 0s, truths and falsehoods. Yet a computer cannot easily tell you whether something is true without a source of truth. It's a machine: data goes in, a function is performed, and data comes out. At its core, it's little more than a complex string of math problems. Think of it like a machine that sorts coins by size. You dump coins in and they always come out in the same slots. If you put in a Canadian quarter with your U.S. quarter, it'll land in the same slot. The machine won't tell you that one quarter isn't legal U.S. tender; it'll just tell you that you have 50¢.
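To make the analogy concrete, here's a toy version of that coin sorter in Python. The diameters are approximate and the matching tolerance is an assumption for illustration; the point is that the machine classifies by size alone and has no notion of what's legal tender.

```python
# Toy coin sorter: classifies purely by diameter, with no concept of
# value beyond which slot a coin falls into. Diameters are approximate;
# the 0.5 mm tolerance is an assumption for illustration.

COIN_SLOTS = {  # slot diameter in mm -> value in cents
    19.05: 1,   # US penny
    21.21: 5,   # US nickel
    17.91: 10,  # US dime
    24.26: 25,  # US quarter
}

def sort_coin(diameter_mm: float, tolerance: float = 0.5) -> int:
    """Return the value of whichever slot the coin falls into."""
    for slot_diameter, value in COIN_SLOTS.items():
        if abs(diameter_mm - slot_diameter) <= tolerance:
            return value
    return 0  # coin rejected

us_quarter = 24.26
canadian_quarter = 23.88  # close enough in size to land in the same slot

total = sort_coin(us_quarter) + sort_coin(canadian_quarter)
print(f"{total}¢")  # 50¢ -- the machine never asks which coin is legal US tender
```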

Without a person who can take in far more information to ensure the truth ends up in print, politicians, police, and marketing departments have no filter. From their mouths to the front page of your favorite news blog. (I won’t be offended, I know it’s not Leaf and Core.) Media literacy means trying to find bias in journalism by reading stories from multiple sources. But what if they’re all the same story, reported by, basically, the same source?

CNET’s Automated Blunder

This month, it came to light that CNET had been using AI to write articles under the "CNET Money Staff" byline. The articles were mostly Search Engine Optimization (SEO) pieces: articles that cover common questions, written to rank highly in search results. The title might be a common search phrase, with the body answering the question, perfect for driving traffic from search engines like Google to a website. CNET was using AI to churn out these low-stakes articles and drive traffic to its site.

The blunder? More than half of the articles from CNET's AI contained errors.

CNET published 73 articles written by AI. Initially, the byline implied a human author; CNET has since updated it to note that the articles were written by AI. CNET says editors dictated the creation of the articles and edited them before publication, but more than half still contained inconsistencies or falsehoods. That included garbled explanations of basic math. AI doesn't really know the meaning of words; it just knows how to put them in particular orders. When it wrote about interest earned on $10,000, it suggested that a person would accrue $10,300 in interest, rather than the actual amount accrued, $300; it confused the ending balance, principal plus interest, with the interest itself. It also mixed up APR and APY, among other mathematical and factual errors. The AI can't be trusted to explain the meaning of something, only to repeat sentences it has seen before. Often, when it shuffles words around to make something that seems unique, the original meaning is lost.
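For reference, here's that arithmetic as a minimal Python sketch. The 3% rate is an assumption implied by $300 of interest on $10,000; the last lines show why APR and APY aren't interchangeable.

```python
# The math CNET's AI got wrong: interest earned vs. ending balance.
# The 3% annual rate is an assumed figure implied by the article's numbers.

principal = 10_000
annual_rate = 0.03

interest = principal * annual_rate       # $300 -- what you actually earn
balance = principal * (1 + annual_rate)  # $10,300 -- principal plus interest

print(f"Interest earned: ${interest:,.2f}")  # $300.00
print(f"Ending balance:  ${balance:,.2f}")   # $10,300.00

# APR vs. APY: APR ignores compounding within the year, APY includes it.
apr = 0.03
periods = 12  # monthly compounding
apy = (1 + apr / periods) ** periods - 1
print(f"3% APR compounded monthly is an APY of {apy:.4%}")  # ~3.0416%
```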

Can I Copy Your Homework?

Another issue is a common one with any AI "creation": the articles were rife with plagiarism too, spelling potential legal trouble for CNET for approving them. AI cannot generate content on its own; it doesn't have the capacity for inspiration. It can only regurgitate what it ingests. There's no artistry in AI creations. In this case, it used the work of real journalists. As AI has done with code and even visual art, that runs the risk of copying something a little too closely.

“How to avoid overdraft and NSF fees”

“Overdraft fees and NSF fees don’t have to be a common consequence. There are a few steps you can take to avoid them.”

– From a CNET AI-written article

“How to Avoid Overdraft and NSF Fees”

“Overdraft and NSF fees need not be the norm. There are several tools at your disposal to avoid them.”

– From an article in Forbes Advisor

Jon Christian of Futurism lined up other examples in his article about the AI's use of other people's work. They were numerous and often came down to little more than a word or two swapped out for a synonym. Even the simplest bots can do synonym replacement, using a thesaurus to replace words while leaving the sentence structure intact.
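Here's a minimal sketch of what that kind of synonym swapping looks like. The tiny hand-picked thesaurus is hypothetical, but the technique is the same: swap a few words, keep the structure, call it new.

```python
# Naive article "spinner": replace words with synonyms from a lookup
# table. The thesaurus below is a hypothetical hand-picked example.

THESAURUS = {
    "avoid": "dodge",
    "common": "frequent",
    "steps": "measures",
    "few": "handful of",
}

def spin(sentence: str) -> str:
    """Swap each word for a synonym when one is available."""
    words = sentence.split()
    return " ".join(THESAURUS.get(w.lower(), w) for w in words)

original = "There are a few steps you can take to avoid them."
print(spin(original))
# "There are a handful of measures you can take to dodge them."
# Same structure, same meaning -- a word swap, not original writing.
```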

A Cash Grab

Since 2020, Red Ventures has owned CNET. The company is known for SEO-driven content laced with affiliate links: articles that push readers toward signing up for a service or buying a product. A popular version of this, especially on financial blogs, is credit cards, which can earn anywhere between $250 and $900 per card signup. Red Ventures also owns The Points Guy, Bankrate, and CreditCards.com, all of which have also used these AI-written articles, made to drive traffic and get readers signing up for services for a commission.

“AI lowers the cost of content creation, increasing the profit for each click. There is not a private equity company in the world that can resist this temptation. The problem is that there’s no real reason to fund actual tech news once you’ve started down that path.”

– Mia Sato and James Vincent, The Verge

CNET had been using Wordsmith, nicknamed "Mortgotron" internally due to its focus on writing formulaic articles about, you guessed it, mortgages. CNET had used Mortgotron for the past year and a half; last fall, staff were informed it would be used elsewhere. Others have used Wordsmith as well. AP News, a well-respected source of news, has used it since 2014 to write simple updates about earnings reports, and since 2016 for simple sports news. It's an easy choice for companies: they can churn out more articles, focus on SEO, and drive traffic to their sites, their ads, and their affiliate links. The practice is known as "SEO farming," where sites drive traffic using articles "written" specifically to appeal to search engines, then make money from the users funneling into the site.

“For these [SEO] farms, I do not expect that people really read it. As soon as you get the click, you can show your advertisement, and that’s good enough.”

– Fabian Langer, AI Writer founder

[Image: a stock ticker showing BuzzFeed's skyrocketing stock]

The stock market loves AI replacing employees, but will readers and actual workers be as happy?

Last year, CNET announced multiple rounds of layoffs. BuzzFeed, which laid off 12% of its staff late last year, announced this week that it would use OpenAI's technology to produce content. BuzzFeed's stock shot up over 200% on the announcement. Employees fear more layoffs in the future. With AI churning out articles on the cheap, who shouldn't be worried?

You Can’t Trust a Bot

Despite all of this, CNET is sticking with its AI, and the practice could spread across the industry. CNET has announced it will use the tool again after a momentary pause. AI is cheaper than humans, and journalistic integrity doesn't pay the bills, though the company could face legal issues over bad advice or, worse, plagiarism. Furthermore, if AI extends to writing articles about the tech industry, or tech reviews, you'll have to wonder whether you're reading an actual review or just PR for a company. I don't know, who do you trust more to review this Kensington trackball mouse I've been testing: Kensington, who needs you to buy it, or me, who needs nothing from you? You're going to have to ask yourself that question about many articles in the future: did a human write this, or an AI on behalf of a company?

