AI Leaders Claim They Need Your Labor to Replace You

Reading Time: 6 minutes.
OpenAI claims it needs your labor, a DeepMind co-founder admits AI is a “labor replacing tool,” and layoffs confirm it: AI is here for your work. You’re training your replacement.

Image: a robot emoji behind a sweating human emoji. Text reads, “I copied your homework. Now I have your job.”

AI requires your labor, and it uses that labor to replace you. You’re not getting compensated for your stolen labor or your lost job, either. Even if you do agree to terms and conditions when you use AI, those terms could change, and you cannot withdraw your data once it has become part of the model. At least, that’s how it works now. But things could change. They don’t have to be this way.

Humans are in an abusive relationship with AI. Most humans, that is. Those at the top will continue to profit: OpenAI, which insists it needs copyrighted material despite having no permission to use it, and DeepMind, whose co-founder agrees that AI, as it currently exists, is a “labor replacing tool.” Their product is your work, and you likely didn’t give anyone permission to use it. If you did, you might not even realize it.

Those at the top know AI is bad for the average worker, but they’re not average workers. They own the AI that’s taking your job, and because it’s cheaper to hire a machine than a person, selling that machine is a potentially very profitable business. Using the creations of others to sell work back to other companies is a brilliant scam, and it’s working perfectly.

Just not for you.

It doesn’t have to be that way.

AI Is Here To Replace You

From the mouth of Mustafa Suleyman, co-founder of DeepMind: AI is currently used primarily as a “labor replacing tool.” Asked in an interview by Rebecca Quirk whether AI was “going to replace humans in the workplace in massive amounts,” he responded, “I think in the long term—over many decades—we have to think very hard about how we integrate these tools because, left completely to the market…these are fundamentally labor replacing tools.” AI isn’t here to increase productivity or bolster creativity; it’s being used to replace people, using the output of real people, often without permission or compensation, to do it.

The most telling and important part of his statement isn’t necessarily the phrase “labor replacing tool.” That’s a good headline, an attention-grabber, but perhaps not the main point. The main point is the condition: these tools “are fundamentally labor replacing tools” if we leave them “completely to the market.”

Unbridled capitalism has always been the enemy of the worker. A corporation would pay less than minimum wage if it could. It wouldn’t offer benefits or 40-hour workweeks (which, studies say, are too long anyway, but that’s another discussion). We needed laws to protect workers’ rights, to give them bargaining power, unionization, a 5-day work week (again, too long), and 8-hour days (yeah, that’s too long too). None of that came from corporations being nice. They’re out to make a profit. That’s it. Paying people for their labor gets in the way of profitability, but the law requires it.

Right now, we’re not sure whether existing copyright laws are enough to protect creators from having their work stolen, used to train AI, and then that AI used to replace them. Laws require that humans be paid at least a minimum wage for most work; there’s no such requirement for AI. As a result, without laws to protect us from greedy corporations, they will replace workers with AI. Even if the work is sub-par, as long as it’s good enough to turn a profit, it’ll be acceptable.

AI Needs Your Free Labor

Sketch of a robot claiming, "All your art are belong to us."

I am no artist, but Sketches Pro does make sketching fun.


The U.K. House of Lords’ Communications and Digital Committee is investigating large language models (LLMs), the technology that drives generative AI text tools like OpenAI’s ChatGPT. For years, academics trained LLMs by simply scraping the web. They were testing theories, not releasing products, so it didn’t matter if the resulting models held prejudices, misinformation, biases, or even copyrighted data, since academic purposes are covered under “Fair Use.” However, now that companies are releasing their language tools, collecting data from the people who use them, and selling the product to users and other companies, that approach has become a problem.

Among the most common complaints is the alleged misappropriation of, and profit from, copyrighted materials. A number of authors have sued companies like OpenAI. Even The New York Times has sued OpenAI after finding that ChatGPT can regurgitate its articles and falsely attribute quotes to the newspaper. OpenAI has emerged as a leader in generative text AI, built on a language model whose training data they refuse to disclose. They won’t say what they trained it on, but judging by the copyrighted material and personal information it can output, it’s likely a vast dataset full of material in a legal gray area.

OpenAI, in their letter to the U.K.’s House of Lords, stated that they require copyrighted material. In older versions of ChatGPT, when they did disclose their sources, The New York Times was the third most frequently used source, with the top two being either public domain or Creative Commons. Yet they insist they can’t train solely on public domain works, suggesting those works are too outdated for the needs of modern users. Tell that to the rise in Mickey Mouse and Winnie the Pooh creations, though.

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials.”

– From OpenAI’s letter

In other words, OpenAI needs your copyrighted work in order to replace the people who make copyrighted works. What they don’t say is why they need to get it for free. The answer, obviously, is that taking data for free, rather than paying the people who made it, is far more profitable. But workers’ rights, including the right to own their own creations, may outweigh a company’s “right” to profit.

After all, a company could also make a larger profit from slave labor than from paid labor, but, thankfully, the law requires that they pay us.

Layoffs Prove: AI Companies Are Winning, Humans Losing

Sports Illustrated used AI “reporters” without disclosing that their articles were auto-generated; the company claims it didn’t even know. CNET also used AI, with inaccurate articles as a result. Both companies have also had layoffs. Even respectable publications like Axios have added AI “reporting.” It’s clear: AI is already replacing jobs, especially for people who make a living with the written word. After all, before the modern versions of ChatGPT, which won’t tell us where their data comes from, one of the top sources for OpenAI’s LLM was The New York Times. These models were trained on journalists’ work, without permission. Of course they’re ready to emulate the very journalists they may put out of a job.

2023 saw massive layoffs across the tech industry. According to reporting from The Information, hundreds of Google ad sales employees laid off in 2024 were replaced with AI. Even software engineers are worried about code completion tools, like Microsoft’s Copilot, trained on the GitHub data we’ve handed over for safekeeping for years. Good engineers trained GitHub’s tools so lousy engineers could copy and paste. Thanks to the monopoly GitHub has on public code repositories, many developers need to use it to make their portfolios available to potential employers. We, too, are forced to automate ourselves out of work.

Platforms change. OpenAI pledged they wouldn’t contribute to defense work. People made agreements with them, used their products, and helped train their language models on that premise. Now they’re allegedly in talks with the Pentagon. These companies will pivot, they will ingest data, and they will grow to consume whatever they can.

Goldman Sachs says AI could impact 25% of workers. McKinsey claims 15%, or about 400 million people. Those estimates will only grow with the capabilities of AI. IBM CEO Arvind Krishna is looking forward to it, saying the increase in unemployed workers will give companies more flexibility.

Universal Basic Income Is a Lie

Any time AI companies are pressed on the issue of labor replacement, they tout “universal basic income” (UBI). This would be, basically, welfare: a bare minimum amount of money that might cover some housing or food costs so people without a job can languish in barely-funded poverty. We can’t even get the minimum wage to reflect housing and food costs from this millennium, let alone this decade. The fight for a $15 minimum wage is so old that the figure should now be nearly $25. UBI would never actually support anyone.

It’s also a clever way to offset costs. Companies have long used welfare, food stamps, and other government programs to subsidize their underpaid workforce. Employees with a part-time job at a fast food restaurant who still need food stamps to feed their families are subsidized workers: the government makes up what their paycheck doesn’t cover. Our taxes pay for lousy wages so large corporations don’t have to pick up the slack. UBI would be the same. Corporations that take copyrighted materials without permission, or force people to help train AI as part of their job requirements, will use that AI to replace human workers. Those people will need UBI. The money won’t come from the corporations that replaced them and profited from their work, though; it’ll come from taxes, something corporations are exceptionally good at avoiding.

Fighting for a Future for Humans, by Humans

The actors’ and writers’ strikes last year showed companies that people are willing to fight back against AI. They also showed us that companies will back down; they just don’t have the technology to completely replace labor yet. That means now is the time to fight back, before labor becomes so cheap and unimportant that withholding it is meaningless.

If the purpose of AI isn’t to replace workers, why aren’t AI companies more willing to license, document, curate, and pay for the data they use to generate output? Why are they using human labor only to replace human labor? The primary purpose of AI, right now, is to cut the laborer out of the income stream to make greater profits.

So far, it’s working.


Sources: