AI Workers Are Afraid to Speak Up

[Image: A robot face in front of a crowd of faces with zippers over their mouths. The text reads “Don’t worry, everything is fine.”]

Google spent the early part of the 2020s firing AI ethics researchers. After they questioned racial bias, sexism, and ecological impact, Google showed them the door. The firings were, supposedly, unrelated to their criticism of AI or to how that criticism conflicted with Google’s own work. Google claimed it made internal changes afterward, but openness to criticism of AI never made a comeback. It’s not as though internal critics were saying Google should scrap AI, just that it should produce AI ethically, with people’s safety in mind. That may have been too much for Google.

Other AI companies haven’t been keen on that message either. They’re in a race to build the largest models instead of building models that are smaller, more ecological, more efficient, less likely to plagiarize, and better in their output. But that would require more upfront costs: investment for longer-term gains. Modern investors don’t actually like investment; they want short-term profitability and infinite growth. That’s led to a culture of bad practices, layoffs, and toxic workplaces in the tech industry. Companies have seemingly decided that it’s better to take advantage of AI’s lawlessness now and beg forgiveness later.

Now AI researchers have spoken out about the need for stronger whistleblower laws to protect them when they do raise their concerns. Because, from impersonation, deepfakes, racism, sexism, homophobia, transphobia, ecological disasters, misinformation, election interference, and slavery to abusive material involving minors, there’s a lot they could be speaking up about.

Instead, large companies have a gag order on them.

Conditions in AI Spark Need for Whistleblowers

Current and former AI workers from Google, OpenAI, and Anthropic have released an open letter asking for a “right to warn” about potentially dangerous AI practices. Their open letter asks companies to address four primary concerns:

  1. Non-disparagement agreements cannot prohibit discussion of “risk-related concerns,” and companies cannot retaliate against those who raise them.
  2. Companies will have a “verifiably anonymous” reporting process for issues, so workers can speak about problems without the fear of repercussion.
  3. Alongside that will be a “culture of open criticism,” removing the fear of speaking up by encouraging employees to “raise risk-related concerns.”
  4. No retaliation for those who share “risk-related confidential information after other processes have failed.”

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.”

– From the Right to Warn Open Letter

These aren’t outrageous demands. In fact, in any industry where public health and safety are concerned, which now includes AI, they’d be standard. Companies have a financial incentive to ignore any issues that could interfere with short-term profits. We need external, government-funded oversight of AI, that much is certain. In its absence, however, we need to ensure employees can speak up about issues before they become safety problems for consumers. AI has shown racial and gender-based bias, could lead to more car accidents involving pedestrians with darker skin tones, will be involved in military actions, drives content recommendations, and, with generative AI, can now produce non-consensual porn of anyone, even children. The time for AI regulation has long since passed, but the best time to start improving these conditions is today.

OpenAI’s History of Anti-Reporting

Unfortunately, some AI companies disagree. Companies looking to grow large enough, fast enough, to outrun copyright claims and other regulation have a financial incentive to build as quickly as possible, without regard for safety. If that were a driving factor for a company, it would work hard to make sure employees couldn’t speak up, even after they leave.

OpenAI made employees leaving the company sign a non-disparagement agreement, forcing them to give up their equity in the company if the company deemed their disclosures “disparaging.” Some background is important here. Startups often give employees shares of the company instead of compensation that would match what they could make elsewhere. These shares can be worth quite a bit of money if the company ever goes public, holds tender events, or sells itself to another company. Early employees of OpenAI, for example, could be looking at millions of dollars in shares, making the years of toiling for less pay worth it in the long run. Companies grant these shares, but employees can’t sell them until specific liquidity events occur or the company goes public. Companies don’t take back shares. Usually.

“If you have any vested Units and you do not sign the exit documents, including the General Release, as required by company policy, it is important to understand that, among other things, you will not be eligible to participate in future tender events or other liquidity opportunities that we may sponsor or facilitate as a private company.”

– Leaked NDA from OpenAI provided to Vox

OpenAI’s legal filings gave it “near-arbitrary authority” to retain and repossess any shares it deemed necessary, allowing it to be “aggressive” with former employees. Sam Altman, in a letter to employees, said he was “genuinely embarrassed” by the treatment of former OpenAI employees, and said the company would remove the clause in the NDA allowing it to take back shares if an ex-employee didn’t sign or later spoke out against the company. However, leaked documents obtained by Vox showed Sam Altman must have known about the clause in advance. The company has refused to answer questions about amending previously signed NDAs to protect former employees.

OpenAI seemingly cared so much about keeping its potentially problematic AI-development methods secret that it was willing to threaten employees with losing hundreds of thousands, potentially millions, of dollars in previously promised compensation. That’s not just a culture of silence, it’s one of fear. With the dangers of AI becoming more apparent every day, it’s becoming clearer that oversight and reporting should be top priorities. And news stories about AI keep making the industry look even worse than we thought.

Kenyan Tech Employees Compare Working for AI Companies to “Modern Day Slavery”

“In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern day slavery. Any trade-related discussions between the US and Kenya must take into account these abuses and ensure that the rights of all workers are protected.”

– From the workers’ open letter to President Biden

Kenyan tech employees wrote an open letter to President Biden last month, asking for more accountability and pressure on companies to treat workers fairly. Nearly 100 workers from tech companies contracted by OpenAI, Facebook/Meta, and Scale AI spoke out about what they call “modern day slavery.” Employees of companies like these have previously reported working for less than $2/hour on projects that left them feeling completely broken. One tech employee, who spent his days combing through vile descriptions of what he called “graphic sexual violence,” stated that “It has destroyed me completely.”

“Our work involves watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day. Many of us do this work for less than $2 per hour.”

– From the workers’ open letter to President Biden

Workers report conditions that have led to PTSD, with little to no support from their companies. They looked at hundreds of passages a day for between $1 and $2/hour, depending on how many of these vile passages they could make it through. In some cases, a company contracted by OpenAI had employees collect images of violence, sexual violence, and even sexual violence against children. That company ceased its relationship with OpenAI over the request. OpenAI claims the collection of illegal images of child abuse was the result of a “miscommunication.”

These companies are paying horrible wages that amount to a few dollars a day for long hours of truly damaging work. All to turn that work into billions in profit from generative AI, which still has shortcomings anyway, thanks to misguided and secretive data ingestion practices.

“We need these jobs, but not jobs at any cost.”

– From the workers’ open letter to President Biden

Workers Need More Protection

Protecting workers is the right thing to do. Obviously we shouldn’t be causing lasting psychological harm to people for the sake of auto-generating some photos… especially photos of non-consensual nudity. Still, we do need this work. AI will become a daily part of our lives, and while better data selection would reduce the need for moderation, we’ll still need workers to review the data going into and coming out of models. As a result, we have to do everything we can to make sure AI is created in ethical ways that reduce harm, not cause it.

Currently, AI is practically a harm-generating machine. It’s using people’s copyrighted work without permission, ingesting data from the most vile corners of the internet, and forcing people to do horrendous jobs that cause lasting psychological trauma for a few dollars a day.

OpenAI CEO Sam Altman’s favorite movie is reportedly Her. This came up recently when OpenAI’s new voice sounded suspiciously like Scarlett Johansson, who starred in the film. The male lead in that movie, played by Joaquin Phoenix, worked at a company that created heartfelt “hand-written” personal letters. Oddly enough, that points to a fantastic way to create datasets for generative AI without including problematic data. The old saying goes that if you put 100 monkeys in a room with typewriters, they’ll eventually write Shakespeare. It’s an exaggeration to make a point about randomized input. But if you put 100 humans in a room and ask them to create writing samples in their own personal, natural tone, you’ve found a brilliant way to build generative AI without sampling the entire internet, a way to train models without needing to tag the most vile passages of text online. You can curate inputs better, tune outputs for specific needs, use smaller datasets and less electricity, and deny models the means to create horrific output.

But we’re not focused on making AI better. We’re focused on making it more profitable. That leads to silencing would-be whistleblowers with threats to their livelihoods, and to slave-like labor conditions. We need people to feel free to speak up about AI, because the only people who care about these problems are the ones not concentrating solely on profits. Since our lazy, outdated, and ignorant government won’t step in, that means freeing people to speak up for themselves. Unfortunately, getting protection for workers whose only interest is protecting people seems near-impossible as well. Unionizing will be the hero of the tech industry. No one will hand workers their freedoms; they have to be won through collective action. Governments and corporations are corrupted by the need for profit; we have to look out for each other ourselves.

That starts with freeing everyone to speak up about risks to people’s safety. Never has that been more necessary in the tech industry than right now.
