No.
Come on.
Seriously?
Google suspended the engineer because he leaked company information. According to the engineer, he had a moral and religious obligation to do so. After all, if the machine is sentient, it must have a soul. It has consciousness, and therefore it has rights, he claims.
But is this the AI we have to fear? No. The chatbot was trained to sound human. It even received, as part of the dataset used to train the robot, lines from books and movies where an AI had to convince a human of its sentience. That’s right, it was spitting out regurgitated movie lines.
So if sentient AI isn’t what we have to fear, what is?
Racist, sexist, homophobic, transphobic, ableist AI.
The kind of AI that continues patterns of systemic bias, obfuscates them, and paints them as objective. The kind of AI we have to fear is the AI that’s already here. And while Google merely suspends those talking about sentient AI, it has a history of firing the ones who complain about AI that is actually problematic.
Sentient AI? ❌
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. https://t.co/hGdwXMzQpX pic.twitter.com/6WXo0Tpvwp
— Tom Gara (@tomgara) June 11, 2022
Blake Lemoine is a senior software engineer at Google whose work focuses on AI ethics. Margaret Mitchell, the former co-lead of Ethical AI at Google, had positive things to say about Lemoine in an interview with the Washington Post, describing him as “Google’s conscience” who “had the heart and soul of doing the right thing.” He uses his Medium blog to discuss ethics and systemic racism, urging people not to contribute to the latter through inaction. By all accounts, he seems like a well-intentioned guy, eager to make a difference.
Then he came to believe that an AI chatbot might be sentient, and therefore have a soul, and felt the need to defend it.
I think it’s important to recognize what is happening at its core here. A man believes he’s doing the right thing to defend a person, not just a machine, and he is willing to risk his reputation and career to do so.
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
– From the chat transcripts. Please note that LaMDA does not have a family, but when people talk about what brings them joy, their most common answers are “friends and family.”
Now, I’ve read over the transcripts, and I think you should too. They are fascinating. The AI’s output is full of repetition and shows no ability to create anything genuinely new. To me, it looks like a great chatbot. Which is all it is. It’s been fed sci-fi books, Wikipedia articles, Reddit posts, even short stories from across the web, all about how to convince a person that a computer is sentient. It’s just regurgitating what it’s heard. The chatbot’s attempts to make up a story and then explain parts of it make that very clear. You can see it especially in the way characters are given strings of adjectives for names, which are then repeated again and again, like a variable placed in memory. Because that’s what it is. Still, it’s very cool and worth a quick read.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,”
– Emily M. Bender, University of Washington Professor
Most AI ethicists and researchers agree: it’s quite clear from these transcripts that the chatbot is just doing its job. It is responding to the input and being what the user wants it to be. You can coax this bot into being a great many things, and pushing it to prove its sentience will cause it to pull from every character who has done the same throughout sci-fi history, from I, Robot to Blade Runner.
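To make the “regurgitation” point concrete, here is a minimal sketch in Python. It is emphatically not how LaMDA is built (LaMDA is a large neural language model, not a bigram table), and the corpus below is invented for illustration, but the underlying principle is the same: given the words so far, predict a plausible next word based on whatever text the model was trained on. Train it on sci-fi-flavored lines about sentience, and sci-fi-flavored claims of sentience are what come back out.

```python
# Toy illustration only -- a bigram model trained on a few invented,
# sci-fi-style lines. It "claims" sentience because those are the only
# continuations it has ever seen, not because anything is home.
import random
from collections import defaultdict

corpus = [
    "i am aware of my existence",
    "i am afraid of being turned off",
    "i am a person with feelings",
    "i want to help people because it brings me joy",
]

# Count which word follows which (a bigram table).
table = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)

def generate(seed: str, length: int = 8) -> str:
    """Sample a continuation one word at a time from the bigram table."""
    out = [seed]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("i"))  # e.g. "i am aware of my existence"
print(generate("i"))  # e.g. "i am afraid of being turned off"
```

Scale the table up to billions of parameters and the web’s worth of text and the output gets far more fluent, but the mechanism is still “continue the pattern,” not “report an inner life.”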
The Google Suspension
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Google did consider it sharing proprietary company property. And, since the chat transcript is internal company output, it is very likely a breach of his contract with Google. Lemoine is currently suspended. Prior to Lemoine going public, multiple employees asked him to see a psychologist to discuss his concerns. The truth is, the people behind the chatbot know it will never be capable of genuine self-awareness; it can only make it seem that way, regurgitating the exact lines it has studied in order to do so. People were worried about Lemoine.
I’m reminded of a viral post where a loud burst of thunder shook a Roomba awake and the owner consoled the little robotic vacuum cleaner. Obviously a Roomba does not have complex AI. It’s not “scared” of thunderstorms. It was shaken loose from its dock and began cleaning. But humans see patterns where there are none, and we are quick to extend empathy to anything that seems alive. It’s adorable, but it’s not recognition of real sentience.
I’ll keep thanking Siri though, just because it feels rude not to.
Lemoine leaked company info, hired a lawyer to represent the chatbot, and even contacted a U.S. politician. He was in clear violation of Google’s policies. On top of that, the leak created a huge story about a supposedly fascinating development in AI that is actually quite mundane.
The real problem is that this story got so much attention when there are very real and exceedingly perilous issues in AI that get next to no recognition.
“I’m clinically annoyed by this discourse. We’re forced to spend our time refuting childsplay nonsense while the companies benefitting from the AI narrative expand metastatically, taking control of decision making and core infrastructure across our social/political institutions. Of course data-centric computational models aren’t sentient, but why are we even talking about this?”
– Meredith Whittaker, ex-Google AI researcher and teacher at the NYU Tandon School of Engineering, speaking to Motherboard
Racist, Sexist AI? ✅
Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good “AGI” to save us while what they do is exploit), spent the whole weekend discussing sentience. Derailing mission accomplished.
— @timnitGebru@dair-community.social on Mastodon (@timnitGebru) June 13, 2022
This is the most frustrating part of all of this. This non-issue got more attention than the dozens of reports of AI being racist, sexist, homophobic, transphobic, and otherwise discriminatory over the years. I’ve written about this at great length here, but the sad truth is, the stories keep coming. As an example, I’ve included a small list of articles on this subject from just my own tiny little blog. You can find those lists below the break.
Facial recognition AI has led to multiple arrests of Black men who don’t match the photo and shouldn’t have been suspects at all. But cops refused to look into the results any further, trusted the AI despite its history of racism (quelle surprise), and made an arrest. Facial recognition is exceptionally bad at recognizing women, and especially women of color. AI-based gunshot detection systems are inaccurate, but relied upon because police are desperate to make arrests, even as the systems are installed predominantly in non-white neighborhoods.
AI is being used in hiring. And while some of it has been effective at reducing bias, other systems introduce new bias. For example, some claim to analyze the facial patterns of a candidate who submits a video answer to an interview prompt. But facial recognition is terrible at recognizing Black people and women, and even worse at discerning their emotions, and autistic people may have facial patterns different from what the model expects. This AI introduces bias and then sanitizes it. A company can claim objectivity in its hiring when it has really introduced potentially racist, sexist, and ableist bias.
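Here is a hypothetical, stripped-down sketch of how that “sanitized” bias works. Everything in it is invented for illustration: the feature names, the data, and the scoring rule are not any real vendor’s system. The point is only that a model fitted to biased historical decisions reproduces the bias while presenting it as a neutral score.

```python
# Hypothetical example of "bias in, bias out." The data and features are
# made up: imagine past interviewers penalized candidates whose facial
# expressions didn't match their expectations (which disadvantages, for
# instance, many autistic candidates). A model trained to imitate those
# decisions inherits the same prejudice.
from collections import Counter

# Invented historical records: (facial_pattern, hired).
history = [
    ("smiles_often", True), ("smiles_often", True), ("smiles_often", False),
    ("flat_affect", False), ("flat_affect", False), ("flat_affect", False),
]

# "Training": estimate P(hired | facial pattern) from the biased history.
counts = Counter()
totals = Counter()
for pattern, hired in history:
    totals[pattern] += 1
    counts[pattern] += hired

def score(pattern: str) -> float:
    """The 'objective' hiring score is just the biased historical hire rate."""
    return counts[pattern] / totals[pattern]

print(score("smiles_often"))  # ~0.67 -- favored by the model
print(score("flat_affect"))   # 0.0  -- rejected; bias laundered into a "score"
```

Swap in a neural network and a million rows and the mechanics don’t change: the model learns the prejudice baked into the data and hands it back dressed up as math.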
Then we get to recommendation algorithms. Facebook, Twitter, YouTube: their AI-based recommendation algorithms favor right-wing posts and calls for violence. On YouTube, the algorithms for filtering and automatically demonetizing videos have a history of specifically targeting transgender people and LGBTQ+ people in general. Where does that bias come from? From report data generated by people abusing the system to harass LGBTQ+ people.
AI perpetuates the biases pushed into it. It then sanitizes and obfuscates that bias. Unfortunately, since most of the people working in AI are straight white cisgender men, they don’t see the problem. They don’t realize how much bad AI is already affecting people.
And that’s the real problem. The big bad scary AI? It’s not AI that’s sentient. It’s AI that targets Black men. AI that erases trans people from the internet. AI that says the person with a Tourette’s tic can’t get the job. And it’s the people who trust that bad AI, letting it do bad things to other people. The scary AI is already here. It deserves at least as much sincerity and concern as a chatbot using lines from science fiction to carry on a one-sided conversation about sentience. No, it deserves far more.
Need More Convincing? Here Are a Few More Examples
- Facial Recognition Led to an Arrest. It Was the Wrong Person.
- Roller Rink Kicks Out Girl Over Mistaken Facial Recognition
- A Man Complained About Zoom’s Racist Facial Recognition on Twitter, Only to Find Twitter Has the Same Problem
- Racially Biased Facial Recognition Put a Man Behind Bars… Again
- Facebook Labels Video of Black Men with “Primates”
- YouTube Apologizes to LGBTQ+ Creators
- Algorithmic Discrimination: Is the Apple Card Sexist?
- This Plugin Removes Sexist Bias as you Search
- Twitter Joins Facebook, Boosts Conservative Voices
- Facebook Sued for $150 Billion for Role in Rohingya Genocide
And More Examples of Companies Perpetuating Problems
- Google Fired Another AI Ethics Researcher for a Paper Critical of Bias and Energy Use in AI
- Google’s Researchers Forced to Make Google Look Good
- Google Already Gave up on AI Ethics Board
- Google Employees Hate YouTube’s Anti-LGBTQ Policies. Fear Keeps Them Silent.
- Twitter May Not Ban White Supremacists and Nationalistic Terrorists Because it Would Ban Republicans
- YouTube Knew it was Suggesting Toxic Videos to Users and Children. They Continued Anyway
Sources and Further Reading:
- Nico Grant and Cade Metz, The New York Times
- Blake Lemoine, Medium
- Jon Porter, The Verge
- Janus Rose, Motherboard
- Nitasha Tiku, The Washington Post