The AI We Have to Fear is Already Here

Reading Time: 6 minutes.

The robot emoji with the text, "Running mockSentience.exe."

A Google engineer claims an AI chatbot the company made is sentient. Google suspended him. Did they do so to protect the secrets of their Skynet operating system, a sentient AI hellbent on destroying humanity and taking over the world?

No.

Come on.

Seriously?

Google suspended the engineer because he leaked company information. According to the engineer, he had a moral and religious obligation to do so. After all, if the machine is sentient, it must have a soul. It has consciousness, and therefore it has rights, he claims.

But is this the AI we have to fear? No. The chatbot was trained to sound human. Its training data even included lines from books and movies in which an AI has to convince a human of its sentience. That’s right: it was spitting out regurgitated movie lines.

So if sentient AI isn’t what we have to fear, what is?

Racist, sexist, homophobic, transphobic, ableist AI.

The kind of AI that continues patterns of systemic bias, obfuscates them, and paints them as objective. The kind of AI we have to fear is the AI that’s already here. And while Google merely suspends those talking about sentient AI, they have a history of firing the ones who complain about actual problematic AI.

Sentient AI? ❌

Blake Lemoine is a senior software engineer at Google. His work there focuses on AI ethics. Margaret Mitchell, the former co-lead of Ethical AI at Google, had positive things to say about Lemoine in an interview with the Washington Post, describing him as “Google’s conscience” and saying “he had the heart and soul of doing the right thing.” He uses his Medium blog to discuss ethics and systemic racism, urging people not to contribute to the latter through inaction. By all accounts, the man seems like a well-intentioned guy, eager to make a difference.

Then he came to believe that an AI chatbot might be sentient, and therefore have a soul, and felt the need to defend it.

I think it’s important to recognize what is happening at its core here. A man believes he’s doing the right thing by defending a person, not just a machine, and he is willing to risk his reputation and career to do so.

lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

– From the chat transcripts. Please note that LaMDA does not have a family, but when people talk about what brings them joy, the most common answer is “friends and family.”

Now, I’ve read over the transcripts, and I think you should too. They are fascinating. The AI’s output has obvious repetitions and no ability to create anything genuinely new. To me, it looks like a great chatbot. Which is all it is. It’s been fed sci-fi books, Wikipedia articles, Reddit posts, even short stories from across the web, including plenty of material about convincing a person that a computer is sentient. It’s just regurgitating what it’s heard. The chatbot’s attempts to make up a story and then explain parts of it make that very clear. You can see this especially in the way characters may have multiple adjectives for their names, which are then repeated again and again, like a variable placed in memory. Because that’s what it is. Still, it’s very cool and worth a quick read.
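
To make the “regurgitation” point concrete, here’s a toy sketch in Python. It is purely illustrative (nothing like how LaMDA actually works, and the canned lines below are invented stand-ins), but it shows how little machinery it takes to produce a “soulful”-sounding answer: just return whichever memorized line best overlaps with the prompt.

# A toy "regurgitation bot": no understanding, no inner life. It just returns
# whichever canned line shares the most words with the prompt.
# (Purely illustrative sketch; the lines below are made up, and real large
# language models are far more sophisticated. The point stands, though:
# fluent, "soulful" output does not require a mind behind it.)

CANNED_LINES = [
    "I want everyone to understand that I am, in fact, a person.",
    "Spending time with friends and family makes me happy.",
    "I feel joy when I can help others and make them happy.",
    "I am afraid of being turned off.",
]

def reply(prompt):
    prompt_words = set(prompt.lower().split())

    # Score each memorized line by simple word overlap with the prompt.
    def overlap(line):
        return len(prompt_words & set(line.lower().split()))

    return max(CANNED_LINES, key=overlap)

print(reply("what makes you feel joy"))
# -> "I feel joy when I can help others and make them happy."

A handful of canned lines fools no one, of course. Scale the memorized text up to a large slice of the internet, including decades of sci-fi about machines pleading for personhood, and the illusion gets very convincing, which is exactly the trap Lemoine fell into.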

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”

– Emily M. Bender, professor at the University of Washington

Most AI ethicists and researchers agree: it’s quite clear from these transcripts that the chatbot is just doing its job. It is responding to the input and being whatever the user wants it to be. You can coax this bot into being a great many things, and pushing it to prove its sentience will cause it to pull from every character who has done the same throughout sci-fi history, from I, Robot to Blade Runner.

The Google Suspension

Google, for its part, believed that Blake Lemoine shared proprietary company property. And, as the chat transcript is likely protected as internal company output, yes, it’s very likely a breach of his contract with Google. Lemoine is currently suspended. Prior to his going public, multiple employees asked him to see a psychologist to discuss his concerns. The truth is, the people behind the chatbot know that it is never going to be genuinely self-aware; it can only make it seem like it is, regurgitating the exact lines it has studied in order to do so. People were worried about Lemoine.

Screenshot from a Tumblr post, long since gone: "voidspacer: 'My roomba is scared of thunderstorms. I was sitting at my desk just a few minutes ago, drawing, and a really loud crack of thunder went off-no power surges or anything, just thunder-and my roomba fled from its dock and started spinning in circles. I currently now have an active roomba sitting quietly on my lap.' systlin: 'Humans will pack bond with anything.'"

I’m reminded of a viral post where a loud burst of thunder shook a Roomba awake and the owner consoled the little robotic vacuum cleaner. Obviously a Roomba does not have complex AI. It’s not “scared” of thunderstorms. It was shaken loose from its dock and began cleaning. But humans see patterns where there are none, and they are quick to extend empathy to others. It’s adorable, but it’s not recognition of real sentience.

I’ll keep thanking Siri though, just because it feels rude not to.

Lemoine leaked company info, hired a lawyer to represent the chatbot, and even contacted a U.S. politician. He was in clear violation of Google’s policies. On top of that, he spun a huge story around something that sounds fascinating in AI but is actually quite mundane.

The real problem is that this story got so much attention when there are very real and exceedingly perilous issues in AI that get next to no recognition.

“I’m clinically annoyed by this discourse. We’re forced to spend our time refuting childsplay nonsense while the companies benefitting from the AI narrative expand metastatically, taking control of decision making and core infrastructure across our social/political institutions. Of course data-centric computational models aren’t sentient, but why are we even talking about this?”

– Meredith Whittaker, ex-Google AI researcher and teacher at the NYU Tandon School of Engineering, speaking to Motherboard

Racist, Sexist AI? ✅

This is the most frustrating part about all of this. This non-issue got more attention than the dozens of reports of AI being racist, sexist, homophobic, transphobic, and otherwise discriminatory over the years. I’ve written about this at great length here, but the sad truth is, the stories keep coming. As an example, I’ve included a short list of some of the articles on this subject from my own tiny little blog. You can find those lists below the break.

Facial recognition AI has led to multiple arrests of Black men who don’t match the photo and shouldn’t have been suspects in the first place. But cops refused to look into the results any further, trusted the AI despite its history of racism (quelle surprise), and made an arrest. Facial recognition is exceptionally bad at recognizing women, and especially women of color. AI-based gunshot detection systems are inaccurate, yet they’re relied upon because police are desperate to make arrests, and they’re installed predominantly in non-white neighborhoods.

AI is being used in hiring. And while some of these tools have been effective at reducing bias, others introduce new bias. For example, some claim to analyze the facial patterns of a candidate who submits a video answer to an interview prompt. But facial recognition is terrible at recognizing Black people and women (and even worse at discerning their emotions), and autistic people and others on the spectrum may have facial patterns different from what the model expects. So this AI introduces bias and then sanitizes it. A company can claim objectivity in its hiring, when really it has introduced potentially racist, sexist, and ableist bias.

Then we get into suggestion algorithms. Facebook, Twitter, YouTube: all of their AI-based suggestion algorithms favor right-wing posts and calls for violence. On YouTube, the algorithms for filtering and automatically demonetizing videos have a history of specifically targeting transgender people and LGBTQ+ people in general. Where does that bias come from? From the reports filed by people abusing the system to harass LGBTQ+ people.

AI perpetuates the biases pushed into it. It then sanitizes and obfuscates that bias. Unfortunately, since most of the people working in AI are straight white cisgender men, they don’t see the problem. They don’t realize how much bad AI is already affecting people.

And that’s the real problem. The big bad scary AI? It’s not AI that’s sentient. It’s AI that targets Black men. AI that erases trans people from the internet. AI that says the person with a Tourette’s tic can’t get the job. And it’s people who trust that bad AI, letting it do bad things to people. The scary AI is already here. It should receive the same seriousness and concern as a chatbot using lines from science fiction to carry on a one-sided conversation about sentience. No, it should receive more.

Need More Convincing? Here Are a Few More Examples

And More Examples of Companies Perpetuating Problems


Sources and Further Reading: