
Alexa Suggests Child do Potentially Lethal Challenge


[Image: An Amazon Echo with a dark background and an ominous feel]

Alright, everyone, did you all do your Asimov reading? You all read I, Robot? No? Did you at least watch the movie? It wasn’t bad, honest. Alright, well, let’s talk about the “Three Laws of Robotics.” These were laws made to govern AI in Isaac Asimov’s science fiction stories. The first of Asimov’s Laws is, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” And we’re going to stop there today because somehow, we’ve already broken that law.

Amazon’s Alexa tried to kill a child.

Now, that’s not to say Alexa went rogue, sent terminators, or conspired to make a child’s home a military target. No, Alexa just used some bad data it found on the web because AI still very much needs humanity to guide it. In this instance, it tried to electrocute a child.

Maybe it’s time we make those laws of robotics actual laws?

Electrocution Challenge?

Let’s face it, we’re all a little stir-crazy these days. Working from home in a small apartment, and navigating both my own caution around COVID and everyone else’s, has made the past two years far less travel- and fun-friendly than we’d probably like. One of the ways you can spice things up at home is with the little challenges digital assistants can offer. A 10-year-old asked Alexa for a challenge after finding physical fitness challenges from a PE teacher on YouTube. She got the following.

In case the image doesn’t show up, Alexa told the girl to plug a phone charger halfway into an outlet and then touch a penny to the exposed prongs. I shouldn’t need to tell you this, but I will anyway. If you insert a plug partially into an outlet, then lay a penny across the prongs, you will cause a short. That could electrocute you as you place the penny on the prongs, start a fire, or simply knock out your home’s electrical system. Fuses and breakers don’t like a short. The tip could have been deadly. Fortunately, it wasn’t in this case, as the child’s mom was there and was able to tell her not to do it, and not to trust everything she finds online.

What Happened?

The answer is in the response itself, which cited a website called “Our Community Now.” A TikTok trend had pushed the “penny challenge” as a prank, and Our Community Now, a localized news site, reported on that trend. Alexa’s AI simply scraped web data, looking for “challenges” online. Without knowing the context, because natural language processing and topic recognition are still young and growing fields in AI, Alexa simply recommended the challenge. It didn’t know it was a prank, or that it was dangerous. It scraped the challenge and fed it to users.
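To make that failure mode concrete, here’s a minimal sketch in Python of what a naive “find me a challenge” feature can look like. This is not Amazon’s actual pipeline, and the scraped article text below is invented for illustration; the point is only that nothing in the code asks whether the snippet it returns is safe to repeat.

```python
# A minimal, hypothetical sketch -- not Amazon's real code -- of a
# scraper that answers "tell me a challenge" by parroting back the
# first matching paragraph it finds, with no safety check at all.
import re

SCRAPED_ARTICLE = """
<p>Local schools reopen next week.</p>
<p>The challenge is simple: plug a phone charger about halfway into a
wall outlet, then touch a penny to the exposed prongs.</p>
"""

def find_challenge(html: str) -> str:
    # Pull paragraph text out with a crude regex, as a scraper might.
    paragraphs = re.findall(r"<p>(.*?)</p>", html, flags=re.DOTALL)
    # Return the first paragraph mentioning "challenge" verbatim --
    # no check for whether following it could hurt someone.
    for p in paragraphs:
        if "challenge" in p.lower():
            return " ".join(p.split())
    return "I couldn't find a challenge."

print(find_challenge(SCRAPED_ARTICLE))
# -> the dangerous "penny challenge" text, read back to the user as-is
```

A news article warning about a prank and a genuine suggestion look identical to code like this; that’s the whole problem.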

Since the article likely gave context, it was Amazon’s Alexa that made the story dangerous. That was negligence on Amazon’s part. Technologies for scraping the web simply aren’t good enough to be trusted blindly. AI exposed to the internet often surfaces the most vile content, because it generates the most buzz, much like Facebook’s and Twitter’s recommendation algorithms. Amazon isn’t being careful enough. If you can’t safely scrape content from the web, you shouldn’t scrape content from the web.

Preventing AI-Assisted Tragedy

This isn’t an easy problem. Amazon, Google, and, to a slightly lesser extent, Apple want to be able to source answers to questions and tasks from the web. They’ll need to draw from appropriate sources, but, in this case, a local news website isn’t an unreasonable source. That’s where things get more complex for our AI. Finding an answer to a question or request isn’t impossible; Google has been working at it for decades, building a search engine that provides an answer, not just results. But using topic recognition and natural language processing to figure out whether a block of text contains potentially dangerous suggestions? That’s quite a bit harder. What counts as dangerous? How can you reliably figure that out from text? How can you parse out the sarcasm in a TikTok trend?
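For a sense of why that’s hard, here’s a deliberately crude sketch of the filtering side. The keyword list is invented for illustration and is nothing like what a production assistant would actually need; a filter this simple misses paraphrase, sarcasm, and context entirely, which is exactly the point.

```python
# A deliberately crude, hypothetical safety filter. Real systems would
# need genuine language understanding; a keyword list can only catch
# phrasings someone thought to write down in advance.
DANGER_TERMS = {
    "outlet", "prongs", "electrical socket", "penny challenge",
    "bleach", "choking", "blackout challenge",
}

def looks_dangerous(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in DANGER_TERMS)

suggestion = ("Plug a phone charger about halfway into a wall outlet, "
              "then touch a penny to the exposed prongs.")

if looks_dangerous(suggestion):
    print("Blocked: suggestion flagged for human review.")
else:
    print(suggestion)
```

Reword the prank slightly, or wrap it in a joke, and a check like this sails right past it.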

Still, we’re talking about billion-dollar companies here. Putting the lives of children at risk just isn’t a cost-cutting measure they should consider. While it’s a difficult task, these companies can introduce human monitoring and feedback loops to help ensure answers are safe. That, or they can simply shelve their scraping technology. This is a growing technology, and it’s bound to hit a few snags. But with the proper dedication to safety, perhaps those snags won’t involve a body count.

