U.S. President Joe Biden met with his technology advisors to discuss the “risks and opportunities” of artificial intelligence. Afterwards, a reporter asked Biden if he thinks AI is dangerous. He responded, “It remains to be seen. Could be.” He was wrong.
Biden played it safe with his answer, not taking a side or making a proclamation, but in doing so, he missed the point. AI is already dangerous; for non-white people in particular, it is a menace. Bad facial recognition AI has gotten children banned from skating rinks and adults arrested. As AI is built into vehicles that will have to make decisions like which obstacle to hit, its inability to reliably identify people with darker skin tones becomes an extreme danger. AI is more likely to misidentify non-white people and women, its training datasets are just as likely to perpetuate hate as an anonymous message board, and the environmental impact of giant models falls disproportionately on people living in poverty. The privacy and stalking risks AI brings also disproportionately affect women.
AI is already dangerous, just not to people like Joe Biden. And that’s part of the problem. No one is representing the people AI hurts most right now. Biden isn’t going to be arrested over a mistaken identity anytime soon. He’s not going to be misidentified by a car’s AI. That’s why he, and people like him, need to listen to the researchers studying AI ethics, the kind of people companies like Google fire, not the companies trying to sell their latest product.
We need adequate responses to the dangers AI poses right now. AI has already disrupted lives; it isn’t something that might become dangerous at some point in the future. It’s dangerous today.
And that was before it could hire a human through TaskRabbit.
Part of the Picture
Joe Biden wasn’t flat-out wrong. He also stated that tech companies “have a responsibility, in my view, to make sure their products are safe before making them public.” However, AI companies don’t have much of a legal responsibility to make sure their products are safe. If ChatGPT plagiarizes something, OpenAI doesn’t feel the legal repercussions. If it spreads misinformation or libel, it isn’t OpenAI that puts out corrections. The companies selling facial recognition don’t have to worry when their systems flag the wrong person, costing that person a year of their life as they fight a wrongful conviction. Bad AI has sent people to jail, but never the people responsible for the AI.
Biden also called for privacy legislation. AI can undermine privacy in unexpected ways. Not only can facial recognition track you, but your phone has been sending your location to companies for years. That data makes it easy for AI to predict where you’ll be, even without access to your current location. Some have used AI to create videos of actors and political figures saying things they’ve never said. If there are recordings of your voice out there, it could be possible to do the same to you. What do you think your mom will do when she gets a call from “you” in desperate need of help and some money? You think she’ll take the risk that it could be an AI? That technology is here, right now. It sits alongside the AI that disproportionately calls for the arrests of Black men and bans Black girls from having fun at a skating rink with their friends. It’s the AI that’s already disrupting lives.
Could AI be dangerous in the future? We don’t have to wait to find out. The AI we have to fear is already here, and it’s only getting worse.