Trump Uses AI’s Lawlessness to (Falsely) Claim Flubs Were Faked

[Image: a robot in a blonde wig with misquotes of Trump around it.]

These might be misquotes, but the stories behind them aren’t AI-generated, as Trump claims. Also, that robot in a wig could be anyone!

I’ve gotten pretty good at detecting AI photos and videos. I’ve caught ones that even my techie friends missed. There’s a generic smoothness to the textures, and every face is lit like a perfect headshot: corporate art taken to its ultimate extreme. On top of that, there are the artifacts. Hair is a big one. It often overlaps itself, shows levels of focus that don’t match its distance from the camera, and doesn’t appear to have a point of origin. There are other signs, of course: background objects that are inconsistently in focus, gibberish text, textures that don’t belong on their surfaces, and other little oddities that can clue you in. It’s obvious once you start looking for it. The longer you look, the more you can spot.

However, AI can often produce images of celebrities and popular politicians accurate enough to fool people at first glance. Sometimes, especially when an image aligns with your political views, that’s all it takes to make you believe a false claim. There are no federal laws against generating and spreading deepfakes in the United States. No protections for anyone, not even Taylor Swift. Politics would be a fantastic place for deepfakes if you wanted to undermine democracy. You could easily make Trump seem like an idiot with deepfake imagery, and people who hate him would look past the AI artifacts and believe it.

Trump claims videos of him shown at a deposition were just that: fakes. It’s not the first time he’s made the claim. After all, AI’s lawlessness means it’s easy to fake images and videos; Trump’s own supporters have done it. There’s just one tiny little problem: the videos are all real. But AI has given Trump just enough plausible deniability to claim his flubs are AI fabrications, and his fans love him enough to believe even these easily debunked lies. It worked before; why not try it again?

It was bad enough when anyone could use AI tools to make deepfakes of anyone. Now we have to worry about politicians claiming their mistakes were actually fake videos.

In a sense, Trump “told you to reject the evidence of your eyes and ears. It was their final, most essential command.” I suppose Big Brother would have an easier time in today’s world, able to claim any information contrary to his decree is “AI.”

The AI That Cried Wolf

Truth Social post by Donald J. Trump (@realDonaldTrump), Mar 12, 2024, 10:23 PM: “The Hur Report was revealed today! A disaster for Biden, a two tiered standard of justice. Artificial Intelligence was used by them against me in their videos of me. Can’t do that Joe!”

Many modern uses of AI come down to deception. Deepfakes, plagiarism, imitations of other people’s art, and of course propaganda have all made use of AI. It’s easy to distrust AI; the companies making it have made it clear they don’t deserve our trust. That record of deception makes AI an easy target for dishonest people, and the perfect scapegoat for their own failures. When a dishonest person claims they were the victim of a deepfake, it’s easy to believe them, because we’ve been given no reason to trust AI.

AI has “cried wolf” so many times that we’ve learned to distrust it.

The fact that there are no protections against dishonest uses of AI makes it an easy excuse for people like Trump, who need to run from videos of themselves looking like complete idiots. Someone in Trump’s position may have said and done incredibly stupid things, but taking advantage of a technology that has lost consumer trust is easier than fighting accusations of poor mental acuity. Trump is proving it works by giving his followers exactly what they want to hear. They’re all too eager to believe him and distrust what they’ve actually seen, because AI’s lawlessness gives them just enough cover to stick to their existing biases and trust whichever fraudster tells them to stand by Trump.

Once More, AI Regulations Lag Behind Citizens’ Needs

Trump won’t be the only person taking advantage of people’s distrust of AI to hide his own mistakes. We’ll see dishonest politicians, misbehaving celebrities, corporate leaders, and influencers filming apology videos all claim their mistakes were just videos created by malevolent actors. It won’t matter that the claims are easy to disprove (for now). They’ll only need to sow the seeds of doubt. That little bit of confusion takes them from canceled to controversial, with ardent defenders pushing off naysayers and their “fact-checking.” It’s all too easy to start a fight over AI.

We need laws to guide AI toward more honest uses. If you commission a human artist to create a defamatory piece of media, the artist can potentially get in trouble if they knew what they were creating. Do it with AI, however, and everyone acts as though the fault cannot be traced. It could be, very easily in fact. By requiring watermarking and registered accounts for all generative AI, you could trace any generated image or video back to the tool that made it and the user responsible for it. But companies don’t seem to want to implement real watermarking, likely because it could make them liable. Right now, no one expects them to do the right thing. The law could change that.
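To show how little technical magic basic provenance tagging would require, here’s a minimal sketch in Python using Pillow. It stamps a PNG with a generator ID, a user ID, and a tamper-evident signature; the IDs, the key, and the metadata field names are hypothetical placeholders, not any real standard, and this is far weaker than real proposals like C2PA content credentials or pixel-level watermarks.

```python
import hashlib
import hmac

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical provenance values. A real system would tie these to a
# registered account and a signing key held server-side by the AI company.
GENERATOR_ID = "example-image-model-v1"
SIGNING_KEY = b"server-side-secret"


def tag_provenance(in_path: str, out_path: str, user_id: str) -> None:
    """Embed generator/user provenance plus an HMAC signature as PNG metadata."""
    img = Image.open(in_path)

    # Sign the pixel data together with the provenance fields, so editing
    # either the image or the labels invalidates the signature.
    payload = img.tobytes() + GENERATOR_ID.encode() + user_id.encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

    meta = PngInfo()
    meta.add_text("ai:generator", GENERATOR_ID)
    meta.add_text("ai:user", user_id)
    meta.add_text("ai:signature", signature)
    img.save(out_path, pnginfo=meta)


def read_provenance(path: str) -> dict:
    """Return any embedded provenance fields (empty if they were stripped)."""
    img = Image.open(path)
    return {k: v for k, v in getattr(img, "text", {}).items() if k.startswith("ai:")}
```

Of course, metadata like this disappears with a single screenshot or re-save, which is exactly why robust watermarking baked into the pixels themselves, the kind companies keep declining to ship, would need to be part of any serious law.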

From there, laws that prevent deepfaking, especially in political contexts, would enable us to go after these harmful and unethical uses of AI. And we shouldn’t protect only celebrities and politicians: everyone should be able to request that their likeness be pulled from training datasets, so no one can use their face or body to create a deepfake of them or someone else. You shouldn’t have to be famous to deserve safety and to own your own likeness.

Stop Giving AI Companies Everything They Want

We need to change the way we think about AI. Instead of asking whether or not these companies deserve to profit from our information, our likenesses, our creations, our bodies, we should ask how to keep people safe. We’ve allowed these AI companies to take everything we have, everything we are, just so they could profit from us.

This isn’t extraordinarily difficult. It would put more pressure on large companies like OpenAI, Microsoft, and Google. These companies have a lot of lobbying power, but if they comply (and for the sake of their image, they should) and make genuine efforts to protect people around the world, they have nothing to worry about. All they’d have to do is make completely reasonable efforts to prevent misuse of their platforms. Social networks already have to do this: they comply with harassment, child-trafficking, and revenge-porn laws, and they work with law enforcement when a user misuses their platform. Why not AI companies? We have to ask them to share responsibility. That includes not ignoring employees who warn them of disturbing output.

AI is going to become a huge part of everyone’s lives over the next few years. Everyone will make use of AI, and likely generative AI as well. If we don’t create AI ethically, that is, pay the humans whose work contributed to its generative output, and curate and correct models after training so they don’t reinforce biases, then we’ll create a society where we can’t trust or ethically use the biggest advancement in technology in decades. We need AI to work alongside us and enrich the livelihoods of creators, not steal from them. We need to ensure AI doesn’t become the tool of liars and con men out to influence politics. There are simple steps we can take, but the first is to stop looking at AI as a monolith and instead see it as a product we all trained and we all worked on, then ensure it works for us, not for huge corporations that took our data, our work, and our likenesses without asking.

