
I was going to make another creepy bot but I liked this goofy sales bot, so they’re the mascot now. The bots took our retail jobs and are just as bored with them as we are.
Advertising is a lot of work, and it requires expertise at every level. There are the businesspeople who study what consumers like and how to research and analyze trends, data analysts who crunch numbers to measure past campaigns and suggest themes and content for new ads, the people who do set design, the stage managers, the PAs, and, of course, the actors. Ads are annoying, but they have a deeply human element. Advertising is often an early way for actors to get their break into a larger world of acting. Being in an ad, especially for an inexperienced actor, is a thrilling achievement. As someone who loves acting, has been in an ad, and studied marketing in school, advertising is a funny thing for me. On one hand, I absolutely hate it. On the other, I’ve had a lot of fun with it.
But some companies want to scrape the human element out altogether. Forget the people who make ads possible; they just want profits. It’s a disgusting take that alienates roughly half of all consumers. These companies have been using AI to create horrible, nightmarish ads that sit in the uncanny valley, leaving real people unemployed and viewers put off. I thought it was worth mentioning a few companies that think so little of people that they won’t even feature humans in their advertising. They deserve far more ire than they’re getting, and we can’t let these anti-human decisions slip under the radar. So let’s call out the companies actively working against humanity and, more importantly, learn to recognize AI in advertising when we see it.
Examples of Companies Using AI in Ads
Sometimes sales bots are friends. BFFs, even. Screenshot via CD Projekt Red’s Cyberpunk 2077.
I don’t intend to check back and keep this list up to date, so some of these companies may make new pledges and stop using AI, while others may start. This isn’t a name-and-shame list. I just want to show a few examples of companies that have replaced human labor with AI in their marketing, both to help you recognize the problems in these ads and to help you make your own choices.
Coca-Cola
I can’t watch the ad Coca-Cola made without wondering what the hell they were thinking. Coke is one of the companies we studied repeatedly in every marketing class I ever took. It’s insane that so many experts looked at that garbage video, supposedly a celebration of their classic advertising, and believed it was fit for broadcast.
The ad is made of short video clips, but you could pause at any moment and find glaring issues: trucks with tiny front windows or no front end at all, eerie uncanny-valley animals with odd features, distorted wheels and warped perspective, text that shakes between frames, background people who look like misshapen blobs. It’s a mess. Coke used to be an example of good marketing. I suppose marketing classes will still use Coca-Cola, just with more about what not to do. In an ad about the season of love for your fellow human, Coke cut the human out of their brand.
Vodafone
Vodafone’s “AI Influencer” inspired this list. Influencer is already a cringe career path, but, god, at least it employs humans. Vodafone apparently decided that influencers churning out videos every day for scraps are too overpaid, and chose instead to spend money, generate more pollution, burn more electricity, and reward suspect sources of training data by using AI. Anything but paying actual people, right?
It’s not the company’s first time flirting with AI in advertising, having shared ads entirely generated by AI in the past.
Amazon
Amazon scrapped an internal AI experiment that analyzed applicants’ resumes after discovering it tended to turn away female applicants. It reflected the hiring standards of the tech industry, which still heavily favors men; the AI took a human bias and made it a hard rule.
However, Amazon also got caught using AI to market their Fallout TV show. It’s a great show, if you haven’t checked it out, but promoting it with an obviously shoddy AI-generated postcard was not the way to do it. With access to the games’ assets and characters, they could easily have had a human creator make something true to the source. Instead, they used AI and received well-deserved bad press.
Swatch
This is a perfect example that can help you learn to recognize AI artifacts. I just wish it didn’t come from a company whose products I love! Screenshots via Swatch’s reel on Instagram.
This one hurts! I love Swatch watches. But as I was scrolling Instagram instead of proofreading this article, I noticed the telltale signs of AI in a post from Swatch. I checked the comments on the reel, and I wasn’t the only person who spotted it. Viewing the frames individually confirmed what I was feeling: this is AI, and it’s not even good AI. There are artifacts everywhere, from the fake Lego bricks, to a globe with parts inexplicably missing, to another globe with its pin in the wrong place because AI doesn’t know why the earth is tilted on its axis. A human artist would. A human artist would also want a paycheck, though, because humans need food and shelter, while corporations need only profits. For a company that prides itself on the work of artists (I have Keith Haring and Basquiat watches from Swatch), turning its back on the creatives who make its branding possible is a low blow.
Accidental AI Happens!
Wacom
Wacom makes tools for artistic professionals. Their drawing tablets are legendary, and over the years they’ve expanded to pen displays and tablets that industry professionals swear by. Seeing them on a list like this may raise red flags, but this one seems to have been a misunderstanding.
Wacom shared an ad that included an image of a dragon with strange features that appeared to be clear AI artifacts. The company removed it and stayed silent for some time, but eventually admitted the image came from a third party and simply wasn’t flagged as AI before publishing. To me, they should have had more creative people in their workflow, because the image is clearly AI. But rushed professionals make mistakes too, so let’s give them a pass. For now.
Wizards of the Coast
Fellow geeks, we know WotC. You may have strong opinions about them over Magic or D&D, but setting any other controversies aside, they have stated they won’t use AI in their products going forward. They had in the past, and fans were upset, so it’s great they listened to the feedback.
However, it appears they may have used AI in an ad. According to them, it was an honest mistake by a person who used a tool in their photo app to generate part of an image. Adobe Photoshop offers exactly this kind of tool with Generative Fill, which fills in a selected area with a visual autocomplete. It seems accidental, and accidents happen, especially when people are involved. Avoiding AI is also getting harder for professionals, since tools like Photoshop increasingly build in features that may be trained on the work of others without their permission. That’s the shape of the industry, and it can be difficult to avoid, but at least we know it wasn’t intentional.
Some Companies Have Taken a Stand Against AI
Other companies have taken a hard stance against AI, stating they won’t use it. Keep an eye out for more announcements like this: if a company cares about the human touch in its marketing, it will probably care about its customers too.
Dove
Dove is a beauty brand that markets itself around “real bodies”: no airbrushing or faking, just people from all walks of life in their ads. It’s a nice change of pace from the ultra-processed ads seen elsewhere in the industry. Dove has also pledged not to use AI. Real people, as it should be.
They did, however, create a “playbook” of ways to use generative AI to produce more diverse results. That’s the equivalent of saying that if you’re going to drive like a maniac, at least do it in an electric car to save the environment a little.
Nintendo
Nintendo may have plenty of issues, from representation of women and LGBTQIA+ people in games, to their rather extreme take on copyright law. However, one thing they won’t do? Use AI in their games. Games for humans, by humans.
Cara
Want to find art made by humans? Avoid the AI slop that Instagram and others keep shoving in our faces no matter how many times we tell these apps not to show it to us. Instead, join Cara, a social network of creators who don’t use AI in their art.
There’s No Ethical Use of Generative AI
You just can’t use generative AI ethically, at least not as it exists today. It consumes monumental amounts of water while climate change pushes new areas toward drought and larger wildfires than we’ve ever experienced. Billions of people already face water scarcity for at least one month a year. Wars have been waged over access to clean drinking water, and we’re wasting it to generate images of cute girls for ads.
Training and running these gigantic models demands far more electricity than a simple search or the equivalent human work ever did. That usage is raising electric bills substantially, and it’s pushing electric companies back toward fossil fuels, increasing pollution.
Labor issues also come into play, and not just for the people being replaced by AI. Time reported that Kenyan workers had to endure hours of viewing troubling content to make ChatGPT less toxic, all for under $2/hour. Meanwhile, labor practices around the world in electronics manufacturing reportedly amount to “modern-day slavery.” More device demand could mean more labor violations.
Data centers are loud, and we’re building more of them, whether for cryptocurrency mining or for generative AI. Noise pollution may sound like a silly complaint, but that constant drone is associated with serious health issues.
Finally, there’s the most obvious issue: most of these models are trained on stolen works, with companies claiming “fair use” when taking such content for any other profit-motivated purpose would be illegal. They use artwork we never consented to sharing to generate images, text, video, and even music. We didn’t ask for our art to be used this way, and we haven’t been compensated for it. We’re being replaced by machines built on our own labor, without compensation or consent.
There’s no ethical use of generative AI, full stop. So recognize when it’s being used and make your purchasing decisions around it. Of course, don’t use it yourself either. Get better at drawing, learn to play some music, get friends together and form a band. Put in the effort. I promise, it’ll feel better.
Sources:
- Kenneth Andersen, Metro
- Jeffrey Dastin, Reuters
- Cindy Gordon, Forbes
- Terry Gross, NPR
- Bruna Horvath, NBC News
- Miles Klee, Rolling Stone
- Dan Lieberman, More Perfect Union via YouTube
- Billy Perrigo, Time
- Gret Petro, Forbes
- Rashi Shrivastava, Forbes
- Paul Tassi, Forbes
- Kat Tenbarge, NBC News
- Jess Weatherbed, The Verge (two articles)