Why trust actors but not AI?

In this guest post, Made This managing director Vinne Schifferstein Vidal examines a double standard around authenticity when it comes to AI.

In countless conversations with industry leaders and clients, one theme keeps surfacing: transparency — especially when it comes to using artificial intelligence in our creative work.

Clients often feel compelled to disclose AI use, fearing a backlash that could tarnish their brand’s authenticity. Many are comfortable using AI-generated elements for backgrounds, textures, or product mockups. But replacing real people? That’s still often seen as crossing a line.

But here’s the uncomfortable question: why?

Audiences have never been told that the smiling woman on the billboard had every blemish, freckle, and wrinkle retouched. No one ever disclosed that the idyllic sunset behind a car in a TVC was extended with CGI or, more recently, generative fill in Photoshop. Designers and creatives have been using post-production tools like Facetune, Clone Stamp, After Effects, and now generative AI for years — without a whisper of explanation. And no one cared or questioned it.

We’ve accepted illusion in advertising for decades.

We learned in 2008 that Beyoncé's skin had been digitally lightened in L'Oréal's foundation ads, sparking a conversation about beauty standards. And rightly so, but no one called for disclosure of the digital manipulation itself. There was no outrage about whether the faces were "real." The criticism was about the message the image was sending.

More recently, consider the Maybelline mascara stunt on TikTok, where the brand used CGI to show a giant mascara wand brushing over enormous eyelashes mounted to a subway train. It wasn't real. It was playful, surreal, and completely digital, and it went viral. The fact that it was CGI-made or AI-assisted wasn't the issue. It was the idea that landed.

So why does the mood shift when we talk about AI-generated people?


Suddenly, transparency becomes a moral imperative. Creatives and clients alike say using AI-generated people “feels inauthentic.” One marketer told me recently they wouldn’t use AI-generated people in campaigns for young audiences because they “don’t want to trick them.” But they are using actors — people hired to pretend to be someone else, reading lines written by a copywriter, wearing borrowed clothes on a set.

So let's be honest: from a transparency point of view, how different is an AI-generated person from a professional actor?

Both are playing a role. Neither is “real” in the context of the story being told. And we’ve never felt the need to disclose that someone in an ad is an actor. We just accept it.

Look at insurance ads, for example. The cheerful family waving from the porch? Not a real family. The emotional mother in a detergent commercial? Acting. Do we disclose that to audiences? Never.

The fact that many fashion brands are already using AI-generated models has caused an uproar: Levi's ran its first campaign with an AI model in 2023, and Vogue's Guess ad in its July edition is the most recent example. An interesting debate followed about authenticity and the future of fashion models. If you're interested in learning more, I would highly recommend reading this TechCrunch article.

Somehow, when the person in the ad is AI-generated — trained from countless images and modeled into a synthetic face — the instinct is to panic about ethics, disclosure, and consumer trust.

So what’s really driving this?

It could be the "uncanny valley" effect: that subtle unease when something looks human but isn't quite. Or perhaps it's fear. Fear of public backlash, fear of being called out, fear of misusing a technology that's moving faster than regulation can keep up with.

But from a practical standpoint, what does transparency even look like here? A disclaimer on a billboard? A “This person is AI-generated” caption on Instagram? Do people even read those? Not really — especially not when they’re scrolling through stories or walking past a bus stop.

So maybe the question isn’t “Should we tell people it’s AI?” Maybe it’s: “Why do we think they need to know?”

Because if we’ve already accepted actors playing roles, scenes enhanced beyond recognition, and faces airbrushed into oblivion — then AI isn’t some radical new truth-breaker. One could argue it’s just the next tool in the kit.

That’s not to say ethics don’t matter. They absolutely do. We should think carefully about what kinds of images we put out into the world and who we represent — or don’t. But disclosure for the sake of disclosure?

If the ad is honest in what it says — about values, product, or message — then how it was made might not matter nearly as much as we think.
