How far can brands go with AI before they lose consumer trust?

Recent examples of AI being used to create models and creators have led everyday consumers to question whether they can believe anything they see any more. Sage Kelly, ADMA’s regulatory and policy manager, has researched consumer relationships with chatbots, and wonders whether the AI juice is worth the squeeze if you risk losing hard-won trust.

Mia Zelu has a life most people would envy: travelling the world with friends and family, dressed in the latest fashions, and snagging VIP seats at Wimbledon and heavyweight boxing matches.

But there’s a twist: it’s all made up. It’s not that she’s staged the photos to fool us like some creators before her – it’s that she doesn’t actually exist at all. Her entire existence is in bits – she’s an AI creation.

It was a revelation that left some of her 166,000 followers in bits too – wondering who they could actually trust.

AI creations are also appearing in the pages of high fashion magazines. Just a few weeks ago, Guess made headlines by featuring two hyper-real AI models in Vogue’s August issue, bringing synthetic personas into one of the most premium and traditional media channels. The models were created by Seraphinne Vallora, an AI marketing agency that promises to “create editorial-level campaigns using AI”.

The disclosure issued by Vogue – buried in fine print – did little to stem the backlash. Readers accused the brands of deception and of undermining human creativity and diversity.


These moments are more than isolated controversies. They mark a turning point in how technology, psychology and commerce collide. They raise a fundamental question for every marketer: how do we use AI to engage without eroding trust?

AI influencers are not new. Lil Miquela, a teenage pop singer from California, first hit the headlines in 2016 after gaining tens of thousands of followers, and made news again when her account was “hacked” by a rival. Both she and the rival turned out to be AI avatars controlled by the same studio.

In some ways, Lil Miquela’s faux-existence has become normalised as the storyline of her life has continued to play out (in 2020 she broke up with her human boyfriend). She also has her own Wikipedia page, which switches between treating her as a fictional character and a real one.

Lil Miquela

What has changed since 2016 is the realism, the speed of creation and the ease with which anyone with a smartphone can deploy AI influencers. Consequently, people are now hyper-aware of AI and what it can do, resulting in heightened scepticism about what we see.

When AI is clearly artificial – like a talking animal mascot – audiences treat it as playful branding, as they do Mickey Mouse or the Jolly Green Giant. But when synthetic personas are presented as human without disclosure, it becomes deception.

Followers can form parasocial relationships – one-way emotional bonds – with a figure they believe is real. When the illusion is broken, trust collapses.

In Australia, consumer trust in AI is already among the lowest globally, with a recent report from KPMG finding only 30% of Australians believe the benefits of AI outweigh the risks – the lowest of any country surveyed. This gap between expectation and reality can be reputationally catastrophic.

The psychology of “creepy”

Recent Australian research confirms this tension. A UTS study found audiences were more comfortable engaging with less human-like AI influencers, such as stylised 2D avatars, than with hyper-realistic models, which many found unsettling and “creepy”.

This is the uncanny valley effect: when something looks almost human, but not quite, discomfort spikes.

Yet paradoxically, other studies of anthropomorphism show that the more human-like qualities AI exhibits, the more likely people are to adopt it.

The bridge between these findings is clear: trust is the key variable. If people know what they are dealing with, they are more open. If not, they recoil.

Legally, there is nothing stopping people from creating these fictional AI characters (though note that using the likeness of a real person, such as a celebrity, is potential fraud territory). Australia has no AI-specific legislation.

Instead, marketers must fall back on the Australian Consumer Law for some guidance:

  • Section 18: prohibits misleading or deceptive conduct in trade or commerce;
  • Sections 29 and 33: prohibit false or misleading representations about goods or services.

In other words: if a consumer believes they’re dealing with a person when they are not, and the brand has not disclosed this, the brand could be at legal risk.

At a minimum, they are in breach of consumer trust, which for many businesses is just as catastrophic given how long trust takes to earn and how hard it is to regain.

Sage Kelly – a real person

Why this matters for brands

AI promises cost savings, scale and infinite creative flexibility. But without transparency, it invites:

  • Authenticity erosion – undermining brand heritage and community trust;
  • Privacy backlash – when consumers realise they’ve shared personal data with a machine;
  • Credibility collapse – endorsements from AI lack genuine lived experience;
  • Labour displacement outrage – empathy for “jobs lost” to AI, amplified by cases like the Guess/Vogue scenario.

These aren’t speculative risks – they are visible in the backlash we’ve already seen.

Context is everything. In high fashion, audiences expect artifice. Videos and articles about how heavily photoshopped most fashion-magazine images are have been doing the rounds for over 25 years, yet there was still a sizeable backlash to the Vogue/Guess ads.

For community-driven brands, replacing real people with synthetic ones would undercut decades of credibility. Imagine how you’d feel about those Bunnings ads with their worker testimonials if you found out they were all actually AI avatars, not someone you could find stocking the paint at your local store.

The safe middle ground? Make AI use explicit, intentional and even stylised. Consumers are more accepting of clearly artificial personas than of hidden mimicry.

Five guardrails for responsible AI in marketing

As part of my PhD, I studied the relationships people have with AI chatbots. The findings can be extrapolated into some best-practice guidelines for using AI in a business context:

  • Disclose early and often – clearly label AI use in every interaction.
  • Design with intent – use stylistic cues so consumers recognise synthetic content.
  • Use AI to augment, not replace – preserve human voices where authenticity is core.
  • Respect likeness rights – avoid near-replicas of real individuals without consent.
  • Audit the trust equation – weigh financial savings against long-term brand equity.

Australians already approach digital content with scepticism. If we don’t build industry-led disclosure standards, every synthetic campaign risks widening the trust deficit.

AI can enrich creativity, scale and connection – but only if brands commit to honesty. Deception is the real enemy, not the technology. The brands that embrace transparency now won’t just avoid backlash; they’ll set the standard for trust in the synthetic age.
