Opinion

Building brand trust in the AI era 

Australians are using AI daily but trusting it less than ever. Because of this, brands face a crisis of credibility, forcing CMOs to rethink how they build trust in an age of synthetic content, scams and soulless automation, writes Kim Verbrugghe, managing director at creative company SLIK.

CMOs are raising concerns about the erosion of consumer trust online, and in particular about how to maintain trust in their brands at a time when AI is on the rise.

A KPMG and University of Melbourne study released in April shows that only 36% of Australians are willing to trust AI, even though 50% use it regularly, and 78% are concerned about its negative outcomes like misinformation or privacy issues. 

We thought AI would be just an efficiency play: operations streamlined, campaigns optimised and behaviours predicted at scale. While there are plenty of ways AI is being used to add value, there is also a trust crisis that threatens the very relationship we most want to cultivate: the one with the customer.

The problem isn’t just that everyone has access to the same tools now. It’s that in our rush to automate everything, we’ve made the digital world feel fundamentally untrustworthy. Customers aren’t just asking ‘Is this good?’ anymore; they’re asking ‘Is this real?’. For marketers and brand managers, trust is becoming the new currency.

The authenticity economy 

People are increasingly unsure how to separate fact from fiction, or the real from the synthetic. The same KPMG study found 77% of Australians are unsure whether online content can be trusted because it may be AI-generated. The technology is becoming so sophisticated and believable that AI-generated profiles, images and videos are proliferating on social platforms. Scam ads masquerading as legitimate offers are now pervasive, and even dating apps are overrun with dodgy profiles.

When more than six out of ten younger consumers assess a company’s authenticity before making a decision, we have no choice but to pay attention. The solution requires embracing the very thing every growth hacker has told us to avoid: friction. Companies that add intentional verification steps such as badges, secure logins, two-factor authentication, even manual reviews, are discovering that friction isn’t the enemy of conversion. It’s now a trust signal.

Offline is regaining value 

As online distrust rises, we will see a return to highly authenticated or physical channels for important decisions. And activating through higher-trust channels will be the new distribution opportunity.

While we still obsess over digital reach, customers will start voting with their feet and seeking more real-life interactions. Investment in brand experiences is already on the rise, and face-to-face customer service is becoming a competitive advantage again.

The reason isn’t nostalgia; it’s trust. When everything digital feels potentially fake, customers want experiences they can verify with their own senses. They want to touch the product, look someone in the eye, and have a conversation that doesn’t feel scripted by an algorithm.

This doesn’t mean abandoning digital channels, but it is a recognition that physical experiences are trust-building machines. An Accenture study found customers who interact with brands in person spend more online, return more often, and recommend the brand more frequently. The in-person experience creates a trust residue that carries over into digital interactions and purchases. The interaction with frontline staff in store, or at any other physical touchpoint, becomes an important long-term brand-building moment, now more than ever.


Cursor as a cautionary tale 

Earlier this year Cursor.ai, a vibe coding tool, learned how quickly AI can destroy customer trust. A user emailed support about login issues and received a prompt response from ‘Sam’ stating that Cursor only worked on one device per subscription ‘as a core security feature’. The policy didn’t exist; Sam was an AI bot that had hallucinated it. Within hours, the false claim spread and customers cancelled subscriptions en masse. The co-founder spent 48 hours in damage control, clarifying that no such policy existed, but the damage was done. In the span of a single AI response, months of customer trust evaporated.

AI can triage routine queries, especially when your customers seek convenience, but humans must handle high-stakes or emotionally charged issues: humans-in-the-loop allow us to add care to the process. We must build AI systems that recognise when they’re out of their depth. And even if we get to a point where the AI can handle it, the question is ‘should it?’ What message are you sending to your customers in their moment of emotional need? “Thanks for your business, but we didn’t think this was important enough to send a human.”
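To make the humans-in-the-loop idea concrete, here is a minimal sketch, in Python, of what such a routing rule could look like. The topic list, sentiment score and confidence threshold are all invented for illustration; this is not Cursor’s system or any particular vendor’s API.

    from dataclasses import dataclass

    # Hypothetical escalation rule: the bot answers only routine,
    # low-emotion queries it is confident about; everything else
    # goes straight to a person.
    HIGH_STAKES_TOPICS = {"billing", "refund", "account_lockout", "complaint"}
    CONFIDENCE_FLOOR = 0.85  # assumed threshold, tuned on real transcripts

    @dataclass
    class Query:
        topic: str             # e.g. the output of an intent classifier
        sentiment: float       # -1.0 (angry) to 1.0 (happy)
        bot_confidence: float  # the bot's certainty in its draft answer

    def route(q: Query) -> str:
        if q.topic in HIGH_STAKES_TOPICS:
            return "human"     # high stakes: always a person
        if q.sentiment < -0.3:
            return "human"     # emotionally charged: always a person
        if q.bot_confidence < CONFIDENCE_FLOOR:
            return "human"     # out of its depth: escalate
        return "bot"

    print(route(Query("password_reset", 0.1, 0.95)))  # -> bot
    print(route(Query("complaint", -0.8, 0.99)))      # -> human

The point of the sketch is the ordering: every check that routes to a human comes first, so the bot can never talk its way into a conversation it shouldn’t own.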

Content that feels like nothing 

Audiences can smell soulless content a mile away. We run the risk of optimising our way into irrelevance, creating perfect, algorithm-friendly posts that hit every keyword but miss every human emotion.

Scroll through the comments and you’ll see customers aren’t connecting. They’re consuming content the way they’d consume elevator music, present but unengaged.   

The personalisation trap

Spotify’s Wrapped teaches us everything about the difference between creepy and clever personalisation. When Spotify shows users their year in music, it feels like a gift: a collection of memories wrapped in beautiful graphics. When other brands try to copy that intimacy without earning it, it feels like surveillance.

Privacy-first personalisation isn’t just about compliance; it’s about customer psychology. Offer clear opt-ins, disclose when AI is used and explain why users see specific content. Let customers control the experience instead of pretending they won’t notice it’s happening.
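As a hypothetical sketch (the field names and logic here are invented for illustration, not any platform’s actual API), that principle can be as simple as a hard gate on consent plus an attached, plain-language reason:

    # Hypothetical privacy-first personalisation: no opt-in, no profiling,
    # and every personalised result carries a "why you're seeing this" note.
    def personalised_feed(user: dict, items: list) -> tuple:
        if not user.get("opted_in_personalisation", False):
            return items, "Showing everyone the same picks. Opt in to personalise."
        favourites = user.get("favourite_genres", [])
        picks = [item for item in items if item["genre"] in favourites]
        reason = f"Recommended because you opted in and listen to {', '.join(favourites)}."
        return picks, reason

    user = {"opted_in_personalisation": True, "favourite_genres": ["jazz"]}
    items = [{"title": "Blue in Green", "genre": "jazz"},
             {"title": "Master of Puppets", "genre": "metal"}]
    feed, why = personalised_feed(user, items)
    print(why)  # -> Recommended because you opted in and listen to jazz.

The reason string does double duty: it discloses that automation is at work and explains why the user is seeing this content, which is exactly the opt-in transparency described above.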

Rebuilding trust through ethical AI 

At a time when customers question not just “Is this good?” but “Is this real?”, brands need to demonstrate that they use technology responsibly. Ethical AI provides that pathway. At its core, ethical AI is about designing and deploying AI systems in a way that shows transparency, fairness, accountability, and respect for human values.

Practically, this means being clear when AI is used versus when a human is behind the interaction (and being intentional about that choice); putting safeguards in place to reduce bias or misinformation; protecting customer data with strong governance; and ensuring there’s a human in the loop for sensitive decisions like customer complaints or financial approvals. It’s not about hiding the machine; it’s about showing customers you’re using it with their best interests at heart.

When brands adopt these practices, they signal that efficiency hasn’t replaced empathy, and automation hasn’t replaced accountability. Ethical AI turns technology from a potential trust risk into a trust builder, reassuring customers that a brand’s commitment to honesty, fairness, and transparency remains intact, even in an AI-driven world. 

Brands that succeed in the AI revolution won’t be the ones that automate everything fastest, but the ones that automate most wisely. The opportunity is in being the brand that defines new authenticity codes: the fresh cultural signals consumers will use to decide what’s credible. The question isn’t whether your brand will be affected, but whether your brand or business will be building for short-term efficiency or long-term trust.
