Opinion

CMOpinion: AI and tech – the potential to negatively impact consumers in the changing media landscape

In her regular Mumbrella column, 8-Star Energy CMO Diana Di Cecco looks at the issue of human rights in the data economy.

There are cultural moments that create juxtapositions in life – that intersection when something previously considered uncool becomes cool, or vice versa.

Fitting examples include when it became cool to be smart, when it was no longer cool to smoke, when acting on climate change became cool (like right now, given the number of organisations and nations making bold commitments in their quest for net zero carbon emissions), or when it became cool to celebrate LGBTQ+ pride (did you notice how many business logos donned the rainbow colours in June?).

We live in a generational climate that has not yet seen the full effect of how digital footprints impact us. Most people reading this grew up and learned all things digital in school or at work, some might remember the launch of Facebook (I do!), and we’ve migrated from manual to automated processes. But there is a generation born in the last 10-15 years who know no different; these are the true digital natives of our lifetime. This is the cohort whose digital footprints started when their parents shared their first baby pictures “on the socials” – without their consent by the way (but more on consent later). This is the cohort who is least vigilant with their data. And this is the cohort who will be most impacted by a lifelong digital footprint. Given we’re in a constantly advancing data and tech economy, I have a distinct feeling it’s about to become cool to protect your personal data and the human rights that accompany it.


Marketing is forever changing and forever complex. Right now, the landscape is as interesting and complex as it has ever been. We have a stampede toward first-party data, Google removing third-party cookies, not to mention Apple’s foray into protecting privacy. Changes to the media mix mean one-to-one targeting will change too, but we’re not exactly sure how that will play out given the unknowns around “walled gardens”, the “Privacy Sandbox” and user IDs. Topping it off, we have an outrageously outdated Privacy Act (1988), and given that governments operate at a glacial pace, who knows how long it might take to catch up to real life. Lots has already been written on these topics, so if you’re unfamiliar, read up on them, stat.

A major characteristic behind these advancements is technology. In particular, artificial intelligence (AI), automation, robotics, biometrics, ubiquitous computing, and the Internet of Things (IoT) seamlessly infiltrate our everyday worlds, sometimes without us even knowing it. My point is that this technological paradigm affects how we construct and express our identities, how technology makes decisions about us, and how we’re treated in those contexts. While technology and AI provide new ways to operate, optimise efficiency and deliver value, they also bring significant threats. In a nutshell, poor technology design and irresponsible use of AI can have major societal consequences: discrimination, unfairness, and dystopian threats such as excessive surveillance and the abuse of government or organisational power.

The Australian Human Rights Commission (AHRC) recently delivered its final report on Human Rights and Technology. As they put it, “now is a critical time where Australia is on a pathway towards becoming a leading digital economy”. And they’re right, but how long will it take to get there? (Let’s hope it doesn’t emulate Victoria’s speed toward EV infrastructure.) I won’t attempt to summarise the 240-page document, but it made me contemplate three things we, as Marketers, need to consider more deeply right now.

AI-informed decisions

Whether you’re in the private or public sector, the use of AI is increasingly prevalent – we see it at work, in society and as consumers. AI can help achieve incredible things – a few of my favourite examples are self-driving cars, virtual assistants, chatbots, and humanoids – 30 years ago, these were merely futuristic. But AI can also cause havoc – let’s not forget Robodebt, Microsoft’s Tay, and Google’s image recognition feature. AI can be used for many decision types, but when it makes decisions about people, extra vigilance must apply. If you’re using AI-informed decision-making in this context, check in and ask yourself: Is it being used responsibly? Is it explainable? How are its decisions impacting people? And more specifically, is it impacting human rights? Any questionable answers should lead you to revisit how your AI is being regulated. While voluntary codes are being developed to help manage co- and self-regulation, advocating for best practice now will keep you three steps ahead. Tip: Start with a human rights framework and assessment tool.
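
To make “is it explainable?” concrete, here is a minimal sketch in Python (using scikit-learn; the feature names, data and approval scenario are entirely hypothetical) of surfacing which inputs drove an AI-informed decision about a person – the kind of transparency these questions demand:

```python
# Minimal sketch: surfacing why a simple model approved or declined a person.
# Assumes scikit-learn; features, data and labels are hypothetical toy values.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "tenure_months", "missed_payments"]  # hypothetical
X = np.array([[52.0, 24, 0], [31.0, 6, 3], [78.0, 48, 1], [45.0, 12, 2]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined (toy labels)

model = LogisticRegression().fit(X, y)

def explain(person):
    """Each feature's contribution to the decision's log-odds (linear model)."""
    contributions = model.coef_[0] * person
    return sorted(zip(features, contributions), key=lambda t: -abs(t[1]))

applicant = np.array([31.0, 6, 3])
verdict = "approved" if model.predict(applicant.reshape(1, -1))[0] else "declined"
print("Decision:", verdict)
for name, weight in explain(applicant):
    print(f"  {name}: {weight:+.3f}")
```

If you can’t produce something at least this legible for a decision about a person, that’s your first questionable answer.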

Algorithmic bias

This is a digital dilemma. On one hand, algorithms are mathematical rules and actions used to help solve problems. On the other hand, when infiltrated with bias, algorithms can treat groups with varying preference, without justification. While the causes are usually statistical, the outcomes of algorithmic bias can include unfairness, inequality and discrimination. The consequence: an adverse impact on human rights. So, if you’re using algorithms, take a second and third look at your data sets and ask yourself: How is the algorithm being trained? Do you have clean, relevant and verified data to train from? Who is responsible for algorithmic decisions, and could their inherent personal biases be affecting the algorithm? Have you built in human oversight? What measures are in place to mitigate the risk of discrimination and unfairness? (One such measure is sketched below.) The aim is to maximise accuracy and minimise bias, and to avoid negligent algorithmic discrimination caused by poor data quality or type. Like humans, algorithms are not perfect, so they too need some help to get things right.
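
As one illustration of that “second and third look”, here is a minimal sketch (plain Python; the group labels and outcomes are hypothetical) of a common bias check, comparing an algorithm’s favourable-outcome rates across groups:

```python
# Minimal sketch of a demographic-parity check: compare the rate of
# favourable outcomes the algorithm gives each group. Data is hypothetical.
from collections import defaultdict

decisions = [  # (group, favourable_outcome) pairs from an audit log
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print("Favourable-outcome rate by group:", rates)

# A common rule of thumb (the "four-fifths rule"): flag for human review
# if any group's rate falls below 80% of the highest group's rate.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print("Potential disparate impact: route these decisions to human oversight.")
```

A check like this doesn’t prove or disprove discrimination, but it tells you when to pull a human into the loop.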

Data privacy

Data has never been more important or valuable. The data economy is one in which we continuously create volumes of data with every move we make and every device we touch (an estimated 90% of the world’s data was created in the last two years, and it is forecast to keep doubling every two years). Consumers know how their data is used (re-marketing, targeting) and are demanding greater privacy, transparency, choice and control. We also have the concept of consent – a featured legal basis under Europe’s General Data Protection Regulation (GDPR) – which is likely to impact Marketing operations, especially if you work for a global brand. But even consent could change.

Let’s revisit the younger cohort mentioned earlier and think about their digital footprint before they obtain a mobile phone (the age for a first phone has crept down to 12). For that period, photos of them are predominantly shared by parents on social media platforms without the child’s explicit consent – is that going to be a future problem? When a child finally acquires a mobile phone and their new and pre-mobile footprints meet up in the digital sphere, the data-matching gods rejoice. What we don’t yet know is how, or if, that accumulation of data affects them. Can it be used to make decisions about them? Is it gathered and mined to draw inferences? Could they be treated unfairly or discriminated against because of those inferences? (You see where I’m going with this…)

So, if you’re collecting data, do you have a use for it? Are you using it for that intended purpose? If your data was compromised, do you have a plan to tackle such a crisis? Is data privacy a number-one priority? It should be. As we navigate our way through the changing landscape, this is not the end of advertising or targeting, but it is the end of a mediocre attitude toward data privacy (there were over 1,000 data breaches reported to the OAIC in 2020 – a number that is increasing, and that is simply not good enough). Managing personal data and making concerted efforts to protect it is everyone’s responsibility. Data needs to be elevated from a “personalisation and improved customer experience” tool to “sensitive legal information” – perhaps such a designation change will help improve the perception of its importance.
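
On the “do you have a use for it, and are you using it for that intended purpose” point, here is a minimal sketch (Python; the record structure and purpose names are hypothetical) of gating every use of personal data on recorded, purpose-specific consent:

```python
# Minimal sketch: allow processing of personal data only for purposes
# the customer explicitly consented to. Structure and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    customer_id: str
    # Purposes the customer has explicitly agreed to, e.g. at sign-up.
    consented_purposes: set[str] = field(default_factory=set)

def can_use(record: ConsentRecord, purpose: str) -> bool:
    """Gate: no recorded consent for this purpose means no processing."""
    return purpose in record.consented_purposes

record = ConsentRecord("cust-001", {"service_emails"})
print(can_use(record, "service_emails"))  # True
print(can_use(record, "remarketing"))     # False: no consent, no targeting
```

The design point is that consent is checked per purpose at the moment of use, not assumed once at collection.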

In case you didn’t notice, these three aspects come back to one underlying movement: the importance of human rights in technology. While this is a brief overview and there is so much more to discuss and consider, my intention was to start a conversation and offer a lens that hasn’t been popularised, one that prioritises the impacts of technology on humans – on you, your family, your colleagues, your parents. What if it became cool to advocate for a human rights approach to new technology?

Technology isn’t going away. The AHRC phrased it perfectly: “New technologies are reshaping our world. They present unprecedented opportunities and threats, especially to our human rights.” The second part of that is what unsettles me, because I believe our use of technology should be ethical and people-focused to avoid contravening Human Rights Law, UN principles, and people’s basic freedoms. Don’t you?

So, while we wait for an independent AI Safety Commissioner to be appointed (should that recommendation make it across the line), be proactive and start thinking about how your business, brand or department manages data and technology. The key question for us all: how do we find the balance between digital advertising and data privacy? I haven’t 100% figured it out yet, but if you do, let me know.

Diana Di Cecco is the CMO of 8-Star Energy. CMOpinion is a regular Mumbrella column.
