AI is a legal minefield for Australian businesses
Australian businesses that dabble in AI are opening themselves up to many potential legal issues.
Mark Carter runs Glow International, a workplace consultancy firm for SMEs. Here, he unpacks the many risks, legal and otherwise, for businesses that aren't responsible about their use of AI.
An area of deep concern around the evolution of AI technology is the lack of protections against its potential misuse against the public.
With the technology being embraced across the world, Australia's corporate regulator ASIC is warning businesses to be wary of introducing AI without thought for controls or consequences.
Whilst it is clear that the world is not going to wait for legislation, warnings from authorities of potential legal action over the overreach or misuse of emerging AI technology should give all businesses pause for thought.
It is a tricky feat to create legislation for such fast-moving technology and, from a behavioural perspective, it's fascinating to observe the different viewpoints that emerge when conversations around legislative requirements are raised.
For example, getting the right people in the right rooms to have collaborative discussions in the first place, let alone to then agree on even the basics surrounding application or legal context, is a task in itself.
On one hand, there are those who see an absolute necessity in introducing protections, whilst acknowledging the challenges of doing so.
On the other, there are those who believe governance only serves to strangle innovation, so industries ought to be left to their own devices, with a stern promise that serious self-regulation will endure.
AI itself is neither good nor bad. That's why early discussions and implementations revolve around responsible and ethical use, with a little leeway given to those in the self-regulation camp. Meanwhile, governments and professionals wrangle with updating antiquated laws not designed for a digital world.
We can likely agree that the suite of offerings under the label 'artificial intelligence' is a set of advanced tools, and throughout human history, as tools have advanced, so has the governance around them.
The first time a human picked up a weapon, some half a million years ago, was followed by a steady and simultaneously brutal evolution.
The weaponry of WWII advanced this destruction to an unimaginable level when the innocently named 'Little Boy' was dropped from a bomber over Hiroshima, leaving some 70,000 people dead. There are good reasons why not just anyone can design and unleash a nuke.
Even Elon Musk, whose company 'X' came under the watchful eye of the Australian Information Commissioner for possible breaches of Australian law over data harvesting for artificial intelligence, is backing the need for some AI safety bills.
AI may well deliver an impressive list of life-changing advancements for humanity in areas such as healthcare diagnostics and treatment, agricultural optimisation and protection, education, natural disaster prediction and response, and even fraud detection and cyber security.
But, despite the benefits, AI as a tool carries significant risks, including job displacement, environmental degradation, and mass surveillance. There is also a gamut of cyber issues, including breaches of privacy, mass data exploitation, social media manipulation, addictive platforms, deepfakes, and misinformation.
The back end of that 'downside list' is also the commercial upside for many companies. Using 'AI to make you buy', with ever-learning technology for targeted marketing and even behaviour change, is a temptation for most businesses. And history suggests that when responsible self-governance conflicts with commercial realities, governance is a necessity.
We are in a hyperconnected digital age, characterised by the ripple effects of global trade, the domino effects of worldwide economic entanglement, interwoven financial systems, and the borderless entry the internet of things affords scoundrels into any country.
Do people really want to live in a world where, without transparency or some kind of governance, you don't know whether the person you are talking to, dealing with, taking advice from, or even trying to date is real or artificial?
Are people okay with handing unfettered innovation rights and power to faceless corporations, happy to drive growth through every addictive ping, ding, dang and dong that turns you, even more, into a profit puppet?
Throughout history, the advancement of tools and technology has never been stopped.
In the case of AI, it is vital that government and industry work collaboratively to ensure that its advancement is pursued ethically, and with the good of all in mind.