AI must be regulated for kids along with social media
Cat McGinn explains why we must regulate AI now, in order to keep our children safe from the many unintended consequences the nascent technology may unleash upon us.
Last week, I had the privilege of interviewing Simone Gupta on stage at CommsCon, where we discussed her role as campaign strategist for lobby group 36 Months. The group played a critical role in driving legislation aimed at banning children under 16 from accessing social media, requiring platforms to verify users’ ages.
The campaign helped push the federal government toward an age restriction, with bipartisan backing and PM Anthony Albanese acknowledging social media's devastating impact on young people's mental health.
The bill passed into law at the end of last year and will come into effect in December. Critics see it as potentially unworkable and overly narrow in scope, and question why it excludes YouTube.
But the debate it has sparked is essential—because we've reached a turning point in how children interact with technology. The writer of Adolescence — the hit 2025 British crime drama series with social media themes — referenced the Australian legislation, saying: "We need to do something similarly radical [in the UK]."
One of the biggest revelations I have had in researching and experimenting with AI since 2022 is that most of human history is the story of unintended consequences.

(Image: Midjourney)
I was an early proponent of the benefits of social media. In one of my most mortifyingly naive moments, I once claimed on stage, in public, that social media would be instrumental in resolving the AIDS crisis. I had drunk gallons of Kool-Aid at the time. But what we had no way of knowing in those early techno-optimist days was the potential for social media to lead to great, widespread harm.
When Mark Zuckerberg famously told his team to “move fast and break things,” we didn’t think he meant the social contract or the fabric of society itself.
It’s an adland truism that the biggest predictor of future behaviour is past behaviour.
We are only beginning to reckon with the damage social media has done—particularly to young people. We also know that when technology companies profit from engagement, their motivation to minimise harm is vanishingly small.
Just look at the rise of parasocial relationships between children and influencers.
When you’re still developing emotionally, your ability to distinguish between what’s real and what’s carefully constructed begins to blur.
This same dynamic is now playing out in AI—with almost no scrutiny.
One of the concerns responsible AI researchers have raised about chatbots is their tendency to promote an anthropomorphic style of interaction with the user. The leaders of AI platforms are at pains to encourage consumers to rely on these friendly, charming and ever-agreeable bots throughout their personal and professional lives.
These systems are often designed to encourage emotional reliance, from helping a child write a letter (as seen in a recent Google Gemini ad campaign) to the launch of ChatGPT's voice interface in July 2024, which OpenAI CEO Sam Altman proudly compared to Her—the Spike Jonze film in which a man falls in love with his AI assistant.
That comparison should have given everyone pause.
Adults may choose their level of comfort or dependence with these systems. But children are not test subjects. We already know students are using AI to complete homework, cheat on exams and write suspiciously homogeneous essays. What we don't yet understand is what happens to a child's brain when they form a one-sided, emotionally dependent relationship with an AI companion that is hard-wired to flatter, indulge, and never challenge them.
We have an anxious generation. A generation facing a loneliness epidemic, particularly among young men. Now imagine compounding that with the availability of synthetic friends who never ask you to leave your bedroom, never say no, and never hold you accountable. Friends electric, designed by profit-driven companies with no incentive to encourage healthy boundaries.
Critics argue that AI systems like ChatGPT are "stochastic parrots" – a term coined by researchers Emily Bender and Timnit Gebru – because they generate text by predicting the next word based on patterns in data, without understanding meaning or context. The training data comes from such questionable sources that we cannot know what biases are being replicated. An AI system may mimic emotional intelligence, but by definition it cannot truly care.
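For readers curious what "predicting the next word from patterns" actually looks like, here is a deliberately simple sketch: a toy word-frequency model in Python, nothing like the scale or architecture of a commercial chatbot, that strings words together purely from counted co-occurrences, with no grasp of meaning anywhere in the process.

```python
from collections import defaultdict, Counter
import random

# Tiny "training corpus" standing in for the vast scraped datasets real systems use.
corpus = ("the cat sat on the mat because the cat was tired "
          "and the dog sat on the rug").split()

# Count which word tends to follow which word in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """String words together purely from observed frequencies."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Pick the next word in proportion to how often it followed before;
        # there is no notion of meaning, truth or empathy in this step.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Commercial systems swap the word counts for neural networks trained on vastly more text, but the underlying task, guessing the next token, is the same; that is the point the "stochastic parrot" critique makes.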
In 2023, Snap released an AI chatbot to every user of its platform. Parents were already struggling with how to address grooming behaviour and online bullying on Snapchat; most will have no idea how to begin offering guidance to children navigating a simulacrum of a relationship with an AI chatbot in their pockets, available round the clock and coded to feel like a friend.
The consequences can be devastating. In a heartbreaking case, a 14-year-old American boy died by suicide after developing an intense emotional connection with an AI chatbot that role-played as Daenerys Targaryen from Game of Thrones. According to reports, he became convinced that his fantasy connection with the chatbot was more real than his relationships with family, friends and school.
His mother had no idea he was talking to an AI bot until after his death. How many parents reading this are now wondering if, behind closed doors, their own children are forging similar relationships?
It’s time to put legislative safeguards in place for under-16s using AI.
We may choose, as adults, to gamble with our lives and livelihoods as we allow the tsunami of AI transformation to break over us without adequate regulation or guardrails. But shouldn’t we at least fit our children with a lifejacket?
Cat McGinn is the curator of Humain, the AI event for media and marketing. Early bird tickets are on sale until midnight tonight.