AI must be regulated for kids along with social media
Cat McGinn explains why we must regulate AI now to keep our children safe from the many unintended consequences this nascent technology may unleash.
Last week, I had the privilege of interviewing Simone Gupta on stage at CommsCon, where we discussed her role as campaign strategist for lobby group 36 Months. The group played a critical role in driving legislation aimed at banning children under 16 from accessing social media, requiring platforms to verify users’ ages.
The campaign helped push the federal government toward an age restriction, with bipartisan backing and PM Anthony Albanese acknowledging social media's devastating impact on young people's mental health.
The bill passed into law at the end of last year and will come into effect in December. Critics see it as potentially unworkable and overly narrow in scope, and question why it excludes YouTube.
The article raises important concerns about the potential negative impacts of AI on children's emotional and psychological wellbeing. It highlights the need for regulatory safeguards, similar to those proposed for social media, to protect children from forming unhealthy attachments to and dependencies on AI companions, which lack genuine emotional understanding. The tragic case it mentions underscores the urgency of addressing this emerging issue.
Given the potential for AI companions to harm children's emotional development, which regulatory frameworks or ethical guidelines should be prioritised to ensure this technology is developed and deployed responsibly, particularly where it interacts with vulnerable age groups?