News

Nine informs staff of company’s AI plans, forms oversight committee

Nine Entertainment has given staffers a set of principles on how the media company will use artificial intelligence, with recognition that the technology “presents risks if not used responsibly”.

In an email sent out this week to staff titled ‘Nine’s Principles for AI Use’, seen by Mumbrella, Nine’s chief data officer Suzie Cardwell wrote: “As Australia’s media company, we’re deeply engaged with the rapid development of artificial intelligence. We are already seeing that there are many ways AI can help make our business more efficient, and our content easier to produce and distribute, however we also recognise that AI presents risks if not used responsibly.”

Nine consulted widely across the business to develop the set of principles, which are “intended to facilitate discussions within each business unit and, where applicable, support you and your teams in developing your own specific guidelines for AI use,” Cardwell told staffers.

This comes after MEAA members fronted the Senate this week, urging the government to introduce laws requiring disclosure of the data used to train AI, and to enforce creators’ right to consent to, and be paid for, the use of their work for such purposes.

The document containing Nine’s AI principles notes: “As we work through the myriad ways AI can be integrated into our operations, all our decisions will be guided by our values. The technology may be changing, at great speed, but our values remain the same.”

Nine’s Principles for AI Use outline five main areas of concern. The first principle — “we start and end with humans” — puts the onus on people: “Our people take responsibility for their work, including the journalism and content we produce.” This speaks to concerns about AI models hallucinating (making things up).

“Acknowledging this, we critically examine AI-generated output and automated decision-making for accuracy and fairness.”

Nine also pledges to be “transparent with consumers about the use of data for AI, and provide reasonable declarations when AI has been used to reformat content”, and sets out safeguards for training AI.

“We build, train and tune models in a closed Nine environment, ensuring the models and data are protected, secured and confidential,” the document reads.

Where this involves third-party platforms, “we will ensure their environments provide the requisite protection”, and when an AI tool or model is developed for internal use, “the people who will use it will be involved in the testing, training and trialling of the technology”.

Nor do Nine’s principles rule out future commercial agreements with Large Language Model platform owners and other software vendors “to licence the use of our content”.

[Photo: Anna Meares, Ally Langdon, and James Bracey at Nine’s 2024 upfront]

Cardwell notes “these principles are dynamic and will be constantly reviewed and updated in line with our ongoing business needs, developments in AI technologies and any relevant legislation.”

Nine will be developing its “formal AI strategy” during the current financial year, which will “identify the first areas of generative AI that will make the biggest difference to our people and our audiences.”

An AI oversight committee will be formed to “monitor the principles and guidelines and work with each business unit on responsible and effective AI implementation.”
