
Meta to crack down on digitally altered election ads

Meta will down-rank political ads that use significantly altered images or audio ahead of the Australian federal election in May.

Australian Associated Press (AAP) fact-checkers are working with Meta and will label “faked, manipulated or transformed audio, video, or photos”.

The tech giant revealed its strategy to “ensure the integrity of elections” on its platforms during a media call on Tuesday morning.

Cheryl Seeto, Meta’s local head of policy, said that while much of the company’s approach to elections has “remained consistent for some time”, the local policy has been adapted based on “learnings from hundreds of global elections to ensure the integrity of elections on our platforms: one that gives people a voice, helps support participation in the civic process, and combats voter interference and foreign influence.”

Seeto said recent major elections in India, the UK and the US have put Meta in “a good position to protect the integrity of the Australian federal election on our platforms”, which include Facebook, Instagram, and Threads.

AAP will provide independent fact-checking and content review, as well as drive a media literacy campaign aimed at helping Australian voters “critically assess” the information they view online.

When content is debunked by AAP fact-checkers, Meta will attach warning labels to the content and reduce its distribution. Seeto said the platforms’ community standards will block the most serious online violations, including content that “could contribute to imminent violence or physical harm, or that attempts to interfere with voting.”

Meta will also work with the Australian Electoral Commission (AEC) on messaging to remind Australians to vote, and to provide them with AEC-verified information regarding the voting process. Meta has teamed with the AEC on similar voting-day reminders since 2018.

Countering the abuse of artificial intelligence in election posts will be a key focus in the coming election.

Meta will now require political advertisers to disclose if AI technology has been used to create or alter any ads, with this information made public alongside the advertisement. If an advertiser doesn’t disclose the use of AI, the ad will be rejected, with penalties for repeat offenders. Meta would not disclose the exact nature of the penalties, or what constitutes ‘repeat offenders’.

“This applies if the ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” Seeto explained.

“It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

AAP’s fact-checkers can also add an ‘altered’ tag to such posts, which declares a post to be “faked, manipulated or transformed audio, video, or photos.”

“When it is rated as such, we label it and down-rank it in feed so fewer people see it,” Seeto said. “For content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI.”

The other major focus this election is countering voter interference. Meta has built what it calls “specialised global teams to stop coordinated inauthentic behaviour” which have investigated and taken down over 200 “adversarial networks,” as Seeto puts it, since 2017.

“This is a highly adversarial space where deceptive campaigns we take down continue to try to come back and evade detection by us and other platforms, which is why we continuously take action as we find further violating activity,” she said.

State-controlled media will be labelled as such on Facebook, Instagram and Threads, while election-related policies are continually reviewed and updated throughout the election period, including those on voter interference, hateful conduct, coordinating harm and publicising crime, and bullying and harassment.
