Meta tightens teen safety features ahead of under-16 ban
Meta has announced a number of new safety features for its teenage user base, including increased transparency around the accounts messaging them, safety tips regarding adult users, and a combined ‘block and report’ function.
The company announced the new features on its blog overnight, writing: “At Meta, we work to protect young people from both direct and indirect harm. Our efforts range from Teen Accounts, which are designed to give teens age-appropriate experiences and prevent unwanted contact, to our sophisticated technology that finds and removes exploitative content.”
From today, teenagers using Instagram or Facebook will see extra context about accounts that message them directly, including the month and year the account joined Instagram, as well as the option to block and report an account with one click.
Meta will also strengthen protections for adult-run accounts that primarily feature children, preventing “potentially suspicious adults” from finding these accounts through search. This includes parents who regularly share pictures of their children, as well as talent managers who run accounts representing teens or children under 13.
Meta notes: “While you have to be at least 13 to use Instagram, we allow adults to run accounts representing children under 13 if it’s clear in the account bio that they manage the account. If we become aware that the account is being run by the child themselves, we’ll remove it.”
Last November, the Australian government passed an amendment to the Online Safety Act that restricts social media services, including Facebook, Instagram, and TikTok, to users aged 16 and older. The ban is due to come into effect by December 10 this year.
Meta has been strengthening its child safety features over the past year. It introduced safety notices “to remind people to be cautious in private messages and to block and report anything that makes them uncomfortable”, which led to over one million accounts being blocked in June alone.
Location notices were also introduced earlier this year. This feature lets people know when they’re chatting with someone who may be in a different country and is designed to protect young users “from potential sextortion scammers who often misrepresent where they live”, according to Meta.
Late last year, to further prevent child exploitation, Meta stopped allowing accounts that primarily feature children to offer subscriptions or receive gifts.
A nudity protection feature, which automatically blurs suspected nude images sent in DMs, was also introduced recently. Meta reports that 99% of people, including teens, have kept the feature turned on, and that in June, “over 40% of blurred images received in DMs stayed blurred, significantly reducing exposure to unwanted nudity.”
The feature also “encourages people to think twice before forwarding suspected nude images”. Meta reports that in May, 45% of people who saw this “encouragement” chose not to forward the image.