
TikTok updates community guidelines to promote safety, security and wellbeing

TikTok will introduce a number of updates to its community guidelines designed to promote safety, security, and wellbeing on the platform.

The updates span issues of safety, eating disorders, harmful ideologies and platform security, and respond to community feedback as well as recommendations from experts in digital safety and security, content moderation, health and wellbeing, and adolescent development.

The updates follow TikTok’s launch of new in-app education resources to combat antisemitism on the platform in January.

The first of the new updates strengthens TikTok’s ‘dangerous acts and challenges’ policy to prevent the spread of content such as suicide hoaxes, an issue previously covered under the platform’s suicide and self-harm policies. TikTok has also partnered with popular creators in a new campaign encouraging users to ‘stop, think, decide and act’ when assessing potentially dangerous content.

The platform will also broaden its approach to eating disorders: beyond removing content promoting diagnosable eating disorders, it will now identify and remove content that may promote symptoms of disordered eating, such as overexercising and short-term fasting.

TikTok has also committed to explicitly clarifying the harmful ideologies and behaviours that are not accepted on the platform, referencing ‘deadnaming, misgendering, or misogyny as well as content that supports or promotes conversion therapy programs’. This follows the recent addition of a feature allowing users to add their pronouns to their profiles.

Finally, the platform will broaden its security policy to prohibit unauthorised access to TikTok and to educate its community on how to ‘spot, avoid, and report suspicious activity’.

The announcement coincides with the release of TikTok’s quarterly ‘Community Guidelines Enforcement Report’, which shows that more than 91 million violative videos were removed in the third quarter of 2021. Of these, 95% were removed before being reported by a user, 88% before receiving any views, and 93% within 24 hours.

Certain categories of violative content (including adult nudity and sexual activities, minor safety, and illegal activities and regulated goods) are removed automatically, while more nuanced content is reviewed by TikTok employees.
