Opinion

Moderating social channels during COVID-19 presents brands with new and different challenges

Right now, social media teams cannot rely on the approach that has previously worked. In a pandemic, Amber Robinson explains, brands will have to manage their social accounts very differently.

Last week, a rumour that Oprah Winfrey had been arrested on child pornography charges swept across the globe. It was, of course, false, but it showed just how quickly misinformation can get out of hand when people are more panicked than usual and spending more time online.

The current coronavirus pandemic has caused another unusual situation: Facebook and Google forced many of their online content moderators to take leave (the work is highly sensitive and often cannot be done from home) and are relying more and more on artificial intelligence for moderation. As people like me who work in community management know well, bots are currently not as highly skilled as actual people, and many Facebook posts have already been removed automatically in error.

Clearly, the systems we relied on before cannot be relied on now. And that has implications for brands managing their own social accounts amid global uncertainty.


Expect more social activity

People in self-isolation are looking for ways to connect with customer service without entering a store. They are also looking for human connection, and may just find it on your posts.

Comments may not be on topic, so you may have to search through comment threads to find customer questions and issues to address.

Expect more risk

With AI moderation average at best, businesses need to be on high alert for risky posts. These can be anything from defamatory comments (like the Oprah example) to contempt of court, harassment of other users, or reputational risks.

Reputational risks could include someone taking a photo of your supermarket allowing customers to break stockpiling rules, for example, or a laid-off worker taking aim at a former employer. And since the landmark Dylan Voller ruling, brands need to remember that they are responsible for all comments on their social media accounts, even ones made by third-party users.

It’s time to re-assess what resources you are dedicating to social media moderation, and ensure you have the coverage you need, which may extend beyond 9-5, Monday to Friday.

Hackers and scammers also take the opportunity at times like this to exploit system vulnerabilities. Ensure you’re prepared with updated escalation plans, especially for staff working from home.

Expect more fear and anxiety

Unfortunately, we have witnessed some very ugly behaviour in our supermarkets lately. Fear and panic make people behave in unusual ways.

Customers may be more abusive online than usual and it may be tempting to react with anger, sarcasm or abuse in return. Such responses will harm businesses in the long run.

Instead, workshop empathetic responses with your social teams that acknowledge fear and uncertainty while providing factual information where possible. Sometimes that will mean simply responding that you don't know right now.

Providing support to workers who may be experiencing online abuse has never been more important, whether that takes the form of an Employee Assistance Program or even just a daily team debrief and check-in from management.

There is no handbook that can completely guide us through the unprecedented challenges we're experiencing. By being prepared and aware of the risks, though, we can hopefully minimise harm to the small parts of the internet we do have control over.

Amber Robinson is a social media strategist at Quiip
