After the ‘Facebook Files’, the social media giant must be more transparent
Facebook faces significant challenges in moderating content on its platform amid conflicting demands from users, advertisers and civil society organisations. In this cross-posting from The Conversation, Nicolas Suzor looks at the need for transparency from the social media giant.
Most people on Facebook have probably seen something they wish they hadn’t, whether it be violent pictures or racist comments.
How the social media giant decides what is and isn’t acceptable is often a mystery.
Internal content guidelines, recently published in The Guardian, offer new insight into the mechanics of Facebook content moderation.
The slides show the rules can be arbitrary, but that shouldn’t be surprising. Social media platforms like Facebook and Twitter have been around for less than two decades, and there is little regulatory guidance from government regarding how they should police what people post.
In fact, the company faces a significant challenge in trying to keep up with the volume of posted content and often conflicting demands from users, advertisers and civil society organisations.
It’s certainly cathartic to blame Facebook for its decisions, but the true challenge is to work out how we want our online social spaces to be governed.
Before we can have that conversation, we need to know much more about how platforms like Facebook make decisions in practice.
The secret work of policing the internet
Reportedly running to thousands of slides, the newly published guidelines add detail to the vague community standards Facebook shares with its users.
Most of the documents are training material for Facebook’s army of content moderators, who are responsible for deciding which content stays and which is removed.
Some of the distinctions seem odd, and some are downright offensive. According to the documents, direct threats of violence against Donald Trump will be removed (“someone shoot Trump”), but misogynistic instructions for harming women may not be (“to snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”).
The rules appear to reflect the scars of legal and public relations battles Facebook and other social media platforms have fought over the last decade.
The blanket rule against images of nude children had to be changed after Facebook controversially banned the famous image of Kim Phuc fleeing napalm bombing during the Vietnam War. After years of controversy, a specific procedure now exists so people can request the removal of intimate images posted without their consent.
Because these rules develop over time, their complexity is not surprising. But this points to a bigger problem: without good data about how Facebook makes such decisions, we can’t have informed conversations about what type of content we’re comfortable with as a society.
The need for transparency
The core problem is that social media platforms like Facebook make most decisions about what constitutes acceptable speech behind closed doors. This makes it hard to have a genuine public debate about what people believe should be allowable to post online.
As the United Nations’ cultural organisation UNESCO has pointed out, there are real threats to freedom of expression when companies like Facebook have to play this role.
When governments make decisions about what content is allowed in the public domain, there are often court processes and avenues of appeal. When a social media platform makes such decisions, users are often left in the dark about why their content has been removed (or why their complaint has been ignored).
Challenging these decisions is often extremely difficult. Facebook allows users to appeal if their profile or page is removed, but it’s hard to appeal the moderation of a particular post.
To tackle the issue of offensive and violent content on the platform, Facebook says it will add 3,000 people to its community operations team, on top of its current 4,500.
“Keeping people on Facebook safe is the most important thing we do,” Monika Bickert, head of global policy management at Facebook, said in a statement. “We work hard to make Facebook as safe as possible while enabling free speech. This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously.”
But without good data, there is no way to understand how well Facebook’s system is working overall – it is impossible to test its error rates or potential biases.
Civil society groups and projects including Ranking Digital Rights, Article 19 and the Electronic Frontier Foundation’s OnlineCensorship.org have been advocating for more transparency in these systems.
Facebook and other social media companies must start listening, and give the public real insight and input into how decisions are made.
Nicolas Suzor, Associate professor, Queensland University of Technology
This article was originally published on The Conversation. Read the original article.
The social networks have long argued that they are not publishers, merely platforms enabling others to self-publish. This, from where I am looking, is the grey area. If an old-school publisher allowed comments on their sites like the ones you can read on Facebook, there would be uproar and complaints to the newspaper / broadcaster / (publisher)…
The answer? Google’s Local Guides is building and building and building. It is an online community, and when you reach a certain level you can suggest edits to addresses, contact details; all sorts! ‘Community policing’ is important in our real communities and today, it seems, is very important online and is lacking. With such a mammoth operation for Facebook to clean up its comment threads, perhaps it should take a leaf from Google’s book and start rewarding its users for participation. Give a bit back and hey presto, slowly their community can become far more civil. We all want to live in a civil society, and some would say that FB has enabled a monster like Trump to get into power. (Flippant comment or the truth?) Let’s hope FB does the right thing and starts showing us all that it cares, empowering the good users to do good and help FB clean up its act.
User ID not verified.
Facebook policing the internet – we’re all screwed.
As for Facebook’s ongoing concern… they had better figure out their future audience because my late teens kids and all their friends steer well clear of it.
User ID not verified.
Hah, Google Local Guides.
Ever read some of the threads on a YouTube video?
Vile, disgusting and hateful.
Google aren’t innocent in all of this.
User ID not verified.
Hearing you re YouTube. I referred to Local Guides as a model to watch. Slowly but surely, it would appear that Google will be identifying the credible online users and empowering them to police. Rome wasn’t built in a day, and it has got out of hand on all social networks. With an empowerment and reward model, the long tail of credible, qualified (by algorithm) users will slowly clean up the web. It is really the only way. Wikipedia’s model can also work, again by only allowing trusted, credible folks to edit certain pieces. My bet is that in 5 years’ time ‘vile, disgusting and hateful’ comments will be a thing of the past. Let’s hope so.
User ID not verified.
They’re all on Insta (owned by FB*…)?
User ID not verified.