Opinion

Facebook has introduced a user trustworthiness score – here’s why it should go further

In this cross-post from The Conversation, Sharon Coen draws on psychology to explain the thinking behind Facebook's decision to launch a user trustworthiness score.

Facebook has reportedly started giving users a secret trustworthiness score in its attempt to tackle fake news. According to the Washington Post, the score is partly based on users’ ability to correctly flag and report inaccurate news items on the site. Facebook then takes this score into account when working out how a user’s content should be spread around the network (although it doesn’t tell users what their score is).

Why has Facebook started doing this? Following events such as the supposed role of misinformation on social media during the 2016 US election and the Brexit campaign, Facebook has come under increasing pressure to curb the spread of fake news. But the platform now faces the possibility that growing awareness of misleading and false information has made users more likely to report articles as “fake” simply because they don’t agree with them.

Facebook’s recent anti-fake news billboards

We can use psychology to explain why this might be happening. And it suggests that the trustworthiness score is a good idea. But I would argue that Facebook should go further and give each user a more comprehensive personal reputation score, one that they can view themselves and that helps determine the quality and reach of their content.

In psychology, we call the tendency to seek confirmation of our own beliefs, and to minimise or discard information that challenges them, “confirmation bias”. Research on people in the US and Germany has shown (in line with other studies) that people tend to spend more time reading news consistent with their existing attitudes than stories that present a different position. The same study also shows that reading news that supports your views strengthens them, while reading stories that conflict with your attitudes weakens them.

In general, people don’t like to face situations in which their beliefs are challenged. This gives rise to the uncomfortable feeling of cognitive dissonance: the sense of holding two conflicting positions at once. To avoid this, people go to great lengths to prove their original beliefs are right. When exposed to facts that contradict our views, our choice is either to reconsider our position or to challenge the new information. And attacking the credibility of the information or its source can often do the trick.

For example, research by colleagues and me has shown how people who deny the existence of – or the necessity to act upon – climate change go to great lengths to deny the value of the arguments presented. These strategies include denying and disputing the scientific evidence but also arguing that the scientists who produce it, and those trying to address the issue, are dishonest and have ulterior motives.

So it’s not surprising that the extensive media coverage of fake news (which research suggests was actually exaggerated in relation to the 2016 US election) might have encouraged Facebook users to flag articles that make for uncomfortable reading as fake. And this creates a problem.

The idea of a secret trustworthiness score being used mysteriously by Facebook might put people off flagging content. But a publicly available score might be seen as a way to punish people who flagged content “incorrectly” by publicly shaming them. An alternative would be to allow users to see their own personal reputation scores based on what they share as well as what they flag, but not make them available publicly.

Using scores to change our behaviour

While we do have biases in our thinking, most of us want to feel useful and valuable. Studies have now shown how people use social media as a kind of identity laboratory, constructing a particular image of themselves that they present to the world. Our own research shows how Facebook use is associated with our need to feel like we belong to a community and that we are worthy and capable individuals. In a way, introducing reputation scores that users can access would satisfy this need, while also making sure that people do not feel discouraged from flagging content for fear of being profiled and singled out by the platform.

We already know how important authenticity and trust are online. People rely on online reputation scores to decide where to go, what to buy and what to do online. So a personal reputation score based on the quality and reliability of the sources of information we share online (and not only on the content we flag as fake) could be a useful tool for helping us spot actual fake news. Giving feedback on the accuracy of shared or flagged content could help us realise what we can trust and what we should flag, regardless of our wider opinions.

The challenge, of course, would be to figure out how to calculate this reputation score. But given how much data Facebook collects on its users, and on the content shared, perhaps this wouldn’t be too difficult.
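To give a sense of what calculating such a score might involve, here is a deliberately simplified sketch of one possible scoring rule: combine the average reliability of the sources a user shares with the accuracy of the content they flag. Everything in it, from the variable names to the weights, is a hypothetical illustration rather than anything Facebook has described.

```python
# Hypothetical illustration only: one simple way a personal reputation
# score could combine source reliability with flagging accuracy.
# None of these names, inputs or weights come from Facebook.

def reputation_score(shared_source_reliability, flags_correct, flags_total,
                     share_weight=0.6, flag_weight=0.4):
    """Return a score between 0 and 1.

    shared_source_reliability: list of 0-1 reliability ratings for the
        sources of items the user has shared (e.g. from fact-checkers).
    flags_correct / flags_total: how many of the user's "fake news"
        reports were later confirmed by independent fact-checking.
    """
    if not shared_source_reliability:
        sharing_component = 0.5  # no sharing history: assume a neutral score
    else:
        sharing_component = (sum(shared_source_reliability)
                             / len(shared_source_reliability))

    if flags_total == 0:
        flagging_component = 0.5  # never flagged anything: neutral
    else:
        flagging_component = flags_correct / flags_total

    return share_weight * sharing_component + flag_weight * flagging_component


# Example: a user who mostly shares reliable sources but whose flags
# are often rejected by fact-checkers.
print(reputation_score([0.9, 0.8, 0.95], flags_correct=1, flags_total=4))
```

Even this toy version shows the real difficulty: someone has to decide how sources are rated, how flags are verified, and how the two parts are weighted against each other.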

Sharon Coen is senior lecturer in Media Psychology, University of Salford. This article was originally published on The Conversation. Read the original article.
