Opinion

Facebook’s ‘transparency’ efforts hide key reasons for showing ads

Despite Facebook's claims that it is improving the transparency of its advertising, the social media service's explanations of its targeting criteria remain obscure and misleading, says Oana Goga, a research scientist at the Centre national de la recherche scientifique (CNRS), in this cross-posting from The Conversation.

Facebook’s advertising platform was not built to help social media users understand who was targeting them with messages, or why. It is an extremely powerful system, which lets advertisers target specific users according to a detailed range of attributes. For example, in 2017, there were 3,100 people in Facebook’s database who lived in Idaho, were in long-distance relationships and were thinking about buying a minivan.

That ability to microtarget specific messages at very particular groups of people can, however, let dishonest advertisers discriminate against minority groups or spread politically divisive misinformation.

Governments and advocates in the U.S. and Europe, as well as elsewhere around the globe, have been pushing Facebook to make the inner workings of its advertising system clearer to the public.

But as Congress continues to review ideas, it’s not yet clear how best to make these systems more transparent. It’s not even obvious what information people most need to know about how they are targeted with ads. I am part of a team of researchers investigating where risks come from in social media advertising platforms, and what transparency practices would reduce them.

Analyzing Facebook ads

In response to users’ and regulators’ concerns, Facebook recently introduced a “Why am I seeing this ad?” button that is supposed to explain to users why they have been shown a particular ad.

However, the only people who see Facebook ads are those whom Facebook’s algorithms select, based on the advertisers’ chosen criteria. Without help from Facebook, the only way to audit advertisers and the ads they buy is to collect directly from actual users the ads that appear in their timelines. To do this, my research group developed a free browser extension called AdAnalyst that users can install to anonymously collect data about the ads they see.

More than 600 people shared their data with us, which allowed us to observe more than 50,000 advertisers and 235,000 ads from March 2017 to August 2018. We learned quite a bit about who advertises on Facebook, how they target their messages and how much information users can get about why they’re actually being shown specific ads.
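
To give a sense of how such crowdsourced observations can be analyzed, the minimal Python sketch below tallies unique ads and advertisers and the share of advertisers that are verified. The record fields and values are illustrative assumptions, not AdAnalyst’s actual data format.

    # Hypothetical records of the kind a browser extension might log; the field
    # names and values are illustrative, not AdAnalyst's real schema.
    observations = [
        {"ad_id": "a1", "advertiser": "Acme Travel", "advertiser_verified": True},
        {"ad_id": "a2", "advertiser": "Unknown Page 77", "advertiser_verified": False},
        {"ad_id": "a3", "advertiser": "Acme Travel", "advertiser_verified": True},
    ]

    unique_ads = {o["ad_id"] for o in observations}
    advertisers = {o["advertiser"]: o["advertiser_verified"] for o in observations}
    verified_share = sum(advertisers.values()) / len(advertisers)

    print(f"{len(unique_ads)} ads from {len(advertisers)} advertisers; "
          f"{verified_share:.0%} of advertisers are verified")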

This is what Facebook says about why it displayed a specific ad.
Oana Goga screenshot from Facebook.com, CC BY-ND

Who are Facebook’s advertisers?

Any Facebook user can become an advertiser in a matter of minutes and just five clicks. The company does not verify an advertiser’s identity, nor whether the person is connected to a legitimate, registered business.

Our AdAnalyst data revealed that just 36% of advertisers bother to get themselves verified. There is no way to truly identify the remaining 64%, so they can’t really be held accountable for what their ads might say.

We also found that more than 10% of advertisers are news organizations, politicians, universities, and legal and financial firms, trying to promote nonmaterial services or spread particular messages. Determining whether any of them are dishonest, spreading disinformation or racially targeting messages is much more difficult than, for instance, figuring out whether someone has falsely advertised a bicycle for sale.

Very specific targeting

We found that the most-targeted user interests were broad categories like “travel” and “food and drinks.” But a surprising number of ads, 39%, were more specifically directed using keywords advertisers entered, for which Facebook suggested related interests and categories. For instance, an advertiser could type in “alcoholic” and get suggestions including “alcoholic beverages” – but also people interested in “Alcoholics Anonymous,” and users whom Facebook’s algorithms had identified as being part of a group called “adult children of alcoholics.”

Facebook’s ad system suggests possible categories of users to target, including ones its algorithms have identified.
Screenshot of Facebook.com, CC BY-ND

In addition, we observed that 20% of advertisers use potentially invasive or opaque strategies to determine who sees their ads. For instance, 2% of advertisers targeted ads at specific users based on their personally identifying information, like email addresses or phone numbers, which they had collected elsewhere, perhaps from customer loyalty programs or online mailing lists.

Another 2% used attributes from third-party data brokers to identify, for instance, “First-time homebuyers” or people who use “primarily cash.” A further 16% used a Facebook feature called Lookalike audiences to reach new users Facebook’s algorithms evaluate as being similar to users who had previously interacted with the business.

A Russian troll operation bought this Facebook ad to inflame some Americans, and other ads to agitate other groups, including those with opposing views.
U.S. House Intelligence Committee

Malicious groups can – and do – use these features to target Facebook ads in dishonest and manipulative ways. The Russian troll farm called the Internet Research Agency, for instance, managed several Facebook accounts, including two that created ads for directly opposing messages about the Black Lives Matter movement.

Facebook explanations are thin, unclear

Facebook doesn’t claim to give complete explanations to users about why they’re seeing a particular ad. Its messages often say things like “one reason you’re seeing this ad is,” “based on a combination of factors” and “there may be other reasons you’re seeing this ad.”

To find out more details, we used our AdAnalyst tool to collect, from a set of volunteers, not only all the ads they received, but also the explanations Facebook offered for showing them those ads. In addition, we designed controlled ad campaigns specifically targeting our AdAnalyst volunteers, to compare Facebook’s explanations to the actual targeting parameters we chose.

We found that Facebook’s ad explanations are incomplete in potentially worrying ways. For instance, we bought an ad whose primary targets were specific people, based on a list of emails we had collected from people willing to participate in our experiment. As secondary targeting criteria, we added “Photography” and “Facebook.”

When users clicked on “Why am I seeing this ad?,” they learned only that they saw it because they are interested in Facebook, a characteristic they share with 1.3 billion other users. There was no mention of anything about their interest in photography, which they share with 659 million others. They saw no mention at all that we had targeted them specifically using their email address.

Revealing the most common characteristic, rather than the most distinct – and not disclosing that a user was individually targeted – is not a particularly useful explanation. This practice deprives users of the full picture of how they were targeted with an ad message.
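
The gap between these two kinds of explanation can be captured in a few lines. In the hypothetical Python sketch below, the audience sizes are assumptions drawn from the figures above; picking the most common matching attribute reproduces the uninformative explanation we observed, while picking the rarest would reveal the email-list targeting.

    # Illustrative audience sizes, assumed from the figures cited above; the
    # uploaded email list stands in for the custom audience in our experiment.
    audience_sizes = {
        "uploaded email list": 20,
        "interested in Photography": 659_000_000,
        "interested in Facebook": 1_300_000_000,
    }

    # Selecting the most common matching attribute mirrors the explanations we saw ...
    shown_reason = max(audience_sizes, key=audience_sizes.get)

    # ... while selecting the rarest matching attribute would disclose how
    # narrowly the ad was actually targeted.
    revealing_reason = min(audience_sizes, key=audience_sizes.get)

    print("Explanation shown:", shown_reason)      # interested in Facebook
    print("More revealing:  ", revealing_reason)   # uploaded email list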

Facebook founder and CEO Mark Zuckerberg has repeatedly promised his company will be more transparent about how it targets users with advertising.
AP Photo/Carolyn Kaster

Advertisers can hide direct targeting

In addition, advertisers may be able to hide evidence of controversial or discriminatory ad campaigns, or efforts that target characteristics people consider private, by adding a very prevalent attribute to their audience-targeting selection. For example, a person who wanted to target an ad at people with income below US$20,000 a year could conceal that intent by adding, as a secondary criterion, that they were “interested in Facebook” or “used a mobile phone” – massive groups that wouldn’t limit the advertising pool, but would more likely be mentioned in Facebook’s attempt to explain why any one person saw that ad.
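
A rough back-of-the-envelope sketch, with all numbers invented for illustration, shows why this padding costs the advertiser almost nothing: if the added attribute covers nearly everyone, the targeted audience barely shrinks, yet that attribute is the one a prevalence-based explanation is most likely to surface.

    # All numbers are invented for illustration only.
    sensitive_audience = 5_000_000   # e.g. people with household income below $20,000
    padding_coverage = 0.95          # share of them who also match a near-universal
                                     # attribute such as "uses a mobile phone"

    padded_audience = int(sensitive_audience * padding_coverage)

    print(f"Audience without padding: {sensitive_audience:,}")
    print(f"Audience with padding:    {padded_audience:,} (almost unchanged)")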

Our experiments also show that Facebook’s ad explanations sometimes offer reasons that were never specified by the advertiser. We instructed Facebook to send ads only to a set of people whose emails we had, and we specified no locations at all. Yet every corresponding ad explanation contained the following text: “There may be other reasons why you’re seeing this ad, including that [advertiser] wants to reach people ages 18 and older who live [in or near]” and then mentioned a location near that user. If Facebook fills in its explanations with reasons advertisers never chose, its transparency efforts are even more misleading.

To provide users with a more complete picture of who is targeting them and why, AdAnalyst shows aggregate statistics about the advertisers targeting them and about the characteristics of other users who received the same ads. We hope our tool will help users identify and avoid dishonest advertisers and their messages.

Oana Goga is a research scientist at the Centre national de la recherche scientifique (CNRS), Université Grenoble Alpes.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
