Opinion

Programmatic targeting… for dummies

In this test, Timothy Whitfield pits six demand-side platforms against each other to find out how good programmatic offerings really are in Australia.

Following my previous article on hyper-local geo-targeting, I felt it would be equally interesting to use scientific principles to forensically dissect another group of adtech companies that are often clumped together.

Not knowing where to start, I figured I might as well start at the beginning of adtech: demand-side platforms (DSPs).

The goal was to understand the similarities and differences between the various DSPs. At face value they all seem very similar. They all say that: (a) they have the best algorithm; (b) they have the best data scientists; and (c) they have the best campaign results. The goal was to put those claims to the test.

The test needed to be a fair evaluation of these DSPs. There are many types of campaigns that could be tested, but in the end it was easier to test a display/branding campaign rather than a video or performance campaign.

Performance campaigns may make results easier to measure (CTR, CPA, ROI, etc); however, they require a more complex set-up, as they need a conversion tracking pixel for each vendor.

The first step was vendor selection. Each DSP that had contacted me and also had a local office in our market was invited to participate in the test: 16 vendors were contacted.

If you know the adtech space then you can guess who was invited. Interestingly, the sub-set of DSPs that specialised in video campaigns decided not to participate.

This was surprising, as their sales collateral clearly states they can support display campaigns. Furthermore, another sub-set of DSPs that specialise in retargeting also declined to participate. This was just as surprising, as their sales collateral clearly states that they support branding objectives. In total, only six DSPs accepted.

The next step was to find a test campaign. My friends at Multiple Sclerosis Research provided me with a test creative for Kiss Goodbye to MS. It was important to be socially responsible and promote a good cause where possible. The campaign reached 400,000 unique users and generated over 600 clicks and more than 400 total exposure hours.

“MS Research Australia is really thrilled and grateful for the pro-bono support that GroupM has provided for our Kiss Goodbye to MS campaign. The ability to market our campaign using various media channels is very important to us and something we would not have been able to do without their support. This has created significant awareness about MS”  – Matthew Miles, CEO, MS Research Australia

The next step was to find somebody to administer the test. Our ad-serving partner, Sizmek, was great: they donated free ad-serving and countless hours of their time trafficking the campaign.

“Ask Wall Street, a journalist, or anyone within the adtech ecosystem, and they’ll tell you that trying to discern the differences between adtech vendors within a category is futile. That’s why it’s so important for agencies like GroupM to employ test campaigns to show how vendors stack up, and who ultimately provides the best results,” said Neil Nguyen, CEO of Sizmek. “We were excited to have the chance to work with GroupM in an open and transparent way to deliver the most impactful campaign possible.”

Lastly, we needed some measurements: our partners at Moat measured viewability for free, and Nielsen measured the in-target demographic percentage at no cost. Big thanks to all of them.

“Moat were thrilled to participate in this GroupM study. The team at GroupM organized a thoughtful apples-to-apples approach that allows marketers to understand differences across DSPs. This type of study can serve as the basis for understanding performance when in all cases viewability matters, demographics matter, brand safety matters and humanity matters,” said Jonah Goodhart, co-founder and CEO of Moat.

The test design was simple: each DSP had a maximum of 100,000 impressions with which to hit three specific objectives.

  1. Highest Viewability Rate
  2. In-Target Demographic % (Males 25-54)
  3. Maximum Unique Reach (lowest frequency)

Note: non-human, non-brand-safe and non-domestic impressions were removed from the results.
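For the technically minded, here is a minimal sketch of how the metrics in the results table below could be computed from raw impression logs. The schema and flag names are hypothetical; the actual Sizmek, Moat, Nielsen and Grapeshot exports look quite different.

```python
# Hypothetical impression-log schema; real ad-server and verification
# exports (Sizmek, Moat, Nielsen, Grapeshot) use different fields.
from dataclasses import dataclass

@dataclass
class Impression:
    user_id: str
    in_view: bool      # viewable, per Moat
    on_target: bool    # male 25-54, per Nielsen
    brand_safe: bool   # brand safe, per Grapeshot
    human: bool        # not invalid traffic (IVT)

def campaign_metrics(imps: list[Impression]) -> dict[str, float]:
    """Compute the rates reported in the results table."""
    n = len(imps)
    unique_users = {i.user_id for i in imps}
    return {
        "in_view":   sum(i.in_view for i in imps) / n,
        "on_target": sum(i.on_target for i in imps) / n,
        "frequency": n / len(unique_users),  # lower means more unique reach
        "safe_pct":  sum(i.brand_safe for i in imps) / n,
        "ivt_pct":   sum(not i.human for i in imps) / n,
    }
```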

The results were fascinating (see table below).

Vendor     In-View   On-Target   Freq.   Safe %   IVT %   Score
#1         83%       72%         1.01    99.8%    1.4%    58%
#2         84%       66%         1.03    98.4%    2.4%    51%
#3         76%       68%         1.20    92.9%    2.8%    39%
#4         78%       57%         1.15    89.3%    4.1%    33%
#5         67%       40%         1.41    84.3%    5.0%    15%
#6         47%       50%         1.13    74.4%    6.5%    14%
Ind. Avg   53%       39%         1.15    92.0%    4.3%    16%

Please note that the last row in the table shows industry averages for a 300 x 250 display creative. They are cobbled together from various studies by Moat, Nielsen, Grapeshot and AppNexus. The only estimated figure is the frequency, which is the average from the test itself.

A legend of the metrics:

  • In View – the percentage of impressions where the whole surface area of the creative was viewable for one second or longer.
  • On Target – the percentage of impressions delivered to an audience believed to be male and between 25 and 54 years old, according to the Nielsen data-set.
  • Frequency – the average frequency of the campaign. The lower the number, the better.
  • Safe % – the brand safety of the site, as measured by our partners at Grapeshot.
  • IVT % – the percentage of impressions delivered to non-human or “invalid traffic”. The lower the number, the better.
  • Score – the final score. It represents the percentage of the media spend that was delivered to a viewable, brand-safe, unique, human male between 25 and 54.
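If you are wondering how the final score relates to the other columns, the article does not spell out a formula, but the published figures are consistent with a simple composite: multiply In-View, On-Target, Safe % and the human rate (1 − IVT %) together, then divide by frequency to reward unique reach. The snippet below is an inference from the table, not a confirmed methodology; it reproduces the Score column to within a percentage point.

```python
# Inferred scoring formula (reconstructed from the table, not an official
# methodology): score = in_view * on_target * safe * (1 - ivt) / frequency
rows = {
    "#1":       (0.83, 0.72, 1.01, 0.998, 0.014),
    "#2":       (0.84, 0.66, 1.03, 0.984, 0.024),
    "#3":       (0.76, 0.68, 1.20, 0.929, 0.028),
    "#4":       (0.78, 0.57, 1.15, 0.893, 0.041),
    "#5":       (0.67, 0.40, 1.41, 0.843, 0.050),
    "#6":       (0.47, 0.50, 1.13, 0.744, 0.065),
    "Ind. Avg": (0.53, 0.39, 1.15, 0.920, 0.043),
}
for vendor, (in_view, on_target, freq, safe, ivt) in rows.items():
    score = in_view * on_target * safe * (1 - ivt) / freq
    print(f"{vendor}: {score:.0%}")
# Prints 58%, 52%, 39%, 33%, 15%, 14% and 16%, all within a point of the
# published Score column (#2 rounds to 52% rather than 51%, most likely
# because the published inputs are themselves rounded).
```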

Here are my observations for each of the vendors:

  1. Vendor 1 – They were very organised right from the beginning. Their attitude to this process was: “Yeah, sure, we will just load up the campaign for you.” They didn’t seem to break a sweat in this whole process.
  2. Vendor 2 – They performed amazingly well. They took this process very seriously. They manually checked the campaign daily and maximised their viewability and frequency. However, they just didn’t have as much demographic data.
  3. Vendor 3 – I was blown away by their service. They proactively checked, optimised and re-optimised their campaigns manually. Very high level of service.
  4. Vendor 4 – They had some solid results. However, they struggled with frequency. I feel this was because their ‘X-Device’ targeting solution is not as strong as those of vendors 1 to 3. I also had some brand safety concerns due to the large percentage of foreign websites on their site list.
  5. Vendor 5 – They did reasonably well with viewability but, once again, they didn’t have enough demographic data. Their frequency also blew out, which I put down to too little manual checking.
  6. Vendor 6 – While the numbers may look low compared with the rest, keep in mind that their viewability and demographic targeting were on par with the market average.

Before you ask: I can’t / won’t name the vendors. Please respect this decision.

Summary: There was so much data. I have hundreds of megabytes of data from our partners at Sizmek and Moat, covering more than 100 metrics for each combination of site and placement. However, I want to keep this article short, so here are my key takeaways for advertisers.

  • DSP core function – When selecting a DSP, think about your marketing budget in terms of display vs. video and branding vs. performance, and build your tech stack accordingly.
  • Demographic Data Optimisation – It’s important to ask your DSP where they source their demographic data. There are a number of vendors in market, and not all of them live up to their claims.
  • Site List Optimisation – It’s important to understand whether your DSP runs on a whitelist or a blacklist, and to be involved in setting the brand safety % tolerances (a minimal sketch follows this list).
  • X-Device Optimisation – It’s important to ask your DSP what technology they use for X-Device targeting, connecting mobile and desktop impressions.
  • Manual Campaign Optimisation – If you are an advertiser, it’s important to know how often your campaign will be manually checked and optimised.
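As a minimal sketch of the site-list point above (the domains and the tolerance value are purely illustrative, not GroupM’s actual settings):

```python
# Hypothetical site-list gate: the domains and the 90% tolerance are
# illustrative values only, not GroupM's actual settings.
WHITELIST = {"news.com.au", "smh.com.au", "abc.net.au"}
BRAND_SAFE_TOLERANCE = 0.90  # minimum acceptable Safe % for the campaign

def site_allowed(domain: str, safe_rate: float) -> bool:
    """Bid only on whitelisted domains whose measured brand-safety rate
    (e.g. from Grapeshot) meets the agreed tolerance."""
    return domain in WHITELIST and safe_rate >= BRAND_SAFE_TOLERANCE

print(site_allowed("news.com.au", 0.95))  # True
print(site_allowed("example.ru", 0.99))   # False: not on the whitelist
```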

In summary: Please be mindful that not all technology is equal. I often hear about advertisers wanting to build their own ‘tech stack’, and I strongly recommend that you kick the tyres hard and dig deep into the technology before selecting any adtech vendor.

Timothy Whitfield is the director of technical operations for GroupM.

This piece originally ran on Linkedin.
