Digital ratings – why are we bothering?
Recent months have seen Mumbrella report on a number of questions relating to online traffic statistics provided by Nielsen, which the Interactive Advertising Bureau has selected as its preferred currency. In this guest post, the IAB’s director of research Gai Le Roy responds.
There is no getting away from the fact that ratings data, in any media channel, is by its nature always going to be controversial. It deals with a highly competitive media environment and provides data on which agencies and advertisers base commercial decisions. It won’t come as any surprise to most of you that in the digital space this controversy is sometimes amplified.
To a large extent you can put this down to the fragmentation of the Australian digital market – the Nielsen ratings data reports on over 2,342 digital publishers and over 5,500 different entities (parents, brands & channels). And with the number of channels, commercial sites and apps competing for consumers in both the media and transaction space increasing rapidly, this fragmentation is only going to continue.
Collectively we need to get used to the fact that ratings data discussions aren’t going to get any easier any time soon, and that digital measurement is never going to be as straightforward as measurement of traditional media channels, which are predominantly oligopolies and duopolies.
That said – the IAB takes very seriously its job of helping design and direct the shape of market-wide digital audience information for the purpose of media buying and selling, so we are doing a lot of work to ensure all the options are debated and a sensible approach is adopted for our market.
I’m not sure too many people realise, but the Australian market has long been regarded as a world leader in digital measurement – with a lot of global innovation having its genesis here. As early as 2000 we had both panel data and market-level tagged data available, and then in 2011 IAB Australia, in collaboration with the MFA and AANA, decided to endorse one ratings currency: the Nielsen hybrid product.
This was essential because while publishers had (and will continue to have) incredibly deep information on their own audiences, there were too many variables across their analytics systems and business models. That didn’t help media buyers, who for broad planning purposes need comparable information from an independent source across the range of properties they are reviewing.
Now while this ratings currency finally delivered what agencies and buyers wanted – audience-based data – it was (and still is) less familiar to most media owners. It’s also based on non-census data, so it has more room for error and movement.
And while we recommend people look at trends in the ratings by rolling up more than one month to get a true feel for the data, instead of poring over one month in isolation, the reality is people jump on monthly numbers and look for ways to make themselves number one in one way or another. It is something I have been as guilty of as others in my time with various publishers, but it’s not great for the industry and it’s not providing a realistic picture.
As is often the way – knowledge becomes both a blessing and a curse.
The IAB Australia Measurement Council and other global industry bodies are constantly reviewing innovative ways to track digital behaviour that can be used by the buying community, and have identified some interesting models. So far we’ve found they often still do not meet the criteria of providing data that can be used for cross-media planning purposes – which is essential. Indeed we welcome the move by The Readership Works & IPSOS to fuse the Nielsen currency into their new readership currency. You can expect to see more of this type of activity across the industry – using the best source of data for each channel.
We’ve also seen a newer type of audience measurement emerging in Australia and internationally – campaign measurement. These newer products bring people-based measurement with profiling information to individual campaigns, giving a TARP figure comparable to traditional media. Both Nielsen and Comscore offer these types of services through their respective products, Online Campaign Ratings (OCR) and Validated Campaign Essentials (VCE).
It’s early days for these products though and as an industry we need time to compare the data sets coming from the different vendors as well as publisher data. IAB Australia is not looking at endorsing these products yet but we are looking forward to working with agencies and vendors to assess the accuracy of these products. More will come on this in the months ahead.
In the meantime – although we often seem to be at odds with each other, most overseas industry bodies and vendors I meet are highly impressed with the level of industry collaboration and the rigour of our processes. So I would like to invite everyone (but especially those posting comments disparaging the numbers and claiming to have a better way of doing things) to contact me directly at the IAB. I do mean this sincerely – we are always looking for new ways to crack the egg.
- Gai Le Roy joined the IAB late last year. Her previous roles included Nielsen, Fairfax Media and ninemsn.
Hi Gai
How do you account for metrics that have been faked?
*DO* you account for metrics that have been faked?
As stated previously, the most naive script kiddie running a script over a Tor browser could generate thousands of fake pageviews per hour.
Should online metrics be trusted at all?
How can you accurately report when you can’t measure people using a current Mac computer!
Good question Observer. I’ll have a crack at it.
It’s easy to fake PIs (page impressions) – agreed (gees, even I can do it!). If you think of the parallel of ‘wisdom of crowds’, when you line sites up against each other it becomes obvious who is gaming the numbers … who has a PI count that is out of whack. So one of the ‘tricks’ is to look at multiple sites in parallel – which is one of the core reasons behind using a panel as part of the AMS (audience measurement system).
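To make that cross-site comparison concrete, here’s a minimal sketch of the idea – not Nielsen’s actual method, and every site name and figure below is invented. It uses the median PI-per-person ratio across sites as the ‘crowd’ baseline and flags anything far above it:

```python
# Hypothetical illustration: flag a site whose page impressions (PIs) per
# panel-measured person sit far above the norm across many sites.
# All names and numbers are invented for the example.
from statistics import median

# (site, monthly PIs, panel-estimated unique audience)
sites = [
    ("site_a", 40_000_000, 2_000_000),   # ~20 PIs per person
    ("site_b", 18_000_000, 1_000_000),   # ~18
    ("site_c", 33_000_000, 1_500_000),   # ~22
    ("site_d", 300_000_000, 1_200_000),  # ~250 -- out of whack
]

ratios = {name: pis / people for name, pis, people in sites}
baseline = median(ratios.values())

# A crude rule: anything more than 3x the median ratio looks gamed.
flagged = [name for name, r in ratios.items() if r > 3 * baseline]
print(flagged)  # ['site_d']
```

A single site’s PI count proves nothing on its own; it’s the comparison against many peers that exposes the outlier.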
But a panel can’t reflect all segments of the market (businesses, government, defence, education, public places etc.). What it can do is show up sites that simply don’t match the typical usage patterns seen across the myriad of sites visited by a panel of thousands of people.
Also be aware that any analysis of server-side logs to identify fake PIs (such as auto-refresh) is fraught and bound to fail, as it can be so easily side-stepped with a few tweaks of the code. So how do we detect AR? Again we rely on the panel – if we see a page being served but there was no mouse-click or user request, then you have identified a source of inflated PIs. Of course for some sites – sports, ASX, news etc. – refreshing the page is in the best interests of the user, so a “one size fits all” rule doesn’t work. But at least with the panel we can calculate an AR rate and ‘discount’ the PI traffic.
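As a rough sketch of that last step (the figures are invented and the real AMS methodology is more involved): estimate the AR rate as the share of panel page loads with no user action behind them, then discount the site’s reported PIs by that rate.

```python
# Hypothetical illustration of an auto-refresh (AR) discount.
# All figures are invented for the example.
panel_page_loads = 1_000       # page loads observed for the site on the panel
loads_with_user_action = 850   # loads preceded by a click or user request

# Share of served pages with no user action behind them
ar_rate = 1 - loads_with_user_action / panel_page_loads

reported_pis = 20_000_000      # the site's reported page impressions
discounted_pis = reported_pis * (1 - ar_rate)

print(f"AR rate {ar_rate:.0%}; discounted PIs {discounted_pis:,.0f}")
```

In this made-up case a 15% AR rate knocks the reported 20m PIs down to roughly 17m – the figure a buyer would actually want to plan against.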
But that is all about traffic. What the buyer (client) wants is audience. Media agencies buy traffic to deliver audience so both are important but the end-game is audience. Again, the panel comes into play.
So, in essence the panel is used to (i) quasi-verify the traffic, (ii) convert that traffic to people, and (iii) report those people demographically.
I stress that the panel produces ESTIMATES rather than precise measurements – because the things we can measure precisely can be easily rorted and don’t meet the end-game need. Where the panel really struggles is with sites that have low traffic (while the number may look high to a publisher it is comparatively low). If the panel has a +/- 3% precision level (albeit one of the more meaningless statistics oft quoted), if a site has a reach of 3% then anywhere between 0% and 6% is statistically valid. We can increase the precision – quadrupling the size doubles the precision … but who can afford that?
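The square-root relationship behind that last point can be sketched as follows, using the standard margin-of-error formula for a proportion (the panel sizes are illustrative only, not the actual panel’s):

```python
# Illustrative only: the margin of error on a reach estimate shrinks with
# the square root of panel size, so quadrupling the panel halves the margin.
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n panelists."""
    return z * sqrt(p * (1 - p) / n)

reach = 0.03  # a site reaching 3% of the population
for n in (1_000, 4_000, 16_000):
    print(f"panel of {n:6,}: 3% reach +/- {margin_of_error(reach, n):.2%}")
```

Each fourfold jump in panel size buys only a halving of the error band – which is exactly why “who can afford that?” is the right question.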
Just my rambling thoughts.
Mike, Macs are metered. A tad short of the proportion they should be … but they are metered.
Nice one John!