IAB to reveal new arbiter of online audience on Wednesday
The Interactive Advertising Bureau is to reveal the industry’s preferred supplier of online measurement on Wednesday.
The IAB has confirmed that it will make the announcement at 10am Sydney time.
The tender has been running since last September. In January the organisation announced a shortlist of five contenders:
- Colmar Brunton with Gemius
- ComScore
- Nielsen Online
- Roy Morgan Research with Effective Measure
- Vizisense
The announcement will be made by IAB boss Paul Fisher. The decision will potentially have a major impact on how online advertising is bought and sold in Australia.
The currency remains controversial with several providers in use and certain metrics – for instance monthly unique browsers – becoming increasingly discredited.
The IAB is also curating a session at next month’s Mumbrella360 which will feature the successful provider along with Maxus boss David Gaines, chairman of the Media Federation’s digital subcommittee, and Scott McClellan, CEO of the Australian Association of National Advertisers.
Having sat through a series of presentations from some of the suppliers shortlisted for preferred supplier status (at an overseas conference), I am as unimpressed as ever with the quality of online audience measurement. The level of ‘calibration’ required by all of the suppliers is dizzying.
The panel-based suppliers have enormous issues with the quality of their sampling. They draw their samples online from a self-selecting group of respondents. They struggle to identify who is using a PC (keystroke calibration is used) and rarely measure all the devices a panel member uses to access the internet. The recent proliferation of devices (tablets, smartphones, netbooks) means that metering one (at best two) at-home desktops or laptops per respondent probably captures only a small and declining share of their internet activity. The fact that the ‘at work’ element of the panels is something of a joke, due to its lack of representativeness, exacerbates the issue.
If I were a panel member, probably only my PC at home would be metered; my work laptop would not be measured (my work IT people don’t allow it), and neither would my smartphone, my wife’s at-home laptop or my iPad. My guess is that only about 20% of my online activity on a weekday, and 50% at the weekend, would be measured. I don’t think I am atypical of many respondents, though I may have access to a few more devices than most.
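That coverage guess can be put into a toy calculation. This is illustrative only: the hours per device are invented to match the commenter’s own 20% estimate, not measured data.

```python
# Illustrative only: rough share of one user's weekday online activity
# that a home-PC meter would capture. All hours are invented.
weekday_hours = {
    "home PC": 1.5,      # the only metered device
    "work laptop": 4.0,  # IT department blocks the meter
    "smartphone": 1.5,   # unmetered
    "iPad": 0.5,         # unmetered
}
metered = {"home PC"}

total = sum(weekday_hours.values())
captured = sum(h for d, h in weekday_hours.items() if d in metered)
coverage = captured / total
print(f"Weekday coverage: {coverage:.0%}")  # 20% with these invented hours
```

Swap in your own device mix and the captured share moves around, which is exactly the problem: the panel has no way of knowing what it is missing.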
Site-centric measurement looks equally sketchy. It has always had the problem of measuring browsers, not people. Each supplier does an enormous amount of calibration to make its data look like people rather than browsers. Some of this calibration is scientific and based on high-quality establishment survey data, which gives a universe framework to assign the behavioural data to. Unfortunately, much of it is quite arbitrary.
The age-old problems of cookie deletion and scraping bots are probably dealt with by reasonably accurately designed calibration algorithms. The bigger issue relates to the point made above about the number of devices (and therefore browsers) an individual uses. The growing number of devices each individual uses is making calibration almost impossible, because the take-up of new devices is moving so quickly. In the last six months I have gone from using two devices and three browsers to five devices and seven browsers today. Unfortunately for the site-centric research suppliers, this increase is not the same for every demographic, so a single calibration to assign more browsers to fewer individuals will not work. This is a minefield. The addition of facilities like ‘in private’ browsing, and their disproportionate use by certain demographics, makes attribution of browsers to individuals impossible.
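The single-calibration objection can be sketched in a few lines. This is a hypothetical illustration of the kind of adjustment described: a site-centric supplier counts unique browsers and divides by a “browsers per person” ratio to estimate people. Every figure below is invented.

```python
# Hypothetical browser-to-person calibration. Invented figures throughout.
unique_browsers = {"18-34": 4_200_000, "35-54": 3_600_000, "55+": 1_800_000}

global_ratio = 2.5  # a single, one-size-fits-all calibration factor
ratio_by_demo = {"18-34": 3.5, "35-54": 2.5, "55+": 1.5}  # take-up differs by age

people_single = {d: b / global_ratio for d, b in unique_browsers.items()}
people_by_demo = {d: b / ratio_by_demo[d] for d, b in unique_browsers.items()}

for d in unique_browsers:
    print(f"{d}: single calibration {people_single[d]:,.0f}, "
          f"per-demo calibration {people_by_demo[d]:,.0f}")
```

With these made-up ratios, the single factor overstates the 18–34 audience and understates the 55+ one, which is precisely why a single calibration “will not work” when device take-up varies by demographic.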
What is worse is the talk of ‘hybrid’ measurement. This seems to suggest that two shit measurement systems combined will produce something better than one. God help us.
Online measurement in its current form delivered by the main suppliers is increasingly becoming a laughing stock. It is truly the wild west of audience measurement.
User ID not verified.
There are many complex issues surrounding measurement, period. Online measurement even more so. However, taking shots from the cheap seats is not helping anyone and moves us no closer to finding online measurement’s holy grail. Perhaps we can hear your thoughts on how to build a measurement system we can all be happy with, Researcher?
This is introducing a traditional media measurement method on a non traditional media channel. Doesn’t make sense.
Hey Other John. I think you will find that quantifying an audience for a medium makes sense for any communication plan, and I would hope that it was a key objective of said planning. What doesn’t make sense is your thinking that online audience measurement should be exempt. By the way, every new medium is ‘non-traditional’ when it starts up.
Thanks for that KP. I am not sure if I am in the cheap seats.
I do think it is worth reminding people of the complexity of online audience measurement. I am not surprised that you are unable to contradict any of the serious points raised, only that I have raised them. I have spent an inordinate amount of time working on the development of online audience measurement systems. All this does is serve to remind me how difficult it is, and how far we all are from what would, in say TV audience measurement, be considered a basic, accurate and, importantly, replicable measurement system.
If you want one idea for a step in the right direction, I am happy to offer one.
If you plan to use a panel to measure online audiences, that panel should be recruited offline using a reasonably (I am not oblivious to cost) pure probability-based recruitment approach. OzTAM wouldn’t dream of recruiting its panel by advertising for households on TV. An unbiased, probability-recruited and representative panel should be the foundation for any measurement. Unfortunately this is rarely the case.
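A minimal sketch of what probability recruitment buys you: when each respondent’s selection probability is known, a design weight of one over that probability projects the sample back to the population. The probabilities below are invented; the point is that a self-selected online panel has no known selection probabilities, so no defensible weights like these can be computed.

```python
# Invented selection probabilities for a probability-recruited panel.
sample = [
    {"id": 1, "p_select": 1 / 10_000},
    {"id": 2, "p_select": 1 / 10_000},
    {"id": 3, "p_select": 1 / 5_000},  # deliberately oversampled stratum
]

# Design weight: each respondent "stands for" 1 / p_select people.
for r in sample:
    r["weight"] = 1 / r["p_select"]
    print(f"respondent {r['id']} stands for {r['weight']:,.0f} people")
```

The oversampled respondent correctly gets a smaller weight; with self-selection, there is no principled way to set any of these numbers.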
Sorry if it seems as if I am casting stones; it doesn’t help that all of the online audience measurement companies’ methods are built of glass.
@Researcher. Thanks for calling it like it is. I too have been closely involved with the development of online audience measurement systems, and I agree that if there is one singularly agreed-upon truth in this industry, it’s that it is very, very complicated.
So much so that it’s hard to imagine that there could ever be a perfect solution. The fundamental problem is that there simply isn’t a measurable connection between an individual and the content that they’ve been exposed to online. Not without violating every privacy policy on the planet anyway.
Therefore, the industry is ultimately going to have to decide which pros and cons of the available technical approaches it can live with, and live without. Obviously, opinions are going to vary and I can only give mine. That said, my opinion seems to line up with that of the majority of other people I’ve spoken to in this industry.
So, for what it’s worth – I think that the methodology being used by Nielsen in MI needed to be addressed, but I think the changes they’re proposing in Hybrid are a step backwards.
My issue, they’re moving from a measured solution that unquestionably had its issues (yes I know they still look at page views from tagged), but was still a measured output that was generally comparable across sites and sectors. They’re replacing this with a sampled output from a panel of users which has been widely criticised for being under represented (overly weighted) in key sectors, as well as skewed to an unknown extent.
As per @Researcher’s comment, it’s the ‘unknown’ that is the biggest issue. There are so many holes in the captured sample data that the corrections the team weighting the data has to make can’t be more than guesses. More concerning, the corrections don’t affect the companies being reported on evenly, and there is no way for anyone to verify the validity of the data.
@KP – As you say, the industry needs solutions, not complainers – I agree completely. My solution: use measured data and make corrections on that to accommodate cookie deletion, multi-device access, etc. It’s based on a more robust initial number, and the estimating that needs to be done (this seems unavoidable for any solution) is against more consistent and predictable errors than trying to account for panel skews and fluctuations.
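That proposal can be sketched as a two-step deflation of a tag-based count. Both adjustment factors below are invented placeholders for what would, in practice, have to come from calibration research.

```python
# Hypothetical correction of measured (tag-based) data. Invented factors.
measured_unique_cookies = 5_000_000
cookie_inflation = 1.3     # assumed: resets/deletion create 30% extra cookies
browsers_per_person = 2.2  # assumed: multi-device ratio, e.g. from a survey

est_browsers = measured_unique_cookies / cookie_inflation
est_people = est_browsers / browsers_per_person
print(f"Estimated unique people: {est_people:,.0f}")
```

The appeal of this route is that the starting number is a census of actual traffic, so the uncertainty lives entirely in the two correction factors rather than in a small, skewed panel.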