Making sense of measurement
On Friday, Mumbrella reported on a conference panel curated by the Interactive Advertising Bureau discussing advertising fraud. Panellist Michelle Katz cited an 80% fraud rate from an overseas study – putting it at odds with a 4% figure stated in an IAB report into what it labels "invalid traffic", launched at the event.
In this guest post, IAB research director Gai Le Roy argues that it is misleading to compare the two.
It’s fair to say that as an industry we probably don’t do ourselves any favours when it comes to measurement – it is complex and there are many points of view. In fact, that’s why we scheduled the IAB Refresh Panel at the Programmatic Summit last week with senior panellists from the four key industry areas: marketers, agencies, tech companies and publishers. It’s all part of the new IAB mission to simplify and inspire.
The goal for the panel was to bring together differing views and perspectives to help drive dialogue across the digital value chain on hygiene topics like viewability, ad blocking and IVT (invalid traffic). They’re tough subjects to address, predominantly because achieving simplicity of measurement in areas such as ad fraud and IVT requires rigour. It was a great discussion, but it did lead to a misleading Mumbrella headline last Friday.
One of the data points referenced in the story came from a very small UK study, conducted two years ago, that looked at ad clicks. It was then compared to the newly released IAB Invalid Traffic (IVT) Benchmarks, which aggregated three months of Australian data from three independent, MRC-accredited vendors measuring inventory quality. Aside from the questionable rigour of the UK study that was referenced, the story was actually comparing two totally different data sets.
