Digital attribution too often measures what’s easy, not what’s right
Driving marketing investment through data is key, but can we trust the numbers? Ebiquity's Jonathan Fox explores the complex world of digital ROI.
A growing trend has emerged in advertising: marketers are beginning to question their investment in long-term brand-building activity.
Where advertising once focused on creating a strong, memorable brand through emotive work, more and more advertisers are now chasing faster, short-term results – as noted in Binet & Field’s recent report.
With this industry shift from long-term to short-term, it’s no surprise that more and more media dollars are being pumped into online advertising. Online campaigns can be easily tracked, monitored and measured, right?
Perhaps that’s true if you’re measuring impressions and clicks, but it is much harder to make an accurate link to sales revenue. This is the business metric that matters to the C-suite.
It is vital that advertisers question the data that supports the decision to invest so heavily in online advertising. Is this investment really driving sales?
I recently had the opportunity to interrogate this topic at Mumbrella360 in Sydney, where I explored the complexities of attribution: what it is, how it quantifies which media contribute the most to sales, and how it can mislead us if used incorrectly.
So, what is making advertisers take this path? Advertisers are using digital attribution to measure how successful online advertising is at generating revenue. For consumers who purchased their products online, they track which online media touchpoints those consumers were exposed to, such as paid search, social, online display, email and online video.
But these measurements are typically based on pre-defined rules. Even when credit is allocated using algorithms – the best form of digital attribution – it does not tell the whole story when carried out in isolation.
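To make the pre-defined rules concrete, here is a minimal sketch of two common rule-based attribution models – last-click and linear. The function names, touchpoints and sale value are invented for illustration; real attribution tools implement many more variants.

```python
# Hypothetical sketch of two rule-based attribution models.
# Only *tracked* online touchpoints appear - offline nudges
# (a friend's tip, a shop window) are invisible to both models.

def last_click(touchpoints, sale_value):
    """Assign all credit to the final touchpoint before purchase."""
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[-1]] += sale_value
    return credit

def linear(touchpoints, sale_value):
    """Split credit evenly across every tracked touchpoint."""
    share = sale_value / len(touchpoints)
    return {tp: share for tp in touchpoints}

journey = ["online_display", "social", "paid_search"]
print(last_click(journey, 100.0))  # paid search receives all the credit
print(linear(journey, 100.0))      # each tracked channel gets an equal share
```

Whichever rule is chosen, credit is only ever divided among the touchpoints the system can see, which is exactly the limitation the article goes on to describe.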
For example, say I want to buy a pair of running shoes. I talk to a friend and he recommends trying his brand of shoes and tells me how great they are. On my journey home from work, I see another runner in the same shoes. Perhaps later in the week I walk past a store and see the same shoes displayed in the window. Lots of touchpoints, and none of them media related.
I decide to buy these shoes, so I sit down with my laptop and type the brand name into Google. Up pops the first search result: a promotional link to the brand’s website. I click it and make a purchase.
Did I click it because it was a promotional link? No. Was the link responsible for the sale? Absolutely not, because I had already decided to buy the shoes. I would have gone to the brand’s website whether the promoted link had appeared or not. Those cues or nudges toward buying the shoes happened long before I sat down at my laptop to start my online journey.
In this same way, digital attribution results are misleading advertisers. Looked at in isolation, the results tell them that their digital advertising is driving consumers to buy the shoes, or the insurance, or the car, or whatever it happens to be. The data being collected for digital attribution isn’t representative of the entire customer journey; it falls way short.
In fact, a more appropriate name would be digital misattribution. And it’s possible that these misleading results are prompting advertisers to invest too heavily in digital media.
There’s also the question of the long-term view. How is advertising driving brand-health metrics over time? Which brand health metrics are driving your core business?
Marketing activities that bolster long-term brand building will also drive online sales, but digital attribution typically ignores this. We’re all spending more time online and making more online purchases, but the data I’ve seen suggests online advertising doesn’t influence people as much as we would like to believe. We’re still walking around in the real world, exposed to shop-fronts, trends, TV shows and talking to friends and family.
The only way to know for sure is to look beyond digital attribution and incorporate results from econometric modelling. This is essentially attribution in a wider sense, beyond digital: it takes into account a variety of sales drivers (not just media) and quantifies their impact. This holistic approach accurately measures the impact of all marketing investment alongside other key sales drivers such as pricing, promotions and distribution.
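At its core, econometric modelling is a regression of sales against many candidate drivers at once, media and non-media alike. The toy sketch below, with entirely invented weekly data, shows the basic mechanics: fit sales against TV spend, search spend and a price index by ordinary least squares and read off each driver's estimated contribution.

```python
# Minimal sketch of econometric (marketing-mix) modelling.
# All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
tv = rng.uniform(0, 100, weeks)            # weekly TV spend
search = rng.uniform(0, 50, weeks)         # weekly paid-search spend
price_index = rng.uniform(0.9, 1.1, weeks) # relative price level

# "True" data-generating process for the toy example:
sales = (500 + 2.0 * tv + 1.5 * search
         - 300 * price_index + rng.normal(0, 20, weeks))

# Fit sales = b0 + b1*tv + b2*search + b3*price via ordinary least squares.
X = np.column_stack([np.ones(weeks), tv, search, price_index])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["base", "tv", "search", "price"], coef.round(2))))
```

Note the catch this simplification makes visible: the model can only apportion sales among the drivers it is given. Anything omitted – word of mouth, weather, competitor activity – gets absorbed into the base or misattributed to the included variables, which is why commenters below stress including the full set of market variables.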
Ultimately, this insight is valuable to advertisers for the simple reason that it demonstrates the importance of implementing the right measurement framework. It allows for a conversation to determine the right balance between short and long-term goals. And it also provides a catalyst for the question of whether the right metrics are being measured within the company, because data is easy to take as gospel when sometimes it needs context to tell its story more clearly.
This discussion is set to continue, but the sooner advertisers can see past the digital lens to the wider picture, the sooner they’ll be clear about what’s working for them and where they can improve. And it’s vital that advertisers are collecting the data that’s going to show them the complete picture, and not lead them astray.
As the wise Seth Godin said: “Measurement is fabulous. Unless you’re busy measuring what’s easy to measure as opposed to what’s important.”
Jonathan Fox is Ebiquity’s head of effectiveness for Australia/NZ.
Any article that says econometric modeling accurately measures the impact of “all marketing investment” is just as wrong as someone telling you to take the results of digital attribution as gospel. Digital attribution and econometrics should both have a place in a marketer’s measurement framework, as each has its own strengths and weaknesses – together they give marketers a “best guess” of what worked, using statistics, available data, and a lot of educated assumptions.
The best guess is still better than no idea, all parties just need to be more honest about the accuracy of what they are delivering.
User ID not verified.
Amen.
Nobody likes to admit it, including marketers under pressure to show ROI for every dollar, but you are 100% correct for the vast majority of businesses.
User ID not verified.
Over-engineered analysis of a common problem that most just use probability for right now – “econometrics”
Technology will solve this, but that will be a result of its own evolution, not pressure from misguided marketers trying to justify their ad spend choices
User ID not verified.
Re ‘Technology will solve this’ – curious to know how technology will solve the puzzle of the contribution of offline media, offline conversations and all those tricksy human aspects of decision-making?
User ID not verified.
Spot on Mr. Fox.
When you incorporate only a partial data set, then your attribution will be, by definition, flawed.
This applies equally to digital attribution (as shown above) as well as for traditional media econometric modelling.
For example, way too many models I have seen looked at the client’s brand metric in isolation. Brands act within a market – so ‘the market’ needs to be part of the model. Price (especially the competitor’s) is way too often overlooked. Distribution, stock levels, and competitive advertising and pricing are critical, while the weather, consumer sentiment and so on all have roles to play (though they may end up as variables with insufficient explanatory power and be deemed superfluous).
And also, NEVER confuse correlation with causation, which is a way too common error.
User ID not verified.
Technology might solve it, but in the meantime marketers need to do a better job of managing the expectation that marketing return can be quantified and predicted. In most cases it can’t be done in a reliable way. That’s not to say it isn’t useful to have directional statistical insight, but marketers are creating rods for their backs by promising granular return to the businesses they serve.
User ID not verified.
Agree – technology will help reduce the number of assumptions, but technology is never going to be able to tell you who else was in the car when a radio ad played, who else watched that Youtube clip with you or who told you about a new product.
User ID not verified.
As a large digital media buying agency, we’ve recently been testing the concept of separating digital brand activity (branded PPC, or true brand campaigns) from product activity (any non-brand PPC, or product-based awareness or performance campaigns) at both the floodlight/Facebook/AdWords conversion-tag level and the account/campaign structure level – ideally all tracked within an ad server capable of at least basic attribution models, such as DoubleClick.
While this leads to unduplicated and therefore inflated conversion counts across the two conversion tags, it does allow us to separate the influence of brand from everything else, so as not to steal brand equity when reporting on incremental, non-brand conversions, while simultaneously allowing us to attribute any increase in conversions on the brand tag to pre-existing offline brand equity or any offline media in market. Having these separated at the conversion-tag level means not having to tailor attribution models to account for the effect of brand, leaving us to measure and (in many cases) automatically optimise on both views, according to the best metrics for either perspective, without brand ‘muddying’ the analysis.
While there will, as mentioned, be a larger total reported conversion number through unduplicated reporting, the ability to optimise more meaningfully around the biases of ‘brand’ using attribution will, in our opinion, lead to better ‘real’ (not just reported) performance for the business – not reporting for the sake of reporting, with limited ability to optimise.
And yes, agree that a balance of Econometric Modelling and Attribution is best, though neither is a silver bullet in themselves, or when combined. You still need to use them as a guide to make informed decisions, sometimes without all the data.
For example, EM recently pointed us to a potential deficiency in paid search over time, because we couldn’t track through to offline conversion; investigation revealed a change in tactic that was negatively impacting performance. After addressing this, we saw real business sales begin to return immediately.
User ID not verified.
So if we say attribution and econometrics can’t measure marketing, or even as someone above says (and I paraphrase) marketing return can’t be quantified, what exactly are we as marketers saying?
Is it because the effect is so small? Why exactly do you think it can’t be quantified? If it’s $1.30 vs. $1.20, does that matter? As long as we can say this one is big and this one is small, surely that’s a win and we can use that information?
Otherwise, if marketing wants real dollars to invest but says it can’t measure the impact, that sounds like a bad investment to me.
User ID not verified.
No – if you want to 100% understand the impact of marketing, turn it off and see what happens to sales over time.
The measurement limitations have nothing to do with the size of the effect but because the purchase journey is so complex and many steps can’t be tracked. Therefore any attempt to quantify ROI can only be positioned as a best guess based on all the information available (which as I said, a best guess is still better than no idea).
User ID not verified.
Have you got any serious solutions, though? Turning off marketing would have to run for a long time, as there are lag effects that last for 12 months. Brands that have turned off advertising for years (there are very few case studies) have shown that base sales drop only 12% in the first year, but the decline grows incrementally after that. Do you really want to say you only add 12% to base? And guess what – these drops were proven by the models you say are worthless.
I can hear a lot of complaining about measurement models, that honestly sounds like complaining about models done badly. But what I can’t see is any realistic alternatives put forward other than “trust me”. Which means we go back to the situation where the marketing department and the CMO are not viewed as commercially minded or to be trusted with large investments. Let’s help move it forwards.
PS- I think the current models done well work well enough. Are they perfect? No. Do they need to be? No.
User ID not verified.
The answer is in the comments in the thread. I’ve got a solution that works for me and my business. If you are unable to construct a measurement framework that the business is comfortable with, whose limitations you properly understand so you can properly interpret results – that’s not a lack of commercial acumen, that’s marketers who don’t understand the solution well enough to communicate its impact. Models are rarely done badly; marketers interpret them badly.
User ID not verified.
Agree with many points in the article. Honestly, we do spend a lot on digital, but there is no data that says the sales came in because of our online advertising or social.
User ID not verified.