Predicting the outcome of a campaign
In an opinion piece that first appeared in Encore, Adam Ferrier says while there are numerous tools for testing ads, they have limitations and he’s looking for a better solution.
I recently wrote an article suggesting that no-one really knows for certain if an ad or a campaign is going to be successful or not – none of us.
Now the good folk at Millward Brown have Link, a copy-testing tool with many vocal detractors (a common criticism being that all its benchmarks were born in the passive world of communications). Ipsos have the (weirdly) named ASI Big*Idea.
The neuromarketing salesmen would have us believe they know how the brain thinks. The executive creative director has their almighty confidence and ego shouting ‘make this or else’. And let’s not forget the humble qualitative researcher with their flippy book of cartoon images.
Interestingly, most of the people above think their method is right and the others are wrong.
Now, I was telling a client the other day about what I perceive to be the limitations of quantitative advertising pre-testing and he challenged me. He said: “Adam, you may be right, and I may be prepared to agree with you. But show me a better way. Show me how I can have confidence that it’s likely the idea or campaign that is being put forward will work.” Anyone want to help me answer the question (see what I do for you Gav)?
Does anyone know if an advertising campaign idea will work? What’s the best way to determine likelihood of success?
Adam Ferrier is a consumer psychologist and the founder of Naked Communications.
This story first appeared in the weekly edition of Encore, available for iPad and Android tablets.
I don’t think there’s any such thing as a “best” way to determine the likelihood of a campaign’s success in advance, Adam … and even post-campaign determination of success can be fraught with subjectivity in assessing all of the potential factors that might have influenced its success or failure.
It’s an interesting, and never-ending, left brain conundrum in advertising and marketing – made even more complex with the expansion of potential consumer touch-points of varying levels of engagement and penetration.
If we were all 100% honest with ourselves, we’d admit that the real predictor of success is a combination of insight (brand, consumer & marketplace) that would ideally have a level of quantitative research to steer the wisdom, EXPERIENCE, genuine clarity and understanding of the idea, possibly some qualitative litmus-testing on the idea (not execution!), … and the ability to be flexible in monitoring and adapting it as you release your campaign into the world.
Sadly, I think the practice of needing some kind of concrete “proof” to fall back on and justify even the smallest of decisions overrides these truths – and the work produced is the poorer for it.
User ID not verified.
Link and all the other pre-testing methodologies have been designed to assess TV ads and ‘campaigns’ that presumably have a beginning and an end … and I wonder, in the new era of conversation and the growing irrelevancy of ‘burst’ style flighting, whether many clients are getting genuine value out of that process. Or are they developing TV ads to satisfy the corporate ‘rules’ of the test, rather than questioning whether a traditional campaign structure and a TV ad is the answer they need at all?
User ID not verified.
… one further thought on my earlier comment.
If anything, one-on-one in-home and/or in-market, qualitative research is the closest to “best”, as it finds consumers in the right frame of mind and more likely to provide an accurate read on the idea being presented.
It’s far more accurate than bringing respondents to a cold, anonymous, corporate room in the city with a one-way mirror to conduct focus groups – but unfortunately for clients this is often too costly and time-consuming for their liking.
As for quantitative research to pre-test campaign ideas or executions – having commissioned, formulated and been on the participant side of quantitative research, I think it’s incredibly ill-equipped to reach respondents in the “right” frame of mind, or to capture responses accurate enough to be useful … particularly when using storyboards, sketches or stealomatics to convey an intent.
The growth in online surveying for conducting this kind of research seems to have resulted in little more than a breed of serial respondents, mostly after a few easy dollars to click whatever boxes they can to get to the end of the survey quickly.
User ID not verified.
Adam, I agree with you that anyone saying they can predict the outcome of a campaign has been drinking their own snake-oil.
But I think you misrepresent the neuroscience. In the work I have seen, the biometrics indicate which part of the brain is “firing” and at what parts of the TVC. It generally doesn’t purport to say whether those hot spots are better or worse than norms (though I have noticed a trend to such comparisons creeping in – wrongly, in my opinion), but simply that they are the ‘trigger’ spots and/or ‘flat’ spots of the ad. It is up to the researcher, agency and advertiser to hypothesise why, and what they can do about it.
Cheers,
User ID not verified.
I’d bet on the ECD’s opinion.
There’s a hell of a lot of data now showing that creatively-awarded campaigns are more effective.
Whereas the recent data I’ve seen on Link implied that work which did well in pre-testing actually underperformed in the marketplace.
I think it’s touching that you want to find a scientific method to help your client know in advance what ads will work. TV ads (for example) are like little films, it should be obvious we’re not talking science here. Hollywood doesn’t know which films are going to be popular.
I find it a bit weird that marketers always want ads to be scientifically proven in advance to be successful. Their own products aren’t, and indeed often bomb.
But what can you do? You make the case. Why the insight it’s addressing is valid. Why the way it’s being brought to life will be engaging. You make the case and you try to convince the stakeholders. That’s business. And that’s advertising. It’s not science though.
User ID not verified.
Yep, co-create ads with very carefully selected representatives of the target market. Integrate your consumers (your real ones, not the idealised ones you think are your target) into the very DNA of your brand or company.
Around 20,000 research papers and 80,000 hours of consumer insight in my career thus far show that an ad is but one part of a complex web, so the best approach is to break the web and build it up again with great consideration and thought. At the very heart must sit our consumer, and everything radiates out from that central consideration!
User ID not verified.
Most ads are only indirectly trying to achieve success.
Surely nobody believes that shouting at consumers drives success? But the format of ‘shouty’ retail ads has remained unchanged for decades. No amount of testing, neuroscience or even generational change at agencies and clients has had any impact on the formula. What does that tell you?
Many mainstream ads generally seek awareness – or noticeability – which is hardly what you’d call success. This kind of work urges ads to have a hook – so down the pub people can say “remember that ad with Grandmaster Flash in the car” – but what’s the ad for?
Some measure success by virility – ‘Dumb Ways to Die’ being a recent viral smash.
But will it stop anyone behaving stupidly near trains? Or will they just have a theme tune to sing while they’re being run over?
Occasionally you see an ad that is explicitly trying to grow business based on a compelling view of the product and how to positively enhance consumer behaviour towards it.
Walkers ‘Sandwich’ campaign is a good example.
This is a proper attempt at success. It can therefore be properly analysed and properly tweaked during the various phases of development.
User ID not verified.
@Bruce Banner – thank you for the new use of the word “virility”. I shall be borrowing that, if you don’t mind!
It’s fair enough to point out your “shouty” retail advertising reference – although I fear what it tells me is how afraid many major retail clients in Australia can be of change, or of truly understanding the intelligence of their consumers – which is why the smarter ones mostly arrive here from abroad.
A few years ago we proposed an alternate, more engaging approach to retail advertising for a large retail client under pressure. Their years of identical, repetitive price-and-product work were providing predictable sales but showing a slow decline in impact. The new approach still included prices, but also incorporated brand values that communicated something deeper than essentially saying “we’re a place to come and buy the same stuff you’ll find elsewhere, but probably for a few cents less”.
They weren’t sure about making a change, but were happy to test it in case they were questioned from above. We arranged focus groups and canvassed responses at management and retail level. I also presented an overview of the alternate approaches successfully deployed by retailers in similar categories locally and abroad.
The fresh approach came up trumps every time, but it was all to no avail. The client was too nervous to make a change “because product and price is the way it’s always been done, and I don’t want to be the one to change it”, and was uncomfortable at even running it in a test market.
We’d misjudged our client’s willpower, but at least we’d tried, I suppose. Sadly, in a market like Australia, the imagination and competitiveness required in some retail categories just doesn’t exist to make a new approach feel necessary.
Just because it’s so common doesn’t make it the only and “best” way to communicate in this space.
User ID not verified.
There’s an interesting Ipsos study that indicates creative is responsible for about ¾ of all ROI variance. It comes from a very traditional point of view and the findings are open to interpretation, but it’s interesting nonetheless.
I take out of it that the key predictor to campaign success is the creativity of the core idea, along with the messaging and media that are used to bring the core idea to market.
Some excerpts that touch on the Ipsos results:
1. Creative is king
When looking at advertising and promotion spend, it’s easy to assume that, because media comprises such a high proportion of overall spend, it must be the most important factor. In fact, creative has a disproportionate influence on the success or failure of an ad campaign. Ipsos ASI’s global advertising database shows that creative quality accounts for about three quarters of variance when explaining differences in ad recall levels. Weak creative rarely earns good recall based on heavy media. So, despite the high cost of buying media, the ‘creative’ is key for driving success.
2. Ads do not wear-in
Although TV ads do have long-term brand equity-building potential, the most marked impact is in the short term. A strong ad will achieve high levels of consumer recall within the first burst of spend. A poor performing ad will not. It is wishful thinking to hope that an ad will ‘wear-in’ on the flawed principle that ‘a bit more spend’ will surely have an impact. An ad that does not achieve good recall in the first burst of spend signifies that it is simply not engaging enough – whether because of its creative style or because of how its message is couched. It is more prudent to ditch weak creative as quickly as possible than to hope that the media spend will lift it to success.
Some links:
http://www.edee.gr/files/White.....anning.pdf
http://www.ipsos.com/asi/sites.....isKing.pdf
User ID not verified.
Neuroscience can make a positive contribution to marketing, but not the neuro-babble being touted by some claiming to work in the field. Any company that claims it can measure ‘unconscious emotions’ but requires a respondent to perform a ‘conscious act’ to measure that ‘unconscious emotion’ is talking total nonsense. Neuroscience isn’t the problem, the snake-oil salesmen who claim to trade in it are.
User ID not verified.
The flippy cartoon book was a very funny comment, and I would add that it doesn’t get any better when the cartoons are carried online to get a larger sample size – a result that is rubbish because of the method, however statistically significant, is just a pig with lipstick!
Part of the problem is tying the definition of ‘campaign’ success to clear success measures that are actually measurable. Is it about awareness (product vs brand)? Is it about sales tactics? Is it about sharing new information? Or is it just putting something out there to remind active customers, potential customers, forgotten customers and so on?
Even if it’s a multi-channel execution there still needs to be some kind of measure of success beforehand, as long as the right attribution is made about how much each channel contributes to the success measure – particularly if the outcome is sales-driven.
Does qual work? – a bit;
Does neuro work? – a bit;
Does quant work? – a bit;
Does luck work? – a lot!
User ID not verified.