Toyota’s Blunty contest was onto a winner until the not-so-Clever Film disaster
Before the disastrous backlash against Toyota’s Clever Film Competition led to the company calling off its live social media pitch, a campaign featuring YouTube blogger Blunty was the favourite among consumers, new research suggests.
Toyota set five agencies the simultaneous challenge of building social media campaigns around the Yaris.
From that, one agency would have been selected to win Toyota’s social media account. However, the Saatchi & Saatchi-organised Clever Film competition generated hostile headlines around the world after a misogynistic ad was picked as the winner.
As a result of the furore, Toyota never chose a winning agency.
But word-of-mouth agency Soup conducted a survey of its consumer panel to gauge public reaction to the campaigns. A total of 1042 people took part in the survey, which was conducted late last year.
The most liked campaign was the one led by digital PR agency House Party. It featured a Lego animation created by YouTube video blogger Blunty along with a chance to win a Yaris.
Around 30% of respondents liked the campaign best.
Next came the Clever Film Competition with 17%, narrowly ahead of executions by Iris and Host / One Green Bean on 15% each. A Facebook-based Sydney vs Melbourne contest organised by the now-defunct The Population was last with 9%.
However, none of the campaigns received a great deal of recognition, with 72% of people surveyed unaware of any of the campaigns.
The Clever Film Competition did best for recognition, although Soup notes: “this may be related to the fact it feels like a lot of other competitions so many people misappropriate awareness.”
The survey was conducted by Soup just before the furore broke over the Clever Film Competition.
Additionally, in terms of which campaign made consumers feel most positive about the Yaris, the Blunty competition was again the best performer.
what was the sample for the research? how were they recruited?
aided/unaided?
If 28% of joe schmoes were aware of this campaign – I’d be EXTREMELY surprised.
Hi Ben,
Fifth para – 1042 people – from Soup’s panel.
Hopefully someone from Soup will be able to jump on to answer the other question.
Cheers,
Tim – Mumbrella
Tim did Scott Rhodie put you up to this, good advertising for house party no?
1042 people sourced from where? That’s the main q.
if 10% of the total population were aware of the Clever Film Comp … well … it just seems weird.
Same with Blunty. According to that, 8% of joe schmoes were aware of it. However, on YouTube it had just over 30,000 views. Last time I checked, Australia had around 16m people over 14, so I’m wondering if that means 1.2m Australians or thereabouts were familiar with it. The recognition across the board seems extremely high – 28% awareness is a massive figure and would probably excite big brand marketers who’d spent 8 figures on a campaign (especially if unprompted).
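The extrapolation above can be written out explicitly (a quick sketch using only the figures quoted in this comment, none independently verified):

```python
# Figures quoted in the comment above; none independently verified.
population_over_14 = 16_000_000      # rough Australian 14+ population
blunty_awareness = 0.08              # 8% of respondents aware of Blunty's video

# Naively projecting the panel's awareness onto the whole population
implied_aware = round(population_over_14 * blunty_awareness)
print(implied_aware)                 # 1280000 -- the "1.2m or thereabouts"

# The video itself had only ~30,000 YouTube views, so the implied
# audience is more than 40 times the measured one.
youtube_views = 30_000
print(implied_aware // youtube_views)  # 42
```

That 40-fold gap between implied and measured audience is the source of the scepticism here.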
The question ‘How did it make you feel about Toyota Yaris’ is also a curious one. Not sure a car manufacturer would be too interested in ‘more appealing’ or ‘less appealing’
I’ve seen some pretty fragile research in my time but this seems to be right up there.
@ben
All respondents are from the Soup panel of 60,000 Australians. They’re not representative of the Australian population as a whole. They’re a subset who are more connected socially, both on and offline, than your average joe schmoe. This would probably serve to bump up recognition rates, but their opinions are still perfectly valid, maybe even more so…
Cheers,
Scott
PS Aided
If anyone’s keen for more info, just ask
If 72% of the 1042 respondents were unaware of the campaigns, that leaves roughly 300 who were. 30% liked Blunty the best. My question is whether the 30% refers to the total number of respondents or just the 300 who had seen the campaigns?
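The ambiguity can be made concrete with the figures from the article (a quick sketch; which base Soup actually used isn’t stated in the piece itself):

```python
respondents = 1042
unaware_share = 0.72

# Respondents who had seen at least one campaign
aware = round(respondents * (1 - unaware_share))
print(aware)                      # 292 -- the "roughly 300" above

# The 30% who "liked Blunty best" means very different things
# depending on which base it is calculated against:
print(round(0.30 * respondents))  # 313 people if the base is all respondents
print(round(0.30 * aware))        # 88 people if the base is only those aware
```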
These stats have no credibility as the sample is too small and should not be published.
@ben
Think I answered most of your points in the previous comment
@anthony
The respondents were given links to each campaign before doing the survey (hence the aided awareness) so were then able to judge their preference after having seen them. So all 1042, to directly answer your question
@Con Cerned
Each to their own, but in my experience the difference in results between 1,000 respondents and 10,000 respondents is genuinely pretty minimal.
thanks scott … i think maybe the way some of the numbers are outlined in the article could be misleading.
if all the respondents were given links to each of the campaigns then it answers my questions.
any plans for anyone to do the same study with joe publics and no pre-awareness (ie to get an idea of those not so connected)?
How is this research relevant to what the agencies were trying to achieve? I thought it was to test reach and engagement, not to gauge which one appealed to consumers the most.
OF COURSE a cute lego film will appeal the most, but that isn’t the point.
Hi “Somebody”,
I’d answer that in two ways.
First, no, Scott Rhodie didn’t put me up to it – the first he’ll have known of this story was when he read it.
And secondly, your IP address is a familiar one. I’m not going to out you here, but it’s fair to say you’re not exactly a disinterested observer, are you?
Cheers,
Tim – Mumbrella
Scott. Of course the 1,042 respondents are entitled to their opinion! That’s what we call QUALITATIVE research. It is about garnering opinions from non-representative samples, panels or focus groups.
However, to report QUANTITATIVE data from qualitative research is, in the world of serious qualified market researchers, seen as a heinous crime that does violence against the reality of the situation, irrespective of the sample size.
As a Qualified Practicing Market Researcher (QPMR) of the Australian Market and Social Research Society (AMSRS) I will not be putting any credence in the numbers reported.
Hi Tim,
Must be a combination of boredom and jealousy as I don’t get any press anymore 🙂
Thanks for not “outing me” – the comment was meant more tongue-in-cheek than an actual jibe, hope you saw it that way.
Cheers,
Somebody
I don’t think I’ve ever seen you quite as stern before, John.
But I do like the idea of there being a crime against reality… If we can organise a show trial, would you be willing to pass sentence?
Cheers,
Tim – Mumbrella
Gladly Tim. I’ll try not to be the “Hanging Judge” though!
To explain. As a member of AMSRS and being a QPMR, I have a sworn responsibility and duty to ensure that research is not only conducted but reported to the highest standards. Clearly, I take that responsibility seriously.
Maybe “Mumbrella Watchdog” as a new moniker?
Need some help with the maths here.
Total budget for campaign $15K.
Toyota Yaris: $12,605 (for base line model not shown on ’09 model discount)
Blunty’s payoff, production, agency fee: less than $2,400.
Total: Bullshiet.
Did I miss something? How is that the winner?
the problem with something like this is that these stats will be shared all over the place regardless of the fact they’re completely misleading.
same thing happened with nielsen’s facebook numbers blunder last year.
not enough people seem to question data – which is weird for an industry that supposedly prides itself on robust, data-driven decisions.
Hi Chris,
You wouldn’t usually expect to pay bloggers – many would be offended at the suggestion. You pitch them an idea that helps them create interesting content. Then they’ll do it for free.
It’s often closer to PR than it is an advertising model.
And John,
Please do consider yourself our honorary Mumbrella watchdog (although I always liked to think of you having that role anyhow…)
Cheers,
Tim – Mumbrella
Clearly these responses demonstrate the value of research to the digital debate!
We did not do this research to find a ‘winner’. We did this research to learn more about what works for social media campaigns: the chance to put 5 different social media ideas with the same objective side by side and get people to tell you what they think is very rare. The post campaign method reflects this research objective.
Tim’s article really just shows the tip of the iceberg in terms of insights we gained from this research.
We agree that a sample of opinion leaders is not a nationally representative view but a quantitative sample of over 1,000 does give you a valid and robust result for what opinion leaders think.
‘Winners’ are determined by the brief (which we were not privy to) but this research was about moving beyond ‘reach’ and asking people their opinions about a number of attributes related to the campaign.
So does it mean that Blunty won? Not necessarily.
To ensure impartiality we gave Tim the full results and Tim wrote a ‘who was the winner’ article and that’s totally up to Tim. But we’re happy to share the results with anyone who wants to get in touch to make their own assessment – like most things the devil is in the detail.
Just drop us a line. Scott@thesoup.com.au
As a director at Pollinate, I’m not entirely a disinterested observer, having worked closely with the Soup crowd for a couple of years. That said, I also have 20 years’ experience as a statistician in MR and teach the very courses the likes of John Grono take to become QPMR certified.
To call the cited research qualitative is simply ludicrous (though not nearly as ludicrous as calling a sample of 1,042 too small – I have seen several PhD academic papers published on less than 3% of that!).
A representative sample of a population is representative of the population from which it is sampled, nothing more, nothing less. Scott, and the Soup folks, from what I can see, do not make any claims further than this. Granted, the article may not have been as clear as it could be on this point, but anyone with sufficient knowledge about MR would know to interpret the results within the necessary context.
Perhaps the best lesson is that the proper use of surveys in journalism is a skill that continues to need developing. Of course, it’s also a tender balance between an accurate piece (with all the tedious explanations of samples and methods) and an interesting one 🙂
Even in death, the story still stirs.
None of it means a thing if there’s no continuance of the strategy.
As a blogger I’m offended at the suggestion I’d be offended if you offered to give me money!
Tim,
It would be really interesting if you could feature a story where all agencies nominate a spokesperson to talk about the campaigns – the learnings, the wins, the losses.
I also think your intro is quite misleading: “YouTube blogger Blunty was the favourite among consumers, new research suggests.”
When in actual fact it was a favourite among people who may or may not have even known of or seen any of the campaigns. Rather, the sample were sent links to investigate each and then asked a series of questions.
Honestly – this is a bit strange. It may work for traditional advertising but this is a different beast completely. Clearly the researchers were looking for a way to measure these campaigns but it misses the critical element: the experience.
As mentioned above by someone, the competition was about engagement. So how on earth are these people in the sample group going to comment? They are viewing campaigns that – one would assume (because we don’t know the date the research was completed) – are near complete. They haven’t experienced these campaigns as they have developed – which is half the fun and one of the main functions of social media.
They wouldn’t have received the tweets, Facebook status updates and so on that bring these campaigns to life or even the opportunity of engaging with the promotions.
For example: The Saatchi & Saatchi campaign dropped funny film clips to fans on Facebook (a Star Wars one springs to mind) to spur people on for ideas for their own short films, and One Green Bean’s effort of picking people up in a real Yaris with a Werewolf on the streets of Sydney was a real highlight! A physical interaction that brought the whole thing to life and allowed for a true real world engagement with the product itself.
So giving people a few links to look at when it’s all over is a little bit like giving people pictures of amazing food to look at after everyone has eaten… a bit two-dimensional.
I understand that perhaps this was one of the only options available for research, but let’s not frame it as more than it is. How about a chance for each team to respond?
Cheers,
Saxon
Saxon, agree with a lot of what you say here. The overall results presented do not reflect the experience of the campaign.
To help us understand this better we can look at results based on those who had previously heard of or experienced any of the campaigns (regardless of their level of engagement – which is a more difficult beast to measure). Basically we saw that previous knowledge of the campaign did significantly improve perceptions of it. Some (Blunty) more so than others (Facebook).
Cheers,
Scott