Research body to investigate behavior of McCrindle Research after Media Watch expose
The Australian Market & Social Research Society has launched an investigation into the behaviour of one of its members after concerns were raised on Monday’s Media Watch program.
The ABC1 program dedicated much of the show to looking at the activities of Mark McCrindle and his company McCrindle Research.
Surveys press-released by McCrindle Research have been featured in hundreds of Australian media stories. One, carried out on behalf of Reader’s Digest, which named Colgate as Australia’s most trusted brand, was treated with scepticism by Mumbrella readers when it was published last year.
Media Watch offered evidence that on many occasions McCrindle Research had press released surveys with much smaller response bases than it claimed.
Last week, McCrindle uploaded a video to YouTube talking about his methodology. Media Watch said that prior to broadcast McCrindle had issued “denials and legal threats but very few explanations”.
After the allegations were broadcast, McCrindle issued an apology to clients which was obtained by Media Watch saying: “In the early years of McCrindle Research, when releasing internal, unpaid research, the methodology line referring to the number of people surveyed would sometimes record the number of people sent the survey rather than the number of surveys completed. Almost two years ago this was recognised by us to be inadequate and erroneous and since then we have only made reference to the number of completed surveys received.”
Mark McCrindle currently appears on the AMSRS members list.
AMSRS executive director Elissa Molloy told Mumbrella: “We are currently undertaking an investigation. However any member is entitled to a presumption of innocence during that process.”
She said that the investigation was examining whether the AMSRS code of professional behaviour had been breached. She said: “If that is determined it can include suspension or expulsion.”
She said the sanction had been exercised before but “not for a long time”.
At the time of posting McCrindle Research had not responded to Mumbrella’s inquiry.
Busted!!
User ID not verified.
McCrindle sounds like (edited by Mumbrella for legal reasons).
User ID not verified.
Potentially getting tossed from AMSRS? Oh the humanity! 🙂
User ID not verified.
Research body to investigate mUmbrella spelling behavior?
User ID not verified.
It was with interest that I read your article about McCrindle Research and Media Watch, and noted that McCrindle had undertaken the Reader’s Digest Trusted Brands research for us this year. You also note that readers were sceptical about the results at the time of the survey.
I would like to draw to your attention that the research conducted on our behalf has not been under investigation by Media Watch, and your update note at the base of the Trusted Brands story from July seems to imply that our research is under question. This is a misleading comment and I would ask you to clarify that statement in your update note to reflect that our research has never been under question.
We were also concerned when we saw the Media Watch piece, but can assure your readers that the research was conducted exactly as we stated in our media release. McCrindle Research was commissioned by Reader’s Digest to undertake a three-stage research project in early 2011.
Part one was an initial online scoping survey involving 169 people. Respondents drawn from a nationally representative panel were asked, unprompted and unaided, to nominate their most trusted brand across 23 broad categories. Results were then cross-referenced with previous Most Trusted surveys to ensure no high-performing brands from 2010 were excluded.
The full survey was conducted online and the results were based on the feedback of 1147 survey respondents between the ages of 18 and 65 and representative of gender, age and state distribution. This number is beyond the minimum required for a statistically valid sample.
To further explore the results from the quantitative survey, four focus groups were conducted; two in Sydney and two in regional New South Wales. These sessions were viewed by Reader’s Digest advertising and editorial staff and/or transcribed. They provided qualitative insights into the nature of trust and provided editorial content around the results.
Mark McCrindle’s statement also indicated that the issues with his company’s surveys were with the free surveys it undertook around two years ago. The company claims it became aware and altered its techniques “almost two years ago”. The statement also specified that surveys commissioned by clients were not in any way compromised.
Our readers, and all the winning brands can be assured that the research results were a true and accurate reflection of Australians at the time of survey.
User ID not verified.
Fiona, I infer you work for Reader’s Digest.
A sample of 1,147 is at the level I would consider “quick and dirty”. It could be worse, but it’s at my absolute bare minimum for an indicative exercise. (What is the limit you refer to, by the way?)
Online surveys are more popular amongst certain socio-economic groups. This appears to be emerging in your results when compared with more robust longitudinal studies.
If I was to undertake this exercise myself, I would likely engage a wider single-source panel so that some history, continuity and robustness can be applied. Various research, media and advertising groups have excellent local and international examples.
It’s stretching things to honestly call it “a true and accurate reflection of Australians at the time of survey” – a snapshot, if you will – then frame the results as an annual survey.
As for McCrindle, you pays your money and takes your choice.
User ID not verified.
AdGrunt – thanks for your feedback. We regularly undertake and commission research at national and international levels, so if it is something you undertake, I’d love to chat offline (not sure how to do that without posting my details online). We have been undertaking this research for many years now, and the results year to year are consistent regardless of the agency undertaking the research. Brands fluctuate according to media issues they may have faced, but the same brands come up year after year.
750 respondents is considered statistically valid in Australia, insofar as any agency we have commissioned has informed us, so 1,147 well exceeds that number. But like I said, I would welcome the opportunity to chat offline.
User ID not verified.
It’s been a delight to see focus groups get their comeuppance in the media for their use by politicians. Interesting that marketers haven’t made the connection that if these groups end in tears for gutless politicians, why would the outcome be any different for marketers looking for something to blame if it all goes wrong.
I recently met a guy who regularly gets 200 bucks to take part in corporate focus groups. He says it quickly becomes obvious what the researcher is trying to get them to say. After an hour, they tell the researcher whatever he/she wants to hear, so they can all go home. And it’s not rare. In fact it happens time after time.
So why pick on Mark McCrindle? As far as I’m concerned Media Watch should be investigating the whole stinkin’ research industry for the crock it really is.
User ID not verified.
I watched the program last week and wondered how no one had questioned him before… finally he has been caught in the act!
User ID not verified.
QUOTE: “In the early years of McCrindle Research, when releasing internal, unpaid research, the methodology line referring to the number of people surveyed would sometimes record the number of people sent the survey rather than the number of surveys completed. Almost two years ago this was recognised by us to be inadequate and erroneous and since then we have only made reference to the number of completed surveys received.”
COMMENT: Recognised as “inadequate and erroneous” almost two years ago? Elsewhere, that was recognised decades before.
Further, it’s not so much the number surveyed as their representativeness, whether they were asked appropriate questions and whether the answers were properly analysed and reported. McCrindle’s methodology as described doesn’t much illuminate us about these points.
User ID not verified.
@AdGrunt you make sense as almost always (!) but in this case wouldn’t a sample of 1,147 Readers Digest readers constitute a census these days?
User ID not verified.
In attempting to grow his business rapidly by specialising in PR surveys, it appears Mark McCrindle has probably destroyed it.
User ID not verified.
Fiona,
I’m not here to flog anything except good marketing. I’ve pretty much given you the solutions as I see them.
As I said, 1,147 is pretty slim, but it depends on methodology. I’d expect the 750 number to come with some qualification around sampling, confidence levels and margin of error. To a naked eye, some sampling errors appear to be emerging.
Larger, consistent samples, which allow for longitudinal cohort analysis will usually deliver more cogent results. A conjoint analysis could reveal even more but is quite the investment.
Pays money, takes choice.
User ID not verified.
Just watching Sky, where an acceptable figure in the US for being representative of the population is 1,000 (the poll in question is The Hill poll). The fact is quant and qual provide guidance and are not a magic eight ball.
So simple. If you use research as your only basis, then on your own head be it.
User ID not verified.
Fiona, I wouldn’t worry about what AdGrunt writes – he is clearly deluded. The sample and methodology you have outlined for both the quant and qual stages of this project all appear perfectly sound.
I am really not sure what he means when he refers to ‘single source panels’ and ‘robust longitudinal studies’. Does he really mean a panel or a single-source sample? Does longitudinal refer to the changing behaviour of the same respondents (panel) or an ever-changing sample over a long period of time? It sounds to me like he is someone who is flogging a ‘single source’ media research product.
Any reasonable ad hoc custom media researcher would be more than comfortable standing behind the work you have done. I have seen masses of far worse research work paraded on this website as PR. I think your description of it being “a true and accurate reflection of Australians at the time of survey” is fine. A political pollster would be happy to claim this from a similar sample, so why can’t you?
It sounds like you engaged a research company who engaged in some questionable (to put it very mildly) practices in the past. I have never heard of anyone quoting the sample as the number of people invited to respond to a survey. I could never condone this kind of behaviour.
It does, though, seem that the survey conducted for Reader’s Digest stacks up in terms of method. Which is more than can be said for some of the research quoted here.
User ID not verified.
A little off topic…
I recently moved home and took out an Aussie Post redirect service. I expressed interest on the form I filled in to hear from companies who could help me with my move. I have since received a Reader’s Digest prize draw…?
Very strange. I have forwarded this to ACMA to look into. Not sure if it is Aussie Post abusing their list services or RD?
As for McCrindle 77% of statistics are pinches of salt…
User ID not verified.
Researcher. You started so well. Perhaps re-read what I wrote. The research appears acceptable (just), but without any explanation of the confidence level, confidence interval or the sampling methodology, it’s a hard call to make. I’m surprised you didn’t question that… unless you know the specifics… as you work for… McCrindle or Reader’s Digest. Or you’re talking out of your arse.
But then you ruin it all by saying that “A political pollster would be happy to claim this from a similar sample so why can’t you.” That marks you out as being from the McCrindle end of the research spectrum. If not McCrindle itself.
If you’re sad that I’ve mentioned the gaping flaw in online panel sampling and resulting flimsy premises, then tough. Online surveys are what they are, but that isn’t consistent, reliable or impervious to gross sampling error.
And now I can add Single Source Research salesman to my list of alleged agendas, along with tobacco shill, pizza exec, rabid atheist, cola exec, hurt creative, John Grono’s long-lost twin, forest burner and green denier.
I prefer Bullshit Detector.
User ID not verified.
Fiona, I can only second Researcher above – AdGrunt has no idea what he/she is talking about when it comes to research.
The difference in the margin of error between 750 interviews and 1,000 is +/- 3.6% versus +/- 3.1%, ie not very much. 1,000+ respondents just gives more comfort as a larger number for the non-research literate, or allows more analysis of sub-groups to be done.
As for conjoint analysis revealing even more – maybe if you wanted to examine how people chose brands, but no use at all in telling you which brands are most trusted. AdGrunt, stick to whatever it is you know something about, as your research knowledge is sadly lacking.
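For readers who want to check those figures: they match the standard worst-case margin-of-error formula for a proportion, z × √(p(1−p)/n), with z = 1.96 for the 95% confidence level and p = 0.5. A minimal Python sketch, assuming a simple random sample (which online panels only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) margin of error for an estimated proportion
    from a simple random sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes discussed in this thread
for n in (750, 1000, 1147):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f}%")
# n = 750: +/- 3.6%
# n = 1000: +/- 3.1%
# n = 1147: +/- 2.9%
```

Note this only quantifies sampling error; it says nothing about panel selection bias or question wording.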
User ID not verified.
Pretty sure the next Media Watch investigation will look at use of utterly irrelevant statistical jargon in an attempt to sound like you know what you’re talking about. Some good examples from at least one poster in these comments.
User ID not verified.
Behavior
Behaviour
Sorry. It’s just the wrong color.
User ID not verified.
Interesting story about the validity of research and sampling… let’s hope all creative/media agencies have seen this report, as I know for a FACT that many of them (not naming names) use exactly the same processes for creating “consumer insights” then presenting to clients.
McCrindle Research is not alone.
User ID not verified.
Scott, if you don’t understand the relevance of the jargon, then you don’t understand the relevance and accuracy of what is at hand. The nub here.
First & Second Researchers. For an apparently broad survey, with multiple facets, fragmentation and influences come to the fore, so scraping a sample isn’t the ideal.
Since we don’t have any aspect of the research brief to hand, it’s a little tricky to be objective.
Perhaps I’m laying too high a bar for what is effectively a beauty contest – PR puff.
I can’t imagine any brand manager will be using this honour in any internal claim. But it will get RD some ads from the grinning winners and some endorsement no doubt.
User ID not verified.
AdGrunt you should be an expert bullshit detector as it appears to gush forth from you so often.
I am not McCrindle and
User ID not verified.
Sorry about the abrupt end above. As I was saying
I am not McCrindle and have never worked for him or Reader’s Digest. I had never heard of him until I read this article.
Just to clarify, I am no great fan of online access panels as a sample source. I appreciate that all they represent is the universe that was exposed to their recruitment techniques and nothing more. They have, however, been universally adopted as a research methodology in Australia (more so than in any other developed market) and appear to produce relatively accurate data. I am wholly uncomfortable with the use of online access panels as a source of sample for undertaking media research, particularly where it seeks to measure anything to do with online behaviour or comparisons in media consumption between online and other media, because of the obvious bias. This kind of research appears on these pages too often, and seems to generate little criticism.
This kind of brand research that Reader’s Digest has commissioned seems to work pretty well online. I have done parallel tests of offline samples and online samples and they do seem consistent in terms of the relationship between the brands’ results. We would all love to spend inordinate amounts of money undertaking research using the purest methods; unfortunately that is not always possible or practical. For all of the faults of this research practitioner, of which there appear to be many, I am not sure the work done for Reader’s Digest deserves fierce criticism. It seems to reflect what is considered best practice (within financial constraints) for this kind of research.
I think we would all like to live with AdGrunt and the fairies in a world where money is no object; unfortunately I have to dwell in the here and now.
User ID not verified.
I’ve been following this discussion for a while now and I am motivated to ask why so many of these comments are submitted on an anonymous basis? To my mind that’s (almost) as bad as doing dodgy research. What is there to hide? If you have an opinion on what constitutes good/bad sampling/research practice/whatever, why not simply say it? Or not say it, if you don’t want to be associated with your opinion?

I won’t comment on the current case at hand, since what I have read on this site seems to be largely hearsay, whatever the rights or wrongs. But I can say, on one of the issues that has been raised, that there is no one correct answer to what constitutes an adequate sample size; it depends on what you are trying to achieve. Equally, it’s not difficult to show that CATI panels can be no better/no worse than online panels w.r.t. the representativeness of the population being sampled. Every sample is representative of something. And, regardless, there are sooooooooo many other sources of survey error besides sampling error that have so far not been mentioned (as just one example, what were the questions asked, and did they make sense to the respondents?).

[I make these remarks, btw, from the viewpoint of being a qualified statistician with more than 30 years’ experience in market research survey design, analysis and reporting.] I’m happy to be shouted down, but at least you know who I am.
User ID not verified.
Researcher & Scott M – I have been clear.
It is my view. I set a higher bar than the lowest possible choice. I also recognise that you pays your money and takes your choice.
User ID not verified.
Research questionnaires are often weighted with questions to suit an agenda.
“What brands of cereal do you prefer out of a, b and c?” – then it is revealed that after asking 1,000 people what cereal they ‘eat’…
The best research is open ended.
It’s like tv audience figures. I watch tv and I have never, ever had my tv monitored…
Agendas and false claims are rife in media. I love the digital age, which is slowly cleaning out the trash in the gutters…
User ID not verified.
@adgrunt Pretty comfortable in the understanding of the jargon actually. As you rightly say though, I don’t understand the relevance of you using it here.
User ID not verified.
@Scott, so what, if not statistical accuracy and methodology relevance, is the topic at hand?
User ID not verified.
@adgrunt Put bluntly, some of the stats jargon you are using (conjoint, for example) is irrelevant to any of the research in question. It’s an easy mistake to make, but probably not one you want to make whilst taking a stance of expertise.
User ID not verified.
So you can’t see how conjoint would enhance this?
It would be an expensive undertaking, but far better value than a crappy online panel beauty contest.
Again, you pays your money, you takes your choice.
Do let me know if you’re struggling to see how a conjoint might work here.
User ID not verified.