Research industry body to review political polling following Federal election embarrassment
The Association of Market and Social Research Organisations has announced a review of Australian political polling methods to understand why polls incorrectly called the outcome at Saturday’s Federal election and how methods can be improved in the future.
The review comes amid widespread criticism of polling companies, almost all of which called the wrong winner in Saturday’s federal election.
Craig Young, AMSRO President, said “As the election results started to roll in on Saturday night, a lot of Australians would have been asking the question ‘How could all of the polling companies call the Federal Election result incorrectly?’
“Political polling has an important place in Australian society and the reality is that it is not going away. So it is important that polling companies take this opportunity to improve their collective accuracy, because accurate polling underpins the operation of a modern, well-functioning democracy.
“It’s also important for the credibility of the polling companies, as well as the wider market research industry, that the public has confidence in the results of the major polls. The results are taken seriously by political parties and the general public, so as an industry we need to get it right.”
AMSRO will be approaching media organisations and other clients who commission political polling to participate in the review, and will also seek the involvement of companies who undertake their own polling.
“While the detailed terms of reference of the review are still to be worked out, it will no doubt include looking at the sample source and sampling, interviewing methods and analysis techniques used. In terms of the sample sources used, I note that access to a single, comprehensive, national sample source for telephone interviewing, such as the Integrated Public Number Database, is not currently available to the industry. This will no doubt be one key issue for the review to explore,” Young said.
Ipsos CEO Simon Wake, whose company produces the Ipsos Poll, said: “We welcome the review and as an AMSRO member company we look forward to participating. While our published figures on the two party preferred outcome did not accurately predict the final outcome, they were within the poll’s margin of error. We do, however, believe it is important we do better in future.
“Ipsos upholds the very highest standards of polling using interviewer conducted telephone surveying with sampling based on randomly generated mobile phone and landline numbers, coupled with demographic quotas and weighting to Australian Bureau of Statistics population numbers.
“While we strongly believe our approach is robust, we recognise there is work to be done to understand why our results differed from the final election outcomes and how to best address this in future polling. Separate to the AMSRO review, we will be conducting an assessment of our polling over coming weeks with our local team and the global Ipsos polling experts.”
Young said: “It is important to note that in Australia, we have a long history of the major polls usually getting it right. That makes it critical that we look at what might have changed recently in terms of methods employed, sample sources used, and the environment within which polls are conducted, to get to the bottom of what went wrong and how we can do better as an industry.
“Similar reviews have been undertaken in recent years in the US and UK, also in response to the polling results not matching election outcomes, so there are international precedents for what may have happened to polling accuracy, as well as industry responses. AMSRO looks forward to working with the polling companies, and those who commission the polls, so we can share knowledge and try to improve our approaches. I think it’s what the Australian voting public expects and deserves, and is also vital to the perceived credibility of our industry.”
The review will commence this week with the establishment of terms of reference and the formation of a review panel, and is expected to release its report in July.
Maybe people told them they were voting Labor just to amuse themselves. Or maybe only ABC and Guardian folk were polled. Either way it made for a bloody good night and a very sweet victory.
User ID not verified.
Nice to see the polls that have led to half a dozen PMs being knifed by their parties finally being held to account. There’s a new poll every two weeks – you’d think one would have got it a bit closer.
Surely the media, who regard these polls as such an important metric for judging how our politicians are doing, should also be reassessing their reliance on them.
Maybe it’s polling and media companies having an overwhelming metropolitan / upper-middle-class bias – where regional Australia is portrayed as unimportant, ignorant, or lacking any decision-making power.
The writing was on the wall when you have ABC journalists making 200k+ a year with a network of ‘contributors’ from similarly affluent sources telling everyone they are underprivileged and working class. This created a bubble of similarly affluent inner-city left-leaning workers who thought themselves the same. People who have never been inland for more than a week at a time. People who were actually surprised that there are different views outside of cities.
Like many instances of misuse of data, the polls weren’t wrong; most of the MSM just chose to interpret them in a way that matched their narrative that Labor would win. Simplistic national numbers never tell the story on the ground in the handful of seats that determine an election result.
Interesting to know by what criteria you think this was a good night…
I would suspect there might also be a version of the “Bradley effect” at play. People answer polls according to what they perceive to be socially more desirable or accepted as opposed to what they really think. We’ve seen many instances of this already, like Brexit, Trump and multiple right wing parties in Europe outperforming their poll numbers when it came down to the voting booths.
This is extremely hard to adjust for as a pollster and I’m not sure what could be done about it. Perhaps try to measure the sentiment in mainstream media as a proxy for what is socially desirable and then weight results a bit in the other direction, but would that really be more than a dressed-up finger in the air?
https://en.wikipedia.org/wiki/Bradley_effect
100% correct Rob.
I’ve always been concerned that the polls are based on population distribution rather than electorate enrolments.
The AEC does a pretty good job (unlike the US) with electorates ranging from around 70k to 125k enrolled voters, but heavily clustered around 100k. The variations are pretty much due to absolute population count. For example, the NT has around 140k enrolled voters. You either have one seat of 140k or two seats of 70k – so there is no deliberate gerrymandering (unlike the US).
When the result is generally 51-49 to the winner in most elections, weighting the raw data to enrolments rather than population may increase precision. This, however, doesn’t remove the bias of geographic selection, nor the response variance within widely dispersed geographic areas.
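The enrolment-versus-population point can be sketched numerically. A minimal illustration in Python – every figure here (the per-state results and the weighting shares) is invented for the example and does not come from the article or any real poll:

```python
# Hypothetical Labor two-party-preferred results by state, and two
# alternative weighting bases. All numbers are invented for illustration.
raw_2pp = {"NSW": 0.52, "VIC": 0.54, "QLD": 0.46}
population_share = {"NSW": 0.40, "VIC": 0.33, "QLD": 0.27}  # share of residents
enrolment_share = {"NSW": 0.39, "VIC": 0.32, "QLD": 0.29}   # share of enrolled voters

def weighted_estimate(results, weights):
    """Combine per-state results into a national figure using the given weights."""
    return sum(results[state] * weights[state] for state in results)

print(f"Weighted to population: {weighted_estimate(raw_2pp, population_share):.3f}")
print(f"Weighted to enrolments: {weighted_estimate(raw_2pp, enrolment_share):.3f}")
```

Even with identical raw numbers, the two bases give slightly different national estimates – and in a 51-49 contest, a few tenths of a point can change the predicted winner.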
Well done for so perfectly representing the party of hatemongers you voted for – no ideas, no thoughtfulness, no graciousness, no inclusiveness, no intelligence – just sneering self-entitlement and a win at all costs mentality. If there’s one good thing to come out of the election result, it’s that the LNP won’t be able to blame the tanking economy on Labor (though I’m sure they’ll give it a good shot) as the recession we’re now in starts to bite, and with them carrying double the debt of every government since Federation. Unfortunately, everyone else and definitely the environment will suffer as well, but you obviously don’t give a toss, so yay you!
Surely this must remove the Newspoll results from ever being the reason for a change of PM? Stamp “not trusted” all over it!
It might be better to wait for the outcome of the review, but it’s likely there are three issues:
1) Design. It’s clear that the current structure of polling isn’t picking up shifts in voting behaviour – in particular, the shift to minor parties. It’s worth noting that there was possibly also a shift away from the Coalition towards minor parties, but that preferences flowed more to the Coalition than to Labor.
2) Desirability reporting. I’m sure some people didn’t want to admit that they weren’t going to vote Labor. Most polls now look beyond demographics to test attitudes but clearly more work needs to be done on this. Demographics is not destiny…
3) Interpretation. One thing that almost all the commentary so far has ignored is the vital words ‘margin of error’. As someone who works in research (but NOT polling) I’ve watched with frustration as things get reported as facts when actually the numbers are not robust enough to support the assumptions stated. We’ve moved forward in that most articles now mention that results are within the margin of error, but most people read the interpretation and have NO idea what the phrase means. So can we stop reporting results within the margin of error as facts?
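Since the margin-of-error point is simple arithmetic, here is a minimal sketch. It assumes simple random sampling, which real phone polls with quotas and weighting only approximate, so treat the figure as a lower bound on the true uncertainty:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical published poll: about 1,000 respondents, 51% two-party-preferred.
moe = margin_of_error(0.51, 1000)
print(f"51% +/- {moe * 100:.1f} points")  # about +/- 3.1 points
```

At that sample size a reported 51-49 lead is statistically indistinguishable from 49-51, which is exactly why treating a within-margin lead as a fact is misleading.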
More accurate polling is probably going to mean more money spent on doing it … but also a more reflective approach to reporting statistics. It’s not glamorous but it is important. Polls can influence how people vote (I wonder how many people thought they could register a vote for a minor party to show disaffection with both major parties, given a clear result was expected?)
Looking forward to the investigation either way!