from The Conversation
— this post authored by Will Jennings and Patrick Sturgis, University of Southampton
Following the political surprises of 2015 and 2016, there has been much reflection and debate on the accuracy of the polls in the run-up to the impending snap election of 2017. It is fair to say that, although perhaps somewhat unfair on the pollsters, the EU referendum and US presidential election have exacerbated – rather than healed – the widespread loss of public faith in the polls induced by the 2015 general election debacle.
So are the pollsters heading for further ignominy on June 8? Given the substantial-if-narrowing lead the Conservatives currently hold in the polls, this seems unlikely.
Polls are judged first and foremost on whether they correctly indicate which party will form the next government and, as the chart below shows, were the Conservatives not to win an overall majority on June 8, we would be looking at a polling miss of unprecedented magnitude. The largest polling error on record was in 1992, when the Conservative lead over Labour was underestimated by an average of nine percentage points – about the same as the Conservatives’ current polling advantage.
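To make the measure concrete, the sketch below shows how the error on the Conservative-Labour margin is calculated: the lead implied by the polls minus the lead in the actual result. The figures used are hypothetical, purely to illustrate the arithmetic, and are not the real 1992 or 2015 numbers.

```python
# Error on the Conservative-Labour margin: polled lead minus actual lead.
# All figures below are hypothetical, chosen only to illustrate the arithmetic.

def margin_error(poll_con, poll_lab, result_con, result_lab):
    """Polling error on the Con-Lab margin, in percentage points."""
    polled_lead = poll_con - poll_lab
    actual_lead = result_con - result_lab
    return polled_lead - actual_lead

# A final poll showing the parties level, against a seven-point Conservative
# win, would be a seven-point error on the margin.
print(margin_error(poll_con=37.0, poll_lab=37.0, result_con=42.0, result_lab=35.0))  # -7.0
```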
But correctly predicting which party will obtain an overall majority in a relatively uncompetitive election isn’t in itself a very impressive feat. It’s still possible that, when judged on the basis of statistical error rather than picking the winner, the pollsters will fare little better in 2017 than they did in 2015 – if not even worse.
If that happens, it won’t be down to complacency. After the 2015 election, the British Polling Council (BPC) and the Market Research Society set up an official inquiry to work out why the polls had failed so badly. The resulting report concluded that the primary reason for the polling errors was the use of unrepresentative samples.
The pollsters’ recruitment methods meant their final samples included too many Labour voters and too few Conservative ones – and the weighting and adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree. While the inquiry could not rule out a modest late swing towards the Conservatives, initial claims that the polling errors were due to “shy Tories” (respondents who deliberately misreported their intentions) or “lazy Labour” (Labour voters who said they’d vote but ultimately didn’t) did not stand up to scrutiny.
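To see why weighting could not repair the underlying problem, here is a minimal sketch of the kind of adjustment pollsters apply: respondents are re-weighted so that the sample matches known population totals on chosen variables (here, a single hypothetical age split). If Labour voters are over-represented within each weighting cell, the weights leave that bias untouched. The variables and figures are illustrative, not any pollster's actual procedure.

```python
# Minimal sketch of cell weighting: scale each respondent so the weighted sample
# matches assumed population shares on the weighting variable (an age band).
# Illustrative only -- real pollsters weight on several variables at once.

respondents = [
    {"age_band": "18-34", "vote": "Lab"},
    {"age_band": "18-34", "vote": "Lab"},
    {"age_band": "18-34", "vote": "Con"},
    {"age_band": "35+",   "vote": "Con"},
    {"age_band": "35+",   "vote": "Lab"},
]

population_share = {"18-34": 0.3, "35+": 0.7}   # assumed census-style targets

sample_share = {band: sum(r["age_band"] == band for r in respondents) / len(respondents)
                for band in population_share}

for r in respondents:
    r["weight"] = population_share[r["age_band"]] / sample_share[r["age_band"]]

def weighted_share(party):
    total = sum(r["weight"] for r in respondents)
    return sum(r["weight"] for r in respondents if r["vote"] == party) / total

print(weighted_share("Con"), weighted_share("Lab"))
# If Labour voters are over-sampled *within* each age band, the weighted shares
# remain skewed: weights only correct imbalances on the variables they use.
```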
Fixing it
The inquiry made a number of recommendations for changes in how polls are carried out and how their findings are presented to both media clients and the public. It also proposed amendments to the BPC rules on the disclosure and reporting of polls, most notably that pollsters should provide a clear statement on weighting procedures and should detail any methodological changes made since the previous published poll.
The BPC’s official response to these recommendations indicated that it would make the procedural changes to its rules either immediately or during the course of 2017, while it would be up to individual polling organisations to implement the recommendations relating to methodological practice, with a review to follow in 2019. Theresa May’s surprise decision to call an early election means that, understandably, most of the recommendations of the inquiry haven’t yet been implemented.
This is not to say that the pollsters are approaching June 8 with precisely the same methodologies they used in 2015. On the contrary, the polling industry appears to have made a number of changes to its sampling and weighting procedures. Some changes are intended to improve sample composition: recruiting more politically disengaged people into online surveys, extending fieldwork periods, increasing sample sizes and so on.
Other changes involve new quota-setting and weighting procedures: adjusting samples by self-reported political interest, past vote and education; using modelling to estimate the probability that respondents will actually vote; and reallocating “don’t knows” across parties in different ways.
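The sketch below illustrates two of those adjustments in their simplest form: counting each respondent in proportion to an estimated probability of voting, and reallocating “don’t knows” by some rule (here, the party they say they voted for last time). The turnout probabilities and the reallocation rule are invented for the example; the actual models vary from pollster to pollster.

```python
# Toy vote-intention estimate with two post-2015-style adjustments:
#  1) weight each respondent by an estimated probability of actually voting;
#  2) reallocate "don't knows" to a party (here, their reported past vote).
# Probabilities and rules are illustrative assumptions, not a pollster's model.

respondents = [
    {"intention": "Con",        "past_vote": "Con", "turnout_prob": 0.90},
    {"intention": "Lab",        "past_vote": "Lab", "turnout_prob": 0.60},
    {"intention": "Lab",        "past_vote": "Lab", "turnout_prob": 0.80},
    {"intention": "Don't know", "past_vote": "Con", "turnout_prob": 0.70},
    {"intention": "Con",        "past_vote": "Con", "turnout_prob": 0.85},
]

def adjusted_shares(sample):
    totals = {}
    for r in sample:
        # Reallocate undecided respondents using their reported past vote.
        party = r["past_vote"] if r["intention"] == "Don't know" else r["intention"]
        # Count each respondent in proportion to how likely they are to vote.
        totals[party] = totals.get(party, 0.0) + r["turnout_prob"]
    grand_total = sum(totals.values())
    return {party: round(100 * w / grand_total, 1) for party, w in totals.items()}

print(adjusted_shares(respondents))  # {'Con': 63.6, 'Lab': 36.4}
```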
But frustratingly for the pollsters, of course, we will not know if these changes are working until June 9.
Coming together
The 2015 polling inquiry also found that the pollsters had “herded” around an inaccurate estimate of the Conservative-Labour margin, and that this consensus contributed to the collective sense of shock at the election result. The situation in 2017, however, is rather different.
There are suggestions this time that the polls are overstating Labour’s performance, a pattern that has been a consistent feature of UK polling since the general election of 1979. This can be seen in the chart below, which plots the difference between poll estimates and Labour’s eventual vote share by days from the election.
The black line is the average of all polls, while grey lines are poll estimates across individual elections. What the chart shows is that, while previous election polls do converge toward the result over the final three weeks of the campaign, they still tend to overestimate the Labour vote – even on the very eve of the election.
If pollsters continue to adjust their sampling and weighting procedures during the campaign, a belief that Labour will end up under-performing their polling will create implicit incentives to make methodological choices that reduce the Labour share in vote intention estimates. If the received wisdom is correct, this could reduce the polls’ average error – but if recent events have taught us anything it’s that, in politics, received wisdom is often wrong.
In the meantime it’s worth remembering another conclusion of the 2015 polling inquiry: that observers tend to endow opinion polls with greater levels of precision than they are capable of delivering. Polling, after all, is difficult. It involves hitting a moving target by persuading reluctant and reflexive citizens to provide truthful responses to socially loaded questions for little or no return.
Small wonder, then, that the average error on the Conservative-Labour margin between 1945, when political polling in the UK began, and 2015 is in the region of four to five percentage points. As yet, there’s no particular reason to assume 2017 will represent a radical departure from the historical record.
Will Jennings, Professor of Political Science and Public Policy, University of Southampton and Patrick Sturgis, Professor of Research Methodology, Director of National Centre for Research Methods, University of Southampton
This article was originally published on The Conversation. Read the original article.