Polling data is ubiquitous in today’s world, but it is often difficult to assess the accuracy of polls. In a recent paper published in Political Analysis, Kai Arzheimer and Jocelyn Evans developed a new methodology for assessing the accuracy of polls in multiparty and multi-candidate elections. Based on the work reported in their paper, “A New Multinomial Accuracy Measure for Polling Bias,” I posed a number of questions to the authors regarding their methodology and the general question of how the accuracy of polling data can easily be assessed.
Your recent Political Analysis article focused on an examination of the accuracy of polling data in multiparty contexts. In a nutshell, how does your methodology determine the accuracy of polling in the multiparty context?
A decade ago, Martin, Traugott, and Kennedy proposed an accuracy measure that relates the proportionate strength of two parties or candidates in a pre-election poll to their actual performance on election day. In our article, we generalise this two-party approach for multi-party elections, and develop an index which summarises the individual party deviations of a given pre-election poll from the final result of the election. Any individual poll can now be given an overall accuracy score. We also show how this measure, B, can be calculated using standard statistical software, which we have made available through a Stata ado package. The package can be downloaded from the SSC archive, or simply installed from within Stata with ssc install surveybias.
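To give a flavour of the underlying logic, here is a rough sketch in Python. The two-party Martin–Traugott–Kennedy measure compares the odds of one party against another in the poll versus the election result; the multiparty summary below, which averages each party’s absolute log odds ratio against all others, is an illustration in the spirit of B, not necessarily the authors’ exact definition (for that, see the article and the surveybias package). The function names and example figures are invented for illustration.

```python
import math

def mtk_a(poll, vote, party1, party2):
    """Two-party accuracy measure of Martin, Traugott & Kennedy:
    the log of the ratio of poll odds to election odds for party1
    vs. party2. Zero means a perfectly accurate poll; the sign
    shows the direction of the bias."""
    return math.log((poll[party1] / poll[party2]) /
                    (vote[party1] / vote[party2]))

def multiparty_b(poll, vote):
    """Illustrative multiparty summary in the spirit of Arzheimer &
    Evans' B: the average absolute log odds ratio of each party
    against all others. Sketch only -- not the authors' exact
    formula."""
    k = len(poll)
    total = 0.0
    for party in poll:
        poll_odds = poll[party] / (1 - poll[party])
        vote_odds = vote[party] / (1 - vote[party])
        total += abs(math.log(poll_odds / vote_odds))
    return total / k

# Hypothetical three-party example: poll shares vs. actual vote shares.
poll = {"A": 0.42, "B": 0.38, "C": 0.20}
vote = {"A": 0.45, "B": 0.35, "C": 0.20}
print(mtk_a(poll, vote, "A", "B"))   # negative: poll understated A's lead
print(multiparty_b(poll, vote))      # overall deviation; 0 = perfect poll
```

A poll that exactly matched the result would score zero on both measures; larger absolute values indicate larger deviations.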
How should your measure of polling accuracy be used, and how do you see it being applied in elections in 2014? Are there particular types of elections or settings where you believe that polling accuracy is most problematic?
Our aggregate measure of accuracy makes it very easy to compare the overall performance of pollsters, but it is also possible to focus on bias for or against specific parties. As well as so-called ‘house effects’ (see below), accuracy will be problematic if many small parties compete, if respondents are reluctant to reveal their support for candidates that are perceived as extremist, or if there are substantial campaign effects that swing the electorate’s mood. In the article, we demonstrate how the impact of these factors can be assessed using data on the 2012 presidential election in France as an example. Replication data and scripts for this analysis are available from the journal’s repository.
In May 2014, the 28 member states of the EU will hold concurrent elections for the European Parliament, so we expect a bumper crop of pre-election polls. We would like to encourage scholars to download our software, and to apply our measure in as many countries as possible. We would also encourage polling organisations to use the measure in assessing trends in their own polling accuracy – and perhaps that of their competitors.
What issues do you believe account for why some polls in multiparty contexts might be more or less accurate? Does accuracy vary depending on how samples are drawn and on the ways in which respondents are contacted and interviewed, and does your measure aid in understanding the relative accuracy of different polls?
In just about every democratic country in the world, some pollsters have a reputation for ‘leaning’ towards one party or even one political camp, resulting in lower overall accuracy. With our measures, it becomes possible to estimate the size and stability of these house effects. Moreover, house effects will often be the result of mode (e.g. face-to-face vs telephone vs online), non-random sampling, sampling over a very short period of time, or house-specific post-stratification strategies (the ‘secret formulas’ used to adjust raw polling data). If (if!) pollsters publish information on these methodological issues, they can easily be included in a model of polling accuracy, using B as the dependent variable.
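As a toy illustration of that modelling step, one could compare average accuracy scores across survey modes. The sketch below uses invented B scores and a simple group comparison standing in for a proper regression model; the data and the function are hypothetical.

```python
from statistics import mean

# Hypothetical accuracy scores (B) for polls grouped by survey mode;
# lower B = more accurate. All figures are invented for illustration.
polls = [
    {"mode": "telephone",    "B": 0.12},
    {"mode": "telephone",    "B": 0.15},
    {"mode": "online",       "B": 0.25},
    {"mode": "online",       "B": 0.31},
    {"mode": "face-to-face", "B": 0.10},
]

def mean_b_by_mode(polls):
    """Average accuracy score per survey mode -- the simplest possible
    'model' of mode effects on polling accuracy."""
    groups = {}
    for p in polls:
        groups.setdefault(p["mode"], []).append(p["B"])
    return {mode: mean(scores) for mode, scores in groups.items()}

print(mean_b_by_mode(polls))
```

With real data one would of course fit a regression of B on mode, sampling design, fieldwork period, and house dummies rather than comparing raw group means.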
How do you recommend that a consumer of polling data try to better understand the accuracy of a particular poll? Are there simple things that someone can look at in a poll report that might help them assess the accuracy of a poll?
Consumers should indeed look out for a number of simple things. Is the sample size sufficiently large (over 1000 respondents, as a rule of thumb)? Was the sample randomly selected, and what is the sampling frame – all eligible voters (good), some social or regional subgroup (bad), or even a convenience sample such as the subscribers of a website or magazine (useless)? Most pollsters have to rely on some type of multi-stage sampling (for example, they will sample municipalities first, then households within municipalities, then voters within households), but this results in larger sampling errors and therefore lower accuracy than simple random sampling. One final (hugely oversimplified) piece of advice: be wary of online surveys. In most countries, very few (if any) pollsters have the resources and expertise to draw truly representative online samples, relying instead on panel self-selection.
How will you be studying polling in the many elections throughout the world in 2014?
In due course, we would like to pool the numerous surveys for the 2014 European elections to test our new accuracy measure comparatively. But, as we’ve said, we would really encourage country-scholars to use the test on their respective national polls. Another future project is to look at the dynamics of individual polling organisations’ accuracy across consecutive elections in single countries, but this is by definition a long-term commitment. More immediately, we are about to look at the 2013 German Bundestag elections to assess more precisely the degree to which the polls anticipated the ruling CDU bonus and the FDP collapse, the effects of which have still not been resolved at the time of writing. Our respective country interests mean there will be no lack of elections in coming years to apply the B measure to!
Kai Arzheimer is Professor of Politics at the University of Mainz, Germany. He specialises in German and European politics and has published widely on the electoral base of the Extreme Right in Europe. Jocelyn Evans is Professor of Politics at the University of Leeds, UK. He works on voting behaviour in European democracies, and has published extensively on French elections, including forecast models of Far Right voting. Between 2006 and 2011, he was editor of the Hansard-OUP journal, Parliamentary Affairs. As well as working on measures of polling accuracy, Profs Arzheimer and Evans are currently working on modelling geospatial effects in elections, examining the relationship between voter-candidate distance and party choice. Their first analysis of these effects in the 2010 UK General Election recently appeared in Political Geography. Together they are the authors of “A New Multinomial Accuracy Measure for Polling Bias” in the most recent issue of Political Analysis (available to read for free for a limited time).
R. Michael Alvarez is a professor of Political Science at Caltech. His research and teaching focuses on elections, voting behavior, and election technologies. He is editor-in-chief of Political Analysis with Jonathan N. Katz.
Political Analysis chronicles the exciting developments in the field of political methodology, with contributions to empirical and methodological scholarship outside the diffuse borders of political science. It is published on behalf of The Society for Political Methodology and the Political Methodology Section of the American Political Science Association. Political Analysis is ranked #5 out of 157 journals in Political Science by 5-year impact factor, according to the 2012 ISI Journal Citation Reports. Like Political Analysis on Facebook and follow @PolAnalysis on Twitter.
Image credit: Closeup Of Hand Inserting Ballot In Box Over White Background. © AndreyPopov via iStockphoto.