Oxford University Press's
Academic Insights for the Thinking World

Evaluating the long-view forecasting models of the 2016 election

In an earlier paper this year, we argued that election forecasting models can be characterized by two ideal types, called short-view models and long-view models. Short-view forecasting models are predominantly based on polls and are continually updated until election day itself; the polls themselves often interview respondents until just a couple of days before the election. Long-view models apply theory and evidence from the voting and elections literature, and make forecasts months before the actual election. Our earlier paper explored the pros and cons of short-view and long-view forecasting, and we argued then that both views have value. What does the 2016 election result tell us about the polls, and about short-view and long-view forecasting?

We start by looking at the short-view forecasts that predicted the Electoral College vote as listed in The Upshot on The New York Times website. All of these short-view models predicted Clinton had at least a 70% chance of winning the Electoral College vote using state and national polls. The Princeton Election Consortium even went so far as to predict a Clinton win in the Electoral College with 99% certainty. The Electoral College vote forecast is where the short-view forecasts all missed the mark.

Because these short-view forecasts were based on state and national polls, the size of their errors can be better understood by scrutinizing individual polls on the eve of the election. Each poll’s margin of error (MoE) at the 95% confidence level is key to judging its accuracy. A 95% MoE means that if the survey were repeated 100 times, we would expect the actual percentage to fall within the estimated MoE in 95 of those 100 surveys. In other words, the MoE gives the range in which the “actual” value should lie, with 95% confidence. If the actual vote for Trump falls within a poll’s margin of error, then that poll was not far off the mark. Take the battleground state of Michigan as an example. In the November Detroit Free Press poll, the reported MoE was plus or minus 4 percentage points, and Trump’s poll result was 38%. According to the poll, then, the “actual” value lay between 34% and 42%. Trump won 47.6% of the vote in Michigan, so the Detroit Free Press poll was not very accurate. Of the five Michigan state polls taken in November, Trump’s vote percentage (47.6%) fell inside the margin of error of only one. If election-eve polls are any indication of overall poll accuracy, there is significant room for improvement.
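The margin-of-error check described above is simple arithmetic, and can be sketched in a few lines. This is an illustrative helper, not code from any polling organization; the function name and the second poll figure (45%) are hypothetical, while the Detroit Free Press numbers (38% for Trump, plus or minus 4 points, against an actual 47.6%) come from the text.

```python
def within_moe(poll_pct: float, moe: float, actual_pct: float) -> bool:
    """Return True if the actual result falls inside the poll's 95% MoE band."""
    lower, upper = poll_pct - moe, poll_pct + moe
    return lower <= actual_pct <= upper

# Detroit Free Press, November 2016: Trump at 38% with a +/-4 point MoE.
# The actual Michigan vote for Trump was 47.6%, well outside the 34-42 band.
print(within_moe(38.0, 4.0, 47.6))  # -> False

# A hypothetical poll at 45% with the same MoE would have covered the result.
print(within_moe(45.0, 4.0, 47.6))  # -> True
```

The same check, applied to each of the five November Michigan polls, is what yields the one-in-five accuracy rate cited above.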

Turning to the long-view forecast models (including what we called synthetic models), we evaluate how well they performed in the 2016 presidential election. These models are based on historical outcomes dating back to 1948 and earlier, but the 2016 election was certainly like no other in recent memory. So how well did these comprehensive long-view models do in their forecasts?

Each model set out to predict the incumbent party’s share of the two-party vote (51.0% as of December 2, 2016). Table 1 lists all eleven 2016 forecasts published in PS: Political Science and Politics by nine different forecasters (or teams of forecasters), and shows that seven of the eleven forecasts were within one percentage point of the reported vote. Another three forecasts were within two-and-one-half percentage points of the actual outcome. Only one forecast missed by more than three points. Our Political Economy model forecast of 51.0 appears exactly right, as of this date. Thus, the performance of the long-view forecasts is remarkable.

Table 1, Political Science Long-view Forecast Models by Charles Tien and Michael S. Lewis-Beck. Used with permission


Table 1 also shows the number of days before November 8 that each forecast was made; the median lead time was 78 days, which makes the long-view models’ accuracy even more impressive. The earliest forecast, by Helmut Norpoth, came 246 days before the election; the latest, by James Campbell, came 60 days before. All of these long-view forecasts were made well before the short-view forecasts settled on their final predictions on the morning of November 8.

Earlier this year we warned that short-view forecasts can go wrong if the polls go wrong. Short-view models are based on the polls until, in the end, nothing else seems to matter. The 2016 polls, especially the state polls, led the short-view Electoral College forecasters astray. The long-view forecast models, relying on political science theory and historical data, and constructed with considerable lead time and only occasional updating, fared quite well this election. We know from past experience, however, that one is only as good as one’s last forecast, and 2020 already looms too close for forecasters to rest easy.

Featured image credit: October 22: Election by Shannon. CC-BY-2.0 via Flickr.
