A closer look at election forecasting models
Nate Silver examines prediction models:
The broader point is that we can get into trouble when we exaggerate how much we know about the future. Although election forecasting is a relatively obscure topic, you’ll see the same mistakes in fields like finance and earthquake prediction in which the stakes are much higher.
In response, John Sides defends forecasting models:
[I]f we look at the models in a different way, they arguably do a good enough job. Say that you just want to know who is going to win the presidential election, not whether this candidate will get 51 percent or 52 percent of the vote. Of the 58 separate predictions that Nate tabulates, 85 percent of them correctly identified the winner — even though most forecasts were made two months or more before the election and even though few of these forecasts actually incorporated trial heat polls from the campaign.
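The hit-rate arithmetic Sides cites can be checked with a quick back-of-the-envelope calculation (a minimal sketch; the individual forecasts aren't listed in this post, only the totals):

```python
# Sides's figures: 58 separate forecasts, 85% of which
# correctly identified the winner of the presidential election.
n_forecasts = 58
hit_rate = 0.85

# Roughly how many forecasts called the winner correctly.
correct_calls = round(n_forecasts * hit_rate)
print(correct_calls)  # → 49
```

So by Sides's accounting, about 49 of the 58 forecasts got the winner right, even months out and without campaign-season trial heat polls.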
Ed Kilgore makes a distinction between “election determinists” who believe “that factors beyond the framework of actual campaigning, candidates, issues, messages, and events largely control outcomes” and “people who pretty much ignore fundamentals, and treat the character and abilities of candidates and their staffs, the thrust-and-parry of campaigns, the salience of particular issues, and the battle for persuasion of voters, as essentially non-determined phenomena that can only be understood and assessed via close ‘insider’ inspection.” He thinks this divide results in a lot of talking past each other:
There’s certainly room for widely varying perspectives, but we’ll all get along better if we try to show our cards—or if you prefer, our paradigms.