Why Bad Polls Get So Much Hype

Don't put much credence in too-early and too-unusual general-election trial heats between Clinton and Trump.

A pop quiz: What’s the big news in terms of a general-election matchup between Hillary Clinton and Donald Trump? Is it the many months of evidence that Trump will struggle with all sorts of demographic categories of voters and could lose catastrophically? Or is it the “fact” that Republicans have quickly consolidated behind Trump, thereby erasing Hillary’s lead?

If you chose the latter, you may be a victim of Outlier Polling Hype, whereby a surprising finding or two gets so much attention that it sounds like the truth — or, if you are a Trump-fearing liberal, it’s maybe the Crack of Doom and an impetus to begin examining housing and employment options in Canada. 

Most political junkies realize there are some polling outlets that have what is known as a “house effect” — a more or less systematic tendency to show results bending one way or another to an extent that makes their surveys consistent outliers. Few Democrats, for example, will panic over an adverse Rasmussen poll. But some “house effects” are the product not of partisan or candidate bias, but of methodologies that over time tend to produce outlier results. I really don’t think Gallup in 2012 was shilling for Mitt Romney, even though its polls regularly and significantly overstated his support; the venerable organization made transparent and earnest efforts after the election to analyze and correct its errors. 

It’s also clear that some phenomena — high cell-phone usage, declining response rates, and the increased expense of live interviewing — are making polling more perilous and less scientific than most of us realize. All of this explains why the experts tell consumers of public-opinion research to rely on polling averages, not individual polls, to understand what’s going on politically, and to examine trends rather than absolute numbers. When it comes to polls about distant events, like the November general election, significantly more caution is in order. Some would argue that a general-election matchup poll prior to the party conventions is pretty much useless.

So the current hype about Trump more or less catching Clinton in general-election support should be taken with a shaker of salt and perhaps active disdain. 

In a New York Times op-ed today, political scientists Norman Ornstein and Alan Abramowitz discuss all of the problems with such general-election polls and the methodologies they deploy, and then add this important observation:

When polling aficionados see results that seem surprising or unusual, the first instinct is to look under the hood at things like demographic and partisan distributions. When cable news hosts and talking heads see these kinds of results, they exult, report and analyze ad nauseam. Caveats or cautions are rarely included.

That’s particularly true if these "cable news hosts and talking heads" find validation for their point of view from outlier polls. The fact that Republicans and Bernie Sanders–supporting Democrats have a common interest in showing Clinton doing poorly against Trump adds to the noise, to the point where it’s the only thing many people hear. 

Maybe these polls will turn out to be accurate, but we just don’t know that now. As Ornstein and Abramowitz conclude:

Smart analysts are working to sort out distorting effects of questions and poll design. In the meantime, voters and analysts alike should beware of polls that show implausible, eye-catching results. Look for polling averages and use gold-standard surveys, like Pew. Everyone needs to be better at reading polls — to first look deeper into the quality and nature of a poll before assessing the results.

Alternatively, just be careful about jumping to conclusions.