Some Polls Are Just Bad

[Photo caption: Rafael Sanchez, 90, and Michelle Green vote in the US presidential primary election, June 7, 2016, at Echo Deep Pool in Los Angeles, California. While averaging poll results makes sense, not all polls are created equal. Photo: Michael Owen Baker/AFP/Getty Images]

In an earlier post on overreaction to polls showing Donald Trump doing notably well against Hillary Clinton in certain battleground states, I mentioned in passing a couple of Rasmussen polls that were unique among June and July surveys in showing Trump ahead nationally. They affect the polling averages, of course, though many people discount them or take them with a grain of salt because the firm that produces them is famously conservative. But there's a nonpartisan reason to exclude them from one's data set entirely: They are robopolls that do not sufficiently compensate for their legal inability to call cell phones. Any polling sample limited to landline users will massively undersurvey young voters, and hence tend to produce results more favorable to Republican candidates. (Yes, Rasmussen tries to correct for this increasingly large problem with a supplemental online survey, and no, it isn't the only pollster with this issue; the Democratic firm Public Policy Polling has it too, though PPP seems to have a better record for accuracy.)

There are less blatant but still important shortcomings in other polls that bear watching. The most obvious are very small samples and very narrow polling windows (beware one-day polls!). As we get closer to Election Day, polls of "adults" or even "registered voters" may not accurately reflect the actual electorate. Weighting results for demographic accuracy can cause problems if it is done inaccurately or not at all. And surveys from fly-by-night firms, or from academic institutions that rarely do polling, are worth taking with a grain of salt.
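Weighting, the last fix mentioned above, is conceptually simple: compare each demographic group's share of the sample to its share of the expected electorate, and scale that group's responses up or down accordingly. Here is a minimal sketch; the age groups and all of the numbers are hypothetical, chosen only to illustrate a landline-heavy sample:

```python
# Minimal demographic-weighting sketch. Group names, population shares,
# and sample counts are hypothetical, purely for illustration.
population_share = {"18-29": 0.20, "30-64": 0.55, "65+": 0.25}
sample_counts = {"18-29": 50, "30-64": 500, "65+": 450}  # respondents per group

total = sum(sample_counts.values())
# Weight = (share of electorate) / (share of sample)
weights = {g: population_share[g] / (n / total) for g, n in sample_counts.items()}

# A landline-heavy sample like this one underrepresents young voters,
# so their responses get a large upweight:
for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
```

Note the catch this sketch makes visible: when a group is badly underrepresented, its few respondents carry very large weights, which amplifies the noise in their answers; weighting corrects the average but inflates the variance.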

In the end, though, the best guide to the BS level in polls is whether the results just don’t make sense. The Atlantic’s Derek Thompson offers a solid test:

Since 1980, no Democratic presidential candidate has won less than 80 percent of blacks; 55 percent of Hispanics; or 40 percent of whites. (A tiny exception: In 1992, Clinton got 39 percent of the white vote, and Bush got 40 percent; Perot did exceptionally well among white voters.)

It would be nice to call this the 80-60-40 Rule, but history doesn't always cooperate with heuristics. Democrats have won less than 60 percent of Hispanic support twice since 1980: Carter in 1980 (56 percent) and Kerry in 2004 (56 percent).

That Kerry number in the exit polls was suspiciously low (sparking some controversy), and Carter was in a three-way race. But whatever: Clearly any poll showing Clinton winning less than 55 percent of the Hispanic vote against Donald Trump is questionable. Yet one of the polls we’re all talking about today, from McClatchy-Marist, does just that, pegging Clinton at 52 percent in that demographic. As Thompson says, that should raise “a tiny red flag.”
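Thompson's test amounts to checking a poll's crosstabs against historical floors, and it is simple enough to mechanize. The sketch below encodes the post-1980 floors from the quote above (80 percent Black, 55 percent Hispanic, 40 percent white support for the Democratic nominee); the function name and the non-Hispanic numbers in the example are hypothetical, with only the 52 percent Hispanic figure taken from the McClatchy-Marist poll discussed here:

```python
# Post-1980 historical floors for Democratic presidential support,
# per Derek Thompson's heuristic quoted above.
HISTORICAL_FLOORS = {"black": 80, "hispanic": 55, "white": 40}

def red_flags(dem_crosstabs):
    """Return the demographic groups where a poll's Democratic share
    falls below its post-1980 historical floor."""
    return [group for group, floor in HISTORICAL_FLOORS.items()
            if dem_crosstabs.get(group, floor) < floor]

# Only the 52% Hispanic number is from the McClatchy-Marist poll;
# the other crosstabs here are made up for illustration.
print(red_flags({"black": 88, "hispanic": 52, "white": 41}))
```

A flagged group doesn't prove the poll is wrong; it just marks a result that would be historically unprecedented and deserves scrutiny before it moves your average.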

Most pollsters get something wrong now and then; even Iowa's much-admired Ann Selzer called the wrong winner in this year's Iowa Republican caucuses. What can be really confusing is when BS findings happen to cluster, creating the illusion of a trend. That's when looking at averages, and waiting for later results, really comes in handy.

Some Guidelines for Spotting BS Polls