Indeed, PPP and Rasmussen have come to be viewed, by their fans and bashers alike, as the MSNBC and Fox News of polling, respectively. Which means their poll results are almost invariably filtered through a partisan lens. This was never more apparent than in August, after the Missouri Republican Senate nominee Todd Akin stepped in it with talk about “legitimate rape”—causing his fellow Republicans to call for him to drop out. Rasmussen fired up its automatic dialers and quickly put a poll in the field in Missouri, which found the Democratic incumbent Claire McCaskill, who had previously been trailing Akin, now up ten points. Democrats cried foul. “Everyone knows that Rasmussen is a tool of the GOP Establishment in Washington,” a Democratic Senatorial Campaign Committee spokesperson told the Huffington Post. And McCaskill herself tweeted: “Rasmussen poll made me laugh out loud. If anyone believes that, I just turned 29. Sneaky stuff.” Meanwhile, after a PPP flash poll conducted that same week found Akin still leading McCaskill 45 to 44, Republicans smelled a rat. “Anyone suspect that the Democratic polling firm might be trying to get the result they want, to ensure Akin stays in, so that he can get pummeled in November?” the National Review’s Jim Geraghty asked.
PPP is hardly the only polling outfit that’s currently arousing conservatives’ suspicions. In the past month—and especially the last few days—as numerous polls have shown Obama pulling away from Romney, an increasing number of conservatives have begun chalking up the results to a polling-firm conspiracy. “The polls are just being used as another tool of voter suppression,” Rush Limbaugh recently warned his listeners. “They want to depress the heck out of you, and they want to suppress your vote.” A favorite conservative complaint is that pollsters are including more Democrats than Republicans in their interviews—never mind that years of survey research indicate that Democrats do in fact outnumber Republicans and that the partisan breakdown in most polls is driven by how the poll’s respondents identify themselves rather than by the pollster weighting the results to match a predetermined split. One conservative website, UnSkewedPolls.com, goes so far as to take polls from established outfits like NBC and Monmouth University and then reengineer them by adding enough Republicans to their samples so that Republicans outnumber Democrats. The result? A recent Reuters-Ipsos poll, which found Obama beating Romney by five points, gets “unskewed” to show Romney leading Obama by ten.
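The "unskewing" maneuver described above amounts to re-weighting a poll's respondents until the party mix matches the split the analyst prefers. A toy sketch of the mechanics, using made-up numbers rather than any actual poll's data:

```python
# Hypothetical poll: share of respondents and Obama support by party ID.
# (Illustrative figures only -- not drawn from any real survey.)
observed = {                 # party: (share_of_sample, obama_support)
    "Dem": (0.37, 0.92),
    "Rep": (0.31, 0.06),
    "Ind": (0.32, 0.48),
}

def topline(party_mix):
    """Obama's overall support under a given party-ID mix."""
    return sum(share * observed[party][1] for party, share in party_mix.items())

# Topline as the sample actually fell out.
as_reported = topline({p: share for p, (share, _) in observed.items()})

# "Unskewing": impose a mix with more Republicans than Democrats.
unskewed_mix = {"Dem": 0.31, "Rep": 0.37, "Ind": 0.32}
unskewed = topline(unskewed_mix)

print(f"as reported: Obama {as_reported:.1%}")
print(f"re-weighted: Obama {unskewed:.1%}")
```

With these numbers, simply swapping the Democratic and Republican shares moves Obama's topline down about five points, which is why the choice between letting party ID emerge from respondents' self-identification and imposing a predetermined split matters so much.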
And yet, while conservatives’ poll denialism is patently wacky, it’s not as irrational as, say, their climate-change denialism. That’s because, unlike climate science, the science of polling has increasingly, and undeniably, come to be based on a good deal of guesswork. For years, the scientific part of polling science has been based on what’s known as the “random-probability sample.” Pollsters have labored to make sure that every member of the population has an equal chance of being selected, so that a sample of 1,000 people will be representative of the 300 million. “We were all taught this notion that a scientific survey is one where everyone has an equal or known probability of selection,” says Mark Blumenthal, a former Democratic pollster who’s now the Huffington Post’s lead polling analyst. That wasn’t that difficult when more than 90 percent of American households had home telephones and anywhere from a third to a half of those households were willing to answer a pollster’s call.
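The claim that 1,000 people can stand in for 300 million is just sampling arithmetic: for a true random-probability sample, the margin of error depends on the sample size, not the population size. A minimal sketch using the standard formula (the function name is mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    for a question where the true proportion is p."""
    return z * math.sqrt(p * (1 - p) / n)

# For a 50/50 question, 1,000 respondents give roughly +/-3 points --
# and nothing in the formula cares whether the population is
# 300 million or 3 billion.
print(f"n=1000: +/-{margin_of_error(1000):.1%}")
print(f"n=100:  +/-{margin_of_error(100):.1%}")
```

The catch, as the paragraphs that follow make clear, is that the formula assumes every member of the population had an equal chance of ending up in the sample, an assumption that collapsing response rates make harder and harder to defend.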
That is no longer the case. “We’re in a world right now where it’s impossible to find the perfect thousand,” says Leve. Part of the problem is the declining response rate. In 1997, the Pew Research Center found that typical poll-response rates were 36 percent. By 2003, that figure had fallen to 25 percent. And in a study released in May of this year, Pew found that response rates had plunged to just 9 percent. “Most of the surveys you read about in the newspaper are getting response rates between 5 and 25 percent,” says Blumenthal, “which means we’re looking at the opinion of those who are willing to be surveyed.”
Or those we are willing to survey. Automated dialers are prohibited by law from calling cell phones, and, given the cost of making the call (two to three times as much as reaching a landline), live-operator pollsters are reluctant to call cell phones, too. In the past, this omission was not considered an insurmountable obstacle to conducting a good poll. Voters who can be reached only by cell phone tend to be younger than the average American, but also, counterintuitively, poorer and less white, and therefore disproportionately likely to vote Democratic. A decade ago, they were still small enough in number that pollsters who excluded them could generally correct for their absence by weighting the responses of those young and African-American voters they reached on landlines. Even as recently as four years ago, it was estimated that only 18 percent of adults owned a cell phone but no landline, and Pew found in a postelection study that the difference between surveys based only on landline interviews and those that included cell-phone respondents was “smaller than the margin of sampling error in most polls.”