“The. Polls. Have. Stopped. Making. Any. Sense.”

On the Friday after the Democratic convention, Tom Jensen tried to reach out and touch 10,000 Ohioans. He wanted to ask them, among other questions, whom they planned to vote for in November: Barack Obama or Mitt Romney? This sort of thing is easier—and harder—than you might think. As the director of Public Policy Polling, Jensen has at his disposal 1,008 phone lines hooked up to IVR (interactive voice response) software that enables PPP to make 400,000 automated calls a day. All Jensen needs to do is feed the 10,000 phone numbers into a computer, record the series of questions he wants to ask, press a few buttons, and voilà: He has a poll in the field. That’s the easy part. The hard part starts with getting people to answer the phone. Beginning that Friday night around six and then five more times over the course of the next two days—in the mornings, afternoons, and evenings—PPP called those 10,000 Ohioans; by Sunday night at eight, only 1,072 of them had been reached. Still, for Jensen’s purposes, that was sufficient, and he got to work assembling his poll.

And that’s where things get even more difficult. The 1,072 Ohioans who participated in PPP’s poll were, as is the case with almost every poll taken today, older and whiter than the electorate. As a result, Jensen decided to give more weight to certain respondents’ answers. “If the whole world was releasing unweighted polls,” he says, “Mitt Romney would be heading to an easy election.” For instance, although African-Americans accounted for just 7 percent of the respondents to PPP’s poll, Jensen believes—based on census data, past elections, and the current political environment—that black voters will make up 12 percent of the Ohio electorate come November. So Jensen multiplied his African-American respondents’ answers by 1.5. Similarly, only 7 percent of the respondents were under the age of 30; since Jensen projects young people will make up 14 or 15 percent of Ohio’s electorate, he multiplied his 18-to-29-year-old respondents’ answers by two. After some additional statistical tinkering, Jensen had his poll, and a little past ten on Sunday night, PPP released the results.
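To make the arithmetic concrete, here is a minimal sketch, in Python, of the kind of reweighting Jensen describes. The group sizes and multipliers track the figures above; the candidate splits within each group are invented for illustration, and the groups are treated as non-overlapping to keep the example simple.

```python
# A minimal sketch of demographic reweighting, assuming the figures described
# above: roughly 7 percent black respondents (weighted up by 1.5) and 7 percent
# under-30 respondents (weighted up by 2) in a 1,072-person sample. The
# candidate splits within each group are hypothetical, and overlapping
# membership (young black voters) is ignored for simplicity.

respondents = {
    # group: (respondents, share for Obama, share for Romney)
    "black":         (75,  0.93, 0.04),
    "under_30":      (75,  0.60, 0.35),
    "everyone_else": (922, 0.44, 0.51),
}

multipliers = {"black": 1.5, "under_30": 2.0, "everyone_else": 1.0}

def topline(data, weights):
    """Return the weighted (Obama, Romney) shares, in percent."""
    total = obama = romney = 0.0
    for group, (n, o, r) in data.items():
        w = weights[group] * n      # weighted number of respondents in the group
        total += w
        obama += w * o
        romney += w * r
    return 100 * obama / total, 100 * romney / total

print("Unweighted: Obama %.1f, Romney %.1f" % topline(respondents, {g: 1.0 for g in respondents}))
print("Weighted:   Obama %.1f, Romney %.1f" % topline(respondents, multipliers))
```

With these made-up splits, the unweighted sample is roughly tied while the weighted one shows a clear Obama lead, which is the effect Jensen is pointing to when he says unweighted polls would have Romney cruising.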

Once upon a time, polls came and went without much fanfare or even notice. That time is gone. Today, a good portion of Americans plan their lives—or at least their Twitter feeds—around the latest political numbers. As every good political junkie knows, each day at 9:30 a.m. Eastern time, Rasmussen Reports releases its daily national tracking poll; three and a half hours later, Gallup comes out with its own. Wednesdays are typically when Quinnipiac University, the New York Times, and CBS unveil their “swing state” polls; on Thursdays, it’s NBC, The Wall Street Journal, and Marist College’s turn to share their “battleground state” polls. And Sunday nights are for PPP—a three-person public-opinion-research firm in North Carolina that produces upwards of 800 polls a year. “Sunday’s a dead news day,” Jensen says of his poll-release strategy, “so people who are living and breathing this presidential election are just sitting around all day nervously waiting for PPP’s latest poll to come out.”

Jensen does his best to feed those anxieties. Like a record-company executive who leaks his best band’s new single, he begins dropping hints about PPP’s poll results as soon as the data starts coming in. “Things definitely looking good for Obama in the polls we started tonight,” he tweeted that Friday, a few hours after he sent the Ohio and four other polls into the field (or rather through the computer system). By Saturday morning, he was telling PPP’s more than 40,000 Twitter followers that those polls were “looking like … 2008.” And on Sunday night, a few hours before he posted the Ohio results, he tweeted this tidbit: “[L]ooks like Obama leads there by more than 2008 margin of victory.” So when Jensen revealed that PPP found Obama leading Romney 50 to 45 in Ohio—0.4 percent better than Obama had performed against John McCain in 2008 and, more important, two points more than Obama had led Romney in a PPP poll of Ohio in August—it wasn’t exactly a surprise.

That didn’t stop all hell from breaking loose. Democrats celebrated the result—one of the first pieces of evidence that the president had received a bounce from their party’s convention. Republicans, for their part, vented their spleen. Many accused PPP (which does work for Democratic candidates and liberal interest groups) of bias—of tweaking its formulas to produce a desired result. On Twitter, a parody account, Partisan Policy Polls, deadpanned: “Ohio voters favored Gov Romney 52-47. A follow-up question of ‘Why are you so racist?’ resulted in a switch to 50-45 lead for Pres Obama.”

Nate Silver of FiveThirtyEight. (Photo: Christopher Anderson/Magnum Photos/New York Magazine)

Then the analysis and debate began, an ad hoc crowd-sourced inquiry into whether PPP’s new numbers were useful predictive scientific findings or political propaganda. This effort included both a quasi-literary evaluation of its questions (What did it mean that 15 percent of Ohio Republicans believed Romney deserved more credit than Obama for killing Osama bin Laden?) and close scrutiny of its methodology (Was it significant that Democrats made up 41 percent and Republicans only 37 percent of the poll?).

The acknowledged master and leader of this analytical effort is Nate Silver. On his FiveThirtyEight blog at the New York Times website, Silver tried to make sense of the PPP poll, and scores of other ones, as he converted their numbers into one of his own: his trademarked FiveThirtyEight forecast that puts a specific numerical value on Obama’s and Romney’s chances of victory in November. On the morning after PPP showed Obama leading by five points in Ohio, and several other state and national polls found similarly positive results for the president, Silver put the president’s chances of reelection at 80.7 percent: “[T]‌he polling movement that we have seen over the past three days represents the most substantial shift that we’ve seen in the race all year, with the polls moving toward Obama since his convention,” he wrote.

And yet, for all the data constantly streaming in from polling firms, and all the data analysis being spit right back out by people like Silver, the polling industry has never been less confident in its ability to reduce a series of interviews to a number that is an accurate reflection of the opinions and future behavior of the populace. Some days, the polls—which are conducted by scores of firms, from established multimillion-dollar corporations to Podunk PR shops with P.O. boxes—present such wildly varied numbers it’s as if they’re examining two different countries. Other days, the results do align, but with clarity come accusations of bias by whoever happens to be shown to be losing. Mostly, this fall, that has been Romney, causing many Republicans to heatedly call into question the entire polling enterprise.

PPP’s number was quickly buried under piles of new numbers, which displayed an alarming inconsistency. Silver’s crystal ball grew cloudier. He started to downgrade Obama’s chances—to 78.6 percent, then to 76.2, then to 72.9. Finally, on a Wednesday afternoon in late September—a day on which more than twenty national and state presidential polls were released—the normally sober Silver seemed to morph into Howard Beale as he tried to reconcile the results of two new polls, one from Marquette University showing Obama beating Romney 54 to 40 in Wisconsin and the other from Rasmussen showing Romney beating Obama 48 to 45 in New Hampshire. “There is no plausible universe in which Mr. Obama wins Wisconsin by fourteen points but loses New Hampshire by three,” Silver later wrote. “Following the polls on Wednesday reminded me of the aphorism: ‘If you don’t like the weather in Chicago, wait five minutes.’ ”

Hence Silver’s mad-as-hell tweet at 1:27 that afternoon: “The. Polls. Have. Stopped. Making. Any. Sense.”

After Obama, Silver may have been the biggest winner of the 2008 elections. A statistician who hadn’t yet turned 30 (and who, for his day job in Chicago, wrote and edited for Baseball Prospectus), Silver began the campaign as one of the hundreds of anonymous, unpaid “diarists” on the Daily Kos website. There, writing under the pseudonym “poblano,” he dissected the political polls in meticulous, downright obsessive detail—sorting the good ones from the bad ones and bringing a level of empirical rigor to his analysis that was heretofore unknown in the world of political punditry. Where most commentators were content to frame the Democratic primary as a contest between Obama’s call for change and Hillary Clinton’s appeal to experience—and made their predictions as to who would prevail based on which message they felt was more potent—Silver was offering up stuff like:

“The basic technique here is multiple regression analysis. I took a look at a whole number of independent variables, and tried to gauge their effect on one dependent variable: Obama’s two-way vote share. By ‘two-way vote share,’ I mean the proportion Obama got of the (Obama + Hillary) votes; essentially we’re throwing the Edwards, Richardson, Biden, etc. votes out. So in New Hampshire, Obama’s two-way vote share is 48.3 percent, and Hillary’s is 51.7 percent—much higher than their multi-way vote share.”

And using it to make uncannily accurate forecasts—projecting, for instance, that Obama would win 833 Super Tuesday delegates, which was just fourteen delegates off Obama’s actual haul that day. It was an approach that resonated with a new group of young, web-savvy political junkies who favored charts and graphs over platitudes and clichés (and many of whom, like Silver himself, favored Obama over Hillary). “Poblano” gained enough of a following that, in March 2008, he abandoned Daily Kos, and eventually his pseudonym, to start his own website, FiveThirtyEight.com (538 being the total number of votes in the Electoral College). He also came to the attention of the Obama campaign, which, as Sasha Issenberg reveals in his new book, The Victory Lab, shared its internal polling with Silver, who signed a confidentiality agreement with the campaign so that he could analyze the data. Before long, there was a Silver-worshipping Facebook group—There’s a 97.3 Percent Chance That Nate Silver Is Totally My Boyfriend—and TEAM NATE SILVER T-shirts. If Shepard Fairey’s HOPE posters spoke to the Obamanauts’ ids, Silver’s regression analyses tickled their superegos. His legend only grew when, on Election Night, he accurately predicted the results in 49 states and Obama’s popular vote within 1.1 percent.
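For readers curious about the mechanics, the “two-way vote share” arithmetic in the Daily Kos passage above, and the regression built on top of it, look roughly like this in code. Only the New Hampshire vote totals correspond to the 48.3/51.7 split Silver cites; the other contests and both demographic predictors are invented placeholders, not Silver’s actual inputs or model.

```python
# A rough sketch of the "two-way vote share" calculation described above, plus
# a bare-bones multiple regression on top of it. The New Hampshire totals
# reproduce the 48.3/51.7 split Silver cites; every other row and both
# demographic predictors are invented placeholders.
import numpy as np

# (contest, Obama votes, Clinton votes, black share of electorate, under-30 share)
contests = [
    ("NH",      104_815, 112_404, 0.02, 0.18),   # real totals; demographics illustrative
    ("state_A", 120_000,  80_000, 0.25, 0.15),   # placeholder
    ("state_B",  90_000, 110_000, 0.05, 0.12),   # placeholder
    ("state_C", 150_000, 140_000, 0.10, 0.20),   # placeholder
    ("state_D",  70_000,  95_000, 0.03, 0.10),   # placeholder
]

# Two-way share: Obama / (Obama + Clinton), throwing out all other candidates.
y = np.array([o / (o + c) for _, o, c, _, _ in contests])
print("NH two-way share: %.1f%%" % (100 * y[0]))          # -> 48.3%

# Multiple regression of two-way share on the independent variables.
X = np.array([[1.0, black, young] for *_, black, young in contests])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, black coefficient, under-30 coefficient:", coef)
```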

While Obama departed Chicago for the misery of Washington’s partisan gridlock, Silver moved to Brooklyn and has spent the past four years enjoying his newfound celebrity, which is no longer just confined to the world of stats and politics. He has landed on the Details “Mavericks” list, Out’s “Out100,” and Time’s “Time 100.” Penguin paid him a reported $700,000 advance for two books, the first of which, The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t, hit shelves last week. Most important, in 2010, the New York Times signed Silver to an unusual three-year licensing agreement so that it might host his FiveThirtyEight blog. The onetime hobbyist now has the imprimatur of the nation’s preeminent news organization—and, though its reporters might think otherwise, essentially leads its politics coverage.

Although the “celebrity pollster” has existed ever since George Gallup correctly predicted Franklin Delano Roosevelt’s victory over Alf Landon in the 1936 presidential election, Silver has created for himself a new archetype: the celebrity polling analyst. It’s a field that’s all but certain to get more crowded. “I think there’s space in the market for a half-dozen kind of polling analysts,” Silver says. “All I know is that I have way more stuff that I want to write about than I possibly have time to.”

In one respect, our need for Nate Silver, and those like him, is obvious. With so many polls coming out each day—in 2008, there were 1,687 state-level polls of the matchup between Obama and John McCain—someone has to keep track, to study them the way a radiologist might an X-ray. But the rising demand for trustworthy polling analysis also reflects something disturbing about the data itself. The central problem is that this prototypically modern science is being disrupted by new technologies, which have created a flood of new firms and new methods. “We’re in sort of what I would call polling’s dark age,” says Jay Leve, who runs the polling firm Survey USA. “We’re coming out of a period of time where everyone agreed about the right way to conduct research, and we’re entering into a time where no one can agree what the right way to conduct research is.”

One reason consensus was easier to achieve back then is that, until about twenty years ago, almost all political polling was done by only a dozen or so firms. The state of the technology essentially dictated the limited number of players—it was a matter of money. Polling involved employing live operators to call potential respondents, ask them questions, and record their responses. This was incredibly expensive. But in 1990, Leve, a former newspaper reporter who’d gone to work for Citibank on improving its ATMs and home-banking operations, had a brainstorm: What if the same IVR technology banks used to let people manage their accounts from home was used to collect public opinion? Instead of punching No. 1 to make a withdrawal or punching No. 2 to make a deposit, the person would punch No. 1 to indicate a preference for Candidate A or punch No. 2 to indicate a preference for Candidate B. For one thing, it would mean you could conduct a political poll with IVR for about a tenth of what one with a live operator cost.

Leve founded Survey USA as the first robo-pollster, and in his wake have come many more. Some, like Survey USA, are generally well regarded. But IVR has made the barriers to entry in the polling business so low that pretty much any public-relations or political-consulting firm can get into the game, whether it has any polling expertise or not.

Still, at a time when political campaigns are the greatest American spectator sport, the hunger for horse-race numbers has never been greater. So any poll, no matter how slapdash, is almost certain to get attention. While many prominent national media outlets, including the New York Times, the Washington Post, ABC, and NBC, refuse to report the results of automated polls on general principle, robo-pollsters don’t need them. “The Internet means we don’t have to go through CNN or anybody else to present my data to the public,” says Scott Rasmussen, who runs the robo-polling giant Rasmussen Reports. What’s more, given the partisan nature and intense political focus of cable news these days, the robo-pollsters are able to broadcast their numbers far and wide. “NBC won’t talk about our polls,” says PPP’s Jensen, “but Rachel Maddow and Ed Schultz will talk about our polls all night long on MSNBC.” Similarly, Rasmussen’s polls are regularly featured on Fox News, where Rasmussen himself is a frequent guest. “I don’t really worry too much what Chuck Todd thinks,” Rasmussen says.

Indeed, PPP and Rasmussen have come to be viewed, by their fans and bashers alike, as the MSNBC and Fox News of polling, respectively. Which means their poll results are almost invariably filtered through a partisan lens. This was never more apparent than in August, after the Missouri Republican Senate nominee Todd Akin stepped in it with talk about “legitimate rape”—causing his fellow Republicans to call for him to drop out. Rasmussen fired up its automatic dialers and quickly put a poll in the field in Missouri, which found the Democratic incumbent Claire McCaskill, who had previously been trailing Akin, now up ten points. Democrats cried foul. “Everyone knows that Rasmussen is a tool of the GOP Establishment in Washington,” a Democratic Senatorial Campaign Committee spokesperson told the Huffington Post. And McCaskill herself tweeted: “Rasmussen poll made me laugh out loud. If anyone believes that, I just turned 29. Sneaky stuff.” Meanwhile, after a PPP flash poll conducted that same week found Akin still leading McCaskill 45 to 44, Republicans smelled a rat. “Anyone suspect that the Democratic polling firm might be trying to get the result they want, to ensure Akin stays in, so that he can get pummeled in November?” the National Review’s Jim Geraghty asked.

PPP is hardly the only polling outfit that’s currently arousing conservatives’ suspicions. In the past month—and especially the last few days—as numerous polls have shown Obama pulling away from Romney, an increasing number of conservatives have begun chalking up the results to a polling-firm conspiracy. “The polls are just being used as another tool of voter suppression,” Rush Limbaugh recently warned his listeners. “They want to depress the heck out of you, and they want to suppress your vote.” A favorite conservative complaint is that pollsters are including more Democrats than Republicans in their interviews—never mind that years of survey research indicate that Democrats do in fact outnumber Republicans and that the partisan breakdown in most polls is driven by how the poll’s respondents identify themselves rather than by the pollster weighting the results to match a predetermined split. One conservative website, UnSkewedPolls.com, goes so far as to take polls from established outfits like NBC and Monmouth University and then reengineer them by adding enough Republicans to their samples so that Republicans outnumber Democrats. The result? A recent Reuters-Ipsos poll, which found Obama beating Romney by five points, gets “unskewed” to show Romney leading Obama by ten.

And yet, while conservatives’ poll denialism is patently wacky, it’s not as irrational as, say, their climate-change denialism. That’s because, unlike climate science, the science of polling has increasingly, and undeniably, come to be based on a good deal of guesswork. For years, the scientific part of polling science has been based on what’s known as the “random-probability sample.” Pollsters have labored to make sure that every member of the population has an equal chance of being selected, so that a sample of 1,000 people will be representative of the 300 million. “We were all taught this notion that a scientific survey is one where everyone has an equal or known probability of selection,” says Mark Blumenthal, a former Democratic pollster who’s now the Huffington Post’s lead polling analyst. That wasn’t that difficult when more than 90 percent of American households had home telephones and anywhere from a third to a half of those households were willing to answer a pollster’s call.

That is no longer the case. “We’re in a world right now where it’s impossible to find the perfect thousand,” says Leve. Part of the problem is the declining response rate. In 1997, the Pew Research Center found that typical poll-response rates were 36 percent. By 2003, that number had fallen to 25 percent. And in a study released in May of this year, Pew found that poll-response rates had plunged to the single digits, at just 9 percent. “Most of the surveys you read about in the newspaper are getting response rates between 5 and 25 percent,” says Blumenthal, “which means we’re looking at the opinion of those who are willing to be surveyed.”

Or those we are willing to survey. Automated dialers are prohibited by law from calling cell phones, and, given the cost of making the call (two to three times as much as reaching a landline), live-operator pollsters are reluctant to call cell phones, too. In the past, this omission was not considered an insurmountable obstacle to conducting a good poll. Voters who can only be reached by cell phone tend to be disproportionately younger than the average American, but also, counterintuitively, poorer and less white and therefore disproportionately likely to vote Democratic. A decade ago, they were still small enough in number that pollsters who excluded them could generally correct for their absence by weighting the responses of those young and African-American voters they reached on landlines. Even as recently as four years ago, it was estimated that only 18 percent of adults owned a cell phone but no landline, and Pew found in a postelection study that the difference between surveys that were based only on landline interviews versus those that included cell-phone respondents was “smaller than the margin of sampling error in most polls.”

Today, some demographers think that perhaps a majority of households either don’t have or don’t answer a landline. In other words, ignoring cell phones risks ignoring more than half of America. “I don’t know how you can in good conscience release polls in this day and age without that group factored in,” says Leve, who, in addition to robo-polling machines, now spends money on live operators so that he can reach cell-phone users, too. Survey USA, however, is the exception: many firms, including PPP and Rasmussen, continue to exclude cell phones from their surveys, especially in state-level polls.

But let’s say you’re a polling firm that has decided to spend money on cell phones. You still need to confront the bigger question: What to make of your results? No firm publishes its results without some sort of adjustment, but nobody agrees on what the proper adjustment should be. Should the cell-phone respondents represent 25 or 30 or 40 percent of a pollster’s sample? Considering that response rates for cell phones are even lower than those for landlines, at what point do the weights applied to cell-phone users, in order to get them to account for, say, 35 percent of a poll’s sample, produce unstable results? “You could ask a lot of different people, ‘Well, how do you combine your cell phones and your landlines?’ and the answer is, ‘Oh, it’s a real delicate art,’ ” says Leve. “No one knows the right way to do this right now.” “It’s almost a miracle that this stuff is still projective,” says Blumenthal. “The whole idea that we know the probability of selection for every adult is a little bit of a fiction.”
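The “unstable results” worry has a standard quantitative expression: the more unevenly respondents are weighted, the smaller the survey’s effective sample size becomes. A minimal sketch using the Kish approximation follows; the cell-phone and landline counts below are hypothetical, not any firm’s actual design.

```python
# Why heavy weighting produces "unstable results": uneven weights shrink the
# effective sample size, via the standard Kish approximation
#   n_eff = (sum of weights)^2 / (sum of squared weights).
# The sample composition below is hypothetical.

def effective_sample_size(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Suppose 1,000 interviews: 150 reached on cell phones, 850 on landlines,
# and the pollster wants cell-phone users to count for 35 percent of the poll.
n_cell, n_landline = 150, 850
w_cell = 0.35 / (n_cell / 1000)          # ~2.33: each cell respondent counts for more
w_landline = 0.65 / (n_landline / 1000)  # ~0.76: each landline respondent counts for less

weights = [w_cell] * n_cell + [w_landline] * n_landline
print("nominal n: 1000   effective n: %.0f" % effective_sample_size(weights))
# Roughly 760 -- as if about a quarter of the interviews had never happened,
# which is why pollsters fret about how far they can push these weights.
```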

Most pollsters weight their responses using some combination of data from the census and previous elections’ exit polls—the latter of which, of course, are polls themselves, and which require pollsters to quantify their speculations about how the current political climate compares to the last one. A few firms, most notably Rasmussen, weight by party identification. (That’s why Rasmussen, which currently structures its polls so that Republicans account for 37.6 percent of respondents and Democrats for 33.3 percent, is now alone among pollsters in showing a tight presidential race.) And some pollsters just basically wing it, weighting their polls based on their hunches, like the pollster who decided that even though black turnout in Michigan has historically been between 12 and 14 percent, it’ll be 8 percent in 2012—an assumption that, in an August poll, tied Romney with Obama, against all other evidence.
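In code, weighting to a target like party identification is just a ratio of the target share to the sample share for each group. A minimal sketch follows; the Rasmussen-style targets come from the passage above, while the unweighted sample composition is hypothetical.

```python
# A minimal sketch of weighting by party identification: each respondent's
# weight is (target share) / (sample share) for his or her group. The targets
# echo the Rasmussen figures cited above (37.6% Republican, 33.3% Democrat,
# the remainder independent); the unweighted sample below is hypothetical.

target = {"Republican": 0.376, "Democrat": 0.333, "Independent": 0.291}

# Hypothetical unweighted sample of 1,000 respondents.
sample_counts = {"Republican": 330, "Democrat": 400, "Independent": 270}
n = sum(sample_counts.values())

weights = {party: target[party] / (count / n)
           for party, count in sample_counts.items()}

for party, w in weights.items():
    print(f"{party:<12} sample {sample_counts[party] / n:.1%}  "
          f"target {target[party]:.1%}  weight {w:.2f}")

# Weighting to census demographics or to past exit polls is the same
# arithmetic -- the argument is entirely over which targets to use.
```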

Some pollsters are giving up on weighting—and telephones—altogether. Doug Rivers, a Stanford political scientist who runs the Internet polling firm YouGov, is leading the way in that area. Working from high-quality government surveys as well as data from Pew, Rivers has created a multi-variable sample of the population. At the same time, YouGov has recruited an online panel of a million people who, in exchange for a fee, agree to respond to YouGov surveys. From that panel of a million, YouGov then selects a subset of 1,000 to 2,000 that matches the variables of its population sample and has that subset take its survey. “Our method is purposive selection as opposed to random selection,” Rivers explains. “So if the variables we’ve used to select people don’t remove the selection bias for joining the panel, the biases will follow.” But Rivers is confident that he’s got the right variables.
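A toy sketch of that matching idea follows; the variables, the distance measure, and the records are all invented, and YouGov’s actual procedure is considerably more elaborate.

```python
# A toy sketch of "purposive" or matched sampling: for each record in a target
# sample drawn from high-quality government surveys, find the closest available
# panelist and interview that person instead of dialing at random. Everything
# here -- the variables, the distance measure, the data -- is invented.
import random

random.seed(538)

def random_person():
    # (age, years of education, party ID on a -1 / 0 / +1 scale)
    return (random.randint(18, 90), random.randint(8, 20), random.choice([-1, 0, 1]))

target_sample = [random_person() for _ in range(100)]   # stands in for a census-style frame
panel = [random_person() for _ in range(10_000)]        # stands in for the opt-in panel

def distance(a, b):
    # Crude normalized distance across the matching variables.
    return abs(a[0] - b[0]) / 72 + abs(a[1] - b[1]) / 12 + abs(a[2] - b[2]) / 2

matched, available = [], set(range(len(panel)))
for person in target_sample:
    best = min(available, key=lambda i: distance(person, panel[i]))
    matched.append(panel[best])
    available.remove(best)      # each panelist can be interviewed only once

print("Survey these", len(matched), "matched panelists rather than a random-digit-dial sample.")
```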

Still, despite a solid track record in the 2010 midterms and the GOP presidential primaries this year, YouGov has its doubters. The Real Clear Politics Poll Average, for instance, excludes YouGov from its results. Which is one reason Rivers is already looking beyond online panels as the future of polling. During the Republican National Convention in August, YouGov launched a pilot project with Microsoft that sought the political views of Xbox Live users—asking them to watch the proceedings and convey their responses through their gaming systems. It plans to do the same thing during the three presidential debates starting this week, producing real-time polls. “CNN used to put ten people in a room during a debate and gave them dials to express how they were feeling about what they were watching,” says David Rothschild, a Microsoft economist working on the Xbox project. “But we could have hundreds of thousands of people with controllers in their hands.” Although those hundreds of thousands of gamers almost undoubtedly skew young and male (the exact opposite of telephone surveys), YouGov and Microsoft believe that with the proper weighting, they’ll be able to produce that mythical representative sample. “If George Gallup was frozen in time after the 1936 election and came back in 2012, he’d recognize polling as exactly what he’s doing, but that’s going to be revolutionized in the next few years,” predicts Rothschild. “We’ll be thinking about data coming out in real time.” In other words, if you think the torrent of polls and numbers is overwhelming now, just wait.

That might sound like Nate Silver’s ultimate fantasy; he says he lives by the maxim “the more data, the better.” But the truth is Silver is already growing a little tired of political polling. Part of the problem is that he’s worn out by the nastiness. “Before I did politics I did sports, and there weren’t nearly so many assholes in sports coverage,” he says, sounding much older than 34. “You weren’t getting in huge personal fights like, ‘Oh, you’re a White Sox fan, so you’re biased in how you’re interpreting the data.’ ” After taking heat over the recent revelation that the Obama campaign shared its internal polling data with him in 2008, Silver says he wouldn’t agree to such an arrangement in 2012. “I have thought more carefully about my role in all of this, and I think it’s fine to have political opinions, but you should be careful about the line between being an analyst and being a participant.” Like a true Times man, he didn’t vote in the 2010 elections, and he doubts he will this year.

More than anything, Silver seems to find the world of political polling too small. “I feel a little bit less like I want to play the polling police this time,” he says. Granted, there are moments when he simply can’t help himself. Witness his occasional forensic debunkings of polls that don’t sit right with him. (“Michigan Isn’t a Tossup,” read the headline on one August FiveThirtyEight post, which spent 1,650 words explaining why two polls that had the temerity to suggest it was a toss-up were improperly underweighting African-American and young voters.) But for the most part Silver has stayed out of the weeds. “I view my role now as providing more of a macro-level skepticism, rather than saying this poll is good or this poll is evil,” he says. And in four years, he might be even more macro, as he turns his forecasting talents to other fields. “I’m 97 percent sure that the FiveThirtyEight model will exist in 2016,” he says, “but it could be someone else who’s running it or licensing it.”

But fear not, poll junkies! As one Nate prepares to exit, another rises to take his place. This past January, Nate Cohn, then a 23-year-old, was toiling away in a lowly foreign-policy-think-tank job in Washington, D.C., analyzing India-Pakistan relations and the U.S. defense budget for the Stimson Center. That’s when, partially inspired by Silver, he started a personal blog, Electionate, that was devoted to political-polling analysis. “I’m not a statistician or a pollster or a Ph.D. in demography,” Cohn concedes, “but it’s remarkably easy to become incredibly educated on polling issues when you have Nate Silver dissecting them.” Most of Electionate’s initial web traffic came from Cohn’s Facebook friends, but by March, as Romney was still struggling to sew up the GOP nomination, Cohn was getting about 10,000 page views a month. And in June, The New Republic bought the Electionate blog and hired Cohn as its full-time polling analyst.

“The original Nate Silver isn’t too different from what I do,” Cohn says. That’s a sentiment Silver himself concurs with: “I think [Cohn’s] good, although sometimes it almost reads like he’s doing the more 2008-style FiveThirtyEight.” Lately, the two have increasingly come to resemble competitors. Just as the networks battle on Election Night to be the first to call the race, the polling gurus now appear to be vying to be the first to call the election two months out—with each one inching closer and closer to declaring Romney toast. Last week, Silver was reporting that his forecast had put Obama’s chances at an all-time high of 81.9 percent, while Cohn was writing that “a Romney victory just doesn’t seem like it’s in the cards.”

In the meantime, Cohn is living in the moment. While the old Nate sounds burned out, complaining about his twenty-hour workdays, the new Nate, who says he’s getting about five or six hours of sleep a night, can’t believe his good fortune. “At heart, I’m a dork who’s just genuinely interested in this stuff and would be doing it anyway,” he says. The fact that someone is now paying him to wallow in the data, to load numbers into an Excel spreadsheet and search for patterns, to assemble from the noise a portrait of the future—well, it can be a little difficult for a self-described “empirically minded, data-driven” person to believe. “Maybe after the election I’ll have a better sense of the big picture,” he continues. “I do think I’ll probably try to learn statistics.”

