How to Interpret 2018 Polls Without Fooling Yourself

You can freak yourself out by looking sideways at a single chunk of polling data. Illustration: Getty Images

This morning, many political junkies, bleary-eyed from staying up to watch and then ponder the president’s SCOTUS Reality Show last night, were greeted with this tweet from the CEO of Axios:

The full Axios article shows polling data from 13 Senate races, with Democrats leading in nine of them, which is not what you’d expect in a poll that is “brutal” for them.

Yes, Axios (like Politico, where VandeHei was previously CEO) is famous for taking a snail’s-eye view of political developments, in which every twist and turn is epochal and decisive and game-changing. But even taking that into account, the siren, the over-the-top headline, and the lurid treatment of a big chunk of data are a bit remarkable. Political scientist John Lovett got cranky about it:

And so did data journalist Nate Silver:

As Lovett noted, the Axios freakout is over a single battery of polls (from, I might add, a polling outlet, SurveyMonkey, with a shaky reputation for accuracy). While the poll itself does make an effort to measure likelihood to vote (a factor that will probably benefit Democrats this year), the Axios marketing deals only with numbers for registered voters. And Axios treats findings from basically dead-even races as predictive of a decisive outcome.

The more you look at the Axios “analysis” of these Senate polls, the worse it gets. One thing a single pollster can in fact show is trend lines based on its earlier findings. But that’s not there, either:

And most of all, the take on Senate races is missing context. Maybe Democrats have been “dreaming” of a Senate takeover, but it’s never been especially likely given the extremely pro-GOP landscape this year. The only specific SurveyMonkey finding that surprised me is that Marsha Blackburn has a big lead over Phil Bredesen in Tennessee. But personally, I would have never bet a nickel on a Democratic win in a state like Tennessee, unless Blackburn turned out to have a secret Roy Moore–style mall-trawling habit.

Axios is hardly the only political observer that sometimes exaggerates or cherry-picks polling data to pull the excited or the frightened into a click. If you only follow polls from Rasmussen, which routinely shows the president with approval ratings in the high 40s and sometimes over 50 percent, you probably think his party is on the road to a midterm win and then a 2020 Trump reelection. And less predictable polls can be misleading or confusing, depending on how they are deployed: the Reuters/Ipsos tracking poll that shows up in every collection of congressional generic polling (which tests which party respondents want to control the House) regularly bounces from near-even to double-digit Democratic leads.

There’s also another problem embedded in the registered-voter/likely-voter (RV/LV) issue that Lovett mentions. Some pollsters use self-identified enthusiasm about voting as the key measurement of likelihood to vote. Others mostly rely on past voting behavior in similar elections. You can see how those different approaches could produce very different numbers in a year like this one, when a lot of Democrats who skipped voting in 2010 and 2014 seem eager to cast a vote against Trump in November.
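To make that concrete, here is a toy sketch with entirely invented respondents, showing how the same raw sample can produce noticeably different toplines depending on whether the likely-voter screen keys off self-reported enthusiasm or past midterm voting:

```python
# Toy illustration of the RV/LV point above. Every respondent and number
# here is invented for the example.
# Each respondent: (candidate preference, self-reported enthusiasm 0-10,
#                   voted in the last midterm?)
respondents = [
    ("D", 9, False), ("D", 8, True), ("D", 10, False), ("D", 6, True), ("D", 5, False),
    ("R", 7, True), ("R", 5, True), ("R", 8, True), ("R", 4, False), ("R", 6, True),
]

def dem_margin(sample):
    """Democratic share minus Republican share, in points, for a screened sample."""
    d = sum(1 for pref, _, _ in sample if pref == "D")
    r = sum(1 for pref, _, _ in sample if pref == "R")
    return 100.0 * (d - r) / (d + r) if (d + r) else 0.0

# Screen 1: keep anyone who rates their enthusiasm 7 or higher.
enthusiasm_screen = [resp for resp in respondents if resp[1] >= 7]

# Screen 2: keep only people who actually voted in the last midterm.
history_screen = [resp for resp in respondents if resp[2]]

print(f"All registered voters: Dem margin {dem_margin(respondents):+.0f} points")
print(f"Enthusiasm screen:     Dem margin {dem_margin(enthusiasm_screen):+.0f} points")
print(f"Vote-history screen:   Dem margin {dem_margin(history_screen):+.0f} points")
```

In a year when previously irregular voters say they are fired up, an enthusiasm screen will tend to flatter Democrats and a vote-history screen will tend to flatter Republicans; neither is “right” so much as a different bet about who actually shows up.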

Some people think the best way to avoid all these complicated issues is to throw up one’s hands and ignore polling data altogether, which is a bit like a Florida resident refusing to watch TV weather reports in hurricane season. The cure for bad data is better data, and the simplest way to get better data is to use more data. Averages of large numbers of polls are available in many places (notably RealClearPolitics and FiveThirtyEight). Nor is it hard, when looking at one pollster’s findings, to go back and check the same outlet’s previous findings to spot trends. Beware “internal polls” sponsored by campaigns and polls with tiny samples or dubious methodologies, and check FiveThirtyEight’s pollster ratings before relying on anything from pollsters you’ve never heard of (a late campaign poll from the East Calcium Deposit State University poli-sci department might not be especially reliable).
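If you want to see why the averaging helps, here is a minimal sketch, with made-up pollster names and numbers, of the kind of aggregation RealClearPolitics and FiveThirtyEight do in far more sophisticated form:

```python
# Toy polling average. Pollster names, percentages, and sample sizes are invented.
polls = [
    # (pollster, Dem %, GOP %, sample size)
    ("Pollster A", 48, 44, 800),
    ("Pollster B", 44, 46, 600),
    ("Pollster C", 47, 45, 1100),
    ("Pollster D", 50, 43, 500),
]

# Each poll's Dem-minus-GOP margin: any one of these could mislead on its own.
margins = [dem - gop for _, dem, gop, _ in polls]

# A plain average of the margins.
simple_average = sum(margins) / len(margins)

# A sample-size-weighted average, a crude stand-in for the adjustments real
# aggregators layer on top (house effects, recency, pollster quality).
total_n = sum(n for _, _, _, n in polls)
weighted_average = sum((dem - gop) * n for _, dem, gop, n in polls) / total_n

print(f"Individual margins: {margins}")
print(f"Simple average:   Dem {simple_average:+.1f}")
print(f"Weighted average: Dem {weighted_average:+.1f}")
```

Any single poll in that list could send you into an Axios-style panic or a premature victory lap; the average is duller, and that is the point.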

And as today’s Axios example shows, it’s generally a good idea to throw away the packaging before digging into a poll. Otherwise you are playing somebody else’s emotional game with yourself, which isn’t the best way to learn anything.
