Well over six months after the 2020 general election, the public-opinion industry and the political campaigns and media that rely on it are still in a state of agonized uncertainty over undeniable polling errors that led to a wholesale underestimation of Republican voting strength. Now comes an official assessment of the wreckage by the American Association for Public Opinion Research that raises as many questions as it answers, according to a Wall Street Journal report:
In the aggregate, the panel said, polls overstated support for Democratic nominee Joe Biden by 3.9 percentage points in the national popular vote in the final two weeks of the campaign. That was a larger error than the 1.3-point overstatement in 2016 surveys for Hillary Clinton, who won the popular vote but lost in the Electoral College.
It was the most substantial error in polling since 1980, when surveys found it hard to measure the size of Ronald Reagan’s impending landslide and overstated support for President Jimmy Carter by 6 percentage points.
It was not, however, just a matter of getting the presidential numbers wrong; the error persisted down-ballot. And it was not simply a repetition of the 2016 mishap of underestimating white, non-college-educated voters as a share of the electorate.
Could the partisan bias of polling samples be corrected by over-sampling Republicans, the experts asked themselves? Not exactly:
Republicans who were willing to talk to pollsters might have been those most open to supporting Mr. Biden, while Republicans who declined to be polled may have been more supportive of Mr. Trump. If the latter possibility were the case, then merely increasing the number of Republicans in a survey wouldn’t solve the accuracy problem.
But in terms of what pollsters need to do to get ready for the 2022 midterms (or the 2021 off-year elections), there is an additional complication: The polls were very accurate in 2018, and even in the 2021 Senate runoffs in Georgia. Placing a thumb on the scales for Republican candidates going forward is not necessarily going to produce accurate polls. Perhaps the errors in both 2016 and 2020 were attributable somehow to Trump being on the ballot, either because (a) he stimulated a stronger turnout among the low-propensity conservative voters least likely to respond to polls, or (b) his attacks on “fake polls” discouraged participation by his supporters. And then you have to consider the whole “pandemic factor” that affected voting methodologies and availability for polling in 2020. In the WSJ report, AAPOR president Dan Merkle summarized what we know and don’t know on these subjects with something of a shrug:
“[I]t’s possible that these may be short-term phenomena that will abate when Trump is not on the ballot, or in the post-pandemic era,” he said. “On the other hand, it could be a broader issue of conservatives becoming less likely to respond to polls in general.”
One response, he said, is that pollsters should be clearer and more forthcoming about the level of uncertainty in polls.
Looking at it from a different perspective, perhaps for the time being consumers of polls — especially political writers and politicians themselves — should stop acting as though polling results are either spot-on infallible or worthless, depending on whose interests are served by promoting or rejecting any particular data set. Treating polls (and wherever possible, polling averages) as part of the picture in assessing the political environment and likely electoral outcomes — alongside history, objective indicators of external determinants, and even campaign activities — makes the most sense right now. That’s certainly a more valid approach than the alternative promoted by Trump and many of his acolytes: asserting imminent victory and rejecting any data, including actual election returns, that contradict the desired spin.