The Lessons of Slate’s Election Day Handicapping Experiment

Trying to guess the vote as it is being cast is perilous. Photo: Matt McClain/The Washington Post/Getty Images

When I first heard about Slate’s joint venture with VoteCastr to build an infrastructure for keeping people informed about turnout patterns on Election Day, I thought it sounded like just what the world needed. Why should campaigns be the only ones that know what is happening on the ground in elections (at least before the media organizations get hold of exit-poll data)? Besides, maybe the “hundreds of field workers” deployed by VoteCastr around the country to monitor turnout would help keep polling-place vigilantes away.

But Slate and VoteCastr didn’t stop at collecting turnout data or compiling early-vote information. They sought, via a “proprietary” poll and some modeling, to provide rolling estimates of who was winning or losing in seven key states at any point in the day. I noted, playing Captain Obvious, that the viability of the whole system would depend on the accuracy of those estimates.

Well, the model’s estimates didn’t end up matching the results we all now know, as Recode’s Peter Kafka reported:

VoteCastr’s data generated lots of attention on Tuesday, and may have even helped move financial markets.

But it turned out to be way, way off: VoteCastr got five of the seven states it predicted wrong.

The most prominent error regarded Florida, which VoteCastr thought Hillary Clinton would win by more than 300,000 votes; instead, she lost it by more than 100,000 votes.

The misses convinced many people that VoteCastr’s mission was a bad one: It got the calls wrong, and it distributed those incorrect calls while voting was in process, which could have affected the outcome.

The only thing worse than giving people no information on Election Day is giving them real-time misinformation. I know that a lot of people I talked to yesterday were clutching VoteCastr data like a handful of prayer beads.

Experiments are a good thing. But I would argue that the lesson of this one is: send out the field workers, compile and analyze the data, and stay away from numerical estimates of who is and isn’t winning. A little Election Day suspense isn’t such a bad thing.