Last month, for the first time ever, one of Google’s self-driving cars — the ones it insists are safer than human-driven cars — crashed into another vehicle. According to a report filed with the California DMV, the Google car, briefly entering another lane to avoid sandbags in the road, struck a city bus. The test driver “saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google [autonomous vehicle] to continue,” according to the report, which was obtained by Re/code.
Google released part of its monthly self-driving car report to Re/code ahead of schedule; in it, the company accepts some of the blame while also deflecting part of it onto the bus driver — who is probably not a robot:
Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.
This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.
While it’s always heartening to see our future robot overlords screw up, it’s probably important to note that the Google self-driving cars have logged more than a million miles without any accidents that can be chalked up to the robot. Plus, the bus was going 15 miles per hour and the Google car was going two.