AirAsia Flight 8501 Crash Reveals the Dangers of Putting Machines in the Driver’s Seat

Indonesian Army personnel keep watch during a search and rescue operation for missing AirAsia flight QZ8501 over the waters of the Java Sea on December 29, 2014. Photo: Juni Kriswanto/AFP/Getty Images

Eleven months after AirAsia Flight 8501 crashed under mysterious circumstances south of Borneo, taking with it the lives of 162 passengers and crew, we at last understand what happened: On Tuesday, Indonesia released a report revealing that the plane was doomed by a combination of minor mechanical glitches and pilot error. While this in itself would be grounds for concern, eerie similarities with another crash five years earlier suggest that an underlying vulnerability remains unaddressed in the worldwide air-travel system — one that could eventually have unexpected and far-reaching consequences for the driverless cars currently being developed by some of the world’s richest and most ambitious companies.

Flight QZ8501 took off from Surabaya, on the Indonesian island of Java, at 5:35 a.m. local time on December 28, 2014, bound for Singapore. Ahead lay a band of thunderstorms, some of them towering up to 44,000 feet high. After reaching the assigned cruising altitude of 32,000 feet, the flight crew called air-traffic control and requested a turn to the left to avoid a storm. Given permission, the pilots then asked to climb to 38,000 feet. Controllers denied that request, then soon afterward said the plane could go to 34,000 feet. But something had gone wrong. The pilots did not respond to the new clearance. Instead, without issuing a distress call or signal, the plane abruptly climbed, slowed, and banked into a steep turn. When it disappeared from radar, it was plummeting at a rate of more than 11,000 feet per minute.

For days it seemed as though the plane had simply vanished. Then, on December 30, the first bodies and debris were pulled from the ocean six miles from the plane’s last known location. More wreckage was recovered soon after, and on January 12, the black boxes were recovered from the ocean floor.

Given the proximity of the thunderstorms and the flight crew’s urgent efforts to avoid them, it seemed that weather was likely a major cause of the accident. Indeed, before the black boxes were found, Indonesia’s weather agency issued a 14-page report stating that the plane had most likely been brought down by icing in the thunderstorm cloud tops.

But as Tuesday’s report takes pains to emphasize, it turns out that weather had no direct bearing on what happened. Instead, it focuses on pieces of equipment located in the tail of the aircraft called the Rudder Travel Limiter Units, or RTLU. These limit how far the rudder can move at high speed, so that large deflections cannot damage the tail. Due to faulty wiring, the units aboard QZ8501 had been generating error messages in the cockpit with increasing frequency during the month leading up to the accident. The pilots had learned to deal with these messages by turning off power to one of the plane’s computers, then turning it back on again.

The problem was annoying but not dangerous. As QZ8501 approached the mass of thunderstorms, however, the fault occurred several times in rapid succession. At the time, the plane was being flown by the second-in-command, a 46-year-old Frenchman who had only recently become an airline pilot and had just 2,247 hours of flight experience (compare that to the flight’s captain, a 53-year-old Indonesian with more than 20,000 hours of experience). At 6:16 a.m., 40 minutes into the flight, a series of warning lights triggered by the faulty RTLU led the pilots to turn off one of the flight computers, which disengaged the autopilot. Now the plane had to be flown manually.

The plane was still perfectly airworthy. But, perhaps preoccupied with the threat posed by the thunderstorm looming beyond his windshield, the co-pilot became disoriented. Failing to notice that the plane had started to bank to the left, he let it roll over into a steep, 60-degree turn. At the same time he pulled the plane into a steep climb. Bleeding off speed as it gained altitude, the plane became dangerously slow and continued banking to the left until it rolled past vertical. The captain shouted, “Level! Level! Level!”; the men managed to get the wings horizontal. But their problems were not over. Still badly disoriented, the co-pilot kept pulling back on the stick, causing the nose of the plane to remain high. This prevented the plane from regaining airspeed and recovering from its aerodynamic stall. The captain understood the danger. The black boxes show that as the plane sank precipitously, he pushed his control stick forward to break that stall. At the same time, however, the co-pilot was pulling his control stick back, negating the captain’s effort. Unable to move forward with sufficient speed, the plane plummeted. When it hit the water it was moving at a vertical speed of 95 mph.

The details line up to an uncanny degree with the crash of Air France Flight 447, which disappeared en route between Rio de Janeiro and Paris in 2009. In that case as well, a junior pilot was at the controls in the right-hand seat. A band of thunderstorms lay ahead, and the flight crew tried to climb and turn to avoid the worst of the weather. But again, a minor technical mishap — in this case, a blockage of a speed sensor by ice crystals — caused the autopilot to turn off so that the plane had to be flown by hand. Already unnerved by the weather, and now assaulted by flashing lights and alarms, the confused co-pilot pulled back on the stick so that the plane climbed several thousand feet and then stalled. The more experienced pilot in the left-hand seat recognized what had happened and pushed forward on his control stick to break the stall, but his efforts were negated by the still-confused junior pilot. In both cases, coincidentally, the planes wound up in a steep spiral descent to the left, with impact occurring less than five minutes after the precipitating mechanical fault.

When France’s air-accident investigation bureau published its report into AF447 in 2011, the sequence of events seemed mind-boggling. How could a pilot with one of the world’s most prestigious airlines make such a beginner’s mistake as pulling the nose up during a stall? (Aviation analysts around the world, myself included, had spent two years trying to figure out what might have happened, and none of our guesses proved correct.) It seemed unlikely that such a thing could ever happen again. And yet here we are, with an accident so stunningly similar that it verges on plagiarism.

This recurrence suggests that there’s a fundamental vulnerability in the way that humans interact with automatic systems in a crisis. In the normal course of things, we come to rely on the automation so much that our own abilities fade away. Then, when the automated system suddenly goes haywire and shuts itself off, we don’t have the skills to expertly handle the situation. To make matters worse, the sudden stress of a life-or-death crisis tends to shut down our capacity for reasoned thought and leaves us prone to mentally freeze up. In essence, in QZ8501 and AF447, the machines panicked, and then the human panicked.

No doubt, in the wake of this second crash, the global airline industry will take steps to prevent this specific sequence of events from happening a third time. Yet the underlying issue will remain, and indeed spread beyond aviation, as automation becomes an increasing part of daily life. Already, the first self-driving cars are operating on the road, and the problem of keeping disengaged drivers alert has emerged as a major issue. Imagine you’re behind the wheel at night on a winding, rain-swept road, reading a book, when a deer suddenly jumps in front of the car and the autopilot turns off. You’re going 80, an alarm is blaring, you have no idea where you are, and you’ve got less than a second to react. For the engineers tasked with designing such systems, what happens in that moment — that critical instant of handover from machine to human control — will likely pose a serious problem for a long time to come.