The Plane Paradox: More Automation Should Mean More Training

Today’s highly automated planes create surprises pilots aren’t familiar with. The humans in the cockpit need to be better prepared for the machine’s quirks.

Shortly after a Smartlynx Estonia Airbus A320 took off on February 28, 2018, all four of the aircraft’s flight control computers stopped working. Each performed precisely as designed, taking itself offline after (incorrectly) sensing a fault. The problem, discovered later, was an actuator that had been serviced with oil that was too viscous. A design created to prevent a problem created a problem. Only the skill of the instructor pilot on board prevented a fatal crash.

Now, as the Boeing 737 MAX returns to the skies worldwide following a 21-month grounding, flight training and design are in the crosshairs. Ensuring a safe future for aviation ultimately requires an entirely new approach to automation design, one based on systems theory, but planes built that way are 10 to 15 years off. For now, we need to train pilots to respond better to automation’s many inevitable quirks.

In researching the MAX, Air France 447, and other crashes, we have spoken with hundreds of pilots, and experts at regulatory agencies, manufacturers, and top aviation universities. They agree that the best way to prevent accidents in the short term is to teach pilots how to creatively handle more surprises.

The industry’s slow response to overdue pilot training and design reform is a persistent problem. In 2016, a full seven years after Air France 447 went down in the South Atlantic, airlines worldwide began retraining pilots on a new approach to handling high-altitude aerodynamic stalls. Simulator training that Boeing convinced regulators was unnecessary for 737 MAX crews began only after the MAX’s second crash, in 2019.

These remedies address only those two specific scenarios. Hundreds of other automation-related challenges may be waiting out there that traditional risk-analysis methods cannot anticipate; past examples include a computer that blocked the use of reverse thrust because it “thought” the airplane had not yet landed. An effective solution must accept that aircraft designers cannot create a perfectly fail-safe jet. As Captain Chesley Sullenberger points out, automation will never be a panacea for novel situations unanticipated in training.

Paradoxically, as Sullenberger correctly noted in a recent interview with us, “it requires much more training and experience, not less, to fly highly automated planes.” Pilots must have a mental model of the aircraft and its primary systems, as well as of how the flight automation works.

Contrary to popular myth, pilot error is not the cause of most accidents. This belief is a manifestation of hindsight bias and the false belief in linear causality. It’s more accurate to say that pilots sometimes find themselves in scenarios that overwhelm them. More automation may very well mean more overwhelming scenarios. This may be one reason why the rate of fatal crashes of large commercial airplanes per million flights was higher in 2020 than in 2019.

Pilot training today tends to be scripted and based on known and likely scenarios. Unfortunately, in many recent crashes experienced pilots had zero system or simulator training for the unexpected challenges they encountered. Why can’t designers anticipate the kinds of anomalies that nearly took down the Smartlynx plane? One problem is that they use obsolete models created before the advent of computers, and that approach is limited in its ability to anticipate scenarios that might present risk in flight. Currently, the only available model that contemplates novel situations like these is System-Theoretic Process Analysis, created by Nancy Leveson at MIT.

Modern jet aircraft developed using those classic methods harbor failure scenarios that lie dormant, waiting for the right combination of events. Unlike legacy aircraft built from basic electrical and mechanical components, the automation in these modern jets weighs a complex set of conditions to “decide” how to perform.

In most modern aircraft, the software that drives how the controls respond behaves differently depending on airspeed, whether the plane is on the ground or in flight, whether the flaps are up, and whether the landing gear is up. Each mode can carry a different set of rules for the software and can lead to unexpected outcomes if the software is not receiving accurate information.

A pilot who understands these nuances might, for example, consider avoiding a mode change by not retracting the flaps. In the MAX crashes, pilots found themselves in a confusing situation: the automation worked perfectly, just not as expected, because the software was being fed bad information.
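To make that concrete, here is a minimal, hypothetical sketch, written in Python with invented mode names and thresholds rather than any real avionics code, of how mode-dependent logic can be misled when a single sensor feeds it bad data.

# Hypothetical illustration only -- not real avionics software.
from dataclasses import dataclass

@dataclass
class SensorData:
    airspeed_knots: float
    on_ground: bool        # weight-on-wheels switch
    flaps_extended: bool
    gear_down: bool

def select_control_mode(s: SensorData) -> str:
    """Pick which set of rules governs how the controls respond."""
    if s.on_ground:
        return "GROUND_MODE"      # e.g., reverse thrust permitted
    if s.flaps_extended or s.gear_down:
        return "LANDING_MODE"     # different pitch and roll limits
    if s.airspeed_knots > 250:
        return "HIGH_SPEED_MODE"  # control deflections reduced
    return "NORMAL_MODE"

# The logic above is "correct," but a stuck weight-on-wheels switch after
# touchdown reports on_ground=False, so the software still "thinks" the
# plane is flying and ground-only functions stay locked out.
faulty = SensorData(airspeed_knots=130, on_ground=False,
                    flaps_extended=True, gear_down=True)
print(select_control_mode(faulty))  # prints LANDING_MODE, not GROUND_MODE

Every branch behaves exactly as written; the surprise comes entirely from the gap between what the sensors report and what is actually happening, which is the situation the MAX pilots faced.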

The MAX’s designers incorrectly assumed pilots would magically intervene. They missed the key fact that the same faulty data confusing the computer was also confusing the pilots. The flight automation systems operated precisely as designed on both doomed flights, all the way to impact.

Although these challenges can often be “designed out,” pilots can’t wait for better-designed planes. They need to be trained now to understand that an aircraft’s response depends on the computer’s “process model.” For example, when something happens on takeoff that is not covered in the manuals, pilots are typically trained to climb to a safe altitude, retract the landing gear and flaps, and then sort out what to do next. That was fine on traditional aircraft, but it has major drawbacks today. Even if the pilot “disconnects” the automation, there may still be mode changes that affect the way the airplane responds. In several newer aircraft, automated systems continue working even after a pilot believes they have “turned everything off.” When the aircraft is flying satisfactorily, pilots should consider changing nothing until they fully understand the plane’s status.

Pilots also need unusual-scenario simulator training focused on the complete loss of automation, including the flight control computers. Currently such training, if it occurs at all, is brief and ends with the systems being restored. Instead, the loss should persist through high-altitude handling and conclude with a landing. Virtually nobody is doing this today.

The industry must reverse the dangerous trend of giving pilots less system knowledge and less “corner case” hand flying, a trend that rests on a faulty premise rooted in reliability theory, not systems theory. Pilots must understand how systems change modes and how those changes affect the flight controls and other systems.

Many pilots today feel they know less about their highly automated planes than they did about any of the arguably much simpler airplanes they flew in the past. This needs to change. We believe this better approach to training would have prevented many of the more than 60 commercial aircraft crashes that have taken over 3,500 lives over the past 11 years. These include the 737 MAX crashes in 2018 and 2019, the 2019 Russian Superjet crash in Moscow, the 2014 AirAsia Airbus A320 crash in the Java Sea and the Air Algérie MD-83 loss in Mali, the 2013 Asiana Boeing 777 crash in San Francisco, as well as the 2009 Air France Airbus A330 crash in the South Atlantic.

With thousands of pilots still furloughed, the industry now has a unique opportunity to take the first step toward preventing accidents with better pilot training. With more than $70 billion in recent grants and loans, America’s airlines are in a strong position to give pilots the kind of expertise they need to deal with more unexpected events. In the process they can create a new worldwide model that will prevent more crashes triggered by surprises no airline training department or built-in automation system can anticipate. Until automation can account for its own surprises, we need to make sure humans can.



