Autonomous Systems and Aviation Part 2

Understanding the risks of autonomous aviation systems

When attempting to understand the risks of autonomous aviation systems, including self-flying airplanes, it is worth considering two famous aviation accidents that occurred within a few months of each other in 2009. In June of that year, Air France 447, en route from Rio de Janeiro to Paris over the Atlantic, experienced a failure of its airspeed sensing system. That failure, combined with severe weather, left the flight crew disoriented and unable to keep the airplane within its operating parameters, ultimately leading to the loss of the aircraft and everyone on board. Earlier, in January, the crew of US Airways 1549 struck a flock of birds shortly after taking off from New York’s LaGuardia Airport and lost all engine power at a dangerously low altitude over one of North America’s largest and most crowded cities. The flight crew managed to ditch the aircraft in the Hudson River, avoiding obstacles on both land and water, and everyone on board was evacuated safely.

One outcome was tragic, the other triumphant, but both raise interesting questions about how machines, rather than humans, might have performed in the same circumstances. Might the passengers aboard AF 447 have been saved if the plane’s automated systems had not disengaged, leaving the pilots to make the series of incorrect control inputs that brought about an unrecoverable flight condition? Conversely, would a fully automated Flight 1549, lacking the resourcefulness and skill of its experienced former Air Force pilot, have met with tragedy somewhere on the streets of Manhattan?

Accurately sensing the environment

At the most basic level, the outcomes of both accidents depended heavily on the flight crews’ ability to correctly sense the environment in which they were operating. After entering a descent at a steep angle of attack with no reliable speed indications, the three-man crew of AF 447 lost all sense of the aircraft’s altitude, pitch and motion. Aboard Flight 1549, Captain Sullenberger, seeing only one possible avenue of escape, committed to ditching in the river while visually avoiding obstacles that could have proved catastrophic at the aircraft’s high approach speed. In the world of autonomous systems, sensing is how the system’s learning algorithms and central logic acquire the data that drives every decision and action. AI decisions, like human decisions, are often based on learning models derived from many previous experiences. Those decisions become problematic in two circumstances: when the model is presented with data or circumstances it has never seen before, and when it is presented with data that appears to match prior experience but in fact contradicts it.
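To make those two failure circumstances concrete, the sketch below shows one common way a system can decline to act on model output it has little basis to trust: a confidence floor that routes unfamiliar or weakly matched inputs to a conservative fallback. This is purely illustrative; the Classification type, the 0.90 threshold, and the labels are assumptions made for the example, not part of any real avionics system.

```python
# Illustrative sketch only: gate a learned classifier's output so that
# low-confidence or never-before-seen inputs fall back to a conservative
# default instead of driving a decision. Threshold and labels are assumed.

from dataclasses import dataclass


@dataclass
class Classification:
    label: str         # what the model thinks it is seeing
    confidence: float  # model-reported confidence in [0.0, 1.0]


CONFIDENCE_FLOOR = 0.90  # assumed value; a real system would derive this from safety analysis


def gate_decision(result: Classification) -> str:
    """Return the model's label only when confidence clears the floor;
    otherwise report UNKNOWN so downstream logic can fall back to a
    conservative behaviour (alert the crew, widen margins, and so on)."""
    if result.confidence >= CONFIDENCE_FLOOR:
        return result.label
    return "UNKNOWN"


# A familiar object classified with high confidence passes through;
# an input the model only weakly matches does not.
print(gate_decision(Classification("runway_edge", 0.97)))  # -> runway_edge
print(gate_decision(Classification("runway_edge", 0.55)))  # -> UNKNOWN
```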

In fully autonomous aviation systems, attackers can exploit these circumstances, with unpredictable and potentially disastrous results. One way to do so is to deliberately introduce false data into the environment, giving the system an inaccurate picture of its actual circumstances. A security researcher from University College Cork, for example, found that with an inexpensive laser pulse generator he could spoof the sophisticated LIDAR-based object sensor of a self-driving car and “create the illusion of a fake car, wall, or pedestrian anywhere from 20 to 350 meters from the LIDAR unit, make multiple copies of the simulated obstacles, and even make them move.” The same form of LIDAR sensing underpins several emerging aviation collision avoidance systems.
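As one hedged illustration of how such spoofing might be blunted, the sketch below requires a LIDAR contact to be corroborated by an independent sensor before it can trigger an avoidance maneuver. The radar interface, the 50-meter agreement window, and the function names are assumptions made for the example, not features of any actual collision avoidance system.

```python
# Illustrative sketch only: refuse to act on a LIDAR-only contact unless an
# independent sensor (here, a hypothetical list of radar ranges) reports
# something at a similar distance. Interfaces and thresholds are assumed.

def confirmed_by_radar(lidar_range_m: float, radar_ranges_m: list[float],
                       window_m: float = 50.0) -> bool:
    """Treat a LIDAR contact as credible only if some radar return lies
    within an agreement window of the reported range."""
    return any(abs(lidar_range_m - r) <= window_m for r in radar_ranges_m)


def should_maneuver(lidar_range_m: float, radar_ranges_m: list[float]) -> bool:
    # A phantom obstacle injected into the LIDAR alone fails this cross-check
    # and can be flagged for monitoring instead of triggering an abrupt
    # avoidance maneuver.
    return confirmed_by_radar(lidar_range_m, radar_ranges_m)


print(should_maneuver(120.0, [118.0, 900.0]))  # True: radar agrees
print(should_maneuver(120.0, [900.0]))         # False: LIDAR-only contact
```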

Small alterations in the physical environment can also have a disproportionate effect on machine-learning-based sensing systems, because these systems rely so heavily on training data and prior experience to classify objects and events. Researchers from the University of Washington, for example, found that when they placed life-sized paper road signs over actual signs of the same type, a computer vision system completely misidentified them, mistaking an image of a “faded stop sign” for a sign reading “Speed Limit 45.” Such attacks work because computer vision systems often base a classification on only a handful of features of an image, so small, targeted changes to those features can flip the result.
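One way to blunt that fragility, sketched below under the assumption that two independently developed classifiers are available, is to accept a label only when both systems agree. The vendor_a and vendor_b functions are placeholders standing in for separate products, not real APIs.

```python
# Illustrative sketch only: require two independently developed classifiers
# to agree before a label is accepted; disagreement yields a conservative
# UNRESOLVED result. Both "vendors" below are toy stand-ins.

from typing import Callable


def cross_checked_label(image: bytes,
                        vendor_a: Callable[[bytes], str],
                        vendor_b: Callable[[bytes], str]) -> str:
    """Accept a classification only when both independent systems agree;
    otherwise return UNRESOLVED so the vehicle treats the object
    conservatively (e.g. assumes a stop sign rather than a speed limit)."""
    label_a = vendor_a(image)
    label_b = vendor_b(image)
    return label_a if label_a == label_b else "UNRESOLVED"


# Toy stand-ins: the second classifier is not fooled by the altered sign.
fooled = lambda img: "speed_limit_45"
not_fooled = lambda img: "stop_sign"

print(cross_checked_label(b"altered-sign-pixels", fooled, not_fooled))  # UNRESOLVED
```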

Minimize the risks of autonomous systems with good programming practices

The situation is not entirely hopeless, though. Good programming practices can minimize the risks posed by autonomous systems. Redundancy, especially in sensing, helps avoid missed or misinterpreted events, and the most critical systems should employ a diversity of sensing methods, algorithms and hardware. Computer vision systems from two different vendors, for example, are unlikely to contain the same faulty logic or programming errors, increasing the probability that at least one will keep functioning in spite of an attack. Data collections such as maps or object signatures should come from trusted sources and be backed by strong cryptography to prevent the introduction of false data. Finally, machine-learning systems should not be granted unlimited autonomy: their outputs should be bounded by heuristics and hard limits that keep operating parameters away from unsafe flight conditions and high-risk scenarios.
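The sketch below illustrates that last point: a hard-limit wrapper that clamps whatever a learned controller proposes to a fixed, independently reviewed envelope. The specific pitch and bank figures are illustrative assumptions, not limits for any real aircraft, and this is a minimal sketch rather than flight-qualified code.

```python
# Minimal sketch, not flight-qualified code: bound a learned controller's
# commands with fixed limits that sit outside the model, so a faulty or
# manipulated model cannot command an unsafe attitude. Limits are assumed.

from dataclasses import dataclass


@dataclass(frozen=True)
class Envelope:
    max_pitch_deg: float = 15.0  # assumed nose-up/nose-down limit
    max_bank_deg: float = 30.0   # assumed bank-angle limit


def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))


def bounded_command(ml_pitch_deg: float, ml_bank_deg: float,
                    env: Envelope = Envelope()) -> tuple[float, float]:
    """Apply hard limits to the learned controller's output; the limits are
    reviewed and set independently of the machine-learning system."""
    return clamp(ml_pitch_deg, env.max_pitch_deg), clamp(ml_bank_deg, env.max_bank_deg)


# A runaway model output of 40 degrees nose-up is held to the 15-degree limit.
print(bounded_command(40.0, -55.0))  # -> (15.0, -30.0)
```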

When you need to ensure your aerospace project is programmed with redundancies and best practices for autonomous systems and applications, contact Performance Software to be your partner. 
