Rapid7's Black Hat announcement about the inherent weakness of the CAN bus avionics systems used in some small aircraft did not focus on any new vulnerabilities. CAN, like most monitoring and control protocols in general use, has no authentication of source or data and no native encryption.
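To make that protocol-level point concrete, here is a minimal sketch of frame injection, assuming Linux SocketCAN and the python-can package; the interface name "can0" and the arbitration ID are invented for illustration and do not correspond to any real avionics system.

```python
# Minimal sketch: any node attached to a classic CAN bus can transmit a frame
# with any arbitration ID, because the protocol carries no sender
# authentication and no encryption. (Interface name and ID are hypothetical.)
import can

bus = can.Bus(channel="can0", interface="socketcan")

# Nothing in the protocol ties this ID or payload to a legitimate sender.
msg = can.Message(arbitration_id=0x2A,
                  data=[0x00, 0x10, 0x20, 0x30],
                  is_extended_id=False)
bus.send(msg)

# Receivers accept frames purely on their arbitration ID; they have no way
# to verify who actually put a frame on the bus.
reply = bus.recv(timeout=1.0)
print(reply)
bus.shutdown()
```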
In its announcement, Rapid7 raised the alarm that "small airplanes" are vulnerable to CAN bus manipulation that can result in unauthorized changes to flight and engine instrument readings. The announcement asserts that such manipulation could lead to a host of dangerous outcomes because a pilot cannot trust those readings.
There are several things patently wrong with this last statement.
Limiting Factors
First, the attack methodology requires physical access to the aircraft in order to attach a device to the CAN bus. Even if that physical access were gained and the CAN bus exploited, other mitigating factors should limit the catastrophic effects of the compromise.
Second, pilots usually know their small aircraft, its behaviors and its systems inside and out. This knowledge makes it harder to trick a pilot into believing faulty instrument readings.
Pilots are trained to recognize threats, and to check and cross-check sources to identify possible false data and readings. Initial flight training introduces the student pilot to a new visual environment and teaches that individual how to correlate that experience with instruments to identify correct and incorrect flight attitudes, taking into account direction, altitude, airspeed and angle of attack (AoA).
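As a rough illustration of that cross-check habit (not actual avionics code; the sources, values and tolerance below are invented for the example), a reading is only trusted when independent sources agree with it:

```python
# Toy illustration of instrument cross-checking: compare a primary reading
# against independent sources and flag disagreement beyond a tolerance.
def cross_check(primary: float, backups: dict[str, float], tolerance: float) -> list[str]:
    """Return the names of backup sources that disagree with the primary reading."""
    return [name for name, value in backups.items()
            if abs(value - primary) > tolerance]

# Panel altimeter vs. two independent sources (hypothetical values, in feet).
disagreeing = cross_check(
    primary=5500.0,
    backups={"gps_altitude": 4100.0, "backup_altimeter": 4150.0},
    tolerance=300.0,
)

if disagreeing:
    # Both independent sources disagree with the panel reading, so the panel
    # reading, not the majority, is the one to distrust.
    print("Cross-check failed against:", disagreeing)
```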
Third, a pilot in command should always have a contingency plan. A pilot must maintain currency in the various aircraft flown, including each aircraft's systems, its failure modes and the actions to take in light of those failures.
Manual Failover
A primary contingency plan is failing over to manual mode. Pilots know not only how to read an old-fashioned magnetic compass (the kind that is swung and read manually), but also how to account for its built-in errors.
For example, if any of my avionics fail, I know that my magneto-equipped engine will not stop running. And I know where the circuit breakers for the autopilot are in case my system does not let me switch over automatically.
As another failover, most (if not all) modern pilots carry iPads with native GPS and an often independent attitude and heading reference system (AHRS), a redundant tool that provides situational awareness in an emergency.
And finally, safety and security are embedded into aircraft by design. For example, most digital (glass) cockpit systems have been designed with redundancy in mind. If they aren't, the owner/operator, who under the U.S. Federal Aviation Regulations (FARs) is responsible for maintenance, should take corrective action.
The Human Head
Ultimately, small plane safety comes down to situational awareness. As pilot in command, I am on the operational side of the flight equation: I look out the window to follow visual flight rules (VFR) and maintain proper cross-checks in accordance with instrument flight rules (IFR). I understand my equipment and my systems. In flight, I am effectively both my own safety and security officer.
Our recent 2019 SANS OT/ICS survey highlighted the need for the IT and OT sides to find common ground in order to protect critical infrastructure. Aviation is one of those infrastructures, and I speak as both a security professional and a pilot.
The Rapid7 announcement further illustrates this need for assessing risk in ultrasafe, high-risk industries such as aviation. It also reinforces the idea that the man-in-the-loop is still the true security mechanism in this age of machine learning, artificial intelligence (AI) and autonomous vehicles.