Air safety has become a growing concern in recent years following several air crashes and disappearances. It has been observed that modern pilots often grapple with an onslaught of information from multiple monitors, especially during critical moments.
To address this issue, researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have turned to AI systems, which can serve as a safety net to prevent such incidents by combining human intuition with machine precision.
The new “Air-Guardian” system acts as a proactive copilot for better collaboration between human pilots and AI, promising safer skies for all.
The system is designed around the idea of having two pilots on board: one human and one computer. Both keep their “hands” on the controls, and each monitors a different aspect of the flight. When both are focused on the same thing, the human pilot stays in command. But if the human is distracted or misses something, the computer copilot quickly takes charge to ensure safety.
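The handoff rule above can be sketched as a simple arbitration function. This is a minimal illustration, not the paper's actual controller: it assumes human and machine attention are available as 2D maps over the camera image, and the function name, threshold, and histogram-intersection measure are all illustrative choices.

```python
import numpy as np

def arbitrate_control(human_map, machine_map, overlap_threshold=0.5):
    """Toy handoff rule: if human and machine attend to the same
    regions, the human keeps flying; otherwise the machine steps in.
    Inputs are non-negative 2D attention maps over the same image."""
    # Normalize each map into a probability distribution.
    h = human_map / human_map.sum()
    m = machine_map / machine_map.sum()
    # Histogram intersection: 1.0 when identical, 0.0 when disjoint.
    overlap = np.minimum(h, m).sum()
    return "human" if overlap >= overlap_threshold else "machine"
```

For example, two identical maps yield `"human"`, while maps concentrated on disjoint corners of the image yield `"machine"`.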
This means Air-Guardian monitors not just the aircraft but also the pilot. It does this by tracking the pilot’s eye movements and building up “saliency maps.” These maps pinpoint where attention is directed, serving as visual guides that highlight the key regions of an image and make the behavior of complex algorithms easier to interpret.
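One common way to turn eye-tracker fixations into a saliency map is to place a Gaussian blob at each gaze point and normalize the result. The sketch below assumes this standard construction; the function name, map size, and blur width are illustrative, not taken from the paper.

```python
import numpy as np

def gaze_saliency_map(gaze_points, shape=(64, 64), sigma=5.0):
    """Build a saliency map from eye-tracker fixations.
    Each fixation (row, col) contributes a Gaussian blob; the map is
    normalized to sum to 1, i.e. an attention distribution."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    sal = np.zeros(shape)
    for r, c in gaze_points:
        sal += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return sal / sal.sum()
```

The resulting map peaks where the pilot looked most, which is exactly what lets the system compare human attention against the machine's.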
Rather than intervening only during safety breaches, as traditional autopilot systems do, Air-Guardian identifies early signs of potential risk through these attention markers.
“This system represents the innovative approach of human-centric AI-enabled aviation,” said Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. “Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that the AI doesn’t merely replace human judgment but complements it, leading to enhanced safety and collaboration in the skies.”
Air-Guardian’s core technology relies on an optimization-based cooperative layer that uses visual attention from both humans and machines, together with liquid closed-form continuous-time neural networks (CfCs), known for their prowess in deciphering cause-and-effect relationships. Additionally, the inclusion of the VisualBackProp algorithm allows for a clear understanding of the attention maps within images.
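To give a flavor of what a closed-form continuous-time update looks like, here is a heavily simplified single-step sketch in the spirit of the CfC cell introduced by Hasani and colleagues: the new hidden state is a blend of two learned heads, gated by a time-dependent sigmoid, so the state evolves with elapsed time without a numerical ODE solver. All parameter names and shapes here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_step(x, u, t, params):
    """One simplified closed-form continuous-time (CfC) update.
    x: hidden state, u: input, t: elapsed time since the last step."""
    Wf, Wg, Wh, bf, bg, bh = params
    z = np.concatenate([x, u])
    f = np.tanh(Wf @ z + bf)   # controls the effective time constant
    g = np.tanh(Wg @ z + bg)   # head that dominates near t = 0
    h = np.tanh(Wh @ z + bh)   # head that dominates as t grows
    gate = sigmoid(-f * t)     # closed-form gating in time
    return gate * g + (1.0 - gate) * h
```

Because the time dependence is expressed in closed form, the same cell can be evaluated at irregular time intervals, which suits streaming cockpit data.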
During field tests, both the Air-Guardian system and the human pilot made decisions based on the same raw images while navigating to a target waypoint. The system’s success was measured by the cumulative reward earned during the flight and by the length of the path to the waypoint. The results show that Air-Guardian reduced the risk level of flights and increased the success rate of navigating to target points.
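A toy version of such an episode score might combine progress toward the waypoint with a small per-step penalty that favors shorter paths. This is a hypothetical illustration of the kind of metric described, not the paper's actual reward function; the function name and penalty value are assumptions.

```python
import numpy as np

def episode_score(positions, waypoint, step_penalty=0.01):
    """Toy flight-episode score: reward accumulates as the aircraft
    closes on the waypoint, and a per-step penalty favors short paths.
    positions: sequence of (x, y) points visited during the flight."""
    positions = np.asarray(positions, dtype=float)
    dists = np.linalg.norm(positions - np.asarray(waypoint), axis=1)
    # Per-step reward is the reduction in distance to the waypoint.
    progress = dists[:-1] - dists[1:]
    return progress.sum() - step_penalty * (len(positions) - 1)
```

Under this scoring, a direct path to the waypoint earns a higher score than a detour covering the same net progress, mirroring the "shorter path" criterion in the tests.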
“The Air-Guardian system highlights the synergy between human expertise and machine learning, furthering the objective of using machine learning to augment pilots in challenging scenarios and reduce operational errors,” says Daniela Rus, senior author of the paper.