An AI copilot can help human pilots fly more precisely, leading to safer flights. Researchers have proposed an air-guardian system that enables collaboration between a pilot equipped with eye-tracking technology and a parallel end-to-end neural control system.

Consider being on a plane flown by both a human and a computer. Although both have their "hands" on the controls, their priorities can be at odds. When the human and the machine focus on the same thing, the human stays in control. If the human becomes preoccupied or fails to notice something, however, the computer immediately takes over.

Air-Guardian

Researchers have created a technology they call Air-Guardian. It is a proactive copilot, a collaboration between human and computer grounded in understanding attention, that helps modern pilots cope with the avalanche of information from multiple displays, especially at crucial moments.

But how does it gauge focus? Human attention is tracked with eye-tracking technology, while the machine's attention is read from saliency maps, which indicate where the network's processing power is allocated. These maps act like visual guides that highlight important regions of an image, making the behavior of complex algorithms easier to understand and interpret. Instead of intervening only after safety breaches, as conventional autopilot systems do, Air-Guardian uses these attention markers to identify early signals of potential threats.
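To make the idea of a saliency map concrete, here is a minimal sketch (not the researchers' code) that computes one as the gradient magnitude of a tiny network's output with respect to its input pixels: the larger the gradient at a pixel, the more that pixel mattered to the decision. The two-layer network and its weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_net(x, W, w):
    """One hidden ReLU layer followed by a linear readout (illustrative)."""
    h = np.maximum(W @ x, 0.0)       # hidden activations
    return w @ h                     # scalar output

def saliency(x, W, w):
    """|d(output)/d(input)|, computed analytically for the tiny net."""
    h = W @ x
    mask = (h > 0).astype(float)     # ReLU gradient
    grad = (w * mask) @ W            # chain rule back to the input
    return np.abs(grad)              # one importance score per pixel

x = rng.normal(size=16)              # a flattened 4x4 "image"
W = rng.normal(size=(8, 16))
w = rng.normal(size=8)

s = saliency(x, W, w)
print(s.shape)                       # one score per input pixel: (16,)
```

In a real vision network the same quantity is obtained with automatic differentiation; richer methods such as VisualBackProp (discussed below) propagate feature maps instead of raw gradients.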

Beyond the realm of aviation, this approach has far-reaching ramifications. One day, similar cooperative control techniques may be utilized in automobiles, crewless aerial vehicles, and other types of robots.

Continuous-time neural networks

The core technology is where Air-Guardian excels. It examines incoming images using an optimization-based cooperative layer that fuses human and machine visual attention, together with closed-form continuous-time (CfC) neural networks, which are adept at capturing cause-and-effect relationships. To locate the system's attention within an image, the researchers use the VisualBackProp algorithm, which computes the network's attention maps.
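The distinctive property of a CfC cell is that its hidden state is an explicit, closed-form function of the elapsed time between observations, rather than the result of a numerical ODE solver. The following is a simplified sketch of that idea, not the authors' implementation: the state is a time-gated interpolation between two learned branches, and the weight matrices here are random placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_cell(x, h_prev, t, params):
    """Simplified CfC-style update: a closed-form, time-gated blend of
    two learned branches (an assumed toy form, not the paper's code)."""
    Wf, Wg, Wh = params
    z = np.concatenate([x, h_prev])
    f = Wf @ z                        # controls the time constant
    g = np.tanh(Wg @ z)               # one candidate state
    h_tilde = np.tanh(Wh @ z)         # complementary candidate state
    gate = sigmoid(-f * t)            # depends explicitly on elapsed time t
    return gate * g + (1.0 - gate) * h_tilde

rng = np.random.default_rng(1)
dim_x, dim_h = 4, 8
params = [rng.normal(size=(dim_h, dim_x + dim_h)) * 0.1 for _ in range(3)]

h = np.zeros(dim_h)
for t_step in [0.1, 0.5, 1.0]:        # irregularly spaced observations
    x = rng.normal(size=dim_x)
    h = cfc_cell(x, h, t_step, params)
print(h.shape)
```

Because the time gap `t` enters the update directly, the cell handles irregularly sampled sensor streams without re-solving a differential equation at each step, which is what makes this family of models attractive for real-time control.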

The human-machine interface will need improvement before widespread adoption. User feedback suggests adding a bar or other visual cue to indicate when the guardian system takes over, which would make the handoff easier to understand.

VisualBackProp algorithm

The attention profiles for the neural networks are generated via the VisualBackProp algorithm, which computes the networks' saliency maps (feature importance). Human attention profiles, meanwhile, are derived either from eye tracking of human pilots or from saliency maps of networks trained to imitate human pilots. The pilot retains control when the pilot's and the guardian's attention profiles align; otherwise, the air guardian assumes command of the aircraft.
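The handover logic above can be sketched as a simple comparison of the two attention maps. This is an assumed simplification: the paper's cooperative layer solves an optimization problem, whereas here a plain cosine-similarity threshold (a hypothetical value of 0.8) decides who controls.

```python
import numpy as np

def attention_overlap(pilot_map, guardian_map):
    """Cosine similarity between two normalized attention distributions."""
    p = pilot_map / pilot_map.sum()
    g = guardian_map / guardian_map.sum()
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g)))

def who_controls(pilot_map, guardian_map, threshold=0.8):
    """Pilot keeps control while the attention profiles align;
    otherwise the guardian takes over (threshold is illustrative)."""
    if attention_overlap(pilot_map, guardian_map) >= threshold:
        return "pilot"
    return "guardian"

aligned = np.array([0.1, 0.7, 0.2])      # pilot looking where it matters
distracted = np.array([0.8, 0.1, 0.1])   # pilot's gaze has drifted

print(who_controls(aligned, aligned))      # pilot
print(who_controls(distracted, aligned))   # guardian
```

In practice the maps would be two-dimensional and the handover smoothed over time, but the core idea, that a measured divergence in attention triggers the guardian, is the same.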

Conclusion

The researchers demonstrate that their attention-based air-guardian system can balance the pilot's expertise and attention against its own level of involvement in the flight. It proved particularly effective in situations where the pilot was overloaded with information and the guardian system activated. The researchers showed the efficacy of their approach across flight scenarios through simulated operations with a fixed-wing aircraft and a real-world implementation on a quadrotor platform.

Air-Guardian points toward safer skies, acting as a backstop when a pilot's concentration wanders even for a split second. By integrating a cooperative layer with a causal continuous-depth neural network model, the vision-based system enables concurrent autonomy between a pilot and a control system, achieved by leveraging perceived disparities in the attention profiles of both parties.
