Bio-mimetic Agile aerial roBots flying in real-life conditions – AgileNeuroBot
Unmanned Aerial Vehicles (UAVs) are becoming essential tools in an increasing number of tasks. However, flying in complex environments requires fast, low-latency coordination between sensing and control, both to initiate aggressive manoeuvres and to support capabilities such as stabilization, obstacle avoidance, tracking or interception. Taken together, these abilities are mostly nonexistent in current UAVs, as most systems rely on the intervention of an external operator or on global positioning systems (GPS), which are known to be inadequate for realistic scenarios. While recent years have seen fast-paced technical progress, a technological gap remains before UAVs can fly autonomously in safe conditions. The reason for this poor performance is a lack of dynamic acquisition and processing. The current paradigm, inherited from decades of conventional frame-by-frame signal acquisition, is incompatible with the dynamics of flight. Machine vision techniques systematically operate on the entire image of each frame, performing computations on every pixel regardless of its content. This approach wastes energy, as redundant information is needlessly acquired, transmitted and processed. While acceptable at low acquisition and processing rates, it becomes a major obstacle for high frame-rate, low-latency tasks.
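To make the redundancy argument concrete, here is a back-of-the-envelope sketch in Python. All figures (sensor resolution, frame rate, scene activity) are illustrative assumptions, not numbers from the project; it simply compares the per-second pixel workload of a frame-based pipeline with that of an event-based one, where only changing pixels generate data:

```python
# Back-of-the-envelope comparison of the two acquisition paradigms.
# All figures below are illustrative assumptions, not project results.

H, W = 480, 640    # assumed sensor resolution
fps = 1_000        # assumed frame rate needed for an aggressive manoeuvre
activity = 0.02    # assumed fraction of pixels changing per frame interval

ops_frame_based = H * W * fps                  # every pixel, every frame
ops_event_based = int(H * W * activity * fps)  # only pixels that changed

print(f"frame-based: {ops_frame_based:.1e} pixel-ops/s")
print(f"event-based: {ops_event_based:.1e} pixel-ops/s")
print(f"redundancy factor: {ops_frame_based / ops_event_based:.0f}x")
```

Under these assumed numbers, the event-based stream carries fifty times fewer operations per second, which is the kind of saving the proposal leverages for high frame-rate, low-latency flight.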
Here, we propose to implement a neuromorphic, bio-inspired architecture for UAVs. We will rely on event-based vision sensors, novel pieces of hardware that mimic biological retinal processing. These sensors acquire visual information asynchronously, at the camera's native resolution and with a temporal precision of about one microsecond. We will develop a fully neuromorphic architecture that preserves the dynamic temporal properties of these sensors, operating on the precise timing of visual events throughout the information-processing pipeline. We expect the resulting system to be both energy-efficient and extremely fast. We will set up an event-by-event processing loop combining a feed-forward stream of information with, for the first time, a true time-based dynamic feedback, increasing reactivity and reducing the number of visual events needed to initiate a flying manoeuvre. We will develop a multi-modal binding scheme relying on sparse, predictive coding that operates directly on the timing of each visual event. For instance, the current motor command, or knowledge of the current trajectory from the inertial unit, can be used to predict and update internal navigation maps, such that only the residual information is passed forward. We will also anticipate future time-to-contact maps from the current flight path, feeding predictions from the collision-detection system back toward the sensors to filter out irrelevant visual information. By increasing sparseness, this approach concentrates processing on the most informative data, both for flight control and for the navigation maps, an architecture well suited to event-based representations. Such a combination of asynchronous event-based processing with conventional feedback control is, we believe, a game changer: it both accelerates the processing pipeline by several orders of magnitude compared to the state of the art and allows the UAV to perform complex reactive manoeuvres in real situations.
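A minimal sketch of this predictive, event-by-event filtering idea follows. It is an illustration under our own assumptions, not the project's implementation: the `Event` record, the `residual_stream` generator and the toy `explained` predicate are hypothetical stand-ins for a prediction derived from the motor command or the inertial unit; only events the prediction fails to explain are passed forward.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp in seconds (event cameras give ~1 us precision)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 brightness increase, -1 decrease

def residual_stream(events, explained):
    """Forward only the events NOT accounted for by the prediction.

    `explained(e)` is a hypothetical model returning True when an event is
    already predicted from the current motor command / inertial estimate;
    such events are suppressed, and only the residual (surprising) events
    reach the next processing stage, one event at a time.
    """
    for e in events:
        if not explained(e):
            yield e

# Toy usage: pretend everything in the left half of the sensor is static,
# already-mapped structure that the ego-motion prediction explains away.
events = [Event(t=1e-4 * k, x=(k * 100) % 640, y=(k * 70) % 480, p=(-1) ** k)
          for k in range(10)]
left_half_is_predicted = lambda e: e.x < 320
for e in residual_stream(events, left_half_is_predicted):
    print(f"t={e.t * 1e3:6.2f} ms  pixel=({e.x},{e.y})  polarity={e.p:+d}")
```

Because the filter is a generator operating on one event at a time, latency stays at the scale of a single event rather than a full frame, which is the property the feedback loop described above exploits.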
Together, our proposed studies will have a broad impact, enabling a technological leap for autonomous vehicles by providing the building blocks for ultra-fast, low-latency and power-efficient architectures for autonomous systems. The consortium gathers all the knowledge and facilities required to cover every aspect of this proposal, and our extensive experience in neuromorphic engineering, autonomous UAVs and bio-inspired computation makes us uniquely qualified to pursue this objective.
Project coordination
Laurent Perrinet (Institut de Neurosciences de la Timone)
The project coordinator is the author of this summary and is responsible for its content. The ANR declines all responsibility for its contents.
Partnership
INT Institut de Neurosciences de la Timone
ISM Institut des sciences du mouvement - Etienne-Jules Marey
IdV Institut de la vision
ANR grant: 433,715 euros
Beginning and duration of the scientific project: February 2021 - 36 months