Deep Spiking networks for Embedded and Efficient intelligence in autonomous systems – DeepSee
Autonomous and intelligent embedded solutions are mainly designed as cognitive systems built around a three-step process: perception, decision and action, invoked periodically in a closed loop to detect changes in the environment and choose the appropriate actions according to the mission to be achieved. In an autonomous agent such as a robot, a drone or a vehicle, these three stages are naturally instantiated as i) the fusion of information from different sensors, ii) the analysis of the scene, typically performed by artificial neural networks, and iii) the selection of an action applied to actuators such as motors, mechanical arms or any other means of interacting with the environment. In that context, recent results demonstrate the growing maturity of two complementary technologies: Event-Based Sensors (EBS) and Spiking Neural Networks (SNN). The nature of these sensors questions the very way in which autonomous systems interact with their environment. Indeed, an Event-Based Sensor reverses the perception paradigm of Frame-Based Sensors (FBS): instead of systematic, periodic sampling (whether an event has happened or not), it reflects the true causal relationship, in which the event itself triggers the sampling of the information. We propose to study this disruptive change of the perception stage and how event-based processing can cooperate with the current frame-based approach to make the system more reactive and robust.
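To make this paradigm shift concrete, the following self-contained sketch is a toy numerical illustration, not the project's implementation: the synthetic signal, the contrast threshold and the frame rate are arbitrary assumptions. It contrasts periodic frame-based sampling with event-driven sampling, where a sample is produced only when the measured change since the last event crosses a threshold.

```python
import numpy as np

# Synthetic per-pixel intensity over one second: static background with one brief change.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
intensity = np.where((t > 0.4) & (t < 0.5), 1.0, 0.2) + 0.01 * rng.standard_normal(t.size)

# Frame-Based Sensor: sample periodically, whether the scene changed or not.
frame_period = 0.033                              # ~30 frames per second
frame_times = np.arange(0.0, 1.0, frame_period)
frames = np.interp(frame_times, t, intensity)     # one sample per period, always

# Event-Based Sensor: a sample (event) is emitted only when the accumulated
# change since the last event crosses a contrast threshold.
threshold = 0.15
events = []                                       # (timestamp, polarity)
reference = intensity[0]
for ti, xi in zip(t, intensity):
    if abs(xi - reference) > threshold:
        events.append((ti, +1 if xi > reference else -1))
        reference = xi                            # reset the reference level

print(f"frame-based samples: {frames.size}")      # fixed cost, independent of activity
print(f"event-based events : {len(events)}")      # cost follows scene activity
```

On a mostly static scene, the frame-based loop pays a constant acquisition and processing cost, while the event-based loop produces data only around the moments when the scene actually changes.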
In parallel, SNN models have been studied for several years as an interesting alternative to Formal Neural Networks (FNN), both for the reduced computational complexity they offer in deep network topologies and for their natural ability to support unsupervised, bio-inspired learning rules. The most recent results show that these methods are becoming increasingly mature and almost catch up with the performance of formal networks, even though most of the learning is done without data labels. But should the two approaches be compared at all when the very nature of their input data is different? In the context of image processing, one (FNN) deals with whole frames and categorizes objects, while the other (SNN) is particularly well suited to event-based sensors and therefore better adapted to capturing spatio-temporal regularities in a continuous flow of events. The approach we propose to follow in the DeepSee project is to associate spiking networks with formal networks rather than putting them in competition.
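The sketch below is purely illustrative of such an association; the leaky integrate-and-fire model, the layer sizes, the random untrained weights and the late-fusion scheme are assumptions made for the example, not the DeepSee architecture. A spiking branch integrates the asynchronous event stream while a formal branch processes a whole frame, and their outputs are merged ahead of the decision stage.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PIXELS, N_HIDDEN, N_CLASSES = 64, 32, 4

# Random, untrained weights: this only illustrates the data flow, not a trained model.
w_snn = rng.normal(0, 0.3, (N_HIDDEN, N_PIXELS))
w_fnn = rng.normal(0, 0.3, (N_HIDDEN, N_PIXELS))
w_out = rng.normal(0, 0.3, (N_CLASSES, 2 * N_HIDDEN))

def snn_branch(events, tau=0.02, v_th=1.0, dt=1e-3, duration=0.1):
    """Spiking branch: leaky integrate-and-fire layer fed by an
    asynchronous (timestamp, pixel) event stream; returns spike counts."""
    v = np.zeros(N_HIDDEN)                        # membrane potentials
    counts = np.zeros(N_HIDDEN)
    for step in range(int(duration / dt)):
        t0, t1 = step * dt, (step + 1) * dt
        active = [px for (ts, px) in events if t0 <= ts < t1]
        inp = np.zeros(N_PIXELS)
        inp[active] = 1.0
        v = v * np.exp(-dt / tau) + w_snn @ inp   # leak + integrate
        fired = v >= v_th
        counts += fired
        v[fired] = 0.0                            # reset after a spike
    return counts

def fnn_branch(frame):
    """Formal branch: a dense layer with ReLU activation on the whole frame."""
    return np.maximum(0.0, w_fnn @ frame)

def decide(events, frame):
    """Decision stage: late fusion of the spiking and formal representations."""
    features = np.concatenate([snn_branch(events), fnn_branch(frame)])
    return int(np.argmax(w_out @ features))

# Toy inputs: a random frame and a sparse event stream on a few pixels.
frame = rng.random(N_PIXELS)
events = [(rng.uniform(0, 0.1), rng.integers(0, N_PIXELS)) for _ in range(50)]
print("chosen action:", decide(events, frame))
```

In such a scheme, the spiking branch reacts to the temporal structure of the event flow, while the formal branch contributes the frame-level categorization mentioned above.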
Project coordination
Benoit Miramond (Laboratoire d'électronique antennes et télécommunications)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partners
CerCo CENTRE DE RECHERCHE CERVEAU ET COGNITION
LEAT Laboratoire d'électronique antennes et télécommunications
I3S Laboratoire informatique, signaux systèmes de Sophia Antipolis
RENAULT SAS - GUYANCOURT
ANR grant: 711,343 euros
Beginning and duration of the scientific project: February 2021 - 42 months