Gradual Multi Transfer Learning for Safe Autonomous Driving – MultiTrans
The development of algorithms for Autonomous Vehicles (AVs) faces important challenges throughout the design and implementation pipeline. The high cost and complex operation of real-world test-beds limit the experience an embedded Artificial Intelligence (AI) can gather, since it originates from a few vehicles that cannot be kept operational for extended periods. For this reason, development often goes through a simulation stage or a testing step on a simplified system (e.g., smaller vehicles, standalone sensors or robotic models). MultiTrans focuses on the perception stage of AVs, which must provide a highly accurate representation of the driving environment, used as input for the subsequent decision and control steps, while allowing clear discrimination between similar but distinct contexts. The project takes the perspective of vision-based embedded systems (i.e., systems relying on cameras or similar sensors), which are among the most promising perception solutions. Their underlying sensing technologies, however, make them sensitive to an important research challenge: coping with adverse conditions (such as bad weather or sun glare). In addition, knowledge transfer between different (real or virtual) environments suffers from two further issues: the reality gap, which arises when a simulation or model fails to capture all the particularities of a real system, and the extended development time caused by the inherently iterative process of adapting an algorithm from one system or domain to another.
In MultiTrans, we propose to address these research issues by tackling the development and deployment of autonomous driving algorithms jointly. The idea is to enable data, experience and knowledge to be transferable across the different systems (simulation, robotic models, and real-world cars), thus potentially accelerating the rate at which an embedded intelligent system can gradually learn to operate at each deployment stage. The research hypothesis that serves as the starting point of MultiTrans reflects the current state of deployment of autonomous driving technologies: AVs can be programmed (or can learn) to react and operate autonomously in controlled (or restricted) environments. The focus of our proposal is on the AI side: research is needed to support these systems during the perception stage, enabling AVs to operate safely in a wider range of situations. The project is expected to contribute substantial advances with respect to the state of the art, by producing (i) a novel theoretical framework and new algorithms for transfer and frugal learning in virtual and real environments; (ii) advances in multi-domain and multi-source computer vision for semantic segmentation and scene recognition applied to safe autonomous driving; and (iii) a robotic autonomous vehicle model demonstrator combined with a virtual world model.
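To make the gradual, multi-stage transfer idea concrete, the following Python/PyTorch sketch fine-tunes a single perception model successively on data from each deployment stage. Everything in it is illustrative rather than taken from the project: the toy network, the random placeholder datasets, the stage sizes, class count and learning rates are all assumptions chosen only to show the pattern of later stages contributing fewer, costlier samples.

    # Minimal sketch (not the project's code): gradual transfer of one
    # perception model across three deployment stages.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    NUM_CLASSES = 5  # hypothetical number of semantic classes

    # Toy per-pixel classifier standing in for a full perception network.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, NUM_CLASSES, 1),
    )
    criterion = nn.CrossEntropyLoss()

    def make_stage(n):
        # Random placeholders; real stages would load sim/robot/car images.
        x = torch.randn(n, 3, 64, 64)
        y = torch.randint(0, NUM_CLASSES, (n, 64, 64))
        return DataLoader(TensorDataset(x, y), batch_size=4)

    # Later stages are costlier to sample, so they contribute fewer examples
    # and use a smaller learning rate to preserve earlier knowledge.
    stages = [("simulation",    make_stage(64), 1e-3),
              ("robotic model", make_stage(16), 1e-4),
              ("real vehicle",  make_stage(4),  1e-5)]

    for name, loader, lr in stages:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for x, y in loader:
            opt.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            opt.step()
        print(f"finished stage: {name}, last loss = {loss.item():.3f}")

In the actual project, the placeholder loaders would be replaced by simulated, robotic-platform and real-vehicle datasets, and the decreasing learning rates merely stand in for whatever mechanism is used to retain knowledge acquired at earlier stages.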
The novelty in this project is to develop an intermediate environment that makes it possible to deploy algorithms in a physical world model. This additional step will allow us to re-create more realistic use cases, contributing to a better, faster and more frugal transfer of perception algorithms to and from real autonomous vehicle test-beds. The robotic platform will also enable research on multi-domain and multi-actor transfer by reducing the time and effort required to build relevant use cases and multiple variants of these scenarios, thus supporting domain generalization. We will also explore frugal learning techniques, such as few-shot learning, to reduce the number of samples required for the recognition/segmentation tasks to converge before transferring them. Thanks to the platform, we will be able to evaluate solutions for complex configurations in the virtual environment and then transfer them to the platform, bridging the gap between behaviour cloning (through imitation learning) and simulation.
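As a hedged illustration of the frugal, few-shot adaptation mentioned above (again a sketch under assumed names and toy data, not the project's implementation), one common recipe is to freeze a feature extractor trained on abundant simulated data and fine-tune only a lightweight head on a handful of labelled real samples:

    # Minimal sketch: few-shot adaptation with a frozen feature extractor.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 5  # hypothetical number of semantic classes

    encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
    head = nn.Conv2d(16, NUM_CLASSES, 1)

    # Assume `encoder` was already trained on plentiful simulated data;
    # freeze it so the few real samples only reshape the lightweight head.
    for p in encoder.parameters():
        p.requires_grad = False

    opt = torch.optim.Adam(head.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # "Few-shot" support set: e.g. 5 labelled real images (random here).
    x = torch.randn(5, 3, 64, 64)
    y = torch.randint(0, NUM_CLASSES, (5, 64, 64))

    for epoch in range(20):
        opt.zero_grad()
        loss = criterion(head(encoder(x)), y)
        loss.backward()
        opt.step()
    print(f"few-shot adaptation loss: {loss.item():.3f}")

Freezing the encoder is only one possible design choice (adapters or partial fine-tuning are alternatives); it is used here because it makes the trade-off between few samples and a small trainable surface explicit.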
Project coordination
Maxime Gueriau (LABORATOIRE D'INFORMATIQUE, DE TRAITEMENT DE L'INFORMATION ET DES SYSTÈMES - EA 4108)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partnership
LITIS LABORATOIRE D'INFORMATIQUE, DE TRAITEMENT DE L'INFORMATION ET DES SYSTÈMES - EA 4108
I3S Laboratoire d'Informatique, Signaux et Systèmes de Sophia Antipolis
Valeo Comfort & Driving Assistance / Valeo.ai
ANR grant: 809,688 euros
Beginning and duration of the scientific project: February 2022 - 48 months