ChairesIA_2019_2 - Research and teaching chairs in Artificial Intelligence - wave 2 of the 2019-2020 call

A road toward safe artificial intelligence in mobility – RAIMo


While autonomous mobility technologies promise more sustainable and smoother transport, their societal acceptance ultimately depends on safety. Recent accidents and the demonstrated vulnerability of deep learning algorithms show that performance alone is insufficient: formal guarantees, robust certification methods, and mechanisms capable of detecting unforeseen situations are required.

RAIMo: Ensuring the Safety and Reliability of Embedded AI for Autonomous Mobility, from Certification to Field Validation

The rapid development of artificial intelligence is transforming the transportation sector, with autonomous vehicles representing a major disruptive innovation whose large-scale deployment remains uncertain. Despite ongoing real-world experimentation, broader acceptance and deployment are constrained by a central requirement: providing credible safety guarantees for algorithmic decision-making, particularly for systems based on deep learning. Accidents involving autonomous vehicles have reinforced the urgency of methods capable of demonstrating, not merely observing, operational safety. Current industrial processes rely on ex ante certification and documentation of expected behaviors, while machine learning models still suffer from a lack of formal guarantees (robustness to perturbations, vulnerability to adversarial attacks, overconfidence under distribution shift).

RAIMo addresses this scientific bottleneck by developing certification and optimization methods (notably via Mixed-Integer Programming), robust transfer learning approaches, and multimodal safe-perception components (vision, LiDAR/radar, and audio), including anomaly and novelty detection mechanisms. The proposed solutions are validated using data and vehicles from the Rouen Autonomous Lab and the TIGA ecosystem, with expected impacts on the reliability, certifiability, and societal acceptability of autonomous systems.

The expected outcomes are achieved through a chain of complementary methods. First, perception and decision models are trained to remain reliable under small data perturbations (lighting changes, noise, occlusions), by formulating learning as a bi-objective optimization problem balancing accuracy and caution. Second, safety requirements are translated into mathematical programming problems with integer variables, capable of formally proving that a model respects behavioral constraints within a defined operational domain. To ensure scalability, controlled approximations and guided initialization strategies are used to reduce computational complexity. The transition from laboratory to road deployment is addressed by detecting distribution shifts between training and real-world data and triggering more conservative decisions when uncertainty arises. On the sensing side, cameras, laser sensors, radars, and audio signals are combined to detect unusual situations and unknown objects. Finally, solutions are evaluated on real vehicles, including mixed-reality testing to safely replay critical scenarios before full real-world validation.
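The bi-objective trade-off between accuracy and caution described above can be sketched on a toy linear classifier, where the worst-case logistic-loss margin under an ℓ∞ perturbation of radius ε has the closed form m − ε‖w‖₁. This is only an illustrative sketch, not the project's training pipeline: the data, radius ε, and trade-off weight α are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 0.6, (100, 2)), rng.normal(+1.0, 0.6, (100, 2))])
y = np.array([-1.0] * 100 + [+1.0] * 100)

eps, alpha, lr = 0.3, 0.5, 0.1   # perturbation radius, clean/robust weight, step size
w, b = np.zeros(2), 0.0

for _ in range(500):
    m = y * (X @ w + b)                    # clean margins
    m_rob = m - eps * np.abs(w).sum()      # worst case under ||delta||_inf <= eps
    s_c = -1.0 / (1.0 + np.exp(m))         # d(logistic loss)/d(margin), clean term
    s_r = -1.0 / (1.0 + np.exp(m_rob))     # same, robust (worst-case) term
    coef = alpha * s_c + (1.0 - alpha) * s_r
    grad_w = (coef[:, None] * y[:, None] * X).mean(0) \
             - (1.0 - alpha) * eps * s_r.mean() * np.sign(w)
    grad_b = (coef * y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

clean = np.log1p(np.exp(-(y * (X @ w + b)))).mean()
robust = np.log1p(np.exp(-(y * (X @ w + b) - eps * np.abs(w).sum()))).mean()
acc = ((X @ w + b) * y > 0).mean()
print(f"clean loss {clean:.3f}  robust loss {robust:.3f}  accuracy {acc:.2%}")
```

Varying α traces the accuracy/caution Pareto front: α = 1 recovers plain training, while smaller α yields a more conservative model with larger certified margins.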

RAIMo has advanced the safety of AI for mobility through robust certification methods against adversarial attacks and global optimization techniques (MIP/branch-and-bound) for sparse, nearly imperceptible perturbations. On the perception side, the project introduced original multimodal components (polarimetric imaging, ADOS open-set object detection, and audio-based perception) and validated them through mixed-reality testing in collaboration with the Rouen Autonomous Lab and TIGA. Impact includes publications in leading venues (ECML/PKDD, NeurIPS, IEEE Transactions on Intelligent Transportation Systems) and continued research on novelty detection through an associated ANR Chair.

The RAIMo project focuses on strengthening formal guarantees of deep learning model robustness, in particular through the advanced integration of mixed-integer programming for certification, constrained training, and fine-grained robustness measurement. The development of a specialized MIP solver capable of scaling up is a strategic lever for handling more complex deep architectures. In terms of applications, RAIMo has proposed improvements to robust domain adaptation based on optimal transport and Wasserstein distance, as well as the advanced integration of multimodal fusion (vision, LiDAR, radar, audio) for reliable perception in degraded conditions. The detection of novelty and out-of-distribution data remains a priority in order to limit overconfidence in systems. Finally, experimental validation on autonomous vehicles will consolidate safety and acceptability guarantees.

Since its launch, RAIMo’s scientific production has focused on robustness and certification of deep learning models, including one journal article (*Metrika*, 2023) and several conference papers (ITSC/IV, ECML-PKDD, ACML, NeurIPS 2024) addressing adversarial attacks, MIP-based optimization, and optimal transport methods. No patents have been filed to date; however, several technological components—particularly sparse “invisible” adversarial attack generation and polarimetric perception methods—present strong potential for future valorization.

Recent progress in machine learning in general, and deep learning in particular, makes it possible to embed this technology in more and more autonomous vehicles. However, before this possible future becomes reality and our roads are made safer by algorithms replacing human drivers, it is necessary to know how to prove the quality of the decisions made.

This Chair project, "A road towards Safe Artificial Intelligence for Mobility", is a research proposal aimed at strengthening local research dynamics around the safety issues associated with the use of artificial intelligence in mobility. To achieve this goal, it will endeavor to formalize the problem, to propose algorithms to solve it, and to demonstrate its feasibility on real autonomous vehicles under real driving conditions.

To establish safety certificates, a first idea is to develop the associated theory by formalizing this requirement as a multi-objective/multi-level optimization problem aimed at both learning the AI model and guaranteeing its quality. However, this optimization problem, a mixed binary program, is very complex and does not scale. The challenge is to work on formalization, relaxations, and resolution algorithms. The goal is to build and train, in a reasonable time, deep neural networks that can be proven robust, possibly together with the explainability and interpretability of these black-box models.
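As a toy illustration of the mixed binary formulation, consider certifying a tiny fixed ReLU network over an ℓ∞ box around an input. Each ReLU's on/off state is a binary variable of the program; enumerating all patterns and minimizing the resulting affine functions over the box yields a sound (though possibly loose) lower bound on the output, i.e. a certificate when the bound keeps its sign. The weights below are invented for illustration; a real solver would branch over the binaries and also enforce pattern feasibility rather than relax it.

```python
import itertools
import numpy as np

# Tiny fixed 2-4-1 ReLU network (hypothetical weights, for illustration only).
W1 = np.array([[1.0, -0.5], [0.3, 1.2], [-0.8, 0.6], [0.5, 0.5]])
b1 = np.array([0.1, -0.2, 0.0, 0.3])
w2 = np.array([1.0, -0.7, 0.9, 0.4])
b2 = 0.2

def net(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def certified_lower_bound(x0, eps):
    """Lower-bound net(x) over the box ||x - x0||_inf <= eps by enumerating
    ReLU on/off patterns (the binary variables of the MIP encoding)."""
    lo, hi = x0 - eps, x0 + eps
    best = np.inf
    for pattern in itertools.product([0.0, 1.0], repeat=len(b1)):
        d = np.array(pattern)        # z_j = 1 iff neuron j is assumed active
        a = (w2 * d) @ W1            # affine coefficients of the output for this pattern
        c = (w2 * d) @ b1 + b2
        # Minimize a @ x + c over the box: pick each coordinate's worst corner.
        best = min(best, np.where(a >= 0, a * lo, a * hi).sum() + c)
    return best   # sound but loose: pattern-feasibility constraints are relaxed

x0, eps = np.array([0.5, 0.5]), 0.05
lb = certified_lower_bound(x0, eps)
print(f"net(x0) = {net(x0):.3f}, certified lower bound over the eps-box: {lb:.3f}")
```

Enumeration is exponential in the number of neurons, which is exactly why the project's relaxations, guided initializations, and branch-and-bound strategies matter at realistic scales.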

The second research direction of the proposal aims at ensuring the safety of deep neural networks in the framework of mobility by monitoring their decision processes. This implies research on redundant multimodal perception, including the processing of audio and video data acquired through different modalities (such as polarimetry), and the related fusion issues in the context of deep learning. Another important aspect is the safe monitoring of decision-making processes, including novelty and out-of-distribution detection mechanisms and self-assessment. One way to achieve this goal is to formalize the problem as robust statistical hypothesis testing on multimodal inputs based on optimal transport theory.
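The hypothesis-testing view can be sketched in one dimension, where the Wasserstein-1 distance between two equal-size samples reduces to the mean absolute difference of their sorted values; a permutation test then decides whether an incoming batch of, say, detector confidence scores still matches the training distribution. This is a didactic sketch on simulated scores, not the project's multimodal test.

```python
import numpy as np

rng = np.random.default_rng(1)

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D samples: mean |sorted a - sorted b|."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

def shift_test(train_scores, new_scores, n_perm=500, alpha=0.05):
    """Permutation test: is the new batch drawn from the training distribution?
    Returns (W1 distance, p-value, shift detected)."""
    d = wasserstein_1d(train_scores, new_scores)
    pooled = np.concatenate([train_scores, new_scores])
    n = len(train_scores)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)       # reshuffle under the null hypothesis
        count += wasserstein_1d(perm[:n], perm[n:]) >= d
    p = (count + 1) / (n_perm + 1)
    return d, p, p < alpha

train = rng.normal(0.0, 1.0, 300)            # scores seen during training
in_dist = rng.normal(0.0, 1.0, 300)          # a batch from the same distribution
shifted = rng.normal(1.5, 1.0, 300)          # a simulated distribution shift

d_in, p_in, rej_in = shift_test(train, in_dist)
d_sh, p_sh, rej_sh = shift_test(train, shifted)
print(f"in-distribution batch: W1={d_in:.3f}, p={p_in:.3f}, shift detected: {rej_in}")
print(f"shifted batch:         W1={d_sh:.3f}, p={p_sh:.3f}, shift detected: {rej_sh}")
```

A detected shift would trigger the conservative fallback behavior described above rather than letting the model act on overconfident predictions.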

The third part of the project concerns the implementation of the proposed solutions. It aims at testing the investigated solutions under real conditions with real autonomous vehicles. To this end, the chair project is articulated with the Rouen Autonomous Lab, which already operates four autonomous vehicles on site, and with the PIA3 TIGA program "Rouen Normandie Intelligent Mobility for All - for an integrated system of multimodal and carbon-free mobility".

Together with INSA and the University of Rouen Normandy, as part of Normandy University, the chair will contribute to all levels of engineering training and to training through research programs. To this end, the chair will be deeply involved in the Normandy University Research School (EUR) project MINMACS in the field of safe AI for mobility.

To reach these ambitious goals and to make the Madrillet Campus in Normandy an international reference in the field of AI for mobility, the Chair will benefit from: (i) a team of three professors combining the relevant scientific skills, (ii) financial support from INSA and the University of Rouen Normandy, (iii) scientific collaboration with local and national research laboratories working in the AI domain, (iv) the facilities of CRIANN (the regional on-site computing center), and (v) the support of the Rouen Autonomous Lab and its four autonomous vehicles operating on the Madrillet Campus.

Our ultimate goal is to contribute to making learning systems safe for mobility and beneficial to society.

Project coordination

Stéphane Canu (LABORATOIRE D'INFORMATIQUE, DE TRAITEMENT DE L'INFORMATION ET DES SYSTÈMES - EA 4108)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its contents.

Partnership

LITIS LABORATOIRE D'INFORMATIQUE, DE TRAITEMENT DE L'INFORMATION ET DES SYSTÈMES - EA 4108

ANR grant: 599,400 euros
Beginning and duration of the scientific project: August 2020 - 48 Months
