CE22 - Urban societies, territories, constructions and mobility

Event camera for the perception of fast objects around autonomous vehicles – CERBERE

Context and objectives

In recent years, research and experimentation on autonomous vehicles have multiplied: autonomous driving is one of the major challenges of tomorrow's mobility. In the near future, users will have access to fleets of shared autonomous vehicles that can be booked at any time via a smartphone, while reducing the risks associated with human driving, since more than 90% of road accidents are linked to human error.
One of the main technological challenges for the autonomous vehicle is understanding its environment, which is usually perceived with sensors such as lidars, radars and cameras. The main objective of this project is to exploit a sensor that breaks with existing solutions for autonomous-vehicle perception: the event camera.

The event camera is a bio-inspired sensor that, instead of capturing static images of a dynamic scene at a fixed frame rate, measures changes in illumination asynchronously and independently at each pixel. This property makes it particularly interesting for autonomous vehicles, since it can address the remaining challenges of autonomous driving scenarios: scenes with high dynamics (e.g. a tunnel exit) and the latency and speed of obstacle detection (other vehicles, pedestrians), while respecting the computing-power and data-bandwidth constraints imposed by the vehicle.
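
To make the sensor's output concrete, the standard event-generation model from the event-camera literature can be sketched as follows: a pixel emits an event (x, y, t, polarity) whenever its log-intensity has changed by more than a contrast threshold C since that pixel's last event. The code below is an illustrative sketch only; the function name and threshold value are assumptions, not part of the project.

```python
import numpy as np

def events_from_frames(frames, timestamps, C=0.2):
    """Illustrative simulation of an event stream from intensity frames.

    Standard model from the event-camera literature: pixel (x, y) emits an
    event (x, y, t, polarity) whenever its log-intensity has changed by more
    than the contrast threshold C since that pixel's last event.
    """
    eps = 1e-6                            # avoid log(0) on dark pixels
    ref = np.log(frames[0] + eps)         # per-pixel reference log-intensity
    events = []                           # list of (x, y, t, polarity) tuples
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= C)
        for x, y in zip(xs, ys):
            events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_i[y, x]       # reset reference at firing pixels
    return events
```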

The use of event cameras requires the development of new algorithms, since classical computer vision algorithms are not suited to the fundamentally different data the event camera provides. The application context (perception for autonomous vehicles) is also radically different from current work, most of which uses either a moving event camera in a static scene or a static event camera observing a dynamic scene. In this project, the objective is to exploit a camera embedded in the vehicle and observing a dynamic scene. The events generated by the camera will be due both to its own motion and to that of the objects in the scene, so the two contributions must be dissociated, which remains an open challenge; one approach from the literature is sketched below. This change of application context raises a number of new scientific challenges that we will try to solve in this project.
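
One family of methods from the event-camera literature addresses this dissociation by motion compensation: all events are warped to a common reference time according to the camera's estimated ego-motion, so that events caused by the static background align sharply while independently moving objects remain misaligned. The sketch below is illustrative only; it assumes the ego-motion-induced flow field is already available (e.g. from an IMU or odometry), which is itself part of the problem.

```python
import numpy as np

def motion_compensate(events, flow, t_ref, shape):
    """Warp events to a reference time t_ref using an ego-motion flow field.

    events : iterable of (x, y, t, polarity) tuples
    flow   : (H, W, 2) per-pixel velocity in px/s induced by the camera's
             own motion (assumed known here, e.g. from IMU/odometry)
    Returns an image of warped event counts: background events sharpen,
    while independently moving objects remain blurred and can be flagged.
    """
    height, width = shape
    img = np.zeros((height, width))
    for x, y, t, _ in events:
        dt = t_ref - t                    # propagate the event to t_ref
        xw = int(round(x + flow[int(y), int(x), 0] * dt))
        yw = int(round(y + flow[int(y), int(x), 1] * dt))
        if 0 <= xw < width and 0 <= yw < height:
            img[yw, xw] += 1
    return img
```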

Perception for the autonomous vehicle must be three-dimensional in order to localize the different entities (other vehicles, motorcycles, cyclists, pedestrians) and to determine whether the situation is dangerous or normal. This is why we are particularly interested in the innovative topic of event-based 3D perception for autonomous vehicles.
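
For reference, once corresponding events have been matched across a calibrated pair of event cameras (the matching step being an open research question in itself), depth follows from classical stereo triangulation, as in the illustrative sketch below.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Classical stereo triangulation for a rectified camera pair.

    Applicable to matched events from two calibrated event cameras:
    depth = focal length * baseline / disparity.
    """
    disparity = x_left - x_right          # in pixels, positive when rectified
    if disparity <= 0:
        raise ValueError("non-positive disparity: match invalid or at infinity")
    return focal_px * baseline_m / disparity
```
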
In addition to the detection and 3D reconstruction of moving objects, a recognition step will also be necessary so that the autonomous vehicle can make the most appropriate decision for the situation. The most efficient approaches on classical images are currently those based on CNNs (Convolutional Neural Networks). Given the structure of the data provided by the event camera, this type of network is not directly applicable and new approaches must be found; a common stop-gap representation from the literature is sketched below.
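
For context, a common stop-gap in the literature is to accumulate the event stream into a dense tensor, such as a time-discretized voxel grid, that frame-based networks can consume, at the cost of sacrificing part of the data's asynchronous nature. The sketch below is illustrative; the bin count and bilinear time weighting are conventional choices, not the project's method.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events into a (num_bins, H, W) tensor for learning.

    Each event's polarity is shared between the two nearest temporal
    bins (bilinear weighting), a common convention in the literature.
    """
    grid = np.zeros((num_bins, height, width))
    events = np.asarray(events, dtype=float)   # columns: x, y, t, polarity
    t = events[:, 2]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    for (x, y, _, p), tb in zip(events, t_norm):
        b0 = int(tb)
        w1 = tb - b0                           # weight for the next bin
        grid[b0, int(y), int(x)] += p * (1.0 - w1)
        if b0 + 1 < num_bins:
            grid[b0 + 1, int(y), int(x)] += p * w1
    return grid
```
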
The real-time behaviour of the solution is essential if the advantages of the event camera are not to be lost. An important part of this project will therefore be dedicated to Algorithm-Architecture Adequacy (AAA) so that the developed algorithms can be integrated into the smart camera proposed by the industrial partner of this project.

Project coordination

Rémi BOUTTEAU (LABORATOIRE D'INFORMATIQUE, DE TRAITEMENT DE L'INFORMATION ET DES SYSTÈMES - EA 4108)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its content.

Partners

LITIS LABORATOIRE D'INFORMATIQUE, DE TRAITEMENT DE L'INFORMATION ET DES SYSTÈMES - EA 4108
MIS MODÉLISATION, INFORMATION ET SYSTÈMES - UR UPJV 4290
YUMAIN
ImViA Imagerie et Vision Artificielle - EA 7535

ANR grant: 656,718 euros
Beginning and duration of the scientific project: January 2022 - 48 months
