DS07 - Société de l'information et de la communication

technology-aided MOBIlity by semantic DEEP learning – MOBI-DEEP

Submission summary

MOBI-DEEP addresses the development of technologies for autonomous navigation in unknown environments using low-cost vision sensors. The project relies on the assumption that inferring semantic information (presence of particular structures, identification of objects of interest, obstacles, etc.), depth maps, and motion maps describing the scene seen by a monocular camera can be sufficient to guide a person, robot, etc. through an open and unfamiliar environment. This departs from the current dominant approaches, which require good prior knowledge of the environment and the ability to reconstruct its 3D metric structure (SLAM, Lidar, etc.). It makes it possible to handle situations where systems must navigate with limited knowledge of their environment, using as lightweight a perceptual system as possible.
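As a concrete illustration of the kind of monocular inference the project builds on, the sketch below runs an off-the-shelf monocular depth network (MiDaS, loaded via torch.hub; it also requires the timm package) on a single RGB frame. This is only a sketch of the general technique, not the project's own pipeline: the random frame stands in for a real camera image, and the model outputs relative inverse depth rather than a metric 3D reconstruction, which is exactly the kind of lightweight cue the summary refers to.

```python
import numpy as np
import torch

# Load a small pretrained monocular depth model and its companion
# transforms from torch.hub (requires the `timm` package).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# Dummy RGB frame (H, W, 3) standing in for one monocular camera image.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

with torch.no_grad():
    batch = transform(frame)       # -> (1, 3, h, w) network input
    prediction = midas(batch)      # -> (1, h', w') relative inverse depth
    # Resize the prediction back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=frame.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

print(depth.shape)  # torch.Size([480, 640]): one relative depth value per pixel
```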

MOBI-DEEP will address these situations through two use cases: guidance of the visually impaired and navigation of mobile robots in open areas. In both cases, the problem can be formulated as follows: an on-board camera, roughly localized by GPS, has to move to a specified position given by GPS coordinates. No accurate map is available, and navigation must be carried out through a series of local displacements. The vision system has to extract sufficient information from the images to make this navigation possible. The carrier can be a robot or a person. We further assume that the destination can be reached by simply moving in its direction. The problem studied is that of planning a path in an unknown environment by building, over time, an egocentric and semantic representation of the navigable space (see the sketch below).
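To make this formulation concrete, here is a deliberately simplified sketch of the geometric skeleton of the problem, not the project's algorithm. The function names (bearing_to_target, pick_heading) and the free_headings input, which stands in for whatever navigable-space representation the perception stack produces, are all hypothetical; the bearing computation itself is the standard great-circle initial-bearing formula.

```python
import math

def bearing_to_target(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, clockwise from north) from the
    current rough GPS fix (lat1, lon1) to the target (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def pick_heading(goal_bearing, free_headings):
    """Choose, among the locally navigable headings (degrees) reported by
    the perception system, the one closest to the goal bearing."""
    def angular_gap(h):
        # Smallest absolute angle between h and the goal bearing.
        return abs((h - goal_bearing + 180.0) % 360.0 - 180.0)
    return min(free_headings, key=angular_gap)

# Example local displacement: the goal lies roughly north-east, but only
# three directions are currently reported as free of obstacles.
goal = bearing_to_target(48.8566, 2.3522, 48.8600, 2.3600)
print(pick_heading(goal, free_headings=[0.0, 90.0, 180.0]))  # -> 90.0
```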

This raises three main questions, which will be studied in the project for both use cases: what is the minimum semantic/3D/dynamic information required to allow navigation? How can this information be extracted from monocular images? How can one dynamically navigate through local representations of a geometrically and semantically described environment?

Special emphasis will be given to experiments within a Living Lab serving a dual purpose: conducting real-scale experiments and supporting scientific outreach.

Project coordination

Philippe Martinet (Centre de Recherche Inria Sophia Antipolis - Méditerranée)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.

Partners

Inria Centre de Recherche Inria Sophia Antipolis - Méditerranée
INJA Institut National des Jeunes Aveugles
SAFRAN
SAFRAN ELECTRONICS & DEFENSE
NAVOCAP
GREYC Groupe de REcherche en Informatique, Image, Automatique et Instrumentation de Caen

ANR grant: 611,221 euros
Beginning and duration of the scientific project: - 36 months

Useful links

Explore our database of funded projects

ANR makes its datasets on funded projects available; click here to find out more.
