ChairesIA_2019_1 - Research and Teaching Chairs in Artificial Intelligence - wave 1 of the 2019 edition

Deep learning for computational imaging with emerging image modalities – DeepCIM

Submission summary

Digital and computational imaging is a key technology for understanding the world around us. The field is likely to undergo disruptive changes in the coming years, driven by the emergence of novel imaging modalities, e.g., omni-directional videos and light fields, and by impressive advances in machine learning. However, a number of barriers must be overcome before the potential of these promising technologies can be fully exploited. The huge amount of high-dimensional data that these imaging modalities produce has obvious implications for storage and for the possibility of learning signal or physical models from the input data. Indeed, if the data lie in a high-dimensional space, an enormous amount of data is required to learn a model; this problem is known as the curse of dimensionality. The sheer number of variables also makes some processing tasks intractable, so that it becomes difficult to process the data in interactive time. In addition, capturing these data with sufficiently high angular and spatial resolution and with low noise remains technologically challenging. Reconstructing the imaged scene so that it can be observed and interpreted from continuously varying positions or angles in space is another challenge that must be solved before wide adoption in practical applications.
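The curse of dimensionality can be made concrete with a simple count. The following minimal sketch (an illustration, not part of the project itself; the function name `grid_cells` is hypothetical) shows how the number of cells in a regular grid over a d-dimensional space, and hence the number of samples needed merely to cover that space, grows exponentially with the dimension d:

```python
# Illustrative sketch: the curse of dimensionality.
# Discretizing each axis into 10 bins, the number of grid cells grows
# exponentially with the dimension d, so the amount of data needed to
# populate the space grows just as fast.
def grid_cells(d, bins_per_axis=10):
    """Number of cells in a regular grid over a d-dimensional space."""
    return bins_per_axis ** d

for d in (2, 4, 8):
    print(f"d = {d}: {grid_cells(d)} cells")
```

With 10 bins per axis, 2 dimensions need only 100 cells, but 8 dimensions already need 10^8; pixel data with thousands of dimensions is therefore far beyond any direct sampling of the space, which is why learned low-dimensional representations matter.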

The proposed project aims to address the above barriers with a research plan leveraging recent advances in three fields: image processing, computer vision, and machine (deep) learning. It will focus on the design of models and algorithms for data dimensionality reduction and for inverse problems with emerging image modalities. The project will be organized around two main research challenges. The first concerns the design of learning methods for data representation and dimensionality reduction. These methods encompass the learning of sparse and low-rank models, of signal priors, and of representations in latent spaces of reduced dimension. This also includes the learning of efficient and, if possible, lightweight architectures for data recovery from these reduced-dimension representations. Modeling the joint distribution of the pixels constituting a natural image is also a fundamental requirement for a variety of processing tasks; this is one of the major challenges in generative image modeling, a field that has in recent years come to be dominated by deep learning-based approaches. Building on the above models, our second goal is to develop algorithms for solving a number of inverse problems with novel imaging modalities. Solving an inverse problem to retrieve a good representation of the scene from the captured data requires prior knowledge of the structure of the image space, usually expressed mathematically as regularization models. Deep learning techniques designed to learn signal priors, which can then be used as regularization models, are revolutionizing the field.
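The interplay between a prior and an inverse problem can be sketched with a classical hand-crafted example: sparse recovery via the iterative shrinkage-thresholding algorithm (ISTA). This is a minimal illustration of the regularized inverse-problem template, not the project's own method, and all names in it are local to the sketch; the l1 term plays the role that a learned signal prior would play in the project:

```python
import numpy as np

# Minimal sketch: solve  min_x 0.5*||A x - y||^2 + lam*||x||_1  with ISTA.
# The l1 penalty is a hand-crafted sparsity prior; in a learned setting it
# would be replaced by a regularizer derived from a trained network.
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for sparse recovery."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: recover a 3-sparse signal from 40 random measurements in 100-D.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)
```

The alternation between a gradient step on the data-fidelity term and a proximal step on the prior is exactly the structure that plug-and-play and unrolled deep-learning approaches build on, swapping the soft-thresholding step for a learned denoiser or network layer.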

The goal is to deliver ground-breaking solutions that should open new fields of innovation in a variety of sectors: consumer applications (e.g., photography, augmented reality, autonomous vehicles), surveillance (scene understanding, face recognition, gesture analysis), and the life sciences (light-field microscopy, medical imaging, particle image velocimetry).

Project coordination

Christine Guillemot (Centre de Recherche Inria Rennes - Bretagne Atlantique)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.


Centre de Recherche Inria Rennes - Bretagne Atlantique

ANR funding: 513,881 euros
Start date and duration of the scientific project: August 2020 - 48 months
