DS0707 - Human-machine interaction, connected objects, digital content, big data and knowledge

Learning with interacting views – LIVES

Submission summary

Imagine you have to answer the following questions: how can one build a computer-aided diagnosis tool for neurological disorders from images acquired with different medical imaging devices? How can a tool identify which emotion a person is feeling from her face and her voice? How can these tools remain operational even when data of one type is missing and/or of poor quality?
These questions are at the core of problems addressed by the Institut de Neurosciences de la Timone (INT), which has expertise in brain-imaging-based medical diagnosis, and Picxel, an SME focused on affective computing. The Laboratoire d'Informatique de Paris 6 (LIP6), the Laboratoire Hubert Curien (LaHC), and the Laboratoire d'Informatique Fondamentale de Marseille (LIF, home institution of the PI) are the other partners teaming up with INT and Picxel: in this project, they bring their renowned expertise in machine learning, to which they have made theoretical, algorithmic, and practical contributions. The five partners will work closely together to propose original and innovative advances in machine learning, with a constant concern for articulating theoretical and applied findings.

The above questions pose the problem of (a) building a classifier capable of predicting the class (i.e. a diagnosis, or an emotion) of some object, (b) taking advantage of the few modalities, or *views*, used to depict the objects to classify and, possibly, (c) building relevant representations that take advantage of these views. This is precisely what the present project aims at: the development of a well-founded machine learning framework for learning in the presence of what we have dubbed *interacting views*, which is *the* notion we will take time to uncover and formalize.
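To fix intuition, here is a minimal sketch of one classical multiview baseline, late fusion: a monoview classifier is trained on each view, and their class-probability estimates are averaged at prediction time. The synthetic data, feature dimensions, and fusion rule are illustrative assumptions, not the project's methods.

```python
# Late-fusion sketch: one classifier per view, fused by averaging probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)  # binary labels, e.g. a diagnosis or an emotion
# Two noisy "views" of the same underlying objects (e.g. image and voice).
view_image = y[:, None] + rng.normal(scale=1.0, size=(n, 10))
view_voice = y[:, None] + rng.normal(scale=2.0, size=(n, 5))

idx_train, idx_test = train_test_split(np.arange(n), random_state=0)

# Train one monoview classifier per view...
views = (view_image, view_voice)
models = [LogisticRegression().fit(v[idx_train], y[idx_train]) for v in views]

# ...then average the per-view class-probability estimates at test time.
probas = np.mean([m.predict_proba(v[idx_test]) for m, v in zip(models, views)],
                 axis=0)
print("fused accuracy:", np.mean(probas.argmax(axis=1) == y[idx_test]))
```

Such a baseline ignores how the views interact; the project's point is precisely to go beyond it.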

To address the issues of multiview learning, we propose to structure our work as follows. On the one hand, we will devote time to establishing when and how classical (i.e. monoview) learnability results carry over to the multiview setting (WP1); this may require us to refine our understanding of different notions, and accompanying measures, of view interaction. On the other hand, possibly building upon those results, we will design new dedicated multiview learning algorithms along the following lines of research: a) we will investigate the problem of learning (compact) multiview representations (WP2); b) we will create new algorithms by extending recent work on transfer learning -- multitask learning and domain adaptation -- to the multiview setting (WP3); and c) we will address the scalability of our algorithms to real-life conditions, such as high-dimensional datasets and missing views (WP4).
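As a hypothetical illustration of the missing-view issue targeted in WP4, the late-fusion baseline sketched above can degrade gracefully if fusion is restricted to the views actually observed for a given example. The `fuse_available` helper below is an assumed illustration, not one of the algorithms the project will develop.

```python
import numpy as np

def fuse_available(models, views):
    """Average class-probability estimates over the views that are present.

    models: fitted per-view classifiers; views: per-view feature arrays,
    with None standing for a view that is missing at prediction time.
    """
    probas = [m.predict_proba(v) for m, v in zip(models, views) if v is not None]
    if not probas:
        raise ValueError("at least one view must be available")
    return np.mean(probas, axis=0)

# e.g., reusing `models` and `view_image` from the sketch above, with the
# voice view missing for the test examples:
# fuse_available(models, [view_image[idx_test], None])
```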

Finally, the performance of our learning algorithms will be assessed on benchmark datasets, both synthetic and real, that we will collect and make available to the machine learning community (WP5). Beyond the mere evaluation of our algorithms, these datasets will be disseminated to promote reproducible research, to identify the most suitable algorithms in a multiview setting, and to make the machine learning community aware of the exciting problems that multiview learning raises for affective computing and brain-image analysis.

Project coordination

Cécile Capponi (Laboratoire d'Informatique Fondamentale de Marseille)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its content.

Partners

Picxel
IRONOVA
LIP6 Laboratoire d'Informatique de Paris 6
LaHC Laboratoire Hubert Curien
AMU_INT Aix-Marseille Université - Institut de Neurosciences de la Timone
LIF Laboratoire d'Informatique Fondamentale de Marseille

ANR grant: 679,797 euros
Beginning and duration of the scientific project: - 42 months
