ChairesIA_2019_2 - Chaires de recherche et d'enseignement en Intelligence Artificielle - vague 2 de l'édition 2019

Learning Reasoning, Memory and Behavior – REMEMBER

We will focus on methodological contributions (models and algorithms) for training virtual and real agents to learn to solve complex tasks autonomously, targeting terrestrial mobile robots: typically service robots, industrial cobots, autonomous vehicles, UAVs, and humanoid robots. In particular, intelligent agents require high-level reasoning capabilities, situation awareness, and the capacity to robustly make the right decisions at the right moments. The required behavior policies are complex, since they involve high-dimensional input and state spaces, partially observed problems, and highly non-linear and entangled interdependencies. Learning them crucially depends on the algorithm's capacity to learn compact, structured and semantically meaningful memory representations that capture short- and long-range regularities in the task and the environment. A second key requirement is the ability to learn these representations with a minimum of human intervention and annotation, as the manual design of complex representations is all but impossible. This requires the efficient use of raw data through the discovery of regularities by different means: supervised, unsupervised or self-supervised learning, reward signals, intrinsic motivation, etc.

We combine machine learning (ML) with planning and spatial reasoning to address problems in robotics.

We use large-scale machine learning (ML) to address problems in robotics:
- Learning of spatial reasoning
- Integration of classical planning and learned planning
- Integration of physics into ML
- Sim2real transfer: large-scale learning in simulation and deployment to real physical environments

Several different methodologies of machine learning are used, extended, and combined in innovative ways:
- Reinforcement learning and deep reinforcement learning (ML)
- Representation learning and spatial reasoning
- Integration of geometry into machine learning to improve sample complexity
- Hybrid control: learned controllers (ML) with added stability constraints from control theory (CT) or physics knowledge
- Hybrid control with learned controllers and classical planning or graph theory

- Sim2real transfer for robot learning
- Robot navigation: auxiliary losses
- Robot navigation: experimental large scale study
- A new robot platform (hardware and software) targeting large-scale training in simulation and deployment to real environments.
- Vision and language reasoning: avoiding short-cut learning; new benchmark; improved method using Oracle transfer; theoretical study of sample complexity
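The auxiliary losses mentioned above can be sketched in a minimal form: a shared encoder feeds a main task head and an auxiliary head, and the weighted sum of both losses trains the shared features. Everything below (the linear model, the synthetic data, the weight `LAM`) is purely illustrative and is not the project's actual setup.

```python
import numpy as np

# Hedged sketch of training with an auxiliary loss: a shared linear
# encoder feeds two heads, a main-task head and an auxiliary head
# (e.g. predicting ego-motion). The auxiliary gradient shapes the
# shared features. All data and hyper-parameters are illustrative.

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))              # batch of observations
y_main = rng.normal(size=(64, 2))          # main-task targets
y_aux = x @ rng.normal(size=(10, 3))       # auxiliary targets (learnable)

W_enc = rng.normal(0, 0.1, (10, 8))        # shared encoder
W_main = rng.normal(0, 0.1, (8, 2))        # main head
W_aux = rng.normal(0, 0.1, (8, 3))         # auxiliary head
LAM, LR = 0.3, 0.05                        # auxiliary weight, step size

for step in range(200):
    z = x @ W_enc                          # shared features
    e_main = z @ W_main - y_main
    e_aux = z @ W_aux - y_aux
    loss = (e_main ** 2).mean() + LAM * (e_aux ** 2).mean()
    if step == 0:
        loss0 = loss                       # remember the initial loss
    # Gradients of the combined least-squares loss.
    g_main = 2 * z.T @ e_main / e_main.size
    g_aux = 2 * z.T @ e_aux / e_aux.size
    g_z = (2 * e_main @ W_main.T / e_main.size
           + LAM * 2 * e_aux @ W_aux.T / e_aux.size)
    W_main -= LR * g_main
    W_aux -= LR * LAM * g_aux
    W_enc -= LR * x.T @ g_z                # auxiliary signal reaches the encoder

print(float(loss) < float(loss0))          # combined loss decreased
```

The key point is the last update: the encoder receives gradients from both heads, so the auxiliary task regularizes the representation even when the main-task signal is sparse.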

We won the CVPR 2021 Multi-Object Navigation challenge.
New methods for visual reasoning, reasoning in physics and robot navigation.
Publications in top-level conferences (CVPR, NeurIPS).

=== Accepted papers (as of 1.12.2021)
[1] S. Janny, V. Andrieu, M. Nadri, and C. Wolf. Deep KKL: Data-driven Output Prediction for Non-Linear Systems. CDC 2021.
[2] C. Kervadec, G. Antipov, M. Baccouche, and C. Wolf. Roses Are Red, Violets Are Blue... but Should VQA Expect Them To? In CVPR, 2021.
[3] C. Kervadec*, C. Wolf*, G. Antipov, and M. Baccouche. Supervising the Transfer of Reasoning Patterns in VQA. In NeurIPS, 2021.
[4] C. Kervadec*, T. Jaunet*, G. Antipov, M. Baccouche, R. Vuillemot, and C. Wolf. How Transferrable are Reasoning Patterns in VQA? In CVPR, 2021.
[5] B. Duke, A. Ahmed, C. Wolf, P. Aarabi, and G.W. Taylor. SSTVOS: Sparse Spatiotemporal Transformers for Video Object Segmentation. In CVPR, 2021.
[6] T. Jaunet, G. Bono, R. Vuillemot, and C. Wolf. Sim2RealViz: Visualizing the Sim2Real Gap in Robot Ego-Pose Estimation. NeurIPS workshop on eXplainable AI approaches for debugging and diagnosis, 2021.
[7] T. Jaunet, C. Kervadec, G. Antipov, M. Baccouche, R. Vuillemot, and C. Wolf. VisQA: X-raying Vision and Language Reasoning in Transformers. IEEE Transactions on Visualization and Computer Graphics, 2021.

=== Submitted papers (as of 1.12.2021)
[8] A. Sadek, G. Bono, B. Chidlovskii, and C. Wolf. An in-depth experimental study of sensor usage and visual reasoning of robots navigating in real environments. Submitted to ICRA; arXiv pre-print arXiv:2111.14666, 2022.
[9] P. Marza, L. Matignon, O. Simonin, and C. Wolf. Teaching Agents how to Map: Spatial Reasoning for Multi-Object Navigation. Submitted to ICRA, 2022.
[10] S. Janny, F. Baradel, N. Neverova, M. Nadri, G. Mori, and C. Wolf. Filtered-CoPhy — Unsupervised and Counterfactual Learning of Physical Dynamics. Submitted to ICLR, 2022.
[11] B. Chidlovskii, A. Sadek, and C. Wolf. Universal Domain Adaptation in Ordinal Regression. Pre-print: arXiv:2107.06011, 2021.

Recent years have witnessed the rapid rise of machine learning, which has provided disruptive performance gains in several fields. Apart from undeniable advances in methodology, these gains are often attributed to massive amounts of training data and computing power, which have led to breakthroughs in speech recognition, computer vision and language processing. In this challenging project, we propose to extend these advances to the sequential decision making of agents for planning and control in complex 3D environments. In this context, Markov Decision Processes (MDPs) and Reinforcement Learning (RL) have traditionally provided a mathematically founded framework for control, where agents learn policies from past interactions. They currently suffer from low sample efficiency, often requiring billions of interactions, from difficulties in learning high-level reasoning from high-dimensional observations and reward signals, and from difficulties in generalizing from simulation to real environments.
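The MDP/RL framework referred to above can be illustrated with a minimal tabular example. The 5-state chain environment, rewards and hyper-parameters below are purely illustrative and unrelated to the project's actual tasks; they show the core ingredients (states, actions, rewards, a learned policy) in a few lines.

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain MDP: the agent starts
# in state 0 and receives reward 1 only upon reaching the goal state 4.
# Actions: 0 = left, 1 = right. Environment and hyper-parameters are
# purely illustrative.

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy_with_random_ties(q):
    return random.randrange(2) if q[0] == q[1] else (0 if q[0] > q[1] else 1)

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < EPS else greedy_with_random_ties(Q[s])
        s2, r, done = step(s, a)
        # One-step temporal-difference (TD) update.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

greedy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(GOAL)]
print(greedy)  # the learned greedy policy moves right, toward the goal
```

Even this tiny deterministic task needs hundreds of interactions; the sample-efficiency problem mentioned above arises because realistic tasks replace the 5-state table with high-dimensional observations.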

In this chair project, we will focus on methodological contributions (models and algorithms) for training virtual and real agents to learn to solve complex tasks autonomously. In particular, intelligent agents require high-level reasoning capabilities, situation awareness, and the capacity to robustly make the right decisions at the right moments. The required behavior policies are complex, since they involve high-dimensional input and state spaces, partially observed problems, and highly non-linear and entangled interdependencies. We argue that learning them crucially depends on the algorithm's capacity to learn compact, spatially structured and semantically meaningful memory representations that capture short-range and long-range regularities in the task and the environment. A second key requirement is the ability to learn these representations with a minimum of human intervention and annotation, as the manual design of complex representations is all but impossible. This requires the efficient use of raw data through the discovery of regularities by different means: supervised, unsupervised or self-supervised learning, reward signals, intrinsic motivation, etc.
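A compact memory of the kind described above is typically realized as a recurrent state update. The sketch below is a single GRU cell folding a stream of partial observations into one fixed-size vector; in a real agent the weights would be trained end-to-end, whereas here all sizes and the random weights are purely illustrative.

```python
import numpy as np

# Toy sketch of a compact learned memory for a partially observed task:
# a GRU cell aggregates observations o_1..o_T into one fixed-size state
# vector h_T. Weights are random here, purely for illustration.

rng = np.random.default_rng(0)
OBS_DIM, MEM_DIM = 8, 16

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random GRU parameters: update gate z, reset gate r, candidate n.
W = {g: rng.normal(0, 0.1, (MEM_DIM, OBS_DIM)) for g in "zrn"}
U = {g: rng.normal(0, 0.1, (MEM_DIM, MEM_DIM)) for g in "zrn"}

def gru_step(h, o):
    z = sigmoid(W["z"] @ o + U["z"] @ h)        # how much old memory to keep
    r = sigmoid(W["r"] @ o + U["r"] @ h)        # how much old memory to read
    n = np.tanh(W["n"] @ o + U["n"] @ (r * h))  # candidate new content
    return (1 - z) * n + z * h                  # gated interpolation

h = np.zeros(MEM_DIM)
for t in range(50):                             # 50 partial observations
    h = gru_step(h, rng.normal(size=OBS_DIM))

print(h.shape)  # memory stays fixed-size regardless of sequence length
```

The gating makes short- and long-range regularities representable in the same fixed-size state: the update gate decides per dimension whether to overwrite or preserve memory.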

The research project aims to address these issues along four axes: (i) adding structure and priors to Deep-RL algorithms, allowing them to discover semantic and spatial representations with metric and topological properties; (ii) learning situation-aware models that generalize to real environments using geometry and self-supervised learning; (iii) learning world models and high-level reasoning for mobile agents; and (iv) adding stability priors from control theory to Deep-RL.

The planned methodological advances of this project will be continuously evaluated on challenging applications, partly in simulated environments and partly in real environments with physical robots in large-scale scenarios.

This chair project will have an impact on the training of highly qualified personnel in AI in Lyon through exchanges of PhD students with international partners, and at the undergraduate level through new lectures on AI, new hardware infrastructure for teaching, and the consolidation of AI teaching.

Project coordinator

Monsieur Christian Wolf (UMR 5205 - LABORATOIRE D'INFORMATIQUE EN IMAGE ET SYSTEMES D'INFORMATION)

The author of this summary is the project coordinator, who is responsible for the content of this summary. The ANR declines any responsibility for its contents.

Partner

LIRIS UMR 5205 - LABORATOIRE D'INFORMATIQUE EN IMAGE ET SYSTEMES D'INFORMATION

ANR grant: 574,482 euros
Beginning and duration of the scientific project: May 2020 - 48 Months
