CONTINT - Contenus et Interactions

Robots for perceptual Interactions Dedicated to Daily Life Environment – RIDDLE

Submission summary

When robots leave industrial mass production to help with household chores, the requirements placed on robot platforms change. While industrial production calls for strength, precision, speed, and endurance, domestic service tasks require robust navigation in indoor environments, dexterous object manipulation, and intuitive communication (speech, gestures, body language) with users. From this perspective, many issues remain open, notably perception and system integration. The latter must not be underestimated: the performance of the system as a whole is determined by its weakest component, generally the robot's perception capacities, and especially its perception of the human user, which is a bottleneck for long-term interaction.

The RIDDLE project seeks to take a step forward in these directions. Its core research issue is to combine the underlying multiple and uncertain perceptual analyses related to (i) objects and space, regarding the robot's spatial intelligence, and (ii) multimodal communication, regarding the robot's transactional intelligence. We argue that the robot's transactional knowledge, as well as its visual and audio-based perception of humans during Human/Robot (H/R) interaction, should be improved by considering such contextual information.

The services targeted by our application concern mild memory assistance and search-and-carry services using H/R interaction, based on concepts learnt through multimodal communication with the human user: places, furniture, and household objects, i.e. their properties, storage locations, and temporal associations. The purpose of this cognitive robot is to learn environmental information with the user in the loop ("learning by interacting with a human user"), in terms of interactions with a set of household objects. This common semantic/contextual representation is the level of abstraction required during any H/R communication ("learning to interact with humans").

Making a robot as socially competent as possible in all areas of daily life is very challenging. We therefore focus on a subset of daily-life actions that answer specific needs related to objects, match the application requirements, and provide the interactional support the human user expects. These needs and the associated robotic services are (i) searching for and carrying objects and (ii) mild memory assistance about these objects, e.g. their current semantic locations. In other words, the robot will answer the human user's questions/riddles about objects with appropriate actions (speech, displacement, or manipulation).

Decision-making, components related to the robot's actions, and safety are not the core of the RIDDLE project. These topics are addressed in other projects involving all or part of the RIDDLE consortium; they are considered here only to validate the perception level. The final key issue will be to integrate the complete perceptual system onto the ROMEO humanoid robot developed by Aldebaran Robotics. We will rely on its fully embedded perception resources (vision, audio, radio frequency) to limit instrumentation of the environment. To reduce deployment time and facilitate household-object perception, small radio-frequency tags will be stuck onto the objects. Scenarios involving ROMEO, elderly volunteers, and graspable household objects from the user's daily life will measure the impact, in terms of robustness, of sharing multiple and uncertain perceptual analyses across the perception layers.
Such abilities could be generalized and expanded outside the elder-care field.

Project coordination

Frédéric LERASLE (Laboratoire d'Analyse et d'Architecture des Systèmes) – lerasle@laas.fr

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for its contents.

Partners

CHU Toulouse / Gérontechnologie – Centre Hospitalier Universitaire de Toulouse / Laboratoire de Gérontechnologie La Grave / Gérontopôle
Aldebaran – Aldebaran Robotics
Magellium
LAAS – Laboratoire d'Analyse et d'Architecture des Systèmes
UPS-IRIT – Université Paul Sabatier Toulouse 3 – Institut de Recherche en Informatique de Toulouse

ANR funding: 790,648 euros
Beginning and duration of the scientific project: August 2012 – 36 months
