CONTINT - Contenus numériques et interactions (Digital Content and Interactions)

JEMImE

Serious Game for Children with autistic spectrum disorders based on Multimodal Emotion Imitation

Objectives

Interpersonal communication relies on complex processing of multimodal emotional cues such as facial expressions and tone of voice. Unfortunately, children with autism spectrum disorders (ASD) have difficulty understanding and producing these socio-emotional signals. Various solutions have been proposed to help children develop communication skills, but most of them focus on emotion comprehension: children learn to decode facial information in order to recognize emotions such as joy, sadness and disgust.

However, only a few studies deal with the production of emotions by children with ASD. This is mainly due to the lack of efficient technological tools for analyzing children's behavior. Recent advances in automatic emotion recognition offer new opportunities to assess the quality of the emotions children produce.

JEMImE aims to design new emotion recognition algorithms in order to extend the features currently used in the JEStiMulE serious game (a multisensory serious game for training emotional comprehension and social cognition in individuals with ASD). The goal of the serious game developed in the JEMImE project is to help children with ASD learn to mimic facial and vocal emotions and to express the emotion appropriate to a given context. Such a tool will be very useful both for children, who learn to produce emotions congruent with what they feel, and for practitioners, who can quantify progress.

Scientific and technological breakthroughs in emotion characterization may significantly improve how children's natural emotional productions are understood and evaluated. In particular, this requires defining metrics that quantify how appropriate a produced behavior is. From a technological point of view, the algorithms must be able to analyze the spontaneous behavior of children in a realistic environment. This means designing robust, real-time and multimodal methods, pushing forward the current state of the art in unconstrained emotion recognition.
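As a purely illustrative example of such a metric (the recognizer, its probability outputs and the emotion labels below are assumptions, not components of the project), one simple score is the average probability a recognizer assigns to the target emotion over the frames of a clip:

```python
# Purely illustrative metric sketch: score a production as the mean
# probability that a recognizer assigns to the target emotion over the
# frames of a clip. The recognizer and its outputs are assumptions here,
# not the project's actual components.
from typing import Dict, List

def production_quality(frame_probs: List[Dict[str, float]], target: str) -> float:
    """Return a score in [0, 1]: the mean probability of the target
    emotion across frames. frame_probs holds, for each frame, a mapping
    from emotion label to probability; target is the emotion the child
    was asked to produce."""
    if not frame_probs:
        return 0.0
    return sum(p.get(target, 0.0) for p in frame_probs) / len(frame_probs)

# Example: a short clip in which "joy" dominates most frames.
clip = [
    {"joy": 0.7, "neutral": 0.2, "sadness": 0.1},
    {"joy": 0.8, "neutral": 0.1, "sadness": 0.1},
    {"joy": 0.4, "neutral": 0.5, "sadness": 0.1},
]
print(production_quality(clip, "joy"))  # -> about 0.63
```

A real metric would also have to weight vocal cues and temporal dynamics; this frame-averaged score is only the simplest conceivable instance.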

The design of these methods is based on annotated data, i.e. videos of children producing emotions. For each video, one or more human annotators have rated the quality of the emotional production. Machine learning algorithms then use these data to learn the relationship between the visual and audio cues in a video and the quality of the child's emotional production. The algorithms are then able to analyze children's emotional productions autonomously.
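The following is a minimal, hypothetical sketch of that supervised-learning step, using scikit-learn as a stand-in and random placeholder data; the project's actual features and models are not described in this summary.

```python
# Hypothetical sketch of the supervised-learning step, with scikit-learn
# as a stand-in and random placeholder data. The real features (facial
# landmarks, audio descriptors, ...) and models are not given in this
# summary.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder dataset: 200 annotated clips, each described by a
# 64-dimensional audiovisual feature vector and a human quality rating.
X = rng.normal(size=(200, 64))       # audiovisual features per clip
y = rng.uniform(0.0, 1.0, size=200)  # annotator rating of the production

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the mapping from audiovisual cues to the annotated quality score.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The trained model can then score new, unseen productions on its own.
print(model.predict(X_test)[:5])
```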

Collection of multimodal data (2D, 3D and audio) of children producing facial and vocal emotions.

Completion of a first prototype in which the emotion recognition algorithm analyzes a player mimicking a virtual character.
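A simplified, assumed sketch of what such an imitation round could look like (the `camera` and `recognizer` objects are illustrative stand-ins, not JEMImE's actual components):

```python
# Hedged sketch of an imitation round in such a prototype. `camera` and
# `recognizer` are assumed interfaces (not the actual JEMImE components):
# camera.read() returns a frame, recognizer.predict(frame) returns a
# mapping from emotion label to probability.
import time

def run_imitation_round(camera, recognizer, target_emotion: str,
                        threshold: float = 0.6, timeout_s: float = 10.0) -> bool:
    """Return True if the player's imitation of the virtual character
    reaches the acceptance threshold before the timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        frame = camera.read()              # grab the player's face
        probs = recognizer.predict(frame)  # emotion -> probability
        if probs.get(target_emotion, 0.0) >= threshold:
            return True                    # imitation accepted
    return False                           # timed out: give feedback, retry
```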

The next objective is to finalize the multimodal emotion analysis system and to integrate it into a proof-of-concept serious game.

Publications

[DBD 15] A. Dapogny, K. Bailly, S. Dubuisson. Dynamic Facial Expression Recognition by Joint Static and Multi-Time Gap Transition Classification. International Conference on Automatic Face and Gesture Recognition (FG 2015).

[NBC 15] J. Nicolle, K. Bailly, M. Chetouani. Facial Action Unit Intensity Prediction via Hard Multi-Task Metric Learning for Kernel Regression. Facial Expression Recognition and Analysis Challenge (FERA 2015).

[ZBB 14] L. Zamuner, K. Bailly, E. Bigorgne. Pose-Adaptive Constrained Local Model for Accurate Head Pose Tracking. International Conference on Pattern Recognition (ICPR 2014).

[GG 15a] C. Grossard, O. Grynszpan. Entraînements des compétences assistés par les technologies numériques dans l'autisme : une revue de la question [Digital technology-assisted skills training in autism: a review]. Enfance, Presses Universitaires de France (2015).

To achieve this goal, the project brings together major academic and industrial players with complementary skills in automatic emotion recognition, serious game design and treatment of children with autism.

Project coordination

Kévin Bailly (Institut des Systèmes Intelligents et de Robotique)

The project coordinator is the author of this summary and is solely responsible for its content. The ANR accepts no responsibility for it.

Partners

ISIR Institut des Systèmes Intelligents et de Robotique
Idées-3com
LIRIS Laboratoire d'InfoRmatique en Image et Systèmes d'information
CoBTek Cognition Behaviour Technology
GENIOUS SYSTEMES

ANR funding: 614,463 euros
Beginning and duration of the scientific project: September 2013 - 42 months
