AI-Assisted Exchange and Development Therapy for Autism – TEDIA
Autism spectrum disorders (ASD) are a diverse group of neurodevelopmental conditions, characterized by difficulties in social interaction and communication, that affect about 1 in 100 children worldwide. Exchange and development therapy (EDT) was developed to rehabilitate these skills in young children. During EDT sessions, the caregiver's main goal is to elicit synchronizations (typically eye contact) with the patient, as these synchronizations are known to improve social skills.
The TEDIA project aims to retrospectively analyze videos recorded during EDT sessions, using Artificial Intelligence (AI), in order to assist caregivers. The main objective is to detect early warnings of synchronization between patients and caregivers a few seconds in advance. We hypothesize that such detection can guide caregivers toward eliciting more synchronizations by acting at the right time. A secondary objective is to discover new or more precise early indicators of a patient's future social skill progression, beyond the intensity or frequency of synchronizations. For this purpose, we will rely on standardized behavioral evaluations performed at regular intervals in EDT patients. We hypothesize that these indicators will enable more precise guidance for caregivers. Automatic video analysis will rely on a combined characterization of the videos' visual and audio content. We will explore high-level representations of visual content (pose description, gaze direction, etc.) and audio content (acoustic parameters, prosody, semantics, etc.), as well as lower-level representations (direct analysis of pixels and audio spectrograms).
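To illustrate the early-warning idea in its simplest form (this is a toy sketch with entirely hypothetical features and synthetic data, not the project's actual pipeline), a sliding window of per-frame visual and audio descriptors can be fed to a classifier that estimates whether a synchronization will occur within the next few seconds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features: [gaze_alignment, pose_proximity, voice_pitch]
T = 600                      # frames in a toy session
feats = rng.normal(size=(T, 3))

# Toy "synchronization" events: more likely when gaze alignment was high recently
sync = (np.convolve(feats[:, 0], np.ones(5) / 5, mode="same") > 0.6).astype(float)

HORIZON = 25                 # predict events up to ~1 s ahead (25 frames at 25 fps)
WINDOW = 10                  # frames of past context fed to the model

def make_dataset(feats, sync, window, horizon):
    """Flatten a context window of features; label = event within the horizon."""
    X, y = [], []
    for t in range(window, len(feats) - horizon):
        X.append(feats[t - window:t].ravel())
        y.append(float(sync[t:t + horizon].max()))
    return np.array(X), np.array(y)

X, y = make_dataset(feats, sync, WINDOW, HORIZON)

# Minimal logistic regression trained by gradient descent
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"toy early-warning accuracy: {acc:.2f}")
```

In practice, the high- and low-level representations mentioned above would replace the random features, and far more expressive sequence models would replace the linear classifier; the sketch only shows how "a few seconds in advance" translates into a prediction horizon over frame-level labels.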
EDT sessions represent only a portion of all video-recorded sessions involving children with ASD interacting with various professionals. Given the visual and audio similarities across these videos, we hypothesize that EDT-specific AI models could benefit from pre-training on this broader set of videos. In TEDIA, we propose to first train generic AI models on this expanded set, then fine-tune them for EDT-specific clinical objectives, in order to enhance their performance. These generic AI models will rely on self-supervised learning (SSL), extending recent SSL research on video representation with an additional dimension: the representation of time series of videos.
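The pre-train-then-fine-tune strategy can be sketched generically (again a toy, with synthetic vectors standing in for video-derived features and a reconstruction pretext task standing in for modern SSL objectives): an encoder is first trained on abundant unlabeled data, then reused as a frozen feature extractor for a small labeled downstream task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for video-derived feature vectors: the informative signal
# lives in a 4-dimensional latent space shared across all recordings.
DIM, LATENT = 32, 4
basis = rng.normal(size=(LATENT, DIM)) / np.sqrt(DIM)
z_unlab = rng.normal(size=(2000, LATENT))            # many unlabeled samples
X_unlab = z_unlab @ basis + 0.05 * rng.normal(size=(2000, DIM))

# --- Stage 1: self-supervised pre-training (reconstruction pretext task) ---
enc = rng.normal(scale=0.1, size=(DIM, LATENT))
dec = rng.normal(scale=0.1, size=(LATENT, DIM))
for _ in range(1500):
    Z = X_unlab @ enc
    R = Z @ dec - X_unlab                             # reconstruction residual
    dec -= 0.1 * Z.T @ R / len(X_unlab)
    enc -= 0.1 * X_unlab.T @ (R @ dec.T) / len(X_unlab)

recon_err = np.mean((X_unlab @ enc @ dec - X_unlab) ** 2)
baseline = np.mean(X_unlab ** 2)

# --- Stage 2: fine-tuning on a small labeled set (toy downstream task) ---
z_lab = rng.normal(size=(60, LATENT))
X_lab = z_lab @ basis + 0.05 * rng.normal(size=(60, DIM))
y = (z_lab[:, 0] > 0).astype(float)                   # toy binary outcome

Z_lab = X_lab @ enc                                   # frozen pre-trained encoder
w = np.zeros(LATENT)
b = 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(Z_lab @ w + b)))
    g = p - y
    w -= 0.5 * Z_lab.T @ g / len(y)
    b -= 0.5 * g.mean()

acc = (((1.0 / (1.0 + np.exp(-(Z_lab @ w + b)))) > 0.5) == (y > 0.5)).mean()
print(f"reconstruction error: {recon_err:.4f} (baseline {baseline:.4f})")
print(f"toy fine-tuned accuracy: {acc:.2f}")
```

The design point is that the encoder never sees a label during stage 1, yet the representation it learns from the broader unlabeled pool makes the small-sample stage 2 problem much easier, which is the rationale for pre-training on all ASD session videos before specializing to EDT.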
To reach these objectives, TEDIA involves 1) collecting a large dataset of EDT videos, along with metadata (synchronization annotations, standardized evaluations, etc.), as well as other videos involving ASD patients, and 2) developing self-supervised, explainable, and predictive AI algorithms.
The project will result in the publication of updated recommendations on best practices for EDT, to further improve social interaction and communication skills in a growing number of children with ASD. It will also lead to the publication of open-source AI models for analyzing monitoring videos of ASD patients, including self-supervised foundation models, to foster research in this direction.
TEDIA is a collaborative effort between LaTIM, a specialist in AI for medical image and video analysis, and iBraiN and CHU Tours, experts in ASD research and care and pioneers of EDT. The LaTIM laboratory has significant expertise in explainable and predictive AI, as well as in SSL (for videos and time series of images). The iBraiN laboratory has complementary expertise in speech analysis, especially with autistic patients.
Project coordination
Gwenolé Quellec (Université de Brest)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its contents.
Partnership
LaTIM Université de Brest
iBraiN Université de Tours
DRI Tours Centre Hospitalier Universitaire de Tours
ANR grant: 621,208 euros
Beginning and duration of the scientific project: January 2025 - 48 months