SLEEP monitorIng in premature NEwborns by multimodal data fusion and Self-Supervised approaches – SLEEPINESS
Preterm birth is a birth that occurs before 37 weeks of gestation. Because of the immaturity of all their physiological functions, premature babies are exposed to high morbidity, especially in terms of neurodevelopment. Their health status is evaluated through the continuous monitoring of several vital signs (cardiac activity, breathing…). Sleep is important for neonatal brain development: sleep alterations or deprivation have been associated with impaired neurocognitive function and an increased risk of cardiometabolic diseases and obesity. Until now, the assessment of sleep alterations has only been possible through polysomnography or through expert observation of behavioral states (body activity, eye state, cardio-respiratory regularity, vocalizations…). These methods are difficult to apply, are performed over a limited period of time and, in the case of polysomnography, require a standardized environment. Moreover, the analyses remain subjective and time-consuming. With automated signal processing and artificial intelligence, continuous sleep monitoring can become accessible non-invasively; new methods to automatically assess neonatal sleep and wake states are therefore needed. This will allow neonatal sleep quality to be taken into account when optimizing the environment and treatment of newborns, with potential short- and long-term benefits.
In SLEEPINESS, we propose to develop a bedside sleep monitoring tool giving an accurate, detailed, structured, and systemic assessment of sleep organization. For this purpose, two types of data will be investigated: i) electrophysiological signals (electrocardiogram and respiration) and ii) audio-video modalities, where audio and video data will provide information on baby vocalizations and motion, respectively. It is worth noting that this technique does not require additional sensors on the newborns, since i) ECG and respiration are among the signals already acquired continuously during the monitoring of premature newborns, and ii) audio and video acquisitions are contactless.
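As a concrete illustration of how four synchronized modalities of this kind might be handled upstream of any analysis, the Python sketch below cuts a recording into common scoring epochs. Everything in it (the container, the 30 s epoch length, the sampling rates) is an assumption made for illustration, not a description of the project's actual pipeline.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MultimodalRecord:
    """One bedside recording; field names and rates are illustrative assumptions."""
    ecg: np.ndarray           # ECG waveform, e.g. 500 Hz
    resp: np.ndarray          # respiration waveform, e.g. 50 Hz
    audio: np.ndarray         # microphone signal, e.g. 16 kHz
    video_motion: np.ndarray  # per-frame motion index, e.g. 25 samples/s
    fs: dict                  # modality name -> sampling rate in Hz

def epoch_slices(record: MultimodalRecord, epoch_s: float = 30.0):
    """Cut every modality into synchronized epochs. The 30 s length is a
    common sleep-scoring convention, assumed rather than taken from the project."""
    n_epochs = int(len(record.ecg) / (record.fs["ecg"] * epoch_s))
    for k in range(n_epochs):
        yield {
            name: getattr(record, name)[
                int(k * epoch_s * record.fs[name]):
                int((k + 1) * epoch_s * record.fs[name])
            ]
            for name in ("ecg", "resp", "audio", "video_motion")
        }
```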
This strategy, as close as possible to the clinical practice applied during manual annotations, should allow us to obtain higher classification performance than existing methods. Indeed, this four-modality approach has never been implemented in the literature. One reason is that it requires a specific database, which is difficult to acquire, as well as a large number of annotations. In SLEEPINESS, we will exploit two databases that have already been acquired and partially annotated.
A set of features will have to be extracted from motion, vocalizations, and ECG and respiration variabilities. For this, the specificity of the environment must be taken into account, as the data were recorded in a clinical context. We tackled this issue in previous work and, on that occasion, observed several limitations that will have to be addressed. For motion processing, specific attention will be paid to non-analyzable periods (e.g., when an adult is present in the field of the camera, or when the newborn is not in the bed). For audio processing, baby cry extraction is also a challenge in a clinical environment, since other types of sounds (alarms, adult voices…) can be captured. In addition, vocalizations other than crying (e.g., cooing), which are sometimes difficult to identify, will be relevant in this context of sleep analysis.
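To make the kind of features involved more tangible, here is a minimal Python sketch of two of them: standard heart-rate-variability statistics computed from the ECG, and a simple motion-energy index from video. The naive R-peak detector and the frame-differencing index are illustrative stand-ins, not the project's methods, and they deliberately ignore the non-analyzable-period handling discussed above.

```python
import numpy as np
from scipy.signal import find_peaks

def hrv_features(ecg: np.ndarray, fs: float) -> dict:
    """Classical HRV features over one epoch. The R-peak search below is a
    deliberately naive amplitude/spacing heuristic, for illustration only."""
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                          distance=int(0.25 * fs))  # peaks at least 250 ms apart
    rr = np.diff(peaks) / fs  # R-R intervals in seconds
    if len(rr) < 2:
        return {"mean_rr": np.nan, "sdnn": np.nan, "rmssd": np.nan}
    return {
        "mean_rr": float(np.mean(rr)),
        "sdnn": float(np.std(rr)),                        # overall variability
        "rmssd": float(np.sqrt(np.mean(np.diff(rr) ** 2)))  # beat-to-beat variability
    }

def motion_energy(frames: np.ndarray) -> float:
    """Mean absolute inter-frame difference over a (T, H, W) grayscale clip;
    a simple stand-in for the project's video motion analysis."""
    return float(np.mean(np.abs(np.diff(frames.astype(np.float32), axis=0))))
```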
Data fusion will be performed using a self-supervised approach. It will combine a supervised general model with a patient-specific self-learning system for automated sleep-wake scoring based on massive data and artificial intelligence. The estimated sleep states will then be exported to a sleep platform to be used for better management of the preterm infant's health status. An analysis of sleep maturation in relation to clinical events recorded in the clinical database will be performed to determine their impact. Finally, the sleep platform will be designed with a user-centered design approach.
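One plausible reading of "supervised general model plus patient-specific self-learning" is pseudo-label self-training, sketched below in Python with a generic scikit-learn classifier as a placeholder; the project's actual fusion and adaptation scheme may well differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt_to_patient(general_model, X_patient: np.ndarray,
                     confidence: float = 0.95):
    """Pseudo-label self-training: score the new patient's unlabeled epochs
    with the general model, keep only confident predictions, and fit a
    patient-specific model on them. One possible instantiation of the
    'general model + patient-specific self-learning' idea, not the
    project's confirmed method."""
    proba = general_model.predict_proba(X_patient)  # assumes a probabilistic model
    keep = proba.max(axis=1) >= confidence          # confident epochs only
    pseudo_y = proba.argmax(axis=1)[keep]
    if len(np.unique(pseudo_y)) > 1:                # need more than one class to fit
        patient_model = LogisticRegression(max_iter=1000)
        patient_model.fit(X_patient[keep], pseudo_y)
        return patient_model
    return general_model                            # fall back if degenerate
```

The confidence threshold trades adaptation against label noise: a high threshold keeps only the epochs the general model is already sure about, which is the usual safeguard in self-training schemes.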
Project coordination
Fabienne POREE (LABORATOIRE TRAITEMENT DU SIGNAL ET DE L'IMAGE)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partners
CHU Lille (Centre Hospitalier Universitaire de Lille)
DRI (Direction de la Recherche et de l'Innovation)
LTSI (LABORATOIRE TRAITEMENT DU SIGNAL ET DE L'IMAGE)
ANR grant: 565,343 euros
Beginning and duration of the scientific project:
December 2023
- 36 months