CE33 - Interaction, Robotics – Artificial Intelligence

Enabling Learnability in Embodied Movement Interaction – ELEMENT

While so-called Natural User Interfaces are becoming widespread, expressive body movements remain little used in most human-computer applications.

With "adaptable" and "learnable" systems, we aim to facilitate the appropriation of movements and gestural interactions. We develop systems that can adapt to different motor skills and support complex, expressive interactions.

Towards interactive systems - with or for - movement learning

Our project addresses movement learning and interactive gestural systems. We propose to address three main research questions:

1) How to design body movements whose components are easy to learn, yet allow for complex and rich interaction techniques that go beyond simple commands?

2) What computational movement modelling can account for sensorimotor adaptation and/or learning in embodied interaction?

3) How to optimize model-driven feedback and guidance to facilitate skill acquisition in embodied interaction?

We consider complementary use cases in human-computer interaction, from assistive technologies and rehabilitation to musical interfaces and systems facilitating dance movement learning. The long-term aim is to foster innovation in multimodal interaction, from non-verbal communication to interaction with digital media and content in creative applications.

The project is based on rapid cycles involving experimental studies, computational modelling and tool development.

First, user-centred and participatory design methodologies are used to gain knowledge from expert practitioners in music and dance.

Second, we develop computational models by leveraging our expertise in User-Centred Machine Learning, an emerging community in HCI (see for example the CHI 2016 workshop on Human-Centred Machine Learning). This approach differs from conventional machine learning, which typically relies on large datasets and focuses on improving algorithms. Instead, User-Centred Machine Learning takes a broader perspective on real-world use cases, with special attention to the interaction with users and designers.
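To make this interaction loop concrete, here is a minimal Python sketch of an interactive machine learning cycle: a user provides a handful of labelled gesture examples, the model is retrained immediately, and confirmations or corrections become new training data. This is an illustrative assumption, not the project's actual implementation; the feature values and class names are placeholders.

```python
# Minimal interactive machine learning loop (illustrative sketch only):
# the user records a few labelled gesture examples, the model is retrained
# on the fly, and corrections are fed back as new training data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train, y_train = [], []                 # user-provided examples (features, labels)
model = KNeighborsClassifier(n_neighbors=1)

def add_example(features, label):
    """Record one user demonstration and retrain immediately."""
    X_train.append(features)
    y_train.append(label)
    model.fit(np.array(X_train), np.array(y_train))

def predict(features):
    """Classify a new gesture; the user can then confirm or correct the result."""
    return model.predict(np.array([features]))[0]

# Example session: two demonstrations per (hypothetical) class, then a test.
add_example([0.1, 0.9], "circle")
add_example([0.2, 0.8], "circle")
add_example([0.9, 0.1], "swipe")
add_example([0.8, 0.2], "swipe")

guess = predict([0.15, 0.85])             # model suggests "circle"
add_example([0.15, 0.85], "circle")       # user confirms; the example is added back
```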

Third, tool development is grounded in our existing software platforms. In particular, we will develop prototypes using interactive programming environments (e.g. Max/MSP) as well as web applications.

Main results of the first 18 months

A state-of-the-art review was carried out collaboratively by all partners, covering several domains and approaches, from movement learning and gesture design to computational models.
Concerning experimental studies, several are being conducted in parallel. First, a field study based on semi-structured interviews with 12 dance professionals examined their transition from classical dance practice to other practices.
Another study, conducted at LRI, concerned how dancers learn movement from video. A video annotation and segmentation prototype (MoveOn) has been developed to help dancers acquire the complex movements of a choreography.
An experimental study was conducted by IRCAM in collaboration with LRI on movement variability when learning upper-body movements (mid-air gestures), and on the effects of movement sonification. This study, conducted over 3 days with 24 participants, collected a total of 2,160 gesture recordings (on average 90 per participant), and allowed us to build a database that is currently being used for further investigations of movement learning metrics.
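One way to exploit such a database for movement learning metrics is sketched below: the dynamic time warping (DTW) distance between each repetition of a gesture and a reference recording, whose decrease across trials or days would indicate that execution is stabilising. This is one plausible metric given here for illustration, not necessarily the metric used in the study.

```python
# Illustrative learning metric: DTW distance between a repetition of a gesture
# and a reference recording. Decreasing distance across trials or days would
# suggest that execution is stabilising. Sketch only; the study's metrics may differ.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two (T, D) movement trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy example: a reference trajectory and a slightly time-shifted repetition.
t = np.linspace(0, 1, 100)
reference = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
repetition = np.stack([np.sin(2 * np.pi * (t + 0.05)),
                       np.cos(2 * np.pi * (t + 0.05))], axis=1)
print(dtw_distance(reference, repetition))
```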

Various models are currently being tested and should provide new directions for the use of adaptive models in interactive gestural systems. For example, LRI has started to explore methods for transferring pre-trained deep neural network models and adapting them to personal and expressive gesture vocabularies.
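As a concrete (hypothetical) illustration of this direction, the sketch below freezes a pre-trained movement encoder and retrains only a small classification head on the few examples that make up a user's personal gesture vocabulary. The architecture, dimensions and data are assumptions for illustration, not LRI's actual method.

```python
# Sketch of transfer learning for personal gesture vocabularies: a pre-trained
# backbone is frozen and only a small head is retrained on the few examples a
# user provides. Architecture and dimensions are hypothetical.
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stands in for a pre-trained movement encoder
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),
)
for p in backbone.parameters():      # freeze the pre-trained weights
    p.requires_grad = False

n_personal_classes = 5               # size of the user's personal vocabulary
head = nn.Linear(32, n_personal_classes)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A handful of user examples (features, labels); in practice these would come
# from recorded gestures rather than random tensors.
x = torch.randn(20, 64)
y = torch.randint(0, n_personal_classes, (20,))

for epoch in range(50):              # few-shot adaptation of the head only
    optimizer.zero_grad()
    loss = criterion(head(backbone(x)), y)
    loss.backward()
    optimizer.step()
```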

New tools for interactive machine learning are currently being developed:
- A new web programming framework for interactive machine learning, called Marcelle, has been developed by LIMSI and LRI. Initially designed for pedagogical purposes, the framework is starting to be used in research within the project, for example on transfer learning.
- A completely revised version of the CoMo web application, which allows users to associate sound feedback with movements using interactive machine learning (see the sketch below).
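As an illustration of the kind of mapping-by-demonstration such tools support (a hypothetical sketch, not CoMo's actual implementation), a regression model can be trained on a few pairs of movement features and sound-synthesis parameters, then used to map new movements to sound in real time:

```python
# Hypothetical sketch of movement-to-sound mapping by demonstration: a user
# records a few (movement features -> synthesis parameters) pairs, a regressor
# is fitted, and new movement frames are mapped to sound parameters.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Demonstrations: 3-D accelerometer-like features paired with two sound
# parameters (e.g. filter cutoff in Hz and playback rate); values are made up.
movement_features = np.array([[0.0, 0.1, 0.9],
                              [0.1, 0.2, 0.8],
                              [0.9, 0.8, 0.1],
                              [0.8, 0.9, 0.2]])
sound_parameters = np.array([[200.0, 0.5],
                             [250.0, 0.6],
                             [2000.0, 1.5],
                             [1800.0, 1.4]])

mapper = KNeighborsRegressor(n_neighbors=2).fit(movement_features, sound_parameters)

# At run time, each incoming movement frame is mapped to sound parameters and
# would be sent to a synthesis engine (e.g. via OSC or Web Audio).
new_frame = np.array([[0.05, 0.15, 0.85]])
cutoff, rate = mapper.predict(new_frame)[0]
print(cutoff, rate)
```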

Concerning dissemination, our first public workshop/colloquium, Movement Design and Learning, was successfully held at IRCAM in 2019, and the talks are available online (https://medias.ircam.fr/x6984c8). We plan to continue this initiative with further public workshops in the coming years.

Caramiaux, B., Françoise, J., Liu, A. W., Sanchez, T., & Bevilacqua, F. (2020). Machine Learning Approaches For Motor Learning: A Short Review. Frontiers in Computer Science.
hal.archives-ouvertes.fr/hal-02558779

Schwarz, D., Liu, W., & Bevilacqua, F. (2020). A Survey on the Use of 2D Touch Interfaces for Musical Expression. In Proceedings of NIME’20.
hal.archives-ouvertes.fr/hal-02557522

Rivière, J. P., Alaoui, S. F., Caramiaux, B., & Mackay, W. E. (2019). Capturing Movement Decomposition to Support Learning and Teaching in Contemporary Dance. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-22. DOI: 10.1145/3359188.
hal.archives-ouvertes.fr/hal-02378487

Lemouton, S., Bevilacqua, F., Borghesi, R., Haapamäki, S., & Fléty, E. (2019, October). Following Orchestra Conductors: the IDEA Open Movement Dataset. In Proceedings of the 6th International Conference on Movement and Computing (pp. 1-6). DOI: 10.1145/3347122.3359599.
hal.archives-ouvertes.fr/hal-02469891v1

Ley-Flores, J., Bevilacqua, F., Bianchi-Berthouze, N., & Tajadura-Jiménez, A. (2019, September). Altering body perception and emotion in physically inactive people through movement sonification. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 1-7). IEEE. DOI: 10.1109/ACII.2019.8925432.
hal.archives-ouvertes.fr/hal-02558385v1

While so-called Natural User Interfaces are becoming widespread in consumer devices, the use of expressive body movements remains limited in most HCI applications. Since memorizing and executing gestures remain challenging for users, most current approaches to movement-based interaction consider “intuitive” interfaces and trivial gesture vocabularies. While these facilitate adoption, they also limit users’ potential for more complex, expressive and truly embodied interactions.

We propose to shift the focus from intuitiveness/naturalness towards learnability: new interaction paradigms might require users to develop specific sensorimotor skills compatible with – and transferable between – digital interfaces. With learnable embodied interactions, novice users should be able to approach a new system with a difficulty adapted to their expertise; the system should then carefully adapt to their improving motor skills, and eventually enable complex, expressive and engaging interactions.

Our project addresses both methodological and modelling issues. First, we need to elaborate methods to design learnable movement vocabularies, whose units are easy to learn and can be composed to create richer and more expressive movement phrases. Since movement vocabularies proposed by novice users are often idiosyncratic with limited expressive power, we propose to capitalize on the knowledge and experience of movement experts such as dancers and musicians. Second, we need to conceive computational models able to analyze users’ movements in real time to provide various multimodal feedback and guidance mechanisms (e.g. visual and auditory). Importantly, the movement models must take into account the user’s expertise and learning development. We argue that computational movement models able to adapt to user-specific learning pathways are key to facilitating the acquisition of motor skills.

We thus propose to address three main research questions. 1) How to design body movement as an input modality, whose components are easy to learn, but that allow for complex/rich interaction techniques that go beyond simple commands? 2) What computational movement modelling can account for sensorimotor adaptation and/or learning in embodied interaction? 3) How to optimize model-driven feedback and guidance to facilitate skill acquisition in embodied interaction?

We will consider complementary use cases such as computer-mediated communication, assistive technologies and musical interfaces. The long-term aim is to foster innovation in multimodal interaction, from non-verbal communication to interaction with digital media/content in creative applications.

Project coordination

Frederic BEVILACQUA (INSTITUT DE RECHERCHE ET COORDINATION ACOUSTIQUE MUSIQUE)

The project coordinator is the author of this summary and is responsible for its content. The ANR declines any responsibility for it.

Partner

IRCAM INSTITUT DE RECHERCHE ET COORDINATION ACOUSTIQUE MUSIQUE
LRI Laboratoire de Recherche en Informatique
CNRS - LIMSI Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur

ANR grant: 583,453 euros
Beginning and duration of the scientific project: - 36 months
