CE28 - Cognition, education, lifelong learning

Gestures for the pedagogy of intonation – Gepeto

Submission summary

This project proposes pedagogical innovations and experiments on the use of gesture in learning intonation control, through human-computer interfaces. Mastering vocal source modulation through laryngeal control – “intonation” in a broad sense, encompassing fundamental frequency, voice quality, and melodic rhythm – is fundamental to vocal communication. Phonatory control is generally not an issue during the acquisition of the native language (L1), since it remains largely unconscious. In contrast, mastering phonatory control is a long, voluntary, and often difficult process when acquiring a foreign language (L2), or when learning to control a vocal prosthesis after laryngeal surgery. It has been demonstrated that using gestures that mimic speech intonation improves the learning of intonation control, by encoding information through different modalities (auditory, visual, and kinaesthetic).

This project aims to take multimodal integration a step further by exploiting gestural control of voice synthesis. The latter, called chironomy, is a novel research paradigm in human-computer interaction that generates intonation trajectories in real time from manual gestures. The produced intonation contour is either transmitted to a voice synthesiser, so as to control a vocal instrument with gesture, or fed to an excitation source placed in the user’s mouth. This excitation combines naturally with the user’s articulation to produce an integrated semi-synthetic voice: this is vocal substitution. The project thus investigates the use of such systems for learning intonation control through two complementary approaches: 1) learning the natural control of intonation contours with the help of chironomy, for foreign language acquisition; and 2) learning the chironomic control of intonation contours with the help of native language knowledge, for vocal substitution.
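To make the chironomic paradigm concrete, here is a minimal, hypothetical sketch of the kind of gesture-to-pitch mapping such an interface might use: a normalized vertical stylus or tablet position is mapped to fundamental frequency (f0) on a logarithmic scale, since pitch perception is roughly log-linear. The function names, frequency range, and mapping are illustrative assumptions, not the project's actual implementation.

```python
import math

def gesture_to_f0(y, f0_min=100.0, f0_max=400.0):
    """Map a normalized vertical gesture position y in [0, 1] to f0 in Hz,
    interpolating in the log-frequency (musical) domain."""
    y = min(max(y, 0.0), 1.0)  # clamp to the tablet surface
    return f0_min * (f0_max / f0_min) ** y

def gesture_trajectory_to_contour(ys):
    """Convert a sampled gesture trajectory into an f0 contour (Hz),
    ready to drive a voice synthesiser or an excitation source."""
    return [gesture_to_f0(y) for y in ys]

# A rising hand gesture yields a rising intonation contour:
contour = gesture_trajectory_to_contour([0.0, 0.25, 0.5, 0.75, 1.0])
# e.g. contour[0] is 100 Hz, contour[-1] is 400 Hz
```

In a real-time system the trajectory samples would stream from the input device at a fixed rate, and the contour would be smoothed before synthesis; the logarithmic mapping keeps equal hand displacements perceptually equal in pitch.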

This project is divided into four scientific tasks. The first concerns the development of performative synthesis tools for intonation learning. Existing chironomic systems use voice synthesis and have proven operational for singing, and for controlling Mandarin tones in a pilot study. These systems will be adapted to the application environments of the project. In addition, a voice excitation source suitable for injection will be developed for vocal substitution. The second task addresses the methodological aspects of the project: it aims to identify the target intonation patterns of the languages considered in the project (French, English, and Mandarin), along with their associated gestures. A multimodal corpus will be recorded, and a preliminary evaluation of chironomic control will be undertaken. The third task applies these methodological results to the evaluation of chironomy in different learning situations with subjects learning foreign languages. An experimental protocol will be designed to observe whether chironomy is more effective in a given learning situation than spatial hand gestures or auditory feedback alone. This experiment will be conducted in classroom conditions on the learning, as foreign languages, of lexical tones in Mandarin, the intonation of French, and the intonation of English. The fourth task will demonstrate that chironomic control of intonation can serve as a substitute for one’s natural voice. This unique paradigm combines the gestural control of an excitation source with the natural articulation of the user, and the coordination of the two controls will first be investigated. A large-scale experiment will then be deployed to validate the feasibility of such control for vocal substitution.

Overall, this project provides the first proof of concept for the use of chironomy in intonation learning, with potentially high short-term social and cultural impact, both in computer-assisted education and in clinical applications.

Project coordination

Christophe D'Alessandro (Institut Jean le Rond d'Alembert)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its contents.

Partner

LPP Laboratoire de Phonétique et Phonologie
GIPSA-lab Grenoble Images Parole Signal Automatique
d'Alembert Institut Jean le Rond d'Alembert

ANR grant: 433,942 euros
Beginning and duration of the scientific project: October 2019 - 42 months
