CE23 - Artificial Intelligence

Translating sign language with computer vision – CorVis

Submission summary

The CorVis project aims to develop cutting-edge sign language translation techniques, building on recent advances in artificial intelligence, in particular in computer vision and machine translation. Sign language analysis from video data remains understudied in computer vision despite its potential societal impact, notably improving communication between deaf and hearing people. This project will therefore take a step towards translating the video signal into spoken language by learning data-driven representations suited to the task through deep learning. The work will follow two inter-related directions: (1) the visual input representation, i.e., how to embed a continuous video sequence so as to capture the signing content, and (2) the model design for producing text from video representations, i.e., how to learn a mapping between sign and spoken languages.
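
As a concrete illustration of how these two directions could fit together, the sketch below shows one plausible coupling in PyTorch: per-frame visual features are encoded into a video representation, and a transformer decoder generates spoken-language tokens from it. This is a minimal, hypothetical sketch assuming precomputed frame features and an arbitrary vocabulary size; it is not the project's actual model.

import torch
import torch.nn as nn

class SignToTextSketch(nn.Module):
    """Hypothetical video-to-text model: encode frame features, decode text."""
    def __init__(self, frame_feat_dim=512, d_model=256, vocab_size=10000):
        super().__init__()
        # Direction (1): embed the continuous video sequence.
        self.visual_proj = nn.Linear(frame_feat_dim, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        # Direction (2): map video representations to spoken-language tokens.
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.output_head = nn.Linear(d_model, vocab_size)

    def forward(self, frame_feats, target_tokens):
        # frame_feats: (batch, num_frames, frame_feat_dim), e.g. from a video CNN
        # target_tokens: (batch, seq_len) token ids of the spoken-language text
        memory = self.encoder(self.visual_proj(frame_feats))
        tgt = self.token_embed(target_tokens)
        seq_len = target_tokens.size(1)
        # Causal mask so each output token only attends to earlier tokens.
        causal = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.output_head(out)  # (batch, seq_len, vocab_size) logits

# Toy usage with random inputs.
model = SignToTextSketch()
frames = torch.randn(2, 64, 512)            # 2 clips, 64 frames of features each
tokens = torch.randint(0, 10000, (2, 12))   # 2 target sentences, 12 tokens each
print(model(frames, tokens).shape)          # torch.Size([2, 12, 10000])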

Project coordination

Gul Varol (Laboratoire d'Informatique Gaspard-Monge)

The author of this summary is the project coordinator, who is responsible for the content of this summary. The ANR declines any responsibility for its contents.

Partners

LIGM Laboratoire d'Informatique Gaspard-Monge
VGG University of Oxford / VGG

ANR grant: 303,385 euros
Start and duration of the scientific project: March 2022 - 48 months
