DS07 - Société de l'information et de la communication

ComPLetely Unsupervised Multimodal Character identification On TV series and movies – PLUMCOT

Submission summary

Automatic character identification in multimedia videos is an important and challenging problem. Person identities can serve as a foundation for many higher-level video analysis tasks, such as semantic indexing, search and retrieval, interaction analysis, and video summarization.

The goal of this project is to exploit textual, audio and video information to automatically identify characters in TV series and movies without requiring any manual annotation for training character models. A fully automatic and unsupervised approach is especially appealing given the huge amount of available multimedia data (and its growth rate). Text, audio and video provide complementary cues to the identity of a person, and thus allow a person to be identified more reliably than from any single modality.

To this end, we will address three main research questions: unsupervised clustering of speech turns (i.e. speaker diarization) and face tracks in order to group similar tracks of the same person without prior labels or models; unsupervised identification by propagation of automatically generated weak labels from various sources of information (such as subtitles and speech transcripts); and multimodal fusion of acoustic, visual and textual cues at various levels of the identification pipeline.

While many generic approaches to unsupervised clustering exist, they are not adapted to heterogeneous audiovisual data (face tracks vs. speech turns) and do not perform as well on challenging TV series and movie content as they do on more controlled data. Our general approach is therefore to first over-cluster the data, making sure that clusters remain pure, before assigning names to these clusters in a second step. Beyond domain-specific improvements for each modality alone, we also expect joint multimodal clustering to take advantage of all three modalities and improve clustering performance over any single modality.
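The over-clustering idea can be illustrated with a minimal sketch: single-linkage clustering of track embeddings with a deliberately strict distance threshold, so that borderline tracks stay in separate clusters. The 2-D "track embeddings" below are toy data, and the function name is ours; actual speaker diarization and face clustering would operate on learned acoustic or facial representations.

```python
import math
from itertools import combinations

def over_cluster(embeddings, threshold):
    """Single-linkage clustering with a strict distance threshold.

    A low threshold deliberately over-clusters: borderline tracks end up
    in separate clusters, which keeps each cluster pure at the cost of
    splitting some characters across several clusters.
    """
    n = len(embeddings)
    parent = list(range(n))  # union-find over track indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge any two tracks closer than the threshold.
    for i, j in combinations(range(n), 2):
        if math.dist(embeddings[i], embeddings[j]) < threshold:
            union(i, j)

    return [find(i) for i in range(n)]

# Toy 2-D embeddings: two characters, the last track being borderline.
tracks = [(0.0, 0.0), (0.1, 0.0), (0.9, 1.0), (1.0, 1.0), (1.6, 1.0)]
labels = over_cluster(tracks, threshold=0.3)
```

With this threshold the five tracks form three clusters for two characters: the borderline track is left on its own rather than risk contaminating a cluster, which is exactly the purity-over-completeness trade-off described above.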

Unsupervised identification then aims to assign character names to clusters in a completely automatic manner (i.e. using only information already present in the speech and video). In TV series and movies, character names are usually introduced and reiterated throughout the video. We will detect and use addresser-addressee relationships in both speech transcripts (using named entity detection techniques) and video (using mouth movements, viewing direction and focus of attention of faces). This allows us to assign names to some clusters, learn discriminative models from them, and assign names to the remaining clusters.
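A minimal sketch of this two-step naming scheme follows, under loose assumptions: the weak labels, character names and cluster centroids are invented toy data, and the nearest-centroid rule in step 2 stands in for the discriminative models the project would actually learn.

```python
import math
from collections import Counter, defaultdict

# Hypothetical weak labels extracted from transcripts, e.g. a speech turn
# that follows "Thanks, Leonard." is weakly labeled "Leonard".
# Each entry: (cluster id of the turn, propagated name). Labels are noisy.
weak_labels = [
    (0, "Leonard"), (0, "Leonard"), (0, "Penny"),  # one noisy label
    (2, "Penny"), (2, "Penny"),
]

# Step 1: name each weakly labeled cluster by majority vote.
votes = defaultdict(Counter)
for cluster, name in weak_labels:
    votes[cluster][name] += 1
names = {c: counter.most_common(1)[0][0] for c, counter in votes.items()}

# Step 2: clusters with no weak label inherit the name of the closest
# named cluster (toy centroids; a stand-in for discriminative models).
centroids = {0: (0.0, 0.0), 1: (0.1, 0.1), 2: (1.0, 1.0)}
for c in centroids:
    if c not in names:
        nearest = min(names, key=lambda k: math.dist(centroids[c], centroids[k]))
        names[c] = names[nearest]
```

Here the majority vote absorbs the noisy "Penny" label on cluster 0, and the unlabeled cluster 1 inherits "Leonard" from its nearest named neighbour.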

For evaluation, we will extend and further annotate a corpus of four TV series (57 episodes) and one movie series (8 movies), about 50 hours of video in total. This diverse data covers different filming styles, types of stories, and challenges in both video and audio. We will evaluate each step of this project on this corpus, and also make our annotations publicly available to other researchers working in the field.

Project coordination

Hervé BREDIN (Laboratoire d'Information pour la Mécanique et les Sciences de l'Ingénieur)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its content.

Partner

LIMSI Laboratoire d'Information pour la Mécanique et les Sciences de l'Ingénieur
KIT Karlsruhe Institute of Technology

ANR grant: 219,531 euros
Beginning and duration of the scientific project: February 2017 - 36 Months
