CHIST-ERA Call 2019 (step 2) - 10th Call for Projects of the ERA-NET CHIST-ERA

Countering Creative Information Manipulation with Explainable AI – CIMPLE

Submission summary

Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions (interpretability). Explainability, by contrast, considers how AI can be understood by human users. The understandability of such explanations, and their suitability to particular users and application domains, have received very little attention so far. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods.

CIMPLE will draw on models of human creativity, both in manipulating and in understanding information, to design explanations that are more understandable, reconfigurable and personalisable. Human factors are key determinants of the success of the relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice: the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users' trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used.

Knowledge Graphs offer significant potential to better structure the core of AI models and to use semantic representations when producing explanations for their decisions. By capturing the context and the application domain in a granular manner, such graphs provide a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches.

To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
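To make the knowledge-graph idea above concrete, the minimal sketch below shows how semantic triples about a claim can be traversed to assemble an explanation that cites its evidence, rather than emitting a bare true/false label. It assumes the open-source rdflib Python library; the EX vocabulary and all facts are invented for illustration and are not CIMPLE's actual schema or data.

# Illustrative sketch only: the EX vocabulary and the facts below are
# hypothetical, not CIMPLE's actual schema or data.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/cimple/")

g = Graph()
claim = EX["claim42"]
g.add((claim, RDF.type, EX.Claim))
g.add((claim, EX.text, Literal("Drinking hot water cures the flu")))
g.add((claim, EX.verdict, Literal("false")))
g.add((claim, EX.reviewedBy, EX["HealthFactCheckOrg"]))
g.add((claim, EX.contradictedBy, EX["WHO_InfluenzaFactsheet"]))

# Traverse the graph to build an explanation that names its evidence,
# instead of returning a bare credibility label.
text = next(g.objects(claim, EX.text))
verdict = next(g.objects(claim, EX.verdict))
reviewers = [r.split("/")[-1] for r in g.objects(claim, EX.reviewedBy)]
sources = [s.split("/")[-1] for s in g.objects(claim, EX.contradictedBy)]

print(
    f'The claim "{text}" was rated {verdict} by {", ".join(reviewers)}; '
    f'it is contradicted by: {", ".join(sources)}.'
)

Because the evidence is stored as explicit, typed relations, the same traversal could be reconfigured per user or application domain, for example surfacing different sources or levels of detail, which illustrates the kind of reconfigurable, personalisable explanation the project targets.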

Project coordination

Raphaël Troncy (EURECOM)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines all responsibility for the content of this summary.

Partners

OU The Open University
INESC-ID Instituto de Engenharia de Sistemas e Computadores, Investigação e Desenvolvimento em Lisboa
WLT webLyzard technology
EURECOM EURECOM
VSE University of Economics, Prague

ANR funding: 296,842 euros
Start date and duration of the scientific project: March 2021, 36 months
