ChairesIA_2019_1 - Chaires de recherche et d'enseignement en Intelligence Artificielle - vague 1 de l'édition 2019

EXPlainable artificial intelligence: a KnowlEdge CompilaTion FoundATION – EXPEKCTATION

Submission summary

The EXPEKCTATION project is about explainable and robust AI. It aims to devise global, model-agnostic approaches to interpretable and robust machine learning using knowledge compilation: we seek generic pre-processing techniques capable of extracting from any black-box predictor a corresponding white-box that can be used to provide various forms of explanations and to address verification queries. In the EXPEKCTATION project, we plan to focus on post hoc interpretability: we will consider ML models that are not intrinsically interpretable and analyze them once they have been trained. We also plan to focus on global interpretability (i.e., explaining the entire model behavior, which is not the same as explaining an individual prediction).

Clearly, translating the black-box into a white-box can be computationally demanding. Notably, if the white-box model is an arithmetic circuit, it can be very large. Furthermore, inferring explanations from a white-box model can also be computationally demanding, as many abduction problems are NP-hard. Fortunately, once the black-box model has been trained, there is no need to modify it each time a new input to predict is considered. Thus, the corresponding white-box / circuit can be pre-processed so as to facilitate the generation of explanations of predictions, independently of the corresponding inputs. Knowledge compilation (KC) appears to be a very promising approach in this respect.
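To illustrate the compile-once, query-many-times idea behind KC, here is a minimal sketch in Python (the function, variable names, and the naive Shannon-expansion compiler are illustrative assumptions, not the project's actual algorithms): a black-box boolean function is compiled, at exponential cost, into a decision-diagram-like structure, after which a query such as model counting runs in time linear in the size of the compiled form.

```python
from itertools import product

VARS = ["x1", "x2", "x3"]

def black_box(assignment):
    # toy stand-in for a trained predictor: (x1 AND x2) OR x3
    return (assignment["x1"] and assignment["x2"]) or assignment["x3"]

def compile_to_dd(f, variables, partial=None):
    """Compile f into a decision-diagram-like nested tuple
    (var, low_child, high_child) by Shannon expansion; leaves are
    booleans. This exhaustive compilation is the expensive, one-off step."""
    partial = partial or {}
    if not variables:
        return f(partial)
    v, rest = variables[0], variables[1:]
    low = compile_to_dd(f, rest, {**partial, v: False})
    high = compile_to_dd(f, rest, {**partial, v: True})
    return (v, low, high)

def count_models(node):
    """Model counting: a single traversal, linear in the circuit size."""
    if isinstance(node, bool):
        return 1 if node else 0
    _, low, high = node
    return count_models(low) + count_models(high)

dd = compile_to_dd(black_box, VARS)
print(count_models(dd))  # prints 5: the models of (x1 AND x2) OR x3
```

Once `dd` is built, further queries (counting, entailment, enumeration) no longer touch the black box at all, which is precisely what makes pre-processing worthwhile when many inputs and queries follow.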

The main purpose of the EXPEKCTATION project is to take advantage of KC techniques, in which we have strong expertise, to address fundamental issues on the way to explainable and robust AI. Two main issues will be considered:
- Which representation languages admit tractable algorithms for inferring various forms of explanations and support many verification queries?
- How can we extract a tractable representation from a black-box predictor?

As to the first issue, we plan to study abduction tasks and verification queries for families of compiled representations, especially those for which "efficient" compilers exist. We will consider new concepts of preferred explanations, including a notion of robust explanation, i.e., intuitively, an explanation that still explains the observations when distorted by some "noise". We will investigate the computational complexity of finding preferred explanations, counting them, and enumerating them for several families of compiled circuits, and we will develop and evaluate algorithms for those tasks. We also aim to determine how to make explanations intelligible by taking a user model into account. Finally, we will consider notions of counterfactual explanations in the abductive setting, to address the case when the user is surprised by the explanations that are reported: in such a case, one must be able to explain why the explanations provided are not those the user was expecting.
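As a concrete (and deliberately naive) illustration of the abductive setting, the sketch below computes a subset-minimal sufficient explanation of a prediction by brute force; the toy classifier, feature names, and exhaustive entailment check are assumptions for illustration only, and scale to small feature sets, not to the compiled circuits the project targets.

```python
from itertools import combinations, product

FEATURES = ["x1", "x2", "x3"]

def predictor(a):
    # toy classifier standing in for a trained model: x1 AND (x2 OR x3)
    return a["x1"] and (a["x2"] or a["x3"])

def entails(term, label):
    """True iff every completion of the partial assignment `term`
    yields `label` -- an exhaustive check, viable only for tiny n."""
    free = [f for f in FEATURES if f not in term]
    for values in product([False, True], repeat=len(free)):
        full = {**term, **dict(zip(free, values))}
        if predictor(full) != label:
            return False
    return True

def minimal_explanation(instance):
    """Smallest subset of the instance's feature literals that is
    sufficient, on its own, to force the instance's label."""
    label = predictor(instance)
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            term = {f: instance[f] for f in subset}
            if entails(term, label):
                return term
    return dict(instance)

inst = {"x1": True, "x2": True, "x3": False}
print(minimal_explanation(inst))  # prints {'x1': True, 'x2': True}
```

Here the literal x3 = False plays no role in the prediction, so the explanation keeps only x1 and x2; deciding such sufficiency is exactly the kind of query that becomes tractable on suitably compiled circuits.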

As to the second issue, walking in the footsteps of Shih et al. (2019), we plan to use the black-box predictor at hand as an oracle for extracting a circuit. We plan to consider other classes of neural networks and other classes of arithmetic circuits, more general than those considered so far. Our query-directed learning algorithms will use not only membership queries but also more powerful statistical queries when the neural net is a probabilistic predictor. We also plan to achieve this goal via a two-step approach, in which one first extracts an intermediate model (e.g., a Bayes net) from the black-box predictor, and then compiles it into an arithmetic circuit. Though we have a significant background on the second step, many questions related to the first step are still open and will be addressed; in particular, can we translate deep neural nets into deep Bayes nets? Which classes of neural networks are representable by probabilistic graphical models that can in turn be efficiently compiled into arithmetic circuits?
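The oracle-based extraction step can be caricatured as follows; this sketch (with an invented toy oracle) replaces the project's intended circuit-learning algorithms by the simplest possible membership-query strategy, namely querying the black box on every input, which is exponential in the number of variables and serves only to show the interface.

```python
from itertools import product

N_VARS = 3

def oracle(bits):
    # stands in for the black-box predictor (e.g., a trained neural net)
    x1, x2, x3 = bits
    return int((x1 ^ x2) or x3)

def extract_white_box(oracle, n):
    """Membership-query extraction: query the black box on every input
    and tabulate the answers. A real extractor would instead build a
    compact circuit from far fewer, carefully chosen queries."""
    return {bits: oracle(bits) for bits in product((0, 1), repeat=n)}

table = extract_white_box(oracle, N_VARS)

# The extracted white-box now answers verification queries without the
# oracle, e.g.: "is some input with x3 = 0 predicted 0?"
print(any(label == 0 for bits, label in table.items() if bits[2] == 0))  # prints True
```

The division of labor is the point: all interaction with the black box happens during extraction, and every subsequent explanation or verification query is answered from the white-box alone.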

Project coordination

Pierre Marquis (Centre de Recherche en Informatique de Lens)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its content.

Partner

CRIL Centre de Recherche en Informatique de Lens

ANR funding: 528,120 euros
Beginning and duration of the scientific project: August 2020 - 48 months
