CE23 - Artificial Intelligence

Framework for Automatic Interpretability in Machine Learning – FAbLe

Submission summary

Recent technological advances rely on accurate decision support systems that are constructed as black boxes. That is, the system's internal logic is not available to the user, e.g., due to the complexity of the system. This lack of explanation can lead to technical, ethical, and legal issues. For this reason, multiple research approaches aim to provide comprehensible explanations for the decisions of traditionally accurate but black-box-like machine learning algorithms such as neural networks. All these approaches rely on explanations based on simpler models such as linear functions, rules, and decision trees. Nevertheless, data scientists lack a straightforward way to choose the most suitable explanation model for a given use case. Our proposal, called FAbLe (Framework for Automatic Interpretability in Machine Learning), aims at fully automating this process in order to provide the most faithful and comprehensible explanations.
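To illustrate the kind of surrogate-model explanation the summary refers to, the sketch below fits a weighted linear model around a single instance of a black-box classifier, in the spirit of LIME-style local explanations. The `black_box` function, the sampling width `sigma`, and the proximity weighting are all illustrative assumptions, not the project's actual method.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear decision function
# standing in for, e.g., a trained neural network.
def black_box(X):
    return (X[:, 0] ** 2 + np.sin(X[:, 1]) > 1.0).astype(float)

def local_linear_explanation(f, x0, n_samples=5000, sigma=0.5, seed=0):
    """Fit a weighted linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance of interest and query the black box.
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.size))
    y = f(X)
    # Proximity weights: samples closer to x0 matter more.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef  # [intercept, weight_feature_0, weight_feature_1]

x0 = np.array([1.0, 0.5])
coef = local_linear_explanation(black_box, x0)
```

The resulting coefficients indicate how much each feature pushes the prediction up or down near `x0`; rules or shallow decision trees could be fitted to the same perturbed sample instead, which is exactly the choice of explanation model FAbLe seeks to automate.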

Project coordinator

Mr Luis Galárraga (Centre de Recherche Inria Rennes - Bretagne Atlantique)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its contents.

Partner

Centre de Recherche Inria Rennes - Bretagne Atlantique

ANR grant: 195,207 euros
Beginning and duration of the scientific project: February 2020 - 48 months
