ChairesIA_2019_2 - Research and Teaching Chairs in Artificial Intelligence - wave 2 of the 2019 edition

Explainable artificial intelligence for anti-money laundering – XAIforAML

AI can help in fighting money laundering and terrorist financing (AML-CFT), but AI tools won’t be adopted until they’re explainable

Current AML-CFT systems deployed by financial institutions are costly. They rely on rule-based systems, generate large numbers of false positives, and their actual contribution to apprehending criminal funds has been debated. AI offers new perspectives to improve the efficacy of AML-CFT, but regulatory uncertainty, particularly on the lack of explainability of machine-learning models, is a major barrier.

Identify the needs for explainable AI from the standpoint of banks deploying AML-CFT systems and supervisory authorities who regulate and audit those systems

Financial institutions and government authorities want to introduce more advanced AI tools in AML-CFT processes. Financial institutions see AI as a way to reduce costs; regulators see it as a way to identify more criminal networks that currently escape prosecution (the director of Europol estimates that only 1% of criminal funds are actually apprehended). Current AML-CFT tools rely on rule-based systems that generate alerts requiring human review. Many alerts (>90%) are false positives, and human review of alerts can lead to backlogs and delays. Adoption of machine-learning tools in AML-CFT systems has been slow or non-existent, one reason being the regulatory uncertainty associated with using opaque, and in some respects unpredictable, algorithms in a highly regulated function where heavy sanctions apply if systems are deemed inadequate by the regulator. With the support of our partners PwC and Dataiku, our project is to determine why explainability is needed in AI-based AML-CFT systems: explanations to whom, and for what purpose? We then aim to develop technical and regulatory approaches that contribute to removing explainability as a barrier to AI uptake in AML-CFT processes.

Our method consists of:
- First, placing the AML-CFT process in the broader context of its economic, regulatory and human-rights environment, including identification of current weaknesses in the process that AI can potentially help address;
- Second, studying explainability in AML-CFT processes, identifying precisely what purpose explainability serves and for whom. We are focusing on the need (or not) for explanations by human operators at the bank in charge of examining AML-CFT alerts, and by officials from the bank supervisory authority who may want to know why a given alert was classified in a certain way;
- Third, exploring the utility of graph networks to supplement existing AML-CFT methods for identifying unusual patterns, and for providing explanations.
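To illustrate the graph-based direction at a toy scale, a transaction network can be screened for accounts with a high fan-in of just-below-ceiling transfers, a classic structuring ("smurfing") pattern, while keeping the contributing edges as the explanation attached to the alert. All account names, amounts and thresholds below are hypothetical, and the `flag_structuring` helper is a sketch, not the chair's actual method:

```python
from collections import defaultdict

# Toy transaction edges: (sender, receiver, amount in EUR).
# All names and values are invented for illustration.
transactions = [
    ("acct_A", "acct_X", 900), ("acct_B", "acct_X", 950),
    ("acct_C", "acct_X", 880), ("acct_D", "acct_X", 920),
    ("acct_E", "acct_F", 5000),
]

def flag_structuring(txs, fan_in_threshold=3, amount_ceiling=1000):
    """Flag accounts receiving many sub-ceiling transfers, and return
    the contributing edges so the alert carries its own explanation."""
    incoming = defaultdict(list)
    for sender, receiver, amount in txs:
        if amount < amount_ceiling:
            incoming[receiver].append((sender, amount))
    return {
        acct: senders
        for acct, senders in incoming.items()
        if len(senders) >= fan_in_threshold
    }

alerts = flag_structuring(transactions)
# acct_X is flagged; the four sub-ceiling transfers into it
# are returned as the human-readable reason for the alert.
```

Returning the supporting edges, rather than only a score, is the point of the sketch: a graph-based alert can in principle be explained by exhibiting the subgraph that triggered it.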

We organized with the French supervisory authority ACPR-Banque de France a series of online workshops (“Les Lundis de l’IA et de la Finance”) focusing on different aspects of AI used by financial institutions, including for AML-CFT. Each webinar brought together academics, regulators and financial institutions from several countries, and each workshop was attended by between 150 and 300 participants.
During the period we did considerable research on how the introduction of AI in AML-CFT systems may threaten human rights and made suggestions on how AI-based systems could be introduced while guaranteeing respect for fundamental rights and freedoms. This research resulted in the publication of two articles in peer-reviewed journals.
PhD researcher Astrid Bertrand studied the effectiveness of various explanation techniques in helping human operators make decisions without falling victim to automation bias and has submitted her survey paper to several international conferences.
We developed with a major French financial institution a research project on XAI for AML that would involve graph networks.

The French supervisory authority ACPR-Banque de France is closely associated with our work and has indicated that the project “addresses a subject of public interest by reconciling anti-money laundering requirements with the rights of customers, in particular their fundamental rights, in a context where AI has opened new perspectives.”
Our ambitions for 2022 are:
• To publish the working paper by PhD researcher Astrid Bertrand on potential pitfalls of explainable AI in the proceedings of an international conference on artificial intelligence;
• To conduct an experiment and publish a paper on the efficacy of different AI explanation techniques for regulators auditing algorithmic decisions to close AML alerts;
• To launch the new research with a major French financial institution on the use of graph networks both to detect suspicious activities and to facilitate explanations;
• To publish a research paper by visiting scholar Joshua Brand on the moral and ethical requirements for explainable AI in AML;
• Depending on continuing COVID conditions, either to organize a series of online workshops or to hold one international symposium on XAI’s role in improving AML processes and respect for fundamental rights.

Publications in peer-reviewed journals:
W. Maxwell, The GDPR and private-sector measures to detect criminal activity, Revue des Affaires Européennes, 2021 n°1, p. 103 (HAL 03316259)

A. Bertrand, W. Maxwell, X. Vamparys, Do AI-based anti-money laundering (AML) systems violate European fundamental rights? 11 Int’l Data Privacy Law (Oxford University Press), No. 3, 2021, p. 276 (HAL 02884824)

International conferences: W. Maxwell, Are AI-based AML systems compatible with European fundamental rights? ICML2020 Law and Machine Learning workshop, July 17, 2020

AI, AML and Human Rights, presentation at Academy of European Law (ERA) workshop on Artificial intelligence and financial transparency as a national security priority, 26 January 2022.

Workshops organized with the ACPR-Banque de France:

Mondays of AI and Finance, organized by ACPR-Banque de France and Télécom Paris:
1. Explainability of AI in Finance, 9 Nov. 2020
2. Algorithmic Fairness, 11 Jan 2021
3. Data sharing and pooling, 8 March 2021
4. AI regulation in the financial sector: crossed perspectives from Asia and Europe, 17 May 2021
(Workshop proceedings are published on HAL 3530291)

The XAI4AML (Explainable AI for Anti-Money Laundering) chair will explore how AI and explainability affect the optimal level of financial regulation, including how different levels of explainability and regulation may affect the costs and benefits associated with deploying AI-based solutions for anti-money laundering (AML) enforcement.
Traditional approaches used by banks to fight money laundering are both costly (€20 billion per year in Europe), and relatively ineffective, being based on deterministic rule-based models. Current AML systems generate many false positives, while at the same time missing large amounts of truly suspicious transactions. Professional criminals use sophisticated techniques to disguise transfers as normal-looking transactions. AI can reduce false positives and bring about greater effectiveness by identifying otherwise invisible trends across large data sets. However, problems of explainability, together with regulatory uncertainty, are the main barriers to implementing AI in AML systems.
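A minimal synthetic sketch of the false-positive problem described above (the data, labels and €10,000 threshold are invented for illustration, not drawn from any real system): a single deterministic threshold rule both flags legitimate large payments and misses a transaction deliberately kept under the threshold.

```python
# Synthetic labeled transactions: (amount in EUR, is_actually_suspicious).
labeled_transactions = [
    (12000, False),  # legitimate car purchase
    (15000, False),  # legitimate tuition payment
    (11000, True),   # structured deposit
    (800, False),    # everyday payment
    (9500, True),    # deliberately kept just under the threshold
]

def rule_based_alerts(txs, threshold=10000):
    """Deterministic rule: alert on any amount above the threshold."""
    return [(amt, sus) for amt, sus in txs if amt > threshold]

alerts = rule_based_alerts(labeled_transactions)
false_positives = [a for a in alerts if not a[1]]
missed = [(amt, sus) for amt, sus in labeled_transactions
          if sus and amt <= 10000]
# Two of the three alerts are false positives, and the sub-threshold
# suspicious transaction generates no alert at all.
```

The sketch shows why rule-based screening is both noisy and evadable: the rule is transparent, but its decision boundary is public knowledge that criminals can engineer around, which is the gap ML-based detection aims to close.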
My interdisciplinary chair, combining economics, law (with Winston Maxwell, Director of Law & Technology) and AI/data science (Stéphan Clémençon, Professor of Applied Mathematics), will contribute to the economic literature on financial regulation and financial crime, while at the same time addressing an operational need for clarity on what constitutes an “explainable” AI system for AML. The results will have a positive impact on designers of AI-based AML systems (such as the French fintech Bleckwen, a partner of the chair), on users of the systems (such as banks and consulting firms, in particular PwC, specialized in banking operations and compliance, a partner of the chair), and on the financial regulator (the ACPR, a partner of the chair).

Project coordination

David Bounie (Telecom Paris)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility as to its contents.

Partner

TP Telecom Paris

ANR funding: 600,000 euros
Start and duration of the scientific project: February 2020, 48 months
