Trustworthy and Robust Artificial INtelligence – TRAIN
Artificial Intelligence (AI) technologies can efficiently process large amounts of data to help stakeholders improve their services and propose applications tailored to end-user needs. While the benefits of AI technologies for industry and society can be manifold, ranging from personalized services to improved health care, their adoption remains slow.
In TRAIN, we set forth to address this challenge and focus on two main barriers to the widespread deployment of AI: the lack of robustness and the lack of trustworthiness.
Robustness. While the amount of data available to AI technologies has increased exponentially, its quality generally remains poor. AI faces several technical challenges, including data heterogeneity and bias: model predictions and parameters may differ sensitively when trained on different data samples, and AI models can inherently reproduce biases present in the data. Beyond these technical challenges, the lack of robustness of AI technology can be critically exploited through adversarial attacks that specifically aim to fool or deviate machine-learning predictions.
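To illustrate how small an adversarial perturbation can be, the sketch below applies the well-known Fast Gradient Sign Method to a toy logistic-regression model. All weights and inputs are illustrative values chosen for the example, not data from the project.

```python
import numpy as np

def predict(w, b, x):
    """Logistic-regression probability for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps):
    """Fast Gradient Sign Method: shift x by eps in the direction
    that increases the loss, degrading the model's prediction."""
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    grad_x = (predict(w, b, x) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (illustrative values)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
print(predict(w, b, x))      # above 0.5: correctly classified
print(predict(w, b, x_adv))  # below 0.5: prediction flipped
```

The perturbation is bounded per-coordinate by eps, which is why such attacks can remain imperceptible on high-dimensional inputs such as images.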
Trustworthiness. The performance and robustness of AI technologies rely on access to large datasets of good quality. Such datasets usually include privacy-sensitive information. In this context, federated learning (FL) is emerging as a powerful paradigm to collaboratively train a machine-learning model among thousands or even millions of participants. Nevertheless, this technology is exposed to various privacy and security attacks: the collaborative aggregation of model parameters can potentially expose client-specific information and opens the door to security breaches, with potential loss of privacy of clients' data.
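The collaborative aggregation referred to above can be sketched with the classic federated averaging scheme (FedAvg): each client trains locally on its private data and the server only averages the resulting parameters, weighted by dataset size. The two clients, the least-squares task, and all constants below are hypothetical choices for illustration.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """One client's local training: a few gradient steps of
    least-squares regression on its private data."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(w, clients):
    """Server step: average the clients' updated parameters,
    weighted by their local dataset sizes (raw data never leaves
    the clients; only parameters are shared)."""
    updates = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two hypothetical clients holding disjoint local datasets
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (30, 70):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(50):       # communication rounds
    w = fedavg(w, clients)
print(w)                  # approaches the underlying weights
```

Note that even though raw data stays local, the shared parameter updates themselves can leak information about client data, which is precisely the privacy exposure the project investigates.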
The overarching goal of the TRAIN project is to study these two challenges and design trustworthy and robust AI models. The proposed research agenda focuses on the challenging federated learning setting as a key paradigm to enable trustworthiness in the training phase, while the trustworthiness and robustness of the evaluation and deployment phases are investigated in both centralized and collaborative machine-learning scenarios.
Project coordination
Marco Lorenzi (Centre de Recherche Inria Sophia Antipolis - Méditerranée)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partnership
Fraunhofer Institute for Production Technology
EURECOM
Inria Centre de Recherche Inria Sophia Antipolis - Méditerranée
RUB Ruhr-University Bochum
ANR grant: 767,498 euros
Beginning and duration of the scientific project:
May 2023
- 48 months