Graph Neural Networks for Explainable Artificial Intelligence – GraphNEx
GraphNEx will contribute a graph-based framework for developing inherently explainable AI. Unlike current AI systems that utilise complex networks to learn high-dimensional, abstract representations of data, GraphNEx embeds symbolic meaning within AI frameworks. We will combine semantic reasoning …
Intelligent Sharing of Explanation Experience by users for users – iSee
A right to obtain an explanation of a decision reached by a machine learning (ML) model is now enshrined in EU regulation. Different stakeholders (e.g. patients, clinicians, developers, auditors) may have different background knowledge, competencies and goals, and thus require different kinds of explanation …
Supporting Energy Communities – Operational Research and Energy Analytics – SEC-OREA
SEC-OREA enables local energy communities (LECs) to participate in the decarbonisation of the energy sector by developing advanced, efficient algorithms and analytics technologies. LECs are an efficient way to manage energy by increasing the use of renewable energy sources (RES) at a local level. …
Countering Creative Information Manipulation with Explainable AI – CIMPLE
Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions (interpretability) …
Measuring and Improving Explainability for AI-based Face Recognition – XAIface
Face recognition has become a key technology in our society, frequently used in multiple applications while raising privacy concerns. As face recognition solutions based on artificial intelligence (AI) become increasingly popular, it is critical to fully understand and explain how these technologies …
Interpretability of Deep Neural Networks for Radiomics – INFORM
Deep neural networks (DNNs) have achieved outstanding performance and broad implementation in tasks such as classification, denoising, segmentation and image synthesis, including in medical imaging. However, DNN-based models and algorithms have seen limited adaptation and development within the radiomics …
Explainable Predictive Maintenance – XPM
The XPM project aims to integrate explanations into Artificial Intelligence (AI) solutions within the area of Predictive Maintenance (PM). Real-world applications of PM are increasingly complex, with intricate interactions of many components. AI solutions are a very popular technique in this domain …
ArgumeNtaTIon-Driven explainable artificial intelligence fOr digiTal mEdicine – ANTIDOTE
Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well, it requires, among other factors: selecting a proper level of generality/specificity of the explanation; considering assumptions about the familiarity of the explanation …
Causal eXplanations in Reinforcement Learning – CausalXRL
Deep reinforcement learning (RL) systems are approaching or surpassing human-level performance in specific domains, from games to decision support to continuous control, albeit in non-critical environments and usually learning via random exploration. Despite these prodigious achievements, many applications …