XAI4ML

Developing XAI-based tools to enhance ML models across different data modalities

This project is a key initiative within the Software Campus program and is funded by the German Federal Ministry of Education and Research (BMBF). It is a collaboration that brings together the industry expertise of Software AG and the academic research of the Friedrich-Alexander University of Erlangen-Nuremberg, with the aim of fostering innovation and technology leadership.

Methods of Artificial Intelligence (AI) have evolved into a key technology for both science and practice. This development has enabled unprecedented progress in many application areas, such as medical technology. A frequent disadvantage of these systems, however, is their lack of explainability. In decision-critical areas where legislation demands that automated decisions be justified, the explainability of AI methods is indispensable; it also increases user acceptance. This applies both to situations where the AI system plays a supporting role (e.g., medical diagnosis) and to situations where it effectively makes the decision itself (e.g., autonomous driving).

The scientific objective of the project is the conception, implementation, and evaluation of complete, reusable, and marketable software components based on Explainable Artificial Intelligence (XAI). Their design is guided by the principles of technical feasibility and economic necessity. The principle of economic necessity is particularly relevant when the developed XAI concepts and methods are intended to make the application of AI accessible to new user groups without the corresponding technical know-how.

The application of XAI methods in specific application scenarios has also become a focus of current research. These applications, however, depend on several factors, such as the operating environment, the amount of available data, and the chosen AI approach, which makes applying them a challenging endeavor. Therefore, while XAI is being applied to an increasing number of systems and environments in practice, many development aspects and practical challenges remain unclear. To help clarify them, the project builds a better understanding of explainable artificial intelligence through insights from selected case studies across various application scenarios.

The technical goal of the project is to gain a better understanding of how XAI methods behave in different application contexts and how they can make existing AI systems more accessible and understandable to a wider audience. By implementing prototypes, it is possible to analyze how XAI methods can be concretely implemented and applied in practice, what challenges arise, and how they can be overcome. The project also explores how XAI tools can be used to improve the performance and interpretability of machine learning (ML) models trained on both tabular and image datasets. This involves assessing how XAI can provide deeper insights into model decisions, identify potential biases in the data, and improve the overall reliability of the models through more transparent and understandable ML processes.
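
To illustrate the kind of component the project targets for tabular data, the following is a minimal sketch of a feature-attribution analysis using the SHAP library. The dataset, model, and library choice here are illustrative assumptions, not a prescribed part of the project.

```python
# Minimal sketch: feature attribution for a tabular ML model with SHAP.
# Dataset, model, and library choice are illustrative assumptions,
# not part of the project's specification.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Load a small bundled tabular dataset and train a tree-based model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles:
# one additive contribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive the model's predictions overall?
shap.summary_plot(shap_values, X_test)

# Local view: explain a single prediction (here, the first test sample),
# the kind of output needed to justify individual automated decisions.
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0],
                matplotlib=True)
```

For image models, attribution methods such as Grad-CAM or Integrated Gradients (available, for example, in the captum library for PyTorch) would play the analogous role, highlighting the input regions that drive a prediction.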