ETNS

Development of an interactive explainability tool for NLP systems

This project, a key initiative within the Software Campus program, is funded by the German Federal Ministry of Education and Research (BMBF). It is a collaboration that brings together the industry expertise of Software AG and the academic research of the Technical University of Munich (TU Munich), with the aim of fostering innovation and technology leadership.

Artificial Intelligence (AI) systems are advancing rapidly and transforming the world, with AI-driven solutions revolutionizing industries and making everyday life more convenient. These advancements are particularly evident in recent progress in Natural Language Processing (NLP), a research area at the intersection of computer science, computational linguistics, and AI. NLP applications are used in many fields where text is the primary input, such as the medical, legal, and academic domains, and include question answering, text summarization, sentiment analysis, information extraction, information retrieval, and machine translation.

Advancements in language models and their application to a wide range of NLP tasks have increased the demand for explanations of model decisions. Like most neural models, they lack transparency in their internal workings and in the process by which they reach conclusions. The advent of transformer-based models such as BERT and GPT has paved the way for high-performing language models that deliver remarkable results across various NLP tasks. Models such as GPT-3 and PaLM have showcased the potential of Large Language Models (LLMs) in many NLP applications, and the introduction of ChatGPT marks a significant milestone, as it is the first such model to find widespread use beyond NLP research. These advancements in LLMs and NLP models in general, and their adoption in diverse fields, have underscored the need to make model decisions explainable.

The emerging field of Explainable AI (XAI) arose from the growing recognition within the AI community of the importance of explainability. The need for responsible and trustworthy AI has been affirmed in various national and international legal frameworks. In April 2021, the European Commission proposed the first-ever legal framework for AI, outlining new rules designed to ensure that AI systems are developed and applied in a responsible and trustworthy manner; the regulation is expected to come into effect in the latter half of 2024. Explainable AI systems are a cornerstone of trustworthy and responsible AI. XAI for NLP-based AI systems can be defined as methods and models that make the predictions and behaviors of these systems understandable to humans. A model that explains its output facilitates troubleshooting, auditing, and the detection of biases.

Despite the many techniques that have emerged in the literature, open questions remain about how best to employ these methods to deliver useful explanations to different stakeholders. As new techniques appear, answering these questions becomes crucial for their adoption and effective use in real-world applications. This project investigates the current state of Explainable AI (XAI) for NLP-based AI systems in practical applications. The aim is to help different stakeholders understand the decisions and predictions made by such systems, tailored to their specific needs: for developers, this might mean support for troubleshooting a system, while for end users it might mean explanations that foster trust. The project intends to establish an interactive framework in which various explanation methods are identified and implemented to offer complementary explanations, enabling detailed explanations and analyses of the decisions and behaviors of NLP-based AI systems for different stakeholders. A minimal illustration of one such explanation view follows.
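
The sketch below is purely illustrative, not the project's implementation: it shows one explanation view such a framework could expose, namely word-level attributions for a sentiment classifier, computed with LIME on top of an off-the-shelf Hugging Face model. The model name, class names, and example sentence are placeholders, and the attribution method is interchangeable with others such as gradient-based techniques.

```python
# Illustrative sketch only: word-level attributions for one prediction,
# assuming recent versions of the transformers, lime, and numpy packages.
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Off-the-shelf sentiment model standing in for an arbitrary NLP system.
clf = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for all classes, not just the top one
)

def predict_proba(texts):
    """Map a list of strings to an (n_samples, n_classes) probability array."""
    outputs = clf(list(texts))
    # Sort by label name so the columns are always [NEGATIVE, POSITIVE].
    return np.array(
        [[d["score"] for d in sorted(out, key=lambda d: d["label"])] for out in outputs]
    )

explainer = LimeTextExplainer(class_names=["NEGATIVE", "POSITIVE"])
text = "The plot was predictable, but the acting saved the film."
explanation = explainer.explain_instance(text, predict_proba, num_features=6)

print(clf(text))              # the model's prediction and class scores
print(explanation.as_list())  # words with their estimated contribution weights
```

In the envisioned framework, a local attribution view like this would sit alongside other, complementary views, with the presentation adapted to the stakeholder consuming it.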