BMBF-Project DIS-Lab (Dependable Intelligent Systems)
- Project Lead: Prof. Dr. Diedrich Wolter, Prof. Dr. Ute Schmid (CogSys Team), Prof. Dr. Gerald Lüttgen, Prof. Dr. Michael Mendler
- Researchers (CogSys Team): Sarem Seitz
- Main research topic (CogSys Team): Explainable Reinforcement Learning
- Additional information (German): https://www.uni-bamberg.de/en/sme/research/disl/
- Duration: 24 months
The primary research focus for the CogSys Team in this project is the development of novel methods for explainable reinforcement learning. While Explainable AI (XAI) has seen a sharp rise in popularity for supervised and unsupervised machine learning tasks, explainability research in the reinforcement learning domain is still much sparser. However, given the potential future impact of autonomously and intelligently acting machines – the ultimate goal of reinforcement learning – we expect an increasing need for transparent methods in this area of AI as well.
As of today, the dangers of non-transparent AI are already apparent and are frequently experienced, in particular, by disadvantaged members of our society. This compounds the problem that safety guarantees for intelligent agents – think of self-driving cars or robots – are currently very hard or even impossible to obtain, let alone guarantees of equal safety for all societal groups. A complete understanding of an agent's rationale will thus be indispensable to ensure fair and safe actions by advanced AI in the future.
Besides the ability to validate the safety, fairness and rationality of explainable agents, we are also interested in leveraging their transparency to improve and accelerate the learning process. Given that reinforcement learning currently requires large amounts of data or long-running simulations of artificial environments, we aim to provide relevant human knowledge to an agent that might otherwise need far more experience to acquire such knowledge itself.
Figure: Workflow of Explainable Reinforcement Learning
The following list provides an overview of methods we are considering:
Leveraging existing XAI methods for reinforcement learning: While there are important differences between (un-)supervised machine learning and reinforcement learning, the two domains also share many parallels. We therefore see it as a logical step to take a closer look at existing methods such as LIME or SHAP and examine their applicability to reinforcement learning.
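To illustrate the transfer, the core idea of LIME – perturb an input locally and fit a weighted linear surrogate to the black-box output – carries over directly if we treat the agent's action preference as the function to be explained. The sketch below is purely illustrative (the toy `policy_score` function and all numbers are our own assumptions, not a project artifact):

```python
import numpy as np

# A toy black-box policy: scores one action from a 3-feature state
# (hypothetical stand-in for a trained RL agent's action preference).
def policy_score(states):
    return np.tanh(2.0 * states[:, 0] - states[:, 1] ** 2 + 0.1 * states[:, 2])

rng = np.random.default_rng(0)
state = np.array([0.5, 1.0, -0.3])  # the single decision we want to explain

# LIME-style explanation: perturb the state locally, query the policy,
# then fit a distance-weighted linear surrogate whose coefficients act
# as local feature attributions for the agent's preference.
perturbed = state + 0.1 * rng.standard_normal((500, 3))
scores = policy_score(perturbed)
weights = np.exp(-np.sum((perturbed - state) ** 2, axis=1) / 0.02)

X = np.hstack([perturbed, np.ones((500, 1))])  # add intercept column
W = np.diag(weights)
coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ scores, rcond=None)

print("local feature attributions:", coef[:3])
```

Around this particular state the surrogate recovers the expected signs: feature 0 pushes the score up, feature 1 pushes it down.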
Improving inherently transparent methods: Even today, we already have access to transparent methods that have been developed over many decades of machine learning research. Commonly known frameworks such as linear regression and decision tree models from statistical learning, or inductive logic approaches from symbolic AI, all offer a considerable degree of transparency. However, it is widely agreed that these methods in their original form struggle with today's complex vision, text and large-scale data when compared to popular deep learning frameworks. Nevertheless, we see great potential in advancing these methods so that they can solve such tasks while remaining understandable to humans.
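A minimal example of what "inherently transparent" means in a control setting: a depth-1 decision tree ("stump") fitted to expert demonstrations yields a policy that is a single human-readable rule. The data and setting below are invented for illustration only:

```python
import numpy as np

# Hypothetical demonstrations: 1-D state (distance to an obstacle) and
# the expert's discrete action (0 = drive on, 1 = brake).
states  = np.array([0.2, 0.4, 0.5, 1.1, 1.5, 2.0, 2.4, 3.0])
actions = np.array([1,   1,   1,   0,   0,   0,   0,   0])

# Fit a depth-1 decision tree by scanning candidate thresholds (the
# midpoints between sorted states) and keeping the one with the fewest
# misclassified demonstrations.
def fit_stump(x, y):
    best = (None, np.inf)
    for t in (x[:-1] + x[1:]) / 2:
        errors = np.sum((x < t).astype(int) != y)
        if errors < best[1]:
            best = (t, errors)
    return best

threshold, errors = fit_stump(states, actions)
print(f"policy: brake if distance < {threshold:.2f}  ({errors} errors)")
```

The resulting policy ("brake if distance < 0.80") is trivially auditable, which is exactly the property we want to preserve while scaling such methods up.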
Differentiable Programming: Initially, this term was jokingly introduced by Yann LeCun, referring to the complexity of modern deep learning algorithms. However, today's successful machine learning models tend to look more and more like actual computer programs made end-to-end differentiable. Automatic differentiation frameworks like TensorFlow or PyTorch allow abstract programs to be expressed mathematically and their parameters to be fine-tuned in a machine learning sense. This opens up the option of writing highly transparent programs that a human can make sense of and improve upon.
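The idea can be sketched without any framework: write a program whose logic is readable as plain code, then tune its parameter by gradient descent. TensorFlow or PyTorch would differentiate such a program automatically; the dependency-free sketch below uses finite differences instead, and the thermostat "environment" is entirely hypothetical:

```python
# A "program" that stays readable as plain code: a thermostat controller
# with one tunable gain, optimized end-to-end against a tracking cost.

def run_episode(gain, target=21.0, temp=15.0, steps=50):
    cost = 0.0
    for _ in range(steps):
        heating = gain * (target - temp)              # transparent control rule
        temp += 0.1 * heating - 0.05 * (temp - 10.0)  # toy room dynamics
        cost += (target - temp) ** 2
    return cost

# Gradient descent on the program's parameter, with the gradient taken
# numerically (central finite differences) instead of via autodiff.
gain, lr, eps = 0.1, 1e-4, 1e-5
for _ in range(200):
    grad = (run_episode(gain + eps) - run_episode(gain - eps)) / (2 * eps)
    gain -= lr * grad

print(f"tuned gain: {gain:.3f}, cost: {run_episode(gain):.2f}")
```

The tuned program is still an ordinary, inspectable controller – only its gain was learned.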
Bayesian Machine Learning: Bayesian machine learning provides a mathematically sound framework for encoding prior or expert knowledge into machine learning systems. While Bayesian methods originated in statistics, the statistical learning community has adopted and further extended these ideas. In combination with XAI, Bayesian learning could open the door to encoding human knowledge into complex machine learning algorithms and thereby improving their learning efficiency.
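The simplest instance of encoding expert knowledge this way is a conjugate Gaussian update for a single regression slope: the expert's belief becomes the prior, and a handful of observations refine it in closed form. All numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: an expert believes the effect of the feature is
# roughly 2.0; we encode this as a Gaussian prior and update it with a
# few noisy observations (true slope 2.5).
x = rng.uniform(0, 1, size=5)
y = 2.5 * x + 0.1 * rng.standard_normal(5)

prior_mean, prior_var = 2.0, 0.5 ** 2   # expert knowledge as a prior
noise_var = 0.1 ** 2

# Closed-form posterior for a Gaussian prior + Gaussian likelihood:
# precisions add, and the posterior mean is a precision-weighted blend
# of the prior mean and the data evidence.
post_var = 1.0 / (1.0 / prior_var + (x @ x) / noise_var)
post_mean = post_var * (prior_mean / prior_var + (x @ y) / noise_var)

print(f"posterior slope: {post_mean:.2f} +/- {post_var ** 0.5:.2f}")
```

With only five observations the posterior already sits near the true slope while the uncertainty shrinks well below the prior's – this blending of expert belief and data is what we hope to scale to complex learners.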
- Seitz, S. (2021) - Mixtures of Gaussian Processes for regression under multiple prior distributions (arXiv preprint)
- Seitz, S. (2021) - Self-explaining variational posterior distributions for Gaussian Process models (arXiv preprint)
- Ute Schmid (2020): Die Dritte Welle der KI - Vom rein datengetriebenem Blackbox Lernen zu interaktiven und erklärbaren Ansätzen (The Third Wave of AI: from purely data-driven black-box learning to interactive and explainable approaches)
Online presentation at the VDI, Verein Deutscher Ingenieure (Association of German Engineers) (https://www.vdi.de/veranstaltungen/detail/die-dritte-welle-der-ki-vom-rein-datengetriebenem-blackbox-lernen-zu-interaktiven-und-erklaerbaren-ansaetzen)
- Sarem Seitz (2020): Wie lernen Maschinen? – Ein- und Ausblicke in aktuelle Erkenntnisse aus der KI-Forschung (How do machines learn? Insights into and outlooks on current findings from AI research)
Presentation at Total Digital 2020, Coburg (https://www.zukunftcoburgdigital.de/total-digital-coburger-digital-tage)
- Ute Schmid (2020): Workshop Dependable AI at KI2020
- Sarem Seitz (2020): Dependable AI under heavy tailed distributions
Short presentation at KI2020, Workshop Dependable AI
- Sarem Seitz (2019): Deep Learning – Das next big thing oder die nächste große Blase? (Deep Learning: the next big thing or the next big bubble?)
Presentation at Total Digital 2019, Coburg (https://www.zukunftcoburgdigital.de/total-digital-coburger-digital-tage)
- Felix Schweinfest (2021): Comparing the Performance of Memory Augmented Neural Networks in Reinforcement Learning
- Statistical Machine Learning (summer term 2021)
Joint seminar between the Chair of Statistics and Econometrics (https://www.uni-bamberg.de/en/stat-oek/) and the Chair of Cognitive Systems
- Zertifizierung von KI-Systemen (Certification of AI Systems) (2020)
Whitepaper with focus on dependable AI systems and the certification thereof (http://plattform-lernende-systeme.de/files/Downloads/Publikationen/AG1_3_Whitepaper_Zertifizierung_KI_Systemen.pdf)