BMBF Project DIS-Lab (Dependable Intelligent Systems)
- Project Lead: Prof. Dr. Diedrich Wolter, Prof. Dr. Ute Schmid (CogSys Team), Prof. Dr. Gerald Lüttgen, Prof. Dr. Michael Mendler
- Researchers (CogSys Team): Sarem Seitz
- Main research topic (CogSys Team): Explainable Reinforcement Learning
- Additional information: https://www.uni-bamberg.de/en/sme/research/disl/
- Duration: 24 months
Description
The primary research focus of the CogSys Team in this project is the development of novel methods for explainable reinforcement learning. While Explainable AI (XAI) has seen a sharp rise in popularity for supervised and unsupervised machine learning tasks, it is still much less established in the reinforcement learning domain. However, given that autonomously and intelligently acting machines are the ultimate goal of reinforcement learning, and given their potential future impact, we expect an increasing need for transparent methods in this area of AI as well.
As of today, the dangers of non-transparent AI are already apparent and are frequently experienced, in particular by disadvantaged members of our society. In addition, safety guarantees for intelligent agents, such as self-driving cars or robots, are currently very hard or even impossible to obtain, let alone guarantees of equal safety for all societal groups. A thorough understanding of an agent's rationale will thus be indispensable for ensuring fair and safe behavior of advanced AI in the future.
Besides the ability to validate the safety, fairness, and rationality of explainable agents, we are also interested in leveraging their transparency to improve and accelerate the learning process. Since reinforcement learning currently requires large amounts of data or long-running simulations in artificial environments, we aim to provide relevant human knowledge to an agent that might otherwise take much longer to acquire such knowledge itself.
Figure: Workflow of Explainable Reinforcement Learning
Methodology
The following list provides an overview of methods we are considering:
Transferring existing XAI methods to reinforcement learning: While there are important differences between (un-)supervised machine learning and reinforcement learning, there are also many parallels connecting the two domains. We therefore see it as a logical step to take a closer look at existing methods like LIME or SHAP and examine their applicability to reinforcement learning (a first minimal sketch follows after this list).
Improving inherently transparent methods: We already have access to transparent methods developed over many decades of machine learning research. Commonly known frameworks like linear regression and decision tree models from statistical learning, or inductive logic approaches from symbolic AI, all share a considerable degree of transparency. However, it is commonly agreed that these methods in their original form struggle with today's complex vision, text, and large-scale data when compared to popular deep learning frameworks. Nevertheless, we see great potential in advancing these methods so that they can solve such tasks while remaining understandable to humans (see the second sketch below).
Differentiable Programming: The term was initially coined half-jokingly by Yann LeCun in reference to the complexity of modern deep learning algorithms. Today's successful machine learning models do, however, look more and more like actual computer programs turned end-to-end differentiable. Automatic differentiation frameworks like TensorFlow or PyTorch allow abstract programs to be expressed mathematically and their parameters to be fine-tuned in a machine learning sense. This opens up the option of writing highly transparent programs that a human can understand and improve upon (see the third sketch below).
Bayesian Machine Learning: Bayesian machine learning provides a mathematically sound framework for encoding prior or expert knowledge into machine learning systems. While Bayesian methods originated in statistics, the statistical learning community has adapted and further extended these ideas. In combination with XAI, Bayesian learning could open the door to encoding human knowledge into complex machine learning algorithms in order to improve their learning efficiency (see the fourth sketch below).
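To make the first item concrete, here is a minimal LIME-style sketch (not project code): a black-box policy's score for one action is explained locally by fitting a distance-weighted linear surrogate around a single state. The policy, the state, and all hyperparameters are illustrative assumptions.

```python
# Minimal LIME-style local surrogate for one decision of an RL policy.
# `policy` is any function mapping a state vector to per-action scores
# (e.g. Q-values); the toy policy below is purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge

def explain_action(policy, state, action, n_samples=1000, sigma=0.1, kernel_width=0.75):
    rng = np.random.default_rng(0)
    # 1. Perturb the state of interest with Gaussian noise.
    samples = state + sigma * rng.standard_normal((n_samples, state.size))
    # 2. Query the black-box policy for the chosen action's score.
    scores = np.array([policy(s)[action] for s in samples])
    # 3. Weight perturbed samples by their proximity to the original state.
    distances = np.linalg.norm(samples - state, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # 4. The weighted linear fit's coefficients are local feature attributions.
    surrogate = Ridge(alpha=1.0).fit(samples, scores, sample_weight=weights)
    return surrogate.coef_

# Toy policy: action 1's score grows with the first state feature.
toy_policy = lambda s: np.array([-s[0], s[0] + 0.5 * s[1]])
print(explain_action(toy_policy, np.array([1.0, 0.0]), action=1))  # ~[1.0, 0.5]
```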
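The second item can be illustrated by distilling a policy into a transparent surrogate: a shallow decision tree is fit to state-action pairs collected from a black-box policy, and its rules are printed in full. Again, the policy and the feature names are hypothetical placeholders.

```python
# Distilling a black-box policy into a readable decision tree (a sketch).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(5000, 2))        # sampled environment states
toy_policy = lambda s: int(s[0] + 0.5 * s[1] > 0)  # black-box action choice
actions = np.array([toy_policy(s) for s in states])

# A shallow tree keeps the surrogate small enough to read end to end.
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print("fidelity:", tree.score(states, actions))
print(export_text(tree, feature_names=["position", "velocity"]))
```

The fidelity score reports how faithfully the transparent surrogate reproduces the black-box decisions, which is the quantity to watch in such distillation approaches.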
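For the third item, a minimal differentiable-programming sketch: an ordinary, readable control rule is written as code, and its single interpretable parameter is tuned end to end by gradient descent in PyTorch. The task (a threshold rule with target 0.3) is an illustrative assumption.

```python
import torch

threshold = torch.tensor(0.0, requires_grad=True)  # one human-readable parameter

def controller(x):
    # Soft "if x > threshold then act" rule; the sigmoid keeps it differentiable.
    return torch.sigmoid(10.0 * (x - threshold))

xs = torch.linspace(-1, 1, 200)
targets = (xs > 0.3).float()  # desired behavior: fire above 0.3
optimizer = torch.optim.Adam([threshold], lr=0.05)

for _ in range(500):
    optimizer.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy(controller(xs), targets)
    loss.backward()
    optimizer.step()

print(f"learned threshold: {threshold.item():.2f}")  # converges towards 0.3
```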
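Finally, for the fourth item, a sketch of encoding expert knowledge as a Bayesian prior: conjugate Bayesian linear regression in which the prior mean expresses the belief "the slope is roughly 2", so that even very few observations yield a sensible posterior. All numbers are illustrative.

```python
# Conjugate Bayesian linear regression with an informative expert prior.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))  # deliberately few observations
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(5)

noise_var = 0.1 ** 2
m0 = np.array([2.0])     # expert belief: slope near 2
S0 = np.array([[0.25]])  # prior uncertainty around that belief

# Standard conjugate update (cf. Bishop, PRML, eqs. 3.50 and 3.51).
SN_inv = np.linalg.inv(S0) + X.T @ X / noise_var
SN = np.linalg.inv(SN_inv)
mN = SN @ (np.linalg.inv(S0) @ m0 + X.T @ y / noise_var)

print("posterior slope mean:", mN[0], "posterior variance:", SN[0, 0])
```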
Publications
- Seitz, S. (2021) - Mixtures of Gaussian Processes for regression under multiple prior distributions (arXiv preprint)
- Seitz, S. (2021) - Self-explaining variational posterior distributions for Gaussian Process models (arXiv preprint)
Presentations
- Ute Schmid (2021): This Is What It Looks Like: "Explainable and Controllable AI"
Opening keynote at the symposium „Digitalisierung souverän gestalten“, Institut für Innovation und Technik (https://www.iit-berlin.de/symposium-digitalisierung-souveraen-gestalten-ii/)
- Ute Schmid (2021): Introduction to Artificial Intelligence -- and How AI Systems Can End Up with a Gender Bias
Presentation at Datev DigiCamp (https://www.datev.de/web/de/aktuelles/veranstaltungen/datev-digicamp/)
- Ute Schmid (2021): Introduction to AI and Why AI Systems Can Have a Gender Bias
Presentation at Intel WIN Conference
- Ute Schmid (2021): From Data-Driven Black-Box Learning to Explainable and Interactive Machine Learning for Human-Centered AI
Presentation at the 14. Wissenschaftstag der Metropolregion Nürnberg (https://www.forschung-innovation-bayern.de/veranstaltungen/14-wissenschaftstag-metropolregion-nuernberg/)
- Ute Schmid (2021): Comprehensibility of AI Systems from a Technical Perspective
Presentation at the Verbraucherrechtstage 2021 of the Bundesministerium der Justiz und für Verbraucherschutz (https://www.bmjv.de/SharedDocs/Downloads/ZB3/Verbraucherrechtstage2021-Programm_de.pdf)
- Ute Schmid (2021): The Third Wave of Artificial Intelligence -- From Blackbox Machine Learning to Explanation-Based Cooperation
Presentation at International Conference on Innovations for Community Services (I4CS) (https://www.eah-jena.de/i4cs-conference)
- Ute Schmid (2021): Bringing the human in the loop with explanatory and interactive approaches to machine learning
Presentation at AI Knowledge Snacks, Telekom AI Action Week 2021
- Ute Schmid (2021): Expert Update: Explanation Wanted! Explainable AI: Requirements for the Explainability of AI Systems and Possible Solutions
Presentation at the BMWi Forum Digitale Technologien (https://www.digitale-technologien.de/DT/Redaktion/DE/Videos/KI-Inno/20210428-Experten-Update.html)
- Ute Schmid (2021): The Third Wave of Artificial Intelligence -- From Blackbox Machine Learning to Explanation-Based Cooperation
Presentation at AI Colloquium, Association for AI in Science and Industry (AAISI) (https://www.aaisi.org/category/ai_speakers/)
- Ute Schmid (2021): The Third Wave of Artificial Intelligence -- From Blackbox Machine Learning to Explanatory and Interactive Learning
Keynote at syngo Developer Conference, Siemens Healthineers
- Ute Schmid (2020): The Third Wave of AI -- From Purely Data-Driven Black-Box Learning to Interactive and Explainable Approaches
E-Lecture at Bitkom AI Research Network (https://www.youtube.com/watch?v=SkZJc4KAFe8)
- Ute Schmid (2020): Transparent, Robust, and Comprehensible -- Requirements for Explainable Machine Learning
Keynote at Technologieforum Empowering Sensors, Fraunhofer IIS (https://www.sensorik-bayern.de/sensorik-news/artikel/technologieforum-empowering-sensors-entwicklung-im-bereich-se)
- Ute Schmid (2020): The Third Wave of AI -- From Purely Data-Driven Black-Box Learning to Interactive and Explainable Approaches
Online presentation at the VDI (Verein Deutscher Ingenieure) (https://www.vdi.de/veranstaltungen/detail/die-dritte-welle-der-ki-vom-rein-datengetriebenem-blackbox-lernen-zu-interaktiven-und-erklaerbaren-ansaetzen)
- Sarem Seitz (2020): How Do Machines Learn? Insights and Outlooks on Current Findings from AI Research
Presentation at Total Digital 2020, Coburg (https://www.zukunftcoburgdigital.de/total-digital-coburger-digital-tage)
- Ute Schmid (2020): Workshop Dependable AI at KI2020
(https://ki2020.uni-bamberg.de/workshops.html#W5)
- Sarem Seitz (2020): Dependable AI under heavy tailed distributions
Short presentation at KI2020, Workshop Dependable AI
- Sarem Seitz (2019): Deep Learning -- The Next Big Thing or the Next Big Bubble?
Presentation at Total Digital 2019, Coburg (https://www.zukunftcoburgdigital.de/total-digital-coburger-digital-tage)
Supervised Theses
- Felix Schweinfest (2021): Comparing the Performance of Memory Augmented Neural Networks in Reinforcement Learning
Further Activities
- Statistical Machine Learning (summer term 2021)
Joint seminar between the Chair of Statistics and Econometrics (https://www.uni-bamberg.de/en/stat-oek/) and the Chair of Cognitive Systems
- Zertifizierung von KI-Systemen (Certification of AI Systems) (2020)
Whitepaper focusing on dependable AI systems and their certification (http://plattform-lernende-systeme.de/files/Downloads/Publikationen/AG1_3_Whitepaper_Zertifizierung_KI_Systemen.pdf)