BMBF-Project hKI-Chemie

Project Description

The chemical industry is characterized by complex processes, for example in materials logistics and in the parameterization of manufacturing systems, whose efficiency and effectiveness are crucial for economic success. Automating these processes is difficult for two main reasons. First, companies need to react flexibly to a wide range of situations, and today's AI systems cannot always guarantee this flexibility. Second, industrial processes often involve a tremendous number of parameters, which makes the automation task hard even to understand.

Humans are capable of reacting quickly to changing situations and are therefore superior to today's AI systems with respect to the first problem. AI systems, on the other hand, can abstract information with enormous efficiency, so the second problem can be expected to be solved far more efficiently by AI systems than by humans.

The interaction between humans and AI systems thus offers an enormous opportunity for numerous industrial processes. The goal of the project is therefore to utilize AI systems in such a way that they enable people to make better decisions in complex situations. The intention is not to replace humans with AI systems, but to develop combined human-AI systems that perform better than either component alone.

The project's developments are discussed closely with social partners, such as the industry partners' labor unions, to ensure that the AI systems serve the benefit of humans.


Project Partners

The project partners can be divided into research partners and industry partners. The research partners are the Chair of Distributed Systems and the Chair of Psychological Research Methods at the University of Duisburg-Essen, as well as the Fraunhofer IIS Project Group Comprehensible Artificial Intelligence. The latter is headed by Prof. Dr. Ute Schmid, who is also the IIS project lead together with Dr. Stephan Scheele. Emanuel Slany, M.Sc., is a doctoral candidate at the Chair of Cognitive Systems, headed by Prof. Dr. Ute Schmid, and is responsible for the IIS developments within the project.

The industrial partners are Continental Automotive GmbH, Evonik Industries AG, InfraServ Wiesbaden Technik GmbH & Co. KG, Boldly Go Industries GmbH, RheinByteSystems GmbH, and VDI Technologiezentrum.


Publications, Talks, Code, and Student Projects

Publications

Schmid, U., Slany, E., Scheele, S. (2023). Understanding the Why and How of Trustworthy AI. Smart Sensing Insights Blog. websites.fraunhofer.de/smart-sensing-insights/trustworthy-ai/

Heidrich, L., Slany, E., Scheele, S., Schmid, U. (2023). FairCaipi: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction. Machine Learning and Knowledge Extraction, 5, 1519-1538. doi.org/10.3390/make5040076

Slany, E., Scheele, S., Schmid, U. (2023). Bayesian CAIPI: A Probabilistic Approach to Explanatory Interactive Machine Learning. ECAI 2023 Workshop Proceedings, European Conference on Artificial Intelligence, to appear.

Atzmueller, M., Fürnkranz, J., Kliegr, T., Schmid, U. (2023). Special Issue on Explainable and Interpretable Machine Learning and Data Mining. Data Mining and Knowledge Discovery.

Schmid, U. (2023). Trustworthy Artificial Intelligence – Comprehensible, Transparent, Correctable. In: H. Werthner, C. Ghezzi, J. Kramer, J. Nida-Rümelin, B. Nuseibeh, E. Prem, A. Stanger (Eds.): Introduction to Digital Humanism. Springer.

Slany, E., Ott, Y., Scheele, S., Paulus, J., Schmid, U. (2022). CAIPI in Practice: Towards Explainable Interactive Medical Image Classification. In: Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P. (eds) Artificial Intelligence Applications and Innovations. AIAI 2022 IFIP WG 12.5 International Workshops. AIAI 2022. IFIP Advances in Information and Communication Technology, vol 652. Springer, Cham. doi.org/10.1007/978-3-031-08341-9_31

Wirth, C., Schmid, U., Voget, S. (2022). Humanzentrierte Künstliche Intelligenz: Erklärendes interaktives maschinelles Lernen für Effizienzsteigerung von Parametrieraufgaben. In: Hartmann, E.A. (eds) Digitalisierung souverän gestalten II. Springer Vieweg, Berlin, Heidelberg. doi.org/10.1007/978-3-662-64408-9_7

Schmid, U., Wrede, B. (2022). What is Missing in XAI So Far? Künstliche Intelligenz, 36(3), 303-315.

Schmid, U., Wrede, B. (2022). Special Topic Explainable AI. Künstliche Intelligenz, 36(3).

Schwalbe, G., Wirth, C., Schmid, U. (2022). Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings. KI 2022, Workshop on Robust AI in High-Stakes Applications, Sept. 19-23, LNI.

Finzel, B., Schmid, U. (2022). Context-aware XAI Methods for Joint Human-AI Problem Solving. Symposium Flexible Representations for Human(-like) Problem Solving, KogSys 2022, Freiburg, 5.-7.9.2022.

Ai, L., Muggleton, S., Schmid, U., Hocquette, C., Gromowski, M., Langer, J. (2022). Explanatory effects of machine learned logic theories. FutureAI-2022: Towards the Future of AI, AI Network of Excellence, Imperial College London, June 7, 2022.

Herchenbach, M., Müller, D., Scheele, S., Schmid, U. (2022). Explaining Image Classifications with Near Misses, Near Hits and Prototypes - Supporting Domain Experts in Understanding Decision Boundaries. ICPRAI 2022, Springer.

Wehner, C., Altakrouri, B., Powlesland, F., Schmid, U. (2022). Explainable Online Lane Change Predictions on a Digital Twin with a Layer Normalized LSTM and LRP. 35th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA AIE 2022, July 19-22, Kitakyushu, Japan), 621-632. Springer.

Müller, D., März, M., Scheele, S., Schmid, U. (2022). An Interactive Explanatory AI System for Industrial Quality Control. Thirty-Fourth Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-22), February 22 - March 1, 2022, virtual (co-located with the 36th AAAI Conference on Artificial Intelligence).

Schmid, U. (2022). Vertrauenswürdige Künstliche Intelligenz: Nachvollziehbar, Transparent, Korrigierbar. In F. Rostalski (Ed.), Künstliche Intelligenz: Wie gelingt eine vertrauenswürdige Verwendung in Deutschland und in Europa? (pp. 287-298) Tagungsband, Mohr Siebeck.

Ai, L., Muggleton, S., Hocquette, C., Gromowski, M., Schmid, U. (2021). Beneficial and Harmful Explanatory Machine Learning. Machine Learning, 110, 695-721.

Thaler, A., Schmid, U. (2021). Explaining Machine Learned Relational Concepts in Visual Domains: Effects of Perceived Accuracy on Joint Performance and Trust. Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, 1705-1711.

Talks

Slany, E. PHAL: Post-Hoc model Approximation with Logic. Nordic Probabilistic AI Summer School 2023. Trondheim (Norway), 15 June 2023.

Schmid, U. Keynote at AIC. Genoa (Italy), 14-16 December 2023.

Schmid, U. Opening Keynote Trustworthy AI, DAAD Network Meeting, IFI Program. Berlin (Germany), 24 November 2023.

Schmid, U. Keynote at KI 2023 (46th German Conference on Artificial Intelligence): Near-miss Explanations to Teach Humans and Machines. Berlin (Germany), 29 September 2023.

Schmid, U. From the Deep Learning Hype to the Generative AI Buzz – Can we keep control? Keynote Tech Days Munich. Munich (Germany), 26-27 June 2023.

Schmid, U. Explicit and implicit knowledge injection for human-in-the-loop machine learning, Biweekly AI, Porsche digital. 15 February 2023.

Schmid, U. Invited Talk Near-miss Explanations to Teach Humans and Machines, Workshop Machine Teaching for Humans: Rethinking Example-Based Explanations (MT4H-2023). Funchal (Madeira), 12-13 January 2023.

Schmid, U., Slany, E., Scheele, S. Hybrid, Explanatory, Interactive Machine Learning – Towards Human-AI Partnerships. Airbus Tech Talk. 21 November 2022.

Schmid, U. Hybrid, Explanatory, Interactive Machine Learning – Towards Human-AI Partnerships, Explainable AI and Society Lecture Series. Bayreuth/Dortmund/Saarbrücken (Germany), 17 November 2022. https://explainable-intelligent.systems/lectures

Slany, E. Explainable Gaussian Process Regression with Probabilistic Influence and Logic. Heinz-Nixdorf Symposium 2022. Paderborn (Germany), 15 September 2022.

Slany, E. CAIPI in Practice: Towards Explainable Interactive Medical Image Classification. Heinz-Nixdorf Symposium 2022. Paderborn (Germany), 16 September 2022.

Schmid, U. Hybrid, Explanatory, Interactive Machine Learning – Towards Trustworthy Human-AI Partnerships. Keynote at the 8th International Workshop on Artificial Intelligence and Cognition. Örebro (Sweden), 14-17 June 2022.

Schmid, U. Hybrid AI for Comprehensible, Fair, and Robust Machine Learning. 1st French-German Dialogue on Perspectives on Ethics of Artificial Intelligence. Sorbonne Center for Artificial Intelligence (SCAI). Paris (France), 2-3 June 2022.

Schmid, U. Reconciling knowledge-based and data-driven AI for human-in-the-loop machine learning. Colloquium SFB 876. TU Dortmund. Dortmund (Germany), 21 March 2022.

Schmid, U. Human-Centered and Sustainable AI in the Production of the Future (Menschzentrierte und nachhaltige KI in der Produktion der Zukunft). Opening of the Cleantech Innovation Park. Bamberg (Germany), 23 February 2022.

Schmid, U. Hybrid, Explanatory, Interactive Machine Learning – Towards Human-AI Partnerships. Roundtable Next Generation AI, Topic IV: Social Aspects of AI. CAS LMU Next Generation AI. 18 February 2022.

Code and Demonstrators

FairCaipi: A code base that enables end users to interactively train a machine learning model by interacting with its decision-making mechanism, with the objective of reducing human and machine bias. https://github.com/emanuelsla/faircaipi

ExaPlain Demonstrator: A demonstration of the innovative use of explanatory machine learning models for industrial purposes. https://www.iis.fraunhofer.de/de/ff/sse/affective-computing/cai/explainable-ai.html
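The explanatory interactive learning loop underlying tools like FairCaipi (predict, explain, let the user correct the explanation, then augment the training data with counterexamples) can be illustrated with a deliberately tiny sketch. Everything below is a hypothetical toy, not the FairCaipi API: `train`, `explain`, and `counterexamples` are stand-ins that only mimic the counterexample-augmentation idea on a dataset where a spurious feature mirrors the label.

```python
# Toy dataset: feature 0 is the true signal, feature 1 is a spurious
# cue (e.g. a watermark) that also matches the label in the raw data.
def make_data(n):
    return [([y, y], y) for y in [0, 1] * (n // 2)]

def train(data):
    # Stand-in "learner": a per-feature label-agreement score.
    weights = [0.0] * len(data[0][0])
    for x, y in data:
        for i, v in enumerate(x):
            weights[i] += 1.0 if v == y else -1.0
    return weights

def explain(weights):
    # Explanation: the single most influential feature.
    return max(range(len(weights)), key=lambda i: weights[i])

def counterexamples(data, irrelevant_feature):
    # Correction step: the user flags a feature as irrelevant, so each
    # instance is duplicated with that feature set to both values,
    # which cancels its apparent influence after retraining.
    new = []
    for x, y in data:
        for v in (0, 1):
            x2 = list(x)
            x2[irrelevant_feature] = v
            new.append((x2, y))
    return new

data = make_data(20)
print(train(data))        # [20.0, 20.0]: both features look equally predictive

data = data + counterexamples(data, irrelevant_feature=1)
print(explain(train(data)))  # 0: after retraining, the true signal dominates
```

The key design point is that the user never edits model internals; feedback about a wrong *reason* is translated into additional training examples, so any standard learner can be corrected this way.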

Student Projects

Lukas Gernlein (2023). An Explanatory and Interactive Machine Learning Approach for Multi-Label Classification. Master Thesis, supervision by Emanuel Slany and Stephan Scheele.

Felix Hempel (2023). Explainable and Interactive Machine Learning with Counterfactuals and Ordinal Data. Master Thesis, supervision by Emanuel Slany and Stephan Scheele.

Solveig Rabshal (2023). Exploring the Impact of Scale of Measurement on Counterfactual Explanations. Bachelor Thesis, supervision by Emanuel Slany and Stephan Scheele.

Yannik Ott (2022). An explanatory interactive machine learning approach for image classification in medical engineering. Bachelor Thesis, supervision by Emanuel Slany and Ute Schmid.

Oraz Serdarov (2022). Explainable Unsupervised Learning for Fraud Detection. Cooperation with HUK-Coburg, Master Thesis, supervision by Emanuel Slany and Ute Schmid.

We are always looking for students interested in contributing to hKI-Chemie in the form of a thesis, project, or as a student assistant. If you are interested, feel free to contact Emanuel Slany.