TV Oberfranken GmbH & Co. KG

Making Artificial Intelligence Understandable and Trustworthy

Artificial intelligence is transforming research, education, and society at breathtaking speed. In a recent 4you report, four Upper Franconian universities showcased how they are actively shaping this transformation. Among them, our Chair of Explainable Machine Learning at the University of Bamberg demonstrates how transparent, robust, and privacy-preserving AI can build trust and deliver real benefits, especially in medicine and education.

Teaser

Artificial intelligence has long since left the realm of science fiction. As the 4you television report shows, AI tools are already part of everyday academic life, cutting across disciplines from computer science and law to education and medicine. The report brings together perspectives from four Upper Franconian institutions, each highlighting a different facet of the AI revolution: from practical tools for students to fundamental questions of law and ethics.

At the Hochschule Hof, the focus lies on hands-on AI tools that students already use in their daily studies. Chatbots, writing assistants, and image generators are integrated directly into teaching, preparing graduates for a job market where AI literacy will soon be indispensable. The message is clear: those who actively engage with AI today will be better equipped to shape tomorrow.

The University of Bayreuth, by contrast, examines AI from a legal and societal angle. Researchers there explore how technologies such as facial recognition and deepfakes could be used (or misused) in criminal justice. Their work highlights a critical tension: while AI can support law enforcement and courts, its lack of transparency poses serious challenges for fundamental legal principles such as due process and explainability.

At Hochschule Coburg, the spotlight shifts to learning itself. Researchers are investigating whether AI-driven tutors, ranging from text-based systems to immersive virtual reality avatars, can compensate for shortages in academic staff and offer students more personalized support. Early results suggest that interactive AI tutors may significantly enhance engagement and learning outcomes, bringing higher education closer to individualized mentoring at scale.

Against this diverse backdrop, the contribution from the Chair of Explainable Machine Learning (xAILab Bamberg) at the University of Bamberg delivers a unifying perspective: trust. While AI systems are becoming increasingly powerful, public concerns about opacity, loss of control, and data misuse remain high. The Bamberg team directly addresses these concerns by focusing on AI that is not only accurate, but also explainable, robust, and data-efficient.

Professor Christian Ledig emphasizes that explainable AI is not a single technique, but a guiding principle. His research group develops systems that can justify their decisions in ways that are meaningful for different stakeholders, be it doctors, educators, or end users. This transparency is a prerequisite for deploying AI responsibly in sensitive domains such as medicine and education.

Concrete examples from the lab make this vision tangible. Sebastian Dörrich, a PhD candidate at xAILab Bamberg, presented work on medical image analysis in collaboration with the Klinikum Nuremberg. Using deep learning methods, the team develops AI systems that automatically detect and classify stages of gastric mucosal inflammation. What currently requires time-consuming manual examination by medical experts could, in the future, be supported by transparent AI tools, helping clinicians make faster and more informed decisions without replacing their expertise.

Another key challenge in medical AI is data scarcity combined with strict privacy regulations. My Nguyen, also a PhD candidate at xAILab Bamberg, addresses this dilemma with innovative approaches to privacy-preserving learning. Instead of sharing sensitive patient data across hospitals, AI models can learn from distributed data sources and generate synthetic training data that capture rare disease patterns, without ever exposing real patient records. This approach promises more robust AI systems while fully respecting data protection requirements.
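The idea of learning from distributed data without moving patient records can be illustrated with federated averaging, a standard scheme for this kind of privacy-preserving training. The sketch below is illustrative only and not the lab's actual method; the hospital names, gradients, and two-parameter model are hypothetical stand-ins.

```python
# Minimal federated-averaging sketch: each (hypothetical) hospital trains
# locally, and only model parameters -- never raw patient data -- are sent
# to the server for aggregation. All values here are illustrative.

def local_update(weights, local_gradient, lr=0.1):
    """One simulated gradient step computed at a single hospital."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(site_weights):
    """Server-side aggregation: element-wise mean of local models."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

# Global model distributed to three hypothetical sites.
global_model = [0.0, 0.0]

# Each site derives an update from its own private data
# (gradients are mocked here for the sketch).
site_gradients = {
    "hospital_a": [0.2, -0.1],
    "hospital_b": [0.4, 0.1],
    "hospital_c": [0.3, 0.0],
}
local_models = [local_update(global_model, g)
                for g in site_gradients.values()]

# Only the resulting parameters travel back; the server averages them.
global_model = federated_average(local_models)
print(global_model)
```

In a real deployment the local step would be full model training on hospital data, but the communication pattern is the same: parameters move, records do not.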

The 4you report makes one thing unmistakably clear: AI research in Upper Franconia is vibrant, diverse, and forward-looking. Within this landscape, the University of Bamberg’s Chair of Explainable Machine Learning plays a pivotal role by tackling one of the most pressing questions of our time: not just how powerful AI can become, but how understandable and trustworthy it must be to truly serve people.

See the full broadcast here

4you - das Hochschulmagazin: KI in der Strafverfolgung (AI in Law Enforcement)