Chair of Explainable Machine Learning
Research at the Chair focuses on the development of robust, data-efficient machine learning methods, with a particular emphasis on Deep Learning and its versatile applications in industry and healthcare. Our research also encompasses the quantification of uncertainty in classification predictions, the enhancement of interpretability to improve communication with users and patients, and the extensive, task-oriented evaluation of AI models.
A further research priority lies in the reconstruction and quantitative analysis of image data, specifically within the medical domain. Integrating imaging techniques with AI makes it possible to identify anatomical irregularities, supporting clinical staff in the diagnosis and treatment of conditions such as dementia or cancer. A central goal of this research is the transfer of results into industrial and medical contexts, ensuring the responsible development of AI-based solutions within regulated environments. The primary focus remains on making a positive contribution to society and patient care.
Key Research Areas:
- Efficient and Robust AI: Research into data-efficient and resilient machine learning methods.
- AI Evaluation and Regulation: Expertise in the assessment of AI systems and the regulatory frameworks governing them.
- Interpretability and Uncertainty Estimation: Developing methods to improve model transparency and quantify the reliability of AI predictions.
- AI for Quantitative Image Analysis and Decision Support: Developing algorithms for automated image analysis, particularly for clinical decision support systems.
- Generative AI for Image/Video Generation and Reconstruction: Investigating generative techniques for the creation and restoration of visual data across various domains.
- AI-Focused Start-ups: Consulting and supporting AI start-ups, specifically regarding product and technology development.
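As a small illustration of the uncertainty-estimation theme above, the following sketch computes the predictive entropy of a classifier's softmax output, one common way to quantify how uncertain a prediction is. This is a generic, minimal example with made-up logits, not a description of the Chair's specific methods.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Shannon entropy of the class distribution; higher values
    # indicate a less certain prediction.
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Illustrative logits: one confident prediction, one ambiguous one.
confident = softmax(np.array([8.0, 0.5, 0.2]))
ambiguous = softmax(np.array([1.0, 0.9, 1.1]))

print(predictive_entropy(confident) < predictive_entropy(ambiguous))  # → True
```

In practice such scores can be calibrated and thresholded so that highly uncertain cases are flagged for human review, which connects uncertainty quantification to clinical decision support.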