Theses
Bachelor's and Master's theses are offered across the chair's various research areas. Specific topics are usually proposed by the chair (see below) or defined in collaboration with the student.
To write a thesis at the Chair of Explainable Machine Learning, the following prerequisites generally apply:
- successful examination in a module with lecture and exercise in Deep Learning (for Master's theses), Machine Learning, or Introduction to AI
- successful participation in a seminar or project offered by the chair
Open Theses
Please refer to VC [Link] for further details.
Ongoing Theses
- "Unveiling CNN Layer Contributions: Application of Feature Visualization in Medical Image Classification Tasks" - Jonida Mukaj, supervised by Ines Rieger
- "Evaluation and feasibility of selected data-driven Machine Learning approaches for Production Planning to enhance Order Sequencing and to improve OEE in Manufacturing" - Nicolai Christian Frosch, supervised by Christian Ledig
- Markus Brücklmayr - supervised by Christian Ledig
- Erik-Jonathan Schmidt, in collaboration with Stabilo - supervised by Christian Ledig
- Aaron-Lukas Pieger, in collaboration with Stabilo - supervised by Christian Ledig
- Junquan Pan - supervised by Christian Ledig
- Pascal Cezanne - supervised by Sebastian Dörrich
- Michael Sebastian Schick - supervised by Sebastian Dörrich
- Julius Stutz - supervised by Sebastian Dörrich
- Marius Ludwig Bachmeier - supervised by Sebastian Dörrich
- Peiyao Mao - supervised by Francesco Di Salvo
Completed Theses
"Generative Data Augmentation in the Embedding Space of Vision Foundation Models to Address Long-Tailed Learning and Privacy Constraints" - David Elias Tafler
Author: David Elias Tafler, supervised by Francesco Di Salvo
This thesis explores the potential of generative data augmentation in the embedding space of vision foundation models, aiming to address the challenges of long-tailed learning and privacy constraints. Our work leverages Conditional Variational Autoencoders (CVAEs) to enrich the representation space for underrepresented classes in highly imbalanced datasets and to enhance data privacy without compromising utility. We develop and assess methods that generate synthetic data embeddings conditioned on class labels, which both mimic the distribution of original data for privacy purposes and augment data for tail classes to balance datasets. Our methodology shows that embedding-based augmentation can effectively improve classification accuracy in long-tailed scenarios by increasing the diversity and volume of minor class samples. Additionally, we demonstrate that our approach can generate data that maintains privacy through effective anonymization of embeddings. The outcomes suggest that generative augmentation in embedding spaces of foundation models offers a promising avenue for enhancing model robustness and data security in practical applications. The findings have significant implications for deploying machine learning models in sensitive domains, where data imbalance and privacy are critical concerns.
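The thesis itself is linked below; as an illustration of the general technique only (not the author's exact implementation), a conditional VAE over pre-computed embedding vectors could be sketched in PyTorch as follows. All dimensions, layer sizes, and names here are assumptions:

    import torch
    import torch.nn as nn

    class CVAE(nn.Module):
        """Conditional VAE over fixed-length embeddings, not raw images (illustrative sketch)."""
        def __init__(self, emb_dim=768, num_classes=10, latent_dim=64, hidden=256):
            super().__init__()
            self.latent_dim = latent_dim
            self.label_emb = nn.Embedding(num_classes, 32)
            self.encoder = nn.Sequential(nn.Linear(emb_dim + 32, hidden), nn.ReLU())
            self.fc_mu = nn.Linear(hidden, latent_dim)
            self.fc_logvar = nn.Linear(hidden, latent_dim)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim + 32, hidden), nn.ReLU(), nn.Linear(hidden, emb_dim)
            )

        def forward(self, x, y):
            c = self.label_emb(y)
            h = self.encoder(torch.cat([x, c], dim=1))
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
            return self.decoder(torch.cat([z, c], dim=1)), mu, logvar

        @torch.no_grad()
        def sample(self, y):
            """Generate synthetic embeddings conditioned on the given class labels."""
            z = torch.randn(len(y), self.latent_dim)
            return self.decoder(torch.cat([z, self.label_emb(y)], dim=1))

    def vae_loss(x, x_hat, mu, logvar, beta=1.0):
        recon = nn.functional.mse_loss(x_hat, x)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + beta * kld

    # Usage: oversample a (hypothetical) tail class with fresh synthetic embeddings.
    # model = CVAE(); tail = model.sample(torch.full((128,), 3, dtype=torch.long))

Because such a model operates on low-dimensional embeddings rather than raw images, training and sampling are cheap, and synthetic tail-class embeddings can be mixed into the training set of a lightweight classifier head.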
Link to thesis (8.8 MB)
"CNN-based Classification of I-123 ioflupane dopamine transporter SPECT brain images to support the diagnosis of Parkinson’s disease with Decision Confidence Estimation"- Aleksej Kucerenko
Autor: Aleksej Kucerenko, betreut von Prof. Dr. Christian Ledig and Dr. Ralph Buchert
Parkinson's disease (PD) is a prevalent neurodegenerative condition posing significant challenges to individuals and societies alike, and it is anticipated to become a growing burden on healthcare systems as populations age. The differentiation between PD and secondary parkinsonian syndromes (PS) is crucial for effective treatment, yet it remains challenging, particularly in cases of clinically uncertain parkinsonian syndromes (CUPS). Dopamine transporter single-photon emission computed tomography (DAT-SPECT) is a widely used diagnostic tool for PD, offering high accuracy but also presenting interpretational challenges, especially in borderline cases.
This study aims to develop reliable automated classification methods for DAT-SPECT images, particularly targeting inconclusive cases, which may be misclassified by conventional approaches. Convolutional neural networks (CNNs) are investigated as promising tools for this task. The study proposes a novel performance metric, the area under the balanced accuracy (AUC-bACC) over the percentage of inconclusive cases, to compare the performance of CNN-based methods with benchmark approaches (SBR and Random Forest). A key focus is the training label selection strategy, comparing majority vote training (MVT) with random label training (RLT), which aims to expose the model to the uncertainty inherent in borderline cases. The study evaluates the methods on internal and external testing datasets to assess generalizability and robustness.
The research was conducted in collaboration with the University Medical Center Hamburg-Eppendorf (UKE). The dataset utilized for model training originated from clinical routine at the Department of Nuclear Medicine, UKE. The attached figure showcases augmented versions of two sample cases from the dataset: a healthy control case ('normal') and a Parkinson's disease case ('reduced') with reduced availability of DAT in the striatum.
The study addresses the need for reliable and automated classification of DAT-SPECT images, providing insights into improving diagnostic accuracy, reducing the burden of misclassifications, and minimizing the manual inspection effort.
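The exact metric definition is given in the thesis; as a rough sketch, assuming that 'inconclusive' cases are those with the lowest model confidence and that balanced accuracy is recomputed as an increasing fraction of them is set aside, AUC-bACC could be computed like this:

    import numpy as np
    from sklearn.metrics import auc, balanced_accuracy_score

    def auc_bacc(y_true, y_pred, confidence, steps=21):
        """Area under the balanced-accuracy curve as a growing percentage of the
        least-confident cases is set aside as 'inconclusive' (an assumed reading
        of the thesis' AUC-bACC metric, not its verbatim definition)."""
        y_true, y_pred, confidence = map(np.asarray, (y_true, y_pred, confidence))
        order = np.argsort(confidence)        # least confident first
        fracs = np.linspace(0.0, 0.5, steps)  # up to 50% marked inconclusive
        baccs = []
        for f in fracs:
            keep = order[int(f * len(y_true)):]  # drop the least-confident fraction f
            baccs.append(balanced_accuracy_score(y_true[keep], y_pred[keep]))
        return auc(fracs, np.array(baccs))

    # Example with synthetic data: confidence could be the CNN's maximum softmax probability.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)
    conf = rng.random(200)
    pred = np.where(conf > 0.3, y, rng.integers(0, 2, 200))  # more accurate when confident
    print(auc_bacc(y, pred, conf))

A metric of this shape rewards models whose confidence estimates are well calibrated: balanced accuracy on the retained cases should rise as more borderline cases are deferred.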
Link to thesis (12.5 MB)
"Benchmarking selected State-of-the-Art Baseline Neural Networks for 2D Biomedical Image Classification, Inspired by the MedMNIST v2 Framework" - Julius Brockmann
Author: Julius Brockmann, supervised by Sebastian Dörrich
This thesis examines the benchmarking of state-of-the-art baseline neural networks in the field of 2D biomedical image classification. Focusing on the effectiveness of deep learning models on high-quality medical databases, the study employs pre-trained baseline networks to establish benchmarks. The research investigates four convolutional neural networks and one transformer-based architecture, exploring how changes in image resolution affect performance. The findings highlight the advanced capabilities of newer convolutional networks and demonstrate the effectiveness of transformer architectures for handling large datasets. Common misclassifications and their causes are also briefly analyzed, offering insights into potential areas for improvement in future studies.
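As a purely illustrative sketch of such a resolution-sweep benchmarking harness (the concrete datasets, backbones, and class counts used in the thesis are not reproduced here; the stand-in data below is random), the evaluation loop could be structured like this:

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import models
    from torchvision.transforms import functional as TF

    def evaluate(model, loader, device="cpu"):
        """Top-1 accuracy of a classifier over a DataLoader."""
        model.eval().to(device)
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(dim=1).cpu()
                correct += (pred == y).sum().item()
                total += y.numel()
        return correct / total

    # Random stand-in data; swap in a MedMNIST-style dataset at each native resolution.
    num_classes = 9
    for res in (28, 64, 128, 224):
        x = TF.resize(torch.rand(32, 3, res, res), [224, 224])  # match backbone input size
        y = torch.randint(0, num_classes, (32,))
        loader = DataLoader(TensorDataset(x, y), batch_size=8)
        model = models.resnet18(weights=None, num_classes=num_classes)
        print(f"{res}x{res}: accuracy {evaluate(model, loader):.2f}")

Holding the network input size fixed while varying the native dataset resolution isolates how much discriminative detail survives the original downsampling, which is one plausible way to read the resolution comparison described above.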
Link to thesis (8.4 MB)
"Development of a dataset and AI-based proof-of-concept algorithm for the classification of digitized whole slide images of gastric tissue"- Tom Hempel
Autor: Tom Hempel, betreut von Prof. Dr. Christian Ledig
The thesis focuses on the development of a dataset and AI algorithms for classifying digitized whole slide images (WSIs) of gastric tissue. It details the creation and meticulous annotation of the dataset, which is crucial for effectively training the AI. The process involved gathering, anonymizing, and annotating a vast array of WSIs, aimed at building robust AI models that can accurately classify different regions of the stomach and identify inflammatory conditions.
Two AI models were developed, one for assessing gastric regions and another for inflammation detection, achieving high accuracy in areas like the antrum and corpus but facing challenges with intermediate regions due to dataset limitations and the specificity of training samples.
The challenges encountered during the dataset creation, such as data collection and the necessity for detailed annotation to ensure data integrity and privacy, highlight the complexity of this research.
The dataset and initial models serve as a foundation for further research by Philipp Andreas Höfling in his master's thesis, which aims to refine these AI algorithms and enhance their utility in medical diagnostics.
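The summary above does not spell out the inference pipeline; a common pattern for WSI classification, shown here as an assumption-laden sketch rather than the thesis' actual method, is to tile the slide into patches, classify each patch with a CNN, and aggregate the per-patch probabilities:

    import numpy as np
    import torch
    from openslide import OpenSlide  # requires openslide-python

    def iter_patches(slide_path, patch=512, level=0):
        """Yield RGB patches tiled across a whole slide image."""
        slide = OpenSlide(slide_path)
        w, h = slide.level_dimensions[level]
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                region = slide.read_region((x, y), level, (patch, patch)).convert("RGB")
                yield np.array(region)

    def classify_slide(slide_path, model, device="cpu"):
        """Slide-level prediction from averaged per-patch class probabilities."""
        model.eval().to(device)
        probs = []
        with torch.no_grad():
            for patch in iter_patches(slide_path):
                t = torch.from_numpy(patch).permute(2, 0, 1).float().div(255).unsqueeze(0)
                probs.append(torch.softmax(model(t.to(device)), dim=1).cpu())
        return torch.cat(probs).mean(dim=0)  # e.g. probabilities for antrum / corpus / other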
Link to thesis (5.1 MB)
"Component ldentification for Geometrie Measurements in the Vehicle Development Process Using Machine Learning" - Tobias Koch
Autor: Tobias Koch, betreut von Prof. Dr. Christian Ledig
Geometric measurements are performed frequently along the virtual vehicle development chain to monitor and confirm the fulfillment of dimensional requirements for purposes such as safety and comfort. The current manual measuring process lacks comparability and consistent quality and incurs high time and cost expenditure because it is repeated across different departments, engineers, and vehicle projects.
Motivated by this, the thesis presents an automated approach to component identification, leveraging the power of Machine Learning (ML) in combination with rule-based filters. It streamlines the geometric measurement process by classifying vehicle components as relevant or not and assigning uniformly coded designations. To determine the most effective approach, the study compares various ML models regarding performance and training complexity, including Light Gradient-Boosting Machines (LightGBMs), eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Feedforward Neural Networks (FNNs).
The results indicate that the integration of ML models can substantially improve the geometric measurement process in virtual vehicle development. LightGBM and CatBoost in particular proved to be the most capable models for this task and offer promising progress in the virtual development of vehicles.
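As a purely illustrative comparison (the features below are synthetic stand-ins; the thesis' actual feature engineering and rule-based filters are not shown), LightGBM and CatBoost can be compared under identical cross-validation like this:

    from catboost import CatBoostClassifier
    from lightgbm import LGBMClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for tabular component metadata (geometry/naming features).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    candidates = {
        "LightGBM": LGBMClassifier(n_estimators=200, random_state=0),
        "CatBoost": CatBoostClassifier(iterations=200, verbose=0, random_state=0),
    }
    for name, clf in candidates.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
        print(f"{name}: F1 = {scores.mean():.3f} +/- {scores.std():.3f}")

Gradient-boosted trees are a natural fit for this kind of mixed tabular metadata, which is consistent with the finding that LightGBM and CatBoost performed best.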
Link to the thesis (3.6 MB)
Further Completed Theses
- "Reproduction of Selected State-of-the-Art Methods for Anomaly Detection in Time Series Using Generative Adversarial Networks" - Anastasia Sinitsyna, supervised by Ines Rieger
- "Addressing Continual Learning and Data Privacy Challenges with an explainable kNN-based Image Classifier" (18.9 MB) - Tobias Archut, supervised by Sebastian Dörrich