Location: An der Weberei 5, WE5/00.022
ICS Lecture: Ensuring Responsible AI Development
As AI technologies become more entangled with our daily lives, ensuring their responsible development is essential.
In her lecture, Prof. Dr. Simone Stumpf from the University of Glasgow explored what it takes to build Responsible AI systems that are safe, trustworthy, and fair.
Drawing on her work in interactive and explainable AI, Prof. Stumpf emphasized that responsibility in AI isn't just about technology - it's about people. Her talk centered on three pillars:
- 🔍 Accountability & Correctness: Responsible AI begins with inclusive design. Involving all stakeholders - especially end users - through participatory methods and human-in-the-loop approaches ensures systems are built for real-world complexity.
- 💡 Transparency: Transparency isn’t one-size-fits-all. Reframing explainability to match users’ needs means asking: Who needs to understand the AI, what do they need to know, when, and how should it be presented?
- ⚖️ Fairness: Fairness in AI goes beyond datasets. It’s about embedding inclusion and diversity across the AI lifecycle. Without attention to governance and historical bias, systems risk perpetuating discrimination and marginalization.
Prof. Stumpf’s key message: we must move toward human-centered AI - by listening to users, integrating responsible practices into development pipelines, and balancing regulatory pressures with the benefits of safer, more widely accepted systems.
We thank Prof. Dr. Simone Stumpf for her inspiring lecture and the insightful discussion on the path toward truly responsible AI.
For more on her research, visit: https://www.gla.ac.uk/schools/computing/staff/simonestumpf/#publications