Artificial intelligence in medicine: Between trust, transparency, and responsibility
Teaser
In its article, Der Tagesspiegel asks whether the use of artificial intelligence can jeopardize the relationship between doctors and their patients. The piece is prompted by a recent study which found that people place less trust in doctors who use AI in their practice, even when the technology only handles administrative tasks.
In the “Three on One” section, three experts offer their perspectives on the topic. Prof. Dr. Christian Ledig makes clear that the AI systems currently in use do not possess human-like intelligence but rely on recognizing statistical patterns. Nuanced communication is therefore crucial: talk of autonomous machines stirs up unnecessary uncertainty. Acceptance grows when doctors and patients can realistically assess the technology's role and understand how it can support diagnosis or ease the workload.
Prof. Dr. Stefanie Speidel from the National Center for Tumor Diseases in Dresden emphasizes that AI can even strengthen the doctor-patient relationship, for example by automating routine tasks and preparing treatment recommendations. This, however, requires transparency, traceability, and the involvement of all parties in the development and use of such systems.
Marie Zahout, e-health editor at Tagesspiegel, adds that AI should primarily be understood as a tool. She argues for clear legal frameworks and plain communication that keeps patients informed without overwhelming them. Only then, she says, can trust in new technologies be strengthened without losing sight of the responsibility borne by the treating professionals.