Trust in the age of digital medicine: Artificial intelligence and the doctor-patient relationship
Teaser
The article in the Deutsches Ärzteblatt examines how patients react to AI in medical care. A recent study by the University of Würzburg and Charité Berlin shows that the mere mention of AI in a doctor's practice triggers skepticism in many people: physicians whose practice advertisements mentioned the use of AI, even for purely organizational tasks, were rated by the study participants as less competent, less empathetic, and less trustworthy.
The article brings together various perspectives from research and practice. Prof. Dr. Christian Ledig explains that the AI systems in use do not make genuine decisions; they recognize patterns and apply predefined rules. He warns against stoking public fears of supposedly autonomous systems and instead calls for objective, informative communication about their actual capabilities and limitations.
Other experts emphasize that responsible use of AI in medicine must be comprehensible and controllable: the basis for a decision should be clearly recognizable, and medical staff must be able to intervene in the process at any time. The handling of information also plays a key role. A research team at Stanford University has presented a model that distinguishes when disclosure of AI use is necessary and when it is not, depending on the risk involved and the patient's ability to act.