DFG-Project "Pain Face Reader"

Videobasierte automatische Schmerzerkennung auf Grundlage von Kombinations- und Zeitmerkmalen von Action Units (PainFaceReader)
Video-based automated pain detection exploiting compositional and temporal characteristics of action units
Duration: Oct 2018--Sept 2021
Funding: DFG
Researchers: Prof. Dr. Ute Schmid, N.N.
with Stefan Lautenbacher (Psychology, University of Bamberg), Jens Garbas (Intelligent Systems, Fraunhofer Institute for Integrated Circuits)

Research on anatomically based, objectifying facial expression analysis usually builds on the Facial Action Coding System (FACS), in which individual movements of the mimetic musculature of the face are described by so-called Action Units (AUs). Specific constellations of AUs and their intensities indicate the mental state of a person, such as pain. Up to now, FACS coding has mostly been carried out manually by trained FACS coders in a cost- and time-intensive process, which limits online applications in academic and clinical studies. For example, expression-based post-operative pain monitoring is currently not possible, although patients who are not fully oriented and conscious would profit greatly from it.
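To make the AU-based approach concrete: a well-known composite score from the pain literature is the Prkachin and Solomon Pain Intensity (PSPI), which sums the FACS intensities of the pain-relevant AUs 4, 6/7, 9/10, and 43. The following Python sketch is purely illustrative; the project does not prescribe this particular score:

def pspi(au):
    """Prkachin & Solomon Pain Intensity: a published AU-based pain
    score, shown here only to illustrate how a constellation of AUs
    and their intensities can be aggregated into a pain indicator.
    Intensities are FACS-coded 0-5; AU43 (eyes closed) is binary."""
    return (au.get("AU4", 0)                         # brow lowerer
            + max(au.get("AU6", 0), au.get("AU7", 0))    # cheek raiser / lid tightener
            + max(au.get("AU9", 0), au.get("AU10", 0))   # nose wrinkler / upper lip raiser
            + au.get("AU43", 0))                     # eyes closed

# One video frame: moderate brow lowering, strong lid tightening,
# slight nose wrinkling -> PSPI of 6 on a 0-16 scale.
print(pspi({"AU4": 2, "AU7": 3, "AU9": 1}))  # -> 6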
The goal of the planned project is the development of a camera-based online monitoring and analysis system for video-recorded faces. The system (1) continuously registers occurrences of AUs and their intensities, and (2) exploits the recognized AUs to identify whether a person is in pain and to distinguish pain from other aversive states (e.g., anger).
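As a minimal sketch of what step (1) might produce, assuming a hypothetical record format (the field names are our own, not the project's): each registered AU occurrence could be stored as a time-stamped interval with its FACS intensity, yielding an event stream for the downstream pain classifiers of step (2).

from dataclasses import dataclass

@dataclass
class AUEvent:
    """One registered AU occurrence in the video stream
    (hypothetical representation; all field names are assumptions)."""
    au: str          # e.g. "AU4" (brow lowerer)
    onset: float     # start of the occurrence, seconds into the video
    offset: float    # end of the occurrence, seconds into the video
    intensity: int   # FACS intensity A-E mapped to 1-5

# A short episode: brow lowering with overlapping orbit tightening
# and a brief eye closure.
stream = [
    AUEvent("AU4", onset=12.0, offset=14.5, intensity=3),
    AUEvent("AU6", onset=12.3, offset=14.0, intensity=2),
    AUEvent("AU43", onset=12.8, offset=13.4, intensity=1),
]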
Such a diagnostic application places particular requirements on the sensitivity and specificity of automated classifiers: while in many domains it is sufficient to reach high classification accuracy on average, clinical application demands highly accurate diagnoses for individuals. Second, it is necessary to communicate with pain therapists from different disciplinary backgrounds, which makes white-box approaches highly advisable, where decisions can be made transparent by reference to an established, anatomically grounded descriptive vocabulary, namely AUs. In contrast, existing classifier systems for mental states are typically black-box learners in which an emotion is inferred directly from videos or from artificial features.
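The gap between average accuracy and individually reliable diagnosis comes down to sensitivity and specificity. A small worked example (with invented numbers) shows how a classifier can look excellent on average while missing half of all pain episodes:

def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN): fraction of actual pain episodes
    that are detected. Specificity = TN / (TN + FP): fraction of
    pain-free episodes that are correctly left unflagged."""
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: 100 episodes, 10 of them painful, only 5 detected.
# Accuracy is (90 + 5) / 100 = 95%, yet half of all pain episodes go
# unnoticed -- unacceptable for clinical monitoring of individuals.
sens, spec = sensitivity_specificity(tp=5, fp=0, fn=5, tn=90)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.50, 1.00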
In an interdisciplinary team of engineers, computer scientists, and psychologists with a background in pain research, we plan to develop the PainFaceReader as an innovative approach that fulfills the described requirements.
The focus is on providing a proof of concept for reliable and valid automated classification of pain. The psychologists will provide specific video material from experimental pain assessments to train the classifiers for AU detection as well as for pain identification, and will validate the developed technologies for facial expression analysis of patients with acute postoperative pain. The engineers and computer vision experts of the Fraunhofer Institute will develop an approach to identify and track AUs occurring in parallel. The machine learning and artificial intelligence team of the University of Bamberg will develop logic-based classifiers for pain which make use of compositional and temporal characteristics of AU appearances and which allow a robust, sensitive, and specific identification of pain while distinguishing it from anger; a sketch of what such a rule might look like follows below.
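As an illustration of such a white-box, logic-based classifier (a sketch under our own assumptions, reusing the hypothetical AUEvent records from the sketch above, and not the project's actual rules): compositional characteristics concern which AUs co-occur, temporal characteristics concern how their occurrence intervals relate, and a learned rule could combine both, e.g. separating pain (AU4 together with narrowed or closed eyes, AU9/AU10/AU43) from prototypical anger (AU4 together with raised upper lids, AU5).

def overlaps(a, b):
    """Temporal characteristic: the two AU occurrences share time
    (several of Allen's interval relations collapsed into one test)."""
    return a.onset < b.offset and b.onset < a.offset

def pain_episode(stream):
    """Illustrative rule only: brow lowering (AU4) overlapping in time
    with nose wrinkling / upper lip raising (AU9/AU10) or eye closure
    (AU43) points to pain, whereas prototypical anger combines AU4
    with upper lid raising (AU5) instead of narrowed or closed eyes."""
    brow = [e for e in stream if e.au == "AU4"]
    markers = [e for e in stream if e.au in ("AU9", "AU10", "AU43")]
    return any(overlaps(b, m) for b in brow for m in markers)

print(pain_episode(stream))  # True: AU4 (12.0-14.5) overlaps AU43 (12.8-13.4)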