The Multimodal Intelligent Interaction group conducts research on intelligent cyber-physical systems and human-robot interaction. The focus is on the autonomous completion of complex tasks by robots and on natural interaction between humans and robots in joint action scenarios. Application areas include the intuitive instruction of robot systems for small-batch production, service robots, smart factories, and socio-technical assistance systems.
Emphasis is on cognitive systems that combine symbolic and sub-symbolic methods in hybrid AI approaches, along with machine learning, knowledge representation, integrated task and motion planning, multimodal interaction, and advanced systems engineering.
The following positions are currently open at the Multimodal Intelligent Interaction group: