Research in human-robot interaction (HRI) often emphasizes either the cognitive level or the physical level. In a scenario where a robot physically guides a person through a complex series of tasks (e.g., a patient making tea), information is continuously exchanged on the cognitive level while forces/torques are continuously exchanged on the physical level. Such continuous co-adaptive interaction between both agents and the environment requires the robot to be anticipatory, proactive, and able to react flexibly to the user's intentions and the situation context. Unifying sequential cognitive situation modeling with continuous robotic movement control is a challenge that currently lacks a conceptual framework. We conceptualize strategies for connecting models of physical HRI with models of cognitive HRI, depending on the level of assistance the robot system provides, from mere warnings of dangerous situations (level 1) to on-body continuous movement guidance (level 4). In doing so, we consider the requirements for the robot to be aware of the interaction environment and to maintain a dynamic representation of the individual user. Our conceptual framework is intended to spark discussion and formalize assistance approaches, with the aim of integrating cognitive and physical human-robot interaction approaches for anticipatory assistance in continuous dynamic tasks.