Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the capacities of existing high-payload, high-precision, position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines, together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

The power of prediction: Robots that read intentions

Author  E. Bicho, W. Erlhagen, E. Sousa, L. Louro, N. Hipolito, E.C. Silva, R. Silva, F. Ferreira, T. Machado, M. Hulstijn, Y. Maas, E. de Bruijn, R.H. Cuijpers, R. Newman-Norlund, H. van Schie, R.G.J. Meulenbroek, H. Bekkering

Video ID : 617

Action and intention understanding are critical components of efficient joint action. In the context of the EU Integrated Project JAST, the authors have developed an anthropomorphic robot endowed with these cognitive capacities. This project and the respective robot (ARoS) are the focus of the video. More specifically, the results illustrate crucial cognitive capacities for efficient and successful human-robot collaboration, such as goal inference, error detection, and anticipatory-action selection.

Reach and grasp by people with tetraplegia using a neurally-controlled robotic arm

Author  Leigh R. Hochberg, Daniel Bacher, Beata Jarosiewicz, Nicolas Y. Masse, John D. Simeral, Jörn Vogel, Sami Haddadin, Jie Liu, Sydney S. Cash, Patrick van der Smagt, John P. Donoghue

Video ID : 618

The authors have shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices. One of the study participants, implanted with the sensor five years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, the results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.

An assistive, decision-and-control architecture for force-sensitive, hand–arm systems driven by human–machine interfaces (MM1)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 619

The video shows the "grasp" and "release" skills demonstrated in a 1-D control task using the BrainGate2 neural-interface system. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture, which uses sensory information available from the robotic system to evaluate the current state of task execution, is employed.
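For orientation, a Cartesian impedance controller of the kind mentioned above renders a virtual spring–damper behavior at the end effector; the sketch below gives the generic textbook form, not the authors' exact multipriority formulation (the symbols $K_x$, $D_x$, $\tau_{\mathrm{posture}}$, and the generalized inverse $\bar{J}$ are illustrative assumptions):

```latex
% Desired end-effector wrench from a virtual spring-damper,
% with stiffness K_x and damping D_x about the target pose x_d:
F = K_x\,(x_d - x) - D_x\,\dot{x}
% Mapped to joint torques via the manipulator Jacobian J(q);
% a lower-priority task (e.g., posture) acts in the null space:
\tau = J(q)^{\top} F
     + \bigl(I - J(q)^{\top}\,\bar{J}(q)^{\top}\bigr)\,\tau_{\mathrm{posture}}
```

In a multipriority scheme, the primary Cartesian task receives the full torque command while secondary objectives are projected through the null-space term so they cannot disturb it.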

An assistive, decision-and-control architecture for force-sensitive, hand–arm systems driven by human–machine interfaces (MM2)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 620

This video shows a 2-D pick-and-place task with an object using the BrainGate2 neural interface. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture, which uses sensory information available from the robotic system to evaluate the current state of task execution, is employed.

An assistive, decision-and-control architecture for force-sensitive, hand–arm systems driven by human–machine interfaces (MM3)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 621

This video shows a 3-D reach-and-grasp experiment using the BrainGate2 neural-interface system. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture, which uses sensory information available from the robotic system to evaluate the current state of task execution, is employed. The assistive skills available in the robotic system do not actively help in this task; they are used only to evaluate task success.

An assistive, decision-and-control architecture for force-sensitive, hand–arm systems driven by human–machine interfaces (MM4)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 622

The video shows a 2-D drinking demonstration using the BrainGate2 neural interface. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture, which uses sensory information available from the robotic system to evaluate the current state of task execution, is employed. During the task, the full functionality of the skills currently available in the skill library of the robotic system is used.

Twendy-One demo

Author  WASEDA University, Sugano Laboratory

Video ID : 623

The video shows the Twendy-One robot from the WASEDA University Sugano Laboratory performing several personal-care tasks, including supporting a sitting-up motion, transferring the care recipient safely onto a wheelchair, and providing support during breakfast preparation. Acoustic communication between human and robot is complemented by the possibility of haptic instructions.

Full-body, compliant humanoid COMAN

Author  Department of Advanced Robotics, Istituto Italiano di Tecnologia

Video ID : 624

The video shows different characteristics of the compliant humanoid (COMAN) developed by the Department of Advanced Robotics (ADVR), Istituto Italiano di Tecnologia (IIT), namely: (i) full torque control, (ii) compliant human-robot interaction, (iii) joint impedance control, (iv) exploitation of natural dynamics, (v) robust stabilization control including disturbance rejection, and (vi) adaptation to inclined terrain.

Physical human-robot interaction in imitation learning

Author  Dongheui Lee, Christian Ott, Yoshihiko Nakamura, Gerd Hirzinger

Video ID : 625

This video presents our recent research on the integration of physical human-robot interaction (pHRI) with imitation learning. First, a marker-control approach for real-time human-motion imitation is shown. Second, physical coaching in addition to observational learning is applied for the incremental learning of motion primitives. Finally, we extend imitation learning to learning pHRI itself, including the establishment of intended physical contacts. The proposed methods were implemented and tested using the IRT humanoid robot and DLR's humanoid upper-body robot Justin.

Justin: A humanoid upper body system for two-handed manipulation experiments

Author  Christoph Borst, Christian Ott, Thomas Wimböck, Bernhard Brunner, Franziska Zacharias, Berthold Bäuml

Video ID : 626

This video presents a humanoid two-arm system developed as a research platform for studying dexterous two-handed manipulation. The system is based on the modular DLR-Lightweight-Robot-III and the DLR-Hand-II. Two arms and hands are combined with a 3-DOF movable torso and a visual system to form a complete humanoid upper body. The versatility of the system is demonstrated by showing the mechanical design, several control concepts, the application of rapid prototyping and hardware-in-the-loop (HIL) development, as well as two-handed manipulation experiments and the integration of path-planning capabilities.