Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading to safer lightweight robot designs and interaction control schemes that go beyond the capabilities of existing high-payload, high-precision, position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.
This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design are introduced, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the perception abilities required for interaction. Subsequently, motion-planning techniques for human environments are covered, including biomechanically safe, risk-metric-based, and human-aware planning. Finally, the rather recent problem of interaction planning is summarized, including collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.
An assistive decision-and-control architecture for force-sensitive hand–arm systems driven via human–machine interfaces (MMI)
Authors: Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt
Video ID: 619
The video shows the "grasp" and "release" skills demonstrated in a 1-D control task using the BrainGate2 neural interface system. The robot is controlled through a multi-priority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture is employed that uses the sensory information available from the robotic system to evaluate the current state of task execution.
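To make this control pattern concrete, the following is a minimal, hypothetical Python sketch of such a loop: an impedance law tracks a set-point that is clipped to a virtual workspace, and an estimated external force above a threshold triggers a reflex reaction. It is illustrative only; the gains, thresholds, and function names are assumptions and do not reproduce the multi-priority controller used in the video.

```python
# A minimal sketch (not the authors' implementation) of the control pattern
# described above: a 1-D Cartesian impedance controller whose set-point is
# clipped to a virtual workspace, with a force-threshold collision detector
# that triggers a reflex reaction. All gains and thresholds are illustrative
# assumptions.

STIFFNESS = 400.0           # N/m, Cartesian stiffness of the impedance behavior
DAMPING = 40.0              # N*s/m, Cartesian damping
WORKSPACE = (-0.3, 0.3)     # m, virtual workspace limits along the task axis
COLLISION_THRESHOLD = 15.0  # N, estimated external force that signals contact


def impedance_force(x, x_dot, x_des):
    """Desired Cartesian force of a 1-D impedance controller."""
    return STIFFNESS * (x_des - x) - DAMPING * x_dot


def detect_collision(f_ext):
    """Flag a collision when the estimated external force exceeds a threshold."""
    return abs(f_ext) > COLLISION_THRESHOLD


def control_step(x, x_dot, x_des, f_ext):
    """One cycle of the decision-and-control loop.

    Returns the commanded Cartesian force and the reaction taken
    ('track' or 'reflex').
    """
    # Virtual workspace: never command a set-point outside the safe region.
    x_des = max(WORKSPACE[0], min(WORKSPACE[1], x_des))

    if detect_collision(f_ext):
        # Reflex reaction: stop tracking and only dissipate motion, so the
        # robot yields to the contact instead of pushing through it.
        return -DAMPING * x_dot, "reflex"

    return impedance_force(x, x_dot, x_des), "track"


if __name__ == "__main__":
    # Nominal tracking step, then a step in which a collision is detected.
    print(control_step(x=0.0, x_dot=0.0, x_des=0.5, f_ext=2.0))
    print(control_step(x=0.1, x_dot=0.05, x_des=0.5, f_ext=25.0))
```

In a real multi-DOF system, comparable decision logic would sit on top of a task-priority Cartesian impedance controller and rely on a model-based estimate of external joint torques rather than a single scalar force threshold.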