
Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from the abstract, high-level task specification down to fine-grained feedback at the task interface is considered. Some of the important issues include modeling the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension of the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

Robotic assembly of emergency-stop buttons

Author  Andreas Stolt et al.

Video ID : 358

The video presents a framework for dual-arm robotic assembly of stop buttons utilizing force/torque sensing under the fixture and force control.

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones composed of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Modsnake fence navigation

Author  Howie Choset

Video ID : 165

Video of the CMU Modsnake navigating under a fence.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when and who to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Exploitation of social cues to speed up learning

Author  Sylvain Calinon, Aude Billard

Video ID : 106

Use of social cues to speed up the imitation-learning process, with gazing and pointing information used to select the objects relevant for the task. Reference: S. Calinon, A.G. Billard: Teaching a humanoid robot to recognize and reproduce social cues, Proc. IEEE Int. Symp. Robot Human Interactive Communication (Ro-Man), Hatfield (2006), pp. 346–351.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview on the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architecture for pHRI.

Flexible robot gripper for KUKA Light Weight Robot (LWR): Collaboration between human and robot

Author  Robotiq

Video ID : 632

Flexible robot gripper on a KUKA Light Weight Robot engaged in proximal human-robot collaboration. The human-safe robot combined with an agile robot gripper demonstrates collaborative part feeding and part holding in assembly tasks.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end-effector or the 3-D coordinates of the graspable points on the object.
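The two-view triangulation step mentioned in this abstract can be sketched with a standard linear (DLT) solver. The projection matrices, test point, and function name below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D image points.
    Stacks the cross-product constraints and takes the SVD null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize

# Two synthetic calibrated cameras: identity pose and a unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate_dlt(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

With noisy correspondences the same solver returns the algebraic least-squares point; as the abstract notes, without known baseline the reconstruction is only defined up to scale.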

Google's Project Tango

Author  Google, Inc.

Video ID : 120

Google's Project Tango has been collaborating with robotics laboratories from around the world to synthesize the past decade of research in computer vision into the development of a new class of mobile devices. This video contains one of the first public announcements and presentations of a device that can be used for multiple robot-perception applications described in this chapter.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i. e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies are informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Region-pointing gesture

Author  Takayuki Kanda

Video ID : 811

This short video explains what "region pointing" is. While it is known that there are a variety of pointing gestures, region pointing differs from other pointing gestures, in which the pointing arm is held fixed: here the arm moves as if it depicts a circle, which evokes the region being referred to.

Chapter 38 — Grasping

Domenico Prattichizzo and Jeffrey C. Trinkle

This chapter introduces fundamental models of grasp analysis. The overall model is a coupling of models that define contact behavior with widely used models of rigid-body kinematics and dynamics. The contact model essentially boils down to the selection of components of contact force and moment that are transmitted through each contact. Mathematical properties of the complete model naturally give rise to five primary grasp types whose physical interpretations provide insight for grasp and manipulation planning.

After introducing the basic models and types of grasps, this chapter focuses on the most important grasp characteristic: complete restraint. A grasp with complete restraint prevents loss of contact and thus is very secure. Two primary restraint properties are form closure and force closure. A form closure grasp guarantees maintenance of contact as long as the links of the hand and the object are well-approximated as rigid and as long as the joint actuators are sufficiently strong. As will be seen, the primary difference between form closure and force closure grasps is the latter’s reliance on contact friction. This translates into requiring fewer contacts to achieve force closure than form closure.

The goal of this chapter is to give a thorough understanding of the all-important grasp properties of form and force closure. This will be done through detailed derivations of grasp models and discussions of illustrative examples. For an in-depth historical perspective and a treasure-trove bibliography of papers addressing a wide range of topics in grasping, the reader is referred to [38.1].
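The force-closure property described above — contact friction letting fewer contacts restrain the object — can be illustrated for the planar case with a small sketch. Here force closure is tested as the condition that the friction-cone edge wrenches positively span the 3-D wrench space; the function names, gains, and the pairwise supporting-hyperplane test are this sketch's own assumptions, not the chapter's formulation:

```python
import numpy as np
from itertools import combinations

def cone_edge_wrenches(p, n, mu):
    """Planar point contact with friction at position p, inward normal n:
    the friction cone has two edge forces, each giving a wrench
    (fx, fy, tau) about the object origin."""
    t = np.array([-n[1], n[0]])               # tangent direction
    wrenches = []
    for f in (n + mu * t, n - mu * t):
        tau = p[0] * f[1] - p[1] * f[0]       # planar moment about the origin
        wrenches.append([f[0], f[1], tau])
    return wrenches

def positively_spans(W, tol=1e-9):
    """Test that the wrenches W positively span R^3 (planar force closure).
    If the conic hull is a proper subset of R^3 and the wrenches have full
    rank, the cone has a facet whose normal is orthogonal to two generators,
    so testing d = +/- (w_i x w_j) over all pairs suffices."""
    W = np.asarray(W, dtype=float)
    if np.linalg.matrix_rank(W) < 3:
        return False
    for wi, wj in combinations(W, 2):
        d = np.cross(wi, wj)
        if np.linalg.norm(d) < tol:
            continue
        for s in (d, -d):
            if np.all(W @ s <= tol):          # supporting hyperplane found
                return False
    return True

def planar_force_closure(contacts, mu):
    W = [w for p, n in contacts for w in cone_edge_wrenches(p, n, mu)]
    return positively_spans(W)

# Two antipodal contacts on a unit disk, inward normals
antipodal = [(np.array([1.0, 0.0]), np.array([-1.0, 0.0])),
             (np.array([-1.0, 0.0]), np.array([1.0, 0.0]))]
print(planar_force_closure(antipodal, mu=0.3))  # True: friction gives closure
print(planar_force_closure(antipodal, mu=0.0))  # False: frictionless grasp fails
```

The two runs mirror the chapter's point: with friction, two antipodal contacts already achieve force closure, while the same frictionless grasp would need more contacts to achieve form closure.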

Grasp analysis using the MATLAB toolbox SynGrasp

Author  Monica Malvezzi, Guido Gioioso, Gionata Salvietti, Domenico Prattichizzo

Video ID : 551

This video documents a few examples of grasp analysis performed using SynGrasp, a MATLAB toolbox for grasp analysis. The toolbox provides a graphical user interface (GUI) for easily loading a hand and an object, and a series of functions that the user can assemble and modify to exploit all the toolbox features. The video shows how to use SynGrasp to model and analyze grasping: in particular, users can select and load a hand model in the GUI, then choose an object and place it in the workspace by setting its position w.r.t. the hand. The grasp is obtained by closing the hand from an initial configuration, which can be set by acting on the hand joints. Once the grasp is defined, it can be analyzed by evaluating the grasp-quality measures available in the toolbox. Grasps can be described either using the provided grasp planner or by directly defining contact points on the hand together with the respective contact normal directions. SynGrasp can model both fully actuated and underactuated robotic hands. An important role in grasp analysis, in particular with underactuated hands, is played by system compliance: SynGrasp can model the stiffness at the contact points, at the joints, or in the actuation system, including the transmission. A wide set of analytical functions, continuously growing with new features and capabilities, has been developed to investigate the main grasp properties: controllable forces and object displacements, manipulability analysis, grasp stiffness, and different measures of grasp quality. A set of functions for the graphical representation of the hand, the object, and the main analysis results is also provided. The toolbox is freely available for download.

Chapter 43 — Telerobotics

Günter Niemeyer, Carsten Preusche, Stefano Stramigioli and Dongjun Lee

In this chapter we present an overview of the field of telerobotics with a focus on control aspects. To acknowledge some of the earliest contributions and motivations the field has provided to robotics in general, we begin with a brief historical perspective and discuss some of the challenging applications. Then, after introducing and classifying the various system architectures and control strategies, we emphasize bilateral control and force feedback. This particular area has seen intense research work in the pursuit of telepresence. We also examine some of the emerging efforts, extending telerobotic concepts to unconventional systems and applications. Finally, we suggest some further reading for a closer engagement with the field.

Teleoperated humanoid robot - HRP: Tele-driving of lifting vehicle

Author  Masami Kobayashi, Hisashi Moriyama, Toshiyuki Itoko, Yoshitaka Yanagihara, Takao Ueno, Kazuhisa Ohya, Kazuhito Yokoi

Video ID : 319

This video shows the teleoperation of the humanoid robot HRP using a whole-body multimodal tele-existence system. The human operator teleoperates the humanoid robot to drive a lifting vehicle in a warehouse. Presented at ICRA 2002.

Chapter 49 — Modeling and Control of Wheeled Mobile Robots

Claude Samson, Pascal Morin and Roland Lenain

This chapter may be seen as a follow-up to Chap. 24, devoted to the classification and modeling of basic wheeled mobile robot (WMR) structures, and a natural complement to Chap. 47, which surveys motion-planning methods for WMRs. A typical output of these methods is a feasible (or admissible) reference state trajectory for a given mobile robot, and a question which then arises is how to make the physical mobile robot track this reference trajectory via the control of the actuators with which the vehicle is equipped. The object of the present chapter is to provide elements of an answer to this question based on simple and effective control strategies.

The chapter is organized as follows. Section 49.2 is devoted to the choice of control models and the determination of modeling equations associated with the path-following control problem. In Sect. 49.3, the path-following and trajectory-stabilization problems are addressed in the simplest case, when no requirement is made on the robot orientation (i. e., position control). In Sect. 49.4, the same problems are revisited for the control of both position and orientation. The previously mentioned sections consider an ideal robot satisfying the rolling-without-sliding assumption. In Sect. 49.5, we relax this assumption in order to take into account nonideal wheel–ground contact. This is especially important for field-robotics applications, and the proposed results are validated through full-scale experiments on natural terrain. Finally, a few complementary issues on the feedback control of mobile robots are briefly discussed in the concluding Sect. 49.6, with a list of commented references for further reading on WMR motion control.
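The trajectory-stabilization problem outlined above can be illustrated for an ideal unicycle-type robot with a classic nonlinear tracking law (the error is expressed in the robot frame, and the gains and function name here are illustrative choices, not values from the chapter):

```python
import numpy as np

def simulate_tracking(T=20.0, dt=0.01, kx=1.0, ky=4.0, kth=2.0):
    """Kinematic unicycle tracking a reference circle. The control law
    v  = vr*cos(e_th) + kx*e_x
    w  = wr + vr*(ky*e_y + kth*sin(e_th))
    is a widely used nonlinear tracking scheme; gains are illustrative."""
    vr, wr = 1.0, 0.5                        # reference forward/angular speed
    x = np.array([1.0, -1.0, 0.5])           # robot pose (x, y, theta), offset start
    xr = np.array([0.0, 0.0, 0.0])           # reference pose
    errs = []
    for _ in range(int(T / dt)):
        c, s = np.cos(x[2]), np.sin(x[2])
        dx, dy = xr[0] - x[0], xr[1] - x[1]
        ex = c * dx + s * dy                 # longitudinal error (robot frame)
        ey = -s * dx + c * dy                # lateral error (robot frame)
        eth = (xr[2] - x[2] + np.pi) % (2 * np.pi) - np.pi
        v = vr * np.cos(eth) + kx * ex
        w = wr + vr * (ky * ey + kth * np.sin(eth))
        # Euler integration of robot and reference kinematics
        x += dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        xr += dt * np.array([vr * np.cos(xr[2]), vr * np.sin(xr[2]), wr])
        errs.append(np.hypot(dx, dy))
    return errs

errs = simulate_tracking()
print(f"initial position error {errs[0]:.2f} m, final {errs[-1]:.4f} m")
```

The position error decays toward zero, consistent with the ideal rolling-without-sliding assumption; the sliding effects treated in Sect. 49.5 would require a richer contact model than this kinematic sketch.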

Tracking of an omnidirectional frame with a unicycle-like robot

Author  Guillaume Artus, Pascal Morin, Claude Samson

Video ID : 243

This video shows an experiment performed in 2005 with a unicycle-like robot. A video camera mounted at the top of a robotic arm enabled estimation of the 2-D pose (position/orientation) of the robot with respect to a visual target consisting of three white bars. These bars materialized an omnidirectional moving frame. The experiment demonstrated the capacity of the nonholonomic robot to track this omnidirectional frame in both position and orientation, based on the transverse function control approach.

Chapter 24 — Wheeled Robots

Woojin Chung and Karl Iagnemma

The purpose of this chapter is to introduce, analyze, and compare various wheeled mobile robots (WMRs) and to present several realizations and commonly encountered designs. The mobility of WMRs is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile and articulated robot realizations are described. Wheel–terrain interaction models are presented in order to compute forces at the contact interface. Four possible wheel–terrain interaction cases are shown on the basis of the relative stiffness of the wheel and terrain. A suspension system is required to move on uneven surfaces. Structures, dynamics, and important features of commonly used suspensions are explained.

An omnidirectional mobile robot with active caster wheels

Author  Woojin Chung

Video ID : 325

This video shows a holonomic omnidirectional mobile robot with two active and two passive caster wheels. Each active caster is composed of two actuators. The first actuator drives a wheel; the second actuator steers the wheel orientation. Although the mechanical structure of the driving mechanisms becomes a little complicated, conventional tires can be used for omnidirectional motions. Since the robot is overactuated, four actuators should be carefully controlled.