
Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
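As a concrete illustration of the two-view case, classical linear (DLT) triangulation recovers a 3-D point from two camera projection matrices and a pair of matched image points by solving a small homogeneous least-squares problem. The sketch below uses NumPy; the cameras and the 3-D point are illustrative assumptions, not data from the chapter:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3-D point is the null vector of A (last right singular vector)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Illustrative setup: identity view and a second camera one unit along x
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]   # projected image points
print(triangulate(P1, P2, x1, x2))        # recovers [0.5, 0.2, 4.0]
```

Note that with real image measurements the two rays do not intersect exactly, and the SVD solution minimizes the algebraic error; the reconstruction is defined only up to scale when the baseline itself is unknown.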

3-D models from 2-D video - automatically

Author  Marc Pollefeys

Video ID : 125

We show how a video is automatically converted into a 3-D model using computer-vision techniques. More details on this approach can be found in: M. Pollefeys, L. Van Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, R. Koch: Visual modeling with a hand-held camera, Int. J. Comp. Vis. 59(3), 207-232 (2004).

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Safe physical human-robot collaboration

Author  Fabrizio Flacco, Alessandro De Luca

Video ID : 609

The video summarizes the state of the ongoing research activities on physical human-robot collaboration (pHRC) at the DIAG Robotics Lab, Sapienza University of Rome, as of March 2013, performed within the European Research Project FP7 287511 SAPHARI (http://www.saphari.eu). Reference: F. Flacco, A. De Luca: Safe physical human-robot collaboration, IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Tokyo (2013)

Chapter 66 — Robotics Competitions and Challenges

Daniele Nardi, Jonathan Roberts, Manuela Veloso and Luke Fletcher

This chapter explores the use of competitions to accelerate robotics research and promote science, technology, engineering, and mathematics (STEM) education. We argue that the field of robotics is particularly well suited to innovation through competitions. Two broad categories of robot competition are used to frame the discussion: human-inspired competitions and task-based challenges. Human-inspired robot competitions, of which the majority are sports contests, quickly move through platform development to focus on problem-solving and test through game play. Task-based challenges attempt to attract participants by presenting a high aim for a robotic system. The contest can then be tuned, as required, to maintain motivation and ensure that progress is made. Three case studies of robot competitions are presented, namely robot soccer, the UAV challenge, and the DARPA (Defense Advanced Research Projects Agency) grand challenges. The case studies serve to explore, from the point of view of organizers and participants, the benefits and limitations of competitions, and what makes a good robot competition.

This chapter ends with some concluding remarks on the natural convergence of human-inspired competitions and task-based challenges in the promotion of STEM education, research, and vocations.

Brief history of RoboCup robot soccer

Author  Manuela Veloso

Video ID : 385

In this 5-minute video, we explain the history of the multiple RoboCup soccer leagues.

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones composed of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Modsnake autonomous pole-climbing

Author  Howie Choset

Video ID : 166

Video of the CMU Modsnake autonomously climbing a pole using LIDAR.

Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke

This chapter introduces visual servo control, using computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking, controlling motion directly in the joint space, and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
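As a concrete illustration of the image-based scheme, the classical IBVS law commands a camera velocity v = -λ L⁺(s - s*), where L stacks the interaction matrices of the point features and e = s - s* is the feature error. Below is a minimal NumPy sketch; the feature coordinates, depths, and gain are illustrative assumptions, not values from the chapter:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z, relating its image velocity to the camera's
    spatial velocity (v_x, v_y, v_z, w_x, w_y, w_z)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical IBVS control law: v = -lambda * L^+ * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Hypothetical example: four points, one slightly off its desired position
s      = [(0.11, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
s_star = [(0.10, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v)   # 6-vector camera velocity command
```

In practice the true depths Z are unknown and are replaced by estimates (for example, the values at the desired pose), which is one source of the stability issues the chapter discusses.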

2.5-D VS on a 6 DOF robot arm (2)

Author  Francois Chaumette, Seth Hutchinson, Peter Corke

Video ID : 65

This video shows 2.5-D visual servoing on a 6-DOF robot arm with (^{c*}t_c, x_g, θu_z) as the visual features. It corresponds to the results depicted in Figure 34.13.

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active-manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

6-DOF object localization via touch

Author  Anna Petrovskaya

Video ID : 721

The PUMA robot arm performs 6-DOF localization of an object (i.e., a cash register) via touch, starting with global uncertainty. After each contact, the robot analyzes the resulting belief about the object pose. If the uncertainty of the belief is too large, the robot continues to probe the object. Once the uncertainty is small enough, the robot is able to push buttons and manipulate the drawer based on its knowledge of the object pose and prior knowledge of the object model. A prior 3-D mesh model of the object was constructed by touching the object with the robot's end-effector.
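The probe-analyze-probe loop described above can be sketched as a Bayesian belief update over the object pose, here in particle-filter form. This is a simplified, hypothetical 1-D analogue of the 6-DOF problem (locating a single flat face), not the authors' implementation; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def touch_update(particles, weights, contact, surface_fn, sigma=0.005):
    """One Bayesian update of the pose belief after a contact.
    surface_fn(pose, contact) returns the discrepancy between the
    measured contact point and the surface predicted under that pose."""
    d = np.array([surface_fn(p, contact) for p in particles])
    weights = weights * np.exp(-0.5 * (d / sigma) ** 2)  # Gaussian likelihood
    weights = weights / weights.sum()
    # Resample (with jitter) when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(0, sigma, size=len(particles))
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy 1-D case: the face's true position is 0.30 m
true_x = 0.30
surface = lambda pose, contact: contact - pose       # predicted discrepancy
particles = rng.uniform(0.0, 1.0, size=500)          # global uncertainty
weights = np.full(500, 1.0 / 500)
for _ in range(5):                                   # five probes
    contact = true_x + rng.normal(0, 0.002)          # noisy contact reading
    particles, weights = touch_update(particles, weights, contact, surface)
print(np.sum(particles * weights))                   # estimate near 0.30
```

The same structure carries over to 6-DOF: each particle is a full pose hypothesis and the likelihood compares the measured contact against the prior mesh model, which is why a handful of well-chosen contacts can collapse even global uncertainty.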

Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics are therefore roughly divided into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After providing an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheel/track and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain; the basic approach in this section is therefore to compensate for the slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a considerable advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, stability analysis of such vehicles is introduced in Sect. 50.5. Based on this kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Sect. 50.7 summarizes this chapter.
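As a small concrete example of the quantity at the heart of slip compensation, the longitudinal slip ratio of a driven wheel is commonly defined in terramechanics as s = (rω - v)/(rω), where r is the wheel radius, ω its angular velocity, and v the vehicle's forward velocity. The function below is a minimal sketch with illustrative numbers:

```python
def slip_ratio(wheel_radius, omega, v):
    """Longitudinal slip ratio of a driven wheel:
    s = (r*omega - v) / (r*omega).
    Positive when the wheel spins faster than the vehicle advances
    (driving slip); zero means pure rolling."""
    circumferential = wheel_radius * omega   # r*omega, wheel rim speed
    if circumferential == 0.0:
        return 0.0                           # stationary wheel: define s = 0
    return (circumferential - v) / circumferential

# A wheel of radius 0.15 m spinning at 4 rad/s while the robot
# advances at only 0.45 m/s: r*omega = 0.6 m/s, so s ≈ 0.25
print(slip_ratio(0.15, 4.0, 0.45))
```

A slip-compensating controller would estimate v (e.g., from visual odometry), compute s for each wheel, and adjust the commanded wheel speeds and steering so that the resulting slip stays within the range the terramechanic model can sustain.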

Qualification testing of a tracked vehicle in the NIST Disaster City

Author  SuperDroid Robots, Inc

Video ID : 189

NIST (National Institute of Standards and Technology) developed a standard test field for the evaluation of all-terrain mobile robots, called Disaster City, in Texas, U.S.A. The field includes steps, stairs, steep slopes, and random step fields (unfixed wooden blocks), which simulate a disaster environment. This video clip shows an evaluation test in Disaster City of the tracked vehicle LT-F, produced by SuperDroid Robots in 2011. To qualify, each test had to be performed remotely by the vehicle for 10 successful iterations.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from the abstract, high-level task specification down to fine-grained feedback at the task interface is considered. Some of the important issues include modeling of the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension to the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

Reducing uncertainty in robotics surface-assembly tasks

Author  Jing Xiao et al.

Video ID : 356

This video demonstrates how surface assembly strategies with pose estimation can be used to overcome pose uncertainties. The assembly path is updated based on the newly estimated values of parameters after the compliant exploratory move. In this way, the robot is able to successfully overcome disparities between the nominal and the actual poses of the objects to accomplish the assembly. No force sensor is used.

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses design, actuation, sensing and control of multifingered robot hands. From the design viewpoint, they have a strong constraint in actuator implementation due to the space limitation in each joint. After briefly presenting an overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches to actuation, with their advantages and disadvantages, are provided in Sect. 19.2. The key classification is (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, this chapter is closed with conclusions and further reading.

DLR hand

Author  DLR Robotics and Mechatronics Center

Video ID : 768

A DLR hand

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.

The Nerd Herd

Author  Maja J. Mataric

Video ID : 34

This video shows work done in the early 1990s with the Nerd Herd, a multirobot behavior-based system. Reference: M.J. Matarić: Designing and understanding adaptive group behavior, Adapt. Behav. 4(1), 50–81 (1995)