
Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke

This chapter introduces visual servo control, the use of computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking, controlling motion directly in the joint space, and extensions to underactuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
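As a concrete illustration of the image-based scheme, the sketch below implements the classical IBVS law v = -lambda * L+ * (s - s*) for point features. It assumes normalized image coordinates and known point depths; every name and numerical value in it is illustrative rather than code from the chapter.

```python
# Minimal image-based visual servoing (IBVS) step for point features.
# Sketch only: assumes normalized image coordinates and known depths Z.
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity v = -gain * L^+ * (s - s*), stacking one L per point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: four coplanar points, camera slightly off the goal pose.
s = [(0.11, 0.12), (-0.09, 0.11), (-0.10, -0.10), (0.10, -0.09)]
s_star = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v)  # 6-vector (vx, vy, vz, wx, wy, wz) commanding the camera
```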

2.5-D VS on a 6-DOF robot arm (1)

Author  François Chaumette, Seth Hutchinson, Peter Corke

Video ID : 64

This video shows 2.5-D visual servoing (VS) on a 6-DOF robot arm with (x_g, log(Z_g), θu) as the visual features. It corresponds to the results depicted in Figure 34.12.
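For readers unfamiliar with the θu notation: it is the angle-axis vector of the rotation between the current and desired camera frames. A minimal sketch of its computation, assuming the relative rotation matrix is already available (e.g., from a partial pose estimate):

```python
# Sketch: the 2.5-D VS rotation feature theta*u (angle-axis vector) computed
# from the relative rotation between the current and desired camera frames.
import numpy as np
from scipy.spatial.transform import Rotation

def theta_u(R_current_to_desired):
    """Return theta*u, the rotation vector of the relative rotation matrix."""
    return Rotation.from_matrix(R_current_to_desired).as_rotvec()

# Example: a residual rotation of 30 degrees about the camera z-axis.
R = Rotation.from_euler("z", 30, degrees=True).as_matrix()
print(theta_u(R))  # ~ [0, 0, 0.5236]
```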

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses about biological mechanisms and processes. In this chapter, we provide an overview of the methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration of the transfer between the two worlds.
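As a rough illustration of the evolutionary method, the sketch below evolves controller weight vectors by mutation and truncation selection. The population size, mutation scale, and toy fitness stub are placeholders: in real experiments, fitness is evaluated by running the robot, simulated or physical.

```python
# Minimal sketch of an evolutionary-robotics loop: evolve controller weight
# vectors by mutation and truncation selection. All parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(weights):
    """Placeholder fitness: in practice, run the robot with a controller
    parameterized by `weights` and score the resulting behavior."""
    return -np.sum(weights ** 2)  # toy objective: drive weights toward zero

def evolve(n_weights=20, pop_size=30, elite=5, sigma=0.1, generations=50):
    pop = rng.normal(size=(pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([evaluate(w) for w in pop])
        parents = pop[np.argsort(scores)[-elite:]]       # keep the best
        children = np.repeat(parents, pop_size // elite, axis=0)
        pop = children + rng.normal(scale=sigma, size=children.shape)
    return pop[np.argmax([evaluate(w) for w in pop])]

best = evolve()
```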

Online learning to adapt to fast environmental variations

Author  Dario Floreano

Video ID : 40

A Khepera mobile robot, equipped with a vision module, gains fitness points by staying on the gray area, but only while the light is on. The light is normally off; it is switched on when the robot passes over the black area positioned on the other side of the arena. The robot can detect ambient light and wall color, but not the color of the floor.
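A hypothetical sketch of the fitness rule this description implies; the sensory predicates and the light toggle are illustrative stand-ins, not the experiment's actual code:

```python
# Hypothetical fitness update matching the task description: the robot gains
# fitness only while it sits on the gray area with the light on.
from dataclasses import dataclass

@dataclass
class SensoryState:
    on_gray: bool = False
    on_black: bool = False
    light_on: bool = False

def step_fitness(state: SensoryState, fitness: int) -> int:
    """One control-cycle fitness update."""
    if state.on_black:
        state.light_on = True  # crossing the black area switches the light on
    if state.light_on and state.on_gray:
        fitness += 1           # reward accrues only on gray, with the light on
    return fitness

# Example: the robot crosses the black area, then returns to the gray area.
s = SensoryState(on_black=True)
f = step_fitness(s, 0)            # light switches on; no reward on black
s.on_black, s.on_gray = False, True
f = step_fitness(s, f)            # on gray with the light on: fitness increments
```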

Chapter 24 — Wheeled Robots

Woojin Chung and Karl Iagnemma

The purpose of this chapter is to introduce, analyze, and compare various wheeled mobile robots (WMRs) and to present several realizations and commonly encountered designs. The mobility of WMRs is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Realizations of omnimobile robots and articulated robots are described. Wheel–terrain interaction models are presented in order to compute the forces at the contact interface. Four possible wheel–terrain interaction cases are distinguished on the basis of the relative stiffness of the wheel and the terrain. A suspension system is required for motion on uneven surfaces. The structures, dynamics, and important features of commonly used suspensions are explained.
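As a small example of the pure-rolling kinematics mentioned above, the sketch below gives the standard forward kinematics of a differential-drive WMR (wheel speeds to body twist, then a unicycle pose update); the wheel radius and axle half-track values are arbitrary:

```python
# Sketch: forward kinematics of a differential-drive WMR under pure rolling
# (no slip at the wheel-ground contacts). r = wheel radius, b = axle half-track.
import numpy as np

def body_twist(omega_left, omega_right, r=0.1, b=0.25):
    """Linear and angular velocity of the robot body from wheel speeds (rad/s)."""
    v = r * (omega_right + omega_left) / 2.0             # forward speed
    omega = r * (omega_right - omega_left) / (2.0 * b)   # yaw rate
    return v, omega

def integrate_pose(pose, v, omega, dt=0.01):
    """Unicycle-model pose update for (x, y, heading theta)."""
    x, y, theta = pose
    return (x + v * np.cos(theta) * dt,
            y + v * np.sin(theta) * dt,
            theta + omega * dt)

# Example: spin the right wheel faster, so the robot arcs to the left.
v, w = body_twist(5.0, 6.0)
pose = integrate_pose((0.0, 0.0, 0.0), v, w)
```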

An innovative planetary rover with extended climbing abilities

Author  Roland Siegwart

Video ID : 329

This video shows a suspension design for a prototype planetary exploration rover in which each wheel is equipped with an independent actuator and a linkage mechanism that lets the robot adapt its configuration to irregular ground. This gives the rover superior traction and obstacle-crossing performance compared to rovers with a standard suspension.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these robots have opened up novel and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design are introduced, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the perception abilities required for interaction. Subsequently, motion-planning techniques for human environments are covered, including biomechanically safe, risk-metric-based, and human-aware planning. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.
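As a generic illustration of a robot reflex of the kind mentioned above (not a specific published algorithm), the sketch below thresholds an estimated external joint torque and switches to a zero-velocity stop; the torque estimator and threshold value are placeholders:

```python
# Illustrative reflex layer for pHRI: monitor an estimated external joint
# torque and trigger a safety reaction when it exceeds a threshold.
import numpy as np

TAU_THRESHOLD = 5.0  # Nm per joint; placeholder value

def collision_reflex(tau_external, commanded_velocity):
    """Stop on a detected collision; otherwise pass the command through."""
    if np.any(np.abs(tau_external) > TAU_THRESHOLD):
        return np.zeros_like(commanded_velocity)  # reflex: zero-velocity stop
    return commanded_velocity

# Example: a 7-joint arm with one joint reporting a large external torque.
tau = np.array([0.1, 0.2, 6.3, 0.0, 0.1, 0.0, 0.2])
qdot = collision_reflex(tau, np.full(7, 0.5))  # -> all zeros: robot stops
```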

Safe physical human-robot collaboration

Author  Fabrizio Flacco, Alessandro De Luca

Video ID : 609

The video summarizes ongoing research activities on physical human-robot collaboration (pHRC) at the DIAG Robotics Lab, Sapienza University of Rome, as of March 2013, performed within the European Research Project FP7-287511 SAPHARI (http://www.saphari.eu). Reference: F. Flacco, A. De Luca: Safe physical human-robot collaboration, IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Tokyo (2013).

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Inria/Ligier automated parallel-parking demo in an open parking area

Author  Christian Laugier, Igor Paromtchik

Video ID : 567

This video shows a pioneering demonstration of the concept of "autonomous parallel parking" on the early Inria/Ligier autonomous vehicle (1996). The approach does not require any prior model of the parking area. The car is controlled using information from inexpensive on-board sensors, and motion control decisions (including parking maneuvers) are taken online according to the state of the sensed environment. Public demonstrations of the system were performed at several public and scientific events (including a three-day demonstration at the IEEE/RSJ IROS 1997 conference). More technical details can be found in [62.89].

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Gaze and gesture cues for robots

Author  Bilge Mutlu

Video ID : 128

In human-robot communication, nonverbal cues like gaze and gesture can be a source of important information for starting and maintaining interaction. Gaze, for example, can tell a person what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active-manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Tactile localization of a power drill

Author  Kaijen Hsiao

Video ID : 77

This video shows a Barrett WAM arm tactilely localizing and reorienting a power drill under high positional uncertainty. The goal is for the robot to robustly grasp the power drill such that the trigger can be activated. The robot tracks the distribution of possible object poses on the table over a 3-D grid (the belief space). It then selects among information-gathering, reorienting, and goal-seeking actions by modeling the problem as a POMDP (partially observable Markov decision process) and using receding-horizon forward search through the belief space.

In the video, the inset window with the simulated robot visualizes the current belief state:
- Red spheres sit at the vertices of the object mesh placed at the most likely state; the dark-blue box also marks the location of the most likely state.
- The purple box marks the location of the mean of the belief state.
- The light-blue boxes show the variance of the belief state: each marks a state one standard deviation from the mean along one of the three dimensions of uncertainty (x, y, and theta).
- Magenta spheres and arrows, which appear when the robot touches the object, show the contact locations and normals reported by the sensors.
- Cyan spheres that largely overlap the hand show where the robot controllers are trying to move the hand.
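A minimal sketch of the belief representation this description uses: a discrete probability grid over object poses (x, y, theta), updated by Bayes' rule after each contact observation. The likelihood below is a random stand-in for a real contact measurement model:

```python
# Sketch: discrete belief over object poses on an (x, y, theta) grid,
# updated by Bayes' rule. The likelihood is a placeholder for a real
# contact measurement model.
import numpy as np

def bayes_update(belief, likelihood):
    """belief, likelihood: arrays of shape (nx, ny, ntheta).
    Posterior = likelihood * prior, renormalized over the grid."""
    posterior = belief * likelihood
    total = posterior.sum()
    if total == 0:
        return belief  # uninformative observation; keep the prior
    return posterior / total

# Example: a flat prior on a coarse grid, sharpened by one synthetic observation.
nx, ny, nth = 20, 20, 16
belief = np.full((nx, ny, nth), 1.0 / (nx * ny * nth))
lik = np.random.default_rng(1).random((nx, ny, nth))  # stand-in likelihood
belief = bayes_update(belief, lik)
most_likely = np.unravel_index(np.argmax(belief), belief.shape)
```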

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Active key-frame-based learning from demonstration

Author  Maya Cakmak, Andrea Thomaz

Video ID : 238

The robot Simon asks different types of questions in response to demonstrations given by the human teacher.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot's location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
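To give a flavor of the graph-optimization paradigm, the sketch below solves a tiny linear pose graph: 2-D robot positions linked by noisy relative-displacement measurements, solved in one least-squares step with the first pose anchored at the origin. Real pose-graph SLAM also estimates orientations and iterates (e.g., Gauss-Newton); the edge values here are invented:

```python
# Tiny linear instance of graph-based SLAM: estimate 2-D positions from
# relative-displacement edges by least squares, anchoring pose 0 at (0, 0).
import numpy as np

def solve_position_graph(n_poses, edges):
    """edges: list of (i, j, dx, dy) meaning measured p_j - p_i = (dx, dy)."""
    A = np.zeros((2 * (len(edges) + 1), 2 * n_poses))
    b = np.zeros(2 * (len(edges) + 1))
    A[0, 0] = A[1, 1] = 1.0  # prior anchoring pose 0 at the origin
    for k, (i, j, dx, dy) in enumerate(edges):
        r = 2 * (k + 1)
        A[r, 2 * j] = A[r + 1, 2 * j + 1] = 1.0
        A[r, 2 * i] = A[r + 1, 2 * i + 1] = -1.0
        b[r], b[r + 1] = dx, dy
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_poses, 2)

# Three poses in a line plus a loop-closure edge from pose 2 back to pose 0;
# the least-squares solution spreads the loop-closure residual over all edges.
edges = [(0, 1, 1.0, 0.0), (1, 2, 1.0, 0.1), (2, 0, -2.05, -0.05)]
print(solve_position_graph(3, edges))
```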

Pose graph compression for laser-based SLAM 2

Author  Cyrill Stachniss

Video ID : 450

This video illustrates pose graph compression, a technique for achieving long-term SLAM, as discussed in Sect. 46.5 of the Springer Handbook of Robotics, 2nd edn (2016). Reference: H. Kretzschmar, C. Stachniss: Information-theoretic compression of pose graphs for laser-based SLAM, Int. J. Robot. Res. 31(11), 1219-1230 (2012).

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control, ranging from abstract, high-level task specification down to fine-grained feedback at the task interface, is considered. Some of the important issues include modeling the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension of the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

The Mobipulator

Author  Siddhartha Srinivasa et al.

Video ID : 367

The video shows a dual-differential-drive robot that uses its wheels for both manipulation and locomotion. The front wheels move objects by vibrating asymmetrically, while the rear wheels move the robot and the object around the environment.