
Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known by its abbreviation, SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment: while navigating, the robot seeks to acquire a map of the environment, and at the same time it wishes to localize itself using that map. SLAM can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
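
The filtering paradigms above share a common structure: maintain a joint belief over the robot pose and the map, predict with a motion model, and update with a measurement model. As a minimal, hedged illustration (a one-dimensional robot and a single landmark, so the models are linear and the EKF reduces to a plain Kalman filter; all numbers are invented for the example):

```python
import numpy as np

# Minimal illustration of the filtering paradigm behind EKF-SLAM:
# a 1-D robot position x_r and one landmark position x_l are jointly
# estimated; a range measurement z = x_l - x_r correlates the two.
# All names and numbers are illustrative, not from the chapter.

mu = np.array([0.0, 5.0])          # mean: [robot, landmark]
P = np.diag([1.0, 100.0])          # large initial landmark uncertainty

# Prediction: robot moves u = 1.0 with motion noise variance q
u, q = 1.0, 0.1
mu = mu + np.array([u, 0.0])
P = P + np.diag([q, 0.0])

# Update: measured range z with measurement noise variance r
z, r = 4.2, 0.25
H = np.array([[-1.0, 1.0]])        # Jacobian of h(x) = x_l - x_r
y = z - (mu[1] - mu[0])            # innovation
S = H @ P @ H.T + r                # innovation covariance (1x1)
K = P @ H.T / S                    # Kalman gain (2x1)
mu = mu + (K * y).ravel()
P = (np.eye(2) - K @ H) @ P

print(mu)   # robot and landmark estimates after one step
print(P)    # off-diagonal terms are now nonzero: the estimates are correlated
```

The key SLAM-specific observation is visible in the final covariance: observing the landmark couples its estimate to the robot's, which is exactly what the EKF paradigm maintains (and what makes it quadratic in the number of landmarks).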

SLAM++: Simultaneous localization and mapping at the level of objects

Author  Andrew Davison

Video ID : 454

This video describes SLAM++, an object-based, 3-D SLAM system. Reference: R.F. Salas-Moreno, R.A. Newcombe, H. Strasdat, P.H.J. Kelly, A.J. Davison: SLAM++: Simultaneous localisation and mapping at the level of objects, Proc. IEEE Int. Conf. Computer Vision Pattern Recognition, Portland (2013).

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design are introduced, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the perception abilities required for interaction. Subsequently, motion-planning techniques for human environments are covered, including biomechanically safe, risk-metric-based, and human-aware planning. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Full-body, compliant humanoid COMAN

Author  Department of Advanced Robotics, Istituto Italiano di Tecnologia

Video ID : 624

The video shows different characteristics of the compliant humanoid COMAN, developed by the Department of Advanced Robotics (ADVR), Istituto Italiano di Tecnologia (IIT): i) full torque control, ii) compliant human-robot interaction, iii) joint impedance control, iv) exploration of natural dynamics, v) robust stabilization control including disturbance rejection, and vi) adaptation to inclined terrain.

Chapter 35 — Multisensor Data Fusion

Hugh Durrant-Whyte and Thomas C. Henderson

Multisensor data fusion is the process of combining observations from a number of different sensors to provide a robust and complete description of an environment or process of interest. Data fusion finds wide application in many areas of robotics such as object recognition, environment mapping, and localization.

This chapter has three parts: methods, architectures, and applications. Most current data fusion methods employ probabilistic descriptions of observations and processes and use Bayes’ rule to combine this information. This chapter surveys the main probabilistic modeling and fusion techniques, including grid-based models, Kalman filtering, and sequential Monte Carlo techniques, and briefly reviews a number of nonprobabilistic data fusion methods. Data fusion systems are often complex combinations of sensor devices, processing, and fusion algorithms. This chapter provides an overview of key principles in data fusion architectures from both a hardware and an algorithmic viewpoint. The applications of data fusion are pervasive in robotics and underlie the core problems of sensing, estimation, and perception. We highlight two example applications that bring out these features. The first describes a navigation or self-tracking application for an autonomous vehicle. The second describes an application in mapping and environment modeling.
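
The Bayesian fusion step mentioned above is easiest to see in the scalar Gaussian case, where Bayes' rule reduces to a precision-weighted average. A minimal sketch, with illustrative numbers:

```python
# A hedged sketch of Bayesian fusion of two independent Gaussian
# observations of the same scalar quantity (e.g., a range measured by
# two different sensors). Values are illustrative.

def fuse_gaussians(mu1, var1, mu2, var2):
    """Product of two Gaussian likelihoods (Bayes' rule with a flat prior):
    the fused mean is the precision-weighted average, and the fused
    variance is always smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # precisions
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

mu, var = fuse_gaussians(10.0, 4.0, 12.0, 1.0)
print(mu, var)   # → approximately (11.6, 0.8)
```

The fused mean lies closer to the more precise sensor, and the fused variance (0.8) is below both inputs; this is the elementary building block behind the Kalman-filter update surveyed in the chapter.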

The essential algorithmic tools of data fusion are reasonably well established. However, the development and use of these tools in realistic robotics applications is still developing.

Multisensor remote surface inspection

Author  S. Hayati, H. Seraji, B. Balaram, R. Volpe, B. Ivlev, G. Tharp, T. Ohm, D. Lim

Video ID : 639

The Jet Propulsion Laboratory, Pasadena, applies telerobotic inspection techniques to space platforms.

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.

6-DOF cable-suspended robot

Author  Clément Gosselin

Video ID : 44

This video demonstrates a 6-DOF cable-suspended robot acting in a large workspace to scan artefacts. References: 1. C. Gosselin, S. Bouchard: A gravity-powered mechanism for extending the workspace of a cable-driven parallel mechanism: Application to the appearance modeling of objects, Int. J. Autom. Technol. 4(4), 372-379 (2010); 2. J.D. Deschênes, P. Lambert, S. Perreault, N. Martel-Brisson, N. Zoso, A. Zaccarin, P. Hébert, S. Bouchard, C. Gosselin: A cable-driven parallel mechanism for capturing object appearance from multiple viewpoints, Proc. 6th Int. Conf. 3-D Digital Imaging and Modeling, Montréal (2007)

Chapter 49 — Modeling and Control of Wheeled Mobile Robots

Claude Samson, Pascal Morin and Roland Lenain

This chapter may be seen as a follow-up to Chap. 24, devoted to the classification and modeling of basic wheeled mobile robot (WMR) structures, and a natural complement to Chap. 47, which surveys motion planning methods for WMRs. A typical output of these methods is a feasible (or admissible) reference state trajectory for a given mobile robot, and a question which then arises is how to make the physical mobile robot track this reference trajectory via the control of the actuators with which the vehicle is equipped. The object of the present chapter is to provide elements of the answer to this question based on simple and effective control strategies.

The chapter is organized as follows. Section 49.2 is devoted to the choice of control models and the determination of modeling equations associated with the path-following control problem. In Sect. 49.3, the path-following and trajectory stabilization problems are addressed in the simplest case, when no requirement is made on the robot orientation (i.e., position control). In Sect. 49.4, the same problems are revisited for the control of both position and orientation. The previously mentioned sections consider an ideal robot satisfying the rolling-without-sliding assumption. In Sect. 49.5, we relax this assumption in order to take into account nonideal wheel–ground contact. This is especially important for field-robotics applications, and the proposed results are validated through full-scale experiments on natural terrain. Finally, a few complementary issues on the feedback control of mobile robots are briefly discussed in the concluding Sect. 49.6, with a list of commented references for further reading on WMR motion control.
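
As a rough illustration of the kind of control model the chapter builds on, the following sketch integrates the standard kinematic unicycle model and applies a simple proportional position controller (no orientation requirement, in the spirit of position control). The controller and its gains are illustrative, not the chapter's control laws:

```python
import math

# Kinematic unicycle model commonly used as a control model for WMRs:
# state (x, y, theta), inputs (v, omega). The go-to-point controller
# below is a textbook proportional law, shown only for illustration.

def step(state, v, omega, dt):
    """One Euler step of the unicycle kinematics."""
    x, y, th = state
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + omega * dt)

def go_to_point(state, goal, k_v=0.5, k_w=2.0):
    """Proportional position control: drive toward the goal point,
    steering to reduce the bearing error."""
    x, y, th = state
    dx, dy = goal[0] - x, goal[1] - y
    rho = math.hypot(dx, dy)                              # distance to goal
    alpha = math.atan2(dy, dx) - th                       # bearing error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return k_v * rho, k_w * alpha                         # (v, omega)

state = (0.0, 0.0, 0.0)
for _ in range(200):                 # 10 s at dt = 0.05
    v, w = go_to_point(state, (2.0, 1.0))
    state = step(state, v, w, 0.05)
print(state)                         # ends close to the goal (2, 1)
```

Note that this position controller says nothing about the final orientation; stabilizing position and orientation together for such nonholonomic vehicles is precisely the harder problem the later sections of the chapter address.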

Tracking of an omnidirectional frame with a unicycle-like robot

Author  Guillaume Artus, Pascal Morin, Claude Samson

Video ID : 243

This video shows an experiment performed in 2005 with a unicycle-like robot. A video camera mounted at the top of a robotic arm enabled estimation of the 2-D pose (position/orientation) of the robot with respect to a visual target consisting of three white bars. These bars served as an omnidirectional moving frame. The experiment demonstrated the capacity of the nonholonomic robot to track this omnidirectional frame in both position and orientation, based on the transverse function control approach.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs, and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and examine the designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is shown in Sect. 17.3. To design a limbed system of good performance, it is important to take into account actuation and control issues such as gravity compensation, limit cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we survey the diversity of limbed systems, covering odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.

Linear inverted pendulum mode

Author  Shuuji Kajita

Video ID : 512

Demonstration of the linear inverted pendulum mode (LIPM) and its application for biped walking control. This biped robot with parallel link legs was developed by Dr. Kajita and Dr. Tani.
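
The LIPM referred to in the video keeps the center of mass at a constant height z_c, so its horizontal motion obeys the linear equation x_ddot = (g / z_c) x, which admits a closed-form cosh/sinh solution. A hedged numerical sketch (all values illustrative):

```python
import math

# Linear inverted pendulum mode (LIPM) used in biped walking control:
# with the center of mass at constant height z_c, horizontal motion
# obeys x_ddot = (g / z_c) * x. Numbers here are illustrative.

g, z_c = 9.81, 0.8
w = math.sqrt(g / z_c)               # natural frequency of the LIPM

def lipm_closed_form(x0, v0, t):
    """Closed-form solution of x_ddot = w^2 x."""
    x = x0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = x0 * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v

# Euler integration should agree with the closed form for small dt
x, v, dt = 0.05, -0.1, 1e-4
for _ in range(int(0.5 / dt)):
    a = w * w * x                    # the LIPM dynamics
    x, v = x + v * dt, v + a * dt

xc, vc = lipm_closed_form(0.05, -0.1, 0.5)
print(x, xc)                         # numerical and analytic positions agree
```

The linearity is the whole point of the LIPM: it lets walking controllers predict the center-of-mass motion analytically instead of integrating the full multibody dynamics.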

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Human-robot jazz improvisation

Author  Guy Hoffman

Video ID : 236

The stage debut of Shimon, the robotic marimba player. Also, the world's first human-robot rendition of Duke Jordan's "Jordu", for human piano and robot marimba.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

An autonomous cucumber harvester

Author  Eldert J. van Henten, Jochen Hemming, Bart A.J. van Tuijl, J.G. Kornet, Jan Meuleman, Jan Bontsema, Erik A. van Os

Video ID : 308

The video demonstrates an autonomous cucumber harvester developed at Wageningen University and Research Centre, Wageningen, The Netherlands. The machine consists of a mobile platform which runs on rails that are commonly used in greenhouses in The Netherlands for internal transport, and which also serve as a hot-water heating system for the greenhouse. Harvesting requires functional steps such as the detection and localization of the fruit and assessment of its ripeness. In the case of the cucumber harvester, the different reflection properties in the near-infrared spectrum are exploited to detect green cucumbers in the green environment. Whether a cucumber was ready for harvest was determined from an estimate of its weight. Since cucumbers consist of about 95% water, the weight was estimated by estimating the volume of each fruit. Stereo-vision principles were then used to locate the fruits to be harvested in the 3-D environment. For that purpose, the camera was shifted 50 mm on a linear slide and two images of the same scene were taken and processed. A Mitsubishi RV-E2 manipulator was used to steer the gripper-cutter mechanism to the fruit and transport the harvested fruit back to a storage crate. Collision-free motion planning based on the A* algorithm was used to steer the manipulator during the harvesting operation. The cutter consisted of a parallel gripper that grabbed the peduncle of the fruit, i.e., the stem segment that connects the fruit to the main stem of the plant. The action of a suction cup then immobilized the fruit in the gripper. A special thermal cutting device was used to separate the fruit from the plant; its high temperature also prevented the potential transport of viruses from one plant to another during the harvesting process. The machine needed 65.2 s on average per successfully harvested cucumber, with an average success rate of 74.4%.
It was found to be a great advantage that the system was able to perform several harvest attempts on a single cucumber from different harvest positions of the robot, which improved the success rate considerably. Since not all attempts were successful, a cycle time of 124 s per harvested cucumber was measured under practical circumstances.
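
The stereo-vision step described above can be sketched with the standard pinhole triangulation relation Z = f·B/d, where B is the baseline (here the 50 mm camera shift) and d the disparity of the matched fruit feature. The focal length and disparity values below are hypothetical, chosen only to make the formula concrete:

```python
# Depth from a single-camera stereo pair obtained by shifting the
# camera along a linear slide, as in the cucumber harvester. Only the
# 50 mm baseline comes from the description; focal length and
# disparity are hypothetical.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

Z = stereo_depth(focal_px=800.0, baseline_m=0.05, disparity_px=40.0)
print(Z)   # → 1.0 m (approximately)
```

The small 50 mm baseline keeps the two views similar enough for reliable matching of fruit features, at the cost of reduced depth resolution for distant objects.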

Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with help from models based on knowledge in anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signals to discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion data and models, ranging from simulated human control to humanoid robot motion synthesis.

Human motion mapped to a humanoid robot

Author  Katsu Yamane

Video ID : 765

This video shows an example of a humanoid robot controlled using human motion. The robot is equipped with a tracking controller and a balance controller.

Chapter 39 — Cooperative Manipulation

Fabrizio Caccavale and Masaru Uchiyama

This chapter is devoted to cooperative manipulation of a common object by means of two or more robotic arms. The chapter opens with a historical overview of the research on cooperative manipulation, ranging from the early 1970s to very recent years. Kinematics and dynamics of robotic arms cooperatively manipulating a tightly grasped rigid object are presented in depth. As for the kinematics and statics, the chosen approach is based on the so-called symmetric formulation; fundamentals of dynamics and reduced-order models for closed kinematic chains are discussed as well. A few special topics, such as the definition of geometrically meaningful cooperative task-space variables, the problem of load distribution, and the definition of manipulability ellipsoids, are included to give the reader a complete picture of modeling and evaluation methodologies for cooperative manipulators. Then, the chapter presents the main strategies for controlling both the motion of the cooperative system and the interaction forces between the manipulators and the grasped object; in detail, fundamentals of hybrid force/position control, proportional–derivative (PD)-type force/position control schemes, feedback linearization techniques, and impedance control approaches are given. In the last section, further reading on advanced topics related to the control of cooperative robots is suggested; in detail, advanced nonlinear control strategies (i.e., intelligent control approaches, synchronization control, and decentralized control) are briefly discussed, and fundamental results on the modeling and control of cooperative systems possessing some degree of flexibility are briefly outlined.
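
As a hedged, one-dimensional illustration of the impedance control idea surveyed in the chapter, the sketch below simulates a controller that makes the manipulated degree of freedom behave like a desired mass–spring–damper under a constant contact force. All gains and the force value are illustrative, not taken from the chapter:

```python
# One-dimensional impedance behavior: the controller imposes the
# target dynamics  M * x_dd + D * x_d + K * x = f_ext  around the
# desired pose x = 0. Parameters are illustrative.

M, D, K = 1.0, 8.0, 16.0           # desired inertia, damping, stiffness
x, v, dt = 0.0, 0.0, 1e-3          # start at the desired pose, at rest
f_ext = 4.0                        # constant external contact force

for _ in range(5000):              # 5 s of Euler integration
    a = (f_ext - D * v - K * x) / M
    x, v = x + v * dt, v + a * dt

print(x)   # settles near f_ext / K = 0.25
```

With D^2 = 4*M*K the target dynamics are critically damped, so the pose yields smoothly to the contact force without oscillation; choosing K per direction is how such schemes trade off positioning accuracy against interaction-force regulation on the grasped object.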

Impedance control for cooperative manipulators

Author  Fabrizio Caccavale, Pasquale Chiacchio, Alessandro Marino, Luigi Villani

Video ID : 67

This is a video showing experiments on impedance control for cooperative manipulators. Reference: F. Caccavale, P. Chiacchio, A. Marino, L. Villani: Six-DOF impedance control of dual-arm cooperative manipulators, IEEE/ASME Trans. Mechatron. 13, 576-586 (2008).