
Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics therefore divide roughly into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After providing an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheels or tracks and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain; the basic approach in this section is therefore to compensate for the slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a significant advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, a stability analysis of such vehicles is introduced in Sect. 50.5. Based on this kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Section 50.7 summarizes the chapter.
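Terramechanics-based wheel-soil models of the kind referred to above typically build on Bekker's classical pressure-sinkage relation; as general background (the chapter's exact formulation may differ), the normal pressure $p$ under a wheel that has sunk to depth $z$ is modeled as

```latex
p = \left( \frac{k_c}{b} + k_\phi \right) z^{\,n}
```

where $b$ is the smaller dimension of the contact patch and $k_c$, $k_\phi$, and $n$ are soil parameters identified empirically, e.g., from plate-sinkage tests.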

Terradynamics of legged locomotion for traversal in granular media

Author  Chen Li, Tingnan Zhang, Daniel Goldman

Video ID : 186

The experiments in this video evaluate the effect of leg shape on the robot's dynamic behavior on soft sand. Several leg shapes were tested, ranging from straight legs to arcs of varying curvature.

Chapter 67 — Humanoids

Paul Fitzpatrick, Kensuke Harada, Charles C. Kemp, Yoshio Matsumoto, Kazuhito Yokoi and Eiichi Yoshida

Humanoid robots selectively imitate aspects of human form and behavior. Humanoids come in a variety of shapes and sizes, from complete human-size legged robots to isolated robotic heads with human-like sensing and expression. This chapter highlights significant humanoid platforms and achievements, and discusses some of the underlying goals behind this area of robotics. Humanoids tend to require the integration of many of the methods covered in detail within other chapters of this handbook, so this chapter focuses on distinctive aspects of humanoid robotics with liberal cross-referencing.

This chapter examines what motivates researchers to pursue humanoid robotics, and provides a taste of the evolution of this field over time. It summarizes work on legged humanoid locomotion, whole-body activities, and approaches to human–robot communication. It concludes with a brief discussion of factors that may influence the future of humanoid robots.

3-D, collision-free motion combining locomotion and manipulation by humanoid robot HRP-2

Author  Eiichi Yoshida

Video ID : 594

This video shows an example of 3-D, whole-body motion generation combining manipulation and dynamic biped locomotion, based on two-stage motion generation. In the first stage, the motion planner generates the upper-body motion together with a walking path for the bounding box of the lower body. The second stage overlays the desired upper-body motion on dynamically stable walking motions generated by a dynamic walking-pattern generator, based on preview control of the zero-moment point (ZMP) for a linear inverted-pendulum model. If collisions occur, the planner returns to the first stage and reshapes the trajectory until a collision-free motion is obtained.
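For background, the linear inverted-pendulum model used by such walking-pattern generators relates the ZMP to the center-of-mass (CoM) motion by the standard expression

```latex
p_x = x - \frac{z_c}{g}\,\ddot{x}
```

where $x$ is the horizontal CoM position, $z_c$ the (constant) CoM height, and $g$ the gravitational acceleration; preview control selects a CoM trajectory such that $p_x$ tracks a reference ZMP trajectory known some interval into the future. This is the standard formulation, not necessarily the exact notation used by the video's authors.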

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capabilities of existing high-payload, high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and co-action with humans, these robots have opened up new and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines, together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Justin: A humanoid upper body system for two-handed manipulation experiments

Author  Christoph Borst, Christian Ott, Thomas Wimböck, Bernhard Brunner, Franziska Zacharias, Berthold Bäuml

Video ID : 626

This video presents a humanoid two-arm system developed as a research platform for studying dexterous two-handed manipulation. The system is based on the modular DLR-Lightweight-Robot-III and the DLR-Hand-II. Two arms and hands are combined with a 3-DOF movable torso and a visual system to form a complete humanoid upper body. The versatility of the system is demonstrated through its mechanical design, several control concepts, the application of rapid prototyping and hardware-in-the-loop (HIL) development, as well as two-handed manipulation experiments and the integration of path-planning capabilities.

Chapter 43 — Telerobotics

Günter Niemeyer, Carsten Preusche, Stefano Stramigioli and Dongjun Lee

In this chapter we present an overview of the field of telerobotics with a focus on control aspects. To acknowledge some of the earliest contributions and motivations the field has provided to robotics in general, we begin with a brief historical perspective and discuss some of the challenging applications. Then, after introducing and classifying the various system architectures and control strategies, we emphasize bilateral control and force feedback. This particular area has seen intense research work in the pursuit of telepresence. We also examine some of the emerging efforts, extending telerobotic concepts to unconventional systems and applications. Finally, we suggest some further reading for a closer engagement with the field.

Single- and dual-arm supervisory and shared control

Author  Paul S. Schenker, Antal K. Bejczy, Won S. Kim

Video ID : 299

This video shows single- and dual-arm supervisory and shared teleoperation control for the remote repair of solar panels attached to a space satellite.

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with the modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with corresponding chapters in this handbook; hence, to avoid needless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault-detection systems when the corresponding underwater implementation is discussed. The modeling section focuses on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections describing the sensing and actuation systems follow. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly treated. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, close the chapter. The reader is referred to Chap. 25 for design issues.
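As general background for the coefficient-based approach mentioned here, underwater-vehicle dynamics are commonly written in the compact matrix form (following Fossen's notation; the chapter's own symbols may differ):

```latex
M\dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu + g(\eta) = \tau, \qquad \dot{\eta} = J(\eta)\,\nu
```

where $\nu$ collects the body-fixed linear and angular velocities, $\eta$ the position and attitude, $M$ the rigid-body plus added-mass inertia, $C(\nu)$ the Coriolis and centripetal terms, $D(\nu)$ the hydrodynamic damping, $g(\eta)$ the restoring (gravity and buoyancy) forces, and $\tau$ the control forces and moments.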

Saturation-based, nonlinear, depth-and-yaw control of an underwater vehicle

Author  Eduardo Campos-Mercado, Ahmed Chemori, Vincent Creuze, Jorge Torres-Munoz, Rogelio Lozano

Video ID : 268

This video demonstrates the robustness of a saturation-based, nonlinear controller for underwater vehicles. The performance of yaw and depth control of the L2ROV prototype is maintained even when the buoyancy and the damping are changed. This work was conducted by the LIRMM (University Montpellier 2, France) and the LAFMIA (CINVESTAV Mexico), in collaboration with the Tecnalia France Foundation, and was supported by the French-Mexican PCP program and by the Region Languedoc-Roussillon.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, the robot path can be computed, and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end-effector or the 3-D coordinates of the graspable points on the object.
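To make the triangulation step concrete, here is a minimal midpoint-method sketch in pure Python (an illustrative toy, not the chapter's algorithm; it assumes the two camera poses, and hence the viewing rays, are already known):

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a 3-D point from two viewing rays (midpoint method).

    c1, c2 : camera centers as 3-element lists.
    d1, d2 : ray directions through the matched pixels (need not be unit).
    Finds parameters s, t minimizing |(c1 + s*d1) - (c2 + t*d2)| via the
    2x2 normal equations, then returns the midpoint of the closest segment.
    """
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    r = [x - y for x, y in zip(c2, c1)]        # baseline c2 - c1
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b                      # approaches 0 for parallel rays
    s = (c * e - b * f) / denom
    t = (b * e - a * f) / denom
    p1 = [x + s * y for x, y in zip(c1, d1)]   # closest point on ray 1
    p2 = [x + t * y for x, y in zip(c2, d2)]   # closest point on ray 2
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]
```

In practice the rays come from back-projecting matched pixels through calibrated cameras; when the relative pose between the views is known only up to scale, the reconstructed point inherits that scale ambiguity.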

LIBVISO: Visual odometry for intelligent vehicles

Author  Andreas Geiger

Video ID : 122

This video demonstrates the performance of a visual-odometry algorithm on the vehicle Annieway (VW Passat). Visual odometry is the estimation of a video camera's 3-D motion and orientation, here based purely on stereo vision. The blue trajectory is the motion estimated by visual odometry, and the red trajectory is the ground truth from a high-precision OXTS RT3000 GPS+IMU system. The software is available from

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known by its abbreviation SLAM. SLAM addresses a core perception problem of a robot navigating an unknown environment: while navigating, the robot seeks to acquire a map of the environment, and at the same time it wishes to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot's location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.

Hierarchical optimization for pose graphs on manifolds

Author  Giorgio Grisetti

Video ID : 445

This video illustrates graph-based SLAM, as described in Sect. 46.3.3 of the Springer Handbook of Robotics, 2nd edn (2016), using the HOG-Man algorithm. Reference: G. Grisetti, R. Kümmerle, C. Stachniss, U. Frese, C. Hertzberg: Hierarchical optimization on manifolds for online 2-D and 3-D mapping, IEEE Int. Conf. Robot. Autom. (ICRA), Anchorage (2010), pp. 273-278; doi: 10.1109/ROBOT.2010.5509407.
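To give a minimal feel for graph-based SLAM, the sketch below optimizes a 1-D pose graph with a single linear least-squares solve (a toy illustration only; real 2-D and 3-D pose graphs, like those in the video, require iterated linearization on manifolds):

```python
def solve_linear(H, g):
    """Solve H x = g by Gaussian elimination with partial pivoting."""
    n = len(g)
    M = [row[:] + [g[k]] for k, row in enumerate(H)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x


def optimize_pose_graph_1d(n, edges, anchor=0.0):
    """Least-squares optimization of a 1-D pose graph.

    n     : number of poses; pose 0 is fixed at `anchor` (gauge constraint).
    edges : list of (i, j, z) relative measurements modeling z ~ x_j - x_i.
    The 1-D problem is linear, so one normal-equations solve is exact
    (2-D/3-D SLAM needs repeated linearization instead).
    """
    m = n - 1                                  # free variables x_1 .. x_{n-1}
    H = [[0.0] * m for _ in range(m)]          # information matrix A^T A
    g = [0.0] * m                              # right-hand side A^T b
    for i, j, z in edges:
        row = [0.0] * m                        # Jacobian row of residual x_j - x_i - z
        if j > 0:
            row[j - 1] += 1.0
        if i > 0:
            row[i - 1] -= 1.0
        rhs = z + (anchor if i == 0 else 0.0) - (anchor if j == 0 else 0.0)
        for a in range(m):
            if row[a] == 0.0:
                continue
            g[a] += row[a] * rhs
            for b in range(m):
                H[a][b] += row[a] * row[b]
    return [anchor] + solve_linear(H, g)
```

For example, chaining measurements 0→1 and 1→2 of 1.0 each together with a direct 0→2 measurement of 2.1 yields poses of roughly 0.0, 1.033, and 2.067: the 0.1 inconsistency is spread evenly across the three constraints.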

Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundations of surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface-water, and underwater applications. Surveillance literally means to watch from above; surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one's assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator's robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security and presents some conclusions, followed by references.

Surveillance by a drone

Author  Bernd Lutz

Video ID : 554

The MULTIROTOR is an innovative measuring instrument that can also be used for surveillance. Besides delivering very stable pictures, the MULTIROTOR is able to fly fully automated measurement flights with a ground resolution as fine as 1 mm and equally impressive flight stability at wind speeds of 10-15 m/s.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards, which may be present in the form of radiation, toxic contamination, falling objects, or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance, relative to human dexterity and speed, imposed by the limits of today's telepresence and teleoperation technology, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediation of nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and the clearance of landmines and unexploded ordnance still present many unsolved problems.

IED hunters

Author  James P. Trevelyan

Video ID : 572

The video shows the work of route-clearance teams in Afghanistan. It has been included because it gives researchers plenty of examples of the realistic field conditions under which explosive-ordnance clearance is done in Afghanistan. It is essential for researchers to have an accurate appreciation of real field conditions before undertaking expensive research projects, and to understand how easily insurgent forces can adapt to and defeat technological solutions that cost tens of millions of dollars to develop. Read this caption carefully and then watch the video with it in mind. Better-quality blast-protected vehicles give the teams more confidence to handle challenging tasks. You will also see that the improvised explosive devices (IEDs) used by insurgents are typically made from the unexploded ordnance (UXO) which the demining teams are trying to remove. Between 15% (a typical failure rate for high-quality US-made ammunition) and 70% (old Russian-designed ammunition) of munitions fail to explode when used. These UXOs lie in the ground in an, at best, semi-stable state, and some explode accidentally. Insurgents collect and attempt to disarm them, then set them up with remotely operated or vehicle-triggered detonation fuses. This is why the demining teams came to be seen as legitimate targets by insurgents: they were removing the explosive devices the insurgency needed to fight those it regarded as its enemies. Although not explicitly acknowledged in the commentary, the video also demonstrates one of the many methods insurgents used to adapt their techniques and defeat the highly advanced technologies available to the ISAF teams. By laying multiple devices in different locations, with different triggering devices and different deployment methods, the insurgents soon learned what the ISAF teams could and could not detect.
Every blast indicated a device that was not detected in advance by the ISAF team; every device removed by the team indicated a device that was detected. In this way, the insurgents rapidly learned how to deploy undetectable devices that maximized their destructive power.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Autonomous robot skill acquisition

Author  Scott Kuindersma, George Konidaris

Video ID : 669

This video demonstrates autonomous skill acquisition by a robot acting in a constrained environment called the "Red Room". The environment consists of buttons, levers, and switches, all located at points of interest designated by ARTags. The robot can navigate to these locations and perform primitive manipulation actions, some of which affect the physical state of the maze (e.g., by opening or closing a door).