
Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still face many challenges, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards, which may take the form of radiation, toxic contamination, falling objects, or potential explosions. The frontier of commercial feasibility is marked by technology that specialized engineering companies can develop and sell without active help from researchers. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even though the limits of today's telepresence and teleoperation technology typically impose a tenfold disadvantage in manipulation performance relative to human dexterity and speed, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier: fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance still present many unsolved problems.

iRobots used to examine the interior of the Fukushima power plant

Author  James P. Trevelyan

Video ID : 579

Brief videos of robots in operation at the Fukushima plant, with English commentary from contemporary news sources.

Chapter 25 — Underwater Robots

Hyun-Taek Choi and Junku Yuh

Covering about two-thirds of the Earth's surface, the ocean is an enormous system that dominates processes on the planet and holds abundant living and nonliving resources, such as fish and subsea gas and oil. It therefore has a great effect on our lives on land, and the importance of the ocean for the future existence of all human beings cannot be overemphasized. However, we have not been able to explore the full depths of the ocean and do not fully understand its complex processes. For these reasons, underwater robots, including remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs), have received much attention as effective tools for exploring the ocean and efficiently utilizing its resources. This chapter focuses on design issues of underwater robots, including major subsystems such as mechanical systems, power sources, actuators and sensors, computers and communications, software architecture, and manipulators, while Chap. 51 covers modeling and control of underwater robots.

First recorded dive of the deep-sea ROV Hamire at a depth of 5,882 m

Author  Hyun-Taek Choi

Video ID : 796

This video shows the first deep-sea trial of the ROV Hamire developed by KRISO (Korea Research Institute of Ships and Ocean Engineering) at a depth of 5,882 m.

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
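The first-order optimization-based resolution schemes mentioned above are typically built on the Jacobian pseudoinverse with a secondary objective projected into the task null space. As a minimal illustrative sketch (not from the chapter itself, and using a hypothetical planar 3-link arm as the example robot):

```python
import numpy as np

def planar_jacobian(q, l=(1.0, 1.0, 1.0)):
    """Jacobian of the end-effector position of a planar 3-link arm
    (2-D task, 3 joints, hence one redundant degree of freedom)."""
    s = np.cumsum(q)  # absolute link angles q1, q1+q2, q1+q2+q3
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(l[j] * np.sin(s[j]) for j in range(i, 3))
        J[1, i] = sum(l[j] * np.cos(s[j]) for j in range(i, 3))
    return J

def resolve_redundancy(J, xdot, qdot0):
    """q_dot = J+ x_dot + (I - J+ J) qdot0: the minimum-norm task solution
    plus a secondary joint velocity projected into the null space of J,
    so it does not disturb the task-space motion."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J  # null-space projector
    return J_pinv @ xdot + N @ qdot0

q = np.array([0.3, 0.5, -0.2])
J = planar_jacobian(q)
qdot = resolve_redundancy(J, xdot=np.array([0.1, 0.0]),
                          qdot0=np.array([0.0, 0.0, 0.5]))
# The task-space velocity J @ qdot still matches the commanded xdot,
# even though qdot also pursues the secondary objective.
print(np.allclose(J @ qdot, [0.1, 0.0]))  # → True
```

The secondary velocity `qdot0` would normally be the gradient of a performance criterion (e.g., manipulability or joint-limit distance); here it is just an arbitrary vector to show the null-space projection at work.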

Visual servoing control of Baxter robot arms with obstacle avoidance using kinematic redundancy

Author  Chenguang Yang

Video ID : 819

A visual servoing controller combined with an obstacle-avoidance strategy based on kinematic redundancy has been developed and tested on a Baxter robot. A Point Grey Bumblebee2 stereo camera is used to obtain the 3-D point cloud of a target object. Object-tracking tasks are allocated between the two arms by identifying the workspaces of the dual arms and tracing the object location in a convex hull of the workspace. By employing a simulated artificial robot as a parallel system, together with a task-switching weight factor, the robot is able to return smoothly to its natural pose in the absence of the obstacle. Two sets of experiments were carried out to demonstrate the effectiveness of the developed servoing control method.

Chapter 43 — Telerobotics

Günter Niemeyer, Carsten Preusche, Stefano Stramigioli and Dongjun Lee

In this chapter we present an overview of the field of telerobotics with a focus on control aspects. To acknowledge some of the earliest contributions and motivations the field has provided to robotics in general, we begin with a brief historical perspective and discuss some of the challenging applications. Then, after introducing and classifying the various system architectures and control strategies, we emphasize bilateral control and force feedback. This particular area has seen intense research work in the pursuit of telepresence. We also examine some of the emerging efforts, extending telerobotic concepts to unconventional systems and applications. Finally, we suggest some further reading for a closer engagement with the field.

JPL dual-arm telerobot system

Author  Antal K. Bejczy, Zoltan Szakaly

Video ID : 298

This video shows a dual-arm, force-reflecting telerobotic system developed by the Jet Propulsion Laboratory for space teleoperation applications of kinematically and dynamically different slave systems. Presented at ICRA 1990.

Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics are therefore roughly divided into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After providing an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheels/tracks and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain, so the basic approach in this section is to compensate for slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a clear advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, stability analysis of such vehicles is introduced in Sect. 50.5. Based on the kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Sect. 50.7 summarizes this chapter.
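The slip-compensation idea described above can be illustrated with the standard longitudinal slip ratio from terramechanics. The sketch below is not from the chapter; it is a minimal example assuming a simple rigid-wheel model and an externally estimated slip ratio:

```python
def slip_ratio(r, omega, v):
    """Longitudinal slip ratio of a driven wheel: s = (r*omega - v) / (r*omega),
    where r is wheel radius, omega angular velocity, v forward speed."""
    return (r * omega - v) / (r * omega)

def compensated_wheel_speed(v_des, r, s_est):
    """Scale the commanded angular velocity so that, at the estimated slip
    ratio s_est, the wheel still achieves the desired forward speed v_des."""
    return v_des / (r * (1.0 - s_est))

r = 0.15      # wheel radius [m] (illustrative value)
s_est = 0.2   # estimated slip on loose soil (illustrative value)
omega = compensated_wheel_speed(1.0, r, s_est)  # command for 1 m/s
v_actual = r * omega * (1.0 - s_est)            # speed if the estimate holds
print(abs(v_actual - 1.0) < 1e-9)               # → True
```

In practice the slip estimate would come from comparing wheel odometry against an independent velocity measurement, and lateral sideslip would be handled analogously through steering corrections.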

Autonomous sub-tracks control

Author  Field Robotics Group, Tohoku University

Video ID : 190

The Field Robotics Group at Tohoku University developed an autonomous controller for the tracked vehicle Kenaf that generates terrain-reflective motions of its sub-tracks. Terrain information is obtained using laser range sensors located on both sides of the Kenaf. The video clip shows the basic function of the controller in a simple environment.

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Touch-based, door-handle localization and manipulation

Author  Anna Petrovskaya

Video ID : 723

The harmonic arm robot localizes the door handle by touching it. 3-DOF localization is performed in this video. Once the localization is complete, the robot is able to grasp and manipulate the handle. The mobile platform is teleoperated, whereas the robotic arm motions are autonomous. A 2-D model of the door and handle was constructed from hand measurements for this experiment.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview on the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architecture for pHRI.

Reach and grasp by people with tetraplegia using a neurally-controlled robotic arm

Author  Leigh R. Hochberg, Daniel Bacher, Beata Jarosiewicz, Nicolas Y. Masse, John D. Simeral, Jörn Vogel, Sami Haddadin, Jie Liu, Sydney S. Cash, Patrick van der Smagt, John P. Donoghue

Video ID : 618

The authors have shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices. One of the study participants, implanted with the sensor five years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, the results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with the modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with corresponding chapters presented in this handbook; hence, to avoid useless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when the corresponding underwater implementation is discussed. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections describing the sensor and actuating systems are then given. Autonomous underwater vehicles require the implementation of mission control systems as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for design issues.
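The coefficient-based modeling approach mentioned above is conventionally written, in the widely used matrix form for marine vehicles, as

```latex
M \dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu + g(\eta) = \tau ,
\qquad
\dot{\eta} = J(\eta)\,\nu ,
```

where $\nu$ collects the body-fixed linear and angular velocities, $\eta$ the position and orientation in the inertial frame, $M$ the rigid-body plus added-mass inertia matrix, $C(\nu)$ the Coriolis and centripetal terms, $D(\nu)$ the hydrodynamic damping, $g(\eta)$ the restoring (gravity and buoyancy) forces, and $\tau$ the control forces and moments. This compact form is a standard formulation in the marine-robotics literature rather than a quotation from the chapter itself.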

Neptus command and control infrastructure

Author  Laboratório de Sistemas e Tecnologias Subaquáticas - Porto University

Video ID : 324

See how Neptus is used to plan, simulate, monitor, and review missions performed by autonomous vehicles. Neptus, originally developed at the Underwater Systems and Technology Laboratory, is open-source software available from http://github.com/LSTS/neptus (NOPTILUS project; NOPTILUS is funded by the European Community's Seventh Framework Programme, ICT-FP).

Chapter 0 — Preface

Bruno Siciliano, Oussama Khatib and Torsten Kröger

The preface of the Second Edition of the Springer Handbook of Robotics contains three videos about the creation of the book and using its multimedia app on mobile devices.

Using the multimedia app on mobile devices

Author  Torsten Kröger

Video ID : 843

The video illustrates how to use the multimedia app for the Second Edition of the Springer Handbook of Robotics. Using a smartphone or tablet PC, users can access each of the more than 700 videos while reading the printed or e-book version of the handbook.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
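The two-view triangulation step mentioned above is commonly solved with the linear (DLT) method. The following sketch is not from the chapter; it is a minimal example that assumes the two camera projection matrices are already known (with unknown relative motion, the reconstruction would only be determined up to scale):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point X from its pixel
    projections x1, x2 under known 3x4 camera matrices P1, P2.
    Each image measurement contributes two rows of a homogeneous system
    A X = 0; the solution is the right singular vector of A with the
    smallest singular value."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Perspective projection of a 3-D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative setup: first camera at the origin, second translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true))  # → True
```

With noisy measurements the DLT result is typically used to initialize a nonlinear refinement that minimizes reprojection error.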

LIBVISO: Visual odometry for intelligent vehicles

Author  Andreas Geiger

Video ID : 122

This video demonstrates the performance of a visual-odometry algorithm on the vehicle Annieway (a VW Passat). Visual odometry is the estimation of a video camera's 3-D motion and orientation, based purely on stereo vision in this case. The blue trajectory is the motion estimated by visual odometry, and the red trajectory is the ground truth from a high-precision OXTS RT3000 GPS+IMU system. The software is available from http://www.cvlibs.net/