
Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still face many challenges, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards, which may take the form of radiation, toxic contamination, falling objects, or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even though the limits of today's telepresence and teleoperation technology typically impose a tenfold disadvantage in manipulation performance relative to human dexterity and speed, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier: fire fighting, remediation of nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance all present many unsolved problems.

iRobot robots inspecting the interior of the Fukushima power plant

Author  James P. Trevelyan

Video ID : 580

A video timestamped April 17, 2011, with English commentary.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
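The two-view case above can be made concrete with a small sketch. Assuming two calibrated cameras with known poses (a hypothetical setup invented for illustration, not an algorithm from this chapter), each image observation is back-projected to a 3-D ray, and the scene point is recovered at the closest approach of the two rays; this midpoint method is a simple alternative to linear least-squares triangulation:

```python
# Toy two-view triangulation: back-project each observation to a ray
# and take the point of closest approach of the two rays.
# All geometry below is an illustrative assumption.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between rays o1 + s*d1 and o2 + t*d2."""
    w = [p - q for p, q in zip(o1, o2)]
    a11, a22, a12 = dot(d1, d1), dot(d2, d2), dot(d1, d2)
    den = a11 * a22 - a12 * a12          # ~0 when the rays are parallel
    # Closed-form solution of the 2x2 normal equations for s and t.
    s = (a12 * dot(w, d2) - a22 * dot(w, d1)) / den
    t = (a11 * dot(w, d2) - a12 * dot(w, d1)) / den
    p1 = [o + s * d for o, d in zip(o1, d1)]
    p2 = [o + t * d for o, d in zip(o2, d2)]
    return [(a + b) / 2.0 for a, b in zip(p1, p2)]
```

For a point at (1, 1, 4) seen by a camera at the origin and a second camera translated by (1, 0, 0), both with identity rotation, the normalized observations (0.25, 0.25) and (0, 0.25) back-project to rays that meet exactly at the point. Note that without known camera translation, the reconstruction is only defined up to the scale factor mentioned above.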

Parallel tracking and mapping for small AR workspaces (PTAM)

Author  Georg Klein, David Murray

Video ID : 123

Video results for an augmented-reality tracking system: a computer tracks a camera and builds a map of the environment in real time, which can then be used to overlay virtual graphics. Presented at the ISMAR 2007 conference.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses about biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special attention to the transfer between the two worlds.
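The evolutionary loop underlying this method can be sketched in a few lines. In this toy illustration (the task, fitness function, and all parameters are invented for the sketch, not taken from the chapter), the "genotype" is a single controller gain, fitness is measured by running the controller in a trivial simulated task, and the population is improved by truncation selection and Gaussian mutation:

```python
import random

def fitness(gain, goal=5.0, steps=10):
    # Toy "robot": a proportional controller driving state x toward a goal.
    x = 0.0
    for _ in range(steps):
        x += gain * (goal - x)
    return -abs(goal - x)  # higher (closer to 0) is better

def evolve(pop_size=20, generations=50, sigma=0.1, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-1.0, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        # Replace the worse half with mutated copies of the better half.
        pop = parents + [p + rng.gauss(0.0, sigma) for p in parents]
    return max(pop, key=fitness)
```

Real Evolutionary Robotics experiments replace the scalar gain with neural-network weights (and possibly morphological parameters), and the toy simulation with a physics simulator or the physical robot itself, which is where the simulation-to-reality transfer discussed in the chapter becomes critical.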

Evolution of cooperative and communicative behaviors

Author  Stefano Nolfi, Joachim De Greeff

Video ID : 117

Two e-puck robots are evolved for the capacity to reach, and to move back and forth between, two circular areas. Each robot is provided with infrared sensors, a camera with which it can perceive the relative position of the other robot, a microphone with which it can sense the sound signals produced by the other robot, two motors that set the desired speeds of the two wheels, and a speaker to emit sound signals. The evolved robots coordinate and cooperate on the basis of an evolved communication system that includes both implicit signals (the relative positions the robots assume in the environment, perceived through their cameras) and explicit signals (sounds of varying frequencies emitted through their speakers and perceived through their microphones).

Chapter 79 — Robotics for Education

David P. Miller and Illah Nourbakhsh

Educational robotics programs have become popular in most developed countries and are becoming more and more prevalent in the developing world as well. Robotics is used to teach problem solving, programming, design, physics, math, and even music and art to students at all levels of their education. This chapter provides an overview of some of the major robotics programs along with the robot platforms and the programming environments commonly used. As with robot systems used in research, the hardware and software are under constant development and upgrade, so this chapter provides a snapshot of the technologies in use at this time. The chapter concludes with a review of the assessment strategies that can be used to determine whether a particular robotics program is benefiting students in the intended ways.

Elementary robotics challenge: Soldier Creek Elementary

Author  Sherry Admire

Video ID : 240

This video shows some of the runs by Soldier Creek Elementary School participants in a Norman, Oklahoma event of the Junior Botball Challenge (http://www.juniorbotballchallenge.org) in March 2014. These elementary-school students wrote their own C code to guide their robots around the can obstacle and to maneuver them to push a large number of cans into the starting box.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop, and, generally speaking, ends at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion of specific improvements to current technology and paths to commercialization.

Autonomous utility vehicle - R Gator

Author  John Reid

Video ID : 93

The John Deere R Gator is an unmanned ground vehicle capable of operating in urban and off-road terrain with a large payload capacity to carry supplies or a marsupial robot. The R Gator can operate in teleoperation mode, waypoint navigation, direction drive, and path playback. The perception system on the vehicle is able to detect both positive and negative (holes) obstacles in off-road terrain and is capable of driving through tall vegetation while maintaining safety. The remote operator is able to send commands to the R Gator wirelessly through an intuitive, video-game-style wearable interface, and can see video and telematics from the R Gator in a heads-up display. This video shows the R Gator performing various missions in off-road terrain in a surrogate agricultural environment. Screen shots from the operator display are shown, including an overhead map with the waypoint path visible, the video views available to the operator, and telematics. The video also shows the R Gator detecting and avoiding fence posts and a negative obstacle, both of which are quite common in orchards.

Chapter 26 — Flying Robots

Stefan Leutenegger, Christoph Hürzeler, Amanda K. Stowers, Kostas Alexis, Markus W. Achtelik, David Lentink, Paul Y. Oh and Roland Siegwart

Unmanned aircraft systems (UASs) have drawn increasing attention recently, owing to advancements in related research, technology, and applications. While having been deployed successfully in military scenarios for decades, civil use cases have lately been tackled by the robotics research community.

This chapter overviews the core elements of this highly interdisciplinary field; the reader is guided through the design process of aerial robots for various applications, starting with a qualitative characterization of the different types of UAS. Design and modeling are closely related, forming a typically iterative process of drafting and analyzing the related properties. Therefore, we survey aerodynamics and dynamics, as well as their application to fixed-wing, rotary-wing, and flapping-wing UAS, including related analytical tools and practical guidelines. Respecting use-case-specific requirements and core autonomous robot demands, we finally provide guidelines for the related system integration challenges.

DelFly II in hover

Author  David Lentink

Video ID : 493

This video shows a DelFly flapping-winged vehicle flying in hover. The vehicle flaps at approximately 14 Hz. The video was filmed at high speed and slowed down. For more information please see D. Lentink, S.R. Jongerius, N.L. Bradshaw: Flying Insects and Robots (Springer, Berlin, Heidelberg 2009).

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid useless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections describing the sensor and actuation systems are then given. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordinated control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for the design issues.

Dive with REMUS

Author  Woods Hole Oceanographic Institution

Video ID : 87

Travel with a REMUS 100 autonomous, underwater vehicle on a dive off the Carolina coast to study the connection between the physical processes in the ocean at the edge of the continental shelf and the things that live there. Video footage by Chris Linder. Funding by the Department of the Navy, Science & Technology; and Centers for Ocean Sciences Education Excellence (COSEE).

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Metamorphic robotic system

Author  Amit Pamecha, Gregory Chirikjian

Video ID : 198

This video describes a metamorphic robotic system composed of many robotic modules, each of which has the ability to locomote over its neighbors. Mechanical coupling enables the robots to interact with each other.

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.
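One consequence of the closed-loop architecture is that the inverse kinematics of a parallel robot is typically straightforward, in contrast to the serial case: each actuated joint variable follows directly from the platform pose. A minimal sketch for a planar 3-RPR mechanism (the anchor geometry below is an arbitrary illustrative choice, not taken from the chapter) computes the three actuated leg lengths from a pose (x, y, phi):

```python
import math

# Hypothetical planar 3-RPR parallel robot: three fixed base anchors A_i
# and three platform anchor points b_i expressed in the moving-platform
# frame. Each prismatic leg connects A_i to the world position of b_i.
BASE = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]         # A_i, fixed frame
PLATFORM = [(-0.5, -0.3), (0.5, -0.3), (0.0, 0.4)]  # b_i, platform frame

def inverse_kinematics(x, y, phi):
    """Leg lengths for platform pose (x, y, phi): rho_i = |A_i - (p + R(phi) b_i)|."""
    c, s = math.cos(phi), math.sin(phi)
    lengths = []
    for (ax, ay), (bx, by) in zip(BASE, PLATFORM):
        # World position of the i-th platform anchor: p + R(phi) * b_i
        px = x + c * bx - s * by
        py = y + s * bx + c * by
        lengths.append(math.hypot(px - ax, py - ay))
    return lengths
```

The forward problem, recovering the pose from measured leg lengths, is far harder for parallel mechanisms and generally admits multiple solutions, which is one reason their analysis differs so much from that of serial manipulators.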

6-DOF statically balanced parallel robot

Author  Clément Gosselin

Video ID : 48

This video demonstrates a 6-DOF statically balanced parallel robot. References: 1. C. Gosselin, J. Wang, T. Laliberté, I. Ebert-Uphoff: On the design of a statically balanced 6-DOF parallel manipulator, Proc. IFToMM Tenth World Congress Theory of Machines and Mechanisms, Oulu (1999) pp. 1045-1050; 2. C. Gosselin, J. Wang: On the design of statically balanced motion bases for flight simulators, Proc. AIAA Modeling and Simulation Technologies Conf., Boston (1998), pp. 272-282; 3. I. Ebert-Uphoff, C. Gosselin: Dynamic modeling of a class of spatial statically-balanced parallel platform mechanisms, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Detroit (1999), Vol. 2, pp. 881-888

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Task-consistent obstacle avoidance for mobile manipulation

Author  Oliver Brock, Oussama Khatib, Sriram Viji

Video ID : 784

This robot avoids moving obstacles through real-time path modification using an elastic-strip framework. However, real-time path modification can interfere with task execution. The proposed task-consistent elastic planning method ensures task execution while achieving obstacle avoidance.