
Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.

Experience-based learning of high-level task representations: Reproduction

Author  Monica Nicolescu

Video ID : 28

This video, recorded in the early 2000s, shows a Pioneer robot visiting a number of targets in a certain order based on a demonstration provided by a human user. The robot's training stage is shown in a related video in this chapter. References: 1. M. Nicolescu, M.J. Mataric: Experience-based learning of task representations from human-robot interaction, Proc. IEEE Int. Symp. Comput. Intell. Robot. Autom., Banff (2001) pp. 463-468; 2. M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybern. A 31(5), 419-430 (2001)

Chapter 59 — Robotics in Mining

Joshua A. Marshall, Adrian Bonchis, Eduardo Nebot and Steven Scheding

This chapter presents an overview of the state of the art in mining robotics, from surface to underground applications, and beyond. Mining is the practice of extracting resources for utilitarian purposes. Today, the international business of mining is a heavily mechanized industry that exploits the use of large diesel and electric equipment. These machines must operate in harsh, dynamic, and uncertain environments such as, for example, in the high arctic, in extreme desert climates, and in deep underground tunnel networks where it can be very hot and humid. Applications of robotics in mining are broad and include robotic dozing, excavation, and haulage, robotic mapping and surveying, as well as robotic drilling and explosives handling. This chapter describes how many of these applications involve unique technical challenges for field roboticists. However, there are compelling reasons to advance the discipline of mining robotics, which include not only miners' desire to improve productivity and safety and to lower costs, but also the need to meet product demands by accessing orebodies situated in increasingly challenging conditions.

Autonomous tramming

Author  Oscar Lundhede

Video ID : 142

This video shows one example of the current state of the art in LHD automation for underground mining operations. The Atlas Copco Scooptram Automation system depicted in this video automatically hauls and dumps material from underground draw points.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation have increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop, tasks that, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion of specific improvements to current technology and paths to commercialization.

Autonomous orchard tractors

Author  John Reid

Video ID : 26

Mowing and spraying are two common tasks in orchard environments that require the use of tractors. These tasks take significant time and resources, and spraying, in particular, can be dangerous for the operators, all of which suggests benefits from their automation. This video shows two John Deere tractors driving autonomously in an orange orchard. The first tractor is performing a spraying task, using its perception sensors for obstacle detection and to control the amount of spray applied to the trees, such that each tree receives only the minimum amount of chemicals necessary for its size. The second tractor is performing a mowing task, keeping the grass short to improve access to the orchard and reduce competition for resources with the trees.

Chapter 70 — Human-Robot Augmentation

Massimo Bergamasco and Hugh Herr

The development of robotic systems capable of sharing the load of heavy tasks with humans has been one of the primary objectives in robotics research. At present, strong interest in the robotics community is directed toward fulfilling this objective with so-called wearable robots, a class of robotic systems that are worn and directly controlled by the human operator. Wearable robots, together with powered orthoses that exploit robotic components and control strategies, can also serve as an immediate resource for helping humans restore manipulation and/or walking functionalities.

The present chapter deals with wearable robotic systems capable of providing different levels of functional and/or operational augmentation to human beings for specific functions or tasks. Prostheses, powered orthoses, and exoskeletons are described for upper-limb, lower-limb, and whole-body structures. State-of-the-art devices, together with their functionalities and main components, are presented for each class of wearable system. Critical design issues and open research aspects are reported.

Arm-Exos

Author  Massimo Bergamasco

Video ID : 148

The video details the Arm-Exos and, in particular, its capability for tracking the operator's motions and for rendering the contact forces in a simple, demonstrative, virtual environment.

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid useless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections dealing with the description of the sensor and the actuating systems are then given. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for the design issues.

REMUS SharkCam: The hunter and the hunted

Author  Woods Hole Oceanographic Institution

Video ID : 90

In 2013, a team from the Oceanographic Systems Lab at the Woods Hole Oceanographic Institution took a specially equipped REMUS SharkCam underwater vehicle to Guadalupe Island in Mexico to film great white sharks in the wild. They captured more action than they bargained for.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration to the transfer between the two worlds.

Evolved walking in octopod

Author  Phil Husbands

Video ID : 372

Evolved walking behaviors on an octopod robot, showing multiple gaits and obstacle avoidance. The behaviors were evolved in a minimal simulation by Nick Jakobi at Sussex University and, as the video shows, transferred successfully to the real world.

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Sena wheelchair: Autonomous navigation at University of Malaga (2007)

Author  Jose Luis Blanco

Video ID : 708

This experiment demonstrates how a reactive navigation method successfully enables our robotic wheelchair SENA to navigate reliably in the entrance of our building at the University of Malaga (Spain). The robot navigates autonomously amidst dozens of students while avoiding collisions. The method is based on a space transformation, which simplifies finding collision-free movements in real time despite the arbitrarily complex shape of the robot and its kinematic restrictions.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten


Automatic plant probing

Author  Guillem Alenya, Babette Dellen, Sergi Foix and Carme Torras

Video ID : 95

This video shows the automatic probing of plant leaves (to measure chlorophyll content) with a robotic arm, using a time-of-flight camera and a SPAD meter mounted on top. The first part shows plant probing during the final experiments of the EU project GARNICS, performed with a KUKA robot at the Forschungszentrum Juelich. The second part shows probing with a WAM arm at the Institut de Robotica i Informatica Industrial.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
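The graph-optimization paradigm mentioned above can be illustrated with a minimal sketch, not taken from the chapter: poses become nodes, odometry and loop-closure measurements become relative constraints, and the map is recovered by least squares. The numbers below (a 1-D corridor with a 0.2 m loop-closure inconsistency) are purely hypothetical.

```python
# Minimal 1-D pose-graph example (hypothetical numbers): three poses
# x0, x1, x2, two odometry constraints, and one loop-closure constraint
# that disagrees slightly with the accumulated odometry.
import numpy as np

def solve_pose_graph():
    # Fix x0 = 0 to remove the gauge freedom; unknowns are x1 and x2.
    # Each row of A encodes one relative constraint  x_j - x_i = z.
    A = np.array([
        [ 1.0, 0.0],   # odometry:     x1 - x0 = 1.0
        [-1.0, 1.0],   # odometry:     x2 - x1 = 1.0
        [ 0.0, 1.0],   # loop closure: x2 - x0 = 1.8
    ])
    z = np.array([1.0, 1.0, 1.8])
    # Least squares spreads the 0.2 m inconsistency over all three edges,
    # which is exactly what a (linear, equally weighted) pose-graph
    # optimizer does at each iteration.
    x, *_ = np.linalg.lstsq(A, z, rcond=None)
    return x

x1, x2 = solve_pose_graph()
print(x1, x2)  # approximately 0.933 and 1.867, instead of odometry's 1.0 and 2.0
```

Real systems such as the one in the video below work with 3-D or 6-DOF poses and nonlinear constraints, so this linear solve is repeated inside an iterative (Gauss-Newton-style) loop, but the structure of the problem is the same.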

Deformation-based loop closure for dense RGB-D SLAM

Author  Thomas Whelan

Video ID : 439

This video shows the integration of SLAM pose-graph optimization, spatially extended KinectFusion, and deformation-based loop closure in dense RGB-D mapping, combining several of the capabilities discussed in Sect. 46.3.3 and Sect. 46.4, Springer Handbook of Robotics, 2nd edn (2016). Reference: T. Whelan, M. Kaess, H. Johannsson, M. Fallon, J.J. Leonard, J. McDonald: Real-time large-scale dense RGB-D SLAM with volumetric fusion, Int. J. Robot. Res. 34(4-5), 598-626 (2014).

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and whom to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstration by visual tracking of gestures

Author  Ales Ude

Video ID : 99

Demonstration by visual tracking of gestures. Reference: A. Ude: Trajectory generation from noisy positions of object features for teaching robot paths, Robot. Auton. Syst. 11(2), 113–127 (1993); URL: http://www.cns.atr.jp/~aude/movies/