Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.
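
As a concrete illustration of the behavior-based control idea introduced in this chapter, here is a minimal sketch of priority-based arbitration between behaviors; the behaviors, sensor fields, and thresholds are hypothetical examples, not taken from the chapter.

    # Minimal sketch of priority-based behavior arbitration.
    # Behaviors, sensor fields, and thresholds are hypothetical.

    def avoid_obstacles(sensors):
        """Highest priority: turn away when an obstacle is close."""
        if min(sensors["sonar"]) < 0.3:         # range in meters (assumed)
            return {"linear": 0.0, "angular": 1.0}
        return None                             # not applicable: defer

    def seek_goal(sensors):
        """Lower priority: steer toward the goal bearing."""
        return {"linear": 0.5, "angular": 0.8 * sensors["goal_bearing"]}

    BEHAVIORS = [avoid_obstacles, seek_goal]    # ordered by priority

    def arbitrate(sensors):
        """Return the command of the highest-priority behavior that fires."""
        for behavior in BEHAVIORS:
            command = behavior(sensors)
            if command is not None:
                return command
        return {"linear": 0.0, "angular": 0.0}  # safe default: stop

Each behavior maps sensing directly to action, and the overall response emerges from the arbitration rule rather than from a central world model.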

Experience-based learning of high-level task representations: Demonstration

Author  Monica Nicolescu

Video ID : 27

This video, recorded in the early 2000s, shows a Pioneer robot learning to visit a number of targets in a certain order (the human demonstration stage). The robot execution stage is shown in a related video in this chapter. References: 1. M. Nicolescu, M.J. Mataric: Experience-based learning of task representations from human-robot interaction, Proc. IEEE Int. Symp. Comput. Intell. Robot. Autom., Banff (2001), pp. 463-468; 2. M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybernet. A31(5), 419-430 (2001)

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special attention to the transfer between the two worlds.
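
As a pointer to how such automatic generation typically proceeds, here is a minimal sketch of an evolutionary loop over controller genomes; the encoding, fitness function, selection scheme, and parameters are illustrative assumptions, not the chapter's specific method.

    import random

    # Minimal sketch of an evolutionary loop for robot controllers.
    # Genome: list of floats (e.g., neural-network weights). The fitness
    # function is a placeholder; in practice it would run the controller
    # on a simulated or physical robot and score task performance.

    GENOME_LEN, POP_SIZE, GENERATIONS, MUT_STD = 20, 30, 100, 0.1

    def evaluate(genome):
        return -sum(g * g for g in genome)      # placeholder fitness

    def mutate(genome):
        return [g + random.gauss(0.0, MUT_STD) for g in genome]

    population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:POP_SIZE // 5]        # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP_SIZE - len(parents))]

    best = max(population, key=evaluate)

When evolution is run in simulation, transferring the best controller to a physical robot raises exactly the simulation-to-reality transfer issue the chapter gives special attention to.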

Resilient machines through continuous self-modeling

Author  Josh Bongard, Victor Zykov, Hod Lipson

Video ID : 114

This video demonstrates a typical experiment with a resilient machine.

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i. e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
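
As a concrete example of velocity-level redundancy resolution, the following sketch implements the standard pseudoinverse solution with a null-space term for a secondary criterion; the Jacobian, task velocity, and secondary gradient are hypothetical stand-ins.

    import numpy as np

    # Velocity-level redundancy resolution:
    #   qdot = J^+ xdot + (I - J^+ J) qdot0
    # The secondary velocity qdot0 (e.g., the gradient of a joint-limit
    # or manipulability criterion) is projected into the null space of J,
    # so it does not disturb the end-effector task.

    def redundancy_resolution(J, xdot, qdot0):
        J_pinv = np.linalg.pinv(J)              # Moore-Penrose pseudoinverse
        N = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector
        return J_pinv @ xdot + N @ qdot0

    # Hypothetical 7-DOF arm executing a 6-D task velocity:
    J = np.random.rand(6, 7)
    xdot = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
    qdot0 = -0.5 * np.random.rand(7)            # assumed secondary gradient
    qdot = redundancy_resolution(J, xdot, qdot0)

Near kinematic singularities the plain pseudoinverse produces very large joint velocities; a standard remedy is the damped least-squares inverse, one of the singularity-handling techniques the chapter recalls.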

Visual servoing control of Baxter robot arms with obstacle avoidance using kinematic redundancy

Author  Chenguang Yang

Video ID : 819

A visual servoing controller with an obstacle avoidance strategy based on kinematic redundancy has been developed and tested on a Baxter robot. A Point Grey Bumblebee2 stereo camera is used to obtain the 3-D point cloud of a target object. Object-tracking task allocation between the two arms is achieved by identifying the workspaces of the dual arms and locating the object within a convex hull of the workspace. By employing a simulated robot as a parallel system, together with a task-switching weight factor, the robot is able to return smoothly to its natural pose when the obstacle is absent. Two sets of experiments were carried out to demonstrate the effectiveness of the developed servoing control method.
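
To illustrate how a task-switching weight factor might blend obstacle avoidance with restoration of a natural pose, here is a minimal sketch; the blending law, distance threshold, and gains are assumptions for illustration, not the controller shown in the video.

    import numpy as np

    # Hypothetical task-switching weight w in [0, 1]: favor obstacle
    # avoidance when an obstacle is close, pose restoration when clear.

    def switching_weight(d_obstacle, d_safe=0.4):
        """w -> 1 near the obstacle, w -> 0 beyond d_safe (meters)."""
        return float(np.clip(1.0 - d_obstacle / d_safe, 0.0, 1.0))

    def secondary_velocity(q, q_natural, grad_avoid, d_obstacle):
        w = switching_weight(d_obstacle)
        restore = -(q - q_natural)      # pull joints toward natural pose
        return w * grad_avoid + (1.0 - w) * restore

The resulting secondary velocity can be projected into the Jacobian null space, as in the redundancy-resolution sketch above, so that neither task disturbs the visual servoing objective.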

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
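
The common least-squares framework can be made concrete with a short sketch; the regressor below is a random stand-in for the manipulator's actual regressor matrix, which is linear in the unknown inertial parameters.

    import numpy as np

    # Inertial parameter estimation as linear least squares:
    #   tau = Y(q, qdot, qddot) @ phi
    # Y is the regressor, linear in the unknown parameters phi. Here Y
    # and tau are synthetic stand-ins; in practice Y is stacked from the
    # robot dynamics along an exciting trajectory, tau from joint sensing.

    rng = np.random.default_rng(0)
    n_samples, n_params = 200, 10
    Y = rng.standard_normal((n_samples, n_params))
    phi_true = rng.standard_normal(n_params)
    tau = Y @ phi_true + 0.01 * rng.standard_normal(n_samples)  # noisy

    # Identifiability: rank-deficient columns of Y correspond to
    # parameters that appear only in linear combinations.
    rank = np.linalg.matrix_rank(Y)

    phi_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)

Rank deficiency of the stacked regressor is exactly how unidentifiable parameters show up in practice: only the identifiable linear combinations (the base parameters) can be recovered.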

Dynamic identification of Kuka KR270: Trajectory without load

Author  Maxime Gautier

Video ID : 486

This video shows a trajectory without load used to identify the dynamic parameters of the links, load, joint drive gains, and gravity compensator of a heavy industrial Kuka KR 270 manipulator. Details and results are given in the paper: A. Jubien, M. Gautier: Global identification of spring balancer, dynamic parameters and drive gains of heavy industrial robots, IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Tokyo (2013), pp. 1355-1360

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu


Experience-based learning of high-level task representations: Reproduction

Author  Monica Nicolescu

Video ID : 28

This video, recorded in the early 2000s, shows a Pioneer robot visiting a number of targets in a certain order based on a demonstration provided by a human user. The robot training stage is also shown in a related video in this chapter. References: 1. M. Nicolescu, M.J. Mataric: Experience-based learning of task representations from human-robot interaction, Proc. IEEE Int. Symp. Comput. Intell. Robot. Autom., Banff (2001), pp. 463-468; 2. M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybernet. A31(5), 419-430 (2001)

Chapter 49 — Modeling and Control of Wheeled Mobile Robots

Claude Samson, Pascal Morin and Roland Lenain

This chapter may be seen as a follow-up to Chap. 24, devoted to the classification and modeling of basic wheeled mobile robot (WMR) structures, and a natural complement to Chap. 47, which surveys motion planning methods for WMRs. A typical output of these methods is a feasible (or admissible) reference state trajectory for a given mobile robot, and a question which then arises is how to make the physical mobile robot track this reference trajectory via the control of the actuators with which the vehicle is equipped. The object of the present chapter is to bring elements of the answer to this question based on simple and effective control strategies.

The chapter is organized as follows. Section 49.2 is devoted to the choice of control models and the determination of modeling equations associated with the path-following control problem. In Sect. 49.3, the path-following and trajectory stabilization problems are addressed in the simplest case, when no requirement is made on the robot orientation (i. e., position control). In Sect. 49.4 the same problems are revisited for the control of both position and orientation. The previously mentioned sections consider an ideal robot satisfying the rolling-without-sliding assumption. In Sect. 49.5, we relax this assumption in order to take into account nonideal wheel-ground contact. This is especially important for field-robotics applications, and the proposed results are validated through full-scale experiments on natural terrain. Finally, a few complementary issues on the feedback control of mobile robots are briefly discussed in the concluding Sect. 49.6, with a list of commented references for further reading on WMR motion control.
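
To make the trajectory-tracking problem concrete, here is a minimal sketch of a classical kinematic tracking controller for a unicycle-type robot; the specific Lyapunov-based control law and the gains are illustrative textbook choices, not the chapter's designs.

    import numpy as np

    # Kinematic tracking for a unicycle robot. State: (x, y, theta);
    # reference: (xr, yr, thetar) with feedforward velocities (vr, wr)
    # along an admissible reference trajectory. Gains are illustrative.

    KX, KY, KTH = 1.0, 4.0, 2.0

    def tracking_control(state, ref, vr, wr):
        x, y, th = state
        xr, yr, thr = ref
        # Tracking error expressed in the robot frame:
        dx, dy = xr - x, yr - y
        xe =  np.cos(th) * dx + np.sin(th) * dy
        ye = -np.sin(th) * dx + np.cos(th) * dy
        the = (thr - th + np.pi) % (2 * np.pi) - np.pi  # wrap angle
        # Feedforward plus error feedback:
        v = vr * np.cos(the) + KX * xe
        w = wr + vr * (KY * ye + KTH * np.sin(the))
        return v, w

For a persistently moving reference, this classical design drives the tracking error asymptotically to zero, which is the kind of result illustrated by the tracking animation below.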

Tracking of an admissible trajectory with a car-like vehicle

Author  Pascal Morin, Claude Samson

Video ID : 181

This is an animation showing the tracking of an admissible reference trajectory (red vehicle) with a car-like vehicle (green vehicle). The robot is able to ensure asymptotic convergence of the tracking error to zero, based on the feedback controller presented in Sect. 49.4 of the Springer Handbook of Robotics, 2nd edn (2016).

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this, we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

A robotic reconnaissance and surveillance team

Author  Paul Rybski, Saifallah Benjaafar, John R. Budenske, Mark Dvorak, Maria Gini, Dean F. Hougen, Donald G. Krantz, Perry Y. Li, Fred Malver, Brad Nelson, Nikolaos Papanikolopoulos, Sascha A. Stoeter, Richard Voyles, Kemel Berk Yesin

Video ID : 203

A two-tiered system for surveillance and exploration tasks is presented. The first tier is the scout (a small mobile sensor platform); the second tier consists of rangers (larger robots that transport and deploy scouts). Scouts send data (commonly video) to other robots via an RF data link.

Chapter 0 — Preface

Bruno Siciliano, Oussama Khatib and Torsten Kröger

The preface of the Second Edition of the Springer Handbook of Robotics contains three videos about the creation of the book and the use of its multimedia app on mobile devices.

The handbook — The story continues

Author  Bruno Siciliano

Video ID : 845

This video illustrates the joyful mood of the big team of the Springer Handbook of Robotics at the completion of the Second Edition.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

A perching mechanism for micro aerial vehicles

Author  Mirko Kovač, Jürg Germann, Christoph Hürzeler, Roland Y. Siegwart, Dario Floreano

Video ID : 416

This video shows a 4.6 g perching mechanism for micro aerial vehicles (MAVs) which enables them to perch on various vertical surfaces such as tree trunks and the external walls of concrete buildings. To achieve a high impact force, the needles snap forward and puncture the surface as the trigger collides with the target.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end-effector or the 3-D coordinates of the graspable points on the object.
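
As a small worked example of the two-view geometry mentioned above, the following sketch triangulates a 3-D point from two known camera projection matrices with the standard linear (DLT) method; the camera matrices and pixel coordinates are hypothetical.

    import numpy as np

    # Linear (DLT) triangulation of one 3-D point from two views.
    # P1, P2: 3x4 projection matrices; x1, x2: normalized image points.
    # Each view contributes two rows: u*(P[2]@X) - (P[0]@X) = 0, etc.

    def triangulate(P1, x1, P2, x2):
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)     # solution: last right singular vector
        X = Vt[-1]
        return X[:3] / X[3]             # dehomogenize

    # Hypothetical cameras: identity view and a 1 m baseline along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X = triangulate(P1, (0.2, 0.1), P2, (-0.3, 0.1))  # about (0.4, 0.2, 2.0)

With unknown relative pose, the same machinery follows the chapter's pipeline: estimate the motion between the views first, then triangulate, with the overall scale left undetermined.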

Finding paths through the world's photos

Author  Noah Snavely, Rahul Garg, Steven M. Seitz, Richard Szeliski

Video ID : 121

When a scene is photographed many times by different people, the viewpoints often cluster along certain paths. These paths are largely specific to the scene being photographed and follow interesting patterns and viewpoints. We seek to discover a range of such paths and turn them into controls for image-based rendering. Our approach takes as input a large set of community or personal photos, reconstructs camera viewpoints, and automatically computes orbits, panoramas, canonical views, and optimal paths between views. The scene can then be interactively browsed in 3-D using these controls or with six DOF free-viewpoint control. As the user browses the scene, nearby views are continuously selected and transformed, using control-adaptive reprojection techniques.