
Chapter 45 — World Modeling

Wolfram Burgard, Martial Hebert and Maren Bennewitz

In this chapter we describe popular ways to represent the environment of a mobile robot. For indoor environments, which are often stored using two-dimensional representations, we discuss occupancy grids, line maps, topological maps, and landmark-based representations. Each of these techniques has its own advantages and disadvantages. Whilst occupancy grid maps allow for quick access and can be updated efficiently, line maps are more compact. Landmark-based maps can also be updated and maintained efficiently; however, unlike topological representations, they do not readily support navigation tasks such as path planning.
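The efficient update property of occupancy grids mentioned above comes from the fact that each cell can be updated independently by a simple log-odds addition. The following minimal Python sketch illustrates the idea; the grid size, sensor-model values, and cell indices are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Minimal log-odds occupancy grid sketch (illustrative values only).
# Each cell stores the log-odds of being occupied; a sensor update is a
# simple addition, which is what makes grid updates so cheap.
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

grid = np.full((200, 200), L_PRIOR)          # 200 x 200 cells

def update_cell(grid, i, j, hit):
    """Bayesian update of one cell from a single range measurement."""
    grid[i, j] += L_OCC if hit else L_FREE

def occupancy_probability(grid):
    """Convert log-odds back to probabilities, e.g., for path planning."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

# Example: a beam endpoint marks cell (50, 80) as occupied,
# while the traversed cell (50, 79) is marked free.
update_cell(grid, 50, 80, hit=True)
update_cell(grid, 50, 79, hit=False)
```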

Additionally, we discuss approaches suited for outdoor terrain modeling. In outdoor environments, the flat-surface assumption underlying many indoor mapping techniques is no longer valid. A very popular approach in this context is the elevation map and its variants, which store the surface of the terrain over a regularly spaced grid. Alternatives to such maps are point clouds, meshes, or three-dimensional grids, which provide greater flexibility but have higher storage demands.
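To illustrate the elevation-map idea of storing one surface height per cell of a regularly spaced grid, here is a hedged Python sketch that projects a point cloud onto such a grid; the resolution, map extent, and the choice of keeping the maximum height per cell are illustrative assumptions.

```python
import numpy as np

# Minimal elevation-map sketch: project a 3-D point cloud onto a regular
# 2-D grid and keep one height value per cell (here the maximum).
CELL = 0.25                                   # 25 cm grid resolution [m]
SIZE = 100                                    # 100 x 100 cells

def build_elevation_map(points):
    """points: (N, 3) array of x, y, z coordinates in the map frame."""
    elev = np.full((SIZE, SIZE), -np.inf)
    ix = (points[:, 0] / CELL).astype(int)
    iy = (points[:, 1] / CELL).astype(int)
    inside = (ix >= 0) & (ix < SIZE) & (iy >= 0) & (iy < SIZE)
    for x, y, z in zip(ix[inside], iy[inside], points[inside, 2]):
        elev[x, y] = max(elev[x, y], z)       # keep highest point per cell
    return elev

# Example with synthetic points covering a 25 m x 25 m area.
cloud = np.random.rand(10000, 3) * [SIZE * CELL, SIZE * CELL, 2.0]
elevation = build_elevation_map(cloud)
```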

3-D textured model of urban environments

Author  Michael Maurer

Video ID : 269

In this video, a micro aerial vehicle developed by the Institute for Computer Graphics and Vision, Graz Univ. of Technology, flies to predefined points and captures images for building a 3-D textured model of an urban environment. The video contains a nice description of the different steps necessary to generate a precise model by fusing the aerial images with public geographic data.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from the abstract, high-level task specification down to fine-grained feedback at the task interface are considered. Some of the important issues include modeling of the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension to the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

A square peg-in-hole demonstration using manipulation skills

Author  Unknown

Video ID : 362

This video shows a square peg-in-hole demonstration using manipulation skills, which refer to a set of motion primitives derived from the analysis of assembly tasks. The video demonstrates three manipulation skills: the move-to-touch skill, the rotate-to-level skill, and the rotate-to-insert skill, which are executed in sequence to insert a square peg into a hole.
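As a purely hypothetical illustration of how such skill primitives could be sequenced, the sketch below chains the three skills named in the video; the `robot` interface, guard conditions, and thresholds are invented for illustration and do not describe the actual system shown.

```python
# Hypothetical sketch of sequencing the assembly skill primitives named in
# the video (move-to-touch, rotate-to-level, rotate-to-insert). The guard
# conditions and the `robot` interface are assumptions for illustration.

def move_to_touch(robot):
    # Descend until a contact force above a (hypothetical) threshold is sensed.
    while robot.contact_force() < 5.0:
        robot.move_down(0.001)

def rotate_to_level(robot):
    # Rotate until the peg face lies flat on the hole surface.
    while not robot.is_level():
        robot.rotate(axis="pitch", angle=0.2)

def rotate_to_insert(robot):
    # Rotate about the hole edge until the peg slides into the hole.
    while not robot.is_inserted():
        robot.rotate(axis="roll", angle=0.2)

def peg_in_hole(robot):
    # Execute the three skills in sequence.
    for skill in (move_to_touch, rotate_to_level, rotate_to_insert):
        skill(robot)
```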

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

Evolved GasNet visualisation

Author  Phil Husbands

Video ID : 375

The video shows a successfully evolved GasNet controlling a simulated robot engaged in a visual-discrimination task under noisy lighting. The GasNet architecture and all node properties are evolved along with the visual sampling morphology (the parts of the visual field used as inputs to the GasNet). A minimal simulation is used, which allows transfer to the real robot (see the Sussex gantry robot, Video 371). A highly minimal controller and visual morphology have evolved, and the system is highly robust, coping with very noisy conditions. As can be seen, the GasNet employs multiple oscillator subcircuits, partly to filter out noise. Work by Tom Smith and Phil Husbands.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. This chapter is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance imposed by the limits of today's telepresence and teleoperation technology, in terms of human dexterity and speed, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining and clearance of landmines and unexploded ordnance still present many unsolved problems.

Remote handling and inspection with the VT450

Author  James P. Trevelyan

Video ID : 592

Promotional video for an inspection robot.

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.

Experience-based learning of high-level task representations: Reproduction (3)

Author  Monica Nicolescu

Video ID : 33

This is a video recorded in the early 2000s, showing a Pioneer robot learning to traverse "gates" and move objects from a source place to a destination; here the robot is reproducing the learned task. The robot training stage is shown in a related video in this chapter. Reference: M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybernet. A31(5), 419-430 (2001)

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Mobile-robot navigation system in outdoor pedestrian environment

Author  Chin-Kai Chang

Video ID : 711

We present a mobile-robot navigation system guided by a novel vision-based, road-recognition approach. The system represents the road as a set of lines extrapolated from the detected image contour segments. These lines enable the robot to maintain its heading by centering the vanishing point in its field of view, and to correct the long-term drift from its original lateral position. We integrate odometry and our visual, road-recognition system into a grid-based local map which estimates the robot pose as well as its surroundings to generate a movement path. Our road recognition system is able to estimate the road center on a standard dataset with 25 076 images to within 11.42 cm (with respect to roads that are at least 3 m wide). It outperforms three other state-of-the-art systems. In addition, we extensively test our navigation system in four busy campus environments using a wheeled robot. Our tests cover more than 5 km of autonomous driving on a busy college campus without failure. This demonstrates the robustness of the proposed approach to handle challenges including occlusion by pedestrians, non-standard complex road markings and shapes, shadows, and miscellaneous obstacle objects.
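One way to picture the heading-keeping behavior described above (centering the vanishing point in the field of view) is the following minimal proportional-control sketch; the image width, gain, and detector interface are illustrative assumptions and not part of the authors' system.

```python
# Rough sketch of the heading-keeping idea: steer so that the detected
# vanishing point stays centered in the image. The gain, image width,
# and vanishing-point input are hypothetical illustrative values.

IMAGE_WIDTH = 640          # pixels
STEERING_GAIN = 0.002      # rad per pixel of offset (hypothetical)

def heading_correction(vanishing_point_x):
    """Return a steering command that re-centers the vanishing point."""
    offset = vanishing_point_x - IMAGE_WIDTH / 2.0
    return -STEERING_GAIN * offset

# Example: vanishing point detected 40 px right of center -> steer left.
print(heading_correction(360.0))
```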

Chapter 70 — Human-Robot Augmentation

Massimo Bergamasco and Hugh Herr

The development of robotic systems capable of sharing with humans the load of heavy tasks has been one of the primary objectives in robotics research. At present, much of the interest in fulfilling this objective is focused on so-called wearable robots, a class of robotic systems that are worn and directly controlled by the human operator. Wearable robots, together with powered orthoses that exploit robotic components and control strategies, can also serve as an immediate resource for restoring manipulation and/or walking functionalities in humans.

The present chapter deals with wearable robotic systems capable of providing different levels of functional and/or operational augmentation to human beings for specific functions or tasks. Prostheses, powered orthoses, and exoskeletons are described for upper-limb, lower-limb, and whole-body structures. State-of-the-art devices together with their functionalities and main components are presented for each class of wearable system. Critical design issues and open research aspects are reported.

Arm-Exos

Author  Massimo Bergamasco

Video ID : 148

The video details the Arm-Exos and, in particular, its capability for tracking the operator's motions and for rendering the contact forces in a simple, demonstrative, virtual environment.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Smooth vertical surface climbing with directional adhesion

Author  Sangbae Kim, Mark R. Cutkosky

Video ID : 389

Stickybot is a bioinspired robot that climbs smooth vertical surfaces such as those made of glass, plastic, and ceramic tile at 4 cm/s. The robot employs several design principles adapted from the gecko, including a hierarchy of compliant structures and directional adhesion. At the finest scale, the undersides of Stickybot’s toes are covered with arrays of small, angled polymer stalks.

SpinybotII: Climbing hard walls with compliant microspines

Author  Sangbae Kim, Alan T. Asbeck, Mark R. Cutkosky, William R. Provancher

Video ID : 388

This climbing robot can scale flat, hard vertical surfaces including those made of concrete, brick, stucco and masonry without using suction or adhesives. It employs arrays of miniature spines that catch opportunistically on surface asperities. The approach is inspired by the mechanisms observed in some climbing insects and spiders.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

Visual navigation of mobile robot with pan-tilt camera

Author  Dario Floreano

Video ID : 36

A mobile robot with a pan-tilt camera is asked to navigate in a square arena with low walls, located in an office.