
Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with the modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid needless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections dealing with the description of the sensor and the actuating systems are then given. Autonomous underwater vehicles require the implementation of mission control systems as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, close the chapter. The reader is referred to Chap. 25 for design issues.
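For reference, coefficient-based models of this kind are usually written in the standard matrix form of marine-vehicle dynamics; the formulation below is the widely used one and is given here as background rather than quoted from the chapter:

M\dot{\nu} + C(\nu)\nu + D(\nu)\nu + g(\eta) = \tau, \qquad \dot{\eta} = J(\eta)\nu

where \nu collects the body-fixed linear and angular velocities, \eta the earth-fixed position and orientation, M the rigid-body plus added-mass inertia matrix, C(\nu) the Coriolis and centripetal terms, D(\nu) the hydrodynamic damping, g(\eta) the gravitational and buoyancy (restoring) forces, \tau the forces and moments delivered by the actuators, and J(\eta) the kinematic transformation between the two frames.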

Two underwater Folaga vehicles patrolling a 3-D area

Author  Gianluca Antonelli, Alessandro Marino

Video ID : 94

This video records one of the final experiments for the European project Co3AUV (http://www.Co3-AUVs.eu). It was conducted successfully during February 2012 in collaboration with GraalTech at the NURC (NATO Undersea Research Center) site.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs, and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and survey the designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is presented in Sect. 17.3. To design a limbed system with good performance, it is important to take actuation and control into account, including gravity compensation, limit cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we give an overview of the diversity of limbed systems, including odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.
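As a quick companion to the indices just mentioned, the snippet below computes the Froude number and the specific resistance using one common convention; the exact definitions in Sect. 17.6 may differ in detail, and the example numbers are purely illustrative.

```python
def froude_number(speed, leg_length, g=9.81):
    # Dimensionless speed index for legged locomotion; one common convention is v^2 / (g * l).
    return speed ** 2 / (g * leg_length)

def specific_resistance(energy, mass, distance, g=9.81):
    # Cost of transport: energy spent per unit weight per unit distance travelled.
    return energy / (mass * g * distance)

# Example: a human-sized walker moving at 1.5 m/s with 0.9 m legs,
# spending 350 J to cover 5 m at a body mass of 80 kg.
print(froude_number(1.5, 0.9))                 # ~0.25
print(specific_resistance(350.0, 80.0, 5.0))   # ~0.09
```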

Biped running robot MABEL

Author  Jessy Grizzle

Video ID : 533

MABEL is a biped running robot developed at the University of Michigan in the lab of Prof. Grizzle. The robot was built in collaboration with Jonathan Hurst, Al Rizzi, and Jessica Hodgins of the Robotics Institute, Carnegie Mellon University.

Chapter 7 — Motion Planning

Lydia E. Kavraki and Steven M. LaValle

This chapter first provides a formulation of the geometric path planning problem in Sect. 7.2 and then introduces sampling-based planning in Sect. 7.3. Sampling-based planners are general techniques applicable to a wide set of problems and have been successful in dealing with hard planning instances. For specific, often simpler, planning instances, alternative approaches exist and are presented in Sect. 7.4. These approaches provide theoretical guarantees and, for simple planning instances, they outperform sampling-based planners. Section 7.5 considers problems that involve differential constraints, while Sect. 7.6 overviews several other extensions of the basic problem formulation and proposed solutions. Finally, Sect. 7.8 addresses some important and more advanced topics related to motion planning.
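To give a flavor of the sampling-based planners introduced in Sect. 7.3, here is a minimal RRT sketch for a 2-D point robot. The workspace bounds, step size, goal bias, and endpoint-only collision check are illustrative assumptions, not details taken from the chapter.

```python
import math, random

def rrt(start, goal, collision_free, step=0.5, goal_tol=0.5, iters=5000, bounds=(0.0, 10.0)):
    """Minimal RRT for a 2-D point robot; returns a list of waypoints or None."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random configuration, with a small bias toward the goal.
        q_rand = goal if random.random() < 0.05 else (
            random.uniform(*bounds), random.uniform(*bounds))
        # Find the nearest tree node and extend one fixed step toward the sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        if d <= step:
            q_new = q_rand
        else:
            q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                     q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if not collision_free(q_near, q_new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        if math.dist(q_new, goal) < goal_tol:
            # Reconstruct the path by walking back through the parent pointers.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

# Toy usage: one circular obstacle of radius 2 centered at (5, 5),
# checked only at segment endpoints for brevity.
free = lambda a, b: all(math.dist(p, (5.0, 5.0)) > 2.0 for p in (a, b))
print(rrt((1.0, 1.0), (9.0, 9.0), free))
```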

Simulation of a large crowd

Author  Dinesh Manocha

Video ID : 21

Motion-planning methods can be used to simulate a large crowd, which is a system with a very large number of degrees of freedom. This video illustrates an approach that uses an optimization method to compute a biomechanically energy-efficient, collision-free trajectory for each agent. Emergent phenomena such as lane formation arise.
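The optimization used in the video is not reproduced here; purely to illustrate the idea of per-agent, energy-aware, collision-free velocity selection, a toy greedy step might look as follows. The cost terms, constants, and constant-velocity collision prediction are assumptions for the sketch.

```python
import math

def choose_velocity(pos, vel_pref, neighbors, v_max=1.5, radius=0.3, horizon=2.0):
    """Greedy per-agent step: enumerate candidate velocities and keep the one
    minimizing an (assumed) energy term plus a predicted-collision penalty."""
    best_v, best_cost = (0.0, 0.0), float("inf")
    for speed in (0.0, 0.5 * v_max, v_max):
        for ang in range(0, 360, 30):
            v = (speed * math.cos(math.radians(ang)),
                 speed * math.sin(math.radians(ang)))
            # Energy-like effort plus deviation from the agent's preferred velocity.
            cost = 0.5 * speed ** 2 + math.dist(v, vel_pref)
            for n_pos, n_vel in neighbors:
                # Relative position/velocity of the neighbor and its time of closest approach.
                rp = (n_pos[0] - pos[0], n_pos[1] - pos[1])
                rv = (n_vel[0] - v[0], n_vel[1] - v[1])
                t = -(rp[0] * rv[0] + rp[1] * rv[1]) / (rv[0] ** 2 + rv[1] ** 2 + 1e-9)
                t = max(0.0, min(horizon, t))
                gap = math.hypot(rp[0] + rv[0] * t, rp[1] + rv[1] * t)
                if gap < 2.0 * radius:
                    cost += 100.0 * (2.0 * radius - gap)  # heavy penalty for predicted contact
            if cost < best_cost:
                best_v, best_cost = v, cost
    return best_v

# Toy usage: an agent at the origin that prefers to head right, one oncoming neighbor.
print(choose_velocity((0.0, 0.0), (1.0, 0.0), [((4.0, 0.0), (-1.0, 0.0))]))
```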

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Mobile-robot navigation system in outdoor pedestrian environment

Author  Chin-Kai Chang

Video ID : 711

We present a mobile-robot navigation system guided by a novel vision-based road-recognition approach. The system represents the road as a set of lines extrapolated from the detected image contour segments. These lines enable the robot to maintain its heading by centering the vanishing point in its field of view, and to correct long-term drift from its original lateral position. We integrate odometry and our visual road-recognition system into a grid-based local map that estimates the robot pose as well as its surroundings to generate a movement path. Our road-recognition system estimates the road center on a standard dataset of 25 076 images to within 11.42 cm (for roads that are at least 3 m wide), outperforming three other state-of-the-art systems. In addition, we extensively test our navigation system in four busy campus environments using a wheeled robot. Our tests cover more than 5 km of autonomous driving on a busy college campus without failure, demonstrating the robustness of the proposed approach to challenges including occlusion by pedestrians, non-standard complex road markings and shapes, shadows, and miscellaneous obstacles.
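As an illustrative companion to the description above (not the authors' code), the sketch below estimates a vanishing point as the least-squares intersection of detected road lines and converts its horizontal offset into a proportional steering correction. Line extraction is assumed to come from an upstream contour detector, and the gain and example lines are made up.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of 2-D lines, each given as (point, direction)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in lines:
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        # (I - d d^T) projects onto the line normal; summing the normal equations
        # over all lines yields the least-squares intersection point.
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

def steering_correction(vp_x, image_width, gain=0.005):
    """Proportional heading command that re-centers the vanishing point in the image."""
    return -gain * (vp_x - image_width / 2.0)

# Toy usage: two road edges converging near the center of a 640-pixel-wide image.
lines = [((100.0, 480.0), (1.0, -1.0)), ((540.0, 480.0), (-1.0, -1.0))]
vp = vanishing_point(lines)
print(vp, steering_correction(vp[0], 640))
```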

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Reaching in clutter with whole-arm tactile sensing

Author  Advait Jain, Marc D. Killpack, Aaron Edsinger, Charles C. Kemp

Video ID : 674

In this video, our robot Cody attempts to reach five different goal locations, using four attempts (i.e., four different base locations) for each goal. For each goal, we test our single-step, quasi-static model-predictive controller against the performance of a baseline kinematic controller that has compliance at the joints.
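The controller itself is described in the authors' publications; purely as an illustration of the single-step, quasi-static idea, the sketch below samples small joint displacements, predicts contact forces with an assumed linear stiffness model, and keeps the displacement that best approaches the goal without exceeding a force limit. The models, sampling scheme, and numbers are placeholders, not the method used on Cody.

```python
import numpy as np

def quasi_static_step(q, x_goal, fk, contacts, f_limit=5.0, dq_max=0.02, samples=200):
    """One greedy single-step controller in the spirit of a quasi-static MPC:
    sample small joint displacements dq, predict each contact force with a linear
    stiffness model f_pred = f_now + k * (J_c @ dq), and keep the displacement that
    best approaches the Cartesian goal while all predicted forces stay below f_limit."""
    rng = np.random.default_rng(0)
    best_dq = np.zeros_like(q)
    best_dist = np.linalg.norm(fk(q) - x_goal)
    for _ in range(samples):
        dq = rng.uniform(-dq_max, dq_max, size=q.shape)
        # Reject displacements whose predicted contact forces violate the limit.
        if any(np.linalg.norm(f + k * (J_c @ dq)) > f_limit for f, J_c, k in contacts):
            continue
        dist = np.linalg.norm(fk(q + dq) - x_goal)
        if dist < best_dist:
            best_dq, best_dist = dq, dist
    return q + best_dq

# Toy usage: planar 2-link arm with unit links and no active contacts.
fk2 = lambda q: np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                          np.sin(q[0]) + np.sin(q[0] + q[1])])
print(quasi_static_step(np.array([0.1, 0.2]), np.array([1.0, 1.0]), fk2, contacts=[]))
```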

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Torque-control strategies for snake robots

Author  David Rollinson, Kalyan Vasudev Alwala, Nico Zevallos, Howie Choset

Video ID : 392

This video provides an overview of some initial torque-based motions for the series elastic snake robot (SEA Snake). Because the SEA Snake has the unique ability to accurately sense and control the torque of each of its joints, it can execute life-like compliant and adaptive motions, without a complex controller or tactile sensing.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and, models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Gaze and gesture cues for robots

Author  Bilge Mutlu

Video ID : 128

In human-robot communication, nonverbal cues like gaze and gesture can be a source of important information for starting and maintaining interaction. Gaze, for example, can tell a person about what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Synchronization and fault detection in autonomous robots

Author  Andres Lyhne Christensen, Rehan O'Grady, Marco Dorigo

Video ID : 194

This video demonstrates a group of robots detecting faults in each other and simulating repair. The technique relies on visual, firefly-like synchronization: each robot synchronizes with the others based on the detection of LED flashes using its on-board cameras. The robots simulate faults and repairs based on the frequency of flashes. The video shows an experiment with many robots working together while simulating faults and repairs.
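A minimal, self-contained sketch of the pulse-coupled (firefly-like) synchronization and fault-detection idea follows; the coupling rule, thresholds, and fault model are textbook-style assumptions, not the authors' implementation.

```python
import random

def simulate(n=10, steps=6000, dt=0.001, period=1.0, coupling=0.05, fail_at=2.0):
    """Pulse-coupled oscillators: each robot's phase grows linearly and it flashes
    when the phase reaches 1; every observed flash nudges the other phases forward,
    which tends to pull the group into synchrony. A robot that fails stops flashing
    and is flagged once the others have not seen it flash for too long."""
    phase = [random.random() for _ in range(n)]
    alive = [True] * n
    last_flash = [0.0] * n
    for step in range(steps):
        t = step * dt
        if t >= fail_at:
            alive[0] = False                      # inject a fault into robot 0
        flashed = []
        for i in range(n):
            if not alive[i]:
                continue
            phase[i] += dt / period
            if phase[i] >= 1.0:
                phase[i] = 0.0
                flashed.append(i)
                last_flash[i] = t
        for i in range(n):
            if alive[i] and flashed and i not in flashed:
                phase[i] = min(1.0, phase[i] + coupling * len(flashed))
        # Fault detection: a robot silent for much longer than one period is suspected faulty.
        for i in range(n):
            if t - last_flash[i] > 2.5 * period:
                print(f"t = {t:.2f} s: robot {i} suspected faulty")
                last_flash[i] = float("inf")      # report each fault only once
    return phase

simulate()
```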

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in expectations from the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, compliant elements. Distributed-parameter, snake-like, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that are often behind soft robotics.
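As a concrete example of the lumped-parameter view, the sketch below simulates a motor driving a link through a torsional spring, the basic element behind series elastic and variable stiffness actuators. It is a textbook flexible-joint model with illustrative numbers, not code or parameters from the chapter.

```python
import math

def simulate_sea(tau_motor=1.0, k=100.0, B=0.5, M=1.0, mgl=2.0,
                 d_m=2.0, d_l=2.0, dt=1e-3, steps=10000):
    """Lumped-parameter series elastic actuator (flexible-joint model):
       motor side:  B * theta'' = tau_motor - tau_spring - d_m * theta'
       link side:   M * q''     = tau_spring - mgl * cos(q) - d_l * q'
       spring:      tau_spring  = k * (theta - q)
    Integrated with semi-implicit Euler."""
    theta, dtheta = 0.0, 0.0      # motor position / velocity (after the gearbox)
    q, dq = 0.0, 0.0              # link position / velocity
    tau_spring = 0.0
    for _ in range(steps):
        tau_spring = k * (theta - q)
        ddtheta = (tau_motor - tau_spring - d_m * dtheta) / B
        ddq = (tau_spring - mgl * math.cos(q) - d_l * dq) / M
        dtheta += ddtheta * dt; theta += dtheta * dt
        dq += ddq * dt;         q += dq * dt
    # For these numbers the link settles near q = -pi/3, where the spring torque
    # balances gravity against the constant motor torque.
    return q, tau_spring

print(simulate_sea())
```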

Active damping control on the DLR Hand Arm System

Author  Florian Petit, Alin Albu-Schäffer

Video ID : 548

The effectiveness of active damping control is shown in a writing task performed by the DLR Hand Arm System.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Exploitation of social cues to speed up learning

Author  Sylvain Calinon, Aude Billard

Video ID : 106

Use of social cues to speed up the imitation-learning process, with gaze and pointing information used to select the objects relevant to the task. Reference: S. Calinon, A.G. Billard: Teaching a humanoid robot to recognize and reproduce social cues, Proc. IEEE Int. Symp. Robot Human Interactive Communication (Ro-Man), Hatfield (2006), pp. 346–351; URL: http://lasa.epfl.ch/research/control_automation/interaction/social/index.php