Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones comprised of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.
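
As a purely illustrative companion to the kinematics discussion, a widely used simplification for continuum backbones is the constant-curvature assumption. The sketch below, with hypothetical function and parameter names not taken from the chapter, maps the arc parameters of one planar section to its tip pose.

```python
import numpy as np

def planar_constant_curvature_pose(kappa, length):
    """Tip pose (x, y, heading) of a single planar constant-curvature section.

    kappa  : curvature of the section [1/m]; kappa = 0 means a straight section.
    length : arc length of the section [m].
    """
    if abs(kappa) < 1e-9:                       # straight-line limit
        return length, 0.0, 0.0
    x = np.sin(kappa * length) / kappa          # forward displacement
    y = (1.0 - np.cos(kappa * length)) / kappa  # lateral displacement
    theta = kappa * length                      # heading equals the bending angle
    return x, y, theta

# Example: a 0.3 m section bent with curvature 2.0 1/m
print(planar_constant_curvature_pose(2.0, 0.3))
```

Multi-section backbones are typically handled by chaining such section transforms; the mapping from actuator inputs to arc parameters depends on the specific hardware.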

Modsnake climbing a tree

Author  Howie Choset

Video ID : 168

The CMU Modsnake climbing a tree and surveying an area from this high vantage point.

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.
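
As a hedged illustration of the lane-keeping assistance mentioned above (this controller is not taken from the chapter; the gains and signal names are assumptions), a minimal lateral control law steers against the lateral offset and heading error reported by the road-scene perception layer.

```python
def lane_keeping_steer(lateral_offset_m, heading_error_rad,
                       k_offset=0.5, k_heading=1.2, max_steer_rad=0.5):
    """Minimal proportional lane-keeping law (illustrative only).

    lateral_offset_m  : signed distance from the lane centre [m]
    heading_error_rad : vehicle heading relative to the lane direction [rad]
    Returns a steering command saturated to the actuator limit.
    """
    steer = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
    return max(-max_steer_rad, min(max_steer_rad, steer))

# Example: 0.4 m left of centre, heading 0.05 rad away from the lane
print(lane_keeping_steer(0.4, 0.05))
```

Production systems add vehicle-dynamics models, road-geometry preview, and driver-override logic on top of such a basic law.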

PROUD2013 - Inside VisLab's driverless car

Author  Alberto Broggi

Video ID : 178

This video shows the internal and external view of what happened during the PROUD2013 driverless car test in downtown Parma, Italy, on July 12, 2013. It also displays the internal status of the vehicle plus some vehicle data (speed, steering angle, and some perception results like pedestrian detection, roundabout merging alert, freeway merging alert, traffic light sensing, etc.). More info available from www.vislab.it/proud.

Chapter 63 — Medical Robotics and Computer-Integrated Surgery

Russell H. Taylor, Arianna Menciassi, Gabor Fichtinger, Paolo Fiorini and Paolo Dario

The growth of medical robotics since the mid-1980s has been striking. From a few initial efforts in stereotactic brain surgery, orthopaedics, endoscopic surgery, microsurgery, and other areas, the field has expanded to include commercially marketed, clinically deployed systems, and a robust and exponentially expanding research community. This chapter will discuss some major themes and illustrate them with examples from current and past research. Further reading providing a more comprehensive review of this rapidly expanding field is suggested in Sect. 63.4.

Medical robots may be classified in many ways: by manipulator design (e.g., kinematics, actuation); by level of autonomy (e.g., preprogrammed versus teleoperation versus constrained cooperative control); by targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or by intended operating environment (e.g., in-scanner, conventional operating room). In this chapter, we have chosen to focus on the role of medical robots within the context of larger computer-integrated systems including presurgical planning, intraoperative execution, and postoperative assessment and follow-up.

First, we introduce basic concepts of computer-integrated surgery, discuss critical factors affecting the eventual deployment and acceptance of medical robots, and introduce the basic system paradigms of surgical computer-assisted planning, execution, monitoring, and assessment (surgical CAD/CAM) and surgical assistance. In subsequent sections, we provide an overview of the technology of medical robot systems and discuss examples of our basic system paradigms, with brief additional discussion topics of remote telesurgery and robotic surgical simulators. We conclude with some thoughts on future research directions and provide suggested further reading.

CardioArm

Author  Carnegie Mellon University, CNN

Video ID : 829

A robotic snake for heart operations: CardioArm.

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.
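
To make the notion of behavior-based control concrete, the following minimal sketch (illustrative only; the behaviors, sensor fields, and priorities are hypothetical, not taken from the chapter) arbitrates between two behaviors by fixed priority.

```python
# Minimal sketch of priority-based behavior arbitration (illustrative only).

def avoid_obstacle(sensors):
    """High-priority behavior: turn away when an obstacle is close."""
    if sensors["obstacle_distance_m"] < 0.3:
        return {"turn": 1.0, "forward": 0.0}
    return None                                   # behavior stays inactive

def seek_goal(sensors):
    """Low-priority behavior: steer toward the goal bearing."""
    return {"turn": sensors["goal_bearing_rad"] * 0.5, "forward": 0.4}

# Higher-priority behaviors are listed first and suppress lower ones.
BEHAVIORS = [avoid_obstacle, seek_goal]

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return {"turn": 0.0, "forward": 0.0}          # default: stand still

print(arbitrate({"obstacle_distance_m": 1.2, "goal_bearing_rad": 0.3}))
```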

Experience-based learning of high-level task representations: Demonstration (3)

Author  Monica Nicolescu

Video ID : 32

This video, recorded in the early 2000s, shows a Pioneer robot learning to traverse "gates" and move objects from a source place to a destination - the human demonstration stage. The robot execution stage is also shown in a related video in this chapter. Reference: M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybernet. A31(5), 419-430 (2001).

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because unavoidable modeling errors and uncertainties may cause the contact force to rise, ultimately leading to unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve robust and versatile behavior of a robotic system in poorly structured environments, as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established with respect to suitable task frames. This formulation is the essential premise to the synthesis of hybrid force/motion control schemes.
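
As a minimal, hedged illustration of the indirect (compliance-type) force-control idea, the sketch below shifts the commanded end-effector position in proportion to the measured contact force so that the robot behaves like a spring; the stiffness value and variable names are placeholders, not taken from the chapter.

```python
def compliant_reference(x_desired, force_measured, stiffness=500.0):
    """One-dimensional compliance law (illustrative sketch only).

    The commanded position is shifted by the measured contact force,
    so the end effector behaves like a spring of the given stiffness [N/m]
    instead of rigidly tracking x_desired.
    """
    return x_desired - force_measured / stiffness

# Example: 10 N of contact force shifts a 0.50 m reference by 2 cm
print(compliant_reference(0.50, 10.0))
```

At equilibrium against a rigid surface, the contact force settles near stiffness * (x_desired - x_surface), which is the bounded-force behavior that indirect force control relies on.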

Compliant robot motion: Control and task specification

Author  Joris De Schutter

Video ID : 687

The video presents work developed in the PhD thesis of Joris De Schutter, in which the concepts of compliant motion based on external force-feedback loops and of the task-frame formalism for specifying interaction tasks were introduced. The video was recorded in 1984. References: 1. J. De Schutter, H. Van Brussel: Compliant robot motion II. A control approach based on external control loops, Int. J. Robot. Res. 7(4), 18-33 (1988); 2. J. De Schutter, H. Van Brussel: Compliant robot motion I. A formalism for specifying compliant motion tasks, Int. J. Robot. Res. 7(4), 3-17 (1988).

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid needless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections dealing with the description of the sensor and the actuating systems are then given. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for the design issues.
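
As a hedged sketch of the coefficient-based modeling approach mentioned above (all coefficients below are placeholders, not values from the chapter), a one-degree-of-freedom surge model with added mass, linear damping, and quadratic drag can be written and integrated as follows.

```python
def surge_acceleration(u, thrust,
                       m=50.0, added_mass=20.0,
                       linear_drag=5.0, quad_drag=40.0):
    """Surge (forward) acceleration of a simple coefficient-based vehicle model.

    (m + added_mass) * du/dt = thrust - linear_drag*u - quad_drag*u*|u|
    All coefficients are illustrative placeholders.
    """
    return (thrust - linear_drag * u - quad_drag * u * abs(u)) / (m + added_mass)

# Forward-Euler rollout from rest under 30 N of constant thrust
u, dt = 0.0, 0.05
for _ in range(200):
    u += surge_acceleration(u, 30.0) * dt
print(f"steady surge speed approx. {u:.2f} m/s")
```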

Mariana Trench: HROV Nereus samples the Challenger Deep seafloor

Author  Woods Hole Oceanographic Institution

Video ID : 89

Date: May 31, 2009. Depth: 10,006 meters (6.2 miles). A WHOI-led team successfully brought the newly-built hybrid remotely operated vehicle (HROV) Nereus to the deepest part of the world's ocean, the Challenger Deep in the Pacific Ocean. The dive makes the unmanned Nereus the world's deepest-diving vehicle and the first vehicle to explore the Mariana Trench since 1998. To learn more visit http://www.whoi.edu/page.do?pid=33775.

Chapter 54 — Industrial Robotics

Martin Hägele, Klas Nilsson, J. Norberto Pires and Rainer Bischoff

Much of the technology that makes robots reliable, human friendly, and adaptable for numerous applications has emerged from manufacturers of industrial robots. With an estimated installation base in 2014 of about 1.5 million units, some 171 000 new installations in that year, and an annual turnover of the robotics industry estimated to be US$ 32 billion, industrial robots are by far the largest commercial application of robotics technology today.

The foundations for robot motion planning and control were initially developed with industrial applications in mind. These applications deserve special attention in order to understand the origin of robotics science and to appreciate the many unsolved problems that still prevent the wider use of robots in today’s agile manufacturing environments. In this chapter, we present a brief history and descriptions of typical industrial robotics applications and at the same time we address current critical state-of-the-art technological developments. We show how robots with different mechanisms fit different applications and how applications are further enabled by the latest technologies, often adopted from technological fields outside manufacturing automation.

We will first present a brief historical introduction to industrial robotics with a selection of contemporary application examples, which at the same time refer to critical key technologies. Then, the basic principles that are used in industrial robotics and a review of programming methods will be presented. We will also introduce the topic of system integration, particularly from a data integration point of view. The chapter closes with an outlook based on a presentation of some unsolved problems that currently inhibit wider use of industrial robots.

SMErobotics Demonstrator D2 Human-Robot cooperation in wooden house production

Author  Martin Haegele, Thilo Zimmermann, Björn Kahl

Video ID : 381

SMErobotics: Europe's leading robot manufacturers and research institutes have teamed up with the European Robotics Initiative for Strengthening the Competitiveness of SMEs in Manufacturing to make the vision of cognitive robotics a reality in a key segment of EU manufacturing. Funded by the European Union 7th Framework Programme under GA number 287787. Project runtime: 01.01.2012 - 30.06.2016. For a general introduction, please also watch the general SMErobotics project video (ID 260). About this video: Chapter 1: Introduction (0:00); Chapter 2: Use of CAD data (00:32); Chapter 3: Object recognition and human interaction (00:47); Chapter 4: Program planning (01:15); Chapter 5: Program execution (01:53); Chapter 6: Automatic Tool Change (02:44); Chapter 7: Error handling (03:13); Chapter 8: Statement (03:58); Chapter 9: Outro (04:18); Chapter 10: The Consortium (04:56). For details, please visit: http://www.smerobotics.org/project/video-of-demonstrator-d2.html

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop; these tasks, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

Autonomous orchard tractors

Author  John Reid

Video ID : 26

Mowing and spraying are two common tasks in orchard environments that require the use of tractors. These tasks take significant time and resources, and spraying, in particular, can be dangerous for the operators, all of which suggests benefits from their automation. This video shows two John Deere tractors driving autonomously in an orange orchard. The first tractor is performing a spraying task, using the perception sensors for obstacle detection and to control the amount of spray applied to the trees, such that each tree receives only the minimum amount of chemicals necessary for its size. The second tractor is performing a mowing task, keeping the grass short to improve access to the orchard and reduce competition for resources with the trees.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.
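
Since Sect. 53.8 surveys task-allocation approaches, a minimal greedy, cost-based allocation sketch may help fix ideas; it is illustrative only, and the robot names, task names, and cost function are assumptions rather than a specific algorithm from the chapter.

```python
# Hedged sketch of greedy, cost-based multi-robot task allocation
# (illustrative only; names and costs are hypothetical).

def greedy_allocate(robots, tasks, cost):
    """Repeatedly assign the cheapest remaining robot-task pair."""
    assignment = {}
    free_robots, open_tasks = set(robots), set(tasks)
    while free_robots and open_tasks:
        r, t = min(((r, t) for r in free_robots for t in open_tasks),
                   key=lambda pair: cost(*pair))
        assignment[r] = t
        free_robots.remove(r)
        open_tasks.remove(t)
    return assignment

# Example with travel cost between 1-D positions
robot_pos = {"r1": 0.0, "r2": 5.0}
task_pos = {"inspect": 4.0, "deliver": 1.0}
print(greedy_allocate(robot_pos, task_pos,
                      lambda r, t: abs(robot_pos[r] - task_pos[t])))
```

Market-based (auction) methods follow the same pattern, with robots bidding their costs instead of a central minimization.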

Handling of a single object by multiple mobile robots based on caster-like dynamics

Author  Yasuhisa Hirata, Youhei Kume, Zhi-dong Wang, Kazuhiro Kosuge

Video ID : 193

This video focuses on how to handle a single object through the coordinated actions of multiple mobile robots. Each robot is controlled based on caster dynamics. The maneuverability of the object can be changed based on the caster offset of each robot. Caster dynamics in the 3-D space is extended to the 2-D plane using a virtual 3-D caster.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Atlas walking and manipulation

Author  DRC Team MIT

Video ID : 662

Autonomy demonstration with the MIT Atlas robot, consisting of the execution of a sequence of autonomous sub-tasks. Walking and manipulation plans are computed online, with object-fitting input from the perception system.