
Chapter 55 — Space Robotics

Kazuya Yoshida, Brian Wilcox, Gerd Hirzinger and Roberto Lampariello

In the space community, any unmanned spacecraft can be called a robotic spacecraft. However, space robots are considered to be more capable devices that can perform manipulation, assembly, or servicing functions in orbit as assistants to astronauts, or extend the reach and abilities of exploration on remote planets as surrogates for human explorers.

In this chapter, a concise digest of the historical overview and technical advances of two distinct types of space robotic systems, orbital robots and surface robots, is provided. In particular, Sect. 55.1 describes orbital robots, and Sect. 55.2 describes surface robots. In Sect. 55.3, the mathematical modeling of dynamics and control is discussed using reference equations. Finally, advanced topics for future space exploration missions are addressed in Sect. 55.4.
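For orientation, the reference equations in Sect. 55.3 build on the dynamics of a manipulator mounted on a free-floating base. As a sketch in the common generalized-Jacobian notation (which may differ from the chapter's own symbols), conservation of momentum couples the base motion to the manipulator joint rates:

```latex
% Momentum conservation for a free-floating manipulator with zero initial
% momentum and no external forces acting on the base:
H_b \dot{x}_b + H_{bm} \dot{\phi} = 0
% Eliminating the base motion \dot{x}_b from the end-effector kinematics
% \dot{x}_e = J_b \dot{x}_b + J_m \dot{\phi} yields the generalized Jacobian J^*:
\dot{x}_e = \left( J_m - J_b H_b^{-1} H_{bm} \right) \dot{\phi} \equiv J^{*} \dot{\phi}
```

Here $H_b$ and $H_{bm}$ are the base and base-manipulator inertia blocks, and $J_b$, $J_m$ the base and manipulator Jacobians; the key point is that joint motion alone determines end-effector motion once momentum conservation is folded in.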

DLR GETEX manipulation experiments on ETS-VII

Author  Gerd Hirzinger, Klaus Landzettel

Video ID : 332

This is a video record of the remote control of the first free-flying space robot ETS-VII from the DLR ground control station in Tsukuba, done in close cooperation with Japan’s NASDA (today’s JAXA). The video shows a visual-servoing task in which the robot moves autonomously to a reference position defined by visual markers placed on the experimental task board. In view are the true camera measurements (top left, end-effector camera; top right, side camera), the control room in the ground control station (bottom left), and the robot simulation environment (bottom right), which was used as a predictive simulation tool.

Chapter 70 — Human-Robot Augmentation

Massimo Bergamasco and Hugh Herr

The development of robotic systems capable of sharing the load of heavy tasks with humans has been one of the primary objectives in robotics research. At present, much of the interest the robotics community devotes to this objective is focused on so-called wearable robots, a class of robotic systems that are worn and directly controlled by the human operator. Wearable robots, together with powered orthoses that exploit robotic components and control strategies, can also serve as an immediate resource for restoring manipulation and/or walking functionalities in humans.

The present chapter deals with wearable robotic systems capable of providing different levels of functional and/or operational augmentation to human beings for specific functions or tasks. Prostheses, powered orthoses, and exoskeletons are described for upper-limb, lower-limb, and whole-body structures. State-of-the-art devices, together with their functionalities and main components, are presented for each class of wearable system. Critical design issues and open research aspects are reported.

Body Extender - A fully powered whole-body exoskeleton

Author  Massimo Bergamasco

Video ID : 152

The video shows the main functionalities and capabilities of the fully-powered, whole-body exoskeleton Body Extender.

Chapter 63 — Medical Robotics and Computer-Integrated Surgery

Russell H. Taylor, Arianna Menciassi, Gabor Fichtinger, Paolo Fiorini and Paolo Dario

The growth of medical robotics since the mid-1980s has been striking. From a few initial efforts in stereotactic brain surgery, orthopaedics, endoscopic surgery, microsurgery, and other areas, the field has expanded to include commercially marketed, clinically deployed systems, and a robust and exponentially expanding research community. This chapter will discuss some major themes and illustrate them with examples from current and past research. Further reading providing a more comprehensive review of this rapidly expanding field is suggested in Sect. 63.4.

Medical robots may be classified in many ways: by manipulator design (e.g., kinematics, actuation); by level of autonomy (e.g., preprogrammed versus teleoperated versus constrained cooperative control); by targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or by intended operating environment (e.g., in-scanner, conventional operating room). In this chapter, we have chosen to focus on the role of medical robots within the context of larger computer-integrated systems including presurgical planning, intraoperative execution, and postoperative assessment and follow-up.

First, we introduce basic concepts of computer-integrated surgery, discuss critical factors affecting the eventual deployment and acceptance of medical robots, and introduce the basic system paradigms of surgical computer-assisted planning, execution, monitoring, and assessment (surgical CAD/CAM) and surgical assistance. In subsequent sections, we provide an overview of the technology of medical robot systems and discuss examples of our basic system paradigms, with brief additional discussion topics of remote telesurgery and robotic surgical simulators. We conclude with some thoughts on future research directions and provide suggested further reading.

Da Vinci Xi introduction | Engadget

Author  Intuitive Surgical

Video ID : 824

The video shows the use and performance of the Da Vinci Xi robot, the newest generation of the Da Vinci system, which features improved flexibility.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.
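To make the task-allocation theme of Sect. 53.8 concrete, the following is a minimal sketch of a single-item auction, one family of approaches surveyed there. All names (`auction_allocate`, the load penalty) are illustrative, not the chapter's own algorithm.

```python
# Hypothetical sketch of single-item auction-based task allocation.
# Each task is auctioned in turn; every robot bids its cost (e.g., travel
# distance), and a simple load penalty keeps one robot from winning all tasks.

def auction_allocate(robots, tasks, cost):
    """Greedily assign each task to the robot with the lowest bid.

    robots: list of robot ids; tasks: list of task ids;
    cost(robot, task): scalar bid. Returns {task: robot}.
    """
    assignment = {}
    load = {r: 0 for r in robots}  # number of tasks each robot has won so far
    for t in tasks:
        winner = min(robots, key=lambda r: cost(r, t) + load[r])
        assignment[t] = winner
        load[winner] += 1
    return assignment

# Example: two robots and two tasks at 1-D positions; bids are distances.
positions = {"r1": 0.0, "r2": 10.0}
task_pos = {"t1": 1.0, "t2": 9.0}
result = auction_allocate(
    list(positions), list(task_pos),
    lambda r, t: abs(positions[r] - task_pos[t]))
```

Each robot naturally wins the task nearest to it; more elaborate schemes (combinatorial auctions, market-based re-allocation) refine the same bidding idea.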

Transport of a child by swarm-bots

Author  Ivan Aloisio, Michael Bonani, Francesco Mondada, Andre Guignard, Roderich Gross, Dario Floreano

Video ID : 212

This video shows a swarm of s-bots (miniature mobile robots) in swarm-bot formation pulling a child across the floor.

Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke

This chapter introduces visual servo control, using computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
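As a concrete illustration of the image-based scheme, the sketch below implements one step of the classical IBVS control law v = -λ L⁺ e for point features; point depths Z are assumed known here, though in practice they must be estimated or approximated.

```python
import numpy as np

# Minimal image-based visual servoing (IBVS) sketch for point features.
# The interaction matrix below is the standard one for a normalized image
# point (x, y) observed at depth Z.

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) driving the feature error
    toward zero: v = -lam * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# With the features already at their desired values, the commanded
# camera velocity is zero.
v = ibvs_velocity(points=[(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)],
                  desired=[(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)],
                  depths=[1.0, 1.0, 1.0, 1.0])
```

Four points give an 8x6 stacked interaction matrix, so the pseudo-inverse yields a well-posed 6-DOF velocity command; the stability caveats discussed in the chapter concern exactly this L⁺ approximation.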

PBVS on a 6-DOF robot arm (2)

Author  Francois Chaumette, Seth Hutchinson, Peter Corke

Video ID : 63

This video shows PBVS on a 6-DOF robot arm with (c*t_c, θu) as visual features. It corresponds to the results depicted in Fig. 34.10.

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see that the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Autonomous robotic smart-wheelchair navigation in an urban environment

Author  VADERlab

Video ID : 707

This video demonstrates the reliable navigation of a smart wheelchair system (SWS) in an urban environment. Urban environments present unique challenges for service robots. They require localization accuracy at the sidewalk level, but compromise estimated GPS positions through significant multipath effects. However, they are also rich in landmarks that can be leveraged by feature-based localization approaches. To this end, the SWS employed a map-based approach. A map of South Bethlehem was acquired using a survey vehicle, synthesized a priori, and made accessible to the SWS client. The map embedded not only the locations of landmarks, but also semantic data delineating seven different landmark classes to facilitate robust data association. Landmark segmentation and tracking by the SWS was then accomplished using both 2-D and 3-D LIDAR systems. The resulting localization algorithm has demonstrated decimeter-level positioning accuracy in a global coordinate frame. The localization package was integrated into a ROS framework with a sample-based planner and control loop running at 5 Hz. For validation, the SWS repeatedly navigated autonomously between Lehigh University's Packard Laboratory and the University bookstore, a distance of approximately 1.0 km roundtrip.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Catching objects in flight

Author  Seungsu Kim, Ashwini Shukla, Aude Billard

Video ID : 653

We target the difficult problem of catching in-flight objects with uneven shapes. This requires the solution of three complex problems: predicting accurately the trajectory of fast-moving objects, predicting the feasible catching configuration, and planning the arm motion, all within milliseconds. We follow a programming-by-demonstration approach in order to learn models of the object and the arm dynamics from throwing examples. We propose a new methodology for finding a feasible catching configuration in a probabilistic manner. We leverage the strength of dynamical systems for encoding motion from several demonstrations. This enables fast and online adaptation of the arm motion in the presence of sensor uncertainty. We validate the approach in simulation with the iCub humanoid robot and in real-world experiments with the KUKA LWR 4+ (a 7-DOF arm robot) for catching a hammer, a tennis racket, an empty bottle, a partially filled bottle, and a cardboard box.

Chapter 37 — Contact Modeling and Manipulation

Imin Kao, Kevin M. Lynch and Joel W. Burdick

Robotic manipulators use contact forces to grasp and manipulate objects in their environments. Fixtures rely on contacts to immobilize workpieces. Mobile robots and humanoids use wheels or feet to generate the contact forces that allow them to locomote. Modeling of the contact interface, therefore, is fundamental to analysis, design, planning, and control of many robotic tasks.

This chapter presents an overview of the modeling of contact interfaces, with a particular focus on their use in manipulation tasks, including graspless or nonprehensile manipulation modes such as pushing. Analysis and design of grasps and fixtures also depends on contact modeling, and these are discussed in more detail in Chap. 38. Sections 37.2–37.5 focus on rigid-body models of contact. Section 37.2 describes the kinematic constraints caused by contact, and Sect. 37.3 describes the contact forces that may arise with Coulomb friction. Section 37.4 provides examples of analysis of multicontact manipulation tasks with rigid bodies and Coulomb friction. Section 37.5 extends the analysis to manipulation by pushing. Section 37.6 introduces modeling of contact interfaces, kinematic duality, pressure distributions, and soft contact interfaces. Section 37.7 describes the concept of the friction limit surface and illustrates it with an example demonstrating the construction of a limit surface for a soft contact. Finally, Sect. 37.8 discusses how these more accurate models can be used in fixture analysis and design.
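The Coulomb friction model referenced in Sect. 37.3 reduces, at a single planar contact, to a simple membership test: the contact force must lie inside the friction cone for the contact to stick rather than slide. A minimal sketch (function names are illustrative):

```python
import math

# Coulomb friction test at a single planar contact: a tangential/normal
# force pair (f_t, f_n) can be sustained without sliding only if
# f_n >= 0 and |f_t| <= mu * f_n, i.e., the force lies in the friction cone.

def in_friction_cone(f_t, f_n, mu):
    """True if the contact force sticks (lies inside the friction cone)."""
    return f_n >= 0 and abs(f_t) <= mu * f_n

def cone_half_angle(mu):
    """Half-angle of the friction cone, atan(mu), in radians."""
    return math.atan(mu)
```

Forces on the cone boundary correspond to impending or actual sliding, where the friction force opposes the sliding direction with magnitude mu * f_n; multicontact analyses in Sect. 37.4 intersect many such cones.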

Horizontal transport by 2-DOF vibration

Author  Kevin M. Lynch, Paul Umbanhowar

Video ID : 803

This video demonstrates the use of vertical and horizontal vibration of a supporting bar to cause the object on top to slide one way or the other. Upward acceleration of the bar increases the normal force, thereby increasing the tangential friction force during sliding. With periodic vibration, the object achieves a limit-cycle motion. By choosing the phasing of the vertical and horizontal vibration, the net motion during a limit cycle can be to the left or right. Video shown at 1/20 actual speed. This video is related to the example shown in Fig. 37.9 in Sect. 37.4.3 of the Springer Handbook of Robotics, 2nd ed (2016).

Chapter 11 — Robots with Flexible Elements

Alessandro De Luca and Wayne J. Book

Design issues, dynamic modeling, trajectory planning, and feedback control problems are presented for robot manipulators having components with mechanical flexibility, either concentrated at the joints or distributed along the links. The chapter is divided accordingly into two main parts. Similarities or differences between the two types of flexibility are pointed out wherever appropriate.

For robots with flexible joints, the dynamic model is derived in detail by following a Lagrangian approach and possible simplified versions are discussed. The problem of computing the nominal torques that produce a desired robot motion is then solved. Regulation and trajectory tracking tasks are addressed by means of linear and nonlinear feedback control designs.
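For orientation, the reduced flexible-joint model that such a Lagrangian derivation typically yields (Spong's widely used simplification; the chapter's own notation and assumptions may differ) reads, with link positions q, motor positions θ, joint stiffness K, and motor inertia B:

```latex
% Link-side dynamics: the elastic torque K(\theta - q) drives the links.
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = K(\theta - q)
% Motor-side dynamics: the motor torque \tau acts through the elasticity.
B\ddot{\theta} + K(\theta - q) = \tau
```

The motor torque τ thus acts on the links only through the elastic coupling, which is what makes regulation and tracking for these robots harder than for their rigid counterparts.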

For robots with flexible links, relevant factors that lead to the consideration of distributed flexibility are analyzed. Dynamic models are presented, based on the treatment of flexibility through lumped elements, transfer matrices, or assumed modes. Several specific issues are then highlighted, including the selection of sensors, the model order used for control design, and the generation of effective commands that reduce or eliminate residual vibrations in rest-to-rest maneuvers. Feedback control alternatives are finally discussed.

In each of the two parts of this chapter, a section is devoted to the illustration of the original references and to further readings on the subject.

Cartesian impedance control with damping off

Author  Alin Albu-Schaeffer

Video ID : 133

This 2010 video shows the performance of a Cartesian impedance controller for the torque-controlled KUKA-LWR robot holding an extra payload, when the damping term in the controller has been turned off. The response to a contact force (a human pushing on the end-effector) is oscillatory due to the joint elasticity. This is one of two coordinated videos, the other for the case with controller damping turned on. Reference: A. Albu-Schaeffer, C. Ott, G. Hirzinger: A unified passivity-based control framework for position, torque and impedance control of flexible joint robots, Int. J. Robot. Res. 26(1), 23-39 (2007) doi: 10.1177/0278364907073776

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot's location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
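The graph-optimization paradigm can be illustrated with a toy 1-D pose graph (a sketch; real systems operate on SE(2)/SE(3) poses with weighted edges). Each edge (i, j, z) states that pose j minus pose i should equal the measurement z; with the first pose anchored, the maximum-likelihood poses are the least-squares solution of the stacked constraints.

```python
import numpy as np

# Toy 1-D pose-graph optimization: solve for poses that best satisfy all
# relative measurements in the least-squares sense.

def optimize_pose_graph(n_poses, edges):
    """edges: list of (i, j, measured_offset). Returns optimized 1-D poses
    with pose 0 anchored at the origin."""
    rows, rhs = [], []
    anchor = np.zeros(n_poses)
    anchor[0] = 1.0
    rows.append(anchor)           # fix the gauge freedom: pose 0 = 0
    rhs.append(0.0)
    for i, j, z in edges:
        a = np.zeros(n_poses)
        a[j], a[i] = 1.0, -1.0    # constraint: x_j - x_i = z
        rows.append(a)
        rhs.append(z)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

# Odometry reports three +1.0 steps, but a loop closure between the first
# and last pose measures only 2.7; least squares spreads the 0.3 of error
# evenly over the trajectory instead of piling it up at the end.
poses = optimize_pose_graph(
    4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.7)])
# poses is approximately [0, 0.925, 1.85, 2.775]
```

The same structure, with sparse matrices, robust kernels, and nonlinear relinearization, underlies modern graph-based SLAM back ends.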

Large-scale SLAM using the Atlas framework

Author  Michael Bosse

Video ID : 440

This video shows the operation of the Atlas framework for real-time, large-scale mapping using the MIT Killian Court data set. Atlas employs a graph of coordinate frames: each vertex represents a local coordinate frame, and each edge represents the transformation between adjacent local coordinate frames. In each local coordinate frame, extended Kalman filter SLAM (Sect. 46.3.1, Springer Handbook of Robotics, 2nd edn 2016) is performed to make a map of the local environment and to estimate the current robot pose, along with the uncertainties of each. Each map's uncertainties are modeled with respect to its own local frame. Probabilities of entities in relation to arbitrary map frames are generated by following a path formed by the edges between adjacent map frames, using Dijkstra's shortest-path algorithm. Loop closing is achieved via an efficient map-matching algorithm. Reference: M. Bosse, P. M. Newman, J. Leonard, S. Teller: Simultaneous localization and map building in large-scale cyclic environments using the Atlas framework, Int. J. Robot. Res. 23(12), 1113-1139 (2004).
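The frame-chaining step described above can be sketched with a toy 1-D version (illustrative only; Atlas itself composes full rigid-body transformations with covariances): frames are graph nodes, each edge carries a relative offset and a variance, and Dijkstra's algorithm selects the lowest-uncertainty chain between two frames.

```python
import heapq

# Toy version of chaining transformations between adjacent map frames:
# each edge carries a 1-D offset and a variance; Dijkstra (keyed on
# accumulated variance) finds the minimum-uncertainty path.

def relative_offset(graph, start, goal):
    """graph: {frame: [(neighbor, offset, variance), ...]}.
    Returns (total_offset, total_variance) along the best path, or None."""
    heap = [(0.0, 0.0, start)]  # (accumulated variance, accumulated offset, frame)
    visited = set()
    while heap:
        var, off, frame = heapq.heappop(heap)
        if frame == goal:
            return off, var
        if frame in visited:
            continue
        visited.add(frame)
        for nbr, d_off, d_var in graph.get(frame, []):
            if nbr not in visited:
                heapq.heappush(heap, (var + d_var, off + d_off, nbr))
    return None

# Two hops of variance 0.1 each beat one direct edge of variance 0.5,
# so the A->C offset is composed through B.
graph = {
    "A": [("B", 1.0, 0.1), ("C", 2.2, 0.5)],
    "B": [("C", 1.0, 0.1)],
}
off, var = relative_offset(graph, "A", "C")
```

The path through B is preferred because its accumulated variance (0.2) is lower than the direct edge's (0.5), mirroring how Atlas composes entity estimates across adjacent map frames.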