
Chapter 55 — Space Robotics

Kazuya Yoshida, Brian Wilcox, Gerd Hirzinger and Roberto Lampariello

In the space community, any unmanned spacecraft can be called a robotic spacecraft. However, space robots are considered to be more capable devices that can perform manipulation, assembly, or servicing functions in orbit as assistants to astronauts, or that extend the reach and abilities of exploration on remote planets as surrogates for human explorers.

In this chapter, a concise digest of the historical overview and technical advances of two distinct types of space robotic systems, orbital robots and surface robots, is provided. In particular, Sect. 55.1 describes orbital robots, and Sect. 55.2 describes surface robots. In Sect. 55.3, the mathematical modeling of dynamics and control is discussed using reference equations. Finally, advanced topics for future space exploration missions are addressed in Sect. 55.4.

DLR ROTEX: The first remotely-controlled space robot

Author  Gerd Hirzinger, Klaus Landzettel

Video ID : 330

Remotely-controlled space robot ROTEX in the Spacelab D2 mission flown with Shuttle Columbia in April 1993. Among the highlights of the experiment were the verification of shared autonomy when opening a bayonet closure and the fully autonomous grasping of a free-flying object with 6 s round-trip delay.

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses design, actuation, sensing and control of multifingered robot hands. From the design viewpoint, they have a strong constraint in actuator implementation due to the space limitation in each joint. After briefly introducing an overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches for actuation are provided with their advantages and disadvantages in Sect. 19.2. The key classifications are (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and suggestions for further reading.

The PISA-IIT SoftHand (2)

Author  IIT - Pisa University

Video ID : 750

Demonstrations of the use of the Pisa-IIT SoftHand with a human interface.

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones composed of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Binary manipulator navigating an obstacle

Author  Greg Chirikjian

Video ID : 163

Simulation of Greg Chirikjian's binary manipulator navigating an obstacle.

Chapter 4 — Mechanism and Actuation

Victor Scheinman, J. Michael McCarthy and Jae-Bok Song

This chapter focuses on the principles that guide the design and construction of robotic systems. The kinematics equations and Jacobian of the robot characterize its range of motion and mechanical advantage, and guide the selection of its size and joint arrangement. The tasks a robot is to perform and the associated precision of its movement determine detailed features such as mechanical structure, transmission, and actuator selection. Here we discuss in detail both the mathematical tools and practical considerations that guide the design of mechanisms and actuation for a robot system.

Section 4.1 discusses characteristics of the mechanisms and actuation that affect the performance of a robot. Sections 4.2–4.6 discuss the basic features of a robot manipulator and their relationship to the mathematical model that is used to characterize its performance. Sections 4.7 and 4.8 focus on the details of the structure and actuation of the robot and how they combine to yield various types of robots. The final section (Sect. 4.9) relates these design features to various performance metrics.
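As a sketch of how the kinematics equations and Jacobian characterize a manipulator's range of motion and mechanical advantage, consider a hypothetical two-link planar arm (the link lengths and joint angles below are illustrative assumptions, not values from the chapter):

```python
import numpy as np

def forward_kinematics(theta, l1=1.0, l2=0.8):
    # End-effector position (x, y) of a 2-link planar arm
    # with joint angles theta = (theta1, theta2).
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, l1=1.0, l2=0.8):
    # Analytic Jacobian d(x, y)/d(theta1, theta2); its columns map
    # joint velocities to end-effector velocities, and its conditioning
    # reflects the mechanism's mechanical advantage at this posture.
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12 = np.sin(theta[0] + theta[1])
    c12 = np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])
```

Near a singular posture (e.g., the arm fully stretched, theta2 = 0) the Jacobian loses rank, which is exactly the kind of property that guides joint arrangement and sizing decisions.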

Three-fingered robot hand

Author  Masatoshi Ishikawa

Video ID : 642

Figs. 4.5–4.7: A three-fingered robot hand moving very fast.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Exploitation of environmental constraints in human and robotic grasping

Author  Clemens Eppner, Raphael Deimel, Jose Alvarez-Ruiz, Marianne Maertens, Oliver Brock

Video ID : 657

We investigate the premise that robust grasping performance is enabled by exploiting constraints present in the environment. Given this premise, grasping becomes a process of successive exploitation of environmental constraints, until a successful grasp has been established. We present evidence for this view by showing robust robotic grasping based on constraint-exploiting grasp strategies, and we show that it is possible to design robotic hands with inherent capabilities for the exploitation of environmental constraints.

Chapter 35 — Multisensor Data Fusion

Hugh Durrant-Whyte and Thomas C. Henderson

Multisensor data fusion is the process of combining observations from a number of different sensors to provide a robust and complete description of an environment or process of interest. Data fusion finds wide application in many areas of robotics such as object recognition, environment mapping, and localization.

This chapter has three parts: methods, architectures, and applications. Most current data fusion methods employ probabilistic descriptions of observations and processes and use Bayes’ rule to combine this information. This chapter surveys the main probabilistic modeling and fusion techniques including grid-based models, Kalman filtering, and sequential Monte Carlo techniques. This chapter also briefly reviews a number of nonprobabilistic data fusion methods. Data fusion systems are often complex combinations of sensor devices, processing, and fusion algorithms. This chapter provides an overview of key principles in data fusion architectures from both a hardware and algorithmic viewpoint. The applications of data fusion are pervasive in robotics and underlie the core problems of sensing, estimation, and perception. We highlight two example applications that bring out these features. The first describes a navigation or self-tracking application for an autonomous vehicle. The second describes an application in mapping and environment modeling.

The essential algorithmic tools of data fusion are reasonably well established. However, the development and use of these tools in realistic robotics applications is still developing.
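As a minimal sketch of the probabilistic fusion idea mentioned above (Bayes' rule applied to two independent Gaussian observations of the same quantity, i.e., the static core of a Kalman update), the following assumed example fuses two scalar sensor readings; the numbers in the usage note are illustrative:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    # Bayes' rule for two independent Gaussian observations of the
    # same quantity: the posterior is Gaussian, with precision equal
    # to the sum of the measurement precisions, and mean equal to the
    # precision-weighted average of the measurements.
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var
```

For instance, fusing a coarse reading (mean 10.0, variance 4.0) with a precise one (mean 12.0, variance 1.0) yields a posterior pulled toward the precise sensor, with a variance smaller than either sensor's alone; this variance reduction is the essential benefit of multisensor fusion.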

Application of visual odometry for sewer-inspection robots

Author  José Saenz, Christoph Walter, Erik Schulenburg, Norbert Elkmann, Heiko Althoff

Video ID : 638

This video shows a multisensor robot (multiple cameras and a range finder) inspecting pipelines.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
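As a toy sketch of the graph-optimization paradigm, the following assumed example (not from the chapter) solves a one-dimensional pose graph by linear least squares: each constraint encodes a relative measurement between two poses, and a loop closure redistributes accumulated odometry error over the trajectory.

```python
import numpy as np

def optimize_pose_graph_1d(n_poses, constraints):
    # constraints: list of (i, j, z) meaning pose x_j - x_i should
    # equal the measured offset z. The first pose is anchored at 0
    # by a prior row, removing the gauge freedom of the graph.
    rows, rhs = [], []
    prior = np.zeros(n_poses)
    prior[0] = 1.0
    rows.append(prior)
    rhs.append(0.0)
    for i, j, z in constraints:
        r = np.zeros(n_poses)
        r[i], r[j] = -1.0, 1.0
        rows.append(r)
        rhs.append(z)
    # Minimize the sum of squared constraint residuals.
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x
```

With odometry steps of 1.0 m from pose 0 to 1 and from 1 to 2, plus a loop closure measuring 2.2 m from pose 0 to pose 2, the optimizer spreads the 0.2 m discrepancy across the chain instead of assigning it to a single edge. Real pose-graph SLAM does the same on manifolds (rotations and translations) with iterative nonlinear solvers.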

Hierarchical optimization for pose graphs on manifolds

Author  Giorgio Grisetti

Video ID : 445

This video provides an illustration of graph-based SLAM, as described in Sect. 46.3.3, Springer Handbook of Robotics, 2nd edn (2016), using the HOGMAN algorithm. Reference: G. Grisetti, R. Kuemmerle, C. Stachniss, U. Frese, C. Hertzberg: Hierarchical optimization on manifolds for online 2-D and 3-D mapping, IEEE Int. Conf. Robot. Autom. (ICRA), Anchorage (2010), pp. 273-278; doi: 10.1109/ROBOT.2010.5509407.

Chapter 75 — Biologically Inspired Robotics

Fumiya Iida and Auke Jan Ijspeert

Throughout the history of robotics research, nature has been providing numerous ideas and inspirations to robotics engineers. Small insect-like robots, for example, usually make use of reflexive behaviors to avoid obstacles during locomotion, whereas large bipedal robots are designed to control complex human-like legs for climbing up and down stairs. While providing an overview of bio-inspired robotics, this chapter particularly focuses on research which aims to employ robotic systems and technologies for our deeper understanding of biological systems. Unlike most other robotics research, where researchers attempt to develop robotic applications, these types of bio-inspired robots are generally developed to test unsolved hypotheses in the biological sciences. Through close collaborations between biologists and roboticists, bio-inspired robotics research contributes not only to elucidating challenging questions in nature but also to developing novel technologies for robotics applications. In this chapter, we first provide a brief historical background of this research area and then an overview of ongoing research methodologies. A few representative case studies detail successful instances in which robotics technologies help identify biological hypotheses. Finally, we discuss challenges and perspectives in the field.

Biologically inspired robotics (or bio-inspired robotics for short) is a very broad research area because almost all robotic systems are, in one way or another, inspired by biological systems. Therefore, there is no clear distinction between bio-inspired robots and others, and there is no commonly agreed definition [75.1]. For example, legged robots that walk, hop, and run are usually regarded as bio-inspired robots because many biological systems rely on legged locomotion for their survival. On the other hand, many robotics researchers implement biological models of motion control and navigation on wheeled platforms, which could also be regarded as bio-inspired robots [75.2].

MIT Compass Gait Robot - Locomotion over rough terrain

Author  Fumiya Iida, Auke Ijspeert

Video ID : 111

This video shows an experiment with the MIT Compass Gait Robot for locomotion over rough terrain. The platform uses the point feet of compass-gait robots, which are well suited to locomotion on challenging, rough terrain. The motion controller uses a simple oscillator, relying on the intrinsic dynamic stability of the robot.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and, models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Robotic secrets revealed, Episode 1

Author  Greg Trafton

Video ID : 129

A Naval Research Laboratory (NRL) scientist shows a magic trick to a mobile, dexterous, social robot, demonstrating the robot's use and interpretation of gestures. The video highlights recent gesture-recognition work and NRL's novel cognitive architecture, ACT-R/E. While set within a popular game of skill, this video illustrates several Navy-relevant issues: a computational cognitive architecture that enables autonomous function and integrates perceptual information with higher-level cognitive reasoning; gesture recognition for shoulder-to-shoulder human-robot interaction; and anticipation and learning on a robotic system. Such abilities will be critical for future naval autonomous systems for persistent surveillance, tactical mobile robots, and other autonomous platforms.

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines moving dangerously around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in expectations from the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, and compliant elements. Distributed-parameter systems, namely snake-like and continuum soft robots, are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that often lie behind soft robotics.

Introducing WildCat

Author  Boston Dynamics

Video ID : 458

WildCat is a four-legged robot being developed to run fast on all types of terrain. So far WildCat has run at about 16 mph on flat terrain using bounding and galloping gaits. The video shows WildCat's best performance so far. WildCat is being developed by Boston Dynamics with funding from DARPA's M3 program. For more information about WildCat visit our website at www.BostonDynamics.com.