Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

Gravity‐independent rock‐climbing robot and a sample acquisition tool with microspine grippers

Author  Aaron Parness, Matthew Frost, Nitish Thatte, Jonathan P King, Kevin Witkoe, Moises Nevarez, Michael Garrett, Hrand Aghazarian, Brett Kennedy

Video ID : 414

NASA JPL researchers present a 250 mm diameter omni-directional anchor that uses an array of claws with suspension flexures, called microspines, designed to grip rocks on the surfaces of asteroids and comets and to grip the cliff faces and lava tubes of Mars. Part of the paper: A. Parness, M. Frost, N. Thatte, J.P. King: Gravity-independent mobility and drilling on natural rock using microspines, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), St. Paul (2012), pp. 3437-3442.

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
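As an illustrative sketch of the velocity-level resolution mentioned above (NumPy assumed; the Jacobian and the secondary objective below are placeholder values, not taken from the chapter), the classic pseudoinverse solution can be augmented with a null-space term that pursues a secondary criterion without disturbing the primary task:

```python
# Minimal sketch: first-order redundancy resolution with the Moore-Penrose
# pseudoinverse and a null-space projector. Placeholder values, for illustration only.
import numpy as np

def resolve_redundancy(J, xdot, qdot0):
    """Map a task-space velocity xdot to joint velocities for a redundant arm.

    qdot = J^+ xdot + (I - J^+ J) qdot0
    The second term projects a secondary joint-velocity preference qdot0
    (e.g., the gradient of a manipulability or joint-limit criterion)
    into the null space of J, so it does not perturb the primary task.
    """
    J_pinv = np.linalg.pinv(J)              # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector
    return J_pinv @ xdot + N @ qdot0

# Toy example: a 3-DOF planar arm tracking a 2-D end-effector velocity.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 0.8, 0.4]])             # placeholder Jacobian
xdot = np.array([0.1, 0.0])                 # desired task velocity
qdot0 = np.array([0.0, -0.05, 0.05])        # secondary joint-velocity preference
print(resolve_redundancy(J, xdot, qdot0))
```

The null-space projector keeps the secondary motion from perturbing the end-effector task, which is the basic mechanism behind the optimization-based resolution schemes.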

Human robot arm with redundancy resolution

Author  PRISMA Lab

Video ID : 816

In this video, the mapping of human-arm motion to an anthropomorphic robot arm (7-DOF Kuka LWR) using Xsens MVN is demonstrated. The desired end-effector trajectories of the robot are reconstructed from the human hand, forearm and upper-arm trajectories in Cartesian space, obtained from the motion tracking system by means of human-arm biomechanical models and sensor-fusion algorithms embedded in the Xsens technology. The desired pose of the robot is reconstructed taking into account the differences between the robot and human-arm kinematics, and is obtained by suitably scaling the human-arm link dimensions.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
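As a hedged illustration of the graph-optimization paradigm (assuming NumPy and simple 2-D poses [x, y, theta]; the numbers are placeholders, not from the chapter), the sketch below computes the residual of a single pose-graph edge, i.e., the quantity that a graph-based SLAM back-end minimizes, suitably weighted, over all edges:

```python
# Minimal sketch: residual of one pose-graph edge in 2-D. Illustrative only.
import numpy as np

def pose_graph_residual(xi, xj, zij):
    """Error between the measured relative pose zij and the relative pose
    implied by the current estimates xi, xj (each [x, y, theta])."""
    def inv_compose(a, b):
        # relative pose of b expressed in the frame of a
        c, s = np.cos(a[2]), np.sin(a[2])
        R_inv = np.array([[c, s], [-s, c]])
        dt = R_inv @ (b[:2] - a[:2])
        dtheta = np.arctan2(np.sin(b[2] - a[2]), np.cos(b[2] - a[2]))
        return np.array([dt[0], dt[1], dtheta])

    err = inv_compose(xi, xj) - zij
    err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap the angle error
    return err

# Toy edge: two pose estimates and one relative-pose measurement (e.g., from scan matching).
xi = np.array([0.0, 0.0, 0.0])
xj = np.array([1.0, 0.1, 0.05])
zij = np.array([1.0, 0.0, 0.0])
print(pose_graph_residual(xi, xj, zij))
```

A back-end such as Gauss-Newton or Levenberg-Marquardt sums these residuals, weighted by the measurement information matrices, and iteratively adjusts all poses to minimize the total error.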

Pose graph compression for laser-based SLAM 2

Author  Cyrill Stachniss

Video ID : 450

This video illustrates pose graph compression, a technique for achieving long-term SLAM, as discussed in Sect. 46.5, Springer Handbook of Robotics, 2nd edn (2016). Reference: H. Kretzschmar, C. Stachniss: Information-theoretic compression of pose graphs for laser-based SLAM, Int. J. Robot. Res. 31(11), 1219-1230 (2012).

Chapter 27 — Micro-/Nanorobots

Bradley J. Nelson, Lixin Dong and Fumihito Arai

The field of microrobotics covers the robotic manipulation of objects with dimensions in the millimeter to micron range, as well as the design and fabrication of autonomous robotic agents that fall within this size range. Nanorobotics is defined in the same way, but for dimensions smaller than a micron. With the ability to position and orient objects with micron- and nanometer-scale dimensions, manipulation at each of these scales is a promising way to enable the assembly of micro- and nanosystems, including micro- and nanorobots.

This chapter overviews the state of the art of both micro- and nanorobotics, outlines scaling effects, actuation, sensing, and fabrication at these scales, and focuses on micro- and nanorobotic manipulation systems and their application in microassembly, biotechnology, and the construction and characterization of micro- and nanoelectromechanical systems (MEMS/NEMS). Material science, biotechnology, and micro- and nanoelectronics will also benefit from advances in these areas of robotics.
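To make the scaling effects mentioned above concrete, the short sketch below (illustrative numbers only, not from the chapter) compares how volume-dependent forces such as weight and surface-dependent forces such as adhesion shrink with the characteristic length, which is why surface effects dominate manipulation at the micro- and nanoscale:

```python
# Minimal sketch: weight scales roughly with volume (L^3) while adhesion,
# friction, and drag scale roughly with area (L^2), so surface forces
# dominate as the characteristic length shrinks. Illustrative numbers only.
length_scales = [1e-2, 1e-3, 1e-6]   # 1 cm, 1 mm, 1 um characteristic lengths (m)
for L in length_scales:
    volume_force = L**3               # proportional to weight / inertia
    surface_force = L**2              # proportional to adhesion / friction / drag
    print(f"L = {L:>7.0e} m  surface/volume force ratio ~ {surface_force / volume_force:.0e}")
```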

The electromagnetic control of an untethered microrobot

Author  Bradley J. Nelson

Video ID : 12

This is a video of a computer simulation showing the electromagnetic control of an untethered microrobot for ophthalmic applications, such as targeted drug delivery and epiretinal membrane peeling.

Chapter 1 — Robotics and the Handbook

Bruno Siciliano and Oussama Khatib

Robots! Robots on Mars and in oceans, in hospitals and homes, in factories and schools; robots fighting fires, making goods and products, saving time and lives. Robots today are making a considerable impact on many aspects of modern life, from industrial manufacturing to healthcare, transportation, and exploration of the deep space and sea. Tomorrow, robots will be as pervasive and personal as today’s personal computers. This chapter retraces the evolution of this fascinating field from ancient to modern times through a number of milestones: from the first automated mechanical artifact (1400 BC) through the establishment of the robot concept in the 1920s, the realization of the first industrial robots in the 1960s, the definition of robotics science and the birth of an active research community in the 1980s, and the expansion towards the challenges of the human world of the twenty-first century. Robotics in its long journey has inspired this handbook, which is organized in three layers: the foundations of robotics science; the consolidated methodologies and technologies of robot design, sensing and perception, manipulation and interfaces, mobile and distributed robotics; and the advanced applications of field and service robotics, as well as of human-centered and life-like robotics.

Robots — A 50 year journey

Author  Oussama Khatib

Video ID : 805

In this collection of short segments, this video retraces the history of the most influential modern robots developed in the 20th century (1950-2000). The 50-year journey was first presented at the 2000 IEEE International Conference on Robotics and Automation (ICRA) in San Francisco.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.
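As a minimal sketch of the evolutionary loop underlying the methods surveyed here (NumPy assumed; the fitness function and parameter sizes are hypothetical placeholders, not from the chapter), a population of controller parameter vectors is repeatedly evaluated, selected, and mutated:

```python
# Minimal sketch: truncation selection + Gaussian mutation over controller
# parameters. In practice, fitness() runs the robot (simulated or physical)
# and scores its behavior; here it is a placeholder function.
import numpy as np

rng = np.random.default_rng(0)

def fitness(genome):
    # Placeholder fitness: peak at genome == 0.5 everywhere.
    return -np.sum((genome - 0.5) ** 2)

pop = rng.uniform(-1, 1, size=(20, 8))              # 20 genomes, 8 parameters each
for generation in range(50):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-5:]]          # keep the 5 fittest genomes
    children = np.repeat(parents, 4, axis=0)        # 5 parents x 4 offspring = 20
    children += rng.normal(0, 0.1, children.shape)  # Gaussian mutation
    pop = children
print("best fitness:", max(fitness(g) for g in pop))
```

The transfer issue noted above arises because a fitness function evaluated only in simulation may reward behaviors that do not survive the move to the physical robot.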

Morphological change in an autonomous robot

Author  Josh Bongard

Video ID : 771

This video demonstrates a robot that is able to change its morphology. It shows that this change enables evolution to create useful controllers for this robot faster than for a comparable robot that does not undergo morphological change.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.
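As a hedged, highly simplified illustration of the statistical skill models discussed in the chapter (NumPy assumed; the synthetic demonstrations below are placeholders, not data from any cited study), several time-aligned demonstrations can be summarized by a mean trajectory and a per-step variance, where low variance flags the parts of the motion the teacher reproduced consistently and the robot should track tightly:

```python
# Minimal sketch: summarize aligned demonstrations by mean and variance.
# Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(1)
T = 100
t = np.linspace(0, 1, T)
demos = np.stack([np.sin(np.pi * t) + rng.normal(0, 0.02, T) for _ in range(5)])

mean_traj = demos.mean(axis=0)   # reproduced trajectory
var_traj = demos.var(axis=0)     # low variance => constraint to respect tightly
print("most constrained time step:", np.argmin(var_traj))
```

Full-fledged approaches (e.g., mixture models or dynamical-system encodings) generalize this idea, but the mean-plus-variance view captures why multiple demonstrations carry more information than a single one.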

Incremental learning of finger manipulation with tactile capability

Author  Eric Sauser, Brenna Argall, Aude Billard

Video ID : 104

Incremental learning of a finger-manipulation skill, first demonstrated through a data glove and then refined through kinesthetic teaching by exploiting the tactile capabilities of the iCub humanoid robot. Reference: E.L. Sauser, B.D. Argall, G. Metta, A.G. Billard: Iterative learning of grasp adaptation through human corrections, Robot. Auton. Syst. 60(1), 55–71 (2012); URL: http://www.sauser.org/videos.php?id=9

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because the unavoidable modeling errors and uncertainties may cause a rise of the contact force, ultimately leading to an unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve a robust and versatile behavior of a robotic system in poorly structured environments, as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established, with respect to suitable task frames. This formulation is the essential premise to the synthesis of hybrid force/motion control schemes.
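As a hedged illustration of the indirect force control idea described above (the one-dimensional setting and the gains are illustrative assumptions, not the chapter's formulation), the sketch below performs one admittance-control update: the measured contact force deflects the motion reference through a virtual mass-spring-damper, so the contact force stays bounded without an accurate model of the environment:

```python
# Minimal sketch: one 1-D admittance (indirect force control) update.
# Gains and setting are illustrative placeholders.
def admittance_step(x_ref, x_c, v_c, f_meas, dt, k=500.0, d=50.0, m=1.0):
    """Integrate the virtual dynamics m*a + d*v + k*(x_c - x_ref) = f_meas.

    x_ref  : nominal motion reference along this direction
    x_c, v_c : current compliant reference position and velocity
    f_meas : measured contact force along this direction
    Returns the updated compliant reference (x_c, v_c) sent to the inner motion controller.
    """
    a = (f_meas - d * v_c - k * (x_c - x_ref)) / m
    v_c = v_c + a * dt
    x_c = x_c + v_c * dt
    return x_c, v_c

# Example: a 2 N contact force pushes the compliant reference away from x_ref.
x_c, v_c = admittance_step(x_ref=0.10, x_c=0.10, v_c=0.0, f_meas=2.0, dt=0.001)
print(x_c, v_c)
```

The virtual stiffness k sets the trade-off: a softer end effector keeps contact forces low at the cost of larger position deviations.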

Compliant robot motion: Control and task specification

Author  Joris De Schutter

Video ID : 687

The video contains work developed in the PhD thesis of Joris De Schutter, in which the concept of compliant motion based on external force feedback loops and the task frame formalism for specifying interaction tasks were introduced. The video was recorded in 1984. References: 1. J. De Schutter, H. Van Brussel: Compliant robot motion II. A control approach based on external control loops, Int. J. Robot. Res. 7(4), 18-33 (1988); 2. J. De Schutter, H. Van Brussel: Compliant robot motion I. A formalism for specifying compliant motion tasks, Int. J. Robot. Res. 7(4), 3-17 (1988).

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses design, actuation, sensing and control of multifingered robot hands. From the design viewpoint, such hands face a strong constraint on actuator implementation due to the space limitation in each joint. After briefly introducing an overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches for actuation are provided with their advantages and disadvantages in Sect. 19.2. The key classifications are (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, this chapter is closed with conclusions and further reading.
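As a hedged sketch of the remote (tendon-driven) actuation mentioned above (the routing matrix and tension values are illustrative placeholders, not from the chapter), joint torques are generated from tendon tensions through a routing matrix of moment arms; since tendons can only pull, the tensions must remain non-negative, which is one of the constraints such designs must respect:

```python
# Minimal sketch: static torque generation in a tendon-driven finger.
# Rows = joints, columns = tendons; entries are moment arms in meters.
# All values are illustrative placeholders.
import numpy as np

R = np.array([[ 0.010, -0.010],    # moment arms of the 2 tendons about joint 1
              [ 0.008, -0.008]])   # moment arms of the 2 tendons about joint 2

tensions = np.array([12.0, 3.0])   # tendon tensions (N), must stay non-negative
tau = R @ tensions                 # resulting joint torques (N m)
print(tau)
```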

The DLR Hand performing several tasks

Author  DLR - Robotics and Mechatronics Center

Video ID : 769

In the video, several experiments and the execution of different tasks by the DLR Hand II are shown.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Designing robot learners that ask good questions

Author  Maya Cakmak, Andrea Thomaz

Video ID : 237

Programming new skills on a robot should take minimal time and effort. One approach to achieve this goal is to allow the robot to ask questions. This idea, called active learning, has recently caught a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. We identify three types of questions (label, demonstration, and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question-asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether or not there is a general preference for one type of question over another. Based on our findings from both experiments, we provide guidelines for designing question-asking behaviors for a robot learner.
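As a hedged illustration of how a robot learner might decide what to ask with a label query (a toy uncertainty-sampling criterion, not the method used in the study above), the sketch below selects the candidate sample whose predicted class distribution has the highest entropy:

```python
# Minimal sketch: pick the most uncertain sample for a label query.
# Toy classifier probabilities, for illustration only.
import numpy as np

def pick_label_query(candidate_probs):
    """candidate_probs: (n_samples, n_classes) predicted class probabilities.
    Returns the index of the most uncertain sample (highest entropy)."""
    p = np.clip(candidate_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return int(np.argmax(entropy))

probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.70, 0.30]])
print("ask the teacher about sample", pick_label_query(probs))
```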