
Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation methods to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Robot dragonfly DelFly Explorer flies autonomously

Author  Christophe De Wagter, Sjoerd Tijmons, Bart D.W. Remes, Guido C.H.E. de Croon

Video ID : 402

The DelFly Explorer is the first flapping-wing micro air vehicle able to fly with complete autonomy in unknown environments. Weighing just 20 g, it is equipped with a 4 g onboard stereo-vision system. The DelFly Explorer can perform an autonomous take-off, maintain its height, and avoid obstacles for as long as its battery lasts (~9 min). All sensing and processing is performed onboard, so no human or offboard computer is in the loop.

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in expectations from the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently; in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected compliant elements. Distributed-parameter, snake-like, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that often lie behind soft robotics.
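
As a rough illustration of the lumped-parameter view, the sketch below simulates a generic series-elastic actuator: a motor inertia driving a link inertia through a linear spring. The model structure, parameter values, and the linear-spring assumption are illustrative placeholders, not the chapter's (or the MACCEPA's) specific formulation.

```python
# Minimal lumped-parameter model of a series-elastic actuator:
# a motor-side inertia coupled to a link-side inertia through a linear spring.
# Illustrative only; values are placeholders, not taken from the chapter.
import numpy as np

J_m, J_l = 0.01, 0.1   # motor and link inertias [kg m^2]
k = 50.0               # spring stiffness [N m / rad]
dt, T = 1e-3, 2.0      # time step and duration [s]

theta, q = 0.0, 0.0         # motor and link positions [rad]
dtheta, dq = 0.0, 0.0       # velocities [rad/s]

for step in range(int(T / dt)):
    tau_m = 1.0 if step * dt < 0.5 else 0.0    # open-loop motor torque pulse
    tau_spring = k * (theta - q)               # elastic torque transmitted to the link
    dtheta += dt * (tau_m - tau_spring) / J_m  # motor-side dynamics
    dq += dt * tau_spring / J_l                # link-side dynamics
    theta += dt * dtheta
    q += dt * dq

print(f"final link angle: {q:.3f} rad, spring deflection: {theta - q:.3f} rad")
```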

MACCEPA system

Author  Michael Gutmacher, Bram Vanderborght et al.

Video ID : 467

The MACCEPA system used for a brachiation robot.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Human-robot jazz improvisation

Author  Guy Hoffman

Video ID : 236

The stage debut of Shimon, the robotic marimba player. Also, the world's first human-robot rendition of Duke Jordan's "Jordu", for human piano and robot marimba.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Combined mobility and manipulation - Operational space control of free-flying space robots

Author  Jeff Russakow, Stephen Rock

Video ID : 787

A space environment is simulated in two dimensions using an air bearing over a flat surface. The operational-space control framework enables dynamically decoupled motion and force control of the object.
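
For orientation, the following sketch shows the core of one operational-space control step: a task-space PD acceleration command is scaled by the task-space inertia matrix and mapped to joint torques through the Jacobian transpose. The 2-DOF inertia matrix and Jacobian below are made-up placeholders, and gravity/Coriolis compensation is omitted; the actual free-flying-robot controller in the video is more involved.

```python
# Minimal sketch of an operational-space control step:
# tau = J^T * Lambda * xddot_des, with Lambda = (J M^-1 J^T)^-1.
# M and J are placeholder values for a 2-DOF example.
import numpy as np

M = np.array([[1.5, 0.2],      # joint-space inertia matrix M(q)
              [0.2, 0.8]])
J = np.array([[-0.4, -0.3],    # task Jacobian J(q); task = 2-D end-effector position
              [ 0.9,  0.5]])

x_err, xdot_err = np.array([0.05, -0.02]), np.array([0.0, 0.0])
Kp, Kd = 100.0, 20.0
xddot_des = Kp * x_err + Kd * xdot_err               # task-space PD acceleration command

Lambda = np.linalg.inv(J @ np.linalg.inv(M) @ J.T)   # task-space inertia matrix
F = Lambda @ xddot_des                               # task-space force
tau = J.T @ F                                        # joint torques realizing the task
print("joint torques:", tau)
```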

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Active teaching

Author  Maya Cakmak, Andrea Thomaz

Video ID : 107

Active-teaching scenario where the Simon humanoid robot asks for help during or after teaching, verifying that its understanding of the task is correct. Reference: M. Cakmak, A.L. Thomaz: Designing robot learners that ask good questions, Proc. ACM/IEEE Int. Conf. Human-Robot Interaction (HRI), Boston (2012), pp. 17–24, URL: https://www.youtube.com/user/SimonTheSocialRobot .

Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke

This chapter introduces visual servo control, using computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.

IBVS on a 6-DOF robot arm (1)

Author  François Chaumette, Seth Hutchinson, Peter Corke

Video ID : 59

This video shows IBVS on a 6-DOF robot arm, using the Cartesian coordinates of image points as visual features and the desired interaction matrix in the control scheme. It corresponds to the results depicted in Figure 34.2.
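
For reference, a minimal sketch of the classical IBVS control law for a single image-point feature is given below: the camera velocity is computed as v = -lambda * L+ (s - s*), with the interaction matrix evaluated at the desired feature value and an assumed depth (the "desired interaction matrix"). The numeric values are illustrative; a real setup stacks several feature points.

```python
# Minimal IBVS sketch for one image-point feature.
# Values are illustrative placeholders.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

s = np.array([0.10, -0.05])     # current normalized image coordinates
s_des = np.array([0.0, 0.0])    # desired coordinates (image center)
Z_des = 0.5                     # assumed depth at the desired pose [m]
lam = 0.5                       # control gain

L_des = interaction_matrix(*s_des, Z_des)            # desired interaction matrix
v_cam = -lam * np.linalg.pinv(L_des) @ (s - s_des)   # 6-D camera velocity twist
print("camera velocity (vx, vy, vz, wx, wy, wz):", v_cam)
```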

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones comprised of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.
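
As a concrete taste of continuum-robot kinematics, the sketch below computes the tip position of a single section under the common constant-curvature simplification. The (kappa, phi, ell) parameterization and the numeric values are illustrative and are not meant to reproduce any specific formulation from the chapter.

```python
# Minimal sketch of single-section constant-curvature forward kinematics,
# a common simplification for continuum-robot backbones.
import numpy as np

def cc_tip_position(kappa, phi, ell):
    """Tip position of one constant-curvature section (base frame at the origin,
    backbone initially along +z); kappa = curvature, phi = bending-plane angle,
    ell = arc length."""
    if abs(kappa) < 1e-9:                       # straight section: no bending
        return np.array([0.0, 0.0, ell])
    r = 1.0 / kappa                             # radius of the circular arc
    x_plane = r * (1.0 - np.cos(kappa * ell))   # in-plane offset from the base
    z = r * np.sin(kappa * ell)                 # height along the initial tangent
    return np.array([np.cos(phi) * x_plane, np.sin(phi) * x_plane, z])

print(cc_tip_position(kappa=2.0, phi=np.pi / 4, ell=0.3))
```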

OctArms I-V

Author  Ian Walker

Video ID : 158

Video showing five different iterations of the OctArm continuum manipulator.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red green blue distance (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
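
To make the graph-optimization paradigm concrete, the sketch below computes the residual of a single 2-D pose-graph edge: the mismatch between a measured relative pose and the relative pose implied by the current node estimates. A full SLAM back end would minimize the weighted sum of such residuals over all edges; the poses and measurement here are made-up values.

```python
# Minimal sketch of the error term of one 2-D pose-graph edge.
# Poses are (x, y, theta); values are illustrative.
import numpy as np

def v2t(p):
    """(x, y, theta) -> 3x3 homogeneous transform."""
    c, s = np.cos(p[2]), np.sin(p[2])
    return np.array([[c, -s, p[0]], [s, c, p[1]], [0.0, 0.0, 1.0]])

def t2v(T):
    """3x3 homogeneous transform -> (x, y, theta)."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def edge_error(x_i, x_j, z_ij):
    """Residual of one edge: measured relative pose vs. the one predicted
    from the current estimates of nodes i and j."""
    return t2v(np.linalg.inv(v2t(z_ij)) @ (np.linalg.inv(v2t(x_i)) @ v2t(x_j)))

x_i = np.array([0.0, 0.0, 0.0])
x_j = np.array([1.0, 0.2, 0.1])
z_ij = np.array([1.0, 0.0, 0.0])          # odometry/loop-closure measurement
print("edge residual:", edge_error(x_i, x_j, z_ij))
```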

Hierarchical optimization for pose graphs on manifolds

Author  Giorgio Grisetti

Video ID : 445

This video provides an illustration of graph-based SLAM, as described in Sect. 46.3.3, Springer Handbook of Robotics, 2nd edn (2016), using the HOGMAN algorithm. Reference: G. Grisetti, R. Kuemmerle, C. Stachniss, U. Frese, C. Hertzberg: Hierarchical optimization on manifolds for online 2-D and 3-D mapping, IEEE Int. Conf. Robot. Autom. (ICRA), Anchorage (2010), pp. 273–278; doi: 10.1109/ROBOT.2010.5509407.

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement, and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
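
As a minimal numerical illustration of pulse-echo ranging, the sketch below converts a round-trip time of flight into range and estimates bearing from the arrival-time difference at two receivers under a far-field approximation. The geometry and timing values are illustrative, not taken from any system described in the chapter.

```python
# Minimal sketch of pulse-echo sonar range and bearing estimation
# from time of flight. Values are illustrative.
import numpy as np

C = 343.0                         # speed of sound in air [m/s], ~20 C

def echo_range(t_round_trip):
    """One-way range from the round-trip time of flight."""
    return C * t_round_trip / 2.0

def echo_bearing(t_left, t_right, baseline):
    """Bearing [rad] from the arrival-time difference at two receivers
    separated by `baseline` [m]; far-field (plane-wave) approximation."""
    path_diff = C * (t_right - t_left)
    return np.arcsin(np.clip(path_diff / baseline, -1.0, 1.0))

print(f"range:   {echo_range(5.8e-3):.3f} m")
print(f"bearing: {np.degrees(echo_bearing(2.90e-3, 2.93e-3, 0.04)):.1f} deg")
```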

Sonar-guided chair at Yale

Author  Roman Kuc

Video ID : 295

Four strategically-placed Polaroid vergence sonar pairs on an electric scooter are controlled by a PIC16877 microcontroller interfaced to the joystick and the wheelchair controller. The sonar vergence pair below the foot stand determines if the obstacle is to the left or right. A sonar vergence pair on each side of the chair (at knee level) determines if the chair can pass by an obstacle without collision. A right-side-looking vergence pair maintains the distance and a parallel path to the wall. When sonar detects obstacles, the user joystick commands are overridden to avoid collision with those obstacles. The blindfolded user navigates a cluttered hallway by holding the joystick in a constant forward position.
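
The guarded-drive idea described above can be caricatured in a few lines: user commands pass through unless a sonar reading falls inside a safety distance, in which case the controller slows the chair and steers away. The thresholds, sensor layout, and steering rule below are hypothetical placeholders, not the actual Yale controller.

```python
# Minimal sketch of joystick-override collision avoidance.
# Thresholds and steering rule are hypothetical placeholders.
def guarded_drive(joystick_fwd, joystick_turn, left_range_m, right_range_m,
                  safety_m=0.5):
    """Return (forward, turn) commands after collision-avoidance override."""
    if min(left_range_m, right_range_m) >= safety_m:
        return joystick_fwd, joystick_turn      # path clear: obey the user
    if left_range_m < right_range_m:
        return 0.3 * joystick_fwd, +0.5         # obstacle on the left: veer right
    return 0.3 * joystick_fwd, -0.5             # obstacle on the right: veer left

print(guarded_drive(1.0, 0.0, left_range_m=0.4, right_range_m=1.2))
```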

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration to the transfer between the two worlds.
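
The core loop of such methods can be sketched compactly: evaluate a population of controller parameter vectors, select the fittest, and produce mutated offspring. The fitness function below is a stand-in; in practice it would score robot behavior in simulation or on hardware (for example, distance traveled without collisions).

```python
# Minimal sketch of an evolutionary loop over controller parameters.
# The fitness function is a placeholder for a robot-behavior evaluation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(genome):
    """Placeholder fitness: real systems evaluate the evolved controller on a robot."""
    return -np.sum((genome - 0.5) ** 2)

pop = rng.standard_normal((20, 10))           # 20 genomes, 10 parameters each
for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-5:]]    # truncation selection: keep the best 5
    children = np.repeat(parents, 4, axis=0) + 0.1 * rng.standard_normal((20, 10))
    pop = children                            # replace population with mutated offspring

print("best fitness after evolution:", max(fitness(g) for g in pop))
```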

Visual navigation with collision avoidance

Author  Dario Floreano

Video ID : 37

An evolved Khepera robot displaying vision-based collision avoidance. A network of spiking neurons is evolved to drive the vision-based robot in the arena. A light below the rotating contacts enables continuous evolution, even overnight.