
Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.
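As a toy illustration of the kind of statistical skill models surveyed in the chapter, the sketch below encodes several time-aligned demonstrations by their pointwise mean and spread. The data and function names are invented for illustration, and this is far simpler than the mixture models used in practice:

```python
import statistics

def encode_skill(demonstrations):
    """Encode time-aligned demonstration trajectories by their pointwise
    mean and standard deviation -- a crude stand-in for the statistical
    skill models (e.g. mixture models) surveyed in the chapter."""
    mean, spread = [], []
    for points in zip(*demonstrations):
        mean.append(statistics.fmean(points))
        spread.append(statistics.stdev(points))
    return mean, spread

# Three time-aligned 1-D demonstrations of the same reaching motion.
demos = [
    [0.0, 0.2, 0.5, 0.9, 1.0],
    [0.0, 0.3, 0.6, 0.9, 1.1],
    [0.0, 0.1, 0.4, 0.9, 0.9],
]
mean, spread = encode_skill(demos)
```

The mean serves as the reproduced trajectory, while the spread indicates where the demonstrations agree (low variance, i.e., task-relevant) and where they differ (high variance, i.e., free to vary).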

Reproduction of dishwasher-unloading task based on task-precedence graph

Author  Michael Pardowitz, Raoul Zöllner, Steffen Knoop, Tamim Asfour, Kristian Regenstein, Pedram Azad, Joachim Schröder, Rüdiger Dillmann

Video ID : 103

ARMAR-III humanoid robot reproducing the task of unloading a dishwasher, based on a task-precedence graph learned from demonstrations. References: 1) T. Asfour, K. Regenstein, P. Azad, J. Schroeder, R. Dillmann: ARMAR-III: A humanoid platform for perception-action integration, Int. Workshop Human-Centered Robotic Systems (HCRS) (2006); 2) M. Pardowitz, R. Zöllner, S. Knoop, R. Dillmann: Incremental learning of tasks from user demonstrations, past experiences and vocal comments, IEEE Trans. Syst. Man Cybernet. B 37(2), 322-332 (2007).
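A task-precedence graph like the one in the video can be reproduced by any topological ordering of its constraints. A minimal sketch using Kahn's algorithm follows; the dishwasher subtask names are hypothetical, not taken from the cited work:

```python
from collections import deque

def execution_order(edges):
    """Topologically sort a task-precedence graph, where each edge
    (a, b) means task a must be executed before task b (Kahn's
    algorithm with a deterministic tie-break)."""
    tasks = {t for edge in edges for t in edge}
    indegree = {t: 0 for t in tasks}
    successors = {t: [] for t in tasks}
    for before, after in edges:
        successors[before].append(after)
        indegree[after] += 1
    ready = deque(sorted(t for t in tasks if indegree[t] == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in successors[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

# Hypothetical precedence constraints for unloading a dishwasher.
edges = [("open_door", "pull_out_rack"),
         ("pull_out_rack", "grasp_cup"),
         ("grasp_cup", "place_cup_in_cupboard")]
order = execution_order(edges)
```

Any ordering consistent with the graph is a valid reproduction, which is what lets the robot generalize beyond the exact sequence it was shown.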

Chapter 39 — Cooperative Manipulation

Fabrizio Caccavale and Masaru Uchiyama

This chapter is devoted to cooperative manipulation of a common object by means of two or more robotic arms. The chapter opens with a historical overview of the research on cooperative manipulation, ranging from the early 1970s to very recent years. Kinematics and dynamics of robotic arms cooperatively manipulating a tightly grasped rigid object are presented in depth. As for the kinematics and statics, the chosen approach is based on the so-called symmetric formulation; fundamentals of dynamics and reduced-order models for closed kinematic chains are discussed as well. A few special topics, such as the definition of geometrically meaningful cooperative task space variables, the problem of load distribution, and the definition of manipulability ellipsoids, are included to give the reader a complete picture of modeling and evaluation methodologies for cooperative manipulators. Then, the chapter presents the main strategies for controlling both the motion of the cooperative system and the interaction forces between the manipulators and the grasped object; in detail, fundamentals of hybrid force/position control, proportional-derivative (PD)-type force/position control schemes, feedback linearization techniques, and impedance control approaches are given. In the last section further reading on advanced topics related to control of cooperative robots is suggested; in detail, advanced nonlinear control strategies are briefly discussed (i.e., intelligent control approaches, synchronization control, decentralized control); also, fundamental results on modeling and control of cooperative systems possessing some degree of flexibility are briefly outlined.
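The load-distribution problem can be illustrated with a deliberately simple planar case (the setup and numbers are ours, not the chapter's): two arms apply vertical forces at distance d on either side of a beam's center, and the minimum-norm split of a desired resultant force and moment has a closed form:

```python
def load_distribution(f_total, m_total, d):
    """Split a resultant vertical force f_total and moment m_total
    (about the object's center) between two grasp points at -d and +d
    along a rigid beam. This split is the minimum-norm solution; any
    equal-and-opposite pair added on top is an internal force that
    squeezes the object without changing its net motion."""
    f1 = f_total / 2.0 - m_total / (2.0 * d)
    f2 = f_total / 2.0 + m_total / (2.0 * d)
    return f1, f2

# Require a 10 N resultant and a 2 N*m moment, grasps 1 m from center.
f1, f2 = load_distribution(f_total=10.0, m_total=2.0, d=1.0)
```

The unresolved internal-force direction is exactly the null space of the grasp map that the symmetric formulation makes explicit.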

Cooperative capturing via flexible manipulators

Author  Masaru Uchiyama

Video ID : 68

This video shows cooperative capturing of a spinning object by flexible manipulators. Reference: T. Miyabe, M. Yamano, A. Konno, M. Uchiyama: An approach towards a robust object recovery with flexible manipulators, Proc. IEEE/RSJ Int. Conf. Intel. Robot. Syst. (2001) pp. 907-912.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation methods to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

An octopus-bioinspired solution to movement and manipulation for soft robots

Author  Marcello Calisti, Michelle Giorelli, Guy Levy, Barbara Mazzolai, Binyamin Hochner, Cecilia Laschi, Paolo Dario

Video ID : 411

This completely soft robotic arm, which moves freely in water, was inspired by the form and morphology of the octopus.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society's latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop; these tasks, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

A mini, unmanned, aerial system for remote sensing in agriculture

Author  Joao Valente, Julian Colorado, Claudio Rossi, Alex Martinez, Jaime Del Cerro, Antonio Barrientos

Video ID : 307

This video shows a mini-aerial robot employed for aerial sampling in precision agriculture (PA). Issues such as field partitioning, path planning, and robust flight control are addressed, together with experimental results collected during outdoor testing.
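Field partitioning and path planning for aerial sampling are commonly reduced to covering an area with parallel passes. A minimal boustrophedon (back-and-forth) waypoint generator for a rectangular field is sketched below; the geometry and parameters are assumptions for illustration, not the planner from the video:

```python
def coverage_waypoints(width, height, swath):
    """Generate a boustrophedon (back-and-forth) waypoint list that
    covers a width x height rectangle with parallel passes spaced one
    sensor swath apart, alternating flight direction each pass."""
    waypoints = []
    y, going_right = 0.0, True
    while y <= height:
        row = [(0.0, y), (width, y)]
        waypoints.extend(row if going_right else row[::-1])
        going_right = not going_right
        y += swath
    return waypoints

# A 100 m x 60 m field covered by a sensor with a 20 m swath.
wps = coverage_waypoints(width=100.0, height=60.0, swath=20.0)
```

Real planners additionally decompose non-convex fields into cells and order the cells to minimize total flight time, but each cell is typically swept exactly like this.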

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
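At their core, the threshold-based ranging modules described above reduce to a time-of-flight computation. The sketch below uses a common linear approximation for the temperature dependence of the speed of sound in air; the constants are textbook approximations, not values from this chapter:

```python
def sound_speed(temp_c):
    """Approximate speed of sound in air (m/s) as a linear function
    of temperature in degrees Celsius (common approximation)."""
    return 331.3 + 0.606 * temp_c

def echo_range(round_trip_s, temp_c=20.0):
    """Range to a reflector from the round-trip time of flight.
    The pulse travels out and back, hence the division by two."""
    return sound_speed(temp_c) * round_trip_s / 2.0

# A 10 ms echo delay at 20 C corresponds to roughly 1.72 m.
r = echo_range(round_trip_s=0.01, temp_c=20.0)
```

Neglecting the temperature term alone introduces a range error of roughly 0.18% per degree Celsius, one reason accurate sonars compensate for air temperature.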

Monash DSP sonar tracking a moving plane

Author  Lindsay Kleeman

Video ID : 313

A four-transducer system is controlled with a DSP microcontroller which processes echoes to determine the normal incidence and range to a plane reflector. The transducer scans to locate the plane and then tracks the normal-incidence section of the plane as it moves in real time.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control is considered at levels ranging from abstract, high-level task specification down to fine-grained feedback at the task interface. Some of the important issues include modeling of the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension to the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.
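The hybrid nature of manipulation planning can be illustrated with a toy two-mode planner (a sketch of the general idea, not an algorithm from the chapter): a transit search moves the empty hand to the part, then a transfer search moves hand and part together through a free space that shrinks because of the carried part:

```python
from collections import deque

def bfs(grid, start, goal):
    """Shortest path on a 4-connected grid; '#' cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

# Mode 1 (transit): move the empty hand to the part at (2, 0).
# Mode 2 (transfer): carry the part to (2, 3); one extra cell is
# blocked in this mode because the carried part enlarges the footprint.
transit_grid  = ["....", ".##.", "...."]
transfer_grid = ["....", ".##.", ".#.."]
transit  = bfs(transit_grid,  (0, 0), (2, 0))
transfer = bfs(transfer_grid, (2, 0), (2, 3))
```

The full plan concatenates the two paths; real manipulation planners search over many such mode switches (grasps and regrasps) in continuous configuration spaces.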

Autonomous continuum grasping

Author  Jing Xiao et al.

Video ID : 357

The video shows three example tasks: (1) autonomous grasping and lifting operation of an object, (2) autonomous obstacle avoidance operation, and (3) autonomous operation of grasping and lifting an object while avoiding another object. Note that the grasped object was lifted about 2 inches off the table.

Chapter 4 — Mechanism and Actuation

Victor Scheinman, J. Michael McCarthy and Jae-Bok Song

This chapter focuses on the principles that guide the design and construction of robotic systems. The kinematics equations and Jacobian of the robot characterize its range of motion and mechanical advantage, and guide the selection of its size and joint arrangement. The tasks a robot is to perform and the associated precision of its movement determine detailed features such as mechanical structure, transmission, and actuator selection. Here we discuss in detail both the mathematical tools and practical considerations that guide the design of mechanisms and actuation for a robot system.
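The Jacobian's role in characterizing mechanical advantage can be made concrete with a planar 2-link arm (the link lengths and forces below are arbitrary illustration values): the transpose of the Jacobian maps a force at the tip to the static joint torques that balance it:

```python
from math import sin, cos, pi

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm, mapping joint rates to
    end-effector velocity (and, transposed, tip forces to torques)."""
    s1, c1 = sin(theta1), cos(theta1)
    s12, c12 = sin(theta1 + theta2), cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_torques(J, fx, fy):
    """Static joint torques tau = J^T F balancing a tip force (fx, fy)."""
    return (J[0][0] * fx + J[1][0] * fy,
            J[0][1] * fx + J[1][1] * fy)

# Elbow bent 90 degrees; a 1 N force pulls straight down at the tip.
J = jacobian(0.0, pi / 2)
tau1, tau2 = joint_torques(J, fx=0.0, fy=-1.0)
```

In this pose the entire load is carried by the shoulder joint, which is exactly the kind of mechanical-advantage information used to size actuators.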

Section 4.1 discusses characteristics of the mechanisms and actuation that affect the performance of a robot. Sections 4.2–4.6 discuss the basic features of a robot manipulator and their relationship to the mathematical model that is used to characterize its performance. Sections 4.7 and 4.8 focus on the details of the structure and actuation of the robot and how they combine to yield various types of robots. The final Sect. 4.9 relates these design features to various performance metrics.

A parallel robot

Author  Jae-Bok Song

Video ID : 640

Fig. 4.2 A parallel robot can have as many as six serial chains that connect a platform to the base frame.

Chapter 24 — Wheeled Robots

Woojin Chung and Karl Iagnemma

The purpose of this chapter is to introduce, analyze, and compare various wheeled mobile robots (WMRs) and to present several realizations and commonly encountered designs. The mobility of WMRs is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. Practical robot structures are classified according to the number of wheels, and their features are introduced with a focus on commonly adopted designs. Realizations of omnimobile robots and articulated robots are described. Wheel–terrain interaction models are presented in order to compute the forces at the contact interface. Four possible wheel–terrain interaction cases are distinguished on the basis of the relative stiffness of the wheel and the terrain. A suspension system is required to move on uneven surfaces. The structures, dynamics, and important features of commonly used suspensions are explained.

Articulated robot - a robot pushing three passive trailers

Author  Woojin Chung

Video ID : 326

An omnidirectional robot pushes three passive trailers along a straight reference trajectory. There are no actuators in the modular passive trailers, and the trailers are connected through free joints. The backward-motion controller of the robot perceives the pose of the last trailer and the joint angles between trailers. Thus, one active robot can control an arbitrary number of trailers.

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have the potential to achieve dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses the design, actuation, sensing, and control of multifingered robot hands. From the design viewpoint, such hands face a strong constraint on actuator implementation due to the limited space in each joint. After a brief overview of the anthropomorphic end-effector and its dexterity in Sect. 19.1, various approaches to actuation are presented, with their advantages and disadvantages, in Sect. 19.2. The key classifications are (1) remote actuation versus built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, the actuators and sensors used in multifingered hands are described. In Sect. 19.4, modeling and control are introduced, considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.
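Friction enters grasp control through the Coulomb condition at each fingertip contact. A minimal check that a contact force stays inside the friction cone is sketched below; the numerical values are illustrative, not taken from the chapter:

```python
import math

def inside_friction_cone(fn, ft1, ft2, mu):
    """Coulomb condition at a fingertip contact: the tangential force
    magnitude must not exceed mu times the (positive) normal force,
    otherwise the finger slips on the object."""
    return fn > 0 and math.hypot(ft1, ft2) <= mu * fn

# A 5 N press with modest tangential load holds; doubling the
# tangential load exceeds the cone and the finger would slip.
grip_ok = inside_friction_cone(fn=5.0, ft1=1.0, ft2=1.0, mu=0.5)
```

Grasp-force optimization in multifingered hands amounts to choosing internal forces so that every contact satisfies this constraint with some safety margin.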

The DLR Hand performing several tasks

Author  DLR - Robotics and Mechatronics Center

Video ID : 769

In the video, several experiments and the execution of different tasks by the DLR Hand II are shown.

Chapter 78 — Perceptual Robotics

Heinrich Bülthoff, Christian Wallraven and Martin A. Giese

Robots that share their environment with humans need to be able to recognize and manipulate objects and users, perform complex navigation tasks, and interpret and react to human emotional and communicative gestures. In all of these perceptual capabilities, the human brain, however, is still far ahead of robotic systems. Hence, taking cues from the way the human brain solves such complex perceptual tasks will help to design better robots. Similarly, once a robot interacts with humans, its behaviors and reactions will be judged by humans: movements of the robot, for example, should be fluid and graceful, and it should not evoke an eerie feeling when interacting with a user. In this chapter, we present Perceptual Robotics as the field of robotics that takes inspiration from perception research and neuroscience to, first, build better perceptual capabilities into robotic systems and, second, to validate the perceptual impact of robotic systems on the user.

Active in-hand object recognition

Author  Christian Wallraven

Video ID : 569

This video showcases the implementation of active object learning and recognition using the framework proposed in Browatzki et al. [1, 2]. The first phase shows the robot learning the visual representation of several paper cups differing by a few key features. The robot executes a pre-programmed exploration program to look at each cup from all sides. The (very low-resolution) visual input is tracked, and so-called key-frames are extracted which represent the (visual) exploration. After learning, the robot tries to recognize cups that have been placed into its hands using a similar exploration program based on visual information; due to the low-resolution input and the highly similar objects, however, the robot fails to make the correct decision. The video then shows the second, advanced exploration strategy, which is based on actively seeking the view that is expected to provide maximum information about the object. For this, the robot embeds the learned visual information into a proprioceptive map indexed by the two joint angles of the hand. In this map, the robot now tries to predict the joint-angle combination that provides the most information about the object, given the current state of exploration. The implementation uses particle filtering to track a large number of object (view) hypotheses at the same time. Since the robot now uses a multisensory representation, the subsequent object-recognition trials are all correct, despite poor visual input and highly similar objects. References: [1] B. Browatzki, V. Tikhanoff, G. Metta, H.H. Bülthoff, C. Wallraven: Active in-hand object recognition on a humanoid robot, IEEE Trans. Robot. 30(5), 1260-1269 (2014); [2] B. Browatzki, V. Tikhanoff, G. Metta, H.H. Bülthoff, C. Wallraven: Active object recognition on a humanoid robot, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), St. Paul (2012), pp. 2021-2028.
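The information-driven view selection described above can be sketched in miniature with a discrete belief over two object hypotheses (the actual system uses particle filters over view hypotheses; the objects, views, and likelihoods below are invented for illustration): the robot picks the view that minimizes the expected posterior entropy of its belief:

```python
from math import log2

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over objects."""
    return -sum(p * log2(p) for p in belief.values() if p > 0)

def posterior(belief, likelihood, obs):
    """Bayesian update of the belief given an observation."""
    unnorm = {o: belief[o] * likelihood[o][obs] for o in belief}
    z = sum(unnorm.values())
    return {o: p / z for o, p in unnorm.items()}

def best_view(belief, views):
    """Choose the view whose observation is expected to reduce the
    belief entropy the most (minimum expected posterior entropy)."""
    def expected_entropy(likelihood):
        obs_set = {obs for dist in likelihood.values() for obs in dist}
        total = 0.0
        for obs in obs_set:
            p_obs = sum(belief[o] * likelihood[o][obs] for o in belief)
            if p_obs > 0:
                total += p_obs * entropy(posterior(belief, likelihood, obs))
        return total
    return min(views, key=lambda v: expected_entropy(views[v]))

# Two nearly identical cups; only the side view can reveal the logo.
views = {
    "top":  {"cup_a": {"logo": 0.5, "plain": 0.5},
             "cup_b": {"logo": 0.5, "plain": 0.5}},
    "side": {"cup_a": {"logo": 0.9, "plain": 0.1},
             "cup_b": {"logo": 0.1, "plain": 0.9}},
}
belief = {"cup_a": 0.5, "cup_b": 0.5}
choice = best_view(belief, views)
```

The uninformative top view leaves the belief at 1 bit of entropy, while the side view is expected to cut it to about 0.47 bits, so the planner selects the side view, mirroring the active exploration step in the video.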