Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Physical human-robot interaction in imitation learning

Author  Dongheui Lee, Christian Ott, Yoshihiko Nakamura, Gerd Hirzinger

Video ID : 625

This video presents our recent research on the integration of physical human-robot interaction (pHRI) with imitation learning. First, a marker control approach for real-time human-motion imitation is shown. Second, physical coaching in addition to observational learning is applied for the incremental learning of motion primitives. Last, we extend imitation learning to learning pHRI, which includes the establishment of intended physical contacts. The proposed methods were implemented and tested using the IRT humanoid robot and DLR's humanoid upper-body robot Justin.

Chapter 67 — Humanoids

Paul Fitzpatrick, Kensuke Harada, Charles C. Kemp, Yoshio Matsumoto, Kazuhito Yokoi and Eiichi Yoshida

Humanoid robots selectively imitate aspects of human form and behavior. Humanoids come in a variety of shapes and sizes, from complete human-size legged robots to isolated robotic heads with human-like sensing and expression. This chapter highlights significant humanoid platforms and achievements, and discusses some of the underlying goals behind this area of robotics. Humanoids tend to require the integration of many of the methods covered in detail within other chapters of this handbook, so this chapter focuses on distinctive aspects of humanoid robotics with liberal cross-referencing.

This chapter examines what motivates researchers to pursue humanoid robotics, and provides a taste of the evolution of this field over time. It summarizes work on legged humanoid locomotion, whole-body activities, and approaches to human–robot communication. It concludes with a brief discussion of factors that may influence the future of humanoid robots.

Footstep planning modeled as a whole-body, inverse-kinematic problem

Author  Eiichi Yoshida

Video ID : 596

An augmented robot structure was introduced in which "virtual" planar links attached to a foot represent footsteps. This modeling makes it possible to solve footstep planning as an inverse-kinematics problem and, at the same time, to determine the final whole-body configuration. After the footsteps have been planned, the dynamically stable whole-body motion, including walking, can be computed using a dynamic pattern generator.
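The core idea, augmenting the kinematic chain with virtual links so that a standard inverse-kinematics solver places the feet, can be sketched with a generic Jacobian-based solver. The planar chain, link lengths, target, and damping value below are illustrative assumptions, not the actual formulation used in the video.

```python
import numpy as np

def fk(q, lengths):
    """Forward kinematics of a planar serial chain: returns the end point (x, y)."""
    x = y = theta = 0.0
    for qi, li in zip(q, lengths):
        theta += qi
        x += li * np.cos(theta)
        y += li * np.sin(theta)
    return np.array([x, y])

def ik_dls(q0, lengths, target, damping=0.1, iters=200):
    """Damped-least-squares IK with a finite-difference Jacobian."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q, lengths)
        if np.linalg.norm(err) < 1e-6:
            break
        J = np.zeros((2, len(q)))
        eps = 1e-6
        for j in range(len(q)):
            dq = q.copy()
            dq[j] += eps
            J[:, j] = (fk(dq, lengths) - fk(q, lengths)) / eps
        # damped least-squares update step
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
    return q

# Treat the last "link" as a virtual planar link whose pose encodes a footstep.
lengths = [1.0, 1.0, 0.5]  # two body links plus one virtual footstep link
q = ik_dls([0.1, 0.1, 0.1], lengths, np.array([1.5, 1.0]))
print(np.round(fk(q, lengths), 4))  # close to the target [1.5, 1.0]
```

Because the virtual footstep link is just another joint variable, the same solver that positions the hands can also decide where the foot lands.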

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Dynamic robot manipulation

Author  Boston Dynamics

Video ID : 664

BigDog handles heavy objects. The goal is to use the strength of the legs and torso to help power motions of the arm. This sort of dynamic, whole-body approach to manipulation is used routinely by human athletes and will enhance the performance of advanced robots.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Multi-robot box pushing

Author  C. Ronald Kube, Hong Zhang

Video ID : 199

Robots are used to locate an object in the environment (a box with lights on it) and push it to the desired position (an area of the environment with a light shining on it). The robots cannot communicate with each other, and the box is weighted so at least two robots have to push the box to move it. Each robot has three levels of control. First, it wanders randomly looking for the box. Second, it travels toward the box until contact is made. Third, it checks to see if the box is facing the desired direction; if so, it pushes the box, and, if not, it relocates to a different side of the box.
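The three control levels described above can be sketched as a small finite-state controller. The sensor predicates (`sees_box`, `in_contact`, `box_faces_goal`) and action names are illustrative stand-ins for the real robots' photo- and contact-based sensing, not the original implementation.

```python
class PushController:
    """Three-level box-pushing behavior, sketched from the description above:
    wander randomly, then approach the box until contact, then push if the
    box faces the goal direction or relocate to another side if not."""

    def __init__(self):
        self.state = "wander"

    def step(self, sees_box, in_contact, box_faces_goal):
        # Level transitions: wander -> approach -> contact.
        if self.state == "wander" and sees_box:
            self.state = "approach"
        if self.state == "approach" and in_contact:
            self.state = "contact"
        # Action selection for the current level.
        if self.state == "wander":
            return "random-walk"
        if self.state == "approach":
            return "move-toward-box"
        if box_faces_goal:
            return "push"
        self.state = "wander"  # misaligned: back off and find another side
        return "relocate"

ctrl = PushController()
print(ctrl.step(False, False, False))  # random-walk
print(ctrl.step(True, False, False))   # move-toward-box
print(ctrl.step(True, True, True))     # push
```

Note that no inter-robot communication appears anywhere: cooperation emerges only because the box will not move until enough robots happen to push aligned sides.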

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot's location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.

Graph-based SLAM using TORO

Author  Cyrill Stachniss

Video ID : 446

This video provides an illustration of graph-based SLAM, as described in Sect. 46.3.3, Springer Handbook of Robotics, 2nd edn (2016), using the TORO algorithm. Reference: G. Grisetti, C. Stachniss, S. Grzonka, W. Burgard: A tree parameterization for efficiently computing maximum likelihood maps using gradient descent, Proc. Robot. Sci. Syst. (RSS), Atlanta (2007)
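As a toy illustration of the graph-based formulation (not of TORO's tree parameterization itself), the sketch below optimizes a one-dimensional pose graph by plain gradient descent on the summed constraint error. The poses, measurements, and step size are invented for the example.

```python
import numpy as np

# A miniature 1-D pose graph: nodes are scalar poses, edges (i, j, z) are
# relative measurements meaning "pose j minus pose i should equal z".
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0),
         (0, 3, 2.7)]  # a loop closure that disagrees with the odometry

x = np.array([0.0, 1.0, 2.0, 3.0])  # initial guess from raw odometry

for _ in range(500):
    grad = np.zeros_like(x)
    for i, j, z in edges:
        e = (x[j] - x[i]) - z  # residual of one constraint
        grad[j] += e
        grad[i] -= e
    grad[0] = 0.0              # anchor the first pose (gauge freedom)
    x -= 0.1 * grad            # gradient-descent step on the squared error

# The 0.3 loop-closure error is spread evenly along the chain.
print(np.round(x, 3))
```

TORO's contribution is making this kind of error-distribution update fast and robust on large 2-D/3-D graphs via a tree parameterization and stochastic gradient descent; the least-squares objective being minimized is the same.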

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

An autonomous cucumber harvester

Author  Elder J. van Henten, Jochen Hemming, Bart A.J. van Tuijl, J.G. Kornet, Jan Meuleman, Jan Bontsema, Erik A. van Os

Video ID : 308

The video demonstrates an autonomous cucumber harvester developed at Wageningen University and Research Centre, Wageningen, The Netherlands. The machine consists of a mobile platform that runs on the rails commonly used in Dutch greenhouses for internal transport; the rails also serve as the hot-water heating system of the greenhouse. Harvesting requires functional steps such as detecting and localizing the fruit and assessing its ripeness. In the case of the cucumber harvester, the different reflection properties in the near-infrared spectrum are exploited to detect green cucumbers against the green background. Whether a cucumber was ready for harvest was judged from an estimate of its weight. Since cucumbers consist of roughly 95% water, the weight was estimated from the volume of each fruit. Stereo-vision principles were then used to locate the fruits to be harvested in the 3-D environment: the camera was shifted 50 mm on a linear slide, and two images of the same scene were taken and processed. A Mitsubishi RV-E2 manipulator was used to steer the gripper-cutter mechanism to the fruit and transport the harvested fruit back to a storage crate. Collision-free motion planning based on the A* algorithm was used to steer the manipulator during the harvesting operation. The cutter consisted of a parallel gripper that grabbed the peduncle of the fruit, i.e., the stem segment that connects the fruit to the main stem of the plant. The action of a suction cup then immobilized the fruit in the gripper, and a special thermal cutting device separated the fruit from the plant; the high temperature of the cutting device also prevented the potential transfer of viruses from one plant to another during harvesting. For each cucumber successfully harvested, the machine needed 65.2 s on average, with an average success rate of 74.4%.
It proved a great advantage that the system could make several harvest attempts on a single cucumber from different robot positions, which improved the success rate considerably. Since not all attempts were successful, a cycle time of 124 s per harvested cucumber was measured under practical circumstances.
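Two of the estimates described above reduce to one-line formulas: depth from the 50 mm stereo baseline (depth = focal length × baseline / disparity) and weight from volume under a near-water density. The focal length, disparity, and fruit dimensions below are made-up illustration values, not numbers from the Wageningen system.

```python
import math

def stereo_depth_mm(disparity_px, baseline_mm=50.0, focal_px=800.0):
    """Depth from disparity for a camera shifted along a linear slide.
    The focal length (in pixels) is an assumed value for illustration."""
    return focal_px * baseline_mm / disparity_px

def cucumber_weight_g(length_cm, diameter_cm, density_g_cm3=1.0):
    """Weight from volume: approximate the fruit as a cylinder and use a
    density close to water's, since cucumbers are roughly 95% water."""
    radius = diameter_cm / 2.0
    return density_g_cm3 * math.pi * radius**2 * length_cm

print(stereo_depth_mm(40.0))                # 1000.0 (mm): fruit about 1 m away
print(round(cucumber_weight_g(30.0, 3.5)))  # ~289 g for a 30 cm x 3.5 cm fruit
```

The same inverse relation between disparity and depth explains why a larger camera shift improves range accuracy on distant fruit.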

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft


Collaborative human-focused robotics for manufacturing

Author  CHARM Project Consortium

Video ID : 717

The CHARM project demonstrates methods for interacting with robotic assistants, developing perception, communication, control, and safe-interaction technologies and techniques centered on supporting workers who perform complex manufacturing tasks.

Chapter 79 — Robotics for Education

David P. Miller and Illah Nourbakhsh

Educational robotics programs have become popular in most developed countries and are becoming more and more prevalent in the developing world as well. Robotics is used to teach problem solving, programming, design, physics, math and even music and art to students at all levels of their education. This chapter provides an overview of some of the major robotics programs along with the robot platforms and the programming environments commonly used. Like robot systems used in research, there is a constant development and upgrade of hardware and software – so this chapter provides a snapshot of the technologies being used at this time. The chapter concludes with a review of the assessment strategies that can be used to determine if a particular robotics program is benefiting students in the intended ways.

Global Conference on Educational Robotics and International Botball Tournament

Author  KIPR

Video ID : 241

GCER is a STEM-oriented robotics conference, in which the majority of the attendees, paper authors, and presenters are K-12 robotics students. Educator-paper tracks and technology-research tracks also occur. GCER is also the site of the International Botball Tournament, KIPR Open, aerial robots contests, and elementary-school robotics challenges. Some of the recent guest speakers at the conference have included Dr. Maja Mataric (human-robot interaction), Dr. Vijay Kumar (coordinated flying robots), and Dr. Hiroshi Ishiguro (androids). Details from: http://www.kipr.org/gcer

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and who to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Learning from failure II

Author  Aude Billard

Video ID : 477

This video illustrates, in a second example, how learning from demonstration can benefit from failed demonstrations (as opposed to learning only from successful ones). Here, the robot Robota must learn to coordinate its two arms in a timely manner, so that one arm hits the ball with the racket right on time after the other arm has sent the ball flying by hitting the catapult. More details on this work are available in: A. Rai, G. de Chambrier, A. Billard: Learning from failed demonstrations in unreliable systems, Proc. IEEE-RAS Int. Conf. Humanoid Robots (Humanoids), Atlanta (2013), pp. 410–416; doi: 10.1109/HUMANOIDS.2013.7030007

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Robot dragonfly DelFly Explorer flies autonomously

Author  Christophe De Wagter, Sjoerd Tijmons, Bart D.W. Remes, Guido C.H.E. de Croon

Video ID : 402

The DelFly Explorer is the first flapping-wing micro air vehicle able to fly fully autonomously in unknown environments. Weighing just 20 g, it is equipped with a 4 g onboard stereo-vision system. The DelFly Explorer can perform an autonomous take-off, maintain its height, and avoid obstacles for as long as its battery lasts (~9 min). All sensing and processing is performed onboard, so no human or offboard computer is in the loop.