
Chapter 43 — Telerobotics

Günter Niemeyer, Carsten Preusche, Stefano Stramigioli and Dongjun Lee

In this chapter we present an overview of the field of telerobotics with a focus on control aspects. To acknowledge some of the earliest contributions and motivations the field has provided to robotics in general, we begin with a brief historical perspective and discuss some of the challenging applications. Then, after introducing and classifying the various system architectures and control strategies, we emphasize bilateral control and force feedback. This particular area has seen intense research work in the pursuit of telepresence. We also examine some of the emerging efforts, extending telerobotic concepts to unconventional systems and applications. Finally, we suggest some further reading for a closer engagement with the field.
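The bilateral control mentioned above couples master and slave so that motion and force flow in both directions. As an illustrative sketch only (not taken from the chapter; the gains, masses, and operator force are made-up values), a position-position coupling of two 1-DOF point masses can be simulated as follows:

```python
# Hedged sketch of position-position bilateral teleoperation:
# master and slave are each a point mass, servoed toward one another
# by a shared PD coupling law. All numbers are illustrative assumptions.

def simulate_bilateral(steps=5000, dt=1e-3, Kp=100.0, Kd=10.0,
                       m_master=1.0, m_slave=1.0, operator_force=1.0):
    xm = vm = 0.0  # master position / velocity
    xs = vs = 0.0  # slave position / velocity
    for _ in range(steps):
        # PD coupling force between the two sides.
        f = Kp * (xm - xs) + Kd * (vm - vs)
        # The operator pushes the master; the coupling is reflected back,
        # which is what produces force feedback at the master side.
        am = (operator_force - f) / m_master
        # The slave (here moving in free space) is driven by the coupling.
        a_slave = f / m_slave
        vm += am * dt
        xm += vm * dt
        vs += a_slave * dt
        xs += vs * dt
    return xm, xs

xm, xs = simulate_bilateral()
# In free space the slave tracks the master with a small steady lag.
```

This position-position scheme is only one point in the design space the chapter classifies; force-force and four-channel architectures trade off transparency and stability differently.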

Asymmetric teleoperation of dual-arm mobile manipulator

Author  Pawel Malysz, Shahin Sirouspour

Video ID : 75

The video presents an experiment demonstrating a dual-master system teleoperating a single slave mobile-manipulator system, with haptic feedback, for a remote block-transfer task.

Chapter 35 — Multisensor Data Fusion

Hugh Durrant-Whyte and Thomas C. Henderson

Multisensor data fusion is the process of combining observations from a number of different sensors to provide a robust and complete description of an environment or process of interest. Data fusion finds wide application in many areas of robotics such as object recognition, environment mapping, and localization.

This chapter has three parts: methods, architectures, and applications. Most current data fusion methods employ probabilistic descriptions of observations and processes and use Bayes’ rule to combine this information. This chapter surveys the main probabilistic modeling and fusion techniques including grid-based models, Kalman filtering, and sequential Monte Carlo techniques. This chapter also briefly reviews a number of nonprobabilistic data fusion methods. Data fusion systems are often complex combinations of sensor devices, processing, and fusion algorithms. This chapter provides an overview of key principles in data fusion architectures from both a hardware and algorithmic viewpoint. The applications of data fusion are pervasive in robotics and underly the core problem of sensing, estimation, and perception. We highlight two example applications that bring out these features. The first describes a navigation or self-tracking application for an autonomous vehicle. The second describes an application in mapping and environment modeling.
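The probabilistic fusion the chapter surveys reduces, in the simplest scalar Gaussian case, to an inverse-variance weighted mean; this is the building block inside the Kalman filter update. A minimal sketch (not from the chapter; sensor values are made up):

```python
# Hedged sketch: Bayesian fusion of two independent Gaussian observations
# of the same scalar quantity via Bayes' rule. For Gaussians this yields
# the inverse-variance weighted mean and a reduced fused variance.

def fuse_gaussians(mean1, var1, mean2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2          # information (inverse variance)
    fused_var = 1.0 / (w1 + w2)               # fused uncertainty shrinks
    fused_mean = fused_var * (w1 * mean1 + w2 * mean2)
    return fused_mean, fused_var

# Two range sensors observe the same target:
# sensor A reads 10.0 m (variance 1.0), sensor B reads 10.4 m (variance 0.25).
m, v = fuse_gaussians(10.0, 1.0, 10.4, 0.25)
# m = 10.32, v = 0.2: the estimate leans toward the more certain sensor.
```

The same weighting generalizes to vectors and matrices, where the Kalman gain plays the role of the scalar weights here.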

The essential algorithmic tools of data fusion are reasonably well established. However, the development and use of these tools in realistic robotics applications is still developing.

AnnieWay

Author  Thomas C. Henderson

Video ID : 132

This video shows the multisensor autonomous vehicle AnnieWay merging into traffic.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration given to the transfer between the two worlds.

Online learning to adapt to fast environmental variations

Author  Dario Floreano

Video ID : 40

A Khepera mobile robot equipped with a vision module can gain fitness points by staying on the gray area only when the light is on. The light is normally off, but it can be switched on if the robot passes over the black area positioned on the other side of the arena. The robot can detect ambient light and wall color, but not the color of the floor.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people remains quite a challenge. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach in which the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Mental-state inference to support human-robot collaboration

Author  Cynthia Breazeal

Video ID : 563

In this video, the Leonardo robot infers mental states from the observable behavior of two human collaborators in order to assist them in achieving their respective goals. The robot engages in a simulation-theory-inspired approach to make these inferences and to plan the appropriate actions to achieve the task goals. Each person wants a different food item (chips or cookies), locked in one of two larger boxes. The robot can operate a remote control interface to open two smaller boxes, one containing chips and the other cookies. The task is inspired by the Sally-Anne false-belief task, where the humans have diverging beliefs caused by a manipulation witnessed by only one of the participants. The robot must keep track of its own beliefs, in addition to inferring the beliefs of the human collaborators, as well as infer their respective goals, to offer the correct assistance.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects, or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance (in terms of human dexterity and speed) imposed by the limits of today's telepresence and teleoperation technology, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance still present many unsolved problems.

“Sakura” robot developed for reconnaissance missions inside nuclear reactor buildings

Author  James P. Trevelyan

Video ID : 584

This video shows "Sakura" or "Cherry Blossom", a robot developed by researchers at the Chiba Institute of Technology, creators of the successful "Quince" robot.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

An autonomous cucumber harvester

Author  Eldert J. van Henten, Jochen Hemming, Bart A.J. van Tuijl, J.G. Kornet, Jan Meuleman, Jan Bontsema, Erik A. van Os

Video ID : 308

The video demonstrates an autonomous cucumber harvester developed at Wageningen University and Research Centre, Wageningen, The Netherlands. The machine consists of a mobile platform that runs on the rails commonly used in Dutch greenhouses for internal transport, which also double as a hot-water heating system for the greenhouse. Harvesting requires functional steps such as detecting and localizing the fruit and assessing its ripeness. In the case of the cucumber harvester, differences in reflection properties in the near-infrared spectrum are exploited to detect green cucumbers in a green environment. Whether a cucumber was ready for harvest was determined from an estimate of its weight. Since cucumbers consist of roughly 95% water, the weight was estimated by estimating the volume of each fruit. Stereo-vision principles were then used to locate the fruits to be harvested in the 3-D environment. For that purpose, the camera was shifted 50 mm on a linear slide and two images of the same scene were taken and processed. A Mitsubishi RV-E2 manipulator was used to steer the gripper-cutter mechanism to the fruit and to transport the harvested fruit back to a storage crate. Collision-free motion planning based on the A* algorithm was used to steer the manipulator during the harvesting operation. The cutter consisted of a parallel gripper that grabbed the peduncle of the fruit, i.e., the stem segment that connects the fruit to the main stem of the plant. A suction cup then immobilized the fruit in the gripper, and a special thermal cutting device separated the fruit from the plant. The high temperature of the cutting device also prevented the potential transfer of viruses from one plant to another during harvesting. Each successfully harvested cucumber took 65.2 s on average, and the average success rate was 74.4%.
It was found to be a great advantage that the system was able to perform several harvest attempts on a single cucumber from different harvest positions of the robot. This improved the success rate considerably. Since not all attempts were successful, a cycle time of 124 s per harvested cucumber was measured under practical circumstances.
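The two-view localization described above (a single camera shifted along a linear slide) follows standard rectified-stereo triangulation. A hedged sketch (not from the video; the focal length and disparity are illustrative assumptions, only the 50 mm baseline comes from the description):

```python
# Hedged sketch: depth from two views taken by one camera shifted by a
# known baseline B on a linear slide. With rectified views, Z = f * B / d,
# where d is the pixel disparity of the same feature between the images.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    # Standard stereo triangulation for rectified, purely translated views.
    return focal_px * baseline_m / disparity_px

# 50 mm slide travel (as in the video); an assumed 800 px focal length and
# an assumed 40 px feature shift between the two images.
z = depth_from_disparity(0.05, 800.0, 40.0)
# z = 1.0 m from the camera to the fruit
```

In practice the detected cucumber blobs from the near-infrared segmentation would supply the matched features whose disparity feeds this computation.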

Chapter 34 — Visual Servoing

François Chaumette, Seth Hutchinson and Peter Corke

This chapter introduces visual servo control, using computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
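Both archetypal schemes regulate a feature error e = s - s* with a velocity command of the form v = -λ L⁺ e, where L is the interaction matrix of the chosen features. A hedged sketch for a single image-point feature (not from the chapter; the gain, feature values, and depth are illustrative):

```python
import numpy as np

# Hedged sketch of the classical visual servo control law v = -lambda * pinv(L) e,
# here instantiated for one normalized image point (x, y) at depth Z observed
# by a 6-DOF camera. Gain and feature values are illustrative assumptions.

def interaction_matrix(x, y, Z):
    # Interaction matrix of a normalized image point w.r.t. the 6-DOF
    # camera velocity (vx, vy, vz, wx, wy, wz).
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,        -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    e = s - s_star                        # feature error
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ e   # commanded camera twist

v = ibvs_velocity(np.array([0.1, 0.2]), np.array([0.0, 0.0]), Z=1.0)
# v is the 6-DOF camera velocity that drives the feature error toward zero.
```

For PBVS, as in the video, s would instead collect pose features such as the translation and θu rotation error, with the corresponding interaction matrix.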

PBVS on a 6-DOF robot arm (1)

Author  François Chaumette, Seth Hutchinson, Peter Corke

Video ID : 62

This video shows PBVS on a 6-DOF robot arm with (^c t_o, θu) as the visual features. It corresponds to the results depicted in Figure 34.9.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Smooth vertical surface climbing with directional adhesion

Author  Sangbae Kim, Mark R. Cutkosky

Video ID : 389

Stickybot is a bioinspired robot that climbs smooth vertical surfaces such as those made of glass, plastic, and ceramic tile at 4 cm/s. The robot employs several design principles adapted from the gecko, including a hierarchy of compliant structures and directional adhesion. At the finest scale, the undersides of Stickybot’s toes are covered with arrays of small, angled polymer stalks.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

An assistive, decision-and-control architecture for force-sensitive, hand–arm systems driven by human–machine interfaces (MM4)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 622

The video shows a 2-D drinking demonstration using the BrainGate2 neural interface. The robot is controlled through a multi-priority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture is employed which uses sensory information available from the robotic system to evaluate the current state of task execution. During the task, the full functionality of the skills currently available in the robotic system's skill library is used.

Injury evaluation of human-robot impacts

Author  Sami Haddadin, Alin Albu-Schäffer, Michael Strohmayr, Mirko Frommberger, Gerd Hirzinger

Video ID : 608

In this video, several blunt impact tests are shown, leading to an assessment of which factors dominate injury severity. We illustrate the effects that robot speed, robot mass, and constraints in the environment have on safety in human-robot impacts. It is shown that the intuition that heavy robots necessarily transmit high impact loads is wrong. Furthermore, the conclusion is reached that free impacts are far less dangerous than being crushed. Reference: S. Haddadin, A. Albu-Schäffer, M. Strohmayr, M. Frommberger, G. Hirzinger: Injury evaluation of human-robot impacts, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Pasadena (2008), pp. 2203–2204; doi: 10.1109/ROBOT.2008.4543534.