
Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner, often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. Creating social robots that are competent and capable partners for people is a challenging long-term goal. They will need to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well, in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach in which the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Playing triadic games with KASPAR

Author  Kerstin Dautenhahn

Video ID : 220

The video illustrates (with researchers taking the roles of children) the system developed by Joshua Wainer as part of his PhD research at the University of Hertfordshire. In this study, KASPAR was developed to play games fully autonomously with pairs of children with autism. The robot provides encouragement, motivation, and feedback, and 'joins in the game'. The system was evaluated in long-term studies with children with autism (J. Wainer et al. 2014). Results show that KASPAR encourages collaborative skills in children with autism.

Nonverbal envelope displays to support turn-taking behavior

Author  Cynthia Breazeal

Video ID : 559

This video demonstrates Kismet's envelope displays, which regulate turn-taking during a "conversation". In the video, Kismet is "speaking" with one person but also acknowledges the presence of a second person. The robot is not speaking an actual language, so the exchange is more reminiscent of conversing with a pre-linguistic child. The nonverbal turn-taking behavior is what is being highlighted.

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not know as yet exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in expectations from the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently; in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected compliant elements. Distributed-parameter, snakelike, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that are often behind soft robotics.
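As a toy illustration of the lumped-parameter view, a series elastic actuator can be reduced to a motor coupled to a link through a discrete torsional spring. The sketch below, with made-up values for stiffness, inertia, and damping, shows how the torque acting on the link arises purely from the spring deflection:

```python
# Minimal lumped-parameter model of a series elastic actuator (SEA):
# a motor position theta drives the link position q through a torsional
# spring of stiffness k, so the torque on the link is k * (theta - q).
# All numerical values are illustrative, not taken from the chapter.

def sea_step(theta, q, dq, k, inertia, damping, dt):
    """Advance the link state by one semi-implicit Euler step."""
    tau = k * (theta - q)              # spring torque transmitted to the link
    ddq = (tau - damping * dq) / inertia
    dq += ddq * dt
    q += dq * dt
    return q, dq

# Step the motor to 1.0 rad and let the compliant link follow.
q, dq = 0.0, 0.0
for _ in range(20000):                 # 20 s at dt = 1 ms
    q, dq = sea_step(theta=1.0, q=q, dq=dq, k=100.0,
                     inertia=0.5, damping=2.0, dt=0.001)
print(round(q, 3))                     # the link settles near the motor position
```

The point of the model is that the link never sees the motor directly, only the spring torque, which is what bounds interaction forces in a soft actuator.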

Throwing a ball with the DLR VS-Joint

Author  Sebastian Wolf, Gerd Hirzinger

Video ID : 549

The video shows the difference between a stiff and a flexible actuator in a 1-DOF throwing demonstration. The variable stiffness actuator (VS-Joint) can store potential energy during a strike-out movement and release it by accelerating the lever and ball. Additional energy is transferred to the lever by stiffening up during the forward motion.
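The energy-storage argument can be made concrete with a simple energy balance: elastic energy 0.5·k·x² stored in the deflected joint is released into the lever inertia at the throw, on top of what the motor alone provides. The numbers below are illustrative, not DLR VS-Joint parameters:

```python
# Energy-balance sketch of why a variable stiffness joint throws faster
# than a stiff one: spring energy E = 0.5 * k * deflection**2 stored in
# the strike-out is converted to extra lever kinetic energy at release.
# All parameter values are made up for illustration.
import math

def release_speed(k, deflection, inertia, motor_speed):
    """Lever angular speed after the spring energy is released."""
    e_spring = 0.5 * k * deflection**2
    return math.sqrt(motor_speed**2 + 2.0 * e_spring / inertia)

stiff = release_speed(k=0.0, deflection=0.0, inertia=0.01, motor_speed=5.0)
soft = release_speed(k=50.0, deflection=0.3, inertia=0.01, motor_speed=5.0)
print(round(stiff, 2), round(soft, 2))   # the compliant joint is faster
```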

Hammering task with the DLR Hand Arm System

Author  Markus Grebenstein, Alin Albu-Schäffer, Thomas Bahls, Maxime Chalon, Oliver Eiberger, Werner Friedl, Robin Gruber, Sami Haddadin, Ulrich Hagn, Robert Haslinger, Hannes Höppner, Stefan Jörg, Mathias Nickl, Alexander Nothhelfer, Florian Petit, Josef Rei

Video ID : 464

The DLR Hand Arm System uses a hammer to drive a nail into a wooden board. The passive flexibility in the variable stiffness actuators (VSA) helps to keep a stable grasp during the impact and protects the hardware from damage.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or from multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
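Given two calibrated views with known poses, the triangulation step mentioned above can be sketched as linear (DLT) triangulation of a single point. The camera intrinsics, poses, and point below are illustrative:

```python
import numpy as np

# Pinhole projection: x = K [R | t] X in homogeneous coordinates. Two
# calibrated views of the same 3-D point allow triangulation by linear
# least squares (the DLT method). All values here are illustrative.

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

def projection_matrix(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]                  # perspective division by depth

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null-space vector of A
    X = Vt[-1]
    return X[:3] / X[3]

# View 1 at the origin; view 2 with a 1 m stereo baseline along x.
P1 = projection_matrix(np.eye(3), np.zeros(3))
P2 = projection_matrix(np.eye(3), np.array([-1.0, 0.0, 0.0]))

X_true = np.array([0.3, -0.2, 4.0])
X_hat = triangulate(P1, project(P1, X_true), P2, project(P2, X_true))
print(np.round(X_hat, 6))                # recovers the 3-D point
```

With known poses the reconstruction is metric; when the relative motion is itself estimated from image correspondences, the result is only defined up to the scale factor noted in the abstract.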

Finding paths through the world's photos

Author  Noah Snavely, Rahul Garg, Steven M. Seitz, Richard Szeliski

Video ID : 121

When a scene is photographed many times by different people, the viewpoints often cluster along certain paths. These paths are largely specific to the scene being photographed and follow interesting patterns and viewpoints. We seek to discover a range of such paths and turn them into controls for image-based rendering. Our approach takes as input a large set of community or personal photos, reconstructs camera viewpoints, and automatically computes orbits, panoramas, canonical views, and optimal paths between views. The scene can then be interactively browsed in 3-D using these controls or with six DOF free-viewpoint control. As the user browses the scene, nearby views are continuously selected and transformed, using control-adaptive reprojection techniques.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from abstract, high-level task specification down to fine-grained feedback at the task interface is considered. Some of the important issues include modeling the interfaces between the robot and the environment at the different time scales of motion, and incorporating sensing and feedback. Manipulation planning is introduced as an extension of the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

A square peg-in-hole demonstration using manipulation skills

Author  Unknown

Video ID : 362

This video shows a square peg-in-hole demonstration using manipulation skills, which refer to a set of motion primitives derived from the analysis of assembly tasks. Three skills are demonstrated: the move-to-touch skill, the rotate-to-level skill, and the rotate-to-insert skill, executed in sequence to insert a square peg into a hole.
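The move-to-touch skill is essentially a guarded move: advance along the approach axis until the sensed force crosses a contact threshold, then stop and report the contact pose. A minimal sketch, with a stub function standing in for a real force/torque sensor:

```python
# Sketch of a "move-to-touch" guarded move, the first of the three
# skills. The force "sensor" here is a stub simulating stiff contact
# with a surface whose height is unknown to the skill itself.

CONTACT_Z = 0.020                    # simulated surface height [m]

def read_force(z):
    """Stub F/T reading: stiff contact below the surface, zero above."""
    return max(0.0, (CONTACT_Z - z) * 5000.0)

def move_to_touch(z_start, step=0.001, threshold=2.0, max_steps=1000):
    """Descend in small steps until the contact force exceeds threshold."""
    z = z_start
    for _ in range(max_steps):
        if read_force(z) > threshold:
            return z                 # contact detected: stop here
        z -= step
    raise RuntimeError("no contact found within max_steps")

z_contact = move_to_touch(z_start=0.05)
print(round(z_contact, 4))
```

The rotate-to-level and rotate-to-insert skills follow the same pattern, terminating a rotation primitive on a force or torque condition instead of a position condition.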

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Free-floating autonomous underwater manipulation: Connector plug/unplug

Author  CIRS Universitat de Girona

Video ID : 789

A peg-in-hole demonstration performed autonomously with an underwater vehicle-manipulator system. The implementation is done through MoveIt!.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society's latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation have increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks that are necessary to guarantee a quality crop and that, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first explains the scope, in particular which aspects of robotics for A&F are dealt with in the chapter. The second discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion of specific improvements to current technology and paths to commercialization.

An automated mobile platform for orchard scanning and for soil, yield, and flower mapping

Author  James Underwood, Calvin Hung, Suchet Bargoti, Mark Calleija, Robert Fitch, Juan Nieto, Salah Sukkarieh

Video ID : 306

This video shows an end-to-end system for acquiring high-resolution information to support precision agriculture in almond orchards. The robot drives along the orchard rows autonomously, gathering LIDAR and camera data while passing the trees. Each tree is automatically identified and photographed. Image classification is performed on the photos to estimate flower and fruit densities per tree. The information can be stored in a database, compared throughout the season and from one year to the next, and mapped and displayed visually to assist growers in managing and optimizing production.

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because the unavoidable modeling errors and uncertainties may cause the contact force to rise, ultimately leading to unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve robust and versatile behavior of a robotic system in poorly structured environments, as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established with respect to suitable task frames. This formulation is the essential premise to the synthesis of hybrid force/motion control schemes.
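The idea behind indirect force control can be sketched in one degree of freedom: rendering a spring-damper around the desired position keeps the contact force bounded even when the environment's location is uncertain. Gains and environment stiffness below are illustrative:

```python
# 1-DOF sketch of indirect force control: instead of tracking x_d
# stiffly, the controller renders a spring-damper between the end
# effector and x_d, so the steady contact force is bounded by
# k_d * (x_d - x_env) regardless of how stiff the environment is.
# All gains and parameters are illustrative.

def impedance_force(x, dx, x_d, k_d=200.0, d_d=30.0):
    """Commanded actuator force for a rendered stiffness/damping."""
    return k_d * (x_d - x) - d_d * dx

def environment_force(x, x_env=0.05, k_env=1e5):
    """Stiff environment: pushes back only when penetrated."""
    return -k_env * (x - x_env) if x > x_env else 0.0

# The surface sits (unknown to the controller) before x_d = 0.1 m,
# so a pure motion controller would press into it with full stiffness.
m, dt = 1.0, 1e-4
x, dx = 0.0, 0.0
for _ in range(100000):                  # 10 s at dt = 0.1 ms
    f = impedance_force(x, dx, x_d=0.1) + environment_force(x)
    dx += (f / m) * dt                   # semi-implicit Euler step
    x += dx * dt

# At rest, the rendered spring force k_d*(x_d - x) balances the
# contact force, which stays near k_d * (x_d - x_env) = 10 N.
print(round(x, 4), round(-environment_force(x), 2))
```

The same structure underlies stiffness and impedance control of a full manipulator, with the scalar gains replaced by stiffness and damping matrices expressed in a task frame.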

COMRADE: Compliant motion research and development environment

Author  Joris De Schutter, Herman Bruyninckx, Hendrik Van Brussel et al.

Video ID : 691

The video collects work on force control developed from the 1970s through the 1990s at the Department of Mechanical Engineering of the Katholieke Universiteit Leuven, Belgium. The tasks were programmed and simulated using the task-frame-based software package COMRADE (compliant motion research and development environment). The video was recorded in the mid-1990s. The main references for the video are:
1. H. Van Brussel, J. Simons: The adaptable compliance concept and its use for automatic assembly by active force feedback accommodations, Proc. 9th Int. Symp. Indust. Robot., Washington (1979), pp. 167-181
2. J. Simons, H. Van Brussel, J. De Schutter, J. Verhaert: A self-learning automaton with variable resolution for high precision assembly by industrial robots, IEEE Trans. Autom. Control 27(5), 1109-1113 (1982)
3. J. De Schutter, H. Van Brussel: Compliant robot motion II. A control approach based on external control loops, Int. J. Robot. Res. 7(4), 18-33 (1988)
4. J. De Schutter, H. Van Brussel: Compliant robot motion I. A formalism for specifying compliant motion tasks, Int. J. Robot. Res. 7(4), 3-17 (1988)
5. W. Witvrouw, P. Van de Poel, H. Bruyninckx, J. De Schutter: ROSI: A task specification and simulation tool for force-sensor-based robot control, Proc. 24th Int. Symp. Indust. Robot., Tokyo (1993), pp. 385-392
6. W. Witvrouw, P. Van de Poel, J. De Schutter: COMRADE: Compliant motion research and development environment, Proc. 3rd IFAC/IFIP Workshop on Algorithms and Architectures for Real-Time Control, Ostend (1995), pp. 81-87
7. H. Bruyninckx, S. Dutre, J. De Schutter: Peg-on-hole, a model-based solution to peg and hole alignment, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Nagoya (1995), pp. 1919-1924
8. M. Nuttin, H. Van Brussel: Learning the peg-into-hole assembly operation with a connectionist reinforcement technique, Comput. Ind. 33(1), 101-109 (1997)

Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics are therefore roughly divided into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheel/track and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain; the basic approach in this section is therefore to compensate for the slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a significant advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, a stability analysis of such vehicles is introduced in Sect. 50.5. Based on the kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Section 50.7 summarizes the chapter.
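The slip compensation idea can be sketched with the standard longitudinal slip ratio s = (rω − v)/(rω) for a driven wheel: once the slip is estimated, the commanded wheel speed is scaled so the achieved ground speed matches the desired one. The numbers below are illustrative:

```python
# Longitudinal slip ratio used in terramechanics-based wheel models,
# and a simple compensation of the wheel speed command. Values are
# illustrative, not from the chapter.

def slip_ratio(r, omega, v):
    """Driving slip in [0, 1]: 0 = pure rolling, 1 = spinning in place."""
    return (r * omega - v) / (r * omega)

def compensated_omega(r, v_desired, s_estimated):
    """Wheel speed command that yields v_desired under the estimated slip."""
    return v_desired / (r * (1.0 - s_estimated))

r = 0.15                                   # wheel radius [m]
s = slip_ratio(r, omega=10.0, v=1.2)       # wheel at 10 rad/s, body at 1.2 m/s
omega_cmd = compensated_omega(r, v_desired=1.2, s_estimated=s)
print(round(s, 2), round(omega_cmd, 2))
```

In practice the slip estimate comes from comparing wheel odometry against an independent velocity measurement, and the compensation is folded into the steering and driving maneuvers described in Sect. 50.3.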

Interactive, human-robot supervision test with the long-range science rover for Mars exploration

Author  Samad Hayati, Richard Volpe, Paul Backes, J. (Bob) Balaram, Richard Welch, Robert Ivlev, Gregory Tharp, Steve Peters, Tim Ohm, Richard Petras

Video ID : 187

This video records a demonstration of the long-range rover mission on the surface of Mars. The Mars rover, the test bed Rocky 7, performs several demonstrations including 3-D terrain mapping using the panoramic camera, telescience over the internet, an autonomous mobility test, and soil sampling. This demonstration was among the preliminary tests for the Mars Pathfinder mission executed in 1997.