
Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Gaze and gesture cues for robots

Author  Bilge Mutlu

Video ID : 128

In human-robot communication, nonverbal cues like gaze and gesture can be a source of important information for starting and maintaining interaction. Gaze, for example, can tell a person what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner, often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite challenging. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Human-robot teaming in a search-and-retrieve task

Author  Cynthia Breazeal

Video ID : 555

This video shows an example from a human participant study examining the role of nonverbal social signals in human-robot teamwork for a complex search-and-retrieve task. In a controlled experiment, we examined the role of backchanneling and task complexity on team functioning and perceptions of the robots' engagement and competence. Seventy-three participants interacted with autonomous humanoid robots as part of a human-robot team consisting of one participant, one confederate (a remote operator controlling an aerial robot), and three robots (two mobile humanoids and an aerial robot). We found that, when robots used backchanneling, team functioning improved and the robots were seen as more engaged.

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.

R4 robot

Author  Sébastien Krut

Video ID : 53

This video demonstrates the R4 robot, a parallel robot capable of accelerations of up to 100 g.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment: while navigating, the robot seeks to acquire a map of the environment and, at the same time, to localize itself using that map. The study of SLAM can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot's location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
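To make the filtering paradigm concrete, the sketch below runs a minimal one-dimensional SLAM filter: the state holds the robot position and a single static landmark, and the robot alternates motion prediction with range updates. All numbers (noise levels, the single-landmark setup) are illustrative assumptions, not taken from the chapter; because the models here are linear, the EKF reduces to a plain Kalman filter.

```python
import numpy as np

# Minimal 1-D SLAM sketch: state = [robot position, landmark position].
# The robot moves right in unit steps and measures the signed distance
# to one landmark. All parameters are illustrative assumptions.

def predict(mu, P, u, motion_noise=0.01):
    mu = mu + np.array([u, 0.0])        # only the robot moves
    Q = np.diag([motion_noise, 0.0])    # motion noise on the robot only
    P = P + Q                           # F is identity for this model
    return mu, P

def update(mu, P, z, meas_noise=0.04):
    H = np.array([[-1.0, 1.0]])         # z = landmark - robot
    y = z - (mu[1] - mu[0])             # innovation
    S = H @ P @ H.T + meas_noise        # innovation covariance (1x1)
    K = P @ H.T / S                     # Kalman gain
    mu = mu + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return mu, P

mu = np.array([0.0, 5.0])               # guess: robot at 0, landmark at 5
P = np.diag([0.0, 100.0])               # landmark initially very uncertain
true_robot, true_landmark = 0.0, 6.0
rng = np.random.default_rng(0)
for _ in range(20):
    true_robot += 1.0
    mu, P = predict(mu, P, 1.0)
    z = true_landmark - true_robot + rng.normal(0.0, 0.2)
    mu, P = update(mu, P, z)
```

After twenty steps the landmark estimate `mu[1]` converges near its true position and its variance `P[1, 1]` has shrunk by orders of magnitude, which is exactly the map-building-while-localizing behavior the chapter describes.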

DTAM: Dense tracking and mapping in real-time

Author  Richard Newcombe

Video ID : 452

This video shows DTAM: Dense tracking and mapping in real-time, a system for real-time, fully dense visual tracking and reconstruction, described in Chap. 46.4, Springer Handbook of Robotics, 2nd edn (2016). Reference: R.A. Newcombe, S.J. Lovegrove, A.J. Davison: DTAM: Dense tracking and mapping in real-time, Int. Conf. Computer Vision (ICCV), Barcelona (2011), pp. 2320–2327

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society's latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop, which, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized into four main sections. The first explains the scope, in particular, which aspects of robotics for A&F are dealt with in the chapter. The second discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion of specific improvements to current technology and paths to commercialization.

An autonomous robot for de-leafing cucumber plants

Author  Eldert J. van Henten, Bart A.J. van Tuijl, G. J. Hoogakker, M.J. van der Weerd, Jochen Hemming, J.G. Kornet, Jan Bontsema

Video ID : 309

In cucumber production, as in other crops, removal of old, non-productive leaves in the lower regions of the plant is a time-consuming task. Based on the platform of the autonomous cucumber harvester at Wageningen University and Research Centre, Wageningen, The Netherlands, a robot for de-leafing cucumber plants was developed. The platform's camera system identifies and locates the main stems of the plants. The gripper is sent to the plant and moved upwards. Leaves encountered during this upward motion are separated from the plant using a thermal cutting device, which prevents transmission of viruses from plant to plant. An interesting feature of this machine is that, with slight modifications of software and hardware, two greenhouse operations can be performed.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
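The projection-and-triangulation pipeline described above can be illustrated with a small sketch: a pinhole camera projects a 3-D point into two views, and a standard linear (DLT) triangulation recovers the point. The intrinsic matrix, baseline, and test point below are made-up illustrative values, not taken from the chapter.

```python
import numpy as np

# Pinhole projection and linear two-view triangulation (illustrative
# sketch; all camera parameters below are assumed for the example).

K = np.array([[500.0,   0.0, 320.0],     # intrinsics: 500 px focal
              [  0.0, 500.0, 240.0],     # length, principal point
              [  0.0,   0.0,   1.0]])    # at (320, 240)

def project(P, X):
    """Project a 3-D point X with the 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation from two image correspondences."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# Two cameras: identity pose, and a 0.5 m stereo baseline along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])      # a point 4 m in front
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the DLT recovers the point exactly; the horizontal disparity between the two views equals the familiar stereo relation f·b/Z (here 500 · 0.5 / 4 = 62.5 px).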

LIBVISO: Visual odometry for intelligent vehicles

Author  Andreas Geiger

Video ID : 122

This video demonstrates the performance of a visual-odometry algorithm on the vehicle Annieway (VW Passat). Visual odometry is the estimation of a video camera's 3-D motion and orientation, based purely on stereo vision in this case. The blue trajectory is the motion estimated by visual odometry, and the red trajectory is the ground truth provided by a high-precision OXTS RT3000 GPS+IMU system. The software is available from http://www.cvlibs.net/

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
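A standard first-order scheme of the optimization-based kind is the pseudoinverse solution with a null-space term, q̇ = J⁺ẋ + (I − J⁺J)q̇₀, which realizes the task velocity while projecting a secondary joint velocity into the Jacobian null space. The sketch below applies it to a planar 3R arm; the link lengths and the secondary velocity are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Pseudoinverse redundancy resolution for a planar 3R arm (2-D task,
# 3 joints, so 1 redundant DOF). Link lengths are assumed for the example.
L = np.array([1.0, 1.0, 1.0])

def jacobian(q):
    """Task Jacobian of the end-effector position of a planar 3R arm."""
    c = np.cumsum(q)                    # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def resolve(q, xdot, q0dot):
    """q̇ = J⁺ẋ + (I - J⁺J)q̇₀: task motion plus null-space motion."""
    J = jacobian(q)
    Jp = np.linalg.pinv(J)
    N = np.eye(3) - Jp @ J              # null-space projector
    return Jp @ xdot + N @ q0dot

q = np.array([0.3, 0.4, 0.5])           # current joint configuration
xdot = np.array([0.1, 0.0])             # desired end-effector velocity
q0dot = np.array([0.0, 0.0, 1.0])       # secondary (self-motion) velocity
qdot = resolve(q, xdot, q0dot)
```

The key property is that the projected secondary motion is internal: J(I − J⁺J)q̇₀ = 0, so the end-effector still tracks ẋ exactly while the extra degree of freedom reconfigures the arm.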

Human robot arm with redundancy resolution

Author  PRISMA Lab

Video ID : 816

In this video, the mapping of human-arm motion to an anthropomorphic robot arm (7-DOF KUKA LWR) using Xsens MVN is demonstrated. The desired end-effector trajectories of the robot are reconstructed from the human hand, forearm, and upper-arm trajectories in Cartesian space, obtained from the motion tracking system by means of human-arm biomechanical models and sensor-fusion algorithms embedded in the Xsens technology. The desired pose of the robot is reconstructed by taking into account the differences between the robot and human-arm kinematics and by suitably scaling the human-arm link dimensions.

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains, provide descriptions of the major achievements of the field over its short history, and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

Gait Trainer GT 1

Author  Reha Stim

Video ID : 504

The Gait Trainer GT1 was one of the first robotic gait trainers and is now widely used in clinics.

BONES and SUE exoskeletons for robotic therapy

Author  Julius Klein, Steve Spencer, James Allington, Marie-Helene Milot, Jim Bobrow, David Reinkensmeyer

Video ID : 498

BONES is a 5-DOF pneumatic robot developed at the University of California, Irvine, for naturalistic arm training after stroke. It incorporates an assistance-as-needed algorithm that adapts in real time to patient errors during game play by developing a computer model of the patient's weakness as a function of workspace location. The controller incorporates an anti-slacking term. SUE is a 2-DOF pneumatic robot for providing wrist assistance. The video shows a person with a stroke using the device to drive a simulated motorcycle through a simulated Death Valley.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Jumping-and-landing robot MOWGLI

Author  Ryuma Niiyama, Akihiko Nagakubo, Yasuo Kuniyoshi

Video ID : 285

In this research, we developed a bipedal robot with an artificial musculoskeletal system. Here, we present an approach to realizing motor control of jumping and landing that exploits the synergy between control and mechanical structure. Our experimental system is a bipedal robot called MOWGLI. This video shows a jumping-onto-a-chair experiment to a height of 0.4 m. MOWGLI can reach heights of more than 50% of its body height and can land softly; this performance is extremely high for a multiple-DOF legged robot. Our results show a proximo-distal sequence of joint extensions during jumping despite simultaneous motor activity. In addition to the experiments with the real robot, the simulation results demonstrate the contribution of the artificial musculoskeletal system as a physical feedback loop in explosive movements.