
Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with the help of models based on knowledge of anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signals into discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion data and models, ranging from the control of simulated humans to humanoid robot motion synthesis.
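As a rough illustration of this signal-to-symbol idea (a minimal sketch, not the specific statistical models used in the chapter), the following Python fragment clusters fixed-length windows of joint-angle data into discrete motion symbols; the window length, number of symbols, and synthetic data are all hypothetical.

```python
# Minimal sketch: convert continuous, noisy joint-angle trajectories into a
# sequence of discrete motion symbols by clustering fixed-length windows.
# This illustrates the general idea, not the chapter's method.
import numpy as np
from sklearn.cluster import KMeans

def motion_to_symbols(joint_angles, window=30, n_symbols=8):
    """joint_angles: (T, D) array of joint angles sampled over time."""
    T, D = joint_angles.shape
    n_windows = T // window
    # Slice the trajectory into non-overlapping windows and flatten each one.
    segments = joint_angles[:n_windows * window].reshape(n_windows, window * D)
    # Each cluster index serves as a discrete "symbol" labeling that window.
    return KMeans(n_clusters=n_symbols, n_init=10).fit_predict(segments)

# Synthetic data standing in for motion-capture input (900 frames, 20 joints).
rng = np.random.default_rng(0)
print(motion_to_symbols(rng.standard_normal((900, 20))))
```

A real system would typically replace the clustering step with temporal models such as hidden Markov models, which handle variable-length segments and noise more gracefully.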

The Crystal Ball: Predicting future motions

Author  Katsu Yamane

Video ID : 764

This video shows a demonstration of The Crystal Ball, a system that predicts future motions based on a graphical motion model. The rightmost figure represents the current motion, while the other figures represent the predicted motions.

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines moving dangerously around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in the public's expectations, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, compliant elements. Distributed-parameter, snake-like, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that are often behind soft robotics.

Active damping control on the DLR Hand Arm System

Author  Florian Petit, Alin Albu-Schäffer

Video ID : 548

The effectiveness of active damping control is demonstrated in a writing task performed by the DLR Hand Arm System.

Chapter 1 — Robotics and the Handbook

Bruno Siciliano and Oussama Khatib

Robots! Robots on Mars and in oceans, in hospitals and homes, in factories and schools; robots fighting fires, making goods and products, saving time and lives. Robots today are making a considerable impact on many aspects of modern life, from industrial manufacturing to healthcare, transportation, and exploration of the deep space and sea. Tomorrow, robots will be as pervasive and personal as today’s personal computers. This chapter retraces the evolution of this fascinating field from the ancient to the modern times through a number of milestones: from the first automated mechanical artifact (1400 BC) through the establishment of the robot concept in the 1920s, the realization of the first industrial robots in the 1960s, the definition of robotics science and the birth of an active research community in the 1980s, and the expansion towards the challenges of the human world of the twenty-first century. Robotics in its long journey has inspired this handbook which is organized in three layers: the foundations of robotics science; the consolidated methodologies and technologies of robot design, sensing and perception, manipulation and interfaces, mobile and distributed robotics; the advanced applications of field and service robotics, as well as of human-centered and life-like robotics.

Robots — A 50 year journey

Author  Oussama Khatib

Video ID : 805

Through a collection of short segments, this video retraces the history of the most influential modern robots developed in the 20th century (1950-2000). The 50-year journey was first presented at the 2000 IEEE International Conference on Robotics and Automation (ICRA) in San Francisco.

Chapter 75 — Biologically Inspired Robotics

Fumiya Iida and Auke Jan Ijspeert

Throughout the history of robotics research, nature has been providing numerous ideas and inspirations to robotics engineers. Small insect-like robots, for example, usually make use of reflexive behaviors to avoid obstacles during locomotion, whereas large bipedal robots are designed to control complex human-like legs for climbing up and down stairs. While providing an overview of bio-inspired robotics, this chapter focuses particularly on research that aims to employ robotic systems and technologies for a deeper understanding of biological systems. Unlike most other robotics research, in which researchers attempt to develop robotic applications, these types of bio-inspired robots are generally developed to test unsolved hypotheses in the biological sciences. Through close collaborations between biologists and roboticists, bio-inspired robotics research contributes not only to elucidating challenging questions in nature but also to developing novel technologies for robotics applications. In this chapter, we first provide a brief historical background of this research area and then an overview of ongoing research methodologies. A few representative case studies detail successful instances in which robotics technologies help identify biological hypotheses. Finally, we discuss challenges and perspectives in the field.

Biologically inspired robotics (or bio-inspired robotics for short) is a very broad research area because almost all robotic systems are, in one way or another, inspired by biological systems. Therefore, there is no clear distinction between bio-inspired robots and others, and there is no commonly agreed definition [75.1]. For example, legged robots that walk, hop, and run are usually regarded as bio-inspired robots because many biological systems rely on legged locomotion for their survival. On the other hand, many robotics researchers implement biological models of motion control and navigation onto wheeled platforms, which could also be regarded as bio-inspired robots [75.2].

Analog Robot

Author  Fumiya Iida, Auke Ijspeert

Video ID : 242

This video presents the Analog Robot, which uses a biologically inspired visual-homing method for navigation. The robot is equipped with analog circuitry for vision-based landmark navigation based on mechanisms identified in biological systems, the so-called "snapshot model". The image registered at the start of the experiment is used as a reference, and the analog circuitry finds a direction of travel by comparing it with the current frame.
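The following Python fragment is a minimal software analogue of this comparison step (the robot in the video uses dedicated analog circuitry, so this is only an illustration under assumed data): it finds the circular shift that best aligns the current one-dimensional panoramic intensity profile with the stored snapshot and converts that shift into a heading correction. A full snapshot model would additionally recover a translation direction from landmark bearings.

```python
# Minimal sketch of the "snapshot" comparison: find the circular shift that
# minimizes the difference between the stored goal panorama and the current
# panorama, and read it as a heading correction. Data here are synthetic.
import numpy as np

def heading_correction(snapshot, current):
    """snapshot, current: 1-D panoramic intensity profiles of equal length."""
    n = len(snapshot)
    errors = [np.sum((np.roll(current, -s) - snapshot) ** 2) for s in range(n)]
    shift = int(np.argmin(errors))      # best-aligning shift in pixels
    return 360.0 * shift / n            # heading correction in degrees

# Hypothetical 72-pixel panoramas (5 degrees per pixel), rotated by 30 degrees.
goal = np.sin(np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False))
now = np.roll(goal, 6) + 0.05 * np.random.default_rng(1).standard_normal(72)
print(heading_correction(goal, now))    # approximately 30 degrees
```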

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains and provide descriptions of the major achievements of the field over its short history and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

The Arm Guide

Author  Lennie Kahn

Video ID : 494

The Arm Guide, developed at the Rehabilitation Institute of Chicago and the University of California at Irvine, was an early rehabilitation therapy robot used to study the role of active assistance in robotic therapy after stroke. It was a singly-actuated, trombone-like device that could be oriented in different directions. It sensed the patient's arm movement along a linear bearing and assisted in completing movements with a motor attached to a timing belt along the bearing. It also measured off-axis forces generated against the linear bearing, using a 6-axis force-torque cell, in order to quantify abnormal synergies.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Catching objects in flight

Author  Seungsu Kim, Ashwini Shukla, Aude Billard

Video ID : 653

We target the difficult problem of catching in-flight objects with uneven shapes. This requires the solution of three complex problems: accurately predicting the trajectory of fast-moving objects, predicting a feasible catching configuration, and planning the arm motion, all within milliseconds. We follow a programming-by-demonstration approach in order to learn models of the object and arm dynamics from throwing examples. We propose a new methodology for finding a feasible catching configuration in a probabilistic manner. We leverage the strength of dynamical systems for encoding motion from several demonstrations. This enables fast and online adaptation of the arm motion in the presence of sensor uncertainty. We validate the approach in simulation with the iCub humanoid robot and in real-world experiments with the KUKA LWR 4+, a 7-DOF robot arm, catching a hammer, a tennis racket, an empty bottle, a partially filled bottle, and a cardboard box.
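As a loose illustration of the dynamical-systems idea (a hand-written sketch, not the models learned from demonstrations in the video), the fragment below integrates a simple linear attractor that pulls the end-effector toward a catching pose and can be re-targeted at every control step as the predicted interception point changes; the gain, time step, and goal are hypothetical.

```python
# Minimal sketch: a stable linear attractor x_dot = A (x - x_goal) driving the
# end-effector toward a catching configuration. Because the goal enters the
# dynamics at every step, it can be updated online within milliseconds.
import numpy as np

A = -4.0 * np.eye(3)        # attractor dynamics (all eigenvalues negative)
dt = 0.001                  # 1 ms control step

def step(x, x_goal):
    """One Euler integration step of the attractor dynamics."""
    return x + dt * (A @ (x - x_goal))

x = np.zeros(3)                             # current end-effector position
for _ in range(1000):
    # In the real system the goal would come from the probabilistic prediction
    # of the interception point; here it is simply held fixed.
    x_goal = np.array([0.5, 0.2, 0.8])
    x = step(x, x_goal)
print(x)                                    # close to x_goal after 1 s
```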

HERMES, a humanoid experimental robot for mobile manipulation and exploration services

Author  Rainer Bischoff

Video ID : 783

The mobile robot HERMES grasps and releases a glass with tactile sensing based on joint-angle encoder values and motor currents. The robot can fill a glass with water from a bottle using vision. It can communicate in natural spoken language, and it can come to you, take your cup, and carry it to the kitchen by planning a path and avoiding obstacles.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, the robot path can be computed, and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end-effector or the 3-D coordinates of the graspable points on the object.
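As an illustration of the two-view case mentioned above, the sketch below applies standard linear (DLT) triangulation to a single point correspondence; the camera matrices and the 3-D point are made up for the example, and this is a generic textbook construction rather than a particular algorithm from the chapter.

```python
# Minimal sketch of two-view triangulation: given two 3x4 projection matrices
# and a corresponding image point in each view, recover the 3-D point with the
# standard linear (DLT) method.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2-D image points (pixels)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Hypothetical cameras: identical intrinsics, 0.2 m baseline along x.
K = np.diag([500.0, 500.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, -0.05, 2.0, 1.0])    # ground-truth homogeneous point
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))           # approximately [0.1, -0.05, 2.0]
```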

Finding paths through the world's photos

Author  Noah Snavely, Rahul Garg, Steven M. Seitz, Richard Szeliski

Video ID : 121

When a scene is photographed many times by different people, the viewpoints often cluster along certain paths. These paths are largely specific to the scene being photographed and follow interesting patterns and viewpoints. We seek to discover a range of such paths and turn them into controls for image-based rendering. Our approach takes as input a large set of community or personal photos, reconstructs camera viewpoints, and automatically computes orbits, panoramas, canonical views, and optimal paths between views. The scene can then be interactively browsed in 3-D using these controls or with six DOF free-viewpoint control. As the user browses the scene, nearby views are continuously selected and transformed, using control-adaptive reprojection techniques.

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement and classification in robotics applications. The source of sonar artifacts is explained and how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.

Monash DSP sonar tracking a moving plane

Author  Lindsay Kleeman

Video ID : 313

A four-transducer system is controlled with a DSP microcontroller which processes echoes to determine the normal incidence and range to a plane reflector. The transducer scans to locate the plane and then tracks the normal-incidence section of the plane as it moves in real time.
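The arithmetic behind such a measurement can be sketched as follows; this is generic pulse-echo geometry under assumed numbers, not the DSP code running in the video. Range follows from the round-trip time of flight, and the bearing of a distant plane can be approximated from the difference of the ranges seen by two receivers separated by a known baseline.

```python
# Minimal sketch of pulse-echo range and bearing estimation. All echo times
# and the receiver baseline are hypothetical.
import math

SPEED_OF_SOUND = 343.0   # m/s at about 20 C; real systems compensate for temperature

def echo_range(time_of_flight):
    """Round-trip time (s) -> one-way range (m)."""
    return SPEED_OF_SOUND * time_of_flight / 2.0

def bearing(range_left, range_right, baseline):
    """Far-field bearing (rad) of a reflector from the two receiver ranges."""
    return math.asin((range_right - range_left) / baseline)

# Echoes from a plane roughly 1 m away, receivers 0.1 m apart.
r_left = echo_range(5.80e-3)
r_right = echo_range(5.83e-3)
print(r_left, r_right, math.degrees(bearing(r_left, r_right, 0.1)))
```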

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies are informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Visual communicative nonverbal behaviors of the Sunflower robot

Author  Kerstin Dautenhahn

Video ID : 219

The video illustrates the experiments described in Koay et al. (2013). The Sunflower robot, developed by Kheng Lee Koay at the University of Hertfordshire, is a non-humanoid robot that uses communicative signals inspired by dog-human interaction. The biological behaviors were abstracted and translated to the specific robot embodiment. The results show that the robot is able to communicate its intention to a person and to encourage the participant to attend to events and locations in a home environment. The work has been part of the European project LIREC (http://lirec.eu/project).