Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
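The first-order methods summarized above can be illustrated with a minimal sketch: a damped-least-squares pseudoinverse inverts the task velocity (and stays robust near kinematic singularities), while a null-space projector injects a secondary joint velocity that optimizes a performance criterion without disturbing the task. The function name and the damping value below are illustrative, not from the chapter.

```python
import numpy as np

def redundancy_resolution_step(J, x_dot, q_dot_0, damping=0.01):
    """One velocity-level inverse-kinematics step for a redundant arm.

    J        : m x n task Jacobian (n > m for a redundant arm)
    x_dot    : desired task-space velocity, shape (m,)
    q_dot_0  : secondary joint velocity, e.g. the gradient of a
               performance criterion to be optimized in the null space
    damping  : damped-least-squares factor for robustness near singularities
    """
    m, n = J.shape
    # Damped pseudoinverse: J^T (J J^T + lambda^2 I)^-1
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))
    # Null-space projector: joint motions that leave the task unaffected
    N = np.eye(n) - J_pinv @ J
    return J_pinv @ x_dot + N @ q_dot_0
```

With negligible damping, the returned joint velocity reproduces the task velocity exactly, and the null-space term changes the joint motion without changing the task motion.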

Visual servoing control of Baxter robot arms with obstacle avoidance using kinematic redundancy

Author  Chenguang Yang

Video ID : 819

Visual servoing control with an obstacle avoidance strategy using kinematic redundancy has been developed and tested on a Baxter robot. A Point Grey Bumblebee2 stereo camera is used to obtain the 3-D point cloud of a target object. Object-tracking task allocation between the two arms has been developed by identifying the workspaces of the dual arms and tracing the object location in a convex hull of the workspace. By employing a simulated artificial robot as a parallel system, together with a task-switching weight factor, the robot is able to return smoothly to its natural pose in the absence of an obstacle. Two sets of experiments were carried out to demonstrate the effectiveness of the developed servoing control method.
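The task-switching weight factor mentioned above can be sketched as a smooth blend between the tracking/avoidance joint velocity and a natural-pose restoration velocity. The function name, distance thresholds, and linear blending law are illustrative assumptions, not details taken from the video.

```python
import numpy as np

def blended_joint_velocity(qd_track, qd_restore, obstacle_dist,
                           d_safe=0.3, d_influence=0.6):
    """Blend tracking/avoidance motion with natural-pose restoration.

    The weight w goes to 1 when an obstacle is close (the tracking/
    avoidance task dominates) and to 0 when it is far (restoration to
    the natural pose dominates). All thresholds are illustrative.
    """
    w = np.clip((d_influence - obstacle_dist) / (d_influence - d_safe),
                0.0, 1.0)
    return w * qd_track + (1.0 - w) * qd_restore
```

The linear ramp between `d_safe` and `d_influence` avoids a hard switch, so the commanded joint velocity stays continuous as the obstacle enters or leaves the influence region.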

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, who, and when to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstrations and reproduction of moving a chessman

Author  Sylvain Calinon, Florent Guenter, Aude Billard

Video ID : 97

A robot learns how to make a chess move from multiple demonstrations and to reproduce the skill in a new situation (different position of the chessman) by finding a controller which satisfies both the task constraints (what-to-imitate) and constraints relative to its body limitations (how-to-imitate). Reference: S. Calinon, F. Guenter, A. Billard: On learning, representing and generalizing a task in a humanoid robot, IEEE Trans. Syst. Man Cybernet. B 37(2), 286-298 (2007); URL: http://lasa.epfl.ch/videos/control.php.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Yale Aerial Manipulator - Dollar Grasp Lab

Author  Paul E. I. Pounds, Daniel R. Bersak, Aaron M. Dollar

Video ID : 656

Aaron Dollar's Aerial Manipulator integrates a gripper that is able to directly grasp and transport objects.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview on the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architecture for pHRI.

Dancing with Juliet

Author  Oussama Khatib, Kyong-Sok Chang, Oliver Brock, Kazuhito Yokoi, Arancha Casal, Robert Holmberg

Video ID : 820

This video presents experiments in human-robot interaction using the Stanford Mobile Manipulator platforms. Each platform consists of a Puma 560 manipulator mounted on a holonomic mobile base. The experiments shown in this video are the results of the implementation of various methodologies developed for establishing the basic autonomous capabilities needed for robot operations in human environments. The integration of mobility and manipulation is based on a task-oriented control strategy which provides the user with two basic control primitives: end-effector task control and platform self-posture control.

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in expectations from the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly and efficiently - in other terms, robots will be soft.

This chapter discusses the design, modeling and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, and compliant elements. Distributed-parameter, snake-like, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that are often behind soft robotics.

PETMAN tests Camo

Author  Boston Dynamics

Video ID : 457

The PETMAN robot was developed by Boston Dynamics with funding from the DoD CBD program. It is used to test the performance of protective clothing designed for hazardous environments. The video shows initial testing in a chemical protection suit and gas mask. PETMAN has sensors embedded in its skin that detect any chemicals leaking through the suit. The skin also maintains a microclimate inside the clothing by sweating and regulating temperature. Partners in developing PETMAN were MRIGlobal, Measurement Technology Northwest, Smith Carter, SRD, CUH2A, and HHI.

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.

Using ROS4iOS

Author  François Michaud

Video ID : 419

Demonstration of the integration, using HBBA (hybrid behavior-based architecture), of navigation, remote localization, speaker identification, speech recognition and teleoperation. The scenario employs ROS4iOS to provide remote perceptual capabilities for visual location, speech and speaker recognition. Reference: F. Ferland, R. Chauvin, D. Létourneau, F. Michaud: Hello robot, can you come here? Using ROS4iOS to provide remote perceptual capabilities for visual location, speech and speaker recognition, Proc. Int. ACM/IEEE Conf. Human-Robot Interaction (2014), p. 101

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains and provide descriptions of the major achievements of the field over its short history and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

ARMin plus HandSOME robotic therapy system

Author  Peter Lum

Video ID : 497

The ARMin exoskeleton is combined with the HandSOME orthosis to enable practice of pick and place tasks with real objects. The ARMin is controlled by a joint-based guidance algorithm which enforces normal coordination between shoulder and elbow joints.

Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with help from models based on knowledge in anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signal to discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion and models ranging from simulated human control to humanoid robot motion synthesis.

Example of optical motion-capture data converted to joint-angle data

Author  Katsu Yamane

Video ID : 762

This video shows an example of optical motion-capture data converted to the joint-angle data of a robot model.
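The marker-to-joint-angle conversion shown in the video can be illustrated for a planar two-link chain, where joint angles follow from the relative orientation of consecutive marker-defined segments. This is a deliberately simplified sketch; real mocap pipelines fit a full skeletal model to many markers, typically by least squares.

```python
import numpy as np

def planar_joint_angles(shoulder, elbow, wrist):
    """Joint angles of a planar two-link chain from three marker positions.

    Illustrative only: the first angle is the upper-arm orientation with
    respect to the x-axis, the second is the elbow flexion relative to
    the upper arm.
    """
    upper = np.asarray(elbow, dtype=float) - np.asarray(shoulder, dtype=float)
    fore = np.asarray(wrist, dtype=float) - np.asarray(elbow, dtype=float)
    q1 = np.arctan2(upper[1], upper[0])      # shoulder angle
    q2 = np.arctan2(fore[1], fore[0]) - q1   # elbow angle relative to upper arm
    return q1, q2
```

For example, markers at (0, 0), (1, 0), and (1, 1) give a shoulder angle of 0 and an elbow angle of 90 degrees.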

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
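The range-from-beat-frequency relation underlying CTFM systems can be written down directly: a linear frequency sweep of rate S reaches the receiver delayed by the round-trip time 2R/c, so demodulation against the transmitted sweep yields a beat frequency f_b = S · 2R/c. A minimal sketch of inverting that relation (the parameter values in the example are illustrative):

```python
def ctfm_range(f_beat_hz, bandwidth_hz, sweep_period_s, c=343.0):
    """Range from the CTFM beat frequency.

    A linear sweep of rate S = bandwidth / sweep_period produces, for a
    target at range R, an echo delayed by t = 2R/c and hence a beat
    frequency f_beat = S * t after demodulation.  Inverting gives
    R = c * f_beat / (2 * S).  c defaults to the speed of sound in air.
    """
    sweep_rate = bandwidth_hz / sweep_period_s
    return c * f_beat_hz / (2.0 * sweep_rate)
```

For instance, with a 50 kHz sweep over 100 ms (S = 500 kHz/s), a target at 1 m in air produces a beat frequency of about 2.9 kHz.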

Antwerp biomimetic sonar tracking of a complex object

Author  Herbert Peremans

Video ID : 311

The Antwerp biomimetic bat-head sonar system consists of a single emitter and two receivers. The receivers are constructed by inserting a small omnidirectional microphone in the ear canal of a plastic replica of the outer ear of the bat Phyllostomus discolor. Using head-related transfer function (HRTF) cues, the system is able to localize multiple reflectors in three dimensions based on a single emission. This video demonstrates that the reflector does not need to be a sphere for this spectrum-based localization algorithm to work. Despite the filtering of the echo signal by the reflector, the 3-D localization results show no apparent confusion.

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid useless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections dealing with the description of the sensor and the actuating systems are then given. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for the design issues.

Underwater vehicle Nereus

Author  Woods Hole Oceanographic Institution

Video ID : 88

Nereus is the first vehicle to enable routine scientific investigation of the world's deepest ocean depths. Recently, Nereus successfully reached the deepest part of the world's ocean - the Challenger Deep in the Mariana Trench in the western Pacific Ocean.