
Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in expectations from the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, compliant elements. Distributed-parameter soft robotics, i.e., snake-like and continuum robots, is presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that often lie behind soft robotics.
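
The lumped-parameter compliant elements discussed here can be illustrated with the simplest compliant actuator model, a series elastic actuator, in which the motor drives the link through a spring; in a variable-stiffness actuator such as the AMASC shown below, the spring stiffness itself becomes a second, mechanically adjustable input. The following Python sketch is purely illustrative; all constants and names are assumptions, not a model taken from the chapter.

```python
# Minimal lumped-parameter series-elastic actuator (SEA) sketch: a motor at
# position theta drives the link at position q through a torsional spring of
# stiffness K_SPRING. All values are illustrative assumptions.
K_SPRING = 100.0   # spring stiffness [Nm/rad]; adjustable in a VSA such as AMASC
B_MOTOR = 0.5      # motor-side viscous damping [Nms/rad]
B_LINK = 1.0       # link-side viscous damping [Nms/rad]
J_MOTOR = 0.01     # motor inertia [kg m^2]
J_LINK = 0.1       # link inertia [kg m^2]

def sea_accelerations(theta, theta_dot, q, q_dot, tau_motor, tau_ext):
    """Coupled motor/link accelerations of an ideal SEA."""
    tau_spring = K_SPRING * (theta - q)            # elastic torque acting on the link
    theta_ddot = (tau_motor - tau_spring - B_MOTOR * theta_dot) / J_MOTOR
    q_ddot = (tau_spring + tau_ext - B_LINK * q_dot) / J_LINK
    return theta_ddot, q_ddot

# Semi-implicit Euler simulation: a motor-side PD controller holds the motor
# near theta = 0 while a constant external torque deflects the link.
dt, theta, theta_dot, q, q_dot = 1e-3, 0.0, 0.0, 0.0, 0.0
for _ in range(2000):
    tau_motor = -50.0 * theta - 1.0 * theta_dot    # assumed motor position controller
    theta_ddot, q_ddot = sea_accelerations(theta, theta_dot, q, q_dot, tau_motor, 1.0)
    theta_dot += theta_ddot * dt
    q_dot += q_ddot * dt
    theta += theta_dot * dt
    q += q_dot * dt
print(f"spring deflection after 2 s: {theta - q:.4f} rad (about -tau_ext / K_SPRING)")
```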

AMASC - changing stiffness

Author  Jonathan Hurst et al.

Video ID : 468

AMASC variable stiffness actuator: changing stiffness phase.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite challenging. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well, in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach in which the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Home-assistance companion robot in the Robot House

Author  Kerstin Dautenhahn

Video ID : 218

The video results from research carried out as part of the three-year European project Accompany (http://accompanyproject.eu/). It shows the year-one scenario as it was implemented in the University of Hertfordshire Robot House. Later scenarios were subsequently used for cumulative evaluation studies with elderly users and their caregivers in three European countries.

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have the potential to achieve dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses the design, actuation, sensing, and control of multifingered robot hands. From the design viewpoint, actuator implementation is strongly constrained by the limited space available at each joint. After a brief overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches to actuation, together with their advantages and disadvantages, are presented in Sect. 19.2. The key classification is (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, the actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.

DLR hand

Author  DLR - Robotics and Mechatronics Center

Video ID : 768

A DLR hand

Chapter 8 — Motion Control

Wan Kyun Chung, Li-Chen Fu and Torsten Kröger

This chapter will focus on the motion control of robotic rigid manipulators. In other words, this chapter does not treat the motion control of mobile robots, flexible manipulators, and manipulators with elastic joints. The main challenge in the motion control problem of rigid manipulators is the complexity of their dynamics and uncertainties. The former results from nonlinearity and coupling in the robot manipulators. The latter is twofold: structured and unstructured. Structured uncertainty means imprecise knowledge of the dynamic parameters and will be touched upon in this chapter, whereas unstructured uncertainty results from joint and link flexibility, actuator dynamics, friction, sensor noise, and unknown environment dynamics, and will be treated in other chapters. In this chapter, we begin with an introduction to motion control of robot manipulators from a fundamental viewpoint, followed by a survey and brief review of the relevant advanced materials. Specifically, the dynamic model and useful properties of robot manipulators are recalled in Sect. 8.1. The joint and operational space control approaches, two different viewpoints on control of robot manipulators, are compared in Sect. 8.2. Independent joint control and proportional–integral–derivative (PID) control, widely adopted in the field of industrial robots, are presented in Sects. 8.3 and 8.4, respectively. Tracking control, based on feedback linearization, is introduced in Sect. 8.5. The computed-torque control and its variants are described in Sect. 8.6. Adaptive control is introduced in Sect. 8.7 to solve the problem of structural uncertainty, whereas the optimality and robustness issues are covered in Sect. 8.8. To compute suitable set point signals as input values for these motion controllers, Sect. 8.9 introduces reference trajectory planning concepts. Since most controllers of robot manipulators are implemented by using microprocessors, the issues of digital implementation are discussed in Sect. 8.10. Finally, learning control, one popular approach to intelligent control, is illustrated in Sect. 8.11.
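
As a minimal illustration of the computed-torque (inverse-dynamics) approach treated in Sect. 8.6, the sketch below applies the standard control law tau = M(q)(qdd_des + Kd(qd_des - qd) + Kp(q_des - q)) + C(q, qd)qd + g(q); the model callables and gain values are illustrative assumptions, not code from the chapter.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
    """Computed-torque (inverse-dynamics) control law:
        tau = M(q)(qdd_des + Kd(qd_des - qd) + Kp(q_des - q)) + C(q, qd) qd + g(q).
    M, C, g are callables returning the inertia matrix, Coriolis matrix, and
    gravity vector of the assumed-known rigid-body model."""
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e            # outer PD loop on the tracking error
    return M(q) @ v + C(q, qd) @ qd + g(q)    # inner feedback-linearizing loop

# Toy 1-DOF example: unit inertia, no Coriolis term, no gravity (illustrative only).
M = lambda q: np.array([[1.0]])
C = lambda q, qd: np.array([[0.0]])
g = lambda q: np.array([0.0])
Kp, Kd = np.diag([25.0]), np.diag([10.0])
tau = computed_torque(np.zeros(1), np.zeros(1), np.array([0.5]),
                      np.zeros(1), np.zeros(1), M, C, g, Kp, Kd)
print(tau)   # -> [12.5]
```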

Sensor-based online trajectory generation

Author  Torsten Kröger

Video ID : 761

The video shows the movements of a position-controlled 6-DOF industrial-robot arm equipped with a distance sensor at its end-effector. The task of the robot is to draw a rectangle on the table, while the force on the table is controlled by a force controller which acts only orthogonally to the table surface. The dimensions of the rectangle are determined by the obstacles in the robot's environment. If the obstacles are moved, the distance sensor triggers the execution of a new trajectory segment which is computed within one control cycle (1 ms), so that it can be instantly executed.
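
The event-driven replanning described above can be mimicked with a toy one-dimensional online trajectory generator that, in every control cycle, advances a velocity- and acceleration-limited setpoint toward the currently sensed target. This is a simplified sketch under assumed limits, not the algorithm used in the video.

```python
# Toy sketch of sensor-triggered online trajectory generation: each control
# cycle advances the commanded position toward the currently sensed target;
# a sensor event simply changes that target, and the next segment is computed
# within the same cycle. All limits are illustrative assumptions.
DT = 0.001        # control cycle of 1 ms, as in the video
V_MAX = 0.5       # assumed velocity limit [m/s]
A_MAX = 2.0       # assumed acceleration limit [m/s^2]

def next_setpoint(pos, vel, target):
    """Advance the commanded position by one cycle (trapezoidal-style)."""
    to_go = target - pos
    brake = vel * vel / (2.0 * A_MAX)                  # distance needed to stop
    direction = 1.0 if to_go >= 0.0 else -1.0
    if abs(to_go) <= brake and vel * to_go >= 0.0:     # close to the target: decelerate
        vel_cmd = max(abs(vel) - A_MAX * DT, 0.0) * direction
    else:                                              # otherwise accelerate toward it
        vel_cmd = max(-V_MAX, min(V_MAX, vel + A_MAX * DT * direction))
    return pos + vel_cmd * DT, vel_cmd

pos, vel, target = 0.0, 0.0, 0.10
for k in range(2000):
    if k == 500:        # simulated sensor event: the obstacle (and thus the target) moves
        target = 0.05
    pos, vel = next_setpoint(pos, vel, target)
print(f"commanded position after 2 s: {pos:.4f} m")
```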

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

A ride in the Google self-driving car

Author  Google Self-Driving Car Project

Video ID : 710

The maturity of the tools developed for mobile-robot navigation and explained in this chapter has enabled Google to integrate them into an experimental vehicle. This video demonstrates Google's self-driving technology on the road.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
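
Because the inertial parameters enter the rigid-body dynamics linearly, the identification described above reduces to ordinary least squares on stacked regressor equations of the form tau = Phi(q, qdot, qddot) pi. The sketch below uses a made-up one-degree-of-freedom regressor purely for illustration; it is not the formulation from the chapter.

```python
import numpy as np

# Generic least-squares identification sketch: joint torques are linear in the
# unknown parameter vector pi, so stacking the regressor over many samples of
# an exciting trajectory and solving in the least-squares sense recovers the
# identifiable parameters. The regressor below is a made-up 1-DOF example
# (point mass on a massless link of known length L under gravity).
G, L = 9.81, 0.3

def regressor(q, qd, qdd):
    # unknowns pi = [m] for this toy model: tau = m*L^2*qdd + m*G*L*cos(q)
    return np.array([[L**2 * qdd + G * L * np.cos(q)]])

rng = np.random.default_rng(0)
true_pi = np.array([2.0])                       # "true" mass, used only to simulate data
samples = [(q, qd, qdd) for q, qd, qdd in rng.uniform(-1, 1, size=(200, 3))]
Phi = np.vstack([regressor(*s) for s in samples])
tau = Phi @ true_pi + 0.01 * rng.standard_normal(len(samples))   # noisy "measurements"

pi_hat, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
print(pi_hat)   # close to [2.0]
```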

Calibration and accuracy validation of a FANUC LR Mate 200iC industrial robot

Author  Ilian Bonev

Video ID : 430

This video shows excerpts from the process of calibrating a FANUC LR Mate 200iC industrial robot using two different methods. In the first method, the position of one of three points on the robot's end-effector is measured using a FARO laser tracker in 50 specially selected robot configurations (not shown in the video). Then, the robot parameters are identified. Next, the position of one of the three points on the robot's end-effector is measured using the laser tracker in 10,000 completely arbitrary robot configurations. The mean positioning error after calibration was found to be 0.156 mm, the standard deviation (std) 0.067 mm, the mean+3*std 0.356 mm, and the maximum 0.490 mm. In the second method, the complete pose (position and orientation) of the robot end-effector is measured in about 60 robot configurations using an innovative method based on Renishaw's telescoping ballbar. Then, the robot parameters are identified. Next, the position of one of the three points on the robot's end-effector is measured using the laser tracker in 10,000 completely arbitrary robot configurations. The mean positioning error after calibration was found to be 0.479 mm, the standard deviation (std) 0.214 mm, and the maximum 1.039 mm. However, if the validation zone is restricted, the accuracy of the robot is much better. The second calibration method is less efficient but relies on a piece of equipment that costs only $12,000 (about one tenth the cost of a laser tracker).
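
The validation statistics quoted above (mean, standard deviation, mean+3*std, and maximum positioning error) can be computed from measured versus model-predicted point positions as in the sketch below; the data in the example are random placeholders, not the experimental measurements.

```python
import numpy as np

def positioning_error_stats(p_measured, p_predicted):
    """Euclidean positioning-error statistics over N validation poses.
    Both inputs are (N, 3) arrays of point positions in millimetres."""
    err = np.linalg.norm(p_measured - p_predicted, axis=1)
    return {"mean": err.mean(),
            "std": err.std(),
            "mean+3*std": err.mean() + 3.0 * err.std(),
            "max": err.max()}

# Random placeholder data standing in for the 10,000 validation poses.
rng = np.random.default_rng(0)
measured = rng.normal(size=(100, 3))
predicted = measured + 0.1 * rng.standard_normal((100, 3))   # small model error
print(positioning_error_stats(measured, predicted))
```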

Chapter 8 — Motion Control

Wan Kyun Chung, Li-Chen Fu and Torsten Kröger

Safe human-robot cooperation

Author  Fabrizio Flacco, Torsten Kröger, Alessandro De Luca, Oussama Khatib

Video ID : 757

A real-time collision-avoidance approach for safe human-robot coexistence is presented. The main contribution shown in this video is a fast method to evaluate distances between the robot and possibly moving obstacles (including humans), based on the concept of depth space. The distances are used to generate repulsive vectors that control the robot while it executes a generic motion task. The repulsive vectors can also take advantage of an estimate of the obstacle velocity. In order to preserve the execution of a Cartesian task with a redundant manipulator, a simple collision-avoidance algorithm has been implemented in which different reaction behaviors are set up for the end-effector and for other control points along the robot structure. Reference: F. Flacco, T. Kröger, A. De Luca, O. Khatib: A depth space approach to human-robot collision avoidance, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Saint Paul (2012), pp. 338-345
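
A generic distance-based repulsive vector in the spirit of this approach is sketched below; the exact magnitude law and parameters used by Flacco et al. differ, so the constants and fall-off function here are illustrative assumptions only.

```python
import numpy as np

# Generic sketch of a distance-based repulsive vector: a velocity pointing
# away from the closest obstacle point, growing as the distance shrinks and
# vanishing smoothly for far-away obstacles. Constants are assumptions.
V_MAX = 0.5      # maximum repulsive speed [m/s]
RHO = 0.4        # distance [m] at which repulsion becomes significant

def repulsive_vector(p_control, p_obstacle):
    """Repulsive velocity at a control point given the nearest obstacle point."""
    d_vec = p_control - p_obstacle
    d = np.linalg.norm(d_vec)
    if d < 1e-6:
        return np.zeros(3)
    magnitude = V_MAX / (1.0 + np.exp((d / RHO - 1.0) * 6.0))  # smooth fall-off with distance
    return magnitude * d_vec / d

# Example: obstacle 20 cm away from the end-effector control point.
print(repulsive_vector(np.array([0.5, 0.0, 0.8]), np.array([0.5, 0.2, 0.8])))
```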

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Coordination of multiple mobile platforms for manipulation and transportation

Author  Tom Sugar, Vijay Kumar

Video ID : 201

Multiple robots are used to pick up and transport boxes. In each case, one robot is designated the "leader." The leader steers the group and the other robot(s) follow it, supplying force to keep the box in place.

Chapter 45 — World Modeling

Wolfram Burgard, Martial Hebert and Maren Bennewitz

In this chapter we describe popular ways to represent the environment of a mobile robot. For indoor environments, which are often stored using two-dimensional representations, we discuss occupancy grids, line maps, topological maps, and landmark-based representations. Each of these techniques has its own advantages and disadvantages. While occupancy grid maps allow for quick access and can be updated efficiently, line maps are more compact. Landmark-based maps can also be updated and maintained efficiently; however, they do not readily support navigation tasks such as path planning in the way topological representations do.
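
The quick access and efficient updates mentioned for occupancy grid maps are typically realized with per-cell log-odds updates; the following minimal sketch uses assumed resolution and inverse-sensor-model values for illustration.

```python
import numpy as np

# Minimal log-odds occupancy-grid sketch: each cell stores the log odds of
# being occupied and is updated independently for each sensor observation.
# Resolution and inverse-sensor-model increments are illustrative assumptions.
L_OCC, L_FREE = 0.85, -0.4     # log-odds increments for a "hit" and a "miss"
RESOLUTION = 0.05              # cell size [m]

class OccupancyGrid:
    def __init__(self, size_x, size_y):
        self.logodds = np.zeros((size_x, size_y))

    def update_cell(self, x, y, hit):
        """Bayesian update of one cell: add the inverse-sensor-model log odds."""
        i, j = int(x / RESOLUTION), int(y / RESOLUTION)
        self.logodds[i, j] += L_OCC if hit else L_FREE

    def probability(self, x, y):
        i, j = int(x / RESOLUTION), int(y / RESOLUTION)
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds[i, j]))

grid = OccupancyGrid(200, 200)
for _ in range(3):                 # three consistent "occupied" measurements
    grid.update_cell(1.0, 2.0, hit=True)
print(grid.probability(1.0, 2.0))  # -> about 0.93
```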

Additionally, we discuss approaches suited for outdoor terrain modeling. In outdoor environments, the flat-surface assumption underlying many mapping techniques for indoor environments is no longer valid. A very popular approach in this context are elevation maps and their variants, which store the surface of the terrain over a regularly spaced grid. Alternatives to such maps are point clouds, meshes, or three-dimensional grids, which provide greater flexibility but have higher storage demands.
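
A minimal elevation map along these lines stores one height value per cell of a regularly spaced grid and updates it from incoming 3-D points; the sketch below keeps the maximum observed height per cell (probabilistic variants also track a variance). Cell size and map extent are illustrative assumptions.

```python
import numpy as np

# Minimal elevation-map sketch: a regularly spaced 2-D grid that stores one
# terrain height per cell, updated from 3-D points (e.g., from a laser scan).
CELL = 0.25                      # grid resolution [m]
SIZE = 400                       # 400 x 400 cells = 100 m x 100 m

heights = np.full((SIZE, SIZE), np.nan)   # NaN marks cells never observed

def insert_point(x, y, z):
    """Update one cell with a measured 3-D point, keeping the highest z seen
    (simple variant; probabilistic elevation maps also maintain a variance)."""
    i, j = int(x / CELL), int(y / CELL)
    if 0 <= i < SIZE and 0 <= j < SIZE:
        heights[i, j] = z if np.isnan(heights[i, j]) else max(heights[i, j], z)

insert_point(10.0, 12.5, 0.8)
insert_point(10.1, 12.6, 1.1)    # falls in the same cell and raises its height
print(heights[int(10.0 / CELL), int(12.5 / CELL)])   # -> 1.1
```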

Service-robot navigation in urban environments

Author  Christian Siagian

Video ID : 270

This video presents the navigation system of the Beobot service robot of the iLab, University of Southern California (USC). Beobot's task is to fulfill services in urban-like environments, especially those involving long-range travel. The robot uses a topological map for global localization based on acquired images.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and who to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to modeling skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstrations and reproduction of the task of juicing an orange

Author  Florent D'Halluin, Aude Billard

Video ID : 29

Human demonstrations of the task of juicing an orange, and reproductions by the robot in new situations where the objects are located in positions not seen in the demonstrations. URL: http://www.scholarpedia.org/article/Robot_learning_by_demonstration