
Chapter 8 — Motion Control

Wan Kyun Chung, Li-Chen Fu and Torsten Kröger

This chapter will focus on the motion control of robotic rigid manipulators. In other words, this chapter does not treat the motion control of mobile robots, flexible manipulators, and manipulators with elastic joints. The main challenge in the motion control problem of rigid manipulators is the complexity of their dynamics and uncertainties. The former results from nonlinearity and coupling in the robot manipulators. The latter is twofold: structured and unstructured. Structured uncertainty means imprecise knowledge of the dynamic parameters and will be touched upon in this chapter, whereas unstructured uncertainty results from joint and link flexibility, actuator dynamics, friction, sensor noise, and unknown environment dynamics, and will be treated in other chapters. In this chapter, we begin with an introduction to motion control of robot manipulators from a fundamental viewpoint, followed by a survey and brief review of the relevant advanced materials. Specifically, the dynamic model and useful properties of robot manipulators are recalled in Sect. 8.1. The joint and operational space control approaches, two different viewpoints on control of robot manipulators, are compared in Sect. 8.2. Independent joint control and proportional–integral–derivative (PID) control, widely adopted in the field of industrial robots, are presented in Sects. 8.3 and 8.4, respectively. Tracking control, based on feedback linearization, is introduced in Sect. 8.5. The computed-torque control and its variants are described in Sect. 8.6. Adaptive control is introduced in Sect. 8.7 to solve the problem of structural uncertainty, whereas the optimality and robustness issues are covered in Sect. 8.8. To compute suitable set-point signals as input values for these motion controllers, Sect. 8.9 introduces reference trajectory planning concepts. Since most controllers of robot manipulators are implemented by using microprocessors, the issues of digital implementation are discussed in Sect. 8.10. Finally, learning control, one popular approach to intelligent control, is illustrated in Sect. 8.11.
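
To make the widely used PID approach of Sects. 8.3 and 8.4 concrete, the following minimal Python sketch shows one control cycle of an independent-joint PID law; the function name, gain values, and two-joint example are illustrative assumptions rather than code from the chapter.

    import numpy as np

    def pid_joint_control(q, dq, q_des, dq_des, e_int, dt, Kp, Kd, Ki):
        """One cycle of independent-joint PID control (a sketch only).
        q, dq         : measured joint positions and velocities (n-vectors)
        q_des, dq_des : desired joint positions and velocities
        e_int         : running integral of the position error
        Kp, Kd, Ki    : proportional, derivative, and integral gains
        Returns the commanded joint torques and the updated integral."""
        e = q_des - q                     # position error
        de = dq_des - dq                  # velocity error
        e_int = e_int + e * dt            # accumulate the integral term
        tau = Kp * e + Kd * de + Ki * e_int
        return tau, e_int

    # Example: regulate a 2-joint arm to a fixed set point at 1 kHz
    q, dq, e_int = np.zeros(2), np.zeros(2), np.zeros(2)
    q_des, dq_des = np.array([0.5, -0.3]), np.zeros(2)
    tau, e_int = pid_joint_control(q, dq, q_des, dq_des, e_int,
                                   dt=0.001, Kp=100.0, Kd=20.0, Ki=5.0)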

JediBot - Experiments in human-robot sword-fighting

Author  Torsten Kröger, Ken Oslund, Tim Jenkins, Dan Torczynski, Nicholas Hippenmeyer, Radu Bogdan Rusu, Oussama Khatib

Video ID : 759

Real-world sword-fighting between human opponents requires extreme agility, fast reaction time and dynamic perception capabilities. This video shows experimental results achieved with a 3-D vision system and a highly reactive control architecture which allows a robot to sword fight against human opponents. An online trajectory generator is used as an intermediate layer between low-level trajectory-following controllers and high-level visual perception. This architecture enables robots to react nearly instantaneously to the unpredictable human motions perceived by the vision system as well as to sudden sword contacts detected by force and torque sensors. Results show how smooth and highly dynamic motions are generated on the fly while using the vision and force/torque sensor signals in the feedback loops of the robot-motion controller. Reference: T. Kröger, K. Oslund, T. Jenkins, D. Torczynski, N. Hippenmeyer, R. B. Rusu, O. Khatib: JediBot - Experiments in human-robot sword-fighting, Proc. Int. Symp. Exp. Robot., Québec City (2012)
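
The following single-axis Python sketch illustrates the role of such an intermediate online trajectory generation layer: each control cycle it moves a smooth set point toward a target that the vision system may change at any time, while respecting velocity and acceleration limits. It is a simplified illustration with assumed names and limits, not the algorithm used in the experiments.

    import math

    def otg_step(p, v, p_target, dt, v_max, a_max):
        """One greatly simplified online trajectory generation step for a
        single axis (a sketch of the layering idea only)."""
        err = p_target - p
        # fastest velocity from which we can still brake to rest at the target
        v_des = math.copysign(min(v_max, math.sqrt(2.0 * a_max * abs(err))), err)
        dv = max(-a_max * dt, min(a_max * dt, v_des - v))   # acceleration limit
        v_new = v + dv
        p_new = p + v_new * dt
        return p_new, v_new

    # Each control cycle the vision system may move the target; the OTG
    # layer turns it into a smooth set point for the low-level controller:
    # p_target = vision.predict_sword_pose()        # hypothetical call
    # p, v = otg_step(p, v, p_target, dt=0.001, v_max=2.0, a_max=10.0)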

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

VisualGPS – High accuracy localization for forestry machinery

Author  Juergen Rossmann, Michael Schluse, Arno Buecken, Christian Schlette, Markus Emde

Video ID : 96

Developments in space robotics continue to find their way into our everyday lives. These advances include, for instance, novel methods that determine a vehicle's position more accurately than conventional GPS systems. The example here is the "VisualGPS" approach, which estimates the position of forestry machinery, such as harvesters working in the woods, with high accuracy. For "VisualGPS", harvesters are equipped with laser scanners that scan the surrounding area and generate landmarks from the detected tree positions. These tree positions are combined into a local single-tree map. By comparing the local single-tree map with a map generated from aerial survey data, the current machine position can be calculated with an accuracy of 0.5 m.
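
The following Python sketch illustrates the general idea of localizing by matching locally observed tree positions against an aerial-survey tree map, scoring candidate poses by nearest-neighbour distances. It is a simplified illustration under assumed 2-D NumPy inputs, not the actual VisualGPS implementation.

    import numpy as np

    def score_pose(local_trees, global_map, x, y, theta):
        """Score a candidate machine pose (x, y, theta) by how well the
        locally observed tree positions (N x 2 array, vehicle frame) line
        up with the global tree map (M x 2 array, survey frame).
        Lower scores are better."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        world = local_trees @ R.T + np.array([x, y])   # into map frame
        # nearest-neighbour distance of every observed tree to a mapped tree
        d = np.linalg.norm(world[:, None, :] - global_map[None, :, :], axis=2)
        return d.min(axis=1).sum()

    def localize(local_trees, global_map, candidates):
        """Brute-force search over candidate (x, y, theta) poses."""
        return min(candidates,
                   key=lambda pose: score_pose(local_trees, global_map, *pose))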

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses design, actuation, sensing and control of multifingered robot hands. From the design viewpoint, they have a strong constraint in actuator implementation due to the space limitation in each joint. After a brief overview of the anthropomorphic end-effector and its dexterity in Sect. 19.1, various approaches for actuation are provided with their advantages and disadvantages in Sect. 19.2. The key classifications are (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.

The Dexmart Hand

Author  Claudio Melchiorri

Video ID : 767

Grasping and manipulation tasks executed by the Dexmart Hand, an anthropomorphic robot hand developed within a European research project. Detailed aspects of the "twisted-string" actuation principle are demonstrated.

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.
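
As a concrete illustration of behaviors as modular building blocks (Sects. 13.2 and 13.3), the following minimal Python sketch arbitrates among three hypothetical behaviors by fixed priority; the behaviors, sensor names, and arbitration rule are illustrative assumptions, not an architecture taken from the chapter.

    # Each behavior maps sensor readings to an action, or None if inactive.

    def avoid_obstacle(sensors):
        return ("turn", -1.0) if sensors["front_range"] < 0.3 else None

    def follow_wall(sensors):
        return ("turn", 0.2) if sensors["right_range"] > 0.5 else None

    def wander(sensors):
        return ("forward", 0.5)          # default behavior, always active

    BEHAVIORS = [avoid_obstacle, follow_wall, wander]   # highest priority first

    def arbitrate(sensors):
        """Return the action of the highest-priority active behavior."""
        for behavior in BEHAVIORS:
            action = behavior(sensors)
            if action is not None:
                return action

    # arbitrate({"front_range": 1.2, "right_range": 0.8})  ->  ("turn", 0.2)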

Natural interaction design of a humanoid robot

Author  François Michaud

Video ID : 418

Demonstration of the use of HBBA, a hybrid behavior-based architecture, to implement three interactional capabilities on IRL-1. Reference: F. Ferland, D. Létourneau, M.-A. Legault, M. Lauria, F. Michaud: Natural interaction design of a humanoid robot, J. Human-Robot Interact. 1(2), 118-134 (2012)

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

Torque-control strategies for snake robots

Author  David Rollinson, Kalyan Vasudev Alwala, Nico Zevallos, Howie Choset

Video ID : 392

This video provides an overview of some initial torque-based motions for the series elastic snake robot (SEA Snake). Because the SEA Snake has the unique ability to accurately sense and control the torque of each of its joints, it can execute life-like compliant and adaptive motions, without a complex controller or tactile sensing.
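
The following Python sketch illustrates the basic series-elastic-actuator idea behind such joint-level torque sensing and control: the torque is inferred from the deflection of the elastic element and driven toward a desired value by a simple proportional law. The spring constant, gain, and interface are illustrative assumptions, not the SEA Snake controller.

    def sea_torque_control(theta_motor, theta_joint, tau_des, k_spring, k_p):
        """Minimal series-elastic-actuator torque loop (a sketch only).
        theta_motor, theta_joint : angles on either side of the spring (rad)
        tau_des                  : desired joint torque (Nm)
        Returns the measured torque and a motor velocity command."""
        tau_meas = k_spring * (theta_motor - theta_joint)   # torque from spring deflection
        motor_vel_cmd = k_p * (tau_des - tau_meas)          # drive deflection toward tau_des
        return tau_meas, motor_vel_cmd

    # Example with assumed values: a 100 Nm/rad spring deflected by 0.02 rad
    # tau, cmd = sea_torque_control(0.52, 0.50, tau_des=1.5, k_spring=100.0, k_p=0.5)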

Chapter 52 — Modeling and Control of Aerial Robots

Robert Mahony, Randal W. Beard and Vijay Kumar

Aerial robotic vehicles are becoming a core field in mobile robotics. This chapter considers some of the fundamental modelling and control architectures in the most common aerial robotic platforms: small-scale rotor vehicles such as the quadrotor, hexacopter, or helicopter, and fixed-wing vehicles. In order to control such vehicles one must begin with a good but sufficiently simple dynamic model. Based on such models, physically motivated control architectures can be developed. Such algorithms require realisable target trajectories along with real-time estimates of the system state obtained from an on-board sensor suite. This chapter provides a first introduction across all these subjects for quadrotor and fixed-wing aerial robotic vehicles.

Dubins airplane

Author  Randy Beard

Video ID : 437

This video shows how paths are planned using software based on the Dubins airplane model.
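
For readers unfamiliar with the model, the following Python sketch integrates a simplified Dubins-airplane kinematic model (constant airspeed, bounded turn rate, bounded flight-path angle), which is the kind of model such planners reason about. It is an illustrative assumption, not the planning software shown in the video.

    import math

    def dubins_airplane_step(x, y, z, psi, dt, V, turn_rate, gamma):
        """One Euler-integration step of a simplified Dubins-airplane model.
        V         : constant airspeed (m/s)
        turn_rate : commanded heading rate, assumed within the vehicle's limit
        gamma     : commanded flight-path angle, assumed within its limit"""
        x += V * math.cos(psi) * math.cos(gamma) * dt
        y += V * math.sin(psi) * math.cos(gamma) * dt
        z += V * math.sin(gamma) * dt
        psi += turn_rate * dt
        return x, y, z, psi

    # Example with assumed parameters: fly a climbing right-hand arc
    # state = (0.0, 0.0, 100.0, 0.0)
    # for _ in range(1000):
    #     state = dubins_airplane_step(*state, dt=0.01, V=15.0,
    #                                  turn_rate=0.3, gamma=0.1)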

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i. e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Social learning applied to task execution

Author  Cynthia Breazeal

Video ID : 562

This is a video demonstration of the Leonardo robot integrating learning via tutelage, self-motivated learning, and preference learning to perform a tangram-like task. First, the robot learns a policy for how to operate a remote-control box to reveal key shapes needed for the next task, integrating self-motivated exploration with tutelage. The human can shape what the robot learns through a variety of social means. Once Leo has learned a policy, the robot begins the tangram task, which is to make a sailboat figure out of the colored blocks on the virtual workspace. During this interaction, the person has a preference for which block colors to use (yellow and blue), which he conveys through nonverbal means. The robot learns this preference rule from observing these nonverbal cues. During the task, the robot needs blocks of a certain shape and color that are not readily available on the workspace but can be accessed by operating the remote-control box to reveal those shapes. Leo invokes the recently learned policies to access those shapes and achieve the goal of making the sailboat figure.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Atlas whole-body grasping

Author  DRC Team MIT

Video ID : 651

A simple demonstration of automated perception, whole-body motion planning, and dynamic stabilization using Atlas and software developed at MIT.

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement and classification in robotics applications. The source of sonar artifacts is explained and how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
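
As a hedged illustration of the computations underlying the range and bearing measurements mentioned above, the following Python sketch converts a pulse-echo time of flight into a range, and an arrival-time difference at two receivers into a bearing; the constants and function names are assumptions for illustration, not a specific system from the chapter.

    import math

    SPEED_OF_SOUND = 343.0     # m/s in air at roughly 20 C

    def range_from_tof(time_of_flight):
        """Range from a pulse-echo time of flight; divide by two because
        the pulse travels out to the target and back."""
        return SPEED_OF_SOUND * time_of_flight / 2.0

    def bearing_from_two_receivers(t_left, t_right, baseline):
        """Approximate echo bearing from the arrival-time difference at two
        receivers separated by a known baseline (far-field, plane-wave
        assumption)."""
        path_difference = SPEED_OF_SOUND * (t_right - t_left)
        return math.asin(max(-1.0, min(1.0, path_difference / baseline)))

    # Example: a 5.8 ms round trip corresponds to roughly 1 m of range.
    # range_from_tof(0.0058)  ->  about 0.99 m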

B-scan image of indoor potted tree using multipulse sonar

Author  Roman Kuc

Video ID : 315

Repeatedly clearing a conventional sonar ranging board causes each echo to produce a spike sequence whose density is related to the echo amplitude. A brightness-scan (B-scan) image, similar to diagnostic ultrasound images, is generated by transforming the short-term spike density into a gray-scale intensity. The video shows a B-scan of a potted tree in an indoor environment containing a doorway (with door knob) and a tree located in front of a cinder-block wall. The B-scan shows the specular environmental features as well as the random tree-leaf structures. Note that the wall behind the tree is also clearly imaged. Reference: R. Kuc: Generating B-scans of the environment with a conventional sonar, IEEE Sens. J. 8(2), 151-160 (2008); doi: 10.1109/JSEN.2007.908242
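
The following Python sketch illustrates the transformation just described, turning one ping's detection spikes into a single gray-scale image column via a short-term spike density; the function and parameters are illustrative assumptions, not Kuc's implementation.

    import numpy as np

    def bscan_column(spike_ranges, n_bins, max_range):
        """Turn one ping's detection spikes into a gray-scale image column.
        spike_ranges : ranges (m) at which the repeatedly cleared ranging
                       board reported detections for this ping
        Returns intensities in [0, 1]; a denser spike cluster maps to a
        brighter pixel."""
        counts, _ = np.histogram(spike_ranges, bins=n_bins, range=(0.0, max_range))
        return counts / counts.max() if counts.max() > 0 else counts.astype(float)

    # Stacking one column per ping while the sonar scans the scene yields
    # a B-scan image like the one shown in the video:
    # image = np.column_stack([bscan_column(r, 256, 10.0) for r in all_pings])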

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance imposed by the limits of today’s telepresence and teleoperation technology, in terms of human dexterity and speed, robots often can offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining and clearance of landmines and unexploded ordnance still present many unsolved problems.

Promotional video of robot for cleaning up Fukushima

Author  James P. Trevelyan

Video ID : 583

Many companies have proposed new robots to help with the Fukushima reactor decommissioning process. This is one of many such promotional videos.