
Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Adaptive force/velocity control for opening unknown doors

Author  Yiannis Karayiannidis, Colin Smith, Francisco E. Vina, Petter Ogren, Danica Kragic

Video ID : 675

We propose a method that can open doors without prior knowledge of the door's kinematics. The method consists of a velocity controller that uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction, following the concept of hybrid force/motion control.
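The radial/tangential decomposition can be sketched as follows. This is a minimal planar illustration under assumed names and gains, not the authors' estimator: the end-effector is commanded along the tangent of the estimated door arc, and the hinge estimate is nudged to cancel the measured radial force component.

```python
import numpy as np

def tangential_velocity(ee_pos, hinge_est, speed):
    """Command a velocity along the estimated tangent of the door arc.

    ee_pos, hinge_est: 2-D positions (door motion assumed planar).
    speed: desired tangential speed (m/s).
    """
    radial = ee_pos - hinge_est
    r_hat = radial / np.linalg.norm(radial)    # estimated radial direction
    t_hat = np.array([-r_hat[1], r_hat[0]])    # tangent: radial rotated 90 degrees
    return speed * t_hat

def update_hinge_estimate(hinge_est, ee_pos, force, gain):
    """Illustrative adaptive update: a nonzero force component along the
    estimated radial direction indicates hinge-estimate error, so move the
    estimate to reduce it (the update law and gain are assumptions)."""
    radial = ee_pos - hinge_est
    r_hat = radial / np.linalg.norm(radial)
    f_radial = float(np.dot(force, r_hat))     # radial force component
    return hinge_est - gain * f_radial * r_hat
```

With a perfect hinge estimate, the commanded velocity is exactly perpendicular to the radial direction, so no radial force builds up and the estimate stays put.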

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and observe designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is shown in Sect. 17.3. To design a limbed system with good performance, it is important to take actuation and control into account, including gravity compensation, limit cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we overview the diversity of limbed systems. We see odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.

Intuitive control of a planar bipedal walking robot

Author  Jerry Pratt

Video ID : 529

The planar bipedal walking robot `Spring Flamingo', driven by series elastic actuators, was developed by Dr. Jerry Pratt and Prof. Gill Pratt.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. This chapter is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects, or potential explosions. The frontier of commercial feasibility is marked by technology that specialized engineering companies can develop and sell without active help from researchers. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance, in terms of human dexterity and speed, imposed by the limits of today's telepresence and teleoperation technology, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance still present many unsolved problems.

HD footage of 1950s atomic power plants - Nuclear reactors

Author  James P. Trevelyan

Video ID : 586

Robot manipulators, mainly remotely controlled and operated by people, have been widely used in the nuclear industry since the 1950s. This video contains archival film footage showing operations using remote manipulators.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott


Cybernetic human HRP-4C quick turn

Author  AIST

Video ID : 525

Quick slip-turn of the HRP-4C on its toes, developed by Dr. Miura, Dr. Kanehiro, Dr. Kaneko, Dr. Kajita, and Dr. Yokoi.

Chapter 67 — Humanoids

Paul Fitzpatrick, Kensuke Harada, Charles C. Kemp, Yoshio Matsumoto, Kazuhito Yokoi and Eiichi Yoshida

Humanoid robots selectively imitate aspects of human form and behavior. Humanoids come in a variety of shapes and sizes, from complete human-size legged robots to isolated robotic heads with human-like sensing and expression. This chapter highlights significant humanoid platforms and achievements, and discusses some of the underlying goals behind this area of robotics. Humanoids tend to require the integration of many of the methods covered in detail within other chapters of this handbook, so this chapter focuses on distinctive aspects of humanoid robotics with liberal cross-referencing.

This chapter examines what motivates researchers to pursue humanoid robotics, and provides a taste of the evolution of this field over time. It summarizes work on legged humanoid locomotion, whole-body activities, and approaches to human–robot communication. It concludes with a brief discussion of factors that may influence the future of humanoid robots.

3-D, collision-free motion combining locomotion and manipulation by humanoid robot HRP-2

Author  Eiichi Yoshida

Video ID : 594

This video shows an example of 3-D, whole-body motion generation combining manipulation and dynamic biped locomotion, based on two-stage motion generation. In the first stage, the motion planner generates the upper-body motion together with a walking path, approximating the lower body by its bounding box. The second stage overlays the desired upper-body motion on the dynamically-stable walking motions generated by a dynamic walking-pattern generator, based on preview control of the zero-moment point (ZMP) for a linear inverted-pendulum model. If collisions occur, the planner goes back to the first stage and reshapes the trajectory until a collision-free motion is obtained.
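The linear inverted-pendulum relation underlying such ZMP-based pattern generation can be sketched as follows. This is an illustrative fragment under assumed constants, not the actual preview controller: it only shows the ZMP–CoM relation that the preview controller regulates.

```python
# Linear inverted-pendulum model (LIPM): the center of mass (CoM) moves at a
# constant height z_c, and the ZMP p relates to CoM position x and
# acceleration a by p = x - (z_c / g) * a.
Z_C, G, DT = 0.8, 9.81, 0.005   # CoM height (m), gravity (m/s^2), step (s)

def lipm_step(x, v, p_ref):
    """One Euler step of the LIPM driven so that its ZMP equals p_ref."""
    a = (G / Z_C) * (x - p_ref)   # CoM acceleration implied by the desired ZMP
    return x + DT * v, v + DT * a

def zmp(x, a):
    """ZMP of the LIPM given CoM position and acceleration."""
    return x - (Z_C / G) * a
```

A preview controller chooses the ZMP reference over a future window so that the resulting CoM trajectory keeps the actual ZMP inside the support polygon.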

Chapter 57 — Robotics in Construction

Kamel S. Saidi, Thomas Bock and Christos Georgoulas

This chapter introduces various construction automation concepts that have been developed over the past few decades and presents examples of construction robots that are in current use (as of 2006) and/or in various stages of research and development. Section 57.1 presents an overview of the construction industry, which includes descriptions of the industry, the types of construction, and the typical construction project. The industry overview also discusses the concept of automation versus robotics in construction and breaks down the concept of robotics in construction into several levels of autonomy as well as other categories. Section 57.2 discusses some of the offsite applications of robotics in construction (such as for prefabrication), while Sect. 57.3 discusses the use of robots that perform a single task at the construction site. Section 57.4 introduces the concept of an integrated robotized construction site in which multiple robots/machines collaborate to build an entire structure. Section 57.5 discusses unsolved technical problems in construction robotics, which include interoperability, connection systems, tolerances, and power and communications. Finally, Sect. 57.6 discusses future directions in construction robotics and Sect. 57.7 gives some conclusions and suggests resources for further reading.

Obayashi ACBS (Automatic Construction Building System)

Author  Thomas Bock

Video ID : 272

In the Obayashi ACBS (Automatic Construction Building System) (Figure 57.29), once a story has been finished, the whole support structure, which rests on four columns, is pushed upwards by hydraulic presses to the next story over a 1.5 h period. Fully extended, the support structure is 25 m high; retracted, it measures 4.5 m. Once everything has been moved up, work starts on the next story. By constructing the topmost story of the high-rise building as the roof at the beginning of the building process, the site is closed off in all directions, considerably reducing the effect of the weather and any damage it might cause.

Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one's assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, a sensor payload, a communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator's robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security and gives some conclusions, followed by references.

Camera control from gaze

Author  Fabien Spindler

Video ID : 702

Visual-servoing techniques consist of using the data provided by one or several cameras in order to control the motion of a robotic security or surveillance system. A large variety of positioning or target tracking tasks can be implemented by controlling from one to all degrees of freedom of the system.
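A classic image-based visual-servoing law computes the camera velocity from the image-feature error through the pseudoinverse of the interaction matrix. The sketch below is a generic illustration of that law, not the controller in the video; the gain and matrices are assumptions.

```python
import numpy as np

def ibvs_velocity(L, e, lam=0.5):
    """Image-based visual servoing: v = -lambda * pinv(L) @ e.

    L:   interaction (image Jacobian) matrix relating camera velocity
         to image-feature velocity.
    e:   current image-feature error (measured minus desired features).
    lam: positive gain; larger values converge faster but risk overshoot.
    """
    return -lam * np.linalg.pinv(L) @ e
```

Driving the camera with this velocity makes the feature error decay exponentially when the interaction matrix model is accurate.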

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Modeling articulated objects using active manipulation

Author  Juergen Sturm

Video ID : 78

The video illustrates a mobile manipulation robot that interacts with various articulated objects, such as a fridge and a dishwasher, in a kitchen environment. During interaction, the robot learns their kinematic properties, such as the rotation axis and the configuration space. Knowing the kinematic model of these objects improves the performance of the robot and enables motion planning.

Service robots operating in domestic environments are typically faced with a variety of objects they have to deal with to fulfill their tasks. Some of these objects are articulated, such as cabinet doors and drawers, or room and garage doors. The ability to deal with such articulated objects is relevant for service robots as, for example, they need to open doors when navigating between rooms and to open cabinets to pick up objects in fetch-and-carry applications. We developed a complete probabilistic framework that enables robots to learn the kinematic models of articulated objects from observations of their motion. We combine parametric and nonparametric models consistently and utilize the advantages of both methods. As a result of our approach, a robot can robustly operate articulated objects in unstructured environments. All software is available open-source (including documentation and tutorials) at http://www.ros.org/wiki/articulation.
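The parametric, rotational case of such model learning can be illustrated with a simple least-squares circle fit to observed handle positions. This is only an illustrative sketch of recovering a rotation axis from motion observations, not the authors' probabilistic framework.

```python
import numpy as np

def fit_rotation_axis_2d(points):
    """Least-squares circle fit (Kasa method): given 2-D points observed
    along a door-handle trajectory, recover the rotation center and radius.

    Uses the identity x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    which is linear in the unknowns (cx, cy, c)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return np.array([cx, cy]), r
```

For noise-free observations on a non-degenerate arc, the fit recovers the hinge position exactly; with noisy observations it returns the least-squares estimate.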

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint


A compliant underactuated hand for robust manipulation

Author  Lael U. Odhner, Leif P. Jentoft, Mark R. Claffee, Nicholas Corson, Yaroslav Tenzer, Raymond R. Ma, Martin Buehler, Robert Kohout, Robert Howe, Aaron M. Dollar

Video ID : 655

This video introduces the iRobot-Harvard-Yale (iHY) Hand, an underactuated hand driven by five actuators which is capable of performing a wide range of grasping and in-hand repositioning tasks. This hand was designed to address the need for a durable, inexpensive, moderately dexterous hand suitable for use on mobile robots. Particular emphasis is placed on the development of underactuated fingers that are capable of both firm power grasps and low-stiffness fingertip grasps, using only the compliant mechanics of the fingers.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Robot Pebbles - MIT developing self-sculpting smart-sand robots

Author  Kyle Gilpin, Ara Knaian, Kent Koyanagi, Daniela Rus

Video ID : 211

Researchers at the Distributed Robotics Laboratory at MIT's Computer Science and Artificial Intelligence Laboratory are developing tiny robots that could self-assemble into functional tools and then self-disassemble after use. Dubbed "smart sand," the tiny robots (measuring 0.1 cubic cm) would contain microprocessors and electropermanent magnets which could latch, communicate, and transfer power to each other, enabling them to form life-size replicas of miniature models. https://groups.csail.mit.edu/drl/wiki/index.php?title=Robot_Pebbles