
Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

DLR's Agile Justin plays catch with Rollin' Justin

Author  DLR

Video ID : 661

DLR has developed a new robot named Agile Justin that is capable of tossing a baseball. This seemed like a natural complement to Rollin' Justin's ability to catch a baseball, so they teamed them up for a friendly game of "catch."

Atlas walking and manipulation

Author  DRC Team MIT

Video ID : 662

Autonomy demonstration with the MIT Atlas robot, consisting of the execution of a sequence of autonomous sub-tasks. Walking and manipulation plans are computed online, with object-fitting input from the perception system.

Dynamic robot manipulation

Author  Boston Dynamics

Video ID : 664

BigDog handles heavy objects. The goal is to use the strength of the legs and torso to help power motions of the arm. This sort of dynamic, whole-body approach to manipulation is used routinely by human athletes and will enhance the performance of advanced robots.

CHOMP trajectory optimization

Author  Nathan Ratliff, Matt Zucker, J. Andrew Bagnell, Siddhartha Srinivasa

Video ID : 665

Covariant functional-gradient techniques for motion planning via optimization, with computer simulations and video demonstrations on two experimental platforms: the Barrett Technology WAM arm and the Boston Dynamics LittleDog.
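The covariant-gradient idea behind CHOMP can be illustrated in one dimension: a discretized trajectory is optimized under a smoothness cost plus an obstacle cost, with the gradient step warped by the inverse of the finite-difference smoothness metric. This is only a minimal sketch under assumed toy quantities (a single point obstacle at 0.5 with an exponential repulsive cost, hand-picked weights and step sizes), not the authors' implementation.

```python
import numpy as np

def chomp_toy(n=48, q_start=0.0, q_goal=1.0, iters=400, step_max=0.01):
    """Toy 1-D covariant-gradient trajectory optimization (CHOMP-style)."""
    # Smoothness metric: tridiagonal finite-difference Laplacian over the
    # n free (interior) waypoints; the two endpoints stay fixed.
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A_inv = np.linalg.inv(A)
    b = np.zeros(n)
    b[0], b[-1] = q_start, q_goal        # boundary terms of the smoothness cost

    x = np.linspace(q_start, q_goal, n + 2)[1:-1]  # straight-line initialization
    obstacle = 0.5                                 # hypothetical 1-D obstacle
    for _ in range(iters):
        d = x - obstacle
        # Hypothetical repulsive cost gradient: pushes waypoints off the obstacle.
        g_obs = -np.sign(d) * np.exp(-np.abs(d) / 0.1)
        g = (A @ x - b) + 0.1 * g_obs    # smoothness gradient + obstacle gradient
        step = A_inv @ g                 # covariant (metric-preconditioned) direction
        x = x - np.clip(step, -step_max, step_max)  # bounded update per waypoint
    return x
```

Preconditioning by `A_inv` is what makes the update "covariant": each step perturbs the whole trajectory smoothly rather than bending it only at the waypoints nearest the obstacle.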

Motor-skill learning for robotics

Author  Jan Peters, Jens Kober, Katharina Mülling

Video ID : 667

We propose to divide the generic skill-learning problem into parts that can be well-understood from a robotics point of view. After appropriate learning approaches have been designed for these basic components, they will serve as the ingredients of a general approach to robot-skill learning. This video shows results of our work on learning to control, learning elementary movements, as well as steps towards the learning of complex tasks.

Policy learning

Author  Peter Pastor

Video ID : 668

The video explains and demonstrates the basics of policy learning based on two tasks: pool strokes and chopstick manipulation.

Autonomous robot skill acquisition

Author  Scott Kuindersma, George Konidaris

Video ID : 669

This video demonstrates the autonomous skill acquisition of a robot acting in a constrained environment called the "Red Room". The environment consists of buttons, levers, and switches, all located at points of interest designated by ARTags. The robot can navigate to these locations and perform primitive manipulation actions, some of which affect the physical state of the environment (e.g., opening or closing a door).

State-representation learning for robotics

Author  Rico Jonschkowski, Oliver Brock

Video ID : 670

State-representation learning for robotics using prior knowledge about interacting with the physical world.

Extracting kinematic background knowledge from interactions using task-sensitive, relational learning

Author  Sebastian Höfer, Tobias Lang, Oliver Brock

Video ID : 671

To successfully manipulate novel objects, robots must first acquire information about the objects' kinematic structure. We present a method to learn relational, kinematic, background knowledge from exploratory interactions with the world. As the robot gathers experience, this background knowledge enables the acquisition of kinematic world models with increasing efficiency. Learning such background knowledge, however, proves difficult, especially in complex, feature-rich domains. We present a novel, task-sensitive, relational-rule learner and demonstrate that it is able to learn accurate kinematic background knowledge in domains where other approaches fail. The resulting background knowledge is more compact and generalizes better than that obtained with existing approaches.

DART: Dense articulated real-time tracking

Author  Tanner Schmidt, Richard Newcombe, Dieter Fox

Video ID : 673

This project aims to provide a unified framework for tracking arbitrary articulated models, given their geometric and kinematic structure. Our approach uses dense input data (computing an error term on every pixel), which we are able to process in real time by leveraging the power of GPGPU programming and a very efficient representation of model geometry based on signed-distance functions. This approach has proven successful on a wide variety of models, including human hands, human bodies, robot arms, and articulated objects.
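The core ingredient, scoring observations against a signed-distance function, can be sketched with a single rigid shape. The snippet below is a hypothetical toy, not the DART codebase: the "model" is one sphere (DART composes SDFs of articulated parts), the points stand in for dense depth data, and the fit is plain gradient descent on the mean squared SDF value.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from each 3-D point to a sphere surface (negative inside)."""
    return np.linalg.norm(points - center, axis=1) - radius

def fit_center(points, radius, center0, iters=500, eta=0.2):
    """Gradient descent on the mean squared SDF: a dense per-point error term."""
    c = np.array(center0, dtype=float)
    for _ in range(iters):
        diff = points - c
        dist = np.linalg.norm(diff, axis=1)
        d = dist - radius                       # per-point signed distance
        # d(error)/dc via the chain rule through the Euclidean norm
        grad = np.mean(2.0 * d[:, None] * (-diff / dist[:, None]), axis=0)
        c = c - eta * grad
    return c
```

Because the SDF gives a differentiable error for every observed point, the same scheme extends to pose and joint-angle parameters, which is what makes dense real-time tracking tractable on a GPU.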