Part Videos

Part A — Robotics Foundations

Editor  David E. Orin

The chapters contained in Robotics Foundations present the fundamental principles and methods that are used to develop a robotic system. In order to perform the tasks that are envisioned for robots, many challenging problems have been uncovered in kinematics, dynamics, design, actuation, sensing, modeling, motion planning, control, programming, decision making, task planning, and learning. Robots with redundant kinematic degrees of freedom or with flexible elements add to the complexity of these systems.

Robots often have a large number of degrees of freedom so that they can provide the rich set of three-dimensional motions that may be required for a range of tasks. The design of the link and joint structures, as well as the actuation, to achieve the desired performance is also challenging. The robot is a nonlinear, coupled system which is difficult to model and control because of its complex dynamics. Kinematic redundancy and flexible elements in the robots increase this complexity.

In addition to control of the motion, control of the interaction forces between the robot and the environment is needed when manipulating objects or interacting with humans. A fundamental robotics task is to plan collision-free motion for complex bodies from a start to a goal position among a collection of obstacles, and this can become an intractable computational problem. In other scenarios, robots need to execute behaviors that take advantage of dynamic interactions with the environment rather than rely solely on explicit reasoning and planning.

Robot learning will be necessary to generate actions and control for ever-changing task requirements and environments, in order to achieve the level of autonomy envisioned.

Part B — Design

Editor  Frank C. Park

The chapters contained in Design are concerned with the design and modeling of the actual physical realizations of a robot. Some of the more obvious mechanical structures that come to mind are arms, legs, and hands. To this list can be added wheeled vehicles and platforms; snake-like and continuum robots; robots capable of swimming and flying; and robot structures at the micro- and nanoscales.

Even for the most basic robotic device, the arm, an incredibly diverse set of structures is possible, depending on the number and types of joints and actuators, and the presence of closed loops in the kinematic structure, or flexibility in the joints and links. Constructing models and planning and control algorithms for these diverse structures represents an even greater set of challenges. The topics addressed in these chapters are essential not only to creating the physical robot itself, but also to generating and controlling its movements, and to manipulating objects in desired ways.

What ultimately distinguishes robotics from other disciplines that study intelligence is that, by definition, robots require a physical manifestation, and by extension must physically interact with the environment.

In this regard the topics addressed in these chapters can be said to constitute the most basic layer of this endeavor. Just as it is difficult to examine human intelligence from a purely abstract perspective, remotely detached from the physical body, so it is difficult to separate the contents of the remaining parts without including in the discussion the actual medium of interaction with the physical world, the (physical) robots themselves.

Part C — Sensing and Perception

Editor  Henrik I. Christensen

This part covers all aspects of sensing, from basic measurements of physical parameters in the world to making sense of such data, all for the purpose of enabling a robot to perform its desired tasks.

Right now, robotics is seeing a revolution in the use of sensors. Traditionally, robots have been designed for maximum stiffness, and applications have been designed to be predictable in their operation. As robots emerge from fenced areas and are deployed in a wider range of applications, from collaborative robotics to autonomously driving cars, it is essential to have perception capabilities that allow estimation not only of the state of the robot but also of the state of the surrounding environment.

Due to these new requirements the importance of sensing and perception has increased significantly over the last decade and will without doubt continue to grow in the future.

The topics addressed in this part span all aspects of detection and processing: from physical contact with the world through force and tactile sensing, via instrumented environments for detection of position and motion, to image-based methods for mapping, detection, and control in structured and unstructured environments. In many settings a single sensory modality/sensor is inadequate to give a robust estimate of the state of the environment. Consequently, a chapter on multisensory fusion is also included in this part.

Part D — Manipulation and Interfaces

Editor  Makoto Kaneko

Manipulation and Interfaces is divided into two halves: the first is concerned with manipulation, addressing frameworks for modeling, motion planning, and control of grasping and manipulation of an object; the second is concerned with interfaces, handling physical human–robot interaction.

Humans can grasp and manipulate an object dexterously through hand–arm coordination. An optimal control skill for such a redundant system is naturally and gradually acquired through experience in daily life. Fingers, in particular, play an important role in human dexterity. Without dexterous fingers, it is hard for us to handle everyday tools such as a pencil, keyboard, cup, knife, or fork. This dexterity is supported by active and passive compliance as well as by the multiple sensory organs in the fingertip. Such dexterous manipulation clearly differentiates humans from other animals. Thus, manipulation is one of the most important human functions. We acquired the current shape of our fingers, our sensory organs, and our skill at manipulation through a long history of evolution spanning more than six million years.

Because humans and robots differ greatly in actuators, sensors, and mechanisms, achieving human-like dexterous manipulation in a robot is a challenging subject in robotics. Surveying current robot technology, we observe that the dexterity of robots is still far behind that of humans. Without human-like dexterity, future robots will not be able to replace human labor in environments unsuitable for humans. In this sense, the implementation of dexterity is one of the highlights of future robot design.

The first half of this part provides good insights into enhancing the dexterity of robots. The second half addresses interfaces through which humans control a robot, or multiple robots, via direct or indirect contact.

Part E — Moving in the Environment

Editor  Raja Chatila

Until the mid-1960s, robots were only able to move in a predetermined workspace, the one they could reach from their firmly fixed base. Moving in the Environment is about their conquest of the whole space.

Mobile Robotics started as a research domain in its own right in the late 1960s with the Shakey project at SRI. 2015 marks the 50th anniversary of this pioneering project, whose lasting legacy includes the A* search algorithm. The seminal paper by N.J. Nilsson, "A Mobile Automaton: An Application of Artificial Intelligence Techniques," presented at the International Joint Conference on Artificial Intelligence (IJCAI) in 1969, already addressed perception, mapping, motion planning, and the notion of a control architecture.

Those issues would indeed be at the core of mobile robotics research for the following decades. The 1980s saw a boom in mobile robot projects, and as soon as it was necessary to cope with the reality of the physical world, problems appeared that fostered novel research directions, actually moving away from the original concept in which the robot was just an application of artificial intelligence (AI) techniques.
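The A* algorithm named above as part of Shakey's legacy is a best-first graph search guided by an admissible heuristic, and it remains a staple of mobile robot motion planning. The following is a minimal illustrative sketch, not the Handbook's own presentation; the grid, obstacle set, and function names are invented for the example:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A*: returns a lowest-cost path from start to goal, or None.

    neighbors(n) yields (successor, step_cost) pairs;
    heuristic(n) must never overestimate the true cost to goal.
    """
    frontier = [(heuristic(start), 0, start)]  # (f = g + h, g, node)
    came_from = {start: None}
    best_g = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            # Reconstruct the path by walking parent links back to start.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        if g > best_g[node]:
            continue  # stale queue entry superseded by a cheaper one
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if nxt not in best_g or new_g < best_g[nxt]:
                best_g[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt))
    return None

# Toy workspace: a 3x3 four-connected grid with two blocked cells,
# using Manhattan distance as the (admissible) heuristic.
obstacles = {(1, 0), (1, 1)}

def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 3 and 0 <= ny < 3 and (nx, ny) not in obstacles:
            yield (nx, ny), 1

path = a_star((0, 0), (2, 0), grid_neighbors,
              lambda p: abs(p[0] - 2) + abs(p[1] - 0))
```

On this grid the planner must route around the blocked column, yielding a six-step path from the start to the goal.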

This part of the Springer Handbook of Robotics addresses all the issues that, put together, are necessary to build and control a mobile robot, except for the mechanical design itself.

Part F — Robots at Work

Editor  Alex Zelinsky

Robots at Work covers advances in technology concerned with the growing area of robot applications, ranging from factory robotics, through a diverse array of industry applications such as mining, agriculture, and construction, to health care and domestic robotics.

The future vision for robotics lies within the pervasive application of robots. The robots of the future will perform all the dangerous, dirty, and dreary (DDD) tasks. Joe Engelberger, the pioneer of the robotics industry, wrote in his 1989 book "Robotics in Service" that the inspiration to write the book came as a reaction to a forecast study of robot applications, which predicted that in 1995 applications of robotics outside factories (the traditional domain of industrial robots) would account for less than 1% of total sales. Engelberger believed that this forecast was wrong, and instead he predicted that the non-industrial class of robot applications would become the largest class of robot applications. Engelberger’s prediction has yet to come to pass. However, he did correctly foresee the growth in non-traditional applications of robots.

Parts A-E of this Handbook show the great strides that robotics technology has made in the past 50 years. The technology has reached a level of maturity such that robots are now marching from the factories into field and service applications. The topics in this part cover the essentials of what is required to create robots that can operate in all environments and perform meaningful work.

This part of the Springer Handbook of Robotics describes fit-for-purpose robots and includes hardware design, control (of locomotion, manipulation, and interaction), perception, and user interfaces. The economic/social drivers for the particular applications are also discussed.

Part G — Robots and Humans

Editor  Daniela Rus

Robots and Humans covers some of the most recent advances concerned with human–robot interaction, ranging from designing biologically inspired robots, to programming and safety issues for human–robot interaction, to the ethical issues brought forth by robotics.

Our field’s future vision for technology is the leap from personal computers to personal robots, in a world where robots exist pervasively and work side by side with humans. Over the past 50 years we have made great strides in robotics, as the other parts of this Handbook show. However, there are still new capabilities that need to be developed and existing capabilities that need to be improved to create a world in which robots and humans work together.

Robot bodies should be easily integrated into our living environments. Robots should be safe to be around. Robots should take commands from human users easily. Robots should be functionally capable. Robots should engage humans to help mitigate error states and task uncertainties. Meeting these challenges will bring robots closer to our vision of pervasive robotics.

The topics in this part are essential for creating robots that operate in human-centered environments. The chapters cover human-centered and life-like robots, and include hardware design, control (of locomotion, manipulation, and interaction), perception, user interfaces, and the social and ethical implications of robotics.

Today’s approach to computation has progressed naturally from desktop computing, to mobile computing, to pervasive computing, ultimately leading to computation for interaction with the physical world. In other words, Part G of the Springer Handbook of Robotics presents a snapshot of the field’s advances toward creating machines in our own image that are smart and obedient.