Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like robots (backbones composed of many small links) and continuum robots (continuous backbones). The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for the kinematics of both types of robot is presented, followed by a summary of the modeling of locomotion for snake-like and continuum mechanisms.

Modsnake swimming

Author  Howie Choset

Video ID : 169

The CMU Modsnake swimming in a pool.

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because unavoidable modeling errors and uncertainties may cause a rise of the contact force, ultimately leading to unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve robust and versatile behavior of a robotic system in poorly structured environments, as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established with respect to suitable task frames. This formulation is the essential premise for the synthesis of hybrid force/motion control schemes.
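For illustration only, the sketch below shows one common way such a hybrid force/motion law can be written: a diagonal selection matrix, set by the artificial constraints of the task frame, splits the axes into motion-controlled and force-controlled subspaces. The selection matrix, gains, and wrench values are assumptions made for the example, not a scheme taken from the chapter.

```python
import numpy as np

def hybrid_force_motion_command(S, v_des, f_des, f_meas, kf=0.002):
    """Illustrative hybrid force/motion law in a task frame.

    S      : 6x6 diagonal selection matrix (1 = motion-controlled axis,
             0 = force-controlled axis), set by the artificial constraints.
    v_des  : desired twist (6,) along the motion-controlled directions.
    f_des  : desired wrench (6,) along the force-controlled directions.
    f_meas : measured wrench (6,) from a wrist force/torque sensor.
    kf     : force-error-to-velocity gain (assumed tuning value).
    """
    I = np.eye(6)
    v_motion = S @ v_des                          # track motion references
    v_force = (I - S) @ (kf * (f_des - f_meas))   # regulate contact wrench
    return v_motion + v_force

# Example: peg-in-hole style task -- force control along z, motion elsewhere.
S = np.diag([1, 1, 0, 1, 1, 1])
v_cmd = hybrid_force_motion_command(S, np.zeros(6),
                                    np.array([0, 0, -5, 0, 0, 0]),
                                    np.array([0, 0, -2, 0, 0, 0]))
print(v_cmd)
```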

Robotic assembly of emergency-stop buttons

Author  Andreas Stolt, Magnus Linderoth, Anders Robertsson, Rolf Johansson

Video ID : 692

Industrial robots are usually position controlled, which requires high accuracy of the robot and the workcell. Some tasks, such as assembly, are difficult to achieve using only position sensing. This work presents a framework for robotic assembly in which a standard position-based robot program is integrated with an external controller that executes force-controlled skills. The framework is used to assemble emergency-stop buttons that were designed to be assembled by humans. This work was published in: A. Stolt, M. Linderoth, A. Robertsson, R. Johansson: Force controlled assembly of emergency stop button, Proc. Int. Conf. Robot. Autom. (ICRA), Shanghai (2011), pp. 3751–3756.

Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with help from models based on knowledge in anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signals to discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion data and models, ranging from simulated human control to humanoid robot motion synthesis.
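As a toy illustration of the first step (recovering joint motion from measured marker positions), the sketch below fits the joint angles of a planar two-link chain to a single marker position using damped least-squares inverse kinematics. The link lengths, damping factor, and marker value are assumptions made only for this example and do not come from the chapter.

```python
import numpy as np

def fk(q, l1=0.3, l2=0.25):
    """Forward kinematics of a planar two-link chain (assumed link lengths)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=0.3, l2=0.25):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def marker_to_joint_angles(p_marker, q0, iters=50, damping=1e-3):
    """Damped least-squares IK: fit joint angles to one measured marker."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        e = p_marker - fk(q)                     # marker residual
        J = jacobian(q)
        # Levenberg-Marquardt style step keeps the update well conditioned.
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ e)
        q += dq
    return q

q = marker_to_joint_angles(np.array([0.4, 0.2]), q0=[0.1, 0.1])
print("joint angles:", q)
```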

Example of optical motion-capture data converted to joint-angle data

Author  Katsu Yamane

Video ID : 762

This video shows an example of optical motion-capture data converted to the joint-angle data of a robot model.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects, or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance, in terms of human dexterity and speed, imposed by the limits of today’s telepresence and teleoperation technology, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance still present many unsolved problems.

iRobots inspecting the interior of the Fukushima power plant

Author  James P. Trevelyan

Video ID : 580

A video timestamped April 17, 2011, with English commentary.

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Lane tracking

Author  Alex Zelinsky

Video ID : 836

This video demonstrates robust lane tracking under variable conditions, e.g., rain and poor lighting. The system uses a particle-filter-based approach to achieve robustness.
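To make the particle-filter idea concrete, here is a minimal sketch of one predict/update/resample cycle over a two-dimensional lane state (lateral offset and relative heading). The state parametrization, noise levels, and measurement model are illustrative assumptions and are not taken from the system shown in the video.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z_offset, sigma_meas=0.1,
                         sigma_proc=(0.05, 0.02)):
    """One predict/update/resample cycle of a toy lane-state particle filter.

    particles : (N, 2) array of [lateral offset (m), relative heading (rad)].
    z_offset  : lateral offset extracted from the image (assumed measurement).
    Process and measurement noise levels are illustrative assumptions.
    """
    # Predict: diffuse the particles with process noise (vehicle/lane motion).
    particles = particles + rng.normal(0.0, sigma_proc, particles.shape)
    # Update: weight each particle by the likelihood of the measured offset.
    residual = z_offset - particles[:, 0]
    weights = weights * np.exp(-0.5 * (residual / sigma_meas) ** 2)
    weights /= np.sum(weights)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

N = 500
particles = rng.normal(0.0, 0.5, (N, 2))
weights = np.full(N, 1.0 / N)
particles, weights = particle_filter_step(particles, weights, z_offset=0.3)
estimate = weights @ particles   # weighted-mean lane state
print("estimated [offset, heading]:", estimate)
```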

Chapter 26 — Flying Robots

Stefan Leutenegger, Christoph Hürzeler, Amanda K. Stowers, Kostas Alexis, Markus W. Achtelik, David Lentink, Paul Y. Oh and Roland Siegwart

Unmanned aircraft systems (UASs) have drawn increasing attention recently, owing to advancements in related research, technology, and applications. While having been deployed successfully in military scenarios for decades, civil use cases have lately been tackled by the robotics research community.

This chapter overviews the core elements of this highly interdisciplinary field; the reader is guided through the design process of aerial robots for various applications starting with a qualitative characterization of different types of UAS. Design and modeling are closely related, forming a typically iterative process of drafting and analyzing the related properties. Therefore, we overview aerodynamics and dynamics, as well as their application to fixed-wing, rotary-wing, and flapping-wing UAS, including related analytical tools and practical guidelines. Respecting use-case-specific requirements and core autonomous robot demands, we finally provide guidelines on the related system integration challenges.

Flight stability in aerial redundant manipulators

Author  Christopher Korpela, Matko Orsag, Todd Danko, Bryan Kobe, Clayton McNeil, Robert Pisch, Paul Oh

Video ID : 693

Aerial manipulation tests conducted by the Drexel Autonomous Systems Lab.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
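As a minimal illustration of this linear-in-the-parameters structure, the sketch below identifies the inertia, first moment, and viscous friction of a single vertical-plane joint from simulated torque data using ordinary least squares. The trajectory, parameter values, and noise level are assumptions used only to generate example data; they are not taken from the chapter.

```python
import numpy as np

g = 9.81
rng = np.random.default_rng(1)

# Simulated joint data for one revolute joint moving in a vertical plane
# (the "true" parameters are assumptions used only to create example data).
t = np.linspace(0, 5, 500)
q, qd, qdd = np.sin(t), np.cos(t), -np.sin(t)
ZZ_true, MX_true, Fv_true = 0.12, 0.40, 0.05
tau = ZZ_true * qdd + MX_true * g * np.cos(q) + Fv_true * qd
tau += rng.normal(0, 0.01, tau.shape)           # measurement noise

# Identification model: tau = W(q, qd, qdd) @ phi, linear in the parameters.
W = np.column_stack([qdd, g * np.cos(q), qd])   # regressor matrix
phi_hat, residuals, rank, sv = np.linalg.lstsq(W, tau, rcond=None)

# The condition number of W indicates how exciting the trajectory is.
print("identified [ZZ, MX, Fv]:", phi_hat, " cond(W):", sv[0] / sv[-1])
```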

Dynamic identification of the Kuka LWR: trajectory with load

Author  Maxime Gautier

Video ID : 483

This video shows a trajectory with a known payload mass of 4.6 kg used to identify the dynamic parameters and torque-sensor gains of the KUKA LWR manipulator. Details and results are given in the papers: A. Jubien, M. Gautier, A. Janot: Dynamic identification of the Kuka LWR robot using motor torques and joint torque sensors data, Preprints 19th IFAC World Congress, Cape Town (2014), pp. 8391–8396; M. Gautier, A. Jubien: Force calibration of the Kuka LWR-like robots including embedded joint torque sensors and robot structure, IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Chicago (2014), pp. 416–421.

Chapter 70 — Human-Robot Augmentation

Massimo Bergamasco and Hugh Herr

The development of robotic systems capable of sharing with humans the load of heavy tasks has been one of the primary objectives in robotics research. At present, in order to fulfil this objective, much of the interest in the robotics community is focused on so-called wearable robots, a class of robotic systems that are worn and directly controlled by the human operator. Wearable robots, together with powered orthoses that exploit robotic components and control strategies, can also represent an immediate resource for allowing humans to restore manipulation and/or walking functionalities.

The present chapter deals with wearable robotic systems capable of providing different levels of functional and/or operational augmentation to human beings for specific functions or tasks. Prostheses, powered orthoses, and exoskeletons are described for upper-limb, lower-limb, and whole-body structures. State-of-the-art devices, together with their functionalities and main components, are presented for each class of wearable system. Critical design issues and open research aspects are reported.

Collaborative control of the Body Extender

Author  Massimo Bergamasco

Video ID : 151

The video shows the numerical and experimental validation of the collaborative control applied to the Body Extender. The control prevents the overturning of the system under the action of gravity by minimally distorting the operator's intended motion.
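Purely for illustration, the sketch below shows one simple way such a "minimal distortion" rule can be expressed: the operator's commanded centre-of-mass velocity is altered by the smallest amount that keeps the predicted centre of mass inside a box-shaped support region. The geometry, prediction model, and numbers are assumptions for the example and do not describe the Body Extender's actual controller.

```python
import numpy as np

def least_distortion_command(v_cmd, com, support_min, support_max, dt=0.01):
    """Minimally distort the operator's commanded CoM velocity so that the
    predicted centre of mass stays inside a box-shaped support region.

    The box support region and one-step CoM prediction are illustrative
    assumptions, not the Body Extender's stability model.
    """
    com_pred = com + v_cmd * dt                        # predicted CoM position
    com_safe = np.clip(com_pred, support_min, support_max)
    # Clipping the prediction gives the closest feasible point, so the
    # returned velocity is the smallest change that preserves stability.
    return (com_safe - com) / dt

v_safe = least_distortion_command(np.array([0.8, -0.3]),
                                  com=np.array([0.09, 0.0]),
                                  support_min=np.array([-0.10, -0.05]),
                                  support_max=np.array([ 0.10,  0.05]))
print("distorted command:", v_safe)
```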

Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface-water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one’s assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, a sensor payload, a communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator’s robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security, giving some conclusions, followed by references.

Security: Facial recognition

Author  Ali Mollahosseini, Mohammad Mahoor

Video ID : 553

Video of face tracking and facial-landmark-point extraction of Ali's face for a security robot.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

Calibration of ABB's IRB 120 industrial robot

Author  Ilian Bonev

Video ID : 422

The video depicts the process of geometric calibration of the 6-DOF IRB 120. The calibration is based on measurements of the position and orientation of a tool using a laser tracking system from FARO. The video shows in sequence the steps in the acquisition of the various configurations, which can then be employed using an algorithm similar to that of Sect. 6.2.
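As a hedged illustration of this kind of least-squares geometric calibration, the sketch below identifies the link lengths of a planar two-link stand-in from externally measured tool positions taken at several configurations. The geometry, noise level, and configurations are assumptions for the example and do not correspond to the IRB 120 or to the FARO measurement setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# "True" link lengths of a planar two-link stand-in for the real robot
# (illustrative values, not IRB 120 geometry).
l_true = np.array([0.35, 0.28])

def tool_xy(q, l):
    """Planar forward kinematics of the two-link stand-in."""
    return np.array([l[0] * np.cos(q[0]) + l[1] * np.cos(q[0] + q[1]),
                     l[0] * np.sin(q[0]) + l[1] * np.sin(q[0] + q[1])])

# Joint configurations visited during calibration and the tool positions an
# external tracker would measure (with a little measurement noise).
Q = rng.uniform(-1.5, 1.5, (30, 2))
P_meas = np.array([tool_xy(q, l_true) for q in Q])
P_meas += rng.normal(0, 1e-4, P_meas.shape)

# The position model is linear in the link lengths, so the calibration
# reduces to one least-squares solve over all measured configurations.
A = np.vstack([np.column_stack([np.cos(Q[:, 0]), np.cos(Q[:, 0] + Q[:, 1])]),
               np.column_stack([np.sin(Q[:, 0]), np.sin(Q[:, 0] + Q[:, 1])])])
b = np.concatenate([P_meas[:, 0], P_meas[:, 1]])
l_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("identified link lengths:", l_hat)
```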