Chapter 27 — Micro-/Nanorobots

Bradley J. Nelson, Lixin Dong and Fumihito Arai

The field of microrobotics covers the robotic manipulation of objects with dimensions in the millimeter to micron range as well as the design and fabrication of autonomous robotic agents that fall within this size range. Nanorobotics is defined in the same way only for dimensions smaller than a micron. With the ability to position and orient objects with micron- and nanometer-scale dimensions, manipulation at each of these scales is a promising way to enable the assembly of micro- and nanosystems, including micro- and nanorobots.

This chapter overviews the state of the art of both micro- and nanorobotics, outlines scaling effects, actuation, sensing, and fabrication at these scales, and focuses on micro- and nanorobotic manipulation systems and their application in microassembly, biotechnology, and the construction and characterization of micro- and nanoelectromechanical systems (MEMS/NEMS). Materials science, biotechnology, and micro- and nanoelectronics will also benefit from advances in these areas of robotics.

High-speed magnetic microrobot actuation in a microfluidic chip by a fine V-groove surface

Author  Fumihito Arai

Video ID : 491

This video shows high-speed microrobotic actuation driven by permanent magnets in a microfluidic chip. The microrobot achieves a millinewton-level output force from a permanent magnet, micrometer-level positioning accuracy, and a drive speed of over 280 mm/s. The riblet surface, a regularly arrayed V-groove, reduces fluid friction and enables high-speed actuation. Ni- and Si-composite fabrication was employed to form the optimum riblet shape on the microrobot's surface by wet and dry etching. The evaluation experiments show that the microrobot can be actuated at a rate of up to 90 Hz, more than ten times higher than that of a microrobot without a riblet.

Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one's assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security robots perform a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator's robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security, drawing some conclusions, followed by references.

UGV Demo II: Outdoor surveillance robot

Author  Wendell Chun

Video ID : 679

The UGV / Demo II program, begun in 1992, developed and matured those navigation and automatic target-recognition technologies critical for the development of supervised, autonomous ground vehicles capable of performing military scout missions with a minimum of human oversight.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

DART: Dense articulated real-time tracking

Author  Tanner Schmidt, Richard Newcombe, Dieter Fox

Video ID : 673

This project aims to provide a unified framework for tracking arbitrary articulated models, given their geometric and kinematic structure. Our approach uses dense input data (computing an error term on every pixel) which we are able to process in real-time by leveraging the power of GPGPU programming and very efficient representation of model geometry with signed-distance functions. This approach has proven successful on a wide variety of models including human hands, human bodies, robot arms, and articulated objects.
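The dense, per-pixel error term mentioned above can be illustrated with a toy sketch. This is not the DART implementation: it assumes an analytic signed-distance function for a sphere instead of a discretized model SDF, and a single rigid pose instead of an articulated kinematic chain. The idea is the same: transform observed 3-D points into the model frame and penalize their signed distance to the model surface.

```python
import numpy as np

def sdf_sphere(points, center=np.zeros(3), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside,
    # zero exactly on the surface.
    return np.linalg.norm(points - center, axis=1) - radius

def dense_error(depth_points, pose_R, pose_t, sdf=sdf_sphere):
    # Transform observed points into the model frame and evaluate the
    # SDF; a perfect model fit puts every point on the zero level set,
    # so the summed squared distance is the dense alignment error.
    model_pts = (pose_R @ depth_points.T).T + pose_t
    d = sdf(model_pts)
    return float(np.sum(d ** 2))

# Points sampled on a unit sphere give (near-)zero error under the
# identity pose, and a growing error as the pose estimate drifts.
theta = np.linspace(0.0, np.pi, 50)
pts = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], axis=1)
err_good = dense_error(pts, np.eye(3), np.zeros(3))
err_bad = dense_error(pts, np.eye(3), np.array([0.5, 0.0, 0.0]))
```

In DART this error is minimized over the articulated pose parameters, with the SDF evaluations parallelized per pixel on the GPU; the sketch only shows why an SDF makes the per-point residual cheap to evaluate.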

Chapter 44 — Networked Robots

Dezhen Song, Ken Goldberg and Nak-Young Chong

As of 2013, almost all robots have access to computer networks that offer extensive computing, memory, and other resources that can dramatically improve performance. The underlying enabling framework is the focus of this chapter: networked robots. Networked robots trace their origin to telerobots, or remotely controlled robots. Telerobots are widely used to explore undersea terrains and outer space, to defuse bombs, and to clean up hazardous waste. Until 1994, telerobots were accessible only to trained and trusted experts through dedicated communication channels. This chapter describes relevant network technology, the history of networked robots as it evolved from teleoperation to cloud robotics, properties of networked robots, how to build a networked robot, and example systems. Later in the chapter, we focus on recent progress in cloud robotics and topics for future research.
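At its simplest, a networked robot exposes a command interface over an ordinary network channel. The sketch below is an illustrative toy, not any system from the chapter: the "robot" is a TCP server that acknowledges each motion command, and the made-up `move` command stands in for a real protocol.

```python
import socket
import threading

# Bind and listen in the main thread so the client cannot connect
# before the server is ready; port 0 lets the OS pick a free port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def handle_one_client():
    # Toy "robot" side: acknowledge each command until the client
    # closes the connection.
    conn, _ = srv.accept()
    with conn:
        while True:
            cmd = conn.recv(64)
            if not cmd:
                break
            conn.sendall(b"ack:" + cmd)

t = threading.Thread(target=handle_one_client, daemon=True)
t.start()

# Teleoperator side: send a command over the network, read the reply.
cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.sendall(b"move 1.0 0.5")
reply = cli.recv(64)
cli.close()
```

Real networked robots layer authentication, command arbitration among multiple users, and latency compensation on top of this basic request/acknowledge loop.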

Tele-actor

Author  Ken Goldberg, Dezhen Song

Video ID : 83

We describe a networked teleoperation system that enables groups of participants to collaboratively explore real-time remote environments. Participants collaborate using a spatial dynamic voting (SDV) interface which enables them to vote on a sequence of images via a network such as the internet. The SDV interface runs on each client computer and communicates with a central server which collects, displays, and analyzes time sequences of spatial votes. The results are conveyed to the “tele-actor,” a skilled human with cameras and microphones who navigates and performs actions in the remote environment.
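The aggregation step behind spatial voting can be sketched crudely as follows. This is a simplification invented for illustration, not the actual SDV algorithm (which analyzes time sequences of votes): here votes are (x, y) clicks on one image, binned on a coarse grid, with the densest cell taken as the group consensus.

```python
from collections import Counter

def tally_spatial_votes(votes, cell=50):
    # Bin (x, y) image-coordinate votes into a coarse grid and return
    # the center of the most-voted cell plus its vote count.
    counts = Counter((x // cell, y // cell) for x, y in votes)
    (cx, cy), n = counts.most_common(1)[0]
    return (cx * cell + cell // 2, cy * cell + cell // 2), n

# Three participants agree on one region; one outlier votes elsewhere.
votes = [(120, 80), (130, 90), (125, 85), (400, 300)]
center, support = tally_spatial_votes(votes)
```

A server running this kind of tally can then forward the winning region to the tele-actor as the group's navigation goal.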

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicles systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Lane tracking

Author  Alex Zelinsky

Video ID : 836

This video demonstrates robust lane tracking under variable conditions, e.g., rain and poor lighting. The system uses a particle-filter-based approach to achieve robustness.
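A particle filter of the kind used here can be sketched for a single lane-offset state. The random-walk motion model, Gaussian measurement likelihood, and all numbers below are illustrative assumptions, not the system's actual models (which track full lane geometry from images).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, motion_std=0.05, meas_std=0.2):
    # Predict: diffuse the lane-offset hypotheses with a random walk.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the Gaussian likelihood of measurement z.
    weights = weights * np.exp(-0.5 * ((particles - z) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample in proportion to weight (multinomial resampling keeps
    # the sketch short; systematic resampling is the usual choice).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-1.0, 1.0, 500)    # lane-offset hypotheses (m)
weights = np.full(500, 1.0 / 500)
for z in [0.30, 0.31, 0.29, 0.30]:         # noisy offset measurements
    particles, weights = particle_filter_step(particles, weights, z)
estimate = float(np.mean(particles))
```

The robustness to rain and poor lighting comes from the measurement model tolerating noisy or missing lane evidence: a few bad frames broaden the particle cloud without losing track.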

Chapter 28 — Force and Tactile Sensing

Mark R. Cutkosky and William Provancher

This chapter provides an overview of force and tactile sensing, with the primary emphasis placed on tactile sensing. We begin by presenting some basic considerations in choosing a tactile sensor and then review a wide variety of sensor types, including proximity, kinematic, force, dynamic, contact, skin deflection, thermal, and pressure sensors. We also review various transduction methods, appropriate for each general sensor type. We consider the information that these various types of sensors provide in terms of whether they are most useful for manipulation, surface exploration or being responsive to contacts from external agents.

Concerning the interpretation of tactile information, we describe the general problems and present two short illustrative examples. The first involves intrinsic tactile sensing, i.e., estimating contact locations and forces from force sensors. The second involves contact pressure sensing, i.e., estimating surface normal and shear stress distributions from an array of sensors in an elastic skin. We conclude with a brief discussion of the challenges that remain to be solved in packaging and manufacturing damage-tolerant tactile sensors.

The effect of twice dropping, and then gently placing, a two-gram weight on a small capacitive tactile array

Author  Mark Cutkosky

Video ID : 15

Video illustrating the effect of twice dropping, and then gently placing, a two-gram weight on a small capacitive tactile array sampled at 20 Hz. The first drop produces a large dynamic signal in comparison to the static load, but the second drop is missed, demonstrating the value of having dynamic tactile sensing.
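The sampling effect the video demonstrates can be illustrated numerically. The 20 Hz scan rate comes from the video; the impact duration, signal amplitudes, and detection thresholds below are illustrative assumptions. A millisecond-scale impact transient can fall entirely between 50 ms array scans, while a high-bandwidth dynamic channel still registers it.

```python
import numpy as np

fs_hi = 10_000              # high-bandwidth "dynamic" channel (Hz)
fs_lo = 20                  # capacitive array scan rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs_hi)

# Static load plus a 3 ms impact transient starting at t = 0.501 s,
# deliberately placed between two 50 ms array scans.
signal = np.full_like(t, 0.02)          # small static load (arbitrary units)
impact = (t >= 0.501) & (t < 0.504)
signal[impact] += 1.0

# 20 Hz scanning samples every 50 ms, so the 3 ms spike is skipped.
lo_samples = signal[:: fs_hi // fs_lo]
missed_by_array = lo_samples.max() < 0.5

# A dynamic channel detects the transient from its sharp derivative.
dynamic_event = np.abs(np.diff(signal)).max() > 0.5
```

Whether a given drop is caught by the slow scan is a matter of timing, which is exactly why the first drop in the video registers and the second does not.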

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.

Quadrupteron robot

Author  Clément Gosselin

Video ID : 52

This video demonstrates a 4-DOF, partially decoupled, SCARA-type parallel robot (Quadrupteron).

References:
1. P.L. Richard, C. Gosselin, X. Kong: Kinematic analysis and prototyping of a partially decoupled 4-DOF 3T1R parallel manipulator, ASME J. Mech. Des. 129(6), 611-616 (2007)
2. X. Kong, C. Gosselin: Forward displacement analysis of a quadratic 4-DOF 3T1R parallel manipulator: The Quadrupteron, Meccanica 46(1), 147-154 (2011)
3. C. Gosselin: Compact dynamic models for the tripteron and quadrupteron parallel manipulators, J. Syst. Control Eng. 223(I1), 1-11 (2009)

Chapter 44 — Networked Robots

Dezhen Song, Ken Goldberg and Nak-Young Chong


A multi-operator, multi-robot teleoperation system

Author  Nak Young Chong

Video ID : 84

A multi-operator, multi-robot teleoperation system for collaborative maintenance operations (video proceedings of ICRA 2001). Over the past decades, problems and notable results have been reported mainly for single-operator, single-robot (SOSR) teleoperation systems. Recently, the need for cooperation has rapidly emerged in many possible applications, such as plant maintenance, construction, and surgery, and considerable efforts have therefore been made toward the coordinated control of multi-operator, multi-robot (MOMR) teleoperation. We have developed coordinated control technologies for multi-telerobot cooperation in a common environment, remotely controlled by multiple operators physically distant from each other. To overcome the operators' delayed visual perception arising from network throughput limitations, we have suggested several coordinated control aids at the local operator site. Operators control their masters to make their telerobots cooperate with the counterpart telerobot, using a predictive simulator as well as video image feedback. This video explains the details of the testbed and investigates the use of an online predictive simulator to assist the operator in coping with time delay.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

A mini, unmanned, aerial system for remote sensing in agriculture

Author  Joao Valente, Julian Colorado, Claudio Rossi, Alex Martinez, Jaime Del Cerro, Antonio Barrientos

Video ID : 307

This video shows a mini-aerial robot employed for aerial sampling in precision agriculture (PA). Issues such as field partitioning, path planning, and robust flight control are addressed, together with experimental results collected during outdoor testing.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload, high-precision, position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Collaborative human-focused robotics for manufacturing

Author  CHARM Project Consortium

Video ID : 717

The CHARM project demonstrates methods for interacting with robotic assistants through developments in the perception, communication, control, and safe interaction technologies and techniques centered on supporting workers performing complex manufacturing tasks.