
Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in the expectations of the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, compliant elements. Distributed-parameter, snakelike, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that often lie behind soft robotics.

Introducing WildCat

Author  Boston Dynamics

Video ID : 458

WildCat is a four-legged robot being developed to run fast on all types of terrain. So far, WildCat has run at about 16 mph on flat terrain using bounding and galloping gaits. The video shows WildCat's best performance so far. WildCat is being developed by Boston Dynamics with funding from DARPA's M3 program. For more information about WildCat, visit our website at www.BostonDynamics.com.

Chapter 26 — Flying Robots

Stefan Leutenegger, Christoph Hürzeler, Amanda K. Stowers, Kostas Alexis, Markus W. Achtelik, David Lentink, Paul Y. Oh and Roland Siegwart

Unmanned aircraft systems (UASs) have drawn increasing attention recently, owing to advancements in related research, technology, and applications. While having been deployed successfully in military scenarios for decades, civil use cases have lately been tackled by the robotics research community.

This chapter gives an overview of the core elements of this highly interdisciplinary field; the reader is guided through the design process of aerial robots for various applications, starting with a qualitative characterization of the different types of UAS. Design and modeling are closely related, forming a typically iterative process of drafting and analyzing the related properties. We therefore cover aerodynamics and dynamics, as well as their application to fixed-wing, rotary-wing, and flapping-wing UAS, including related analytical tools and practical guidelines. Taking into account use-case-specific requirements and the core demands of autonomous robots, we finally provide guidelines on the related system-integration challenges.

UAV stabilization, mapping and obstacle avoidance using VI-Sensor

Author  Skybotix AG

Video ID : 689

The video shows UAV stabilization, mapping, and obstacle avoidance using the Skybotix-Autonomous Systems Lab VI-Sensor, running on board and in real time. The robot supports assisted teleoperation without line of sight and without the use of GPS during the ICARUS trials in Marche-en-Famenne.

Chapter 65 — Domestic Robotics

Erwin Prassler, Mario E. Munich, Paolo Pirjanian and Kazuhiro Kosuge

When the first edition of this book was published, domestic robots were spoken of as a dream that was slowly becoming reality. At that time, in 2008, we looked back on more than twenty years of research and development in domestic robotics, especially in cleaning robotics. Although everybody expected cleaning to be the killer app for domestic robotics, in the first half of these twenty years nothing big really happened. About ten years before the first edition of this book appeared, all of a sudden things started moving. Several small, but also some larger, enterprises announced that they would soon launch domestic cleaning robots. The robotics community was anxiously awaiting these first cleaning robots, and so were consumers. The big burst, however, was yet to come: the price tag of those cleaning robots was far beyond what people were willing to pay for a vacuum cleaner. It took another four years until, in 2002, a small and inexpensive device, which was not even called a cleaning robot, brought the first breakthrough: Roomba. Sales of the Roomba quickly passed the first million robots and increased rapidly. While for the first years after Roomba's release the big players remained on the sidelines, possibly to revise their own designs and, in particular, their business models and price tags, some other small players followed quickly and came out with their own products. We reported on these devices and their creators in the first edition. Since then, the momentum in the field of domestic robotics has steadily increased. Nowadays most big appliance manufacturers have domestic cleaning robots in their portfolios. We are not only seeing more and more domestic cleaning robots and lawn mowers on the market, but also new types of domestic robots: window cleaners, plant-watering robots, telepresence robots, domestic surveillance robots, and robotic sports devices. Some of these new types of domestic robots are still prototypes or concept studies. Others have already crossed the threshold to becoming commercial products.

For the second edition of this chapter, we have decided not only to enumerate the devices that have emerged and survived in the past five years, but also to take a look back at how it all began, contrasting this retrospection with the burst of progress in domestic cleaning robotics over the past five years. We will not describe and discuss in detail every single cleaning robot that has seen the light of day, but select those that are representative of the evolution of the technology as well as of the market. We will also reserve some space for new types of mobile domestic robots, which will be the success stories or failures of the next edition of this chapter. Further, we will look into nonmobile domestic robots, also called smart appliances, and examine their fate. Last but not least, we will look at the recent developments in the area of intelligent homes that surround and, at times, also control the mobile domestic robots and smart appliances described in the preceding sections.

Husqvarna Automower vs competitors

Author  Erwin Prassler

Video ID : 731

The video shows a comparison of the Husqvarna Automower with the products of competitors such as Friendly Machines, John Deere, and Honda.

Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with help from models based on knowledge in anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signals into discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion data and models, ranging from simulated human control to humanoid robot motion synthesis.

Example of muscle tensions computed from motion-capture data

Author  Katsu Yamane

Video ID : 763

This video shows an example of muscle tensions computed from motion-capture data. The muscle color changes from yellow to red as the tension increases. The blue lines represent tendons.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Playing triadic games with KASPAR

Author  Kerstin Dautenhahn

Video ID : 220

The video illustrates (with researchers taking the roles of children) the system developed by Joshua Wainer as part of his PhD research at the University of Hertfordshire. In this study, KASPAR was developed to play games fully autonomously with pairs of children with autism. The robot provides encouragement, motivation, and feedback, and 'joins in the game'. The system was evaluated in long-term studies with children with autism (J. Wainer et al. 2014). Results show that KASPAR encourages collaborative skills in children with autism.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

A perching mechanism for micro aerial vehicles

Author  Mirko Kovač, Jürg Germann, Christoph Hürzeler, Roland Y. Siegwart, Dario Floreano

Video ID : 416

This video shows a 4.6 g perching mechanism for micro aerial vehicles (MAVs), which enables them to perch on various vertical surfaces such as tree trunks and the external walls of concrete buildings. To achieve a high impact force, the needles snap forward and puncture the surface as the trigger collides with the target.

Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one's assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, some valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, the various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator's robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security and gives some conclusions, followed by references.

Detection of abandoned objects

Author  Nikos Papanikolopoulos

Video ID : 682

Automatic detection of abandoned objects is of great importance in security and surveillance applications. This project at the Univ. of Minnesota attempts to detect such objects based on several criteria. Our approach is based on a combination of short-term and long-term blob logic, and the analysis of connected components. It is robust to many disturbances that may occur in the scene, such as the presence of moving objects and occlusions.
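The short-term/long-term blob logic mentioned above can be illustrated with a small sketch. The following Python/OpenCV fragment is a simplified illustration under our own assumptions, not the University of Minnesota implementation; all names, learning rates, and thresholds are hypothetical. It keeps two background models with different adaptation rates, marks pixels that the slowly adapting model still treats as foreground while the quickly adapting model has already absorbed them into the background, and then extracts persistent regions with a connected-component analysis.

import cv2
import numpy as np

# Two MOG2 background models: the long-term model adapts slowly,
# the short-term model adapts quickly (parameters are assumptions).
long_term = cv2.createBackgroundSubtractorMOG2(history=2000, detectShadows=False)
short_term = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

persistence = None      # per-pixel count of consecutive static-foreground frames
MIN_AREA = 500          # ignore blobs smaller than this many pixels
MIN_FRAMES = 150        # a blob must stay static this many frames to be flagged

def detect_abandoned(frame):
    # Return bounding boxes of regions that the long-term model still sees as
    # foreground while the short-term model already treats them as background.
    global persistence
    fg_long = long_term.apply(frame, learningRate=0.0005) > 0
    fg_short = short_term.apply(frame, learningRate=0.02) > 0
    static = np.logical_and(fg_long, np.logical_not(fg_short)).astype(np.uint8)

    if persistence is None:
        persistence = np.zeros(static.shape, dtype=np.int32)
    persistence = np.where(static > 0, persistence + 1, 0)

    stable = (persistence >= MIN_FRAMES).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(stable)
    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= MIN_AREA:
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes

In a real system, the flagged regions would additionally be checked against the disturbances mentioned above, for instance by rejecting blobs that overlap tracked moving objects or that are only briefly occluded.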

Chapter 14 — AI Reasoning Methods for Robotics

Michael Beetz, Raja Chatila, Joachim Hertzberg and Federico Pecora

Artificial intelligence (AI) reasoning technology involving, e.g., inference, planning, and learning has a track record with a healthy number of successful applications. So can it be used as a toolbox of methods for autonomous mobile robots? Not necessarily, as reasoning on a mobile robot about its dynamic, partially known environment may differ substantially from reasoning in knowledge-based pure software systems, where most of the named successes have been registered. Moreover, current knowledge about the robot's environment cannot be given a priori, but needs to be updated from sensor data, involving challenging problems of symbol grounding and knowledge-base change. This chapter sketches the main robotics-relevant topics of symbol-based AI reasoning. Basic methods of knowledge representation and inference are described in general, covering both logic- and probability-based approaches. The chapter first gives a motivation by example, showing to what extent symbolic reasoning has the potential of helping robots perform in the first place. Then (Sect. 14.2), we sketch the landscape of representation languages available for the endeavor. After that (Sect. 14.3), we present approaches and results for several types of practical, robotics-related reasoning tasks, with an emphasis on temporal and spatial reasoning. Plan-based robot control is described in some more detail in Sect. 14.4. Section 14.5 concludes.

RoboEarth final demonstrator

Author  Gajamohan Mohanarajah

Video ID : 706

This video, made in 2014, summarizes the final demonstrator of the joint project RoboEarth – A World Wide Web for robots (http://roboearth.org/). The demonstrator includes four robots working together to help patients in a hospital. These robots used their common knowledge base and infrastructure in the following ways: (1) as a knowledge repository to share and learn from each other's experience, (2) as a communication medium to perform collaborative tasks, and (3) as a computational resource to offload some of their heavy computational load.

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Bayesian Embedded Perception in Inria/Toyota instrumented platform

Author  Christian Laugier, E-Motion Team

Video ID : 566

This video illustrates the concept of “Embedded Bayesian Perception”, which has been developed by Inria and implemented on the Inria/Toyota experimental Lexus vehicle. The objective is to improve the robustness of the vehicle's on-board perception system by appropriately fusing the data provided by several heterogeneous sensors. The system has been developed as a key component of an electronic co-pilot designed to detect dangerous driving situations a few seconds ahead. The approach relies on the concept of the “Bayesian Occupancy Filter” developed by the Inria E-Motion Team. More technical details can be found in [62.25].
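As a rough illustration of the Bayesian fusion idea only (not of the full Bayesian Occupancy Filter of [62.25], which additionally estimates cell velocities and propagates the grid over time), the following Python sketch fuses per-cell occupancy probabilities from two heterogeneous sensors in log-odds form; the grid size, the probability values, and all names are our own assumptions.

import numpy as np

GRID = (200, 200)                  # grid dimensions (assumed)
log_odds = np.zeros(GRID)          # prior P(occupied) = 0.5 in every cell

def to_log_odds(p):
    return np.log(p / (1.0 - p))

def fuse(inverse_model):
    # Add one sensor's per-cell inverse model P(occupied | measurement)
    # to the grid in log-odds form; the prior log-odds of 0.5 is zero.
    global log_odds
    log_odds = log_odds + to_log_odds(inverse_model)

# Two heterogeneous sensors report per-cell occupancy probabilities,
# e.g. one lidar-based and one stereo-vision-based model (made up here).
lidar_model = np.full(GRID, 0.5)
lidar_model[100:110, 80:90] = 0.9
stereo_model = np.full(GRID, 0.5)
stereo_model[100:112, 80:88] = 0.7

fuse(lidar_model)
fuse(stereo_model)

posterior = 1.0 / (1.0 + np.exp(-log_odds))   # back to probabilities
print("maximum fused occupancy probability:", posterior.max())

Cells confirmed by both sensors end up with a higher fused occupancy probability than either sensor reports alone, which is the effect the co-pilot exploits to make the perception system more robust.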

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Mental-state inference to support human-robot collaboration

Author  Cynthia Breazeal

Video ID : 563

In this video, the Leonardo robot infers mental states from the observable behavior of two human collaborators in order to assist them in achieving their respective goals. The robot engages in a simulation-theory-inspired approach to make these inferences and to plan the appropriate actions to achieve the task goals. Each person wants a different food item (chips or cookies), locked in one of two larger boxes. The robot can operate a remote-control interface to open two smaller boxes, one containing chips and the other cookies. The task is inspired by the Sally-Anne false-belief task, where the humans have diverging beliefs caused by a manipulation witnessed by only one of the participants. The robot must keep track of its own beliefs, infer the beliefs and respective goals of the human collaborators, and use these inferences to offer the correct assistance.
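A toy sketch can make this belief bookkeeping explicit. In the simplified Python fragment below (illustrative only; the names, agents, and data structures are our assumptions, not Leonardo's actual architecture), the ground-truth world state and each agent's beliefs are stored separately, a belief is updated only when that agent witnesses the event, and assistance is planned by comparing the human's believed item location with the robot's own knowledge.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)   # item -> box the agent believes it is in

world = {}                 # ground truth: item -> box
robot = Agent("robot")
humans = [Agent("Anne"), Agent("Sally")]

def place(item, box, witnesses):
    # Move an item; the robot sees everything, but only the listed witnesses
    # update their beliefs, so human beliefs can diverge from reality.
    world[item] = box
    robot.beliefs[item] = box
    for h in witnesses:
        h.beliefs[item] = box

place("chips", "box_A", humans)        # both humans see the initial placement
place("chips", "box_B", [humans[0]])   # only Anne witnesses the swap

def assist(human, goal_item):
    # Plan assistance from the human's believed location versus the robot's
    # own knowledge, flagging a false belief when the two diverge.
    believed = human.beliefs.get(goal_item)
    actual = world.get(goal_item)
    if believed != actual:
        print(f"{human.name} will look in {believed}; opening {actual} instead.")
    else:
        print(f"Opening {actual} for {human.name}.")

assist(humans[1], "chips")   # Sally holds a false belief about the chips
assist(humans[0], "chips")   # Anne's belief matches reality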