
Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy, and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields, including computer science, cognitive science, linguistics, and robotics.
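
To make these representational requirements concrete, the sketch below outlines one possible set of data structures a cognitive HRI system might maintain for the world, the task, and the human partner. All names and fields here are hypothetical illustrations, not an API from the chapter:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class HumanModel:
    """The robot's running estimate of its human partner (illustrative)."""
    capabilities: set = field(default_factory=set)   # actions the partner can perform
    expected_action: Optional[str] = None            # the robot's prediction of the next action
    attending_to: Optional[str] = None               # current focus of the partner's attention


@dataclass
class JointActivityState:
    """Shared context for a human-robot joint activity (illustrative)."""
    world: dict = field(default_factory=dict)        # object name -> pose or status
    task_step: int = 0                               # index into a shared task plan
    partner: HumanModel = field(default_factory=HumanModel)


# Example: a handover task in which the robot believes the partner is
# attending to the cup it is about to pass.
state = JointActivityState(world={"cup": "in_gripper"})
state.partner.attending_to = "cup"
```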

Gaze and gesture cues for robots

Author: Bilge Mutlu

Video ID: 128

In human-robot communication, nonverbal cues such as gaze and gesture convey important information for initiating and maintaining interaction. Gaze, for example, can signal what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.
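
As a simplified illustration of such a model (assumed here, not taken from the video), a robot speaker might distribute its gaze among participants according to their conversational roles. The role labels and gaze-time proportions below are illustrative placeholders:

```python
import random

# Illustrative gaze-time proportions by conversational role (assumed values,
# not measurements): speakers look most at addressees, less at bystanders,
# and least at overhearers.
GAZE_WEIGHTS = {"addressee": 0.7, "bystander": 0.25, "overhearer": 0.05}


def pick_gaze_target(participants):
    """Choose whom the robot looks at next.

    participants: dict mapping participant name -> conversational role.
    """
    names = list(participants)
    weights = [GAZE_WEIGHTS.get(participants[name], 0.0) for name in names]
    return random.choices(names, weights=weights, k=1)[0]


roles = {"Ada": "addressee", "Ben": "bystander", "Cam": "overhearer"}
print(pick_gaze_target(roles))  # "Ada" most of the time
```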

Robotic secrets revealed, Episode 1

Author: Greg Trafton

Video ID: 129

A Naval Research Laboratory (NRL) scientist shows a magic trick to a mobile-dexterous-social robot, demonstrating the robot's use and interpretation of gestures. The video highlights recent gesture-recognition work and NRL's novel cognitive architecture, ACT-R/E. While set within a popular game of skill, the video illustrates several Navy-relevant capabilities: a computational cognitive architecture that enables autonomous function and integrates perceptual information with higher-level cognitive reasoning; gesture recognition for shoulder-to-shoulder human-robot interaction; and anticipation and learning on a robotic system. Such abilities will be critical for future naval autonomous systems for persistent surveillance, tactical mobile robots, and other autonomous platforms.

Robotic secrets revealed, Episode 2: The trouble begins

Author: Greg Trafton

Video ID: 130

This video demonstrates research on robot perception (including object recognition and multimodal person identification) and embodied cognition (including theory of mind, the ability to reason about what others believe). The video features two people interacting with two robots.
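
A minimal sketch of the kind of first-order theory-of-mind reasoning involved (the scenario and names below are illustrative, not taken from the video) is to track each person's beliefs separately from the true world state, updating a belief only when that person observes the change:

```python
# True state of the world, visible to the robot.
true_world = {"ball": "box_A"}

# Each person's belief starts out consistent with what they have observed.
beliefs = {"person1": {"ball": "box_A"}, "person2": {"ball": "box_A"}}


def move_object(obj, new_location, observers):
    """Update the world; only agents who witnessed the move update their beliefs."""
    true_world[obj] = new_location
    for agent in observers:
        beliefs[agent][obj] = new_location


# person2 steps away; the ball is moved while only person1 is watching.
move_object("ball", "box_B", observers=["person1"])

# The robot can now predict where each person will look for the ball.
print(beliefs["person1"]["ball"])  # box_B (saw the move)
print(beliefs["person2"]["ball"])  # box_A (holds a false belief)
```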

Human-robot jazz improvisation

Author: Guy Hoffman

Video ID: 236

The stage debut of Shimon, the robotic marimba player, featuring the world's first human-robot rendition of Duke Jordan's "Jordu" for human piano and robot marimba.

Designing robot learners that ask good questions

Author: Maya Cakmak, Andrea Thomaz

Video ID: 237

Programming new skills on a robot should take minimal time and effort. One approach to achieving this goal is to allow the robot to ask questions. This idea, called active learning, has recently attracted considerable attention in the robotics community, but it has not been explored from a human-robot interaction perspective. We identify three types of questions (label, demonstration, and feature queries) and discuss how a robot can use them while learning new skills. We then present an experiment on human question-asking that characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction, investigating how easily different types of questions are answered and whether there is a general preference for one question type over another. Based on our findings from both experiments, we provide guidelines for designing question-asking behaviors for a robot learner.
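
As a rough illustration (assumed, not the authors' implementation) of how a learner might choose among these three query types, consider the sketch below; the uncertainty measures and thresholds are hypothetical placeholders:

```python
def choose_query(label_uncertainty, skill_coverage, feature_relevance_known):
    """Pick a query type for the current learning state (illustrative heuristic).

    label_uncertainty: 0-1, confidence gap on the most ambiguous instance
    skill_coverage: 0-1, how much of the task space demonstrations cover
    feature_relevance_known: whether the robot knows which features matter
    """
    if not feature_relevance_known:
        # Feature query: "Does the cup's orientation matter for this skill?"
        return "feature_query"
    if skill_coverage < 0.5:
        # Demonstration query: "Can you show me how to do it from here?"
        return "demonstration_query"
    if label_uncertainty > 0.3:
        # Label query: "Is this a valid way to perform the skill?"
        return "label_query"
    return None  # confident enough; no question needed


print(choose_query(label_uncertainty=0.6, skill_coverage=0.8,
                   feature_relevance_known=True))  # -> "label_query"
```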

Active key-frame-based learning from demonstration

Author: Maya Cakmak, Andrea Thomaz

Video ID: 238

Simon asks different types of questions in response to demonstrations given by the teacher.