A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields, including computer science, cognitive science, linguistics, and robotics.
Robotic secrets revealed, Episode 1
Author: Greg Trafton
Video ID: 129
A Naval Research Laboratory (NRL) scientist shows a magic trick to a mobile, dexterous, social robot, demonstrating the robot's use and interpretation of gestures.
The video highlights recent gesture-recognition work and NRL's novel cognitive architecture, ACT-R/E. While set within a popular game of skill, the video illustrates several Navy-relevant issues: a computational cognitive architecture that enables autonomous function and integrates perceptual information with higher-level cognitive reasoning; gesture recognition for shoulder-to-shoulder human–robot interaction; and anticipation and learning on a robotic system. Such abilities will be critical for future naval autonomous systems for persistent surveillance, tactical mobile robots, and other autonomous platforms.