Human interfaces for operating robots often take the form of gamepads, tablets, or laptops, which force the user to give up their own situation awareness to attend to the robot. Instead, we aimed to develop perceptual capabilities for robots such that they can be treated more like partners by their users, without requiring the user to sacrifice awareness of their surroundings. To this end, we worked to develop real-time person and gesture recognition capabilities that allow field robots to reliably follow users and recognize selected nonverbal commands. The resulting robot system is able to accompany and take gestural commands from humans in a variety of indoor and outdoor environments. In our usability studies, our recognition-based interfaces showed a significant improvement over teleoperation-based interfaces on a building clearing task. Further, once ported to ROS, our recognition methods were successfully applied to perform accompaniment with the iRobot PackBot, Willow Garage PR2, and other mobile robot platforms.
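As an illustration of the accompaniment behavior described above, a common scheme is to drive the robot toward a fixed standoff distance from the tracked person while turning to keep them centered. The sketch below is hypothetical (the function name, gains, and standoff distance are assumptions, not taken from our system) and shows a minimal proportional controller of this kind:

```python
import math

def follow_command(range_m, bearing_rad,
                   standoff_m=1.5, k_lin=0.8, k_ang=1.5,
                   max_lin=1.0, max_ang=1.0):
    """Map a tracked person's (range, bearing) to a (linear, angular)
    velocity command. All names and gains here are illustrative."""
    # Proportional control: close the gap to the standoff distance
    lin = k_lin * (range_m - standoff_m)
    # Turn to keep the person centered in the sensor's field of view
    ang = k_ang * bearing_rad
    # Clamp to the platform's velocity limits
    lin = max(-max_lin, min(max_lin, lin))
    ang = max(-max_ang, min(max_ang, ang))
    return lin, ang
```

In a ROS setting, the returned pair would typically be published as the linear-x and angular-z fields of a velocity command, with the person's range and bearing supplied by the stereo- or depth-camera tracker.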
M. Loper, N. Koenig, S. Chernova, O. Jenkins, and C. Jones, “Mobile Human-Robot Teaming with Environmental Tolerance,” in Human-Robot Interaction (HRI 2009), San Diego, CA, USA, 2009, pp. 157-164.
M. Marge, A. Powers, J. Brookshire, G. T. Jay, O. C. Jenkins, and C. Geyer, “Comparing Heads-up, Hands-free Operation of Ground Robots to Teleoperation,” in Proceedings of Robotics: Science and Systems, Los Angeles, CA, USA, 2011.
2010 Results video using stereo vision
2007 Results video using SR3000 depth camera