From Brown University Robotics
Revision as of 18:12, 5 December 2011 by Bthomas
http://ros.org/wiki/roboframenet gives an overview of the organization of RoboFrameNet (RFN). Read this first; it will take 10 minutes and save you lots of time after that.
For running demos: launch roboframenet_bringup/roboframenet_pr2.launch or roboframenet_bringup/roboframenet_turtlebot.launch on the robot itself, then ssh into the robot again and send commands by publishing them on the correct topic. (Note: everything runs locally on the robot.) For instance:
rostopic pub command std_msgs/String "Ping."
should just make a text message saying "Ping" appear on the terminal in which you launched RFN. More interestingly:
rostopic pub command std_msgs/String "Give me 5."
will make the PR2 give you a high five.
Note that for movement tasks ("go to X"), you need to open rviz and localize the robot first. Also, the moving demo is map-specific (it was built for Willow Garage's building), so since we're not at Willow Garage, the demo as-is won't work. You'll need to modify the location names / positions / orientations in move_base_rfn's location_to_pose function. (You can read position and orientation off rviz's map by doing a rostopic echo while, e.g., setting a 2D nav goal.)
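To make the previous paragraph concrete, here is a sketch of the kind of name-to-pose table location_to_pose maps through. The location names and all the numbers are placeholders I made up; replace them with values you read off your own map in rviz, and note this is an illustration of the idea, not move_base_rfn's actual code.

```python
# Hypothetical table mapping spoken location names to map poses.
# Every entry below is a placeholder -- fill in values read from
# rviz (rostopic echo while setting a 2D nav goal on your map).
LOCATIONS = {
    # name: ((x, y, z), (qx, qy, qz, qw))
    "kitchen": ((12.3, 4.5, 0.0), (0.0, 0.0, 0.7, 0.7)),
    "lab door": ((3.1, -2.0, 0.0), (0.0, 0.0, 0.0, 1.0)),
}

def location_to_pose(name):
    """Return (position, orientation) for a known location name, else None."""
    return LOCATIONS.get(name.lower())
```

The lookup is case-insensitive so that "Kitchen" and "kitchen" resolve to the same pose.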
Launch files, more in depth
Adding new actions
The from-scratch way to do this is to create an RFNServer. There are two examples in rfnserver/bin, namely ping.py and loop.py, which are basically skeleton files for doing a one-shot task and a repeated task, respectively.
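The shape of the one-shot pattern looks roughly like the sketch below. Every name here is hypothetical (a stand-in class, not rfnserver's real interface); the authoritative starting point is to copy ping.py or loop.py from rfnserver/bin.

```python
# Illustrative stand-in for the single-task skeleton: register a
# callback for a frame, and run it once per filled frame delivered.
# None of these names come from the actual rfnserver API.
class FakeRFNServer:
    """Toy server showing the register-a-frame, run-a-callback shape."""
    def __init__(self, node_name):
        self.node_name = node_name
        self._callback = None

    def register_frame(self, frame_name, callback):
        # In the real package this would advertise the frame to RFN.
        self._callback = callback

    def deliver(self, filled_frame):
        # Simulates RFN handing the node a filled semantic frame.
        return self._callback(filled_frame)

def on_ping(frame):
    # The actual task: here, just produce the "Ping." message.
    return "Ping."

server = FakeRFNServer("ping")
server.register_frame("ping", on_ping)
```

loop.py differs only in that the callback keeps running (or re-arms itself) rather than finishing after one invocation.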
If you're trying to tie in already-existing code, you have a few options. One option, if the code in question is an action server, is to modify that code directly. This was my initial thought on how to tie others' code in; however, I wouldn't recommend it.
Instead, in every case, you should create a small RFNServer which acts as a bridge between the pre-existing code and RFN. I've demonstrated two instances of this. First, pr2_props_rfn demonstrates a bridge for launch files. Note that the roslaunch API is still in flux, so this may or may not work anymore, and you may have to change it. Second, move_base_rfn demonstrates a bridge for action servers. If I recall correctly, it can sometimes act a little wonky if you try to call an action server twice in a row, so there may be an underlying problem in my implementation. I feel like, in both cases, there should be some way to standardize the bridges so that you don't have to wade through icky roslaunch and action server APIs, but I did not pursue that route.
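The bridge idea itself is just an adapter: translate a filled semantic frame into whatever the pre-existing code expects (a goal, a launch invocation), then hand it off. A minimal sketch, with no ROS imports so it runs anywhere; the class and field names are hypothetical illustrations of move_base_rfn's pattern, not its actual code.

```python
# Hypothetical frame-to-goal adapter: the bridge owns a send_goal
# callable (e.g. an actionlib client's send_goal in the real thing)
# and translates frame elements into the old code's goal format.
class FrameToGoalBridge:
    def __init__(self, send_goal):
        self.send_goal = send_goal

    def handle_frame(self, frame):
        # Translate the frame's fill into whatever the wrapped code
        # expects; here, a made-up dict-shaped goal.
        goal = {"target": frame["goal"]}
        return self.send_goal(goal)

# Wiring it to a fake "action server" that just records goals:
sent = []
bridge = FrameToGoalBridge(lambda g: sent.append(g) or "ok")
```

The payoff of the adapter shape is that the pre-existing code stays untouched; only the small bridge knows about RFN.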
Then, you'll want to add bindings to the speech half of the pipeline. This is accomplished through semantic frames (the concept of an action) and lexical units (the binding between natural language and semantic frames). The easiest way to do this is to copy an example. For semantic frames, look at frame_registrar/frames/*.yaml. For lexical units, go to semantic_framer/lu/*.yaml. (Note that multiple lexical units may occupy one file.)
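For a feel of the shape of these files, here is a purely illustrative pair. The keys and names below are made up (I am not reproducing the real schema), so copy an actual file from frame_registrar/frames/ or semantic_framer/lu/ rather than trusting these:

```yaml
# Hypothetical semantic frame -- every key here is illustrative only.
name: give_high_five
frame_elements:
  - recipient

# Hypothetical lexical unit binding a phrase to that frame.
lexical_unit:
  phrase: give [recipient] five
  frame: give_high_five
```

Whatever the real schema is, the division of labor is the one described above: the frame names the action and its slots, and the lexical unit maps natural-language phrasings onto that frame.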
Other things not yet mentioned
imperative_to_declarative puts the implicit "you" in front of imperative (command) sentences. The assumption that all commands are simple and imperative makes the natural-language part easier, though it limits the expressiveness available to the end user.
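Conceptually the transformation is tiny, something like the sketch below. This is only an illustration of the idea; the real node's handling of casing, punctuation, and edge cases may differ.

```python
# Minimal sketch of the imperative-to-declarative idea: make the
# implicit subject "you" explicit by prepending it to the command.
# (Illustrative only -- not the actual node's implementation.)
def imperative_to_declarative(sentence):
    # "Give me 5." -> "You give me 5."
    return "You " + sentence[0].lower() + sentence[1:]
```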
install_application.sh is a hack to install the roboframenet app on the PR2. This is NOT necessary to run RFN. It was used for Willow's demo where you can scan a PR2's QR code and gain control of it. It installs a particular RFN application on the PR2, making the robot capable of running RFN with voice command/control with nearly zero configuration on the end-user's part. (I don't know if the apps ever made it out of Willow's HQ.)
voice_command will be released with PR2 apps for Android. I don't know when this is scheduled. I had the impression that it was pretty close at the end of summer 2011, but I guess not?
If someone (probably Chad) wants you to run this and you can't figure something out, feel free to contact me! :) email@example.com is my more-or-less permanent email address. Best of luck!