software_environment_setup – Introduction to Autonomous Robotics
This DokuWiki uses a theme created by Anymorphic Webdesign.

CS148 Robot Software Environment Details

ROS

ROS Base

First, install the base of ROS by following Willow Garage's instructions. The robots we will be using this semester have the following ROS stacks (as Ubuntu/Debian packages) installed:

ros-diamondback-brown-perception
ros-diamondback-brown-remotelab
ros-diamondback-common
ros-diamondback-common-msgs
ros-diamondback-control
ros-diamondback-diagnostics
ros-diamondback-driver-common
ros-diamondback-executive-smach
ros-diamondback-geometry
ros-diamondback-image-common
ros-diamondback-image-pipeline
ros-diamondback-image-transport-plugins
ros-diamondback-joystick-drivers
ros-diamondback-laser-pipeline
ros-diamondback-multimaster-experimental
ros-diamondback-navigation
ros-diamondback-openni-kinect
ros-diamondback-perception-pcl
ros-diamondback-pr2-common
ros-diamondback-pr2-controllers
ros-diamondback-pr2-mechanism
ros-diamondback-robot-model
ros-diamondback-ros
ros-diamondback-ros-comm
ros-diamondback-slam-gmapping
ros-diamondback-turtlebot
ros-diamondback-turtlebot-apps
ros-diamondback-turtlebot-robot
ros-diamondback-turtlebot-viz
ros-diamondback-vision-opencv

Any of these stacks can be installed using the following command (substituting "[stackname]" with the actual stack name):

> sudo apt-get install ros-diamondback-[stackname]

ROS Environment Variables

Next, set up your ROS package path to include a folder in your home directory. This will allow you to create and edit packages within this folder (and any subfolders thereof) and have them visible to ROS. (Make sure to substitute your username for [username].)

> cd ~ && mkdir ros
> echo 'source /opt/ros/diamondback/setup.bash; export ROS_PACKAGE_PATH=/home/[username]/ros:$ROS_PACKAGE_PATH' >> ~/.bashrc

(Note the single quotes: they keep $ROS_PACKAGE_PATH from being expanded at the moment the line is written to .bashrc, so it is expanded each time a new shell starts instead.)
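As a sketch of what this accomplishes: ROS_PACKAGE_PATH is a colon-separated list of directories that ROS searches in order, so prepending your own directory makes your packages visible (and lets them shadow identically named system packages). The paths below are illustrative; rospack performs the actual lookup.

```python
import os

# Illustrative sketch (not part of ROS): simulate the effect of the .bashrc line.
os.environ["ROS_PACKAGE_PATH"] = "/opt/ros/diamondback/stacks"  # example default

home_ros = "/home/username/ros"  # substitute your own username
os.environ["ROS_PACKAGE_PATH"] = home_ros + ":" + os.environ["ROS_PACKAGE_PATH"]

# Directories earlier in the list are searched first.
search_order = os.environ["ROS_PACKAGE_PATH"].split(":")
print(search_order[0])  # /home/username/ros
```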

ROS Third-Party Packages

CMVision (Color blob recognition)

Based on the CMU CMVision library, the cmvision package performs segmentation of solid colored regions (or “blobs”) in an image, reported as bounding boxes. cmvision thresholds and groups pixels in an image based on given YUV color ranges to estimate blobs. To calibrate these color ranges, the colorgui node is included within cmvision to build color ranges from selected pixels in published image topics.

Installation

Using Subversion, check out the ROS cmvision package from ros.org into your ros directory (assuming ~/ros is your ROS working directory):

> cd ~/ros
> svn co https://code.ros.org/svn/wg-ros-pkg/branches/trunk_cturtle/vision/cmvision
> rosmake cmvision  

Color Calibration

For color blobfinding, ROS uses the CMVision library to perform color segmentation of an image and find relatively solid colored regions (or “blobs”), as illustrated below. The cmvision package in ROS consists of two nodes:

  • colorgui to specify (or “calibrate”) colors to recognize, and
  • cmvision to find color blobs at run-time.

Both of these nodes receive input from the camera by subscribing to an image topic.

The blobfinder provides a bounding box around each image region containing pixels within a specified color range. These color ranges are specified in a color calibration file, or colorfile, such as the “colors.txt” example below. cmvision colorfiles contain two sections with the following headers:

  • “Colors” section: a list of identifiers for each blob color, as strings and RGB triplets, in sequential order
  • “Thresholds” section: a list of color range thresholds (in YUV space), in the same order as the corresponding colors in the “Colors” section

The following example “colors.txt” illustrates the format of the colorfile for colors “Red”, “Green”, and “Blue”:

 [[Colors]]
 (255,  0,  0) 0.000000 10 Red
 (  0,255,  0) 0.000000 10 Green
 (  0,  0,255) 0.000000 10 Blue
 
 [[Thresholds]]
 ( 25:164, 80:120,150:240)
 ( 20:220, 50:120, 40:115)
 ( 15:190,145:255, 40:120)
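The pairing of the two sections by line order can be made concrete with a small parser sketch. This is an illustration of the format as described above, not the cmvision source's own parsing code, whose exact rules may differ:

```python
# Sketch of a parser for the cmvision colorfile format: the Nth threshold line
# is matched to the Nth color line.
def parse_colorfile(text):
    colors, thresholds, section = [], [], None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[[Colors]]"):
            section = "colors"
        elif line.startswith("[[Thresholds]]"):
            section = "thresholds"
        elif section == "colors":
            # e.g. "(255,  0,  0) 0.000000 10 Red"
            rgb, rest = line.split(")", 1)
            r, g, b = [int(v) for v in rgb.lstrip("(").split(",")]
            colors.append({"name": rest.split()[-1], "rgb": (r, g, b)})
        elif section == "thresholds":
            # e.g. "( 25:164, 80:120,150:240)" -> [(25,164), (80,120), (150,240)]
            lo_hi = [tuple(int(v) for v in r.split(":"))
                     for r in line.strip("()").split(",")]
            colors[len(thresholds)]["yuv"] = lo_hi
            thresholds.append(lo_hi)
    return colors

example = """[[Colors]]
(255,  0,  0) 0.000000 10 Red
(  0,255,  0) 0.000000 10 Green
(  0,  0,255) 0.000000 10 Blue

[[Thresholds]]
( 25:164, 80:120,150:240)
( 20:220, 50:120, 40:115)
( 15:190,145:255, 40:120)
"""
parsed = parse_colorfile(example)
print(parsed[0])  # Red, with its YUV thresholds attached
```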

In this colorfile, the color “Red” has the integer identifier ”(255,0,0)” or, in hexadecimal, “0x00FF0000”, and YUV thresholds ”(25:164,80:120,150:240)”. These thresholds specify a range in the YUV color space: any pixel with YUV values within this range will be labeled with the given blob color. Note that YUV and RGB are vastly different color representations; refer to the Wikipedia YUV entry and the Appendix for details.
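The threshold test itself is a simple per-channel range check. The conversion below uses the common full-range YCbCr coefficients as an approximation; the exact YUV variant your camera driver delivers may differ, which is why ranges should come from colorgui calibration rather than from formulas. The sample pixel checked against the “Red” thresholds is hypothetical.

```python
# Illustrative RGB -> YUV conversion (full-range YCbCr/JPEG coefficients; an
# approximation -- always calibrate with colorgui rather than converting by hand).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    v = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)

def in_range(yuv, thresholds):
    """True if every YUV channel falls inside its (lo, hi) range."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(yuv, thresholds))

red_thresholds = [(25, 164), (80, 120), (150, 240)]  # from the example colorfile
print(rgb_to_yuv(255, 0, 0))                      # (76, 85, 255)
print(in_range((76, 100, 200), red_thresholds))   # True (hypothetical pixel)
```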

To calibrate the blobfinder, you will use colorgui to estimate YUV color ranges for objects viewed in the camera's image stream. These color ranges will then be entered into your own colorfile for use by the cmvision node. Assuming the turtlebot driver is running, run colorgui using the following command (substituting [imagetopic] with the actual topic the camera uses to publish images):

> rosrun cmvision colorgui image:=[imagetopic]

If you are using the Kinect, [imagetopic] will be /camera/rgb/image_color. If you are using gscam, [imagetopic] will be /gscam/image_raw. (Unfortunately, there is no single standard topic name for publishing images.)

The result should pop up a window displaying the current camera image stream, similar to running image_view. The colorgui image window can now be used to find the YUV range for a single color of interest.

Using the colorgui image window, you can calibrate for the color of specific objects by sampling their pixel colors. Put objects of interest in the robot's view. Mouse click on a pixel belonging to the object in the image window. This action should put the RGB value of the pixel into the left textbox and the YUV value into the right textbox. Clicking on another pixel will update the terminal output to show the pixel's RGB value and the YUV range encompassing both clicked pixels. Clicking on additional pixels will expand the YUV range to span the color region of interest. Assuming your clicks represent a consistent color, you should see bounding boxes in the colorgui window representing color blobs found with the current YUV range.
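The range expansion from repeated clicks amounts to widening a per-channel (min, max) interval with each sampled pixel. A minimal sketch of that bookkeeping, with hypothetical sampled YUV values:

```python
# Sketch of the range expansion performed as pixels are sampled: each clicked
# YUV value widens the per-channel (min, max) thresholds just enough to contain it.
def expand_range(thresholds, yuv):
    return [(min(lo, c), max(hi, c)) for (lo, hi), c in zip(thresholds, yuv)]

samples = [(60, 90, 180), (80, 100, 210), (70, 85, 200)]  # hypothetical clicks
y0, u0, v0 = samples[0]
thresholds = [(y0, y0), (u0, u0), (v0, v0)]  # first click: a degenerate range
for yuv in samples[1:]:
    thresholds = expand_range(thresholds, yuv)
print(thresholds)  # [(60, 80), (85, 100), (180, 210)]
```

Note that an outlier click (for example on a shadowed or specular pixel) permanently widens the range, which is why careless sampling produces over-broad calibrations.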

[Figure: color calibration output and display via colorgui]

Note: you may not want to click on all pixels of an object due to shadowing and specular (“shiny”) artifacts.

Once you have a sufficient calibration for a color, copy the YUV range shown in the colorgui textbox (or output to the terminal) to a separate text buffer temporarily or directly enter this information into your colorfile. Save this file as colors.txt on the robot. You can restart this process to calibrate for another color by selecting “File→Reset” in the colorgui menu bar.
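Emitting the colorfile from calibrated ranges can also be scripted. The sketch below writes entries in the format of the “colors.txt” example above; the color name, display RGB, and YUV ranges shown are placeholders for your own calibration results.

```python
# Sketch: emit a cmvision colorfile from (name, display_rgb, yuv_ranges) entries.
def write_colorfile(entries):
    lines = ["[[Colors]]"]
    for name, (r, g, b), _ in entries:
        lines.append("(%3d,%3d,%3d) 0.000000 10 %s" % (r, g, b, name))
    lines.append("")
    lines.append("[[Thresholds]]")
    for _, _, yuv in entries:
        # yuv is ((y_lo, y_hi), (u_lo, u_hi), (v_lo, v_hi))
        lines.append("(%3d:%3d,%3d:%3d,%3d:%3d)" % sum(yuv, ()))
    return "\n".join(lines) + "\n"

entries = [("Red", (255, 0, 0), ((25, 164), (80, 120), (150, 240)))]
text = write_colorfile(entries)
print(text)
```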

Color Recognition and Execution

Once you have an appropriately calibrated colorfile, the cmvision blobfinder will be able to detect color blobs. This process can be used to color calibrate a variety of cameras both in real and simulated environments. However, your color file will likely work only for cameras and lighting conditions similar to those used at the time of calibration.

[Figure: an example of cmvision color segmentation on objects]

  • Stop colorgui and use roslaunch to start cmvision and see the image stream with recognized blobs:
> roscd cmvision
> roslaunch cmvision.launch

cmvision.launch essentially sets the related ROS parameters and launches cmvision to use images from your camera and your color file. The code for cmvision.launch is listed below:

 <launch> 
   <!-- Location of the cmvision color file -->
   <param name="cmvision/color_file" type="string" 
          value="PATH_TO_YOUR_COLOR_FILE" />
 
   <!-- Turn debug output on or off -->
   <param name="cmvision/debug_on" type="bool" value="true"/>
 
   <!-- Turn color calibration on or off -->
   <param name="cmvision/color_cal_on" type="bool" value="false"/>
 
   <!-- Enable Mean shift filtering -->
   <param name="cmvision/mean_shift_on" type="bool" value="false"/>
 
   <!-- Spatial bandwidth: Bigger = smoother image -->
   <param name="cmvision/spatial_radius_pix" type="double" value="2.0"/>
 
   <!-- Color bandwidth: Bigger = smoother image-->
   <param name="cmvision/color_radius_pix" type="double" value="40.0"/>
 
   <node name="cmvision" pkg="cmvision" type="cmvision" args="image:=/camera/rgb/image_color" 
         output="screen" />
 </launch>

Note that the default cmvision.launch contains incorrect parameter settings. Copy the code above and modify it for your use.

  • Determine if your robot can recognize blobs while moving by running the turtlebot driver and teleop_twist_keyboard:
> roslaunch YOUR_ROBOT.launch
> rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=/turtlebot_node/cmd_vel

Disclaimer: The calibration process is not always easy and may take several iterations to get a working calibration. Remember, the real world can be particular and unforgiving. Small variations make a huge difference. So, be consistent and thorough.

ar_recog/ARToolKit (AR tag recognition)

Based on the ARToolkit augmented reality library, ar_recog recognizes augmented reality tags in an image. ar_recog publishes various information about each recognized tag, such as its corners in image space and its relative 6DOF pose in camera space.
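To build intuition for what corner and pose information gives you, here is a back-of-envelope sketch: the tag's image-space center from its four corners, and a pinhole-model distance estimate z = f · W / w from the tag's apparent width. This is not ar_recog's actual math (ARToolKit computes a calibrated pose estimate); the focal length, tag width, and corner coordinates below are made-up values.

```python
# Illustration only: center of a tag from its corners, and a pinhole-model
# distance estimate from apparent tag width.
def tag_center(corners):
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (sum(xs) / float(len(xs)), sum(ys) / float(len(ys)))

def pinhole_distance(focal_px, tag_width_mm, apparent_width_px):
    # z = f * W / w: a tag of real width W appearing w pixels wide under
    # focal length f (in pixels) is roughly z away.
    return focal_px * tag_width_mm / apparent_width_px

corners = [(300, 200), (400, 200), (400, 300), (300, 300)]  # hypothetical tag
center = tag_center(corners)               # (350.0, 250.0)
z = pinhole_distance(500.0, 80.0, 100.0)   # 400.0 mm, assuming f=500 px, 80 mm tag
```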

Installation

  • Using Subversion, check out the ar_recog package from the brown-ros-pkg repository, assuming ~/ros is your working ROS directory:
> cd ~/ros
> svn co http://brown-ros-pkg.googlecode.com/svn/trunk/experimental/ar_recog ar_recog
  • Compile the package
> roscd ar_recog
> cmake .
> rosmake ar_recog

Recognition / Execution

  • Assuming a camera is publishing image topics, start ar_recog:
> roscd ar_recog/bin
> rosrun ar_recog ar_recog image:=[imagetopic]
  • Place a print-out of one of our trained AR tags (alpha-kappa) in front of the camera. Make sure the tag is flat and viewed in its entirety by the camera.
  • Run image_view, using the ”/ar/image” topic, to view tags recognized by ar_recog:
> rosrun image_view image_view image:=/ar/image

If successful, you should see a window with green boxes drawn over the AR tags in the camera image stream.

Training New Tags

To train a new AR tag pattern with ARToolKit's mk_patt utility:

> cd $ROS_HOME/ar_recog/src/ARToolKit/bin
> ./mk_patt
camera parameter: camera_para.dat 
# show the camera the tag of interest; when the tag is highlighted, click the window to choose it,
# save pattern as "patt.patternname" (or patt.X)
> cp patt.patternname $ROS_HOME/ar_recog/bin
> vi $ROS_HOME/ar_recog/bin/object_data
# add pattern entry (patternname, patternfilename, width of tag in mm,
# center of tag, usually "0.0 0.0")

mk_patt will likely use the laptop's onboard camera instead of the PS3 cam. It is usually necessary to change mk_patt's configuration string and remake mk_patt to use a non-default camera, which is why we do not recommend that cs148 students train new tags.

software_environment_setup.txt · Last modified: 2011/09/19 09:37 by brownrobotics
CC Attribution 3.0 Unported