A video of Lisa Miller's implementation of this assignment from Spring Semester 2009 is available.
Assigned: Sept 17, 1:50pm, 2010
Due: Sept 26, 11:59pm, 2010
Building on your Enclosure Escape assignment, you will build a controller in ROS to perform "object seeking". In this seeking task, your robot will perceive and drive to objects that are visually recognizable by a solid color appearance or by an AR tag label, using the robot's visual sensing (i.e., its camera). For this assignment, you will be working primarily with the Create platform and a Sony PlayStation Eye USB video camera. For object recognition in ROS, you will use the cmvision package for color blobfinding and the ar_recog package for AR tag recognition.
Assuming perception of objects salient by color or pattern, you will develop an object seeking package for this assignment that enables a robot to continually drive between these (non-occluded) objects in a sequence given at run-time. Your controller's decision making should take the form of a finite state machine (FSM). This FSM should use one state variable to specify the currently sought object. For motion control, you should use proportional-derivative (PD) servoing to center objects in the robot's field of view. As a whole, your controller should center the current object in the field of view, drive as close as possible to the object without hitting it, increment the state variable, and repeat the process for the next object.
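For the PD servoing term, one simple approach is to drive the robot's turn rate from the horizontal pixel error between the blob centroid and the image center. Below is a minimal sketch in Python; the gain values are hypothetical and will need tuning on your robot.

# Minimal PD servoing sketch (hypothetical gains; tune for your robot).
KP = 0.005   # proportional gain (assumed value)
KD = 0.001   # derivative gain (assumed value)

def pd_turn_rate(blob_x, image_width, prev_error, dt):
    """Return (angular velocity, current error) to center the blob in view."""
    error = blob_x - image_width / 2.0   # horizontal offset in pixels
    d_error = (error - prev_error) / dt  # error derivative
    return -(KP * error + KD * d_error), error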
Your tasks for this assignment are as follows.
The following sections provide information to guide you through completing these tasks.
#!/bin/bash
arg=$1
if [ -z "$arg" ] ; then
    echo "usage: sh run_ps3cam.sh arg where argument is the videomode (00-04 and 10-16) check comments in the file for details"
    exit
fi
modprobe -r gspca-ov534
modprobe gspca-ov534 videomode=$arg
echo 'ps3 camera driver started'
#00:640x480@15
#01:640x480@30
#02:640x480@40
#03:640x480@50
#04:640x480@60
#10:320x240@30
#11:320x240@40
#12:320x240@50
#13:320x240@60
#14:320x240@75
#15:320x240@100
#16:320x240@125
> sudo sh run_ps3cam.sh <VIDEO TYPE NUMBER>
> guvcview -d /dev/video1
> roscd gscam/bin
> rosrun gscam gscam
> rosrun image_view image_view image:=/gscam/image_raw
If successful, you should see a new window displaying the image stream from the robot's camera, as in the example below. Stop image_view with ctrl-c in the terminal before proceeding.
For color blobfinding, ROS uses the CMVision library to perform color segmentation of an image and find relatively solid colored regions (or "blobs"), as illustrated below. The cmvision package in ROS consists of two nodes: colorgui to specify (or "calibrate") colors to recognize and cmvision to find color blobs at run-time. Both of these nodes receive input from the camera by subscribing to an image topic.
The blobfinder provides a bounding box around each image region containing pixels within a specified color range. These color ranges are specified in a color calibration file, or colorfile, such as the "colors.txt" example below. cmvision colorfiles contain two sections with the following headers:
The following example "colors.txt" illustrates the format of the colorfile for colors "Red", "Green", and "Blue":
[Colors]
(255,  0,  0) 0.000000 10 Red
(  0,255,  0) 0.000000 10 Green
(  0,  0,255) 0.000000 10 Blue

[Thresholds]
( 25:164, 80:120,150:240)
( 20:220, 50:120, 40:115)
( 15:190,145:255, 40:120)
In this colorfile, the color "Red" has the integer identifier "(255,0,0)" or, in hexadecimal, "0x00FF0000", and YUV thresholds "(25:164,80:120,150:240)". These thresholds specify a range in the YUV color space: any pixel with YUV values within this range will be labeled with the given blob color. Note that YUV and RGB color coordinates are vastly different representations; refer to the Wikipedia YUV entry and the Appendix for details.
To calibrate the blobfinder, you will use colorgui to estimate YUV color ranges for objects viewed in the camera's image stream. These color ranges will then be entered into your own colorfile for use by the cmvision node. Start by running colorgui, assuming gscam is publishing images:
> rosrun cmvision colorgui image:=/gscam/image_raw
As illustrated below, a window should pop up displaying the current camera image stream.
The colorgui image window can now be used to find the YUV range for a single color of interest.
Using the colorgui image window, you can calibrate for the color of specific objects by sampling their pixel colors. Put objects of interest in the robot's view and mouse click on a pixel in the image window. This action puts the RGB value of the pixel into the left textbox and its YUV value into the right textbox. Clicking on another pixel updates the textboxes to show that pixel's RGB value and the YUV range encompassing both clicked pixels. Clicking on additional pixels expands the YUV range to span the color region of interest. Assuming your clicks represent a consistent color, you should see bounding boxes in the colorgui window representing color blobs found with the current YUV range.
Copy this YUV range to a separate text buffer temporarily or directly enter this information into your colorfile. You can restart this process to calibrate for another color by selecting "File->Reset" in the colorgui menu bar.
Once you have an appropriately calibrated colorfile, the cmvision blobfinder will be able to detect color blobs. This process can be used to color calibrate a variety of cameras both in real and simulated environments. However, your colorfile will likely work only for cameras and lighting conditions similar to those used at the time of calibration.
Once you have a calibrated colorfile, you can stop colorgui and start cmvision:
> rosrun cmvision cmvision
cmvision publishes its blob detections as messages rather than images, so you can verify that blobs are being found by echoing the blobs topic:
> rostopic echo /blobs
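To use the detections in your controller, you can subscribe to this topic from a rospy node. The following is a minimal sketch; it assumes cmvision publishes cmvision/Blobs messages on /blobs, where each blob carries a color name, centroid (x, y), and area. Check the message definition in the package if your topic or field names differ.

#!/usr/bin/env python
import rospy
from cmvision.msg import Blobs  # cmvision blob message (check your install)

def blobs_callback(msg):
    # Report the largest blob of each calibrated color in this frame.
    largest = {}
    for blob in msg.blobs:
        if blob.name not in largest or blob.area > largest[blob.name].area:
            largest[blob.name] = blob
    for name, blob in largest.items():
        rospy.loginfo("%s: centroid (%d, %d), area %d",
                      name, blob.x, blob.y, blob.area)

if __name__ == '__main__':
    rospy.init_node('blob_listener')
    rospy.Subscriber('/blobs', Blobs, blobs_callback)
    rospy.spin()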
With cmvision running, you can use teleop_twist_keyboard to move the robot around the room and assess blobfinding performance on the objects from different viewpoints. You may find that small changes in camera perspective vastly change the performance of the blobfinder; in such cases, sample pixel color values from these perspectives and adjust your YUV thresholds. Also, make sure to properly order the [Colors] and [Thresholds] sections such that the blob color entries are aligned.
> rosrun teleop_twist_keyboard teleop_twist_keyboard.py
To train ar_recog on a new AR tag pattern, run the ARToolKit pattern maker:

> cd $ROS_HOME/ar_recog/src/ARToolKit/bin
> ./mk_patt

When prompted for a camera parameter file, use camera_para.dat (the parameter file for the PS3 camera; pointers for other cameras are available). Show the camera the tag of interest; when the tag is highlighted, click the window to choose it and save the pattern as "patt.patternname" (or patt.X). Then copy the pattern file into the ar_recog binary directory:

> cp patt.patternname $ROS_HOME/ar_recog/bin

Finally, edit $ROS_HOME/ar_recog/bin/object_data to add a pattern entry (pattern name, pattern filename, width of the tag in mm, and center of the tag, usually "0.0 0.0").
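Once trained, tag detections can be consumed from your controller. The sketch below assumes ar_recog publishes Tags messages on a tags topic, with each tag carrying an id and image coordinates; verify the topic and field names against the msg definitions in the ar_recog package.

#!/usr/bin/env python
# Hedged sketch: topic and field names assumed from ar_recog's Tags/Tag
# messages; verify against the msg definitions in your ar_recog checkout.
import rospy
from ar_recog.msg import Tags

def tags_callback(msg):
    for tag in msg.tags:
        rospy.loginfo("tag %d at image position (%d, %d)", tag.id, tag.x, tag.y)

if __name__ == '__main__':
    rospy.init_node('tag_listener')
    rospy.Subscriber('tags', Tags, tags_callback)
    rospy.spin()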
Given appropriate color calibration, recognizing single solid color objects and AR tag objects should be straightforward. However, fiducials used in robot soccer to indicate specific locations on the field may have multiple solid colors. For example, the camera image in Figure XX shows two solid colors stacked vertically with similar shape dimensions. In such cases, your controller will need to include perception routines that process the output of the blobfinder to detect multicolor fiducials.
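One possible routine, sketched below, pairs blobs of the two fiducial colors and accepts a pair when the top blob sits directly above the bottom blob with similar area. The helper name and tolerance values are hypothetical and will need tuning; the blob fields (name, area, x, y, top, bottom) follow the cmvision Blob message.

# Hypothetical helper: detect a two-color fiducial (e.g., green over orange)
# in a list of cmvision blobs. Tolerances are assumed and need tuning.
def find_fiducial(blobs, top_color, bottom_color, tol=20, size_ratio=2.0):
    tops = [b for b in blobs if b.name == top_color]
    bottoms = [b for b in blobs if b.name == bottom_color]
    for t in tops:
        for b in bottoms:
            # Stacked: top blob's bottom edge near bottom blob's top edge,
            # with the centroids horizontally aligned (y grows downward).
            stacked = t.bottom <= b.top + tol and abs(t.x - b.x) < tol
            similar = 1.0 / size_ratio < float(t.area) / max(b.area, 1) < size_ratio
            if stacked and similar:
                # Return the midpoint between the two blob centroids.
                return ((t.x + b.x) / 2, (t.y + b.y) / 2)
    return None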
Given a specific ordering (e.g., via a file, a command-line argument, or rosparam), your client should drive the robot to visit each of the given objects continuously in this order. For example, the given ordering [3 1 2 4] should direct the robot to visit the green/orange fiducial, orange/green fiducial, yellow ball, pink fiducial, green/orange fiducial, etc. A finite state machine is a good choice for controlling this decision making. A proportional-derivative feedback controller combined with a form of wandering is a good choice for motion control.
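A minimal sketch of such an FSM is below; the perception and motion callbacks (seen, reached, wander, servo_toward) are hypothetical placeholders for your own routines.

# Minimal FSM sketch for object seeking. The callbacks are hypothetical
# placeholders to be replaced with your own perception and motion routines.
SEEK, APPROACH = range(2)

class ObjectSeeker(object):
    def __init__(self, ordering, seen, reached, wander, servo_toward):
        self.ordering = ordering        # e.g., [3, 1, 2, 4]
        self.index = 0                  # state variable: currently sought object
        self.state = SEEK
        self.seen, self.reached = seen, reached
        self.wander, self.servo_toward = wander, servo_toward

    def step(self):
        """Run one control cycle of the FSM."""
        target = self.ordering[self.index]
        if self.state == SEEK:
            if self.seen(target):
                self.state = APPROACH
            else:
                self.wander()           # search until the target is visible
        elif self.state == APPROACH:
            if self.reached(target):    # as close as possible without contact
                self.index = (self.index + 1) % len(self.ordering)
                self.state = SEEK
            else:
                self.servo_toward(target)  # PD servoing on the blob centroid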
You are expected to conduct at least 3 trials for 3 different object sequences with 3 different initial conditions (27 trials total). For each trial, measure the total time taken to visit each object and the number of collisions with objects, and estimate the average distance at which the robot approaches objects. All of your trials must use the same controller without modification.
Document your controller and experimental results in a written report based on the structure described in the course missive. You are welcome to experiment with additional object seeking algorithms and evaluate the relative performance of each. When completed, your report should be committed to the object_seeking/docs/username directory of your repository.
Your grade for this assignment will be determined by equal weighting of your group's implementation (50%) and your individual written report (50%). The weighted breakdown of grading factors for this assignment is as follows:
The color conversion routines used by CMVision for blobfinding are below:
#define YUV2RGB(y, u, v, r, g, b)\
  r = y + ((v*1436) >> 10);\
  g = y - ((u*352 + v*731) >> 10);\
  b = y + ((u*1814) >> 10);\
  r = r < 0 ? 0 : r;\
  g = g < 0 ? 0 : g;\
  b = b < 0 ? 0 : b;\
  r = r > 255 ? 255 : r;\
  g = g > 255 ? 255 : g;\
  b = b > 255 ? 255 : b

#define RGB2YUV(r, g, b, y, u, v)\
  y = (306*r + 601*g + 117*b) >> 10;\
  u = ((-172*r - 340*g + 512*b) >> 10) + 128;\
  v = ((512*r - 429*g - 83*b) >> 10) + 128;\
  y = y < 0 ? 0 : y;\
  u = u < 0 ? 0 : u;\
  v = v < 0 ? 0 : v;\
  y = y > 255 ? 255 : y;\
  u = u > 255 ? 255 : u;\
  v = v > 255 ? 255 : v
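If you want to script threshold checks during calibration, the same integer conversion can be mirrored outside of C. Below is a Python sketch of RGB2YUV with an example check of an RGB sample against the "Red" thresholds from the colorfile above; the sample values are arbitrary.

def rgb2yuv(r, g, b):
    """Integer RGB-to-YUV conversion mirroring CMVision's RGB2YUV macro."""
    y = (306*r + 601*g + 117*b) >> 10
    u = ((-172*r - 340*g + 512*b) >> 10) + 128
    v = ((512*r - 429*g - 83*b) >> 10) + 128
    clamp = lambda c: max(0, min(255, c))
    return clamp(y), clamp(u), clamp(v)

# Example: does a reddish RGB sample fall inside the "Red" thresholds?
y, u, v = rgb2yuv(200, 60, 90)   # yields (105, 119, 195)
in_red = 25 <= y <= 164 and 80 <= u <= 120 and 150 <= v <= 240  # True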