CS148 Assignment Object Seeking


Introduction

Reach out and touch something.


In this assignment, you will develop a ROS package for perceiving objects labeled with a solid color or an augmented reality (AR) tag and controlling a robot to move between these objects.

A video of Lisa Miller's implementation of this assignment from Spring Semester 2009.

Important Dates

Assigned: Sept XX, 2010

Due: Sept XX, 2010

Description

Building on your Enclosure Escape assignment, you will build a controller in ROS to perform "object seeking". In this seeking task, your robot will use its visual sensing (i.e., camera) to perceive and drive to objects that are recognizable by a solid color appearance or an AR tag label. For this assignment, you will be working primarily with the Create platform and a Sony PlayStation Eye USB video camera. For object recognition in ROS, you will use the cmvision package for color blobfinding and the ar_recog package for AR tag recognition.

Assuming perception of objects salient by color or pattern, you will develop an object seeking package for this assignment that enables a robot to continually drive between these (non-occluded) objects in a sequence given at run-time. Your controller's decision making should take the form of a finite state machine (FSM). This FSM should use one state variable to specify the currently sought object. For motion control, you should use proportional-derivative (PD) servoing to center objects in the robot's field of view. As a whole, your controller should put the current object in the center of view, drive as close as possible to an object without hitting it, increment the state variable, and continue the process for the next object.
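As a rough illustration of the servoing term (a minimal sketch, not a required implementation), a PD controller on the horizontal pixel error between the image center and the object's centroid might look like the following; the gains and image width are placeholder assumptions to tune on the robot:

# Minimal PD servoing sketch: turn toward a blob by driving the
# horizontal pixel error between the image center and the blob
# centroid to zero. KP, KD, and IMAGE_WIDTH are placeholders.
KP = 0.002    # proportional gain (rad/s per pixel of error)
KD = 0.0005   # derivative gain (rad/s per pixel change per step)
IMAGE_WIDTH = 640

prev_error = 0.0

def servo_angular_velocity(blob_x):
    global prev_error
    error = (IMAGE_WIDTH / 2.0) - blob_x   # > 0 when the object is left of center
    d_error = error - prev_error           # discrete derivative per control step
    prev_error = error
    return KP * error + KD * d_error       # angular velocity command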

ROS support packages

gscam (brown-ros-pkg)
Based on the GStreamer multimedia framework, the gscam package provides direct access to the robot's camera image and related parameters. It supports a variety of cameras, although it is not as intuitive as other ROS camera interfaces. While laser rangefinders provide more accurate information, cameras are often more practical in terms of their cost, weight, size, and sampling frequency. Cameras have the additional benefit of sensing color, which you will leverage in this assignment.
image_view (ros.org)
A simple viewer for subscribing to and displaying ROS image topics. Includes a specialized viewer for stereo + disparity images.
cmvision (ros.org)
You will use the cmvision blobfinder node, which performs color segmentation and provides a set of bounding boxes around approximately solid-color image regions. The blobfinder examines the images from the camera and groups like-colored pixels of a particular color into regions, or "blobs", and returns information about detected blobs and their bounding boxes in the image. The specification for the blobfinder can be found on the ROS wiki (http://www.ros.org/wiki/cmvision).
ar_recog (brown-ros-pkg)
Based on the ARToolKit augmented reality library, ar_recog recognizes AR tags, their corner locations in the image, and their 6DOF pose relative to the camera.

Assignment Instructions

Your tasks for this assignment are as follows.

  1. View image topics from the camera
  2. Color calibration: Generate a color calibration file for cmvision to recognize the solid color objects (balls and multi-color fiducials)
  3. AR pattern training: train patterns for your AR tags
  4. Drive to a single object: write a ROS controller to seek out, find, and drive towards a single object.
  5. Drive to sequence of objects: extend the controller to continually drive between a set of objects in a sequential order
  6. Experiments, Reporting, and Submission

The following sections provide information to guide you through completing these tasks.

Running gscam and image_view

  • Run roscore
> roscore
  • Configure the camera's video settings (you only need to do this after you plug in the camera, not every time you want to run something that uses the camera).
> sudo sh run_ps3cam.sh <VIDEO TYPE NUMBER>
Use 04 for <VIDEO TYPE NUMBER>.
  • Run guvcview and make sure that whitebalance and autogain are turned off. If you mounted the camera upside down, you can use this interface to flip the camera's feed vertically.
> guvcview -d /dev/video1
guvcview.
  • Run gscam to start the camera driver
> roscd gscam/bin
> rosrun gscam gscam
  • To view the camera's image stream, start the image_view node with this command:
> rosrun image_view image_view image:=/gscam/image_raw

If successful, you should see a new window emerge displaying the image stream from the robot's camera, as in the example below. Stop image_view with ctrl-c in the terminal before proceeding.

An example of ros image_view.
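If you want to confirm image delivery from your own code rather than through image_view, a minimal rospy subscriber along these lines should work; the topic name matches the image_view command above, and the 'object_seeking' package name in load_manifest is an assumption taken from the later instructions:

#!/usr/bin/env python
# Minimal sketch: subscribe to the gscam image topic and log frame info.
# sensor_msgs/Image is the standard ROS image message type.
import roslib; roslib.load_manifest('object_seeking')  # assumed package name
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    rospy.loginfo("image: %dx%d, encoding %s", msg.width, msg.height, msg.encoding)

if __name__ == '__main__':
    rospy.init_node('image_listener')
    rospy.Subscriber('/gscam/image_raw', Image, on_image)
    rospy.spin()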

Color Calibration

For color blobfinding, ROS uses the CMVision library to perform color segmentation of an image and find relatively solid colored regions (or "blobs"), as illustrated below. The cmvision package in ROS consists of two nodes: colorgui to specify (or "calibrate") colors to recognize and cmvision to find color blobs at run-time. Both of these nodes receive input from the camera by subscribing to an image topic.

An example of CMVision performing color segmentation on a robot soccer field (from the CMVision website).

The blobfinder provides a bounding box around each image region containing pixels within a specified color range. These color ranges are specified in a color calibration file, or colorfile, such as in the "colors.txt" example below. cmvision colorfiles contain two sections with the following headers:

  • "[Colors]" section: a list identifiers for each blob color, as strings and RGB triplets, in sequential order
  • "[Thresholds]" section: a list of color range thresholds (in YUV space) sequence to match each blob color in the "[Colors]" section

The following example "colors.txt" illustrates the format of the colorfile for the colors "Red", "Green", and "Blue":

[Colors]
(255,  0,  0) 0.000000 10 Red
(  0,255,  0) 0.000000 10 Green
(  0,  0,255) 0.000000 10 Blue

[Thresholds]
( 25:164, 80:120,150:240)
( 20:220, 50:120, 40:115)
( 15:190,145:255, 40:120)


In this colorfile, the color "Red" has the integer identifier "(255,0,0)" or, in hexadecimal, "0x00FF0000", and YUV thresholds "(25:164,80:120,150:240)". These thresholds are specified as a range in the YUV color space. Specifically, any pixel with YUV values within this range will be labeled with the given blob color. Note that YUV and RGB color coordinates are vastly different representations; refer to the Wikipedia YUV entry and the Appendix for details.

An example of using CMVision with the playercam utility to color calibrate and perform blobfinding. (Top left) Three objects are placed in the robot's field of view, and playercam is used to extract YUV values and thresholds for each object color. (Other 5 images) Results from blobfinding with the resulting color calibration file from different robot locations and while pushing the yellow ball.


To calibrate the blobfinder, you will use colorgui to estimate YUV color ranges for objects viewed in the camera's image stream. These color ranges will then be entered into your own colorfile for use by the cmvision node. Start by running colorgui, assuming gscam is publishing images:

> rosrun cmvision color_gui image:=/gscam/image_raw

As illustrated below, a window should pop up displaying the current camera image stream.

The colorgui image window can now be used to find the YUV range for a single color of interest.

Using the colorgui image window, you can calibrate for the color of specific objects by sampling their pixel colors. Put objects of interest in the robot's view and mouse-click on a pixel in the image window. This action should put the RGB value of the pixel into the left textbox and its YUV value into the right textbox. Clicking on another pixel will update the textboxes to show that pixel's RGB value and the YUV range encompassing both clicked pixels. Clicking on additional pixels will expand the YUV range to span the color region of interest. Assuming your clicks represent a consistent color, you should see bounding boxes in the colorgui window representing color blobs found with the current YUV range.

  • Note: you may not want to click on all pixels of an object due to shadowing and specular ("shiny") artifacts.


Copy this YUV range to a separate text buffer temporarily or directly enter this information into your colorfile. You can restart this process to calibrate for another color by selecting "File->Reset" in the colorgui menu bar.
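If you prefer scripting over copying ranges by hand, a tiny helper like the following sketch turns sampled YUV clicks into a ready-to-paste [Thresholds] line; the sample triplets are placeholders to replace with your own readings:

# Sketch: compute a colorfile threshold line from sampled YUV values.
# The sample triplets below are placeholders; substitute your clicks.
samples = [(60, 90, 180), (75, 95, 200), (55, 85, 170)]  # (Y, U, V)

ys, us, vs = zip(*samples)
print "(%3d:%3d,%3d:%3d,%3d:%3d)" % (min(ys), max(ys),
                                     min(us), max(us),
                                     min(vs), max(vs))
# prints ( 55: 75, 85: 95,170:200) -- paste under [Thresholds]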

Once you have an appropriately calibrated colorfile, the cmvision blobfinder will be able to detect color blobs. This process can be used to color calibrate a variety of cameras both in real and simulated environments. However, your colorfile will likely work only for cameras and lighting conditions similar to those used at the time of calibration.

Once you have a calibrated colorfile, you can stop colorgui and start cmvision:

> rosrun cmvision cmvision

You should now be able to see the blobs detected by cmvision by running image_view subscribed to the XX topic:

> rosrun image_view image_view image:=/cmvision (???)

Run cmvision alongside teleop_twist_keyboard to assess blobfinding performance: drive the robot around the room and perform blobfinding of the objects from different viewpoints. You may find that small changes in camera perspective vastly change the performance of the blobfinder; in such cases, sample pixel color values from these perspectives and adjust your YUV thresholds accordingly. Also, make sure to properly order the [Colors] and [Thresholds] sections such that the blob color entries are aligned.

> rosrun teleop_twist_keyboard teleop_twist_keyboard.py
  • Disclaimer: The calibration process is not always easy and may take several iterations to get a working calibration. Remember, the real world can be particular and unforgiving. Small variations make a huge difference. So, be consistent and thorough.
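To sanity-check blob output programmatically, a subscriber sketch like the following can log detections. The /blobs topic name and the Blobs message fields used here are assumptions about the cmvision package; verify them with rostopic list and rosmsg show before relying on them:

# Sketch: log blobs reported by cmvision. The topic name and message
# fields (name, x, y, area) are assumptions -- verify with rosmsg.
import roslib; roslib.load_manifest('object_seeking')  # assumed package name
import rospy
from cmvision.msg import Blobs

def on_blobs(msg):
    for blob in msg.blobs:
        rospy.loginfo("%s blob at (%d, %d), area %d",
                      blob.name, blob.x, blob.y, blob.area)

if __name__ == '__main__':
    rospy.init_node('blob_listener')
    rospy.Subscriber('/blobs', Blobs, on_blobs)
    rospy.spin()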

AR Tag Training

  • tjay's new AR training

http://brown-robotics.org/index.php?title=Making_New_ARTags

The following steps outline AR pattern training with ARToolKit's mk_patt utility (see the link above for an updated walkthrough):

> cd $ROS_HOME/ar_recog/src/ARToolKit/bin
> ./mk_patt

  • camera parameter: camera_para.dat (the parameter file for the PS3 camera; see the ar_recog documentation for pointers to files for other cameras)
  • show the camera the tag of interest; when the tag is highlighted, click the window to choose it, and save the pattern as "patt.patternname" (or patt.X)

> cp patt.patternname $ROS_HOME/ar_recog/bin

  • edit $ROS_HOME/ar_recog/bin/object_data and add a pattern entry (patternname, patternfilename, width of the tag in mm, center of the tag, usually "0.0 0.0")
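For illustration only, an object_data entry for a tag named "alpha" might look like the following. The field order is taken from the entry description above, following the ARToolKit convention of a pattern count followed by per-pattern entries; the 80.0 mm width is a placeholder to replace with your printed tag's actual size:

#the number of patterns to be recognized
1

#pattern 1
alpha
patt.alpha
80.0
0.0 0.0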


Seeking Individual Objects

  • Create "object_seeking" package with dependencies XXX
  • Within this package, create "nodes/object_seeking.py" (or similar file in the client library of your choice)
  • object_seeking.py should subscribe to XX and publish XX topics
  • write code for object_seeking.py to distinguish and move to the following objects, using the following integer order identifiers (note: these do not necessarily need to be identifiers in the colorfile):
  1. Yellow ball
  2. Green over orange fiducial
  3. Orange over green fiducial
  4. Pink fiducial
  5. Alpha AR tag
  6. ...

Given appropriate color calibration, recognizing single solid color and AR tag objects should be straightforward. However, fiducials used in robot soccer to indicate specific locations on the field may have multiple solid colors. For example, the camera image in Figure XX has two solid colors stacked in a vertical order with similar shape dimensions. In such cases, your controller will need to include perception routines specifically to process the output of the blobfinder for multicolor fiducials, as in the sketch below.
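One possible approach, sketched under the assumption that blobs carry name and bounding-box fields (x, left, right, top, bottom, with y increasing downward in image coordinates): accept a green/orange fiducial only when a green blob sits directly above an orange blob of similar width. All pixel thresholds are illustrative placeholders:

# Sketch: match a "green over orange" fiducial from blobfinder output.
# Blob field names and pixel thresholds are illustrative assumptions.
def find_green_over_orange(blobs):
    greens = [b for b in blobs if b.name == 'Green']
    oranges = [b for b in blobs if b.name == 'Orange']
    for g in greens:
        for o in oranges:
            aligned = abs(g.x - o.x) < 20                   # centroids line up
            stacked = 0 <= (o.top - g.bottom) < 20          # green just above orange
            similar = abs((g.right - g.left) - (o.right - o.left)) < 30
            if aligned and stacked and similar:
                return (g, o)   # matched fiducial pair
    return None                 # fiducial not in view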

Seeking a Sequence of Objects

Given a specific ordering (via file or command line; rosparam??), your client should drive the robot to visit each of the given objects continuously in this order. For example, the given ordering [3 1 2 4] should direct the robot to visit the orange/green fiducial, the yellow ball, the green/orange fiducial, the pink fiducial, the orange/green fiducial again, and so on. A finite state machine is a good choice for controlling this decision making, and a proportional-derivative feedback controller with a form of wandering is a good choice for motion control; one possible shape for this logic is sketched below.
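The sketch below keeps a single state variable indexing the run-time ordering and reuses the PD servo sketched in the Description section. It is not a required structure, and find_object, at_object, wander, stop, and drive are hypothetical helpers you would implement yourself:

# FSM sketch for sequence seeking. The helper functions named here
# (find_object, at_object, wander, stop, drive) are hypothetical.
ordering = [3, 1, 2, 4]   # run-time object ordering, e.g. from a file
state = 0                 # index of the currently sought object

def control_step(blobs):
    global state
    target = find_object(ordering[state], blobs)   # perceive current object
    if target is None:
        return wander()                            # not in view: search
    if at_object(target):                          # close enough: visited
        state = (state + 1) % len(ordering)        # advance to next object
        return stop()
    # otherwise: PD-servo the object to image center while driving forward
    return drive(forward=0.2, angular=servo_angular_velocity(target.x))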

Experiments, Reporting, and Submission


Conduct at least 3 trials from 3 different start locations in 3 different environments. Measure the total time and the number of collisions. All of your trials must use the same controller without modification.

Document your controller and experimental results in a written report. You are welcome to experiment with various object seeking algorithms and evaluate the relative performance of each. The details of how this report is graded are discussed in the following section. Example reports are provided in the "Documents" section of the course website (http://www.cs.brown.edu/courses/cs148/). When finished, your report should be committed to the object_seeking/docs directory of the repository that you checked out.

Grading

Your grade for this assignment will be determined by equal weighting of your group's implementation (50%) and your individual written report (50%). The weighted breakdown of grading factors for this assignment is as follows:

Project Implementation

Color calibration 20%
Is your color calibration file suitable for finding objects in the Roomba Lab?
What are the restrictions (camera pose, lighting, etc.) on your color blobfinding?
Seeking a single object 15%
Does your robot find, center on, and drive to non-occluded objects?
How close does your robot get to sought objects?
Transitioning between objects 8%
Does your robot properly switch to other objects after visiting an object?
Controller Robustness 7%
Does your controller run without interruption?

Written Report

Introduction and Problem Statement 7%
What is your problem?
Why is it interesting?
Approach and Methods 15%
What is your approach to the problem?
How did you implement your approach and algorithms?
Could someone reproduce your algorithms?
Experiments and Results 20%
How did you validate your methods?
Describe your variables, controls, and specific tests.
Could someone reproduce your results?
Conclusion and Discussion 8%
What conclusions can be reached about your problem and approach?
What are the strengths of your approach?
What are the shortcomings of your approach?

Appendix: RGB-YUV Conversions

The color conversion routines used by CMVision for blobfinding are below:

/* Convert YUV to RGB using fixed-point arithmetic (coefficients scaled
   by 1024, shifted right by 10), clamping each channel to [0, 255]. */
#define YUV2RGB(y, u, v, r, g, b)\
  r = y + ((v*1436) >>10);\
  g = y - ((u*352 + v*731) >> 10);\
  b = y + ((u*1814) >> 10);\
  r = r < 0 ? 0 : r;\
  g = g < 0 ? 0 : g;\
  b = b < 0 ? 0 : b;\
  r = r > 255 ? 255 : r;\
  g = g > 255 ? 255 : g;\
  b = b > 255 ? 255 : b

/* Convert RGB to YUV with the same fixed-point scheme; U and V are
   offset by 128 and all channels are clamped to [0, 255]. */
#define RGB2YUV(r, g, b, y, u, v)\
  y = (306*r + 601*g + 117*b)  >> 10;\
  u = ((-172*r - 340*g + 512*b) >> 10)  + 128;\
  v = ((512*r - 429*g - 83*b) >> 10) + 128;\
  y = y < 0 ? 0 : y;\
  u = u < 0 ? 0 : u;\
  v = v < 0 ? 0 : v;\
  y = y > 255 ? 255 : y;\
  u = u > 255 ? 255 : u;\
  v = v > 255 ? 255 : v
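
If you script your calibration in Python, the following is a direct transcription of the RGB2YUV macro above (same fixed-point shifts and clamping), useful for cross-checking sampled values:

# Python transcription of the RGB2YUV macro above: fixed-point (>>10)
# arithmetic with U/V offset by 128 and all channels clamped to [0,255].
def rgb2yuv(r, g, b):
    y = (306 * r + 601 * g + 117 * b) >> 10
    u = ((-172 * r - 340 * g + 512 * b) >> 10) + 128
    v = ((512 * r - 429 * g - 83 * b) >> 10) + 128
    clamp = lambda x: max(0, min(255, x))
    return clamp(y), clamp(u), clamp(v)

print rgb2yuv(255, 0, 0)   # the "Red" colorfile entry -> (76, 85, 255)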