CS148 Assignment Localization

From Brown University Robotics

Introduction

CS 148 giveth, and CS 148 taketh away.

Evil maposaurus, prepare to meet your doom.

In the previous assignments, we have built up to a working robot soccer player. You have developed an autonomous robot controller for 1-on-1 soccer using Player proxies (or ROS nodes) for position control, bumper, and color blobfinding as well as our in-house overhead localization system. Now it is time to remove the training wheels, specifically external top-down localization (this assignment) and internal state variables (Subsumption assignment).

In the current assignment, you will implement a robot soccer controller that uses only the robot's on-board sensing and perception. Specifically, you are only allowed to use the Create's sensing (odometry, bumper, etc.) and vision functions for blobfinding and AR tag recognition. With this perceptual information, you are tasked with implementing a localization system to determine your robot's pose on the pitch. The pitch has been augmented with color fiducials and AR tags at locations on a map that will be given to you, to facilitate the localization routines you develop. This map will also assume fixed locations and dimensions for the goals. These fiducials should be detectable using your object recognition code from the Path Planning assignment. Pose estimates from your localization, along with desired poses, will be given to your working path planning routines.

Don't hate the monster, hate the map

Important Dates

Assigned: November 10, 2010

Range and bearing milestone due: Wednesday, November 17, 2010

Spin-and-localize milestone due: Monday, November 29, 2010

Inter-group soccer competition: Monday, December 6, 2010

Project reports due: Wednesday, December 8, 2010 (11:59 pm)

Robot Localization

For this project, your client will continually estimate the current pose of your robot to enable path planning. Your localization system must be able to estimate the probability of the robot being in a certain pose X = (x,y,\theta) given the current perception values Z (from odometry, object recognition, and the bumper) and a known map. The map will contain the locations of relevant objects, namely goals and fiducials. Your existing path planner will use this pose estimate to generate a path on the field to traverse.

In class, we have covered several different localization algorithms based on the Bayes filter:

p(X_{t}|Z_{t}) \propto p(Z_{t}|X_{t}) \sum_{X_{t-1}} p(X_{t}|X_{t-1}) p(X_{t-1}|Z_{t-1})

which, roughly stated, generates the robot's new location belief at time t, or posterior p(X_{t}|Z_{t}), by using its old belief at time t-1, or prior p(X_{t-1}|Z_{t-1}), to predict a new belief via the dynamics p(X_{t}|X_{t-1}), which is then matched against reality using the likelihood p(Z_{t}|X_{t}). Several algorithms can be used to perform filtering for localization, although we will only cover the particle filter in depth during lecture (a minimal sketch of its update loop follows the list below):

  • Filtering with grid-based discretization
  • Kalman filtering: Markovian linear dynamics with parametric Gaussian-distributed unimodal noise
  • Particle filtering: probabilistic Markovian dynamics with nonparametric noise distributions and importance sampling
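
To make the particle filter concrete, here is a minimal sketch of one predict-weight-resample cycle in Python. The sample_motion_model and measurement_likelihood arguments are hypothetical, user-supplied functions (partial sketches appear later on this page); this illustrates the structure of the loop, not a complete implementation.

import random

def particle_filter_step(particles, odom, observations,
                         sample_motion_model, measurement_likelihood):
    # particles: list of (x, y, theta) pose hypotheses
    # odom: odometry reading accumulated since the last update
    # observations: current blob / AR tag / bumper readings
    # Predict: push each particle through the noisy dynamics p(X_t|X_{t-1}).
    predicted = [sample_motion_model(p, odom) for p in particles]
    # Weight: score each hypothesis with the likelihood p(Z_t|X_t).
    weights = [measurement_likelihood(observations, p) for p in predicted]
    # Normalize, guarding against the degenerate all-zero case.
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: importance sampling draws hypotheses in proportion
    # to their weights (random.choices requires Python 3.6+).
    return random.choices(predicted, weights=weights, k=len(particles))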

However, you will find that defining proper likelihood and dynamics terms is not explicitly covered by the algorithms. Specifically, the filter dynamics will use odometric information about the robot's pose, given by the Create's odometry, to predict a new belief forward in time. Odometry is published by irobot_create_2_1 in two ways: 1) distance and angle values in the sensorPacket topic that are directly reported from the Open Interface, and 2) odom topics using quaternions. Your likelihood function will update the belief by evaluating the plausibility of perceiving information from the AR tag, blobfinder, and bumper proxies given a hypothesis of a particular robot pose. You should spend some time and careful consideration defining these terms.
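
As one illustration of a dynamics term, the sketch below samples a successor pose from p(X_{t}|X_{t-1}) using the distance and angle values reported through sensorPacket. The noise scales are illustrative guesses, not measured values; calibrate them against your own robot's odometric drift.

import math
import random

def sample_motion_model(pose, odom):
    # pose: (x, y, theta) in field coordinates
    # odom: (distance, angle) reported by the Open Interface since
    #       the last update (assumed here to be in cm and radians)
    x, y, theta = pose
    distance, angle = odom
    # Perturb the reported motion to model wheel slip and encoder error;
    # the 0.05/0.1/0.01 constants are assumed values to be tuned.
    d = distance + random.gauss(0.0, 0.05 * abs(distance) + 0.1)
    a = angle + random.gauss(0.0, 0.05 * abs(angle) + 0.01)
    theta += a
    # Advance along the perturbed heading by the perturbed distance.
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)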


Additionally, you will need to consider how to extract a single localization decision when the posterior distribution is multi-modal. As discussed in class, maximum a posteriori (maximum), expectation (mean), and robust mean are options for extracting such pose estimates; the latter two are sketched below.
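
For instance, the mean and robust mean can be computed from a weighted particle set as in the sketch below. The 20 cm neighborhood radius is an assumed value, and the weights are assumed to be normalized; headings are averaged via unit vectors so that angles near +/- pi do not cancel.

import math

def weighted_mean_pose(particles, weights):
    # Expectation (mean) of a weighted particle set.
    x = sum(w * p[0] for p, w in zip(particles, weights))
    y = sum(w * p[1] for p, w in zip(particles, weights))
    # Average the heading on the unit circle.
    s = sum(w * math.sin(p[2]) for p, w in zip(particles, weights))
    c = sum(w * math.cos(p[2]) for p, w in zip(particles, weights))
    return (x, y, math.atan2(s, c))

def robust_mean_pose(particles, weights, radius=20.0):
    # Robust mean: average only the particles near the highest-weight
    # hypothesis, so minor modes do not drag the estimate off target.
    best = particles[max(range(len(weights)), key=weights.__getitem__)]
    near = [(p, w) for p, w in zip(particles, weights)
            if math.hypot(p[0] - best[0], p[1] - best[1]) <= radius]
    total = sum(w for _, w in near) or 1.0
    return weighted_mean_pose([p for p, _ in near],
                              [w / total for _, w in near])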

Active localization: It should be noted that the actions taken by your robot can help resolve ambiguity for your localization system. That is, you can decide to move your robot towards locations that would make its pose estimate less ambiguous.

Snapshot of the "FC 148" robot soccer field with fiducial landmarks at the corners.


Desired Pose Determination

In addition to estimating the current pose of the robot, your controller will need to make decisions to determine desired poses and generate actions to reach them. This decision making can be performed by the path planner you implemented for Assignment 3, assuming the location of the ball can be determined. You are not restricted to using only your path planner for decision making, but it is highly recommended. At the very least, some combination of the path planner and other control heuristics is likely warranted. Sharing of path planning code between groups is allowed, but only through checkout of code from course repositories for previously graded assignments.

Your calculation of desired pose can (and probably should) use estimates of the ball's location along with your robot's pose. While it is not necessary to perform localization on the ball's location, estimating some information about the state of the ball is typically necessary. One recommended approach is to estimate the range and bearing of the ball from the robot's pose, as in the sketch below.
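
A sketch of that projection, assuming a pose estimate from your localizer and range/bearing estimates such as those required for the milestone below:

import math

def ball_position(robot_pose, ball_range, ball_bearing):
    # robot_pose: (x, y, theta) from your localizer
    # ball_range: estimated distance to the ball (same units as the map)
    # ball_bearing: angle to the ball relative to the robot's heading
    x, y, theta = robot_pose
    return (x + ball_range * math.cos(theta + ball_bearing),
            y + ball_range * math.sin(theta + ball_bearing))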


Given our current soccer setup, localization of the other player will be difficult and is discouraged.

mcl Package Development

Your group should create an mcl package that includes a node (soccer_mcl) that subscribes to your robot's sensing from the Create base (via irobot_create_2_1), the cmvision blobfinder, and ar_recog tag recognition, and outputs tag_positions topics for localization. One possible subscription skeleton is sketched below.
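
The message type names in this skeleton are assumptions about the course packages; check the actual .msg definitions in irobot_create_2_1, cmvision, and ar_recog before relying on them.

#!/usr/bin/env python
import rospy
# These imports are assumed; verify them against the installed packages.
from irobot_create_2_1.msg import SensorPacket
from cmvision.msg import Blobs
from ar_recog.msg import Tags

class SoccerMCL(object):
    def __init__(self):
        rospy.init_node('soccer_mcl')
        rospy.Subscriber('sensorPacket', SensorPacket, self.on_sensors)
        rospy.Subscriber('blobs', Blobs, self.on_blobs)
        rospy.Subscriber('tags', Tags, self.on_tags)

    def on_sensors(self, msg):
        pass  # accumulate distance/angle for the filter's predict step

    def on_blobs(self, msg):
        pass  # convert blobs to range/bearing landmark observations

    def on_tags(self, msg):
        pass  # convert AR tag detections to landmark observations

if __name__ == '__main__':
    SoccerMCL()
    rospy.spin()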

Your group is free to use additional nodes for performing localization and/or having controllers specific to each challenge.


Map File Format

The pitch map will be given to you as a file in the following space-delimited format:

Color Landmarks
<color_top> <color_bottom> <x_location> <y_location> <radius> 
...
<color_top> <color_bottom> <x_location> <y_location> <radius>

AR Landmarks
<id> <corner_1_x> <corner_1_y> <corner_1_z> ... <corner_4_x> <corner_4_y> <corner_4_z>
...
<id> <corner_1_x> <corner_1_y> <corner_1_z> ... <corner_4_x> <corner_4_y> <corner_4_z>

Visit
<x_location> <y_location> 
...
<x_location> <y_location> 

Avoid
<x_location> <y_location> <radius>
...
<x_location> <y_location> <radius>

There is a sample map located at /course/cs148/pub/sample_maps/.

The "Color Landmarks" and "AR Landmarks" section lists the colors/corners and locations of non-goal landmarks, the "Visit" section lists the locations to visit (in sequence), and the "Avoid" section lists circular regions on the field to avoid. The location and radius values will be given in the field coordinate system, as given in the last assignment by the overhead localization system. Colors for the landmarks will be specified as one of the following strings: "Green", "Pink", "Orange", or "Yellow". Top and bottom landmark colors can be the same to indicate a single colored fiducial.

Be careful not to make overly limiting assumptions about the map! Make sure you can parse and read the file format; a parser sketch is given below. Map files will not be given to you for the skills challenges until just before your run. However, example and competition files will be provided in /course/cs148/pub.
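
As a starting point, a minimal parser for the format above might look like the following. It assumes the numeric fields (including AR tag ids) parse as floats and that section header lines match the names exactly.

def parse_map(path):
    # Group each data line under the most recent section header.
    sections = {'Color Landmarks': [], 'AR Landmarks': [],
                'Visit': [], 'Avoid': []}
    current = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line in sections:
                current = line          # section header
                continue
            # Keep color names as strings; convert everything else.
            tokens = [t if t.isalpha() else float(t)
                      for t in line.split()]
            sections[current].append(tokens)
    return sections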

Experiments, Reporting, and Submission

You are expected to conduct at least 2 trials with 2 different initial conditions demonstrating that your robot can visit a spot on a map while avoiding obstacles (4 trials total). For each trial, it is up to you to determine the appropriate properties to track in order to convince any reader of your report that your robot can successfully determine its location and navigate its environment. All of your trials must use the same controller without modification.

Document your controller and experimental results in a written report based on the structure described in the course missive. You are welcome to experiment with additional techniques and evaluate the relative performance of each. When completed, your report should be committed to the mcl/docs/username directory of your repository.

Project Milestones

Landmark Range and Bearing Estimation

An intermediate demonstration of your progress is required before the final due date of the Localization project. For this milestone, you will need to demonstrate a working estimator of range and bearing from the robot to fiducial objects (yellow ball, pink landmark, green/orange landmark, orange/green landmark, green goal, and orange goal) when seen through the robot's blobfinder. For the milestone, you will be required to estimate range and bearing for all the landmarks, in centimeters and radians respectively. The TAs will set the robot at a random location in the field and place varying landmarks in your field of view. For each landmark in the robot's current visual stream, you must print out a distance and relative angle to the landmark. No visualization is required for this milestone, but one would be appreciated.

It is recommended that a data-driven procedure be used to learn a function that predicts the range of landmark objects, each of which has known dimensions and appearance, from perceived blob features. To estimate the range from your robot to a landmark, place the landmark at varying distances from the robot and record features (e.g., height, width, area) of the perceived blob(s) corresponding to the object. The result is a set of example input-output pairs that relate blob features to distance. Once you have recorded blob measurements for each landmark, you should approximate the function that predicts distance from blob features. It is up to you to determine the appropriate blob features to use for estimating range.

This blob-feature function can be approximated through a variety of regression techniques, including a nearest-neighbor lookup table, linear interpolation, radial-basis interpolation, spline interpolation, and nonparametric regression. You should import your approximated function into your client. For example, a nearest-neighbor regressor would import a lookup table data structure into the client. Upon seeing a blob, your client will first identify the type of object (ball, goal, landmark) for the blob, and then use the dimensions of the blob(s) to predict/look up the distance to the landmark. A small interpolation sketch follows.
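
For illustration, here is a piecewise-linear interpolator over a calibration table, with a nearest-neighbor fallback outside the calibrated span. The feature could be blob height, width, or area, whichever you find most reliable.

def estimate_range(feature, table):
    # table: list of (blob_feature, distance_cm) calibration pairs
    # recorded by placing the landmark at known distances.
    table = sorted(table)
    # Interpolate between the two samples that bracket the observation.
    for (f0, d0), (f1, d1) in zip(table, table[1:]):
        if f0 <= feature <= f1 and f1 > f0:
            t = (feature - f0) / float(f1 - f0)
            return d0 + t * (d1 - d0)
    # Outside the calibrated span: fall back to the nearest neighbor.
    return min(table, key=lambda pair: abs(pair[0] - feature))[1]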

To estimate bearing, it is suggested that you use the relative proportions of the robot camera's field of view, assuming the (default) camera view center and the angular field of view of the PS3 Eye camera.
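
A sketch of that linear approximation follows. The 75-degree horizontal field of view is an assumed value (the PS3 Eye has two zoom settings, roughly 56 and 75 degrees), so measure your own camera before trusting it.

import math

def estimate_bearing(blob_center_x, image_width=320,
                     fov=math.radians(75.0)):
    # Horizontal offset from the image center, as a fraction of the
    # half-width (-1.0 at the left edge, +1.0 at the right edge).
    offset = (blob_center_x - image_width / 2.0) / (image_width / 2.0)
    # Map linearly onto the field of view; positive bearing to the
    # left, matching a counterclockwise angle convention.
    return -offset * (fov / 2.0)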

Spin and Localize

Your robot will be placed at an unknown location on the field by a TA. Given a map in the file format above, and with movement restricted to spinning in place, your mcl package must determine and report the robot's location on the field.

Soccer Skills Challenges and Competition

You are required to demonstrate this to a TA. Videos are not acceptable substitutes for these challenges. Find a TA during TA hours, or send an email to the TAs to schedule a time to demonstrate. Your demonstration does not need to happen before the report is due, but it must use the code you handed in when you turned in your report.

The navigation challenge will consist of your robot navigating to a visitation point and avoiding obstacles. The visitation point and obstacles will be specified by a map file.

The goal scoring challenge will consist of your robot hitting a ball into a goal. The ball will be placed at a random spot on the field; the robot should determine its own location (using localization), determine the ball's location (for example, by estimating the ball's range and bearing and projecting them from the robot's own estimated pose), and then plan a path to a scoring position. The robot should then try to score.

Each of these skills tests must be completed in under 120 seconds.

Grading

Your grade for this assignment will be determined by equal weighting of your group's implementation (50%) and your individual written report (50%). The weighted breakdown of grading factors for this assignment is as follows:

Note: Demonstrations of your challenges should occur by scheduling an appointment with a TA. Soccer competitions will be held during class and the preceding hour (12-2pm).

Project Implementation

Localization (30%)
  • Does your robot properly estimate its location?
  • Does this estimate account for uncertainty and ambiguity in perception?
Goal Attainment (10%)
  • Can your robot drive to a given sequence of locations on the field?
  • Can the robot avoid the given unseen regions on the pitch?
Soccer Proficiency (5%)
  • How well does your robot play soccer in the given environment?
Controller Robustness (5%)
  • Does your controller run without interruption or crashing?

Written Report

Introduction and Problem Statement (7%)
  • What is your problem?
  • Why is it interesting?
Approach and Methods (15%)
  • What is your approach to the problem?
  • How did you implement your approach and algorithms?
  • Could someone reproduce your algorithms?
Experiments and Results (20%)
  • How did you validate your methods?
  • Describe your variables, controls, and specific tests.
  • Could someone reproduce your results?
Conclusion and Discussion (8%)
  • What conclusions can be reached about your problem and approach?
  • What are the strengths of your approach?
  • What are the shortcomings of your approach?

