# CS148 Assignment Localization


## Introduction

CS 148 giveth, and CS148 taketh away.

In the previous assignments, we have built up to a working robot soccer player. You have developed an autonomous robot controller for 1-on-1 soccer using Player proxies (or ROS nodes) for position control, bumper, and color blobfinding as well as our in-house overhead localization system. Now it is time to remove the training wheels, specifically external top-down localization (this assignment) and internal state variables (Subsumption assignment).

In the current assignment, you will implement a robot soccer controller that uses only the robot's on-board sensing and perception. Specifically, you are only allowed to use the Create's sensing (odometry, bumper, etc.) and vision functions for blobfinding and AR tag recognition. With this perceptual information, you are tasked with implementing a localization system to determine your robot's pose on the pitch. The pitch has been augmented with color fiducials and AR tags at locations on a map that will be given to you, to facilitate the localization routines you develop. This map will also assume fixed locations and dimensions for the goals. These fiducials should be detectable using your object recognition code from the Path Planning assignment. Pose estimates from your localization, along with desired pose generation, will be given to your working path planning routines.

## Important Dates

Assigned: November 10, 2010

Range and bearing milestone due: Wednesday, November 17, 2010

Spin-and-localize milestone due: Monday, November 29, 2010

Inter-group soccer competition: Monday, December 6, 2010

Project reports due: Wednesday, December 8, 2010 (11:59 pm)

## Robot Localization

For this project, your client will continually estimate the current pose of your robot to enable path planning. Your localization system must be able to estimate the probability of the robot being in a certain pose X = (x, y, \theta) given the current perception values Z from odometry, object recognition, and the bumper, together with a known map. The map will contain the locations of relevant objects, namely goals and fiducials. Your existing path planner will use this pose estimate to generate a path on the field to traverse.

In class, we have covered several different localization algorithms based on the Bayes filter:

p(X_{t}|Z_{t}) \propto p(Z_{t}|X_{t}) \sum_{X_{t-1}} p(X_{t}|X_{t-1}) p(X_{t-1}|Z_{t-1})

which, roughly stated, generates the robot's new location belief at time t, or posterior p(X_{t}|Z_{t}), by using its old belief at time t-1, or prior p(X_{t-1}|Z_{t-1}), to predict a new belief via the dynamics p(X_{t}|X_{t-1}), which is then matched against reality via the likelihood p(Z_{t}|X_{t}). Several algorithms can be used to perform filtering for localization, although we will only cover the particle filter in depth during lecture:

• Filtering with grid-based discretization
• Kalman filtering: Markovian linear dynamics with parametric Gaussian-distributed unimodal noise
• Particle filtering: probabilistic Markovian dynamics with nonparametric noise distributions and importance sampling
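To make the Bayes filter update above concrete, here is a minimal grid-based version over a 1-D world of five cells. The motion and sensor models are illustrative stand-ins, not models of the actual robot:

```python
# Minimal 1-D grid-based Bayes filter: predict with a motion model,
# update with a likelihood, then normalize. Cell values are P(X_t | Z_t).

def predict(belief, p_move=0.8):
    """Dynamics step: the robot tries to move one cell to the right."""
    n = len(belief)
    new_belief = [0.0] * n
    for i in range(n):
        new_belief[i] += belief[i] * (1.0 - p_move)    # stayed put
        new_belief[(i + 1) % n] += belief[i] * p_move  # moved right (wraps)
    return new_belief

def update(belief, likelihood):
    """Measurement step: multiply by p(Z_t | X_t) and renormalize."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = [0.2] * 5                                  # uniform prior over 5 cells
belief = predict(belief)                            # apply p(X_t | X_{t-1})
belief = update(belief, [0.1, 0.1, 0.8, 0.1, 0.1])  # sensor favors cell 2
```

After the update, the belief mass concentrates on the cell favored by the (made-up) sensor likelihood; a particle filter replaces the grid cells with weighted pose samples but follows the same predict/update cycle.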

However, you will find that defining proper likelihood and dynamics terms is not explicitly covered by these algorithms. Specifically, the filter dynamics will use odometric information about the robot's pose, given by the Create's odometry, to predict a new belief forward in time. Odometry is published by irobot_create_2_1 in two ways: 1) distance and angle values in the sensorPacket topic that are directly reported from the Open Interface, and 2) odom topics using quaternions. Your likelihood function will update the belief by evaluating the plausibility of perceiving information from the AR tag, blobfinder, and bumper proxies given a hypothesis of a particular robot pose. You should spend some time and careful consideration in defining these terms.
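A dynamics (predict) step driven by per-step distance/angle odometry deltas might be sketched as below. The noise standard deviations are placeholder assumptions, not calibrated values for the Create:

```python
import math
import random

def propagate(pose, distance, angle, noise=(0.01, 0.02)):
    """Sample a new pose (x, y, theta) given odometric increments.
    `distance` (m) and `angle` (rad) are the per-step odometry deltas;
    `noise` holds illustrative standard deviations, not calibrated values."""
    x, y, theta = pose
    d = distance + random.gauss(0.0, noise[0])  # noisy translation
    a = angle + random.gauss(0.0, noise[1])     # noisy rotation
    theta = (theta + a) % (2.0 * math.pi)
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)

# Applied to every hypothesis in a particle set, this implements the
# dynamics term p(X_t | X_{t-1}).
particles = [(0.0, 0.0, 0.0) for _ in range(100)]
particles = [propagate(p, 0.1, 0.0) for p in particles]
```

Sampling a fresh noise draw per particle is what keeps the belief from collapsing: identical particles spread out to cover the uncertainty in the motion.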

Additionally, you will need to consider how to extract a single localization decision if you have a multi-modal posterior distribution. As discussed in class, maximum a posteriori (maximum), expectation (mean), and robust mean are options for extracting such pose estimates.
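As a sketch of two of these extraction options over a weighted particle set (the particle values below are made up), note how the MAP and mean estimates disagree when the posterior is multi-modal:

```python
import math

def map_estimate(particles, weights):
    """Maximum a posteriori: return the single highest-weight hypothesis."""
    return max(zip(weights, particles))[1]

def mean_estimate(particles, weights):
    """Expectation: weighted mean pose; headings are averaged via unit
    vectors so that angles near the 0/2*pi wraparound average sensibly."""
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, particles)) / total
    y = sum(w * p[1] for w, p in zip(weights, particles)) / total
    s = sum(w * math.sin(p[2]) for w, p in zip(weights, particles))
    c = sum(w * math.cos(p[2]) for w, p in zip(weights, particles))
    return (x, y, math.atan2(s, c))

# A bimodal belief: one mode near x=0, a heavier cluster near x=4.
particles = [(0.0, 0.0, 0.1), (4.0, 0.0, -0.1), (4.2, 0.0, 0.0)]
weights = [0.2, 0.5, 0.3]
```

Here the MAP estimate picks (4.0, 0.0, -0.1), while the plain mean lands between the two modes, at a pose the robot may not actually occupy; this is the motivation for the robust mean, which averages only within the dominant mode.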

Active localization: It should be noted that the actions taken by your robot can help resolve ambiguity for your localization system. That is, you can decide to move your robot toward locations that would make its position more clear.

## Desired Pose Determination

In addition to estimating the current pose of the robot, your controller will need to make decisions to determine desired poses and to generate actions to reach them. This decision making can be performed by the path planner you implemented for Assignment 3, assuming the location of the ball can be determined. You are not restricted to using only your path planner for decision making, but it is highly recommended. At the very least, some combination of a path planner and other control heuristics is likely warranted. Sharing of path planning code between groups is allowed, but only through checkout of code from course repositories for previously graded assignments.

Your calculation of desired pose can (and probably should) use estimates of the ball's location along with your robot's pose. While it is not necessary to perform localization on the ball's location, estimating some information about the state of the ball is typically necessary. One recommended approach is to estimate the range and bearing of the ball from the robot's pose.
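One way to sketch range-and-bearing estimation from blob features is shown below. The calibration table, the 320-pixel image width, and the 60-degree field of view are placeholder assumptions to be replaced with your own camera parameters and measurements:

```python
import math

# Illustrative calibration table of (blob height in px, range in cm) pairs,
# collected by placing the ball at known distances. Values are made up.
BALL_CALIBRATION = [(120, 30.0), (60, 60.0), (30, 120.0), (15, 240.0)]

IMAGE_WIDTH = 320             # assumed image width in pixels
FOV_RAD = math.radians(60.0)  # assumed horizontal field of view

def estimate_range(blob_height, table=BALL_CALIBRATION):
    """Nearest-neighbor lookup: return the range whose recorded blob
    height is closest to the observed one."""
    return min(table, key=lambda entry: abs(entry[0] - blob_height))[1]

def estimate_bearing(blob_center_x):
    """Map the pixel offset from the image center to an angle, assuming
    the field of view is spread linearly across the image."""
    offset = blob_center_x - IMAGE_WIDTH / 2.0
    return -offset / IMAGE_WIDTH * FOV_RAD  # left of center = positive

estimate_range(58)     # nearest calibration entry is 60 px -> 60.0 cm
estimate_bearing(160)  # centered blob -> bearing 0.0 rad
```

A smoother regressor (interpolation, radial-basis functions) would replace the nearest-neighbor lookup without changing the interface.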

Given our current soccer setup, localization of the other player will be difficult and is discouraged.

### mcl Package Development

Your group should create an mcl package that includes a node (soccer_mcl) that subscribes to your robot's sensing from the Create base (via irobot_create_2_1), the cmvision blobfinder, and ar_recog tag recognition, and publishes tag_positions topics for localization.

Your group is free to use additional nodes for performing localization and/or having controllers specific to each challenge.

### Map File Format

The pitch map will be given to you as a file in the following space-delimited format:

```
Color Landmarks
...

AR Landmarks
<id> <corner_1_x> <corner_1_y> <corner_1_z> ... <corner_4_x> <corner_4_y> <corner_4_z>
...
<id> <corner_1_x> <corner_1_y> <corner_1_z> ... <corner_4_x> <corner_4_y> <corner_4_z>

Visit
<x_location> <y_location>
...
<x_location> <y_location>

Avoid
...
```



The "Color Landmarks" and "AR Landmarks" sections list the colors/corners and locations of non-goal landmarks, the "Visit" section lists the locations to visit (in sequence), and the "Avoid" section lists circular regions on the field to avoid. The location and radius values will be given in the field coordinate system, as used in the last assignment by the overhead localization system. Colors for the landmarks will be specified as one of the following strings: "Green", "Pink", "Orange", or "Yellow". Top and bottom landmark colors can be the same to indicate a single-colored fiducial.

Be careful not to make overly limiting assumptions about the map! Make sure you can parse and read the file format. Map files will not be given to you for the skills challenges until just before your run; however, example and competition files will be provided in /course/cs148/pub.
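One way to structure a parser for the map format is to split on the four section headers and convert the sections whose line formats are fully specified above. The sketch below is illustrative; since the exact line formats of the "Color Landmarks" and "Avoid" sections are elided in the excerpt above, their lines are kept as raw token lists for your own parsing.

```python
def parse_map(path):
    """Split the pitch map file into its four sections. 'AR Landmarks'
    and 'Visit' lines are parsed numerically; 'Color Landmarks' and
    'Avoid' lines are kept as raw token lists."""
    headers = ("Color Landmarks", "AR Landmarks", "Visit", "Avoid")
    sections = {h: [] for h in headers}
    current = None
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            if line in headers:
                current = line          # start of a new section
            elif current is not None:
                sections[current].append(line.split())
    # AR landmark: tag id followed by the 3-D coordinates of its 4 corners
    ar = [(int(t[0]), [float(v) for v in t[1:]])
          for t in sections["AR Landmarks"]]
    visit = [(float(t[0]), float(t[1])) for t in sections["Visit"]]
    return sections["Color Landmarks"], ar, visit, sections["Avoid"]
```

Parsing by header rather than by fixed line counts keeps the reader robust to maps with different numbers of landmarks or visit points, which matters since competition maps arrive just before your run.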

### Experiments, Reporting, and Submission

You are expected to conduct at least 3 trials with 4 different initial conditions for both of your challenge tasks (24 trials total). For each trial, measure the properties mentioned for each challenge. All of your trials must use the same controller without modification.

Document your controller and experimental results in a written report based on the structure described in the course missive. You are welcome to experiment with additional techniques and evaluate the relative performance of each. When completed, your report should be committed to the object_seeking/docs/username directory of your repository.

## Project Milestones

### Landmark Range and Bearing Estimation

An intermediate demonstration of your progress is required before the final due date of the Localization project. For this milestone, you will need to demonstrate working estimation of range and bearing from the robot to fiducial objects (yellow ball, pink landmark, green/orange landmark, orange/green landmark, green goal, and orange goal) when seen through the robot's blobfinder. For the milestone, you will be required to estimate range and bearing for all the landmarks, in centimeters and radians respectively. The TAs will set the robot at a random location in the field and place varying landmarks in your field of view. For each landmark in the robot's current visual stream, you must print out a distance and relative angle to the landmark. No visualization is required for this milestone, but it would be appreciated.

It is recommended that you use a data-driven procedure to learn a function that predicts the range of landmark objects, each of which has known dimensions and appearance, from perceived blob features. To estimate the range from your robot to a landmark, place the landmark at varying distances from the robot and record features (e.g., height, width, area) of the perceived blob(s) corresponding to the object. The result is a set of example input-output pairs that relate blob features to distance. Once you have recorded blob measurements for each landmark, you should approximate the function that predicts distance from blob features. It is up to you to determine the appropriate blob features to use for estimating range.

This blob-feature function can be approximated through a variety of regression techniques, including a nearest-neighbors lookup table, linear interpolation, radial-basis interpolation, spline interpolation, and nonparametric regression. You should import your approximated function into your client. For example, a nearest-neighbor regressor would load a lookup table data structure into the client. Upon seeing a blob, your client will first identify the type of object (ball, goal, landmark) for the blob, and then use the dimensions of the blob(s) to predict/look up the distance to the landmark.
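The nearest-neighbor option mentioned above can be as simple as the sketch below. The calibration table is hypothetical example data (blob area shrinks with distance); your own recorded pairs, and possibly richer features than area alone, go in its place.

```python
def nearest_neighbor_range(blob_area, table):
    """Return the distance (cm) of the calibration entry whose recorded
    blob area is closest to the observed blob area.
    table: list of (blob_area_pixels, distance_cm) pairs."""
    return min(table, key=lambda entry: abs(entry[0] - blob_area))[1]

# hypothetical calibration data for one landmark type
calib = [(5000, 50), (2200, 100), (1200, 150), (700, 200)]
```

A lookup like this quantizes distance to the recorded samples; linearly interpolating between the two nearest entries gives smoother estimates at little extra cost.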

To estimate bearing, it is suggested that you use the relative proportions of the robot camera's field of view, assuming the (default) camera view center and the angular field of view of the PS3 Eye camera.
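Concretely, the proportional mapping suggested above looks like the sketch below: a blob centroid's horizontal offset from the image center, as a fraction of the image width, is scaled by the horizontal field of view. The 640-pixel width and 60-degree FOV are placeholder assumptions; measure or calibrate the actual values for your PS3 Eye configuration.

```python
import math

def blob_bearing(blob_cx, image_width=640, hfov=math.radians(60)):
    """Approximate bearing (rad, positive = left) to a blob from its
    centroid column, assuming a linear pixel-to-angle mapping across
    the camera's horizontal field of view."""
    return ((image_width / 2.0 - blob_cx) / image_width) * hfov
```

The linear mapping ignores lens distortion, which is usually acceptable near the image center where you will be making most bearing decisions.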

### Spin and Localize

Your robot will be placed at an unknown location on the field by a TA. Given a map in the file format above and movement restricted to spinning in place, your mcl package must determine and report the robot's location on the field.

### Soccer Skills Challenges and Competition

The goal scoring challenge and inter-group soccer competition from the Path Planning project will be used for this assignment. The navigation challenge will have the same format as in Assignment 3, except that obstacles will be pink fiducials and the field boundaries will be enforced by the physical and virtual walls. Both the Goal Scoring and Collision-free Navigation tasks must be completed within 120 seconds.

Your grade for this assignment will be determined by equal weighting of your group's implementation (50%) and your individual written report (50%). The weighted breakdown of grading factors for this assignment is as follows:

Note: Demonstrations of your challenges should occur by scheduling an appointment with a TA. Soccer competitions will be held during class and the preceding hour (12-2pm).

### Project Implementation

- Localization (30%)
  - Does your robot properly estimate its location?
  - Does this estimate account for uncertainty and ambiguity in perception?
- Goal Attainment (10%)
  - Can your robot drive to a given sequence of locations on the field?
  - Can the robot avoid the given (unseen) regions on the pitch?
- Soccer Proficiency (5%)
  - How well does your robot play soccer in the given environment?
- Controller Robustness (5%)
  - Does your controller run without interruption or crashing?

### Written Report

- Introduction and Problem Statement (7%)
  - Why is it interesting?
- Approach and Methods (15%)
  - What is your approach to the problem?
  - How did you implement your approach and algorithms?