From Brown University Robotics
- Implementation of the "robot in the middle" demo is done and works correctly for fixed position messages exchanged among the robots. Correctly means that the intermediate robot keeps the latest position message from each of its neighbors (identified using BATMAN), finds the shortest path between its 2 neighbors, calculates the middle node of that shortest path, and derives a path from its current position to the middle node. It needs to be tested in real life as soon as the "illegal instruction" error that position_tracker produces is fixed. The error is caused by some ROS-internal process and appears not only when running position_tracker; for some reason there is no error on the FitPCs. Chris has sent an e-mail to the ros-users list.
- The Coimbra group published their code. They acknowledged us on the mailing list, which is good, and they have a more promising approach for sending messages among the robots: they send out a topic, so no detailed parsing of the received message is needed. I'm going to take the elements of their approach that fit our case and possibly build on them.
- get ideas from Coimbra group to implement message passing using foreign_relay
- test message fusion from neighbors in real life with moving robots
- modularize code (e.g. separate navigation from message passing etc)
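The middle-node step of the demo above can be sketched in Python. This is only a sketch: the path-as-waypoint-list representation and the function name are assumptions, not the actual implementation.

```python
import math

def middle_node(path):
    """Return the waypoint closest to half the cumulative length of `path`.

    `path` is a list of (x, y) waypoints, e.g. the shortest path between
    the two neighbors' last known positions (representation assumed).
    """
    # cumulative distance from the start of the path to each waypoint
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    half = cum[-1] / 2.0
    # pick the waypoint whose cumulative distance is nearest the halfway mark
    idx = min(range(len(path)), key=lambda i: abs(cum[i] - half))
    return path[idx]
```

The robot would then plan from its current position to the returned waypoint.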
Implementation of message fusion from different neighbors.
Demo ("robot in the middle") in progress: robot maintaining a position between a static neighbor and a teleoperated one. In order to do that, it leverages the position messages it receives from its neighbors.
Comments after the seminar that are interesting to me and possibly the lab:
- CMU keeps collecting data. No BVH files yet, but that's on the todo list. For now, they focus on labeling the data using some form of Mechanical Turk.
- Lifted BP, an accelerated version of BP developed by Kristian Kersting at the University of Bonn (). BP is accelerated by reusing computations among factor nodes with similar structure. There is C++/Python code online. For now, there is no heuristic to tell whether lifted BP will produce faster results than BP on a given graph.
- A new student at TUM is continuing Jan's work on tracking and we had a discussion about low-dim representations for tracking. I sent him the ISRR paper we wrote, Matei's paper on eigengrasps and Marek's paper on physics-based tracking. Michael Beetz seemed to be still interested in low-dimensional/compact representation of motions.
- Concluding the seminar, we decided to start working on a small book summarizing what this workshop was about as an attempt to promote a unified concept of activity recognition.
Final talk slides: 
- Upgraded netbooks (the ones used for the icra workshop experiments) to the new brown ROS package
- New ROS nodes towards message passing among robots:
- batman_mesh_info: queries BATMAN and returns the list of neighbors and the corresponding link quality
- coordination_client: ROS node for sending messages (/position topic for now) to neighbors
- coordination_server: ROS node for receiving messages from other robots and doing something interesting with that info
- NEXT TODO: extend coordination_server to fuse info from its neighbors and do something interesting with it (e.g. get messages from your neighbors and set yourself to be in the middle of your 2 closest neighbors)
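The planned fusion step for coordination_server could look roughly like this, in plain Python with the ROS plumbing stripped out. The function name and the dict-of-latest-positions representation are assumptions for illustration.

```python
import math

def fuse_neighbor_positions(my_pos, neighbor_positions):
    """Pick the 2 neighbors closest to `my_pos` and return the midpoint
    between them as the goal position.

    `neighbor_positions` maps a neighbor id (e.g. the BATMAN-reported
    address) to its latest reported (x, y) position.
    """
    if len(neighbor_positions) < 2:
        return None  # not enough neighbors to position ourselves between
    # rank neighbors by Euclidean distance from our current position
    ranked = sorted(neighbor_positions.values(),
                    key=lambda p: math.hypot(p[0] - my_pos[0],
                                             p[1] - my_pos[1]))
    (x1, y1), (x2, y2) = ranked[0], ranked[1]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```

In the actual node, the dict would be refreshed from the /position messages received from each neighbor, and the returned goal handed to the navigation code.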
Dagstuhl '10 ("Understanding everyday activities" seminar)
- Current version of poster 
- Talk slides are coming next. Aiming for a practice talk tomorrow.
ICRA '11 paper
Enhancing navigation functionality:
- (Single robot) path planning:
- Graph representation of the areas in the map where the robot can navigate.
- Dijkstra's algorithm to find the shortest path between source and destination positions.
- Exploration for a tag:
- Random walk in the graph representation of the map till the tag we search for is encountered.
- Demo: exploration for a tag (tag 11 to the left of the fountain) in the AI lab-Graphics lab corridors ()
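The shortest-path step can be sketched with a standard Dijkstra implementation over the graph representation of the map; the adjacency-list format below is an assumption, not the actual data structure used.

```python
import heapq

def dijkstra(graph, source, dest):
    """Shortest path on a weighted graph of navigable map positions.

    `graph` maps a node to a list of (neighbor, edge_length) pairs.
    Returns the path from `source` to `dest` as a list of nodes,
    or None if `dest` is unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if dest not in dist:
        return None
    # walk the predecessor chain back from dest to recover the path
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]
```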
Final navigation demo ()
- iCreate moving from the AI lab to room 411 and back
- localization based on tags/bumpers/odometry
- only the tags in the corners/junctions are taken into account
- map in the form of line representation of walls
- intermediate navigation goals in the corners/junctions
- Enhanced hallway navigation using map and bumper-based localization: math in place, more testing to be done
- Navigation from the AI lab to room 411 using AR tags for localization (AI lab corridor navigation: )
Update: navigation from AI lab to room 411 (video 3x ):
- The robot starts from just outside the AI lab, facing tag 1 to get accurate position estimate. It has 3 intermediate goal positions till the fountain is reached.
- Whenever it bumps into the wall, it rotates by pi in order to see its nearby tag and localize itself.
- Consecutive pi-rotations when bumping into a wall: the robot cannot read tags while rotating (camera blurring would cause tag misidentifications), so its current position estimate remains the same.
- Backward motion at tag 5: the roomba passed the intermediate goal it had to reach and came back to reach it. Then, it turned again to reach the next goal along the corridor.
- AI lab -> fountain corner: the robot thinks it is at the other wall, so it tries to go diagonally to reach the water fountain. No luck (tag 10 cannot be recognized very well when the robot is in the nearby, brightly lit area). Eventually, by bumping into the wall many times and moving slowly forward each time, it reaches the corridor to the graphics lab.
- Graphics lab corner: reached the goal in the corner, but got a bad position estimate due to tag misidentification. It goes back towards the fountain, although its next goal was the other end of the corridor (outside Eugene's office).
- When it localizes itself correctly again, it attempts to go to Eugene's office in a straight line -> consecutive bumps into the left wall => I moved it manually to navigate from the graphics lab to Eugene's office.
- After Roberto's ex-office, a tag misidentification made the robot think it was at the end.
Although I made sure all the tags were identified during installation, some of them are more vulnerable to misidentification from side views.
- First submission of work by Phil (Nov. '10 meeting of Neuroscience)
- Plan of action after discussing w/ Chris
- 5/11/10: (ar_navigate) go to a specific point (x,y) starting from a random location in an open area with AR tags based localization
- 5/18/10: (ar_map_navigate) navigate among 2 points using a topological map of the environment, e.g. go from AI lab to room 411 autonomously
- 5/25/10: (ar_map_navigate_bumpers) incorporate bumpers into the navigation procedure
- 6/1/10: (ar_explore) explore area till an object of interest is detected
- 6/8/10: (navigation_tracker_web) visualize environment and motion of robots in it (like current position tracker web ROS node)
- 6/15/10: (node_messaging) inter-robot communication
- 6/19/10 - 7/5/10: 1 week Dagstuhl seminar, 1 week in Greece
- July - August: incorporate wireless signal & experiments
- June - August: formulate multi-robot coordination algorithm
- Sept 15th: ICRA deadline (and/or HRI?)
- June: need to have a first version of a proposal (or abstract) to send to potential committee members?
- Sept 15th: proposal document to the committee
- Functionality implemented this week: (ar_navigate) starting from a random location in 404, go to the middle of the field in the room.
- Video (): The iCreate starts in front of a tag to get an accurate estimate of its current position. After a few seconds, it rotates to move towards the destination (middle of field). During its diagonal motion towards the goal, no tags are visible (they have to be at most 2m away) and the localization relies solely on the odometry. That's why we observe some curvature in the diagonal motion of the robot. After some new tags are recognized, the current estimate of the position of the iCreate becomes more accurate. We see the iCreate actually turning and recognizing the destination position. Eventually it reaches a position close to the destination (based on its internal measurements of where it is, the destination has been reached) and performs small circular motions around the destination area.
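The localization behavior described above, absolute fixes when a tag is visible and odometry dead-reckoning in between, can be sketched as follows. Names and signatures are hypothetical.

```python
def update_pose(pose, odom_delta, tag_fix=None):
    """Simple localization update: trust an AR-tag fix when available,
    otherwise integrate odometry (which drifts, hence the curvature
    observed in the robot's diagonal motion).

    pose, tag_fix: (x, y) position estimates
    odom_delta:    (dx, dy) displacement reported by the odometry
    """
    if tag_fix is not None:
        # a recognized tag (visible within roughly 2 m) gives an absolute fix
        return tag_fix
    # no tag in view: dead-reckon from the previous estimate
    return (pose[0] + odom_delta[0], pose[1] + odom_delta[1])
```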
- Moving to the ar_map_navigate functionality: created and trained 15 new AR tags to instrument the corridors outside the AI lab.
- Related to our coverage and wifi localization conversation last time, encountered 2 papers (ICRA 2009) to keep in mind:
- Nikolaus Correll, Jonathan Bachrach, Daniel Vickery, Daniela Rus, "Ad-hoc wireless network coverage with networked robots that cannot localize" ():
- 1-sentence summary: a randomized deployment algorithm that adjusts to addition/deletion of nodes.
- Useful things to keep in mind for our project: their real-life experiments involve 9 robots (we are close to that). The environment (basement of the Stata Center) is a bit more complex than an open area, but not very complex (laying out the robots in a loop on the 4th floor, or around obstacles, would be more interesting). We need to incorporate some form of wireless signal in the end to make the results compelling.
- Karthik Dantu, Prakhar Goyal, Gaurav Sukhatme, "Relative bearing estimation from commodity radios", ICRA 2009 ()
- 2-sentence summary: they sample signal strength in 8 directions around the current location of the robot, then run PCA to find the most probable direction of motion leading to the neighboring node (thus estimating the relative orientation to the neighboring node).
- Useful things to keep in mind when we incorporate wireless signal for localization: They got 20 degrees deviation from real bearing (orientation). Used up to 5 robots in real experiments in an open area. Elevated the antenna to eliminate multipath effects from the ground. When robots were placed randomly within a square of 20m, their optimal sampling step size is 5m (that's still a significant amount of area to explore). They focused on estimating the relative orientation among two robots, but the distance can be also estimated using the wireless signal propagation model.
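A rough sketch of the bearing-estimation idea follows. Note that it uses a signal-weighted mean of the sampling directions as a simpler stand-in for the paper's PCA step, which operates on the actual sample positions.

```python
import math

def estimate_bearing(samples):
    """Rough bearing estimate towards a neighboring radio from signal
    strength sampled in 8 directions around the robot.

    samples: list of (angle_rad, signal_strength) pairs.
    Returns the signal-weighted mean direction in radians; the paper
    itself runs PCA over the sample positions instead.
    """
    sx = sum(s * math.cos(a) for a, s in samples)
    sy = sum(s * math.sin(a) for a, s in samples)
    return math.atan2(sy, sx)
```

With signal strength peaking in the direction of the neighbor, the weighted mean points at it; real RSSI samples would of course be much noisier than this suggests.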
Enhanced last week's demo: the mobile robot performs 15 queries to BATMAN regarding the number of received packets and estimates an average, moves forward/backward based on the previous average signal, stops to get new measurements etc.
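The measure-then-move loop of this demo, in sketch form (the function names are hypothetical; the real node queries BATMAN for the received-packet counts):

```python
def average_link_quality(query_fn, n_samples=15):
    """Average n_samples link-quality readings, as in the demo's
    15 queries to BATMAN before each move decision.

    query_fn is a stand-in for the actual BATMAN query."""
    return sum(query_fn() for _ in range(n_samples)) / float(n_samples)

def move_decision(curr_avg, target_quality):
    """Keep the link quality near target_quality: back away while the
    link is still better than the target, approach when it falls below."""
    if curr_avg > target_quality:
        return "away"    # still well within range: increase distance
    return "closer"      # link degrading: decrease distance
```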
Demo video: the robot starting from a distance greater than x in order to achieve distance x from the static node is shown here (I had problems with xvidcap capturing the screen for around 5 min; we will see a video of what the robot displays on its screen () and another video () of where it moves, side by side).
Conclusion: Using the packet loss rate to bring 2 robots (1 static, 1 mobile) to the maximum distance while remaining in range is problematic.
The percentage of received packets is high and almost constant up to a radius x around the static robot. Beyond distance x, it decreases exponentially with distance (also documented here: ). In practice, that means that starting from a distance greater than x, we can achieve distance x from the static robot. Starting from a distance less than x, we cannot find the shortest way to achieve distance x using only the number of received packets (but we can have the robot always move forward until it crosses the x-radius circle around the static node).
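A toy model of this observation (all parameters here are made up) makes the problem concrete: inside radius x the packet rate is flat, so it carries no gradient information about distance, while beyond x the decay points back towards the x-circle.

```python
import math

def expected_packet_rate(d, x=10.0, p0=0.95, k=0.5):
    """Toy model of the observed behaviour: reception is roughly
    constant up to radius x, then decays exponentially with distance.
    x, p0 and k are illustrative values, not measured ones."""
    if d <= x:
        return p0               # flat region: no usable gradient
    return p0 * math.exp(-k * (d - x))
```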
We need to work with the wireless signal strength, which provides finer granularity in the measurements. That may also be a good idea given that BATMAN is slow () at updating link qualities for highly mobile nodes.
Tried the Medialink MWN-USB54G wireless chipset, which is detected as a RaLink Technology RT2501USB wireless adapter and is supposed to work according to , but in practice the wireless statistics functionality is not implemented for ad-hoc links.
Other possible solutions:
- modify driver to get signal strength measurements ()
- have 2 wireless interfaces per robot. The 1st is used for mesh networking; the 2nd is used to create a wireless network per robot and to detect the link quality with the rest of the robot-generated networks.
Demo: 1 static node, 1 mobile node moving forwards/backwards. Mobile node tries to be within max range of the static one while maintaining a desired level of link quality.
- Conclusion: BATMAN's packet-based metric for link quality fluctuates noticeably, which in turn causes continuous back-and-forth motions of the mobile node. E.g. for a fixed distance between the 2 nodes, the link quality can change by up to 40 units (out of 255) = ~15% of the overall range of values. I expect bigger deviations as we add more nodes to the network. To improve the link quality measurements, we need either a more robust technique for link quality estimation or a wireless chipset/driver that supports wireless statistics collection.
- Next step: get a more accurate measurement of the link quality using the signal strength measured by the wireless interface in the netbook.
- The current driver for the wireless interface (Asus EEEPC 701SD chipset, RealTek RTL8187SE driver) doesn't support the collection of wireless statistics, and neither do the additional drivers that I tried.
- BUT, the information we want (link quality in dB for each of the links in the ad-hoc net) must be sensed somewhere, even though it is not exposed to the user.
Next steps towards measuring link quality (TODO): experiment with virtual wireless interfaces in monitor mode; try out more robust techniques for estimating signal strength using BATMAN's info (e.g. averaging measurements over a window); experiment with Ubuntu network monitoring tools for ad-hoc links.
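One of the smoothing options mentioned, an exponential moving average over the raw link-quality readings, in sketch form (function name and parameter are illustrative):

```python
def smooth_link_quality(samples, alpha=0.2):
    """Exponential moving average over noisy BATMAN link-quality
    readings (0-255).  Smaller alpha smooths more but reacts more
    slowly to real changes as the robot moves."""
    est = None
    out = []
    for s in samples:
        est = s if est is None else alpha * s + (1 - alpha) * est
        out.append(est)
    return out
```

Compared to a fixed-window average, the EMA needs no sample buffer and weights recent readings more heavily, which matters for mobile nodes.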
- Wed (4/14/10) Demo: Webteleop of an iCreate with video feedback. Teleoperation is performed over a wireless mesh network with 5 nodes running BATMAN. The user is in the AI lab. The mesh network extends the communication range between the user and the iCreate enabling the robot to go from the AI lab to the 4th floor kitchen and back.
- Reading related work on coverage, chain formation.
- Experimental setup to implement
Combine sensor networks with mobile robots so that each enhances the other's functionality.
- Existing sensors may not have been placed in an optimal way in the environment or some areas do not have sensors because they do not require constant monitoring (or we just don't have enough sensors due to their cost) => Use mobile nodes to extend the sensing range of the sensor network => extend "sensing space" of the whole robotic system
- On the other hand, mobile nodes have limited communication range. => Use sensor network to extend the communication range between mobile robots => "extend actuation space" of the whole robotic system
Q: Why not have all the nodes mobile?
A: Mobile sensor nodes are more expensive than static ones. Plus, the size of mobile robots acting as sensors may be prohibitive for some applications.
Experiment (house surveillance):
The "house" is room 404. We have a camera at the entrance so that we see who wants to get in. We also have a primitive sensor network in the atrium consisting of 2 static nodes streaming video back to the user. The user is at a computer in 404 (we may be able to have a virtual interface running BATMAN on one of the computers in 404).
- After we hear a sudden noise, we send our mobile robots (2 robots) to cover the area and send us back video feedback. (=> Mobile robots extend the sensing range of the sensor network).
- The user brings in a new robot (teleoperating it) and places it in a position she likes, e.g. in front of the corridor just to the left of room 404. The rest of the mobile nodes in the network should rearrange themselves so that we get optimal coverage given the new constraints. The user repositions the robot to a different place (e.g. facing the corridor in front of Genie's desk) and the rest of the mobile nodes in the network adjust their positions again. (=> The sensing range of the sensor network is extended on demand (by the user))
- After the network is formed, we teleoperate an iCreate (or NAO if possible by then) to get an object within the range of the network and bring it back to/push it towards the user in 404 (=> use sensor network to extend the communication range between the robotic nodes in the network).
- The robots are called back to help with other tasks inside the house and they are released again to form the robot-sensor network.
- Extension of the [Gasparri et al. 2008] paper (http://www.springerlink.com/content/a13w735652881014/): they do coverage using mobile robots while taking into account an existing sensor network. They use the static sensor network nodes to determine the paths of the mobile nodes in a centralized way.
- Instead, I propose a decentralized approach to "determine the position of the mobile nodes based on the static nodes, the other mobile nodes and (possibly) the user"
[Gasparri et al. 2008] Andrea Gasparri, Bhaskar Krishnamachari and Gaurav Sukhatme, "A framework for multi-robot node coverage in sensor networks", Annals of Mathematics and Artificial Intelligence, 2008
- Demo: Rotating roomba sending camera feedback to Java program
- Todo next:
- Web teleop with video streaming to the user:
- Improve image display by not storing intermediate images
- Convert code to Java applet
- Combine w/ Trevor's web teleop interface
- Network visualization
TOC and summary (.txt)