Internal:Aggeliki weekly agenda


Agenda: 09-21-2010

Meshnet project

Agenda: 09-14-2010

Meshnet project

  • Implemented coverage using potential fields. The maximum distance among robots is determined by wireless signal quality, and the repulsive forces among the nodes are calculated from the positions of the nodes in the network (see the sketch at the end of this section).

Demo (coverage in the 3rd floor atrium using 1 base station and 4 more robots): http://www.cs.brown.edu/~aggeliki/mvi0500_5.mp4

  • ICRA paper: current draft
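
For concreteness, a hedged sketch of the repulsive potential-field step from the coverage item above: each robot sums repulsive contributions from its network neighbors and moves along the resulting vector. The function name, the gain, and the use of d_max as the interaction cutoff are illustrative assumptions, not the actual implementation.

  import math

  def repulsive_force(p, neighbor_positions, d_max, gain=1.0):
      """Total repulsive force on the robot at p = (x, y) from its neighbors.

      Neighbors farther than d_max (the wireless-quality-derived range)
      contribute nothing; closer neighbors push the robot away, more strongly
      the closer they are (classic repulsive-potential gradient).
      """
      fx = fy = 0.0
      for q in neighbor_positions:
          dx, dy = p[0] - q[0], p[1] - q[1]
          d = math.hypot(dx, dy)
          if 0.0 < d < d_max:
              mag = gain * (1.0 / d - 1.0 / d_max) / (d * d)
              fx += mag * dx / d  # unit vector pointing away from q, scaled
              fy += mag * dy / d
      return fx, fy

  # e.g. a robot at the origin is pushed away from a neighbor 1 m to its right
  print(repulsive_force((0.0, 0.0), [(1.0, 0.0)], d_max=5.0))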

Agenda: 09-7-2010

Meshnet project

Writing code and setting up the robots so that 8-robot coverage can be performed. Extended the code to do coverage without using a map.

Videos from robot experiments (will upload to Flickr as soon as the new password is set):

  • 4 robots corridor coverage (with map)
  • 8 robots corridor coverage (with map)
  • 3 robots enclosed area coverage (no map, 1 tag, desired distance among robots dmax = 5m)
  • 4 robots enclosed area coverage (no map, 1 tag, dmax = 5m) and node removal
  • 5 robots enclosed area coverage (no map, 1 tag, dmax = 5m)
  • node addition starting from 5 robots enclosed area coverage (no map, 1 tag, dmax = 5m)
  • 8 robots enclosed area coverage (no map, 1 tag, dmax = 3m)
    • need more tags; bumping creates a wrong estimate of position
  • outdoors 8 robots open area coverage (no map, no tags, dmax = 3m)
  • node removal starting from outdoors 8 robots open area coverage (no map, no tags, dmax = 3m)
  • node removal starting from outdoors 7 robots open area coverage (no map, no tags, dmax = 3m)
  • node removal starting from outdoors 6 robots open area coverage (no map, no tags, dmax = 3m)
  • node removal starting from outdoors 5 robots open area coverage (no map, no tags, dmax = 3m)


Plan:

  • Write wifi message exchange code based on BATMAN (all experiments so far have been done using the wifi communication code from the University of Coimbra)
  • Write a first draft of the ICRA paper using the above videos (no map) plus the teleop video from the ICRA '10 workshop submission.
    • Contribution: a hardware/software framework that enables researchers to implement their multirobot coordination algorithms in real environments with minimal setup time and cost.
    • It feels like the whole lab should be co-authors on that paper. To what extent should I refer to ar_recog, gscam, ar_localizer?
  • Enhance the results:
    • 8 robots enclosed coverage with more tags
    • coverage in more complex environments e.g. including corridors
    • leader-follower scenario
    • quick implementation in MATLAB of the current coverage algorithm -- I don't like the current 8-robot formation, but I'm not sure how it is supposed to look as it scales to more robots


Agenda: 08-31-2010

Meshnet project

All 8 robots to be used for experiments are set up in terms of operating system, ROS packages, and networking infrastructure. Troubleshot the previous coverage controller: solved a "false alarm" failed service call coming from ROS, and modified the coverage algorithm to use a Gaussian (instead of a linear) kernel to penalize the distance between neighboring robots (sketched below). Have used 3 of the robots (syndrome, joker, brainiac) for coverage successfully. For some reason, though, the 8-robot coverage experiment doesn't work very well. It may be the algorithm, it may be the software installation on the robots that I wasn't using in previous experiments, or both. Need to test the robots in small groups and then combine them into a big network.
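
A minimal sketch of the kernel swap mentioned above (sigma and the function names are illustrative, not the actual controller code); one plausible motivation is that the Gaussian penalty saturates smoothly instead of growing without bound:

  import math

  def linear_penalty(d, d_des):
      # grows linearly with the deviation from the desired spacing
      return abs(d - d_des)

  def gaussian_penalty(d, d_des, sigma=1.0):
      # near zero at the desired spacing, saturates toward 1 for large deviations
      return 1.0 - math.exp(-(d - d_des) ** 2 / (2.0 * sigma ** 2))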

Evening update:
- 4 robot coverage: http://www.flickr.com/photos/brownrobotics/4944130338/
- 8 robot coverage: http://www.flickr.com/photos/brownrobotics/4943586821/

Other

Helped out Alex with setting up foreign_relay as part of the overhead AR localization system for CS148.


Agenda: 08-24-2010

Neural decoding paper

Almost done with the new results. Just need to send the figures to Phil and explain what is going on.

Meshnet project

  • The code is modularized. Example with all the ROS nodes (navigation + wifi communication) (wifi_nolan): one teleoperated robot, one dynamic robot trying to always keep a 2 m distance from the teleoperated one.
  • I suspect I will not be able to generalize to more robots in the corridor given the coverage algorithm I implemented, but I need to give it a shot. I'm pretty sure, though, that in terms of infrastructure the robots communicate properly with each other (we can have up to 8 robots).
    • Algorithm: find all the next steps you can take based on the graph representation of the map (next step = non-occupied neighboring node in the graph to transition to). Score each next step in such a way that you keep a 5 m distance from all your neighbors (whenever possible); see the sketch after this list.
  • Next steps: try out coverage in open area, e.g. 4th floor atrium. Incorporate BP for coverage.
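
A sketch of the step-scoring rule in the algorithm above, under assumed names (candidates are the unoccupied neighboring graph nodes, positions are (x, y) tuples in meters):

  import math

  def score_step(candidate, neighbor_positions, d_des=5.0):
      """Lower is better: sum of squared deviations from the desired spacing."""
      return sum((math.hypot(candidate[0] - n[0], candidate[1] - n[1]) - d_des) ** 2
                 for n in neighbor_positions)

  def best_step(candidates, neighbor_positions, d_des=5.0):
      # pick the transition that best keeps ~5 m to all network neighbors
      return min(candidates, key=lambda c: score_step(c, neighbor_positions, d_des))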


Agenda: 08-17-2010

Meshnet project

  • (Re)Installed Ubuntu, ROS, OLSR on 7 netbooks. One of them has insufficient memory, so 6 working ones. Installed ROS cturtle with the hope that the latest version of position_tracker could be used, but got the ROS internal error: "what(): Cannot use ros::Time::now() before the first NodeHandle has been created or ros::start() has been called. If this is a standalone app or test that just uses ros::Time and does not communicate over ROS, you may also call ros::Time::init()". Google suggestions were not helpful, plus Sarah and Trevor hadn't experienced that error before. So, back to the boxturtle installation.
  • Code modularization: Reimplemented and tested the current code for path generation using Dijkstra's algorithm. Reimplementation was necessary because ROS doesn't handle mutually nested messages: e.g., given 2 ROS messages A and B, if A contains B and B contains A, a ROS error is produced: "maximum recursion limit has been reached".
    • ROS path planners cannot be applied to our case because they rely on sensor_msgs/LaserScan or sensor_msgs/PointCloud type sensor data (obtained through laser scanners) [1].
  • Coverage demo with multiple robots: Coming within the next couple of days. Code is written. Testing and refinement needs to be done.
  • Alternative thought about the ICRA paper: extend the approach of Daniela Rus' lab for coverage of areas with different sensing priorities ([2], [3]). In our case, the sensing priorities will be set implicitly by having the user position the robots at specific locations. Not sure if some kind of reward should be given for each location, but I will think about it more.
  • Next step: incorporate BP in the whole framework.

Neural decoding paper

First draft of the paper is out (neural__stisomap_draft01_0804.doc). Up to speed with generating complementary results for Phil. The results are produced; they need to be made presentable. New figures are expected to be done in the next few days (just after the meshnet demo is done).


Agenda: 08-10-2010

Meshnet project

Code modularization: I'm almost done with the navigation part of my current codebase; I'm resolving some issues related to the wifi communication part. More specifically:

  • I have implemented and tested the following nodes:
    • map_loader: loads a line-based representation and a graph-based map of an area. Both maps are read from a .txt file.
    • bumper_localizer: uses the line-based map to find the position of the robot whenever it bumps into a wall. Publishes the updated position through position_tracker. Also responsible for recovering from a bump.
    • step_navigator: moves in a straight line between the current position and a goal position (see the sketch after this list).
    • path_navigator: executes a path consisting of nodes in the graph-based map.
  • Remaining node to be tested: path_generator (generates a path between 2 points using Dijkstra's algorithm, or generates a random-walk path).
  • Extended the wifi communication code from ISR (Coimbra, Portugal), using foreign_relay to transfer topics among robots. The topic sent to the neighbors is the position of the robot. Also, from now on the code no longer misleadingly displays that messages are received as well (the C++ version of foreign_relay provides only one-way communication!).
    • There is an issue with foreign_relay in one of the robots: the corresponding topic is registered, but it's not published. I suspect it's a ROS issue that I will try to resolve using different versions of relay and the cturtle installation (according to Chris, cturtle has fixed the error that position_tracker was displaying a few weeks ago). If it's not resolved in the next few days, I will go back to my initial way of exchanging messages among the robots.
      • Tue 8/10/10, 11:20am update: the foreign_relay issue is resolved. I had to put the IP of the neighboring robot in the /etc/hosts file of the robot sending the messages (and vice versa).
  • Waiting for my permissions to the googlecode repository to be fixed so that I can upload the code.
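
Not the actual node, but a minimal rospy sketch of the step_navigator idea (turn toward the goal, then drive straight). The /pose topic and the geometry_msgs/Pose2D type are assumptions for the sketch; the real code gets its pose through position_tracker:

  import math
  import rospy
  from geometry_msgs.msg import Pose2D, Twist

  class StepNavigator(object):
      def __init__(self, goal_x, goal_y):
          self.goal = (goal_x, goal_y)
          self.cmd = rospy.Publisher("cmd_vel", Twist)
          rospy.Subscriber("pose", Pose2D, self.on_pose)

      def on_pose(self, pose):
          dx, dy = self.goal[0] - pose.x, self.goal[1] - pose.y
          t = Twist()
          if math.hypot(dx, dy) > 0.1:  # goal not reached yet
              err = math.atan2(dy, dx) - pose.theta
              err = math.atan2(math.sin(err), math.cos(err))  # normalize to [-pi, pi]
              t.angular.z = 1.0 * err
              t.linear.x = 0.2 if abs(err) < 0.3 else 0.0  # rotate first, then drive
          self.cmd.publish(t)

  if __name__ == "__main__":
      rospy.init_node("step_navigator_sketch")
      StepNavigator(1.0, 2.0)  # goal coordinates are illustrative
      rospy.spin()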

The video that I had on schedule for today will come shortly.

Neural decoding paper

Phil contacted me last week; he plans to have a draft of the paper ready by Aug. 15th. I'm generating some complementary results, like running statistical significance tests on the performance of different dimensionality reduction techniques, calculating decoding errors for a few more parameter combinations per method, and generating revised versions of figures accommodating Phil's suggestions about their form.


Agenda: 08-03-2010

ICRA '11 paper

Looking at the literature on multirobot deployment/coverage, I saw that most people so far have been concerned with defining where the robots should be deployed, or with deploying robots incrementally ([4]). They were not concerned with what happens after the initial deployment, when one or more nodes in the network fail. I was also particularly interested in the papers that deal with fault-tolerant deployment/coverage by ensuring properties like biconnectivity or each robot having at least K neighbors ([5]), etc. So, my plan is to take these two principles one step further and do dynamic fault-tolerant deployment. That means maintaining the connectivity properties of fault-tolerant networks, but also having the multirobot system adjust to changes in the status of the robots (e.g. when a robot fails) or to changes in the network caused when a human controls one or more robots in the system. Ahmadi et al. [6] have worked in a similar direction. Our difference is that when a new robot comes, they estimate the position where this robot should be deployed; in our case, the teleop'ed robot is put in a fixed location and then the rest of the network has to adjust itself. With that said, the experiments I'm considering for the final paper are the following:
- Starting with an initially connected set of robots (or having all the robots in the same initial location), let them rearrange themselves and form a biconnected network that covers a prespecified area (see the biconnectivity sketch after this list).
- Have a user put a robot in a specific location (through teleop) and have the rest of the network rearrange itself. Try out teleop trajectories defined by the user. Possibly teleop more than one robot at a time.
- Kill a robot, let the network rearrange itself.
(- May need to add: find the optimal position of a new robot introduced in the network)
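
The biconnectivity requirement in the first experiment can be sanity-checked offline with a few lines; networkx is just a convenient stand-in here, and the positions and d_max below are made up:

  import itertools
  import networkx as nx

  def comm_graph(positions, d_max):
      """Edges between all pairs of robots within communication range d_max."""
      g = nx.Graph()
      g.add_nodes_from(range(len(positions)))
      for i, j in itertools.combinations(range(len(positions)), 2):
          (xi, yi), (xj, yj) = positions[i], positions[j]
          if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= d_max:
              g.add_edge(i, j)
      return g

  print(nx.is_biconnected(comm_graph([(0, 0), (4, 0), (2, 3)], 5.0)))  # triangle: True
  print(nx.is_biconnected(comm_graph([(0, 0), (4, 0), (8, 0)], 5.0)))  # line: False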

Meshnet Project

  • In the process of modularizing the existing code for navigation and wifi communication.
  • Demo for next time: given 3 connected robots (e.g. in a line), send messages among them so that they form a biconnected component (triangle).
  • Timeline till ICRA/proposal submission deadline (Sept. 15th):

    week 1
    - ICRA '11: finish up modularizing navigation and wifi communication code; demo: given 3 connected robots (e.g. in a line), send messages among them so that they form a biconnected component (triangle); ICRA outline
    - proposal: proposal outline/abstract

    week 2
    - ICRA '11: implement BP among robots (will consider using the MRF minimization library); ICRA outline
    - proposal: proposal outline/abstract; contact professors for committee

    week 3
    - ICRA '11: test BP, coordination demo with 4 robots; paper writeup
    - proposal: proposal writeup

    week 4
    - ICRA '11: BP with teleoperated robot (scale to 5 robots); paper writeup
    - proposal: proposal writeup

    week 5
    - ICRA '11: kill node(s) and rearrange (scale to 7 robots); paper writeup
    - proposal: proposal writeup

    week 6
    - ICRA '11: final experiments for paper and polishing; paper writeup
    - proposal: proposal writeup

Other

  • CD with media related to Chad's ONR award


Agenda: 07-27-2010

Artemis Project talk

  • Talk for the Artemis Project at Brown (slides). Went pretty well. The girls were excited to see potential applications of robots (robot dogs and more!)

Meshnet Project

  • Trying to handle the "illegal instruction" error that prevented position_tracker from running on the netbooks:
    • Reinstalled ROS and the Brown ROS packages as described in the wiki page.
    • Had to try different combinations of versions of position_tracker, ar_localizer, ar_recog to have something runnable on the netbooks. For now, I work with the latest version of ar_recog, a modified version of ar_localizer that ignores motion blur, and a compatible modified version of the released position_tracker.
    • Will try the cturtle version of ROS next, to test whether I can use the latest version of position_tracker/ar_localizer that handles motion blur.
  • Implementation of the "robot in the middle" demo: [7]. Next goal is to scale to more robots.
  • In the process of modularizing the existing navigation and wifi communication code into meaningful and self-contained ROS nodes. The goal of this is to separate the controllers from low-level navigation and wifi handling.
  • Tested Coimbra's wifi communication code using the OLSR daemon. I'll keep Coimbra's way of sending messages among the robots, but I will create (and share on the ROS mailing list) an alternative node for identifying wireless neighbors using BATMAN.


Agenda: 07-20-2010

Meshnet Project

  • Implementation of "robot in the middle" demo is done and works correctly for fixed position messages exchanged among the robots. Correctly means that the intermediate robot keeps the latest position message from its neighbors (as they are identified using BATMAN), finds the shortest path among its 2 neighbors, calculates the middle node in that shortest path and derives a path from its current position to the middle node. It needs to be tested in real life as soon as the "illegal instruction" error that position_tracker produces is fixed (error caused by some ros internal process, appears not only when running position_tracker, for some reason no error in the FitPCs - Chris has sent an e-mail to the ros-users list)
  • The Coimbra group published their code. They acknowledged us in the mailing list which is good, but they have a more promising approach for sending messages among the robots (they send out a topic, no detailed parsing of the received message needs to be done). I'm going to get elements from their approach that fit our case and possibly build on them.
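
A compact sketch of the middle-node computation described above; networkx's shortest_path stands in for the code's own Dijkstra on the map graph, and the toy graph is illustrative:

  import networkx as nx

  def middle_node(graph, neighbor_a, neighbor_b):
      # shortest path between the two neighbors, then take its midpoint node
      path = nx.shortest_path(graph, neighbor_a, neighbor_b)
      return path[len(path) // 2]

  g = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")])
  print(middle_node(g, "a", "e"))  # -> "c", the node the intermediate robot targets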

Next actions:

  • get ideas from Coimbra group to implement message passing using foreign_relay
  • test message fusion from neighbors in real life with moving robots
  • modularize code (e.g. separate navigation from message passing etc)


Agenda: 07-13-2010

Meshnet Project

Implementation of message fusion from different neighbors.
Demo ("robot in the middle") in progress: robot maintaining a position between a static neighbor and a teleoperated one. In order to do that, it leverages the position messages it receives from its neighbors.

Agenda: 07-6-2010

Schloss Dagstuhl

Comments after the seminar that are interesting to me and possibly to the lab:

  • CMU keeps collecting data. No bvh files yet, but that's on the todo list. For now, they focus on labeling the data using some form of mechanical turk.
  • Lifted BP, an accelerated version of BP developed by Kristian Kersting at the University of Bonn ([8]). BP is accelerated by reusing computations among factor nodes with similar structure. There is code in C++/Python online. For now, there is no heuristic to tell whether lifted BP will produce faster results than BP on a given graph.
  • A new student at TUM is continuing Jan's work on tracking, and we had a discussion about low-dimensional representations for tracking. I sent him the ISRR paper we wrote, Matei's paper on eigengrasps, and Marek's paper on physics-based tracking. Michael Beetz seemed to still be interested in low-dimensional/compact representations of motions.
  • Concluding the seminar, we decided to start working on a small book summarizing what this workshop was about, as an attempt to promote a unified concept of activity recognition.


Final talk slides: [9]

Agenda: 06-15-2010

Meshnet Project

  • Upgraded the netbooks (the ones used for the ICRA workshop experiments) to the new Brown ROS package
  • New ROS nodes towards message passing among robots:
    • batman_mesh_info: queries BATMAN and returns the list of neighbors and the corresponding link quality (see the sketch after this list)
    • coordination_client: ROS node for sending messages (the /position topic for now) to neighbors
    • coordination_server: ROS node for receiving messages from other robots and doing something interesting with that info
    • NEXT TODO: extend coordination_server to fuse info from its neighbors and do something interesting with it (e.g. get messages from your neighbors and set yourself to be in the middle of your 2 closest neighbors)
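
A rough sketch of the batman_mesh_info idea. The exact query command and its output format depend on the BATMAN variant in use, so the batctl invocation and the line shape assumed by the regex below are illustrative only:

  import re
  import subprocess

  def get_neighbors(cmd=("batctl", "o")):  # hypothetical invocation
      """Return {originator_mac: link_quality_out_of_255} parsed from BATMAN output."""
      out = subprocess.check_output(list(cmd)).decode("ascii", "ignore")
      neighbors = {}
      for line in out.splitlines():
          # assumed line shape: "aa:bb:cc:dd:ee:ff  0.84s  (200)  ..."
          m = re.match(r"\s*([0-9a-fA-F:]{17})\s+\S+\s+\((\d+)\)", line)
          if m:
              neighbors[m.group(1)] = int(m.group(2))
      return neighbors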

Dagstuhl '10 ("Understanding everyday activities" seminar)

  • Current version of poster [10]
  • Talk slides are coming next. Aiming for a practice talk tomorrow.

ICRA '11 paper

Plan: Internal:aggeliki_icra11

Agenda: 06-08-2010

Meshnet Project

Enhancing navigation functionality:

  • (Single robot) path planning:
    • Graph representation of the areas in the map where the robot can navigate.
    • Dijkstra's algorithm to find the shortest path between source and destination positions (a minimal version is sketched at the end of this list).
  • Exploration for a tag:
    • Random walk in the graph representation of the map till the tag we search for is encountered.
    • Demo: exploration for a tag (tag 11, to the left of the fountain) in the AI lab-Graphics lab corridors ([11])
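
Since path planning runs Dijkstra's algorithm on the graph representation, here is a minimal self-contained version over an adjacency-dict graph (the tiny corridor graph is illustrative, not the real map):

  import heapq

  def dijkstra(graph, source, target):
      """graph: {node: {neighbor: edge_length}}; returns the shortest node path."""
      queue = [(0.0, source, [source])]
      visited = set()
      while queue:
          cost, node, path = heapq.heappop(queue)
          if node == target:
              return path
          if node in visited:
              continue
          visited.add(node)
          for nbr, w in graph.get(node, {}).items():
              if nbr not in visited:
                  heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
      return None  # disconnected

  corridors = {"AI lab": {"corner": 5.0},
               "corner": {"AI lab": 5.0, "411": 7.0},
               "411": {"corner": 7.0}}
  print(dijkstra(corridors, "AI lab", "411"))  # ['AI lab', 'corner', '411']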

Agenda: 06-01-2010

Meshnet Project

Final navigation demo ([12])

  • iCreate moving from the AI lab to room 411 and back
  • localization based on tags/bumpers/odometry
  • only the tags in the corners/junctions are taken into account
  • map in the form of line representation of walls
  • intermediate navigation goals in the corners/junctions

Agenda: 05-25-2010

Meshnet Project

- Enhanced hallway navigation using map and bumper-based localization: math in place, more testing to be done

Agenda: 05-18-2010

Meshnet Project

- Navigation from the AI lab to room 411 using AR tags for localization (AI lab corridor navigation: [13])

(5-19-2010)

Update: navigation from the AI lab to room 411 (video at 3x speed [14]):

  • The robot starts from just outside the AI lab, facing tag 1 to get an accurate position estimate. It has 3 intermediate goal positions till the fountain is reached.
  • Whenever it bumps into the wall, it rotates by pi in order to see its nearby tag and localize itself.
  • Consecutive pi-rotations when bumping into a wall: the robot cannot read tags while rotating (due to camera blurring we would get tag misidentifications), so its current position estimate remains the same.
  • Backward motion at tag 5: the roomba passed the intermediate goal it had to reach and came back to reach it. Then it turned again to reach the next goal along the corridor.
  • AI lab -> fountain corner: the robot thinks it is at the other wall, so it tries to go diagonally to reach the water fountain. No luck (tag 10 cannot be recognized very well when the robot is in the nearby area with light). Eventually, by bumping into the wall many times and moving slowly forward each time, it reaches the corridor to the graphics lab.
  • Graphics lab corner: reached the goal in the corner, but got a bad position estimate due to tag misidentification. It goes back towards the fountain. However, its next goal was to go to the other end of the corridor (outside Eugene's office).
  • When it localizes itself correctly again, it attempts to go to Eugene's office in a straight line -> consecutive bumping into the left wall => I move it to navigate from the graphics lab to Eugene's office.
  • After the ex-office of Roberto (tag misidentification: the robot thought it was at the end)

Although I made sure all the tags were identified during installation, some of them are more vulnerable to misidentification from side views.

Neural project

- First submission of the work by Phil (Nov. '10 Neuroscience meeting)

Agenda: 05-11-2010

Meshnet Project

  • Plan of action after discussing w/ Chris
    Meshnet implementation:
    • 5/11/10: (ar_navigate) go to a specific point (x,y), starting from a random location in an open area, with AR-tag-based localization
    • 5/18/10: (ar_map_navigate) navigate between 2 points using a topological map of the environment, e.g. go from the AI lab to room 411 autonomously
    • 5/25/10: (ar_map_navigate_bumpers) incorporate bumpers into the navigation procedure
    • 6/1/10: (ar_explore) explore the area till an object of interest is detected
    • 6/8/10: (navigation_tracker_web) visualize the environment and the motion of robots in it (like the current position tracker web ROS node)
    • 6/15/10: (node_messaging) inter-robot communication
    • 6/19/10 - 7/5/10: 1 week Dagstuhl seminar, 1 week in Greece
    • July - August: incorporate wireless signal & experiments


    Meshnet deployment:

    • June - August: formulate multi-robot coordination algorithm
    • Sept 15th: ICRA deadline (and/or HRI?)


    Thesis proposal:

    • June: need to have a first version of a proposal (or abstract) to send to potential committee members?
    • Sept 15th: proposal document to the committee
  • Functionality implemented this week: (ar_navigate) starting from a random location in 404, go to the middle of the field in the room.
    • Video ([15]): The iCreate starts in front of a tag to get an accurate estimate of its current position. After a few seconds, it rotates to move towards the destination (the middle of the field). During its diagonal motion towards the goal, no tags are visible (they have to be at most 2 m away) and the localization relies solely on odometry; that's why we observe some curvature in the diagonal motion of the robot. After some new tags are recognized, the current estimate of the iCreate's position becomes more accurate. We see the iCreate actually turning and recognizing the destination position. Eventually it reaches a position close to the destination (based on its internal measurements of where it is, the destination has been reached) and performs small circular motions around the destination area.
  • Moving to the ar_map_navigate functionality: created and trained 15 new AR tags to instrument the corridors outside the AI lab.
  • Related to our coverage and wifi localization conversation last time, I encountered 2 papers (ICRA 2009) to keep in mind:
    • Nikolaus Correll, Jonathan Bachrach, Daniel Vickery, Daniela Rus, "Ad-hoc wireless network coverage with networked robots that cannot localize" ([16]):
      • 1-sentence summary: randomized deployment algorithm that adjusts to addition/deletion of nodes.
      • Useful things to keep in mind for our project: their real-life experiments involve 9 robots (we are close to that). The environment (the basement of the Stata Center) is a bit more complex than an open area, but not very complex (if we lay out the robots in a loop on the 4th floor (or around obstacles), this will be more interesting). We need to incorporate some form of wireless signal at the end to make the results compelling.
    • Karthik Dantu, Prakhar Goyal, Gaurav Sukhatme, "Relative bearing estimation from commodity radios", ICRA 2009 ([17])
      • 2-sentence summary: They sample signal strength in 8 directions around the current location of the robot. Then they run PCA to find the most probable direction of motion that leads to the neighboring node (thus, the relative orientation to the neighboring node is estimated).
      • Useful things to keep in mind when we incorporate wireless signal for localization: They got 20 degrees of deviation from the real bearing (orientation). Used up to 5 robots in real experiments in an open area. Elevated the antenna to eliminate multipath effects from the ground. When robots were placed randomly within a square of 20 m, their optimal sampling step size was 5 m (that's still a significant amount of area to explore). They focused on estimating the relative orientation between two robots, but the distance can also be estimated using the wireless signal propagation model.

Agenda: 05-04-2010

Meshnet Project

Enhanced last week's demo: the mobile robot performs 15 queries to BATMAN regarding the number of received packets and estimates an average, moves forward/backward based on the previous average signal, stops to get new measurements, etc. (the loop is sketched below).
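
The measure-then-move loop above, sketched with assumed helper names (query_batman_packet_count and the two motion primitives are hypothetical stand-ins for the actual BATMAN query and drive code):

  def averaged_link_estimate(query_batman_packet_count, n_samples=15):
      # the robot stops, takes 15 packet-count readings from BATMAN, averages them
      samples = [query_batman_packet_count() for _ in range(n_samples)]
      return sum(samples) / float(len(samples))

  def control_step(prev_avg, query_batman_packet_count, move_forward, move_backward):
      # one plausible decision rule: keep moving away while the averaged link
      # quality holds up; back off toward the static node when it degrades
      avg = averaged_link_estimate(query_batman_packet_count)
      if avg >= prev_avg:
          move_forward()
      else:
          move_backward()
      return avg  # becomes prev_avg for the next iteration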

Demo video: a robot starting from a distance greater than x in order to achieve distance x with the static node is shown here (I had problems with xvidcap capturing the screen for around 5 min. We will see a video of what the robot displays on its screen ([18]) and another video ([19]) of where it moves, side by side).

Conclusion: Using the packet loss rate for bringing 2 robots (1 static, 1 mobile) to the max distance while remaining in range is problematic.

The percentage of received packets is large and almost constant up to a radius x around the static robot. Beyond distance x, the percentage of received packets decreases exponentially with distance (also documented here: [20]). In practice, that means that starting from a distance greater than x, we can achieve distance x with the static robot. Starting from a distance less than x, we cannot find the shortest way to achieve distance x considering only the number of received packets (but we can have the robot always move forward till it crosses the x-radius circle around the static node).

Need to work with the wireless signal strength, which provides greater granularity in the measurements. That may also be a good idea given that BATMAN is slow ([21]) at updating link qualities for highly mobile nodes.

Tried the Medialink MWN-USB54G wireless chipset, which is detected as a RaLink Technology RT2501USB wireless adapter and is supposed to work according to [22], but in practice the wireless statistics functionality is not implemented for ad-hoc links.

Other possible solutions:

  • modify driver to get signal strength measurements ([23])
  • have 2 wireless interfaces per robot: the 1st one is used for mesh networking; the 2nd one is used to create a wireless network per robot and detect link quality with the rest of the robot-generated networks.

Agenda: 04-27-2010

Meshnet Project

    Demo: 1 static node, 1 mobile node moving forwards/backwards. The mobile node tries to be within max range of the static one while maintaining a desired level of link quality.
    • Conclusion: BATMAN's packet-based metric for link quality fluctuates noticeably, which in turn causes continuous back-and-forth motions of the mobile node. E.g. for a fixed distance between the 2 nodes, the link quality can change by up to 40 units (out of 255) = 15% of the overall range of values. I expect bigger deviations as we add more and more nodes to the network. To improve the link quality measurements, we need to use either a more robust technique for link quality estimation or a wireless chipset/driver that supports wireless statistics collection.
    • Next step: get a more accurate measurement of the link quality using the signal strength measured by the wireless interface in the netbook.
      • The current driver for the wireless interface (Asus EEEPC 701SD chipset, RealTek 8187SE (RT8187SE) driver) doesn't support the collection of wireless statistics. Neither do the additional drivers that I tried.
      • BUT, the information we want (link quality in dB for each of the links in the ad-hoc net) should be sensed somehow, even though it is not presented to the user.
      • Next steps towards measuring link quality (TODO): experiment with virtual wireless interfaces in monitor mode; try out more robust techniques for estimating signal strength using BATMAN's info (e.g. average measurements over a window, etc.); experiment with Ubuntu network monitoring tools for ad-hoc links.

Agenda: 04-20-2010

Meshnet Project

  • Wed (4/14/10) Demo: Web teleop of an iCreate with video feedback. Teleoperation is performed over a wireless mesh network with 5 nodes running BATMAN. The user is in the AI lab. The mesh network extends the communication range between the user and the iCreate, enabling the robot to go from the AI lab to the 4th floor kitchen and back.

  • Reading related work on coverage, chain formation.

  • Experimental setup to implement

    Message:
    Combine sensor networks with mobile robots so that each enhances the functionality of the other.
    • Existing sensors may not have been placed in an optimal way in the environment, or some areas do not have sensors because they do not require constant monitoring (or we just don't have enough sensors due to their cost) => use mobile nodes to extend the sensing range of the sensor network => extend the "sensing space" of the whole robotic system
    • On the other hand, mobile nodes have limited communication range => use the sensor network to extend the communication range between mobile robots => extend the "actuation space" of the whole robotic system

    Q: Why not have all the nodes mobile?
    A: Mobile sensor nodes are more expensive than static ones. Plus, the size of mobile robots acting as sensors may be prohibitive for some applications.

    Experiment (house surveillance):
    The "house" is room 404. We have a camera in the entrance so that we see who wants to get in. We also have a primitive sensor network in the atrium consisting of 2 static nodes streaming video back to the user. The user is inside a computer in 404 (may be able to have a virtual interface running BATMAN in one of the computers in 404)

    • After we hear a sudden noise, we send our mobile robots (2 robots) to cover the area and send us back video feedback. (=> Mobile robots extend the sensing range of the sensor network).
    • The user brings in a new robot (teleoperating it) and positions it wherever she likes, e.g. in front of the corridor just to the left of room 404. The rest of the mobile nodes in the network should rearrange themselves so that we get optimal coverage given the new constraints. The user repositions the robot to a different place (e.g. facing the corridor in front of Genie's desk) and the rest of the mobile nodes in the network adjust their positions again. (=> The sensing range of the sensor network is extended on demand (by the user))
    • After the network is formed, we teleoperate an iCreate (or a NAO, if possible by then) to get an object within the range of the network and bring it back to / push it towards the user in 404 (=> use the sensor network to extend the communication range between the robotic nodes in the network).
    • The robots are called back to help with other tasks inside the house and they are released again to form the robot-sensor network.

    Conceptual contribution:

    • Extension of the [Gasparri et al. 2008] paper (http://www.springerlink.com/content/a13w735652881014/): They do coverage using mobile robots, taking into account an existing sensor network. They use the static sensor network nodes to determine the paths of the mobile nodes in a centralized way.
    • Instead, I propose a decentralized approach to "determine the position of the mobile nodes based on the static nodes, the other mobile nodes and (possibly) the user"

    [Gasparri et al. 2008] Andrea Gasparri, Bhaskar Krishnamachari and Gaurav Sukhatme, "A framework for multi-robot node coverage in sensor networks", Journal of Annals of Mathematics and Artificial Intelligence, 2008

Agenda: 04-12-2010

Meshnet Project

  • Demo: Rotating roomba sending camera feedback to Java program
  • Todo next:
    • Web teleop with video streaming to the user:
      • Improve image display by not storing intermediate images
      • Convert code to Java applet
      • Combine w/ Trevor's web teleop interface
    • Network visualization

Proposal

TOC and summary (.txt)