Internal:Bthomas weekly agenda

From Brown University Robotics

Revision as of 15:07, 14 March 2011 by Bthomas (Talk | contribs)


Agenda: 2011/03/15 T


  • Slides for robot dialog
  • R+R Beetz (TUM) KNOWROB-MAP -- Knowledge-Linked Semantic Object Maps

This paper presents a method for mapping symbolic names of objects to facts about those objects in a knowledge base and implements it as KNOWROB-MAP. KNOWROB-MAP leverages KNOWROB to provide symbolic object names in an environment. OMICS (the Open Mind Indoor Common Sense project, a database of commonsense knowledge for indoor mobile robots) is used in conjunction with Cyc (which categorizes and provides dictionary descriptions of objects) via WordNet, which maps the natural-language descriptions in OMICS to word meanings. (A map between these meanings and Cyc already exists.) By combining these databases, formal ontological concepts of words are formed. This knowledge is represented in the Web Ontology Language (OWL), which distinguishes instances from classes and additionally connects instances and classes via roles. The concept is further expanded into probabilistic environmental models using Bayesian Logic Networks. [I don't know about these yet and thus don't quite understand the reasoning behind this section.] Finally, a ROS service is provided to enable language-independent queries of KNOWROB-MAP. The efficacy of the system was tested with the instruction "clean the table".
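The database chaining described above can be sketched as a pair of dictionary lookups. Everything below (the table contents, the synset name, the Cyc concept) is an invented stand-in for the real OMICS, WordNet, and Cyc data, not the actual KNOWROB-MAP code:

```python
# Toy stand-ins for the three databases; real contents are far larger.

# OMICS stores commonsense facts keyed by natural-language object names.
omics = {"cup": ["is found in kitchens", "can hold liquid"]}

# WordNet maps natural-language words to word senses (synsets).
wordnet = {"cup": "cup.n.01"}

# A pre-existing mapping links WordNet synsets to Cyc concepts.
wordnet_to_cyc = {"cup.n.01": "DrinkingMug"}

def ground_object(name):
    """Map an object's natural-language name to a formal concept plus facts."""
    synset = wordnet.get(name)            # word -> word sense
    concept = wordnet_to_cyc.get(synset)  # word sense -> ontological concept
    facts = omics.get(name, [])           # word -> commonsense facts
    return concept, facts
```

Chaining the lookups is what lets a symbolic object name in the map pull in both a formal concept and commonsense facts about it.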

Demonstrating the power of connecting multiple large-scale databases is an intriguing concept, as is the fact that this connection was made automatically. However, the performance of KNOWROB-MAP is evaluated with only a single query, which in many ways fails to demonstrate the power of the system. It would be interesting to see how KNOWROB-MAP performs with other queries; in particular, what would happen if typical people tried to instruct the robot to do something? Further, seeing a robot actually perform this task, instead of detailing the outcome of a query, could add credence to the merits of KNOWROB-MAP. Using standard languages such as OWL [is this actually standard?] and connecting KNOWROB-MAP to ROS will enable others to use this software with minimal effort. No mention of computational time or scalability was made; is it always trivial? (One concern: ROS service calls are blocking.) Finally, it would be nice if the section on probabilistic environmental models were elaborated more thoroughly; the implementation descriptions throughout the previous sections of the paper could be shortened to accommodate this.

    • This could tie in nicely to the robot dialog project.

Working on:

  • Demos for recruiting weekend: AR.Drone wiimote + Nolan3D
  • Robot dialog
    • Setting up github
    • Using actionlib to allow for preemptable, non-blocking routines. (e.g., this will work really well for the "until" statement.)
  • Advisor + 2 committee members
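A minimal sketch of what the preemptable-routine idea buys us, in plain Python rather than the real actionlib API (the class and method names below are my own invention): the worker polls a preempt flag each iteration, so an "until <condition>" loop can be cancelled cleanly from outside.

```python
import threading
import time

class PreemptableAction:
    """Toy stand-in for an actionlib-style preemptable routine."""

    def __init__(self):
        self._preempt = threading.Event()
        self.iterations = 0

    def preempt(self):
        """Request cancellation from another thread."""
        self._preempt.set()

    def run_until(self, condition, step, poll=0.01):
        """Run `step` until `condition()` holds or the action is preempted."""
        while not condition() and not self._preempt.is_set():
            step()
            self.iterations += 1
            time.sleep(poll)
        return "succeeded" if condition() else "preempted"
```

Because the loop checks the flag between steps, a higher-level routine can interrupt it at any iteration boundary instead of blocking until completion, which is the behavior the "until" statement needs.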

Agenda: 2011/03/01 T

  • Kuipers Walk the talk: Connecting language, knowledge, and action in route instructions (AAAI 2006)

This paper presents a method for following natural-language route instructions using four actions with pre- and post-conditions, implemented in a system called "Marco". Natural-language instructions were modelled by parsing, extracting each sentence's surface meaning, and modelling inter-sentence spatial and linguistic knowledge. Given this model and the perception of the environment, an executor determines which of the four actions (namely, Turn, Travel, Verify, and Declare-goal) to take, a process dubbed "compound action specification". Implicit actions, such as Travel and Turn, are inferred when necessary. (For instance, the instruction "Go to the chair." may first require Marco to turn to find the chair.) To evaluate Marco's performance, approximately 700 instructions were created by 6 participants over 3 virtual worlds. Another set of 36 participants followed these instructions. Each instruction was followed 6 times, and both success at reaching the desired goal point and the participant's subjective rating of the instruction were recorded. Each instruction was parsed and hand-verified for Marco, and Marco attempted to follow each parsed instruction set. A statistically significant difference was found between Marco's performance with implicit actions and without them.
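A toy sketch of the implicit-action inference described above (not Marco's actual implementation; the function and its in-view check are invented for illustration): before a Travel toward a target, a Turn is inserted whenever the target is not currently visible.

```python
def compound_action(target, visible_objects):
    """Expand a surface-level 'go to <target>' into explicit actions.

    `visible_objects` is a set of object names currently in view;
    the action vocabulary mirrors Marco's Turn/Travel/Verify.
    """
    actions = []
    if target not in visible_objects:
        # Implicit action: face the target before travelling toward it.
        actions.append(("Turn", target))
    actions.append(("Travel", target))
    actions.append(("Verify", target))  # check the post-condition
    return actions
```

The point is that the surface form of the instruction never mentions turning; the executor fills in the gap from perception, which is what drove the significant performance difference reported.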

The paper suggests that the four actions presented are sufficient for many route-following tasks. However, many real-world obstacles, such as doors, stairs, and multiple floors, are typically present. It would be interesting to see how this could scale to a real-world domain, and it would be especially enlightening to see the task performed in the real world by a robot. The evaluation of Marco's performance also differed from that of the humans: humans started in a random direction, while Marco made four attempts from four different directions and averaged (how?) the results. It would be helpful if a rationale for this decision were included or if the data were reevaluated with Marco starting in a random orientation to match the human trials. Finally, the paper's abstract claims that Marco "follows free-form, natural language route instructions". However, only hand-parsed trees were evaluated, and the parsing methodology was only briefly discussed. A rationale for why the parser was not used in the evaluation should be provided. Further, more detail on the parsing involved -- especially on how pre-conditions and post-conditions were formed -- would be appreciated. The section comparing this paper to Instruction-Based Learning could be cut to make room for this.

  • ... of particular interest to me (next paper to read?):
    • Complicated executors (as opposed to one-at-a-time executors)
      • Full action sequencers
        • RAPs (Bonnasso etal 1997)
        • TDL (Simmons etal 2003)
      • Reasoning on an inferred route topology (Kuipers etal 2004)
  • TODO: I need a 3-person committee (advisor + 2) by 2011/03/15 T. Research proposal must be presented by 2011/04/21 R. Any thoughts?
  • Finished ROS smach tutorials
    • Pros of system?
      • Abstracts lots of FSM details
        • Regular FSMs
        • Some amount of concurrency
        • Has data passing
      • Visualizer is really nice
      • Generic state types are nice
    • Cons of system?
      • Data passing is annoying
        • There's no known way to connect two data fields that are named differently in the global space, so mass renaming is the only solution
      • Smach creates "just a big chunk of code": no ROS node starting/killing
        • That means all the functionalities you might ever need have to be started on boot
        • Can we code something to manage this process, even if it's Linux-/Ubuntu-specific?
      • Concurrence requires all children to terminate. Can we hack around this? (Do we need to?)
      • You have to make everything you want to code a state.
        • Somehow, you'll also have to specify what they need node-wise to run.
        • smach obviously wasn't designed for the task at hand in this regard
    • Things smach probably can solve
      • Deterministic FSM, where each node is independent
    • Things smach may not be able to solve
      • Perception requiring motor control. In other words, combining multiple overlapping motor requests intelligently. [Note: I think this is a very interesting general question.]
      • Created state machines may be _just barely_ human-readable
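To make the data-passing complaint concrete, here is a self-contained toy state machine in plain Python (not the real smach API; class names and the remapping scheme are simplified stand-ins): each state's local userdata keys must be remapped onto global names, which is the renaming annoyance noted above.

```python
class State:
    """A state with named outcomes; subclasses implement execute()."""
    def __init__(self, outcomes):
        self.outcomes = outcomes
    def execute(self, userdata):
        raise NotImplementedError

class StateMachine:
    """Minimal smach-like container with userdata remapping."""
    def __init__(self, initial):
        self.states = {}   # label -> (state, transitions, remapping)
        self.initial = initial
        self.userdata = {}

    def add(self, label, state, transitions, remapping=None):
        self.states[label] = (state, transitions, remapping or {})

    def execute(self):
        label = self.initial
        while label in self.states:
            state, transitions, remap = self.states[label]
            # Translate global keys to the state's local names...
            local = {loc: self.userdata[glob] for loc, glob in remap.items()
                     if glob in self.userdata}
            outcome = state.execute(local)
            # ...and copy local values back out under the global names.
            for loc, glob in remap.items():
                if loc in local:
                    self.userdata[glob] = local[loc]
            label = transitions[outcome]
        return label  # an outcome with no registered state ends the machine

class Count(State):
    """Example state: increments a local counter key 'n'."""
    def __init__(self):
        super().__init__(["done"])
    def execute(self, userdata):
        userdata["n"] = userdata.get("n", 0) + 1
        return "done"

sm = StateMachine(initial="COUNT")
sm.add("COUNT", Count(), transitions={"done": "finished"},
       remapping={"n": "global_count"})
final = sm.execute()  # runs COUNT once, then halts at "finished"
```

Even in this toy version, every state needs an explicit remapping entry per key; with many states sharing data, that per-key bookkeeping is exactly where the mass-renaming pain comes from.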

Agenda: 2011/02/22 T

  • Chernova and Breazeal: Crowdsourcing HRI Through Online Multiplayer Games (AAAI 2010)
  • Roy: Toward Understanding Natural Language Directions (HRI 2010)
  • Cleaned up the lab; it's pretty now.
  • Presented AR.Drone demo to PhD recruiting heads. They want to show off the lab to everyone. The demo is on 2011/03/18 F from 2pm-3pm and will include:
    • AR.Drone + wiimote
    • nolan3d
    • Chad talking about the lab and what we do?

Agenda: 2011/02/15 T

  • Got AR.Drones ready for Chad's presentation, and documented process here.

Agenda: 2011/02/08 T

  • Done
    • Ordered AR.Drone replacement parts; arrival in ~1-2 weeks?
  • In progress (highest priority first)
    • This semester's courses: machine learning (Erik), robots for education (Chad), reviewing linear algebra (self)
    • Get AR.Drones ready for Chad's presentation this weekend
    • Starting to work on robot dialog project with Pete White

Previous semesters

Fall 2010