Internal:Bthomas weekly agenda spring 2011
Agenda: 2011/05/10 T

FrameNet Notes:
* Introduction
** What is FrameNet?
*** Valency = the number of arguments controlled by a verbal predicate.
**** Arguments include subject…
Agenda: 2011/05/03 T
move, go, grab, fetch, retrieve, find, pick up, put down, place, set, assemble, combine, disassemble, break, follow, track, watch, open (hinged), close (hinged), open (drawer), close (drawer), open (jar), close (jar), turn on, turn off, stir, chop, mix, shake, pour, fold, pose (for a picture), shake (hands), clean
Agenda: 2011/04/26 T
Agenda: 2011/04/19 T
Agenda: 2011/04/12 T
Agenda: 2011/04/05 T
Agenda: 2011/03/29 T
This paper claims that robots interacting with humans need to be able to establish common ground. The common ground postulate states that people in conversation minimize the collective effort necessary to gain understanding; that is, they communicate with sufficient detail that each understands, but no more. Humans do this innately by modeling their peers, estimating their knowledge, and thus inferring the pair's common ground; robots lack this ability and often frustrate their human partners by saying too much or too little. Conversely, humans lack mental models of robots and thus often impair a robot's ability to understand, again by saying too much or too little. Human mental models of robots can be bootstrapped from our models of other humans by giving robots appropriate human-like cues, for instance by having robots don job-appropriate attire. The effect of robot appearance on human perception of common ground is demonstrated experimentally in an HRI experiment with a robot "dating counselor": the robot is given either a male or a female appearance and interacts with both male and female participants to "build a database" of dating knowledge, and the length of each interaction is recorded.
This paper makes an interesting point: a single mode of interaction between humans and robots is insufficient, because different humans have different needs owing to different background knowledge, and the same applies to robots. However, while this claim is implied several times in the paper and supported by various previous experiments, it is never explicitly stated and instead hides under the guise of "different common grounds". Further, although the paper illustrates that these different common grounds exist, it does not attempt to quantify them (except in one previous experiment) or show how numerically significant the differences are. This is particularly noticeable in the paper's own experiment: the claims of observing different common grounds with the dating-counselor robot lack numerical evidence to support them. It would be interesting to see how significant these differences are, especially since the paper itself names simple, easily measurable metrics (for instance, number of words). Finally, most of the paper summarizes others' work, and little space is devoted to the paper's new contribution; a more formal presentation of the experiment would have been welcome.
Agenda: 2011/03/15 T
As robots become increasingly prevalent and increasingly complicated, a gap exists between a robot's capabilities and people's ability to control the robot to accomplish their goals. One potential solution to this problem is language-based communication. We examine two problems related to this effort. First, we investigate the use of dialog -- a restricted but expressive subset of natural language -- to give end users access to a greater range of robotic capabilities. Second, because the robot uses this dialog to interact with the real world, we explore the grounding of spoken nouns and verbs into objects and actions, respectively. While previous work exists in both areas, the emergence of ROS and its community's codebase provides a base upon which we build a framework that implements our dialog system, grounds a varied set of household actions and objects, and demonstrates several real-world use cases.
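To make the grounding problem concrete, here is a minimal sketch of mapping a spoken "verb noun" command onto a grounded action and object. The lexicon tables, Command structure, and function names are illustrative assumptions, not the framework's actual API, and the ROS plumbing is omitted.

```python
# Minimal sketch of grounding spoken nouns/verbs into objects/actions.
# The lexicon tables and Command structure are illustrative assumptions,
# not the framework's actual API.
from dataclasses import dataclass

NOUN_GROUNDINGS = {"cup": "object_cup_01", "table": "object_table_01"}
VERB_GROUNDINGS = {"grab": "action_grasp", "move": "action_navigate"}

@dataclass
class Command:
    action: str   # grounded action identifier
    target: str   # grounded object identifier

def ground(utterance: str) -> Command:
    """Ground a two-word 'verb noun' utterance; raise if a word is unknown."""
    verb, noun = utterance.lower().split()
    try:
        return Command(VERB_GROUNDINGS[verb], NOUN_GROUNDINGS[noun])
    except KeyError as missing:
        # In a dialog system, an unknown word would trigger a clarification
        # question rather than an error.
        raise ValueError(f"cannot ground {missing} in: {utterance!r}")

print(ground("grab cup"))  # Command(action='action_grasp', target='object_cup_01')
```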
This paper presents a model (ROGER) and method for automatically and simultaneously segmenting unlabelled training data into subtasks and learning these subtasks using an infinite mixture of Gaussian process experts. The process uses SOGP both to incrementally learn a latent control policy (theoretically allowing for online learning) and to allow real-time data processing (afforded by the model's sparsity). Partitioning between subtasks is achieved using a Chinese restaurant process. Inference is performed incrementally for each new particle by assigning it to each expert, determining the resultant likelihoods, and using optimal thresholding to carry forward some [one?] of these assignments. Prediction is achieved by picking a particle, choosing an expert for that particle (using the transition matrix), and generating an output using that expert's SOGP regressor. The approach is experimentally validated by learning from a hand-coded controller and comparing the learned controller's performance against the original in the task of goal scoring for robot soccer.
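The prediction step described above can be sketched as follows. This is a toy rendering under stated assumptions: plain Python callables stand in for the paper's SOGP regressors, and the particle weights and transition matrix are made up for illustration.

```python
# Sketch of ROGER's prediction step as summarized above: sample a particle,
# pick the next expert from that particle's transition matrix, and predict
# with that expert's regressor. Plain callables stand in for SOGP experts.
import numpy as np

rng = np.random.default_rng(0)

class Particle:
    def __init__(self, weight, current_expert, transition, experts):
        self.weight = weight                  # particle weight
        self.current_expert = current_expert  # index of the active expert
        self.transition = transition          # row-stochastic expert-transition matrix
        self.experts = experts                # one regressor per subtask

def predict(particles, x):
    weights = np.array([p.weight for p in particles])
    p = rng.choice(particles, p=weights / weights.sum())              # sample a particle
    k = rng.choice(len(p.experts), p=p.transition[p.current_expert])  # choose next expert
    return p.experts[k](x)                                            # expert's prediction

# Two toy "experts" (stand-ins for SOGP regressors) and a single particle.
experts = [lambda x: 2.0 * x, lambda x: -x + 1.0]
T = np.array([[0.9, 0.1], [0.2, 0.8]])
print(predict([Particle(1.0, 0, T, experts)], 0.5))
```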
The paper presents an interesting idea for extending learning from demonstration to multimap scenarios. Comparing the performance of the implemented system against both the hand-coded controller and one of the previous best-performing implementations establishes both the performance gained by the developed process and the gap between current and optimal performance. However, the current implementation requires a transition map to be provided; it would be intriguing to see this generated automatically in the future. Further, the current implementation requires significant selection of hyperparameters, although the authors claim the algorithm is fairly robust to this selection. [I don't know if this actually can be changed significantly.] The section on segmentation analysis, while offering interesting insights, could be shortened; pairing the problems it cites with proposed fixes or research directions would strengthen it. Finally, the paper mentions that many parameters were chosen based on computational limits. Would it be possible to analyze the effect computational power has on the algorithm?
[Note: Although I understood this paper at a high level, the machine learning behind it is still opaque to me. In particular, I know little more than the name for: Gaussian processes, SOGP, Inverse-Wishart distributions, POMDP, DPA.]
This paper presents a method for mapping symbolic names of objects to facts about those objects in a knowledge base and implements it as KNOWROB-MAP. KNOWROB-MAP leverages KNOWROB to provide symbolic object names in an environment. OMICS (the Open Mind Indoor Common Sense project, a database of commonsense knowledge for indoor mobile robots) is used in conjunction with Cyc (which categorizes and provides dictionary descriptions of objects) via WordNet, which maps the natural-language descriptions in OMICS to word meanings. (A map between these meanings and Cyc already exists.) By combining these databases, formal ontological concepts of words are formed. This knowledge is represented in the Web Ontology Language (OWL), which distinguishes between instances and classes and additionally provides connections between instances/classes via roles. The concept is further expanded into probabilistic environmental models using Bayesian Logic Networks. [I don't know about these yet and thus don't quite understand the reasoning behind this section.] Finally, a ROS service is provided to enable language-independent queries of KNOWROB-MAP. The efficacy of the system was tested with the instruction "clean the table".
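As a toy illustration of the database chaining described above, the sketch below resolves an OMICS natural-language term to a formal Cyc concept via a WordNet synset. Both mapping tables are tiny stand-ins for the real databases, and the identifiers are assumptions.

```python
# Toy illustration of chaining OMICS -> WordNet -> Cyc, as described above.
# Both mapping tables are stand-ins; the real system links full databases.
OMICS_TO_WORDNET = {"fridge": "refrigerator.n.01"}      # NL term -> WordNet synset
WORDNET_TO_CYC = {"refrigerator.n.01": "Refrigerator"}  # synset -> Cyc concept

def ontological_concept(term: str) -> str:
    """Resolve an OMICS term to a formal Cyc concept via WordNet."""
    synset = OMICS_TO_WORDNET[term]
    return WORDNET_TO_CYC[synset]

print(ontological_concept("fridge"))  # Refrigerator
```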
Demonstrating the power of connecting multiple large-scale databases is an intriguing concept, as is the fact that the connection was made automatically. However, the performance of KNOWROB-MAP is evaluated with only a single query, which in many ways fails to demonstrate the power of the system. It would be interesting to see how KNOWROB-MAP performs with other queries; in particular, what would happen if typical people tried to instruct the robot to do something? Further, seeing a robot actually perform this task, instead of detailing the outcome of a query, could add credence to the merits of KNOWROB-MAP. Using standard languages such as OWL [is this actually standard?] and connecting KNOWROB-MAP to ROS will enable others to use this software with minimal effort. No mention of computational time and scalability is made; is it always trivial? (One concern: ROS service calls are blocking.) Finally, it would be nice if the section on probabilistic environmental models were elaborated more thoroughly; the implementation descriptions in the earlier sections of the paper could be shortened to accommodate this.
Agenda: 2011/03/01 T
This paper presents a method, implemented in a system called "Marco", for following natural-language route instructions using four actions with pre- and post-conditions. Natural-language instructions are modelled by parsing, extracting each sentence's surface meaning, and modelling inter-sentence spatial and linguistic knowledge. Given this model and the perception of the environment, an executor determines which of the four actions (namely, Turn, Travel, Verify, and Declare-goal) to take, a process dubbed "compound action specification". Implicit actions (Travel and Turn) are inferred when necessary. (For instance, the instruction "Go to the chair." may first require Marco to turn to find the chair.) To evaluate Marco's performance, approximately 700 instructions were created by 6 participants over 3 virtual worlds. Another set of 36 participants followed these instructions. Each instruction was followed 6 times, and both success at reaching the desired goal point and the participant's subjective rating of the instruction were recorded. Each instruction was parsed and hand-verified for Marco, and Marco attempted to follow each parsed instruction set. A statistically significant difference was found between Marco's abilities with implicit actions and without them.
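The implicit-action inference can be sketched as a precondition check in the executor loop: if a Travel's target is not yet visible, an implicit Turn is inserted until it is. Only the four action names come from the paper; the world model and percepts below are assumptions.

```python
# Sketch of Marco-style implicit-action inference: "go to the chair" cannot
# start until the chair is in view, so implicit Turns are inserted first.
# The world model and visibility percepts here are assumptions.
def execute(instructions, world):
    for action, arg in instructions:          # e.g. ("Travel", "chair")
        if action == "Travel" and arg not in world["visible"]:
            # Precondition failed: infer implicit Turns to find the target.
            while arg not in world["visible"]:
                turn(world)
        step(action, arg, world)

def turn(world):
    world["heading"] = (world["heading"] + 90) % 360
    world["visible"] = world["views"][world["heading"]]

def step(action, arg, world):
    print(f"{action}({arg}) at heading {world['heading']}")

world = {"heading": 0, "views": {0: [], 90: ["chair"], 180: [], 270: []},
         "visible": []}
execute([("Travel", "chair"), ("Declare-goal", None)], world)
```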
The paper suggests that the four actions presented are sufficient for many route-following tasks. However, many real-world obstacles, such as doors, stairs, and multiple floors, are typically present. It would be interesting to see how the approach could scale to a real-world domain, and it would be especially enlightening to see the task performed in the real world by a robot. The evaluation procedure for Marco differed from that used for the human participants. (Humans started facing a random direction, while Marco made four attempts from four different directions and averaged (How?) the results.) It would be helpful if a rationale for this decision were included or if the data were reevaluated with Marco starting in a random orientation to match the human trials. Finally, the paper's abstract claims that Marco "follows free-form, natural language route instructions". However, only hand-parsed trees were evaluated, and the parsing methodology was only briefly discussed. Please provide a rationale for why the parser was not used in the evaluation. Further, more detail on the parsing involved -- especially on how pre-conditions and post-conditions were formed -- would be appreciated. The section comparing this paper to Instruction-Based Learning could be cut to make space for this.
Agenda: 2011/02/22 T
Agenda: 2011/02/15 T
Agenda: 2011/02/08 T