Archive for January, 2012

After 15 months of waiting, our Kuka YouBot finally arrived, or at least the base has arrived. I saw a working prototype of the YouBot at IROS 2008 and, since then, have been anxiously anticipating getting one of these for the lab. This is one of the few commercial off-the-shelf mobile manipulation platforms, although we will do no manipulation with it until the arm arrives (expected “Week 12” of 2012). Until then, we hope to use the platform in some interesting projects involving rosbridge, such as the remote lab. However, it seems that we first have to address a few issues with the software that came with the YouBot.
One of the biggest frustrations as a researcher is getting a “bad” review for a paper. I do not mean “bad” as in a review that recommended rejection from a conference or journal. I also do not mean “bad” as in a review that looked unfavorably on my work, or had highly pointed criticisms. I mean a bad review where it is not clear the reviewer read the paper at any depth or provided useful objective feedback that can help improve my work. These are often cases where the reviewer put minimal work into the review or has a dogmatic view of the area. While such reviews are often attached to rejections, I have experienced bad reviewing with papers that have been accepted, leading to papers on my CV of which I am less proud. Researchers put considerable time and effort into their work and into authoring papers. Further, there are often valuable and useful insights in many of the papers that are not accepted to a given conference or journal. Bad reviewing is disrespectful and unfair to authors, in addition to wasting their time. More importantly, bad reviewing spreads dissension among authors, who may themselves choose to become bad reviewers.
I believe the essence of good reviewing is to provide clear feedback such that a paper can be accepted (even with minor revisions), or revised and improved for acceptance at future conferences. While this feedback includes a judgment on acceptance, improving the quality of work in the community through constructive feedback is valued over reviewing purely for quality control. As such, I often find a typical review of good quality has at least the following three elements/paragraphs:
1) Summarize the paper’s major contributions (3-10 sentences). Provide the best good-faith summarization of the paper’s claims, methods, and results. This paragraph establishes the reviewer’s understanding of the work, which will both support further comments/criticisms and allow misunderstandings to be identified. Do not comment or opine on the value of the work in this paragraph.
2) Summarize the reviewer’s opinion of the paper (3-10 sentences). Identify the strengths and weaknesses of the paper with respect to criteria such as: relevance, conceptual novelty, technical soundness, quality of evaluation, clarity in composition and organization, etc. Also, specify your relative confidence in the review, or in aspects of it. For the weak aspects of the paper, provide high-level suggestions that could improve the paper to the level of acceptability. Do not be snarky or dismissive. This is not about finding faults. It is about improving the quality of the work or finding new opportunities for research.
3) Detailed comments (as many paragraphs as necessary). Address any specific points, positive or negative, in detail. I typically do this as a bulleted list of paragraphs. Each paragraph goes into detail about various points mentioned in the review summary and beyond, such as identifying spelling/grammatical errors and pointers to relevant related work.
I think of writing a review as crafting an argument or essay about the validity or invalidity of a paper. Point (1) establishes the foundational premises based on your understanding of the paper. Point (2) is your conclusion or thesis statement about the validity/invalidity of the paper. Points (3)+ build your argument to the conclusion.
Further, there are a number of good reviewing practices that can be helpful to authors:
- If you believe the paper is not sufficiently novel, this claim needs to be backed up with citations to at least 3 papers (ideally authored by different research groups). If you cannot think of 3 other related papers, then maybe there is room in the area for new work. Similarly, if you believe a paper is subsumed by another work, this should be said explicitly.
- If the paper suffers from many grammatical and spelling issues, point out and correct a reasonable subset of these errors from the paper, and suggest further proofreading.
- Find and point out all forward references of terms in the paper and undefined acronyms.
Presenting talks at conferences and workshops is an essential part of research. Presentations are one of the main ways we disseminate what we have learned to our research community and the greater public. My main goal in presentations is to get the audience excited about my problem and results such that they want to read and implement the ideas in the paper. In this regard, conference presentations are like short advertisements for your work that balance substance and style. While the substance of your talk will vary based on the project, the elements of a good conference presentation stay relatively the same.
There are a number of great tutorials on giving conference presentations. I have found this advice from Mark Hill to be a very useful guide, especially the “How to Give a Bad Talk by David Patterson” section. I adapted his general presentation structure below to something that seems more in accordance with robotics and AI:
- Title/author/affiliation/presenter (1 slide)
- Forecast (1 slide) Foreshadow problem addressed and insight found (What is the one idea you want people to leave with? This is the “abstract” of an oral presentation. If possible, a very brief video is a great way to illustrate your “take home message.”)
- Outline (1 slide) Give talk structure. Some speakers prefer to put this at the bottom of their title slide. (Audiences like predictability.)
- Background: Motivation and Problem Statement (1-2 slides)
(Why should anyone care? Most researchers overestimate how much the audience knows about the problem they are attacking.)
- Background: Related Work (0-1 slides) Cover superficially or omit; refer people to your paper.
- Methods (3-5 slides) Cover quickly in short talks while getting core ideas across; refer people to your paper.
- Results (3-5 slides) Present key results and key insights. This is the main body of the talk. Its internal structure varies greatly as a function of the researcher’s contribution. (Do not superficially cover all results; cover key results well. Do not just present numbers; interpret them to give insights. Do not put up large tables of numbers.)
- Summary (1 slide)
- Future Work (0-1 slides) Optionally give problems this research opens up.
- Backup Slides (0-3 slides) Optionally have a few slides ready (not counted in your talk total) to answer expected questions.
In addition to this general format, here are a few guidelines that I practice for my talks:
- It is generally bad form to list entire sentences and paragraphs on slides. Your bullets should be 1 line (2 lines max) with phrases (not complete sentences) that briefly summarize points. You should orally expound on the text in the bullets.
- Limit the number of bullets on each slide. If you need many bullets on one slide, you should probably break them up over multiple slides.
- Limit the number of slides to roughly 1 slide for each minute of your presentation. Time goes faster than you think. A cardinal sin of talks is to go past your allocated time. This error is often seen as a lack of respect for other presenters and your audience members; essentially, it is self-centered, taking time away from them.
- Make sure all of your terminology is defined and forward references are avoided.
- Last and most important: DO NOT read from your slides. Again, use your slides as talking points to further expound upon orally. (“Thou shalt not make eye contact”)