Reinforcement learning (RL) is a sub-area of machine learning concerned with how an agent should select actions given its environment, and it often cites robotics as a potential application area. While some researchers have bridged the gap and used RL algorithms in robot applications, most RL experiments happen in simulation and are never ported to real robots due to the difficulty of programming and maintaining a robot. Additionally, individuals in the RL community tend to use their own software frameworks for creating and evaluating learning techniques. RL-Glue is a standard interface that allows RL researchers to share agents, environments, and experiment programs. Robotics has suffered from similar problems: labs have primarily created their own infrastructure, and evaluation across different techniques has become difficult if not impossible. ROS is a large, sophisticated research tool that is currently used by many roboticists worldwide.
We introduce rosglue, a framework that allows robots running ROS to serve as environments for RL-Glue agents. Our hope is that this will increase communication between the two fields and open further collaborations.
Short Primer on ROS
ROS is an open-source robot middleware system. It provides many services, including hardware abstraction, low-level device control, implementations of commonly used functionality, and message passing. If you're familiar with ROS, feel free to skim or skip this section. If you've never heard of ROS or know very little about it, you can learn more from the tutorials and documentation at http://www.ros.org/wiki/. However, our goal is to let you use at least some robots running ROS with as little ROS-specific knowledge as possible.
Topics and Services
Perhaps the most important thing to understand about ROS is how it exposes the functionality of the robot. This happens in one of two ways: as a topic or as a service. Both topics and services can be used for observing the robot's environment or for performing control.
Topics provide asynchronous communication over streams of messages. A process can publish a topic, and other processes may subscribe to that topic and use the data as they wish, without communicating directly with the publishing process.
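As a concrete illustration, here is a minimal sketch of subscribing to a topic using rospy, ROS's Python client library. The /position topic name and the Pose2D message type are just example choices for this sketch:

import rospy
from geometry_msgs.msg import Pose2D  # example message type with x, y, theta fields

def on_position(msg):
    # Invoked asynchronously each time a new message arrives on the topic.
    rospy.loginfo("x=%f y=%f theta=%f", msg.x, msg.y, msg.theta)

rospy.init_node("position_listener")
rospy.Subscriber("/position", Pose2D, on_position)
rospy.spin()  # hand control to ROS; the callback fires as messages arrive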
Services are a synchronous communication mechanism, much like function calls in many programming languages: they take in arguments and return responses. Under ROS, a service always returns an object, which can be arbitrarily complex.
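For comparison, a minimal sketch of calling a service from rospy; the /reset service name is hypothetical, while std_srvs/Trigger is one of ROS's stock service types:

import rospy
from std_srvs.srv import Trigger  # stock type: empty request, (success, message) response

rospy.init_node("service_caller")
rospy.wait_for_service("/reset")               # block until the service is advertised
reset = rospy.ServiceProxy("/reset", Trigger)
response = reset()                             # synchronous: blocks until the service returns
rospy.loginfo("success=%s message=%s", response.success, response.message)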
Short Primer on RL-Glue
RL-Glue provides a standard interface for the three major components of an RL system: the agent, the environment, and the experiment. Much like with ROS, if you're familiar with RL-Glue, feel free to skim or skip this section. If you've never heard of RL-Glue or know very little about it, you can learn more at http://glue.rl-community.org/wiki/Main_Page
To program with RL-Glue, developers download a codec for the language of their choice; currently C/C++, Java, Lisp, Matlab, and Python are supported. The RL-Glue interface is a series of functions defined by the codec. For example, a standard Python RL-Glue environment must define the functions env_init, env_start, env_step, env_cleanup, and env_message:
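A minimal sketch of such an environment using the Python codec follows; the method and type names are taken from the codec, while the dynamics here are a placeholder one-step problem that always ends with reward 1:

from rlglue.environment.Environment import Environment
from rlglue.environment import EnvironmentLoader
from rlglue.types import Observation, Reward_observation_terminal

class ExampleEnvironment(Environment):
    def env_init(self):
        # Called once; returns the task spec string describing the problem.
        return ("VERSION RL-Glue-3.0 PROBLEMTYPE episodic DISCOUNTFACTOR 1 "
                "OBSERVATIONS INTS (0 1) ACTIONS INTS (0 1) REWARDS (-1.0 1.0)")

    def env_start(self):
        # Begins an episode; returns the first observation.
        obs = Observation()
        obs.intArray = [0]
        return obs

    def env_step(self, action):
        # Applies the action; returns reward, observation, and terminal flag.
        obs = Observation()
        obs.intArray = [1]
        return Reward_observation_terminal(1.0, obs, True)

    def env_cleanup(self):
        pass

    def env_message(self, message):
        return "I don't know how to respond to your message"

if __name__ == "__main__":
    EnvironmentLoader.loadEnvironment(ExampleEnvironment())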
One of the most important things to understand about RL-Glue is the task spec. The task spec is essentially the problem definition in RL-Glue, and it follows this template:
VERSION <version-name> PROBLEMTYPE <problem-type> DISCOUNTFACTOR <discount-factor>
OBSERVATIONS INTS ([times-to-repeat-this-tuple=1] <min-value> <max-value>)*
             DOUBLES ([times-to-repeat-this-tuple=1] <min-value> <max-value>)*
             CHARCOUNT <char-count>
ACTIONS INTS ([times-to-repeat-this-tuple=1] <min-value> <max-value>)*
        DOUBLES ([times-to-repeat-this-tuple=1] <min-value> <max-value>)*
        CHARCOUNT <char-count>
REWARDS (<min-value> <max-value>)
EXTRA [extra text of your choice goes here]
For example:
VERSION RL-Glue-3.0 PROBLEMTYPE episodic DISCOUNTFACTOR 1
OBSERVATIONS INTS (2 0 1) DOUBLES (3 -2 0.5) (-.5 .5)
ACTIONS INTS (0 4)
REWARDS (-5.0 5.0)
EXTRA additional notes go here (for example author and problem name)
This defines the learning problem as:
- an episodic task with discount factor 1
- observations consisting of two integers in [0, 1], three doubles in [-2, 0.5], and one double in [-0.5, 0.5]
- a single integer action in [0, 4]
- rewards bounded by [-5.0, 5.0]
rosglue
rosglue is designed to be a bridge between RL-Glue and ROS. As pictured in the figure, rosglue treats a robot running ROS as an RL-Glue environment.
Currently rosglue allows observations to be ROS topics. Actions may be either ROS topics, published by rosglue, or ROS services. Reward and termination functions may be either ROS topics or custom Python functions created by the user. When the user supplies a custom function, they must register it with rosparam. For example, if the reward function can be found in the mycode_reward.py file in the same directory as rosglue, the following call registers it:
rosparam set /brown/rosglue/rewardfile mycode_reward.py
The same thing can be done to define the termination condition.
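For instance, assuming the termination parameter simply mirrors the reward one (the exact key name below is a guess, not taken from the rosglue documentation):

rosparam set /brown/rosglue/terminationfile mycode_termination.py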
Other than custom reward and termination conditions, the user does not need to do any other RL-Glue coding. Instead, the user defines the robot environment through a YAML configuration file. This file is similar to the RL-Glue task spec in that it defines the problem, but it also specifies which portions of the ROS environment provide the observation and action interfaces. rosglue uses the configuration file to automatically handle all of the messages and translate between the RL-Glue environment and ROS for the user. The RL researcher is no longer required to program in ROS, and the robotics researcher can use RL-Glue agents available in the RL-Library without understanding the RL-Glue interface. Launch files (scripts that launch the appropriate ROS nodes) and sample YAML files can also provide a means for users with little prior ROS experience to immediately begin working with the robot.
To make this more concrete, we show an example YAML file for a task in which the learning agent uses an iRobot Create for navigation. The observations are provided by messages published on the /position topic. This topic has three fields that the agent will use during learning: x, y, and theta (orientation). The actions are provided through a service named /act, which takes one of three values: 0 (right), 1 (forward), or 2 (left).
problemtype: episodic
discountfactor: 1
observations:
  /position:
    x:
    - -0.05
    - 3.0
    y:
    - -0.05
    - 3.0
    theta:
    - -1.6
    - 3.5
actions:
  /act:
  - service
  - action:
    - 0
    - 2
reward:
  type: glue #ros or glue
  /position:
    x:
    - -0.05
    - 3.0
    y:
    - -0.05
    - 3.0
  range:
  - -1
  - 10
termination:
  type: glue #ros or glue
extra: iRobotCreate by Sarah Osentoski