Saturday, July 18, 2015

Playing with Fetch and Freight robots (from Fetch Robotics) on Gazebo

It's been a while since Fetch Robotics, a promising (as everyone in the robotics industry hopes) startup in San Jose, made some of their controller software public, even before they started selling their robots. Apparently a group of folks took that as a "use-it, test-it, give-us-feedback!" message, and I recently joined them.

So here's an incomplete report. Anyone Googling for information about a particular piece of software appreciates it more when that information is gathered in fewer places, so for ROS software I usually try to write as much as possible on wiki.ros.org (that's one excuse for why this blog has been stale; I've been posting many docs and changes on the ROS wiki). Because Fetch Robotics is a commercial company, however, I'm writing this here as my personal record. Of course, if anything turns out to be better shared in their official docs, I'll open a pull request on their doc repository.

----
Let's start from their tutorial "Tutorial: Gazebo Simulation" http://docs.fetchrobotics.com/gazebo.html

Do the installation first. I'm on Ubuntu Trusty 64-bit, on an i7 quad-core notebook.
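If I recall the tutorial correctly, the simulation packages come as debs, something like the following (check their doc for the exact package name):

    $ sudo apt-get install ros-indigo-fetch-gazebo-demo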

Run what's already prepared
--------------------------------

Since I'm interested in using the full-fledged simulation right away, let's jump to "Running the Mobile Manipulation Demo":

    $ roslaunch fetch_gazebo playground.launch

This fires up Fetch in Gazebo in a nicely elaborated lab setting (with a good ol' nostalgic wall paint... looks like they knew that C-Turtle is my favorite ROS logo).



Then the next launch file kicks off the demo program.

    $ roslaunch fetch_gazebo_demo demo.launch


Through a sequence of actions, the robot arrives at the table in one of the rooms, releases an object on top of it, and rests. Very nicely done, Fetch! The demo itself is also very well made, demonstrating many of its capabilities at once.

Now, let's look a little into how the software is coordinated.


With rqt_graph, you can get an overview of the running nodes and the topics exchanged between them.
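rqt_graph starts with a single command while the simulation is running:

    $ rosrun rqt_graph rqt_graph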





A closer look around the move_group node shows that it uses the commonly used ROS Action interfaces.
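The action interfaces show up as the usual goal/feedback/result topics; a quick way to list them:

    $ rostopic list | grep move_group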

A closer look at the move_base node (I forgot to take a snapshot) doesn't show anything specific to Fetch, which indicates that ROS' navigation interface is versatile enough to be applied to various robots without customization.

On the left is a screenshot of rqt_tf_tree, which I recommend to existing and new ROS users in place of rosrun tf view_frames (http://wiki.ros.org/tf#view_frames) for quick, dynamic introspection (view_frames still has its purpose; it lets you share the tree view with others via email, etc.).
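rqt_tf_tree also starts with a single command:

    $ rosrun rqt_tf_tree rqt_tf_tree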


Looking at the base, I noticed that only 2 of the 6 wheels/casters I see in Gazebo are actuated (left and right).
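For a quick peek at the joints the robot reports (this includes passive joints too, so it's only a rough check), you can dump a single /joint_states message:

    $ rostopic echo -n 1 /joint_states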

You can see how the robot perceives the world via RViz.

    $ rviz -d `rospack find fetch_navigation`/config/navigation.rviz

LEFT: RViz, RIGHT: Gazebo



As you've seen in the node graph above, MoveIt! processes are already running with playground.launch.


So let's open the MoveIt! plugin to operate the robot in RViz. Here you have to untick the `RobotModel` plugin in RViz so that the MoveIt! plugin can load the robot model.







Also, I normally disable Query Start State in the MoveIt! plugin since I barely use it.
Open the PointCloud2 plugin so that we can see the table and the object on it that Fetch just picked up and placed.

Now, let's pick up the object on the table. Plan a trajectory using the MoveIt! plugin, which can be done simply by placing the robot's end-effector at the pose you want. You do this by dragging and dropping ROS' Interactive Marker with your mouse. Clicking the Plan and Execute button makes the robot compute and execute the trajectory.
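By the way, the same plan-and-execute step can be scripted instead of using the Interactive Marker. A minimal sketch with moveit_commander (not taken from the Fetch docs; the planning group name "arm" and the goal pose values are my assumptions):

    #!/usr/bin/env python
    # Minimal sketch: plan and execute a pose goal through move_group,
    # the scripted counterpart of RViz' "Plan and Execute" button.
    # ASSUMPTIONS: the planning group name "arm" and the goal pose values.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import PoseStamped

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("pose_goal_example")
    group = moveit_commander.MoveGroupCommander("arm")

    goal = PoseStamped()
    goal.header.frame_id = "base_link"
    goal.pose.position.x = 0.6
    goal.pose.position.y = 0.0
    goal.pose.position.z = 0.9
    goal.pose.orientation.w = 1.0

    group.set_pose_target(goal)
    group.go(wait=True)   # plan the trajectory and execute it
    group.stop()
    group.clear_pose_targets()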

It looks like MoveIt! is having a hard time figuring out a valid trajectory from the arm's current pose to the object on the table. So I move the hand above the table first instead.











Here I see in RViz a "ghost" of the hand, which I've never seen in all my time with MoveIt!. Also, the pose of this ghost is slightly off from the Interactive Marker. It turned out the ghost is the hand as seen in the pointcloud. I don't know why the pose was off (debugging is not the first objective of this blog; having fun is), but it aligned fine after I re-enabled the MotionPlanning plugin in RViz.

At some point, while I was still trying to figure out the pose to pick up the object, the hand got in the way of the camera view, so the object disappeared in RViz and it became almost impossible for me to keep searching for the goal pose.
So I rotated the robot's base by publishing to the `/cmd_vel` topic (using the rqt_robot_steering GUI).
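rqt_robot_steering is just a convenience GUI; the same rotation can be commanded from a terminal by publishing to the `/cmd_vel` topic directly, e.g.:

    $ rosrun rqt_robot_steering rqt_robot_steering
    $ rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{angular: {z: 0.3}}'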







Unfortunately, after executing an approach trajectory, RViz had to be restarted, and the goal I gave might not have been right, so the hand hit the object from the side.








With RViz restarted, I now see the table as a MoveIt! scene object (which went away again after re-enabling the MotionPlanning plugin).
Now, the hand is almost there.




Looks pretty good, huh? This time I don't move the base any more, even though I've lost sight of the object again. It has already taken half an hour to get here, so I'm getting reluctant to do my very best.

Now the only thing left to do might be closing the gripper, which doesn't seem possible via the Interactive Markers in RViz. I assume there's a Python interface. But my time ran out before heading to the gym. I'll resume later and update this blog.
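For the record, my guess (untested here, so the names are assumptions) is that it goes through actionlib with control_msgs/GripperCommandAction, along these lines:

    #!/usr/bin/env python
    # Rough sketch of closing Fetch's gripper from Python via actionlib.
    # ASSUMPTIONS: the action name "gripper_controller/gripper_action" and
    # the control_msgs/GripperCommandAction type; double-check the Fetch
    # docs before relying on this.
    import rospy
    import actionlib
    from control_msgs.msg import GripperCommandAction, GripperCommandGoal

    rospy.init_node("close_gripper")
    client = actionlib.SimpleActionClient("gripper_controller/gripper_action",
                                          GripperCommandAction)
    client.wait_for_server()

    goal = GripperCommandGoal()
    goal.command.position = 0.0     # gap between fingers; 0.0 = fully closed
    goal.command.max_effort = 50.0  # limit the squeezing force
    client.send_goal(goal)
    client.wait_for_result()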

After all, I found it hard to do pick-and-place tasks via the MoveIt! interface in RViz for many reasons; among other things, not being able to see the block in RViz once it's out of the camera's sight makes teaching poses much harder. And for the purpose of extending the existing demo, it looks far easier to build on the demo program, which is vision-based.

So, with this small change to the original fetch_gazebo package, I wrote a simple Python script mimicking the original demo.

    #!/usr/bin/env python

    # Modified from https://github.com/fetchrobotics/fetch_gazebo/blob/49674b6c5f6ac5d6a649a8dc6a39f86e60515057/fetch_gazebo_demo/scripts/demo.py

    import rospy

    # Client classes (MoveBaseClient etc.) from the original demo.py,
    # split out into a module by the small change mentioned above
    from fetch_gazebo_demo.democlients import *

    if __name__ == "__main__":
        # Create a node
        rospy.init_node("demo2")

        # Make sure sim time is working
        while not rospy.Time.now():
            pass

        # Setup clients
        move_base = MoveBaseClient()
        torso_action = FollowTrajectoryClient("torso_controller", ["torso_lift_joint"])
        head_action = PointHeadClient()
        grasping_client = GraspingClient()

        # The original demo moves the base to the table here (navigation stack);
        # that part is omitted in this script

        # Raise the torso using just a controller
        rospy.loginfo("Raising torso...")
        torso_action.move_to([0.4, ])

        # Look below to find the block
        head_action.look_at(0.4, -0.2, 0.5, "base_link")

        # Get block to pick
        while not rospy.is_shutdown():
            rospy.loginfo("Picking object...")
            grasping_client.updateScene()
            cube, grasps = grasping_client.getGraspableCube()
            if cube is None:
                rospy.logwarn("Perception failed.")
                continue

            # Pick the block
            if grasping_client.pick(cube, grasps):
                break
            rospy.logwarn("Grasping failed.")

        # Tuck the arm
        grasping_client.tuck()

What this script does is simply extend the torso up and look for graspable blocks (I assume a process that detects graspable objects is running internally as an ActionServer somewhere; also, the target shape is pre-registered in the referenced Python module). If a graspable block is found, it gets the block's pose and moves the hand there.
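To try it with playground.launch still running, I run it as a node of the package (the file name demo2.py is just what I saved it as under fetch_gazebo_demo/scripts, with the executable bit set):

    $ rosrun fetch_gazebo_demo demo2.py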















Hard to see, but the arm is moving.


























See, the block is in her/his hand.




















----
Also the sister (bro?) robot Freight seems fully ROS compatible.

    terminal-1$ roslaunch fetch_gazebo playground.launch robot:=freight
    terminal-2$ roslaunch fetch_gazebo_demo freight_nav.launch
    terminal-3$ rviz -d `rospack find fetch_navigation`/config/navigation.rviz


Freight on autonomous navigation. It moves amazingly fast and smoothly in simulation (I'd guess the real robot does too, at least at the command level).
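In RViz the goal is given with the 2D Nav Goal tool, but a goal can also be sent programmatically to move_base through actionlib. A minimal sketch, assuming the default "move_base" action name and the "map" frame, with arbitrary coordinates:

    #!/usr/bin/env python
    # Minimal sketch: send one navigation goal to move_base via actionlib.
    # ASSUMPTIONS: the "move_base" action name, the "map" frame, and the
    # goal coordinates are placeholders, not values from fetch_navigation.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("freight_nav_goal")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 2.0
    goal.target_pose.pose.position.y = 3.0
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()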

Now I'm tempted to spawn 2 robots and have them collaborate, though I have no idea how to do that in ROS today.
----

A little post-mortem: though I only used a small part of what Fetch and Freight can do today, I should say I'm delighted to see this happening! Well made as ROS robots, easy to operate, and all the features I wanted to try are already implemented (even before the robots go on sale). Looks like the two parents of the TurtleBot rock again. I just hope that no matter how huge a success they achieve, their software remains open source, so that it takes over the role of the modern reference ROS robot that the PR2 has served for so long. Viva open-source robotics!