Saturday, October 17, 2015

Laser scan tools rejuvenated into ROS

From the TORK web site:

    We have just released "scan_tools", a set of ROS tools for handling laser scan data, into ROS Indigo and Jade. The laser_scan_matcher and polar_scan_matcher packages enable 2D pose estimation based solely on laser scan input, so they can also serve as odometry without correction from other sensors. On Ubuntu Linux you can install them with a single apt command (for Indigo the Debian packages should be named along the lines of ros-indigo-laser-scan-matcher).
     
    Ok, actually, we didn't build anything. We just sorted out and "re-published" the source code so that people can install it with a single command. The source code was originally written by the CCNY Robotics Lab.
     
    Video:

    On Ubuntu Linux, a huge number of ROS packages are readily available just one command away, much like apps on your smartphone. As of today 2,034 packages are waiting in the cloud for ROS Indigo (if you're curious what that cloud actually looks like, take a sneak peek at the Build Status page). A few things make this wonder happen:
    • the apt package management system on Ubuntu
    • OSRF, the organization that manages ROS, maintains the infrastructure that builds, tests, and distributes the ROS packages that each package's developers submit. With the help of volunteers, it is watched over almost 24/7
    • Each package's developers and maintainers diligently conform to the release rules and keep submitting updates
     
    What makes ROS as large as it is, is this army of package maintainers; to date more than 200 people have made a release.

    A dilemma for a long-lived project like ROS, now more than 6 years old, is that some maintainers drift away as they move on to new projects (while others stay on even though they no longer actively use the packages they take care of!). Most of the time, though, with a little care the packages can be revitalized and made compatible with the latest ROS platform. This is what TORK did for the aforementioned scan_tools suite, and we'll continue this kind of effort to help the opensource community shine even more.

Thursday, October 15, 2015

Update on Nikkei Robotics monthly magazine on ROS

    As we announced before, Isaac has been working with Nikkei (a leading business news publisher in Japan that recently acquired the Financial Times) on a series of articles about ROS in the Nikkei Robotics monthly magazine. Lately some customers even pull the magazine out of their bags and ask about the articles. What bliss! We're glad to see more development happening in the industry, and to be becoming an integral part of it by providing assistance with opensource robotics technology.

    Article titles (English translation by TORK):
    • Ep.1 ROS: de facto standard tool for robotics software development, beyond research labs and on into industry
    • Ep.2 Using ROS on Fetch; SLAM, IK are off the shelf
    • Ep.3 ROS-I: ROS’ industrial extension. Community and application. Universal pendant to be developed
    • Ep.4 ROS on ARM: Going Beyond x86 to Embedded Systems Backed Up By Qualcomm

Saturday, October 10, 2015

Kawada’s Hiro & NEXTAGE Open Software now available in ROS Indigo (Long-Term Support)

From the TORK web site:

    The software for Kawada's Hironx and NEXTAGE Open, for both of which we provide maintenance service for the ROS-based opensource controller, is now available on Ubuntu 14.04 and ROS Indigo. At the same time, support for Ubuntu 12.04 and ROS Hydro has reached end-of-life.

    Please follow the link for the details, including upgrade instructions.
    ROS Indigo boasts approx. 2,000 packages, many more than its predecessor Hydro (about 1,700). It is also the first LTS (Long-Term Support) release of ROS and is slated to be supported until 2019.

    We know Indigo has already been around for more than a year, but you know, it'll keep rocking longer than any other version of ROS ever has!

Tuesday, September 22, 2015

To rename ROS package systematically

Not specific to ROS, but still nice to have a memo like this for myself.

A couple of scripts can be used when you change the name of your ROS package "foo" to "bar".

1:  cd /tmp && cp -R `rospack find foo` .  
2:  (shopt -s nullglob && _() { for P in "$1"*/; do Q="${P//[Ff][Oo][Oo]/bar}"; mv -- "$P" "$Q"; _ "$Q"; done } && _ ./)  
3:  find . -type f -print0 | xargs -0 sed -i 's/foo/bar/g'  

Line by line:
  • In the first line, you copy the entire package somewhere safe; this way you don't unknowingly touch the original, including its VCS data (.git, .svn, etc.). 
  • The 2nd line is adapted from this thread 
  • So is the 3rd line (this thread)

If you want to change the packages you've already "released", see this thread.

----

Update 2024/10/21: If the targeted files are under a (local) git repository, there's a much more user-friendly command (thanks to superuser.com#1110337; note that BSD/macOS sed needs `sed -i ''` instead of `sed -i`).

git grep -lz foo | xargs -0 sed -i -e 's/foo/bar/g'



Monday, August 31, 2015

Using libuvc ROS package on System76 Kudu laptop

libuvc_camera works as a universal ROS camera driver for UVC-compliant cameras (webcams in particular). To use it with the integrated webcam on my laptop (a Kudu Pro by System76), I added this line to `/etc/udev/rules.d/99-uvc.rules`:
SUBSYSTEMS=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="5986", ATTRS{idProduct}=="055c", MODE="0666"
Figuring out the right vendor and product IDs took a few steps, mainly because I did not know the manufacturer of the camera.

1. The `lsusb` output doesn't indicate the type of each device:

:
Bus 001 Device 002: ID 8087:8008 Intel Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 004: ID 5986:055c Acer, Inc 
:
It turned out later that the Acer one was what I was looking for, who knows.

2. Hacky enough, I ran cheese. Make sure the camera image is displayed in its window, then go to "Preferences" --> "Webcam" tab --> Device. There I found the manufacturer is actually called BisonCam.

3. The hackery doesn't stop yet. Look into the `dmesg` output. Since this is usually long, I saved it to a file. I found a few lines containing the string `Bison`, which gave me the vendor and product IDs:
[    2.973758] uvcvideo: Found UVC 1.00 device BisonCam, NB Pro (5986:055c)

After this, I wrote `/etc/udev/rules.d/99-uvc.rules` as above and also made a launch file for libuvc_camera:
<launch>
  <node ns="camera" pkg="libuvc_camera" type="camera_node" name="cam_kudu1">
    <param name="vendor" value="0x5986"/> <!-- check lsusb -->
    <param name="product" value="0x055c"/> <!-- check lsusb -->
    <param name="width" value="640"/>
    <param name="height" value="480"/>
    <param name="video_mode" value="yuyv"/>
    <param name="frame_rate" value="30"/>
    <param name="camera_info_url" value="file:///tmp/cam.yaml"/>
    <param name="auto_exposure" value="3"/> 
  </node>
</launch>
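
By the way, to quickly check that the driver is actually publishing images, a tiny subscriber does the job. Below is a minimal sketch; I'm assuming the driver publishes sensor_msgs/Image on /camera/image_raw (the "camera" namespace comes from the launch file above), so adjust the topic name if yours differs.

#!/usr/bin/env python
# Minimal sanity check: print the size of images coming from the UVC camera.
# Assumes sensor_msgs/Image is published on /camera/image_raw (the "camera"
# namespace comes from the launch file above) -- adjust if needed.
import rospy
from sensor_msgs.msg import Image

def callback(msg):
  rospy.loginfo("Got image: %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
  rospy.init_node("uvc_image_check")
  rospy.Subscriber("/camera/image_raw", Image, callback)
  rospy.spin()

Run it while the launch file above is up; you should see the width/height being logged.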

Saturday, July 18, 2015

Playing with Fetch and Freight robots (from Fetch Robotics) on Gazebo

It's been a while since a promising (as everyone in the robotics industry hopes) startup in San Jose called Fetch Robotics made some of their controller software public, even before they started selling their robots. Apparently there is a group of folks who took that as a "use it, test it, give us feedback!" message, which I just joined lately.

So here's an incomplete report. Anyone who Googles for information about a particular piece of software appreciates it more when that information is gathered in fewer places, so for ROS software I usually try to write as much as possible on wiki.ros.org (that's one excuse for why this blog has been stale; I've posted so many docs and changes on the ROS wiki instead). Because Fetch Robotics is a commercial company, however, I'm writing this here as my personal record. Of course, if anything turns out to be better shared in their official docs, I'll open a pull request on their doc repository.

----
Let's start with their tutorial "Tutorial: Gazebo Simulation" http://docs.fetchrobotics.com/gazebo.html

Do the installation. I'm on Ubuntu Trusty 64-bit, on an i7 quad-core notebook.

Run what's already prepared
--------------------------------

Since I'm interested in using the full-fledged simulation right away, let's run from "Running the Mobile Manipulation Demo":

    $ roslaunch fetch_gazebo playground.launch

This fires up Fetch in Gazebo in a nicely elaborate lab setting (with good ol' nostalgic wall paint... looks like they knew C-Turtle is my favorite ROS logo).



Then the next launch file kicks off the demo program.

    $ roslaunch fetch_gazebo_demo demo.launch


Through a course of actions, the robot arrives at the table in one of the rooms, places an object on top of it, and rests. Very nicely done, Fetch! The demo is very well made, demonstrating many of its capabilities at once.

Now, let's look a little bit into how the software is coordinated.


With rqt_graph, you can get an overview of the running nodes and which topics are exchanged between them.





A closer look around the move_group node shows that it uses the commonly used ROS Actions interface.

A closer look at the move_base node (I forgot to take a snapshot) doesn't show anything specific to Fetch, which indicates that ROS' navigation interface is versatile enough to be applied to various robots without customization.

On the left is a screenshot of rqt_tf_tree, which I recommend to both existing and new ROS users in place of rosrun tf view_frames (http://wiki.ros.org/tf#view_frames) for quick, dynamic introspection (view_frames still has a purpose: it lets you share the tree view with others via email, etc.).
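
If you'd rather introspect TF from code than from a GUI, a plain tf listener works too. Here's a minimal sketch; the gripper_link frame name is my assumption, so check rqt_tf_tree for the actual frame names on Fetch.

#!/usr/bin/env python
# Look up the transform from base_link to the gripper frame once per second.
# "gripper_link" is an assumed frame name; check rqt_tf_tree for the real one.
import rospy
import tf

if __name__ == "__main__":
  rospy.init_node("tf_introspection_example")
  listener = tf.TransformListener()
  rate = rospy.Rate(1.0)
  while not rospy.is_shutdown():
    try:
      (trans, rot) = listener.lookupTransform("base_link", "gripper_link", rospy.Time(0))
      rospy.loginfo("gripper_link in base_link: xyz=%s quat=%s", str(trans), str(rot))
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
      pass
    rate.sleep()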


Looking at the base, I noticed that only 2 (left, right) of the 6 wheels/casters I see in Gazebo are actuated.

You can see how the robot perceives the world, via RViz.

    $ rviz -d `rospack find fetch_navigation`/config/navigation.rviz

LEFT: RViz, RIGHT: Gazebo



As you've seen in the node graph above, MoveIt! processes are already running with playground.launch.


So let's open the MoveIt! plugin to operate the robot from RViz. Here you have to untick the `RobotModel` display in RViz so that the MoveIt! plugin can load the robot model.







Also, I normally disable Query Start State in the MoveIt! plugin since I barely use it.
Open the PointCloud2 display so that we can see the table and the object on it that Fetch just picked up and placed.

Now, let's pick up the object on the table. Plan a trajectory using the MoveIt! plugin, which can be done by simply placing the robot's end-effector at the pose you want; you just drag-and-drop ROS' Interactive Marker with your mouse. Clicking the Plan and Execute button makes the robot compute and execute the trajectory.
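
As an aside, what the Plan and Execute button does can also be done programmatically through moveit_commander. Below is a rough sketch, not something I ran on Fetch; the planning group name "arm" and the target pose values are my assumptions, so check Fetch's MoveIt! config for the real group names.

#!/usr/bin/env python
# Rough programmatic equivalent of dragging the Interactive Marker and
# pressing "Plan and Execute" in the MoveIt! RViz plugin.
# The group name "arm" and the pose values below are assumptions.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

if __name__ == "__main__":
  moveit_commander.roscpp_initialize(sys.argv)
  rospy.init_node("moveit_commander_example")

  group = moveit_commander.MoveGroupCommander("arm")

  target = PoseStamped()
  target.header.frame_id = "base_link"
  target.pose.position.x = 0.6
  target.pose.position.y = 0.0
  target.pose.position.z = 1.0
  target.pose.orientation.w = 1.0

  group.set_pose_target(target)
  group.go(wait=True)  # plan and execute in one shot
  group.stop()
  group.clear_pose_targets()
  moveit_commander.roscpp_shutdown()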

It looks like MoveIt! is having a hard time figuring out a valid trajectory from the arm's current pose to the object on the table, so I move the hand above the table first instead.











Here I see in RViz a "ghost" of the hand, which I've never seen in my MoveIt! history. Also, the pose of this ghost is slightly off from the Interactive Marker. It turned out the ghost is the hand as seen in the pointcloud. I don't know why the pose was off (debugging is not the first objective of this blog, having fun is), but it aligned fine after I re-enabled the MotionPlanning plugin in RViz.

At some point while I was still trying to figure out the pose to pick up the object, the hand blocked the camera view, so the object became invisible in RViz and it was almost impossible for me to keep searching for the goal pose.
So I rotated the robot's base by publishing to the `/cmd_vel` topic (using the rqt_robot_steering GUI).
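
rqt_robot_steering just publishes geometry_msgs/Twist, so the same rotation can be commanded from a few lines of rospy as well. A minimal sketch (the velocity and duration are arbitrary):

#!/usr/bin/env python
# Rotate the base in place by publishing geometry_msgs/Twist on /cmd_vel
# for a couple of seconds, then stop -- the same thing rqt_robot_steering does.
import rospy
from geometry_msgs.msg import Twist

if __name__ == "__main__":
  rospy.init_node("rotate_base")
  pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
  twist = Twist()
  twist.angular.z = 0.5  # [rad/s], positive = counter-clockwise
  rate = rospy.Rate(10)
  end_time = rospy.Time.now() + rospy.Duration(2.0)
  while not rospy.is_shutdown() and rospy.Time.now() < end_time:
    pub.publish(twist)
    rate.sleep()
  pub.publish(Twist())  # zero velocity to stop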







Unfortunately, after executing an approach trajectory, RViz had to be restarted, and the goal I gave might not have been right, so the hand hit the object from the side.








With RViz restarted, I now see the table as a MoveIt! scene object (which went away again after re-enabling the MotionPlanning plugin).
Now, the hand is almost there.




Looks pretty good, huh? This time I don't move the base any more, although I lost sight of the object again. It has already taken half an hour to get here, so I'm getting reluctant to do my very best.

Now the only thing left to do might be closing the gripper, which doesn't seem possible with the Interactive Markers in RViz. I assume there's a Python I/F, but my time ran out before heading to the gym. I'll resume later and update this blog.
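
For the record, if a standard control_msgs gripper action server is running (which I haven't verified on Fetch), closing the gripper from Python would look roughly like the sketch below; the action name gripper_controller/gripper_action and the position/effort values are my assumptions.

#!/usr/bin/env python
# Sketch of closing the gripper from Python via actionlib.
# The action name "gripper_controller/gripper_action" and the position/effort
# values are assumptions -- check which controllers the robot actually exposes.
import rospy
import actionlib
from control_msgs.msg import GripperCommandAction, GripperCommandGoal

if __name__ == "__main__":
  rospy.init_node("close_gripper")
  client = actionlib.SimpleActionClient("gripper_controller/gripper_action",
                                        GripperCommandAction)
  client.wait_for_server()

  goal = GripperCommandGoal()
  goal.command.position = 0.0     # gap between fingers in meters; 0.0 = closed (assumption)
  goal.command.max_effort = 50.0  # [N], assumption
  client.send_goal(goal)
  client.wait_for_result()
  rospy.loginfo("Gripper result: %s", str(client.get_result()))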

In the end, I found it hard to do the pick-and-place task via the MoveIt! I/F in RViz for many reasons; among other things, once the block leaves the camera's sight and disappears from RViz, the teaching gets much harder. For the purpose of extending the existing demo, it looks far easier to build on the demo program, which is based on vision.

So, with this small change to the original fetch_gazebo package, I wrote a simple Python script mimicking the original demo.

#!/usr/bin/env python

# Modified from https://github.com/fetchrobotics/fetch_gazebo/blob/49674b6c5f6ac5d6a649a8dc6a39f86e60515057/fetch_gazebo_demo/scripts/demo.py

import rospy

from fetch_gazebo_demo.democlients import *

if __name__ == "__main__":
  # Create a node
  rospy.init_node("demo2")

  # Make sure sim time is working
  while not rospy.Time.now():
    pass

  # Setup clients
  move_base = MoveBaseClient()
  torso_action = FollowTrajectoryClient("torso_controller", ["torso_lift_joint"])
  head_action = PointHeadClient()
  grasping_client = GraspingClient()

  # Move the base to be in front of the table
  # Demonstrates the use of the navigation stack

  # Raise the torso using just a controller
  rospy.loginfo("Raising torso...")
  torso_action.move_to([0.4, ])

  # Look below to find the block
  head_action.look_at(0.4, -0.2, 0.5, "base_link")

  # Get block to pick
  while not rospy.is_shutdown():
    rospy.loginfo("Picking object...")
    grasping_client.updateScene()
    cube, grasps = grasping_client.getGraspableCube()
    if cube is None:
      rospy.logwarn("Perception failed.")
      continue

    # Pick the block
    if grasping_client.pick(cube, grasps):
      break
    rospy.logwarn("Grasping failed.")

  # Tuck the arm
  grasping_client.tuck()

What this script makes the robot do is simply extend the torso up and look for graspable blocks (I assume a process that detects graspable objects is running internally as an ActionServer somewhere; also, the shape is pre-registered in the referenced Python module). If a graspable block is found, it gets its pose and moves the hand toward it.















Hard to see but the arm is moving.


























See the block is in her/his hand.




















----
Also the sister (bro?) robot Freight seems fully ROS compatible.

    terminal-1$ roslaunch fetch_gazebo playground.launch robot:=freight
    terminal-2$ roslaunch fetch_gazebo_demo freight_nav.launch
    terminal-3$ rviz -d `rospack find fetch_navigation`/config/navigation.rviz


Freight on autonomous navigation. It moves amazingly fast and smoothly in simulation (I guess the real robot does as well, at least at the command level it should).
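
Of course the navigation can be commanded from code too, the same way the MoveBaseClient in the demo script above does it. A minimal sketch with actionlib (the goal coordinates in the map frame are arbitrary):

#!/usr/bin/env python
# Send a single navigation goal to move_base via actionlib, like the
# MoveBaseClient used in the Fetch demo script. Goal coordinates are arbitrary.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == "__main__":
  rospy.init_node("freight_nav_goal")
  client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
  client.wait_for_server()

  goal = MoveBaseGoal()
  goal.target_pose.header.frame_id = "map"
  goal.target_pose.header.stamp = rospy.Time.now()
  goal.target_pose.pose.position.x = 2.0
  goal.target_pose.pose.position.y = 1.0
  goal.target_pose.pose.orientation.w = 1.0

  client.send_goal(goal)
  client.wait_for_result()
  rospy.loginfo("Navigation result state: %d", client.get_state())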

Now I'm tempted to spawn 2 robots and have them collaborate, which I have no idea how to do in ROS today.
----

A little post mortem: though I could only use a small part of all the functionality of Fetch and Freight today, I should say I'm delighted to see this happening! Well made as ROS robots, easy to operate, and all the features I thought I wanted to try are already implemented (even before they go on sale). Looks like the two parents of TurtleBot rock again. I just hope that no matter how huge a success they achieve, their software remains opensource, so that it takes over the role of the modern reference model of a ROS robot that the PR2 has served for so long. Viva opensource robotics!