Documentation and Tutorials

Introduction

This documentation covers the technical details of using ROS to acquire and process the sensor data needed for our project. There are three main parts:

ROS Installation, Using ROS on YD Lidar (obstacle avoidance) and Using ROS on Pi Camera (object tracking).

What is ROS?

ROS (Robot Operating System) is a “set of software libraries and tools that help you build robot applications”. For more details, please visit the official ROS website [1].

http://www.ros.org/

ROS Installation

The Robot Operating System needs to be installed first to set up the environment for running the Lidar and the Camera. There are different ROS versions; we chose to install the Kinetic version because it is relatively new and has stronger support for the functionality we need. We also chose to install ROS on Raspbian, the Raspberry Pi operating system, to stay aligned with the previous programs written for the PiCar, which all run on Raspbian. ROS can also be installed on other platforms (for example, Ubuntu).

Like ROS, Raspbian also has different versions, and the installation procedures vary for each version. Our Raspbian version was Stretch.

The ROS wiki page for installing ROS [2] was a great resource. Here we largely referenced the guide on the ROS wiki, with some changes/clarifications added.

http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Kinetic%20on%20the%20Raspberry%20Pi

 

Prerequisites:

Please follow section 2, Prerequisites, on the ROS installation wiki.

 

Installation:

Please follow section 3, Installation, on the wiki, with the following changes/suggestions:

 

  a) In 3.1 Create a catkin Workspace, install the “desktop” set that includes all the ROS packages, not “ros_comm”:

    $ rosinstall_generator desktop --rosdistro kinetic --deps --wet-only --tar > kinetic-desktop-wet.rosinstall
    $ wstool init src kinetic-desktop-wet.rosinstall

The wiki suggests installing “ros_comm”, a reduced set of ROS packages that includes the most commonly used ones. Generally that is a great way to save space and installation time. However, both the Lidar and the Camera depend on packages outside the common set (e.g. tf, sensor_msgs, and opencv). It is possible to manually add all the other required packages and create a customized installation set, but there are many of them, and dealing with that many dependencies can be tricky. Thus, we decided to (and recommend that you) simply install the “desktop” set, which contains all packages.

b) In 3.3. Building the catkin Workspace, the wiki mentions that the compilation will likely fail due to memory exhaustion. That was what happened during our installation, and the solution was to add a swap file as the wiki suggested. The wiki link to “adding a swap file” was helpful, but some commands there did not work.

Here are the revised instructions to create a swap space on the SD card in Raspberry Pi:

Open the swap configuration file /etc/dphys-swapfile in an editor:

$ sudo nano /etc/dphys-swapfile

Add (or edit) the following lines:

CONF_SWAPFILE=/var/swap  # location of the swap file; /var/swap, on the SD card, is the default
CONF_SWAPSIZE=1024  # size of the swap space in MB

(CONF_SWAPFILE can also be set to somewhere else to specify a different swap file. A common practice is to plug in an external USB drive and use the space there. However, this approach was hard to get working, and after several failures we decided to just use the default location.)

Then set up and turn on the swap file:

$ sudo dphys-swapfile setup  # (re)create the swap file
$ sudo dphys-swapfile swapon  # turn on the swap file

Now that the swap file is allocated, you can re-issue the compilation command.


When finished, don’t forget to turn off the swap file:
$ sudo dphys-swapfile swapoff

Additionally, $ free -m can be used to check how much free memory is left on the Pi, and $ htop displays the current processes and their memory consumption.

Using ROS on YD Lidar

The provided YD Lidar source code comes in the form of a ROS package. A ROS package is a collection of code that performs one function or several related functions, somewhat similar to a “class” or “object” in other programming languages.

 

1. YD Lidar Package Installation

 

The Lidar comes with an open-source ROS package of Lidar nodes. It is a user-defined, independent package that is not included in the set of released ROS packages we installed previously. Thus, it needs to be installed separately.

 

The instructions below largely reference the YD Lidar documentation page [3], which contains the source files and a user manual.

 

http://www.ydlidar.com/download

 

(Referenced from the Linux ROS operation -> ROS Drive Installation section of the manual)

 

First, download the ROS package source file (ROS.zip) from the website. It is also helpful to download and go through the YDLIDAR F4PRO User Manual.

 

Then, make another workspace for the Lidar (separate from the previous workspace used for installing ROS), and create a folder called “src” within it:

$ mkdir -p ydlidar_ws/src  # you can choose a different name for the workspace

 

Copy the source files into the src folder.

 

Then, at the top of ydlidar_ws, issue the command:

$ catkin_make

(catkin_make is used to compile the ROS packages.)

 

After the compilation, source the setup script to set up the environment:

$ source ./devel/setup.bash

 

Then add a device alias /dev/ydlidar for the F4PRO’s serial port:

$ cd ydlidar_ws/src/ydlidar/startup

$ sudo chmod +x initenv.sh

$ sudo sh initenv.sh

 

Because we have already installed the full “desktop” set of ROS packages on Raspbian, which includes Rviz (a visualization tool used for the Lidar data), there is no need to install it separately as suggested in the manual.

 

To verify that the package is successfully installed, you can run the Lidar by issuing the following command:

$ roslaunch ydlidar lidar.launch

 

The Lidar will start spinning and taking in scan data.

 

To visualize the data scanned in, please launch the Rviz visualization tool:

$ roslaunch ydlidar lidar_view.launch

 

If the visualization does not show up when first launching the Rviz visualizer, please try the following:

 

  a) Click Add → By Topic → /scan → LaserScan, and add it

  b) Under Global Options, change Fixed Frame to laser_frame

2. YD Lidar Nodes, LaserScan, and Launch Files

 

After the YD Lidar package was installed, we could take a closer look at what was inside the package.

 

Nodes/Publisher/Subscriber

 

In the directory ydlidar_ws/src/ydlidar_master/src (assuming the original directory name of the downloaded Lidar package was kept), there are two files:

ydlidar_node.cpp

ydlidar_client.cpp

 

These two C++ files define ROS nodes. A ROS node is a running process that performs some computation and communicates with the other nodes in the system. One effective way for ROS nodes to communicate is to make them Publishers and Subscribers, which is what the Lidar source code does. In our case, ydlidar_node.cpp defines a Publisher node and ydlidar_client.cpp defines a Subscriber node.

 

A Publisher node, as the name suggests, publishes data on the ROS platform under a unique identifier (called a “topic” in ROS), and every Subscriber node that “listens” to that topic can receive the data. It is a one-to-many system: multiple Subscribers can subscribe to the same Publisher, as long as the topics match.

 

For the syntax, the ROS wiki has a beginner tutorial on how to write a simple Publisher and Subscriber system. [4]

 

Writing a Simple Publisher and Subscriber (C++):

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29

 

Writing a Simple Publisher and Subscriber (Python):

http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29

 

(There are many other tutorials on the ROS wiki to help understand the Publisher/Subscriber system. Other helpful ones cover writing the makefile and the .xml file for different nodes, etc.)
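
As a minimal illustration of the model, here is a sketch of a Publisher/Subscriber pair in Python (rospy), modeled on the wiki tutorials above. It is not part of the Lidar code; the topic name “chatter” and the node names are arbitrary:

# talker.py -- a minimal Publisher node (illustrative sketch)
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))
        rate.sleep()

if __name__ == '__main__':
    talker()

# listener.py -- a minimal Subscriber node (illustrative sketch)
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo('I heard: %s', msg.data)

if __name__ == '__main__':
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()  # keep the node alive, processing callbacks

Any number of listeners can run against the same talker, which is the one-to-many behavior described above.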

 

Once launched, a node will keep spinning and continuously publishing/receiving data. To run a node on ROS, simply issue the following command:

$ rosrun <package_name> <your_node_name>

E.g.

$ rosrun ydlidar ydlidar_node

Message Type/LaserScan

 

The data sent by the Publisher and received by the Subscriber is a “message” (msg for short). A message is like an object with different “fields” that can be assigned by the Publisher and accessed by the Subscriber. The message fields are declared in the form:

variable_type variable_name

 

The documentation on the ROS wiki page for messages [5] (http://wiki.ros.org/msg) covers more details.

 

In YD Lidar, the message type that the Publisher and the Subscriber use to communicate is LaserScan, defined in sensor_msgs/LaserScan.msg [6] (http://docs.ros.org/melodic/api/sensor_msgs/html/msg/LaserScan.html). It is a common message type that applies to many devices that scan in data, and is ideal for Lidar data transfers.

 

To view the fields inside LaserScan, use the following command (it works from anywhere once the ROS environment is set up):

$ rosmsg show sensor_msgs/LaserScan
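
For reference, this prints the LaserScan fields (abbreviated; rosmsg also expands the header):

std_msgs/Header header
float32 angle_min        # start angle of the scan [rad]
float32 angle_max        # end angle of the scan [rad]
float32 angle_increment  # angular distance between measurements [rad]
float32 time_increment   # time between measurements [s]
float32 scan_time        # time between scans [s]
float32 range_min        # minimum range value [m]
float32 range_max        # maximum range value [m]
float32[] ranges         # range data [m]
float32[] intensities    # intensity data (device-specific)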

 

Launch Files

 

Finally, we will examine the ROS launch files [7] (http://wiki.ros.org/roslaunch). A launch file is not strictly necessary for running a node, but it is used to supply the parameters that need to be passed into the node.

 

A launch file specifies the package, the node, the Publisher/Subscriber communication topic, and the different parameter types and values. When a launch file is present, the node can be launched via the launch file:

$ roslaunch <package_name> <launchfile>

E.g.

$ roslaunch ydlidar lidar.launch

 

In the YD Lidar package, the launch files can be found in the directory ydlidar_master/launch. There are two files: lidar.launch is the plain launch file that starts the Lidar, which then publishes the scanned data; lidar_view.launch additionally uses Rviz, the visualization tool, to visualize the data scanned by the Lidar and draw the “obstacles” on the screen.

3. LaserScan Filters

 

As discussed before, YD Lidar uses the LaserScan message type and outputs data as [angle, distance] pairs.

The LaserScan data as originally taken in is raw and quite noisy. Thus, we need to apply filters to remove the noise and make our object detection algorithm more accurate.

 

ROS has several built-in LaserScan filters that apply specifically to LaserScan message data. Their source code can be found in the laser_filters ROS package [8], which builds upon the filters package.

 

http://wiki.ros.org/laser_filters

 

Besides using a single filter, we can also connect several LaserScan filters into a filter chain. The data is passed into the first filter, then from the first filter to the second, and finally comes out of the last one.

 

One way to implement the filters is to use a scan-to-scan filter node. The node sits between the Publisher (which publishes the raw scan data) and the Subscriber (which receives the data), so that the Subscriber receives the filtered data. The Subscriber should subscribe to the new topic of the filter node instead of the Publisher’s topic.

 

In short: before adding the node, the Subscriber listens directly to the Publisher’s topic. After adding the node, the filter node subscribes to the Publisher’s topic and republishes the filtered data on a new topic, which the Subscriber listens to instead.

 

We picked the Median filter [9] (http://wiki.ros.org/filters/MedianFilter) provided in the package to filter our LaserScan data. The Median filter is a common filter that works by taking in the current measurement, finding the median of it and a user-defined number of previous values, and using that median as the current value. We implemented the filter within the scan-to-scan filter node.
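
To make the mechanics concrete, here is a sketch of the idea in Python (the actual laser_filters implementation is in C++; this class is illustrative only):

# Illustrative median filter over a stream of values.
# n plays the role of number_of_observations in the ROS Median filter.
from collections import deque

class MedianFilter:
    def __init__(self, n):
        self.window = deque(maxlen=n)  # the n most recent measurements

    def update(self, value):
        self.window.append(value)
        s = sorted(self.window)
        mid = len(s) // 2
        # return the median of the stored values (odd or even count)
        return s[mid] if len(s) % 2 == 1 else 0.5 * (s[mid - 1] + s[mid])

Roughly speaking, the ROS filter maintains one such window per scan angle across consecutive scans.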

Below are the instructions on implementing the filter node and inserting a Median filter.

 

Download and install the laser_filters package

 

The laser_filters package is not included in the set of ROS packages we installed before, so it needs to be installed separately.

 

First, create a workspace to hold the package (we named it laser_filters):

$ mkdir -p laser_filters/src

 

Then, in the src folder, download the source code of the laser_filters package from GitHub, e.g. by cloning the repository:

$ cd laser_filters/src

$ git clone https://github.com/ros-perception/laser_filters

(Note that using wget on the repository URL would only fetch an HTML page, not the source files.)

 

Go back to the top of the workspace and issue catkin_make to compile the package:

$ cd ..

$ catkin_make

 

Set environment variable:

$ source ./devel/setup.bash

 

Now, issue the rospack command to check that the package is successfully installed:

$ rospack find laser_filters

Explore and reconfigure files

 

Go to the directory that contains the source files:

$ cd laser_filters/src/laser_filters/src

 

All C++ source files for the laser_filters package are under laser_filters/src (header files are under laser_filters/include/laser_filters). It is very helpful to go through the source code to understand the code structure before making any changes. Note that some implementations may have been deprecated.

 

The most up-to-date and commonly used nodes are:

  1. scan_to_scan_filter_chain (takes in and sends out data in LaserScan format)
  2. scan_to_cloud_filter_chain (takes in data in LaserScan format and sends out data in PointCloud format, a coordinate-based representation)

 

Both of the nodes are really useful. We will focus on scan_to_scan_filter_chain here.

 

Under the same directory we can find median_filter.cpp, the definition file of the Median filter. It extends the Filter base class and implements the functions configure() and update(), the interface required of every filter. The filter chain node calls these functions when the Median filter is used within the chain.

 

Now, go to the directory that contains the configuration files:
$ cd ../examples

 

The directory contains both launch files and yaml files. As discussed before, the launch files are used to “wrap” the nodes and feed in parameters. In the ydlidar package, the parameters are written in the launch file itself; here, however, the parameters are written in a yaml file which is loaded by the launch file. Separating the parameters from the launch file is a good practice that keeps the file structure more organized.

 

For the Median filter, open median_filter_5_example.yaml and set the configuration.

 

The fields in the yaml file are self-explanatory. For more detail, see the documentation on the Median filter on the ROS wiki.

 

The most important parameter here is number_of_observations, which indicates the number of values the filter chooses the median from. The default value is 5. A larger number_of_observations generally leads to less noise but a longer “reaction” time (the older values “mask” the new values coming in, so a change in the data is not reflected until many cycles later). There is a trade-off between the noise level and the reaction time, and you need to find the number_of_observations that best balances the two.

 

Here, we set the number of observations for our Median filter to 13.

 

Then we will take a look at the launch file (median_filter_5_example.launch).

 

The launch file specifies the package (laser_filters), the node to call upon launch (scan_to_scan_filter_chain), the output (screen) and the name (laser_filter). It also specifies the yaml file to load upon launch.

 

In the middle, there is a line:

<remap from="scan" to="base_scan"/>

 

This line makes the filter node subscribe to the topic “base_scan” instead of “scan”. However, since our YD Lidar Publisher publishes data on the topic “scan”, there is no need for the remap. We can remove this line from the launch file.

Launch the filter chain with median filter

 

First, make sure that the YD Lidar is launched:

(In YD Lidar workspace)

$ roslaunch ydlidar lidar.launch

 

Then, simply use the launch file to launch the filter chain:

$ roslaunch laser_filters median_filter_5_example.launch

 

To check that the filter chain was successfully launched, use the rostopic command:

$ rostopic info <topic_name>

E.g.

$ rostopic info scan

shows the YD Lidar node as the Publisher and laser_filter (the name given to the scan_to_scan_filter_chain node in the launch file) as the Subscriber.

 

$ rostopic info scan_filtered

shows the laser_filter as the Publisher.

 

To receive the filtered data, change the ydlidar_client Subscriber node to listen to the topic “scan_filtered” in the source code, then run the node:

$ rosrun ydlidar ydlidar_client.py  # or ydlidar_client if using the C++ version

 

Now the filtered data can be received by the Subscriber. You can use rostopic to double-check.

At this point, the data is successfully acquired and post-processed, and is ready to use.

Using ROS on Pi Camera

We planned to use the built-in object detection functions from OpenCV to detect the object being tracked and to provide the object’s coordinates to our tracking algorithm. The OpenCV ROS package is included in the full set of ROS packages we installed at the beginning. The default OpenCV version for Kinetic is OpenCV 3.

 

To make sure that the package is already installed, use rospack:

$ rospack find opencv3

 

1. Getting Image from the Pi Camera

 

First, we needed to get the raw image data from the Raspberry Pi Camera. The Camera captures a 2D video stream of the surroundings, and each frame of the stream is composed of pixels.

 

Here we used raspicam_node, which stands for Raspberry Pi Camera node. It is a ROS package we downloaded and installed online (source code and instructions [10] https://github.com/UbiquityRobotics/raspicam_node). This package reads in the Camera image data and publishes it with the message type sensor_msgs/Image ([11] http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Image.html), which represents a 2D image and is ideal for Camera data transfers.

 

However, sensor_msgs/Image is a ROS image format. To use the image with OpenCV, there needs to be a “translator” between the ROS Image type and the OpenCV-compatible image type. The cv_bridge package [12] (http://wiki.ros.org/cv_bridge) takes ROS image messages and converts them into OpenCV’s cv::Mat format, which allows OpenCV to read and analyze the images. cv_bridge can also convert OpenCV images back to the ROS format.

 

The overall flow is: Pi Camera → raspicam_node → sensor_msgs/Image (ROS format) → cv_bridge → cv::Mat (OpenCV format) → OpenCV processing.
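
A minimal sketch of the conversion in Python (this assumes raspicam_node is running with raw publishing enabled; the topic name follows the raspicam_node convention):

# Convert incoming ROS Image messages to OpenCV images using cv_bridge.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(msg):
    # ROS Image -> OpenCV cv::Mat (a numpy array in Python), BGR channel order
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo('received a %dx%d frame', frame.shape[1], frame.shape[0])
    # ... run OpenCV processing on frame here ...

if __name__ == '__main__':
    rospy.init_node('image_listener')
    rospy.Subscriber('/raspicam_node/image', Image, callback)
    rospy.spin()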

The code of the package needs to be downloaded from the source webpage, and put into the /src folder within catkin_ws (the catkin root directory).

 

The package raspicam_node has several dependencies that need to be resolved, including cv_bridge and some other packages that are not installed during the initial ROS installation. The important ones are listed below:

 

cv_bridge — converts raw ROS images to an OpenCV-readable format, and vice versa

image_transport — provides different methods of image transport, e.g. compressed images

camera_info_manager — manages the Pi Camera information (e.g. calibration data)

 

To resolve all dependencies (including those not listed above), go to the catkin root directory, and issue the command:

$ cd ~/catkin_ws
$ rosdep install -y --from-paths src --ignore-src --rosdistro kinetic -r --os=debian:stretch

 

We used this command before, when installing the initial ROS packages. It works the same way here: it finds all the dependencies required by the packages already in the src directory and installs them.

 

Alternatively, it is possible to manually find and download all the dependency packages from GitHub into the src directory. However, this approach is not recommended because it requires much more effort.

 

After all dependent packages are downloaded, compile them in the workspace:

$ catkin_make

 

Now the package should be ready to use.

 

Our camera is a v2 camera, and we chose the 1280×720 resolution. Our command to launch the node was:

$ roslaunch raspicam_node camerav2_1280x720.launch

 

Use rosnode info to double check that the node is running correctly.

 

Note that by default, the node only publishes the image in a compressed format (message type sensor_msgs/CompressedImage, [13] http://docs.ros.org/melodic/api/sensor_msgs/html/msg/CompressedImage.html). To enable output of the raw camera image, add an option on the command line:

$ roslaunch raspicam_node camerav2_1280x720.launch enable_raw:=true

 

Details of the parameter definitions can be found in the launch files.
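
Alternatively, if you keep the default compressed stream, the payload can be decoded directly with OpenCV. A sketch (assuming the standard compressed-transport topic name):

# Decode sensor_msgs/CompressedImage messages into OpenCV images.
import rospy
import numpy as np
import cv2
from sensor_msgs.msg import CompressedImage

def callback(msg):
    buf = np.frombuffer(msg.data, dtype=np.uint8)  # raw compressed bytes
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)    # decompress to a BGR image
    # ... process frame here ...

if __name__ == '__main__':
    rospy.init_node('compressed_listener')
    rospy.Subscriber('/raspicam_node/image/compressed', CompressedImage, callback)
    rospy.spin()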

2. Viewing the Image

 

The rosnode command can be used to check whether the node is working correctly. But if you want to see the captured image on the screen in real time, you need another ROS package to display the image: image_view [14] (http://wiki.ros.org/image_view).

 

To install the package, download the source code, resolve any dependencies, and compile the package using catkin_make, just as before.

 

After the compilation finishes, first launch raspicam_node, and then run the image_view node to display the raw image on the screen (this assumes that raw image publishing is enabled in raspicam_node):

$ rosrun image_view image_view image:=/raspicam_node/image

 

The “image:=” command-line option remaps the topic from “image”, as defined in the image_view.cpp source file, to whatever topic you want to subscribe to.

 

To subscribe to the compressed image type instead, use the following command:

$ rosrun image_view image_view image:=/raspicam_node/image compressed

 

The “compressed” added at the end specifies the image transport type. Other image transport types (e.g. theora) are also supported. Please see the ROS wiki on the image_transport package [15] (http://wiki.ros.org/image_transport) for detailed information.

3. Locating Object to be Tracked

 

There are many useful ROS packages that use OpenCV to post-process the images captured by the Camera. Our goal was to track a certain object, and the data we needed to acquire was the location of that object in real time. Therefore, we picked find_object_2d ([16] http://wiki.ros.org/find_object_2d), a package that recognizes a selected object within a 2D frame and sends out the coordinates of the object in real time.

 

First, download and install the package on our Pi, following the same steps as before.

 

Then launch raspicam_node to start capturing images, and run the find_object_2d node on the raw image stream:

$ rosrun find_object_2d find_object_2d image:=/raspicam_node/image

The find_object_2d GUI will be launched. The GUI displays the currently captured video stream, along with the “feature points” recognized by the algorithm.

 

The GUI allows the user to manually select an object:

File → Add an object → Take picture → Select the region of the object on screen → Finish

 

In addition, the system can get stuck or become very slow when running the GUI with too many feature points detected, and once the object is selected there is no need to keep the GUI open. To disable it, edit the launch file:

~/find_object/launch/find_object_2d.launch

 

and find the lines:

 

<param name="gui" value="false" type="bool"/>

<param name="objects_path" value="~/objects" type="str"/>

 

Setting the gui parameter’s value to false starts the node without the GUI. For objects_path, set the value to the path of a folder containing the pictures of the objects to track (here, ~/objects). The node will then run without the GUI but with all the same functions as before.

To run find_object_2d without the GUI, use the command:

$ roslaunch find_object_2d find_object_2d.launch

 

[Figure: the raspberry on the Raspberry Pi case is selected and tracked.]

The find_object_2d node publishes on many topics; the most important one contains the coordinate information of the object being tracked. The message type is std_msgs/Float32MultiArray ([17] http://docs.ros.org/api/std_msgs/html/msg/Float32MultiArray.html).

 

QTransform

The multi-array encodes, for each detected object, a QTransform (Qt’s 3 x 3 transformation matrix), described below.

 

A QTransform object contains a 3 x 3 matrix. The m31 (dx) and m32 (dy) elements specify horizontal and vertical translation. The m11 and m22 elements specify horizontal and vertical scaling. The m21 and m12 elements specify horizontal and vertical shearing. And finally, the m13 and m23 elements specify horizontal and vertical projection, with m33 as an additional projection factor. (From the Qt documentation: http://doc.qt.io/archives/qt-4.8/qtransform.html)

 

The find_object_2d message contains this information in the form [objectId1, objectWidth, objectHeight, m11, m12, m13, m21, m22, m23, m31, m32, m33, objectId2, …], where the m## values form the 3×3 homography matrix (m31 = dx and m32 = dy). Following the QTransform definition, the 10th element of each object’s block, m31, is the one we used in our algorithm, because it represents the actual horizontal translation of the detected object.

 

For example, if a point/pixel at coordinate (x, y) moves to a new position (x', y'), the new coordinate is calculated by the QTransform with the equations below:

x’ = m11*x + m21*y + dx
y’ = m22*y + m12*x + dy

As explained before, m11 is the horizontal scaling factor, m21 is the horizontal shearing factor, and m31 (dx) is the horizontal translation factor.
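
To extract the translation in code, here is a sketch of a Subscriber in Python (/objects is the topic find_object_2d publishes on; the stride of 12 follows the layout above):

# Parse find_object_2d output: each object occupies 12 consecutive floats:
# [id, width, height, m11, m12, m13, m21, m22, m23, m31, m32, m33]
import rospy
from std_msgs.msg import Float32MultiArray

def callback(msg):
    data = msg.data
    for i in range(0, len(data), 12):
        obj_id = int(data[i])
        dx = data[i + 9]   # m31: horizontal translation
        dy = data[i + 10]  # m32: vertical translation
        rospy.loginfo('object %d: dx=%.1f, dy=%.1f', obj_id, dx, dy)

if __name__ == '__main__':
    rospy.init_node('objects_listener')
    rospy.Subscriber('/objects', Float32MultiArray, callback)
    rospy.spin()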

 

How to create a ROS package (tutorial):

http://wiki.ros.org/ROS/Tutorials/CreatingPackage

 


2.3. Rewriting the Lidar Client in Python

 

Now that we understand all the important parts of the Lidar package, we can start to modify the package to suit our needs.

 

The first thing we noticed was that the Lidar Publisher and Subscriber nodes are both written in C++. However, we wanted our code to be in Python, to align with the other work already done on the PiCar and to better reuse the previously developed Pi/Arduino communication code, which was all written in Python.

 

ROS supports Python and C++ equally, and Publisher/Subscriber pairs are not restricted to the same programming language. Therefore, we decided to rewrite only the Subscriber node in Python and leave the Publisher node unchanged. This required minimal effort while meeting our needs.

 

The final version of the Python code we wrote can be found in the Appendix.
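
For orientation, here is a minimal sketch of the general structure of such a node (the actual client is in the Appendix; the topic “scan” matches lidar.launch):

# General shape of a Python Lidar Subscriber (see the Appendix for the real client).
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # scan.ranges[i] is the distance measured at angle:
    #   scan.angle_min + i * scan.angle_increment
    rospy.loginfo('received %d range readings', len(scan.ranges))

if __name__ == '__main__':
    rospy.init_node('ydlidar_client_py')
    rospy.Subscriber('scan', LaserScan, scan_callback)
    rospy.spin()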

 

After the code was written, we needed to build the node using catkin_make:

$ cd ydlidar_ws  # the catkin workspace folder

$ catkin_make

 

There is no need to change the makefile to build a Python ROS node (whereas for C++ nodes, the file names need to be added to the CMakeLists.txt file). The Python script does, however, need to be marked executable (chmod +x) so that rosrun can start it.

 

After compilation, run the new Python node with the command:
$ rosrun ydlidar ydlidar_client.py  # the name of the Python Subscriber