Crossing Campus on Cruise Control: An Autonomous RC Car
Authors:
Thomas Robertson – B.S. Electrical Engineering
robertson.t@wustl.edu
Vanessa Wergin – B.S. Systems Engineering
v.g.wergin@wustl.edu
Billy Williams – B.S. Electrical Engineering
b.x.williams@wustl.edu
Project Advisor:
Dr. Dorothy Wang (dorothyw@wustl.edu)
05/08/2023
Submitted to Professor Dorothy Wang and the Department of Electrical and Systems Engineering
01/2023 – 05/2023
Abstract
As the autonomous car industry continues to grow, so must research into the sensors and control algorithms behind it. Autonomous cars analyze their environment with the help of sensors, and this project explores two of them. A Traxxas Rustler RC car was programmed to navigate an L-shaped path on Washington University in St. Louis's campus. A Raspberry Pi and Robot Operating System (ROS) programming methods were used to combine RPLidar readings and Pi camera images so the car could navigate the track safely. The control algorithm is comparable to the lane assist and turn assist features found in modern vehicles. The goal of the project was to create a safe autonomous system that could be tested in a realistic environment beyond a computer simulation. Vanessa Wergin developed the ROS control program for the final implementation, Thomas Robertson developed the Pi camera and RPLidar software, and Billy Williams developed the final car hardware design. The results of this study could provide insight into sensor research and development for the self-driving cars of tomorrow.
Introduction
The rate of automobile-related deaths has decreased by nearly half in the past 30 years. Much of this improvement can be attributed to driver-assistance technology such as Honda's Lane Keeping Assist [3] and the adaptive cruise control systems offered by Chevrolet and other manufacturers. Car manufacturers worldwide have developed safety measures for their vehicles that positively impact what happens on the road. Even so, there were still over 46,000 automobile-related deaths in the U.S. in 2021 alone [2], most of which are attributed to driver error. How can this issue be combated? A promising solution is taking the wheel out of the driver's hands. A computer can process information much faster than a human mind, so why not have one operate a machine weighing thousands of pounds and traveling at 60 mph? The answer is that, in a limited sense, we already do. The United States federal government allows self-driving cars on the road, but a person must sit in the driver's seat and remain engaged while the car moves autonomously. So, although not fully autonomous in the strict sense of the word, there are cars on the road today that drive themselves.
Autonomous, self-driving vehicles may soon become the norm as a means of predictably safe ground travel. Tesla and other pioneers of the technology have already begun to develop tomorrow's fleet. Before that transition can take place, however, sensor technology must be studied extensively. Self-driving cars are currently involved in a disproportionate share of accidents relative to their numbers on the road, and that must improve before autonomous cars can be mass-produced and widely used. This project explored the benefits of this type of environment-sensing technology as well as the limitations it possesses. Through the project selection guidelines provided by WashU's Electrical and Systems Engineering capstone program, it was possible to take part in the research that will shape future infrastructure and travel. The Electrical and Systems Engineering Department at WashU allowed the purchase of devices that are widely used in the autonomous car industry today; groups could choose from a selection that included encoders, cameras, lidars, IMUs, ultrasonic sensors, and more. The Slamtec RPLidar A1 and a Pi camera were used in the final design.
The main objective of this project was safety of travel. Travel safety was judged by how well the car stayed on the path, how fluid its movements were, how quickly it turned, and how quickly it could adapt to a changing environment. Because the Rustler is a scaled-down stand-in for passenger vehicles roughly 100 times its volume and 1,000 times its weight, a safety failure on the Rustler would correspond to a catastrophic failure at full scale.
Past electrical and systems engineering students had struggled to implement lidar-based navigation without placing artificial obstacles on the course for the sensor to read, and the group decided it was up for that challenge. Integrating the RPLidar and Pi camera under ROS proved more difficult and time-consuming than expected, and it ultimately led to a modification of the project's original blueprint. Early on, the plan also included an encoder to monitor speed as another means of adaptive control, but it was later determined that the car would not travel fast enough to make the encoder necessary. The final design did not include encoder integration.
Task
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-1.png)
Figure 1. Vehicle Race Track
The track for our car is shown in Figure 1, located in the northeast corner of Washington University's campus. The red line marks the available lengths of path our car could maneuver through. We chose to follow the green line, starting at the black circle and ending at the black 'x'.
Methods
Equipment
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-6-768x1024.png)
Figure 2. Final RC Car
The vehicle we used for this project is a Traxxas Rustler RC car. In a previous semester, teams in this course deconstructed the car to disable the remote control and enable control from a Raspberry Pi. We were also given a Pi hat, which adds more functionality on top of the Pi. How we connected these parts together is explained later in this paper.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-7-1024x541.png)
Figure 3. Pi Camera
Our main piece of equipment for steering the car is a Pi camera. This component allows for simple photo capture followed by image manipulation. We tried mounting the camera in several locations on the car and ultimately placed it on the side of the vehicle.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-6-768x1024.png)
Figure 4. RPLidar
Our second main piece of equipment is a Slamtec RPLidar. This device takes a 2D, 360-degree scan of everything around it up to 6 meters away, continuously returning a distance reading at each angle. We placed it on top of the car so that nothing unintended would interfere with the readings.
Connecting Equipment
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-1024x355.png)
Figure 5. Wiring Connection
We were provided with documentation on the Canvas website on how to connect the Raspberry Pi to the car. The right square in Figure 5 represents the Pi hat, which contains connection points for both the servo and the ESC (electronic speed controller), the main power driver for the motor. Figure 5 shows exactly how to connect them, and Figure 6 shows the actual wires. We also plugged the camera into the Pi using a camera ribbon cable (the black cable in Figure 6) and connected the lidar to a USB port on the Pi (the white cable in Figure 6).
As a preliminary step in this project, we had to solder an extra connection from the ESC to the Pi hat (the thick red and black wires in Figure 6). Our car only came with wires running from the ESC to the battery, but we found we needed to power the Pi hat as well. After this step was complete, we had all the connections necessary to power and drive the car.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-8-768x1024.png)
Figure 6. Connections to RC Car
Initialization
Our first step was to ensure we could rotate and steer the tires. To sync the ESC with our Pi, we completed a motor initialization sequence, again provided on the Canvas website. This allows the Pi to control the speed of the car via the PWM board. The sequence entailed setting the duty cycle of the PWM signal to 20 percent, 15 percent, 10 percent, and then 15 percent, in that order. The duty cycle is the fraction of each PWM period during which the signal is high. For our car, a 15 percent duty cycle corresponded to zero speed, stopping the tires altogether. The initialization sequence needed to be completed every time we turned the car on.
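To make the sequence concrete, the sketch below reproduces the arming steps in Python. It is a minimal illustration only: it assumes software PWM through the RPi.GPIO library on an arbitrary GPIO pin at an assumed 100 Hz, whereas our car actually drove the ESC through the Pi hat's PWM board, so the pin, frequency, and library calls are placeholders rather than our exact code.

```python
# Minimal sketch of the ESC arming sequence (20% -> 15% -> 10% -> 15% duty cycle).
# Assumptions: RPi.GPIO software PWM, GPIO pin 18, 100 Hz PWM frequency.
# The real car used the Pi hat's PWM board, so these details are illustrative.
import time
import RPi.GPIO as GPIO

ESC_PIN = 18        # placeholder GPIO pin carrying the ESC signal
PWM_FREQ_HZ = 100   # assumed PWM frequency

GPIO.setmode(GPIO.BCM)
GPIO.setup(ESC_PIN, GPIO.OUT)
esc = GPIO.PWM(ESC_PIN, PWM_FREQ_HZ)

esc.start(20)               # begin at 20% duty cycle
time.sleep(1)
for duty in (15, 10, 15):   # then 15%, 10%, and back to 15% (neutral / zero speed)
    esc.ChangeDutyCycle(duty)
    time.sleep(1)

# After arming, 15% holds the wheels still in our setup; higher duty cycles drive the motor.
```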
The steering angles range from 0 to 180 degrees, with 90 nominally straight ahead; values above 90 turn the car left, while values below 90 turn it right. A substep of the initialization process was to check whether the 90-degree setting actually drove straight along the path. There are several reasons it might not, including the weight distribution of the car with multiple sensors mounted, varying tire pressure, and a path that is not perfectly straight. This check was also done almost every time we drove the car, and we found that our 'straight' steering angle varied between 92 and 93 degrees.
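As a rough illustration of that calibration step, the sketch below sweeps a few candidate angles around 90 degrees so the operator can watch which one actually tracks straight. It assumes an Adafruit ServoKit-compatible PWM hat with the steering servo on channel 0; our hat, channel assignment, and exact calibration routine may differ.

```python
# Sketch of the 'find straight' calibration: hold each candidate angle near 90
# and observe whether the car drifts. Assumes an Adafruit ServoKit-compatible
# PWM hat with the steering servo on channel 0 (placeholders, not our exact setup).
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)
STEER_CHANNEL = 0   # placeholder channel for the steering servo

for candidate in (90, 91, 92, 93, 94):
    kit.servo[STEER_CHANNEL].angle = candidate
    print(f"Holding steering angle {candidate}; roll the car and watch for drift.")
    time.sleep(5)   # time to roll the car forward and judge its heading

# On our car this kind of check typically pointed to 92-93 degrees as 'straight'.
```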
It should be noted that the rear wheels provide the drive power, while the front wheels provide the steering.
Turning
For clarity throughout this paper, 'turning' refers to the moment our car reaches the point on the track where it needs to turn 90 degrees, while 'steering' refers to the car keeping itself centered on the straight sections of path. The turn relies on the RPLidar, and the process is fairly straightforward. When we analyzed our projected path, we found that there is only one large pole on the left side of the car along the first leg, and it happens to stand at the very end, where we would like to start the turn. We therefore continuously read the distance value at the lidar's 90-degree bearing. Figure 7 shows the direction the car travels in red and the angle we read in green.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-2.png)
Figure 7. 90th Degree of Lidar
We initially measured the distance from the path to the pole at the point where the turn should begin. We then programmed the car so that if the lidar returned a reading within a tolerance band around that measured distance, it would hold the turned tires for a set period of time. This took some trial and error to achieve a smooth turn that placed the car in the middle of the second leg without over- or under-steering. In our final design, we set the steering angle to 110 degrees and held it there for 3.8 seconds before returning to the original steering angle.
After the turn, the steering component took back over to center the car. We also kept reading the lidar; if another pole was found after the turn, the car would stop, concluding the run.
| Speed (PWM) | Servo Angle (degrees) | Sleep Time (s) |
| --- | --- | --- |
| 0.16 | 120 | 1.32 |
| 0.16 | 110 | 1.97 |
| 0.165 | 110 | 0.820 |
| 0.17 | 110 | 0.72 |

Table 1. Values Documented for Turn
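The sketch below ties the trigger and the timed turn together, following the Adafruit RPLidar guide [4] for reading scans. The serial port, pole distance, tolerance band, and the set_steering() helper are placeholders (our actual code positioned the servo through the Pi hat), while the 110-degree angle and 3.8-second hold come from the final design described above.

```python
# Sketch of the lidar-triggered turn. Scan reading follows the Adafruit RPLidar
# guide [4]; the port, distances, and set_steering() stub are placeholders.
import time
from adafruit_rplidar import RPLidar

PORT = "/dev/ttyUSB0"      # assumed USB serial port for the lidar
POLE_DISTANCE_MM = 1500    # measured path-to-pole distance (placeholder value)
TOLERANCE_MM = 200         # accept readings within +/- this band of the pole distance
STRAIGHT_ANGLE = 92        # calibrated 'straight' steering angle
TURN_ANGLE = 110           # steering angle held during the turn
TURN_TIME_S = 3.8          # how long the turn is held

def set_steering(angle):
    """Hypothetical stand-in for the PWM call that positions the steering servo."""
    print(f"steering -> {angle} degrees")

def wait_for_pole(lidar):
    """Block until a reading near the 90-degree bearing matches the pole distance."""
    for scan in lidar.iter_scans():
        for _, angle, distance in scan:          # distance is reported in millimeters
            if 88 <= angle <= 92 and distance > 0:
                if abs(distance - POLE_DISTANCE_MM) <= TOLERANCE_MM:
                    return

lidar = RPLidar(None, PORT, timeout=3)
try:
    wait_for_pole(lidar)             # poll the car's left side until the pole appears
    set_steering(TURN_ANGLE)         # begin the hard-coded turn
    time.sleep(TURN_TIME_S)
    set_steering(STRAIGHT_ANGLE)     # hand control back to straight-line steering
finally:
    lidar.stop()
    lidar.disconnect()
```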
Steering
Steering entails keeping the car centered on the track, and it is where we spent the majority of our time over the course of the semester. Unfortunately, we were never able to steer the car perfectly.
In our final design, we placed the camera on the side of the car rather than the front, for reasons explained later in the paper. The figure below shows a sample photo captured outside on the track.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-8-768x1024.png)
Figure 8. Outside Photo on Track
The main method for steering was to manipulate the images in order to stay a steady distance away from the grass on the car's left side. This involved several steps. The first was to take an image, similar to the one in Figure 8. The next was to convert the image to the HSV color space, which stands for hue, saturation, and value: hue encodes the color itself, while saturation and value capture how intense and how bright each pixel is. Working in HSV lets us mask the image so that pixels in a chosen color range become white and everything else becomes black. We applied such a mask to the HSV image to pick out green. An example is shown in Figure 9.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-3-1024x580.png)
Figure 9. Masked Image
After the masking, we cropped the image to keep only the lower portion, since nothing above the grass needed to be considered. Ideally this leaves an image that is white at the top, where the green grass is, and black at the bottom, where the road is.
We then scanned the image row by row, starting from the top and working down, counting the number of white pixels in each row and comparing it to a threshold. As the scan reaches the road, which should be nearly black, the white-pixel count drops off considerably. When we found the first row that fell beneath the threshold, we returned that row index and stopped looping.
The basic steering logic is that if the returned row number gets smaller, the road edge appears higher in the image, meaning the car has drifted too far from the grass; the car would then turn its tires to the left for a short time before returning to the original steering angle. The opposite holds if the row number gets larger. By taking an image, masking it, and reading the pixels, we were able to steer the car along the track while keeping a set distance from the grass.
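A condensed sketch of this image pipeline is shown below using OpenCV. The HSV bounds, crop fraction, and white-pixel threshold are illustrative values, not the exact numbers we tuned on the track.

```python
# Sketch of the steering image pipeline: mask green in HSV, crop to the lower
# part of the frame, and return the first row whose white-pixel count drops
# below a threshold. All numeric values here are illustrative, not our tuned ones.
import cv2
import numpy as np

GREEN_LOW = np.array([35, 60, 60])      # assumed lower HSV bound for grass
GREEN_HIGH = np.array([85, 255, 255])   # assumed upper HSV bound for grass
WHITE_THRESHOLD = 50                    # minimum white pixels for a row to count as grass

def find_grass_edge_row(bgr_image):
    """Return the index of the first row (scanning top-down) that is mostly road."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)   # grass -> white, everything else -> black

    # Keep only the lower portion of the frame; everything above the grass is ignored.
    height = mask.shape[0]
    cropped = mask[height // 2:, :]

    # Scan rows from the top down, counting white pixels in each.
    for row_index, row in enumerate(cropped):
        if np.count_nonzero(row) < WHITE_THRESHOLD:
            return row_index              # first mostly-black row: the grass/road edge
    return cropped.shape[0]               # edge never found; treat as all grass

# Example: edge_row = find_grass_edge_row(cv2.imread("track_photo.jpg"))
```

The returned row index is what the steering logic compares against its target value to decide whether to nudge the wheels left or right.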
ROS
Our final design used ROS1, which allowed us to run several sensors at once and have them interact with the car. We used six nodes in total: talker (publisher) nodes for the camera, the lidar, and motor start-up, and listener (subscriber) nodes for the motor, the hard-coded turn, and the steering.
The camera node published the row it recognized as the edge of the sidewalk, and the steering node subscribed to this topic and adjusted the wheel position based on the current value. The lidar node published a 0 or 1 depending on whether it detected a pole on the left side of the car. The motor initialization node acted essentially as a start button once all of the other nodes were up and running: when it was run, it turned on the motor and steering began. The turn listener node subscribed to the lidar topic and would override the steering to complete the hard-coded turn once it received a 1 from the lidar node. After the pole was no longer in range, steering resumed and a counter recorded that one pole had been passed. The same process repeated when the lidar detected another pole, incrementing the count to 2. Once the car reached the final pole at the end of the path, the count reached 3, which signaled the motor listener node to stop the motor, and the car came to a halt.
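The snippet below sketches the ROS1 (rospy) publish/subscribe pattern for two of these nodes, the lidar talker and the turn listener. Topic names, message types, and rates are placeholders rather than our exact node definitions, and each function would live in its own script run as a separate node.

```python
# Sketch of the ROS1 publish/subscribe pattern for the lidar talker and the
# turn listener. Topic names, message types, and rates are placeholders; each
# function would run as its own node in a separate script.
import rospy
from std_msgs.msg import Int32

def lidar_talker():
    """Publish 1 when a pole is detected on the car's left side, otherwise 0."""
    rospy.init_node("lidar_talker")
    pub = rospy.Publisher("pole_detected", Int32, queue_size=10)
    rate = rospy.Rate(10)                     # assumed 10 Hz publishing rate
    while not rospy.is_shutdown():
        detected = 0                          # replace with the actual lidar distance check
        pub.publish(Int32(detected))
        rate.sleep()

def turn_listener():
    """Subscribe to the pole topic and trigger the hard-coded turn on a 1."""
    def callback(msg):
        if msg.data == 1:
            rospy.loginfo("Pole detected: overriding steering for the turn.")
            # ...hold the turn angle for the set time, then hand control back to steering...

    rospy.init_node("turn_listener")
    rospy.Subscriber("pole_detected", Int32, callback)
    rospy.spin()                              # keep the node alive to service callbacks

if __name__ == "__main__":
    turn_listener()    # or lidar_talker(), depending on which node this script represents
```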
Results
Although we put in a great deal of time and effort, we were unable to get the car to drive autonomously along the full track without steering off into the grass. The car did complete portions of the track, particularly when it started closer to the turning pole. We would have loved to say our car worked perfectly, but due to the problems and considerations outlined in this paper, our system was simply not consistent enough to fully complete the task.
Discussion
Alternative Methods
Throughout the course of the semester, we tried many different approaches to get the car to work properly. One early solution was to mount the camera on the front of the car. However, as seen in the images below, the camera could not capture a high-quality image from that position without some form of distortion, likely because bumps in the road shook the mount the camera was attached to. This led us to place the camera on the side of the car; images taken from the side were much sharper and free of blurring.
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-7-1024x541.png)
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-5-1024x562.png)
Figure 10. Images With Camera on Front
We also tried a couple of different ways of using the camera to steer. We initially tried to find the center of the road, but the rocks in the pavement were not a consistent color, so it was difficult to find the center reliably using a color range. The same was true of finding the center of the green grass: the center of the green region was not the same from frame to frame, and the mask would occasionally pick up stray green spots beyond the grass.
We also tried mounting an ultrasonic sensor on the side of the car to measure the distance to the grass. Its readings were inconsistent except at very close range, however, and we could not rely on the height of the grass with the landscapers mowing it.
Problems
One of the first issues we ran into at the beginning of the project concerned the operating system we would run on the Pi. Because we intended to use ROS2, we began by installing the suggested operating system that was most compatible with it, Ubuntu. We were able to install the software and ROS successfully, but when testing several of our sensors we ran into issues such as user permissions and incompatible packages. This was especially apparent with the PiCamera, an essential part of our control design. Because of this trouble, we ultimately switched back to Raspberry Pi OS, which we knew was compatible with the PiCamera software. All of our sensors worked after the switch, and we continued developing the sensor-based control from there. As testing progressed and we began operating several sensors at once, we found that Python's threading approach was causing problems when running the camera and lidar together. At that point we decided to try ROS, which required switching Raspberry Pi OS to the legacy version compatible with ROS1.
Most of the problems we encountered once the car was running on the ground involved steering with the camera. Our initial strategy mounted the camera on the front of the car: we would take a picture, mask the image to identify the color of the sidewalk ahead, and steer toward the center of that color. After getting inconsistent values for the center of mass, we looked at the captured photos and realized that the sidewalk filled most of the frame, with the grass on either side appearing only about halfway up the image. This left little room for steering corrections to take effect, so we switched to the approach used in our final design, with the camera on the side of the car.
Before settling on our final strategy of masking to find the edge of the sidewalk, we tried finding the center of mass of the grass in the image and holding it in a fixed place. The problem with this method was that different parts of the sidewalk had different amounts of grass behind them, so the readings were inconsistent; detecting the edge of the sidewalk proved more reliable.
Once we implemented edge detection of the sidewalk, another camera issue we ran into was the variation in performance and values depending on the weather. The steering behaved differently on sunny and cloudy days, which caused particular problems when the sun came out and then went back behind clouds. We needed to tune the mask values to the lighting outside, so the car could not operate well on partly cloudy days with inconsistent sunlight.
Part of the reason the inconsistent lighting was a problem was our ranges for hue, saturation, and value in the mask. When we looked at images from two different locations on the sidewalk, the same mask applied on the same day gave very different results in the two spots. Below, one image successfully masks the grass as white, while the other actually masks the sidewalk itself.
Figure 11. Mask of image from south sidewalk
Figure 12. Mask of image from east sidewalk
At this point we imported our photos into an application that reports the HSV values at a selected point in a photo. We found that the sidewalk had a slight green hue, and that the angle of the sun hitting the grass produced a different set of HSV values than when it was viewed from the other side. We adjusted our ranges and did not run into this particular problem again during testing. The image below shows that on a bright, sunny day the grass can lack a distinct green color, which led to further inconsistencies in the steering.
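For anyone reproducing this step, the same HSV inspection can be done directly in OpenCV instead of a separate application; the short sketch below prints the HSV values at a chosen pixel of a saved photo. The file name and pixel coordinates are placeholders.

```python
# Quick HSV picker: print the hue, saturation, and value at a chosen pixel so
# mask ranges can be retuned for the day's lighting. File name and coordinates
# are placeholders.
import cv2

image = cv2.imread("sidewalk_sample.jpg")   # a photo captured on the track
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

x, y = 320, 400                             # pixel to inspect (column, row)
hue, saturation, value = hsv[y, x]          # note: NumPy indexing is [row, column]
print(f"HSV at ({x}, {y}): hue={hue}, saturation={saturation}, value={value}")
```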
![](https://sites.wustl.edu/esecapstonevwtrbw/files/2023/05/image-4-1024x550.png)
Figure 13. Image with Large Amount of Reflected Sunlight
Conclusion
There were many setbacks in the project's implementation of the control algorithm. Early software issues delayed the detection of hardware issues that arose later during testing. The project would have benefited from two to three additional weeks of testing under different lighting conditions, which would have improved performance during the final presentation. Still, there is value in repeated implementation failures, especially for aspiring engineers.

A silver lining for the project was the consistent performance of the lidar sensor. The turning mechanism and the control programming that accompanied it produced a safe, stable turn, which is extremely important for passenger safety in full-size vehicles. Notably, Tesla CEO Elon Musk has expressed skepticism about lidar for passenger vehicles; the results of this study could serve as an avenue for future autonomous control work using lidar sensors that have previously been dismissed.

Although we were not able to have the car drive itself around the full track, we learned many valuable lessons about problem solving and working as a team. This capstone project drew on much of what we learned throughout our college education and will help us in the next steps of our careers.
Deliverables
- A final written report and oral presentation (demo) to describe our process and results.
- Webpage outlining the general process and strategy for autonomous vehicle navigation.
Schedule/Timeline
| Task # | Task Name | Start | Finish | Due Date | Completion Date |
| --- | --- | --- | --- | --- | --- |
| 1 | Formal Proposal | 02/06/23 | 02/19/23 | 02/19/23 | 02/19/23 |
| 2 | Preliminary Sensor Testing | 02/07/23 | 02/21/23 | 02/28/23 | |
| 3 | Data Collection with Car | 02/21/23 | 02/28/23 | 02/28/23 | |
| 4 | Steering (straight) | 02/25/23 | 03/07/23 | 03/28/23 | |
| 5 | Car Weight and Friction Measurements | 02/25/23 | 03/07/23 | Not needed | |
| 6 | Path Marker Data Collection | 03/04/23 | 03/11/23 | 03/28/23 | |
| 7 | Steering with Camera | 03/11/23 | 04/28/23 | 04/28/23 | |
| 8 | Navigating Turn | 03/04/23 | 03/23/23 | 04/04/23 | |
| 9 | Navigating Turn – Corrections and Final Alterations | 03/23/23 | 04/12/23 | 04/28/23 | |
| 10 | Presentation | 04/15/23 | 04/30/23 | 05/01/23 | 04/28/23 |
| 11 | Final Report | 04/15/23 | 05/08/23 | 05/08/23 | 05/09/23 |
| 12 | Webpage | 04/29/23 | 05/10/23 | 05/10/23 | 05/09/23 |
References
[1] WashU ESE Capstone Canvas webpage (internal course site).
[2] National Safety Council, Injury Facts: Motor Vehicle Deaths and Rates. https://injuryfacts.nsc.org/motor-vehicle/historical-fatality-trends/deaths-and-rates/
[3] Vern Eide Honda, "What Is Honda Lane Keeping Assist?" https://www.verneidehonda.com/what-is-honda-lane-keeping-assist/
[4] Adafruit Learning System, "Using the Slamtec RPLidar." https://learn.adafruit.com/slamtec-rplidar-on-pi/using-the-slamtec-rplidar