What We Did

From experimentation, we found that the Raspberry Pi's camera was the most useful sensor for this task. By streaming video of the course, we could analyze every frame in real time to detect a specific color; we chose magenta. We then masked each frame, setting every magenta pixel to white and every non-magenta pixel to black, as seen below.
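A minimal sketch of this masking step is below, assuming OpenCV (cv2) and NumPy with a BGR frame from the camera; the HSV bounds shown are illustrative placeholders, not the exact values we tuned.

```python
import cv2
import numpy as np

def mask_magenta(frame_bgr):
    """Return a binary mask: white (255) where a pixel falls in a magenta HSV range, black (0) elsewhere."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative pink-to-purple bounds (OpenCV hue runs 0-179); real values would be tuned on the course.
    lower = np.array([140, 80, 80])
    upper = np.array([170, 255, 255])
    return cv2.inRange(hsv, lower, upper)
```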

Shown above: original, masked, and restored screenshots of our magenta sign created by image processing.

With a masked image in hand, we could find the center of mass of the white pixels, giving us center X and Y coordinates to navigate toward. We did this using four purple turn-marking posters like the one in the masking demonstration above.

By treating the image as a grid of pixels, with the origin (0,0) at the upper-left corner and the largest (X,Y) position, equal to the image resolution, at the lower-right corner, we could average the coordinates of the white pixels to approximate the center of the detected sign. To stay on target, we used the center X coordinate (cX) to make small steering adjustments, and we used the center Y coordinate (cY) as a threshold for deciding when to commit to a full 90-degree turn toward the next sign.
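The sketch below shows how such a centroid could be computed from the mask with OpenCV's image moments and turned into a small steering correction; the proportional gain and sign convention are hypothetical, not our exact control constants.

```python
import cv2

def mask_centroid(mask):
    """Average the white pixels via image moments; returns (cX, cY), or None if the mask is empty."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:        # no magenta pixels detected in this frame
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def steering_correction(cX, frame_width, gain=0.005):
    """Correction proportional to how far the sign sits from the horizontal center of the frame."""
    error = cX - frame_width // 2    # positive: sign is to the right of center
    return gain * error              # hypothetical convention: positive output steers right
```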

To make those 90-degree turns, we relied on the cY value of the masked image. As the car moved closer to a sign, the average cY value decreased, because the magenta region rises toward the top of the frame and Y values shrink toward the top. We therefore paused the navigation code and performed a full 90-degree left turn once the stream's cY values dipped below a chosen threshold.
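A sketch of that per-frame decision, reusing the centroid and steering helpers above; the threshold value here is a hypothetical stand-in for the one we tuned.

```python
CY_TURN_THRESHOLD = 120   # hypothetical pixel row; the real value was tuned on the course

def navigation_step(mask, frame_width):
    """One frame of the navigation loop: steer toward the sign, or signal that it is time to turn."""
    centroid = mask_centroid(mask)            # from the sketch above
    if centroid is None:
        return "no_sign", 0.0
    cX, cY = centroid
    if cY < CY_TURN_THRESHOLD:                # sign has risen high enough in the frame
        return "turn_left_90", 0.0            # pause navigation and commit to the turn
    return "drive", steering_correction(cX, frame_width)
```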

Our biggest issue with this method was the camera's exposure. When the exposure drifted, the sign's apparent color would darken or wash out and fall outside the HSV (hue, saturation, and value) range we had picked to exclude colors beyond the pink-to-purple band.
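One common way to reduce this kind of drift (not necessarily what we did) is to lock the camera's automatic exposure and white balance after they settle; the sketch below assumes the picamera library on the Raspberry Pi.

```python
import time
import picamera

def lock_camera_settings(camera):
    """Freeze auto-exposure and auto-white-balance so the sign's apparent color stays stable."""
    time.sleep(2)                                  # let the auto algorithms settle first
    camera.shutter_speed = camera.exposure_speed   # fix the shutter at its current value
    camera.exposure_mode = 'off'                   # disable auto-exposure
    gains = camera.awb_gains                       # capture the current white-balance gains
    camera.awb_mode = 'off'                        # disable auto-white-balance
    camera.awb_gains = gains                       # reapply the captured gains

camera = picamera.PiCamera(resolution=(640, 480))
lock_camera_settings(camera)
```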

One other adjustment we made was to crop the left-most and right-most quarters out of the image, keeping background objects from heavily skewing the masking process. An example of the camera frame before and after cropping is shown below, followed by a sketch of the crop.

Before Cropping
After Cropping
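A sketch of that crop, assuming a NumPy image array (height x width x channels) as returned by OpenCV; the cropped frame would then be passed to the masking step.

```python
def crop_center_half(frame):
    """Discard the left-most and right-most quarters, keeping the middle half of the frame."""
    h, w = frame.shape[:2]
    return frame[:, w // 4 : 3 * w // 4]
```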

A block diagram of the final design is shown below.


With the hardware set up and the sensors connected to the Raspberry Pi, we arrived at the final design of the car. See below for pictures of the finished car.

Right View of Car
Top View of Car
Left View of Car
Close Up View of Raspberry Pi Connections