To verify the correctness of our design, we compared its results with those from a working C# implementation of the Lucas-Kanade algorithm. We compared intermediate values in the simulation waveforms to the expected values from the code, and in doing so identified a few issues with our original design. Each was a minor programming error, the most significant being improper control logic for the input line buffers. At the end of the debugging process, we used a test bench file to write the results to a tab-separated text file. This format is easily opened in Microsoft Excel for comparison with the output of the C# code.
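Once both the simulation and the reference dump tab-separated text, the comparison can be automated rather than done by hand in Excel. A minimal Python sketch (the file names and tolerance are hypothetical, not part of the original flow):

```python
import csv

def load_tsv(path):
    """Load a tab-separated file of numbers into a list of rows."""
    with open(path, newline="") as f:
        return [[float(v) for v in row] for row in csv.reader(f, delimiter="\t")]

def compare(sim_rows, ref_rows, tol=1e-3):
    """Return (row, col) positions where simulation and reference disagree."""
    mismatches = []
    for r, (srow, rrow) in enumerate(zip(sim_rows, ref_rows)):
        for c, (s, v) in enumerate(zip(srow, rrow)):
            if abs(s - v) > tol:
                mismatches.append((r, c))
    return mismatches
```

Usage would be something like `compare(load_tsv("sim_output.tsv"), load_tsv("csharp_output.tsv"))`, with an empty result meaning the designs agree to within the tolerance.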
Simulated Optical Flow:
Once correctness was verified, we sent two pairs of test frames through our simulation. Using MATLAB, we generated vector-field overlays from the text files produced by the simulation. The results for the first pair of frames are below. From the optical flow vectors, you can see the general direction of motion as well as variations in magnitude caused by the folds in the fabric. For example, the upper-right corner of the image has low optical flow because a fold causes the fabric in that location to move less than the surrounding material.
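A readable overlay requires subsampling the dense flow field before plotting arrows. A small Python helper sketching that step (the step size and row-major list layout are assumptions); the returned arrays are the kind of inputs MATLAB's `quiver` or matplotlib's `quiver()` expect:

```python
def quiver_grid(vx, vy, step=8):
    """Subsample a dense flow field for a vector-field overlay.

    vx, vy: lists of rows holding the horizontal/vertical flow components.
    Returns flat X, Y, U, V lists: arrow positions and components,
    sampled every `step` pixels so the plot is not a solid mass of arrows.
    """
    X, Y, U, V = [], [], [], []
    for y in range(0, len(vx), step):
        for x in range(0, len(vx[0]), step):
            X.append(x)
            Y.append(y)
            U.append(vx[y][x])
            V.append(vy[y][x])
    return X, Y, U, V
```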
The second scene below shows a more typical navigation scenario: the camera moving at walking speed down a hallway. Observe that objects and walls closer to the camera exhibit greater optical flow than those farther away. For navigation, the system would simply need to move away from the locations with the largest optical flow. The direction of motion can also be determined from the optical flow: in general, the vectors point away from the center of the image, meaning the camera is moving forward through the scene.
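The "move away from the largest flow" rule can be illustrated with a toy steering function. This is not part of the actual design; the left/right split and mean-magnitude comparison are simplifications for illustration:

```python
import math

def steer_direction(vx, vy):
    """Toy navigation rule: steer away from the image half with the larger
    mean flow magnitude, since larger flow suggests closer obstacles.

    vx, vy: lists of rows holding the horizontal/vertical flow components.
    Returns "left" or "right".
    """
    w = len(vx[0])
    left = right = 0.0
    n_left = n_right = 0
    for row_x, row_y in zip(vx, vy):
        for x, (u, v) in enumerate(zip(row_x, row_y)):
            mag = math.hypot(u, v)
            if x < w // 2:
                left += mag
                n_left += 1
            else:
                right += mag
                n_right += 1
    # Turn away from the side with more apparent motion.
    return "left" if right / n_right > left / n_left else "right"
```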
To calculate the achieved frame rate of our optical flow design, we first ran a simulation at a time scale 100× slower than our minimum clock period of 10.5 ns. Running on a 256×100-pixel image, processing each frame takes 390,069 ns, which yields a frame rate of 2,563 frames per second — much faster than necessary for navigation. Since larger images simply require processing more windows, we can also calculate the achievable frame rate at any given resolution. Results for a few of these calculations are shown in the table below.
|Resolution|Achieved Frame Rate (fps)|
|---|---|
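The entries in the table can be estimated from the measured 256×100 figure, assuming processing time grows linearly with pixel count (as the windows argument above suggests). A Python sketch of that extrapolation:

```python
# Extrapolate frame rate from the measured 256x100 case, assuming
# processing time scales linearly with the number of pixels (windows).
BASE_PIXELS = 256 * 100
BASE_FRAME_NS = 390_069  # measured simulation time per frame

def frame_rate(width, height):
    """Estimated frames per second at a given resolution."""
    frame_ns = BASE_FRAME_NS * (width * height) / BASE_PIXELS
    return 1e9 / frame_ns
```

For example, `frame_rate(256, 100)` recovers the measured ~2,563 fps, and the same function gives the estimate for any other resolution, including 1080p.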
These achieved frame rates are more than enough for making navigation decisions, even at full 1080p HD. At the maximum frame rate, we need an input memory bandwidth of 190 MB/s and an output memory bandwidth of 330 MB/s. These throughputs are achievable with modern storage devices, but in practice would never be required, since camera frame rates tend to be in the range of 24 to 120 frames per second.
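The bandwidth figures follow from pixel throughput multiplied by the size of each per-pixel transfer. A sketch of that arithmetic; the bytes-per-pixel argument is an assumption for illustration and the exact 190/330 MB/s figures depend on the design's actual input and output word sizes:

```python
# Bandwidth at the maximum achieved frame rate on the 256x100 test image.
FPS = 2563           # measured maximum frame rate
PIXELS = 256 * 100   # pixels per frame

def bandwidth_mb_s(bytes_per_pixel):
    """Memory bandwidth in MB/s for a given per-pixel payload size."""
    return FPS * PIXELS * bytes_per_pixel / 1e6
```

At this frame rate the pixel throughput alone is about 65.6 Mpixel/s, so even a few bytes per pixel puts the totals in the hundreds of MB/s, consistent with the figures above.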
For those interested, the full VHDL code for this project can be found here: Code