Despite our success with the Projectile Launcher, there are several improvements that would make the system better and fully autonomous.
- Adding a second camera for depth perception
- The launcher currently has a single camera, so it cannot get a good sense of depth. Humans have depth perception because they have two eyes; similarly, the launcher needs two cameras to determine the distance to the wall autonomously.
- The user currently has to enter the distance manually and can only choose 5, 10, or 15 yards. With two cameras, the launcher could be used at any distance its hardware allows, as sketched below.
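
A minimal sketch of how the stereo-depth calculation might look, assuming two rectified Raspberry Pi camera feeds. The focal length and baseline values below are placeholders for whatever a real calibration would produce, not numbers from our build:

```python
import cv2
import numpy as np

FOCAL_LENGTH_PX = 1250.0   # assumed focal length in pixels (from calibration)
BASELINE_M = 0.10          # assumed spacing between the two cameras, in meters

def distance_to_wall(left_gray, right_gray):
    """Estimate the distance (meters) to the wall from a rectified grayscale stereo pair."""
    # Block matching returns disparity in 1/16-pixel units (OpenCV convention).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Use the median of valid disparities so stray mismatches don't dominate.
    valid = disparity[disparity > 0]
    if valid.size == 0:
        return None  # no reliable match; fall back to manual distance entry
    d = np.median(valid)

    # Standard pinhole stereo relation: depth = focal length * baseline / disparity.
    return FOCAL_LENGTH_PX * BASELINE_M / d
```

At startup, the Pi could call `distance_to_wall` once and feed the result into the existing distance-dependent launch calculation, removing the 5/10/15-yard restriction.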
- Video processing for autonomous adjustment
- Currently, aiming corrections depend entirely on the user: if the projectile misses, the user has to note where it missed and then move the launcher a degree up, down, left, or right, or some combination of those.
- Video processing would let us track the projectile in the air after it is fired, so the Raspberry Pi could see where it hit. If the target was hit, the system could automatically move on to the next target; if it was missed, the system could make the necessary adjustments without any input from the user, as sketched below.
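
A rough sketch of what the correction step could look like, assuming the impact point shows up as the biggest frame-to-frame change around the moment of impact. The functions `find_impact_point` and `correction_degrees`, as well as the field-of-view constants, are illustrative and not part of the existing code:

```python
import cv2

HFOV_DEG = 62.2   # assumed horizontal field of view of the Pi camera
VFOV_DEG = 48.8   # assumed vertical field of view

def find_impact_point(before, after):
    """Locate the impact as the largest change between two grayscale frames."""
    diff = cv2.absdiff(before, after)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x + w // 2, y + h // 2)

def correction_degrees(impact, target, frame_w, frame_h):
    """Convert the pixel miss distance into launcher adjustments in degrees."""
    dx_deg = (target[0] - impact[0]) * HFOV_DEG / frame_w
    dy_deg = (target[1] - impact[1]) * VFOV_DEG / frame_h
    return dx_deg, dy_deg
```

Dividing the correction by the launcher's one-degree step size would give the number of steps to command on each axis, replacing the manual nudging described above.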
- Machine learning to replace square finding
- Since the code was written modularly, the function that finds blue squares and returns their centers can easily be replaced by a machine learning model that tracks more complex objects.
- The targets could be anything the model has been trained on, such as windows, cars, or people.
- The machine learning code works as a drop-in substitute as long as the function returns the pixel coordinates of whatever the model is tracking, as sketched below.
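
A sketch of what the drop-in replacement might look like using OpenCV's DNN module with a pretrained MobileNet-SSD detector. The model filenames, class ID, and `find_targets` name are assumptions for illustration; the key point is that the function returns pixel centers exactly as the blue-square finder does:

```python
import cv2

# Pretrained MobileNet-SSD model files (illustrative; any detector works
# as long as the function still returns pixel centers).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
TARGET_CLASS_ID = 15  # e.g. "person" in the MobileNet-SSD class list

def find_targets(frame, conf_threshold=0.5):
    """Return the pixel centers of detected targets, matching the square finder's output."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    centers = []
    for i in range(detections.shape[2]):
        conf = detections[0, 0, i, 2]
        if conf > conf_threshold and int(detections[0, 0, i, 1]) == TARGET_CLASS_ID:
            # Bounding box comes back normalized; scale to pixel coordinates.
            x1, y1, x2, y2 = detections[0, 0, i, 3:7] * [w, h, w, h]
            centers.append((int((x1 + x2) / 2), int((y1 + y2) / 2)))
    return centers
```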