Conclusions

We have created and verified the feasibility of multiple implementations of growth transform neural networks. The first implementation, which processes the neural network on an FPGA, matches the accuracy of MATLAB and consumes roughly 1,000 times less power than traditional computer simulations. Although it is faster than MATLAB when normalized for clock frequency, on an absolute scale the FPGA is orders of magnitude slower. The FPGA implementation is also portable and designed with future scalability in mind.

The second implementation shows that the Raspberry Pi is a step toward a portable platform for growth transform neural networks. The accuracy of our C implementation corroborates the feasibility of the growth transform neuron model and demonstrates that it can run on multiple platforms. Additionally, our results show that the growth transform model is not excessively computationally expensive, since it can be simulated on a Raspberry Pi, which lacks the processing power of a mass-market computer.

We hope our work serves as a stepping stone for future research in this area. We believe the growth transform neural network is a promising innovation that could lead to improved machine learning algorithms and offers potential for interfacing with biological systems.