Because traditional computer architectures, constrained by the von Neumann bottleneck, cannot deliver adequate speed or power efficiency on machine learning tasks, hardware optimizations are now necessary to meet the growing demands of machine learning algorithms. Neural networks—architectures that emulate biological neurons and the human brain—are a promising approach to meeting these demands. At Washington University in St. Louis, researchers have developed a neural network known as the “growth transform network”, which performs its computations using growth transform neurons.
Although this novel network has been simulated in MATLAB, it has not yet been implemented in hardware or in lower-level programming languages, which limits its use. We design a hardware implementation of the growth transform network to demonstrate the network's ability to solve machine learning problems. A Xilinx Spartan-6 FPGA implements the neural network in hardware, while a Raspberry Pi 3 simulates the network in software and visualizes the results on a monitor. To test our design, we compare the output of our implementations to the output of the pre-existing MATLAB simulations given identical input data.
The outputs of our implementations match those of the MATLAB simulation, corroborating the feasibility of the growth transform neuron model and of a hardware implementation. Both the hardware and software implementations consume less power than the baseline and show promising computational speed.