Introduction

In the Adaptive Integrated Microsystems Laboratory (AIMLab) at Washington University in St. Louis, PhD candidate Ahana Gangopadhyay has been working with Professor Shantanu Chakrabartty to design a novel neural network model for machine learning classification problems. Together they have developed a “growth transform network,” which is characterized by mirrored neuron pairs. This design is distinct from the standard McCulloch-Pitts neuron model, as shown in the figure below, and aims to provide a more realistic emulation of the human brain. The lab has implemented a simulation of the growth transform network in MATLAB on a traditional PC platform and tested its ability to solve machine learning classification problems. This initial software implementation provides a baseline for our work.

Figure: McCulloch-Pitts neuron model (http://wwwold.ece.utep.edu/) and Growth Transform neuron model (Gangopadhyay). In the McCulloch-Pitts model, each neuron applies an activation function, usually nonlinear, to a linear combination of its inputs. The Growth Transform model takes a different approach: each neuron has a mirrored partner, and its state evolves according to a first-order update. Its activation function is a thresholded spiking function that mimics the biologically inspired integrate-and-fire model.
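
To make the caption's contrast concrete, the following minimal C sketch evaluates a McCulloch-Pitts style neuron: an activation function applied to a linear combination of the inputs. The function and variable names are illustrative and are not taken from the lab's codebase.

    #include <stdio.h>
    #include <stddef.h>

    /* Step activation: one choice of the "usually nonlinear" function
       applied to the weighted sum; a hard threshold is used here for
       simplicity. */
    static double step_activation(double x)
    {
        return (x >= 0.0) ? 1.0 : 0.0;
    }

    /* McCulloch-Pitts style neuron: activation of a linear combination
       of the inputs plus a bias term. */
    static double mp_neuron(const double *inputs, const double *weights,
                            size_t n, double bias)
    {
        double sum = bias;
        for (size_t i = 0; i < n; i++)
            sum += weights[i] * inputs[i];   /* linear combination */
        return step_activation(sum);
    }

    int main(void)
    {
        const double x[2] = {1.0, 0.0};
        const double w[2] = {0.7, 0.7};
        /* 0.7*1.0 + 0.7*0.0 - 0.5 = 0.2 >= 0, so the neuron fires (1.0) */
        printf("output: %g\n", mp_neuron(x, w, 2, -0.5));
        return 0;
    }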

The problem with this initial baseline is that the software network is limited in computational speed and consumes considerable power. To overcome the restrictions imposed by running the existing software on a traditional PC, it is necessary to create a hardware implementation of the growth transform network. Specialized hardware offers greater speed and lower power consumption than the current software implementation, with the added benefit of portability. Only after the network has been proven to work in hardware can it be used in a multitude of machine learning applications.

We addressed this problem by designing two independent systems that demonstrate the feasibility of the growth transform network. The first system uses a Xilinx Spartan-6 FPGA to simulate neurons as they perform a classification task, outputting their spiking behavior to pins on the FPGA board, where it can be viewed with an oscilloscope. We compared the spiking output of our hardware-implemented network with that of the MATLAB-implemented network given the same input parameters (i.e., the same classification problem). We also designed our project to be scalable: multiple FPGAs can be added to the system to easily increase the number of neurons available for computation. Finally, we compared the speed and power consumption of our implementation against the MATLAB implementation, demonstrating that the FPGA outperforms the software implementation on this class of machine learning task.

The second system uses a Raspberry Pi 3 to simulate the growth transform network in software. We added this software implementation to our project after our original design was modified due to time constraints. We simulate the neurons in the C programming language and present their spiking behavior visually using custom plotting software. To view the spiking behavior, the Raspberry Pi 3 is interfaced with a video monitor that displays the algorithm at work as well as the classification accuracy after training has completed. As with the hardware implementation, we compared the speed and accuracy of this software network against the MATLAB implementation.
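
As a rough illustration of the kind of per-neuron computation such a C simulation carries out, the sketch below steps a single neuron through a first-order update with a thresholded spiking output, in the spirit of the integrate-and-fire behavior described in the figure caption. The update rule, constants, and reset behavior are simplified placeholders, not the lab's actual growth transform dynamics; it prints a time/spike trace of the sort that could be handed to plotting software.

    #include <stdio.h>

    #define N_STEPS   200
    #define ALPHA     0.1    /* first-order update rate (assumed value) */
    #define THRESHOLD 1.0    /* spiking threshold (assumed value) */

    int main(void)
    {
        double v = 0.0;        /* membrane potential */
        double drive = 1.2;    /* constant input drive (assumed value) */

        for (int t = 0; t < N_STEPS; t++) {
            v += ALPHA * (drive - v);       /* first-order update toward the drive */
            int spike = (v >= THRESHOLD);   /* thresholded spiking output */
            if (spike)
                v = 0.0;                    /* reset after a spike, integrate-and-fire style */
            printf("%d %d\n", t, spike);    /* time step, spike (0 or 1) */
        }
        return 0;
    }

With these constants the potential climbs toward the drive, crosses the threshold roughly every 17 steps, and resets, producing the periodic spike train that the plotting software would render.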