Methods

The overall block diagram of both implementations is shown in the figure below.

Project Block Diagram. The FPGA and Raspberry Pi are separate from each other, and each has its own growth transform neural network implementation. The FPGA has initialization data pre-programmed onto it and presents spiking outputs on the pins of the FPGA board, which are viewed with an oscilloscope. The Raspberry Pi 3 uses C to generate spiking information from its simulated neural network and Python to render that spiking information graphically on the HDMI monitor.

 

FPGA Design

The Xilinx Spartan-6 FPGA implements the neural network in hardware and is coded in VHDL using the Xilinx ISE Design Tools. The FPGA represents each neuron as binary bits stored in flip-flops, with connections between each neuron via wires. We used the growth transform neuron model as outlined in AIMLab’s previous paper [Gangopadhyay and Chakrabartty]. Their model dictates how the neural network evolves in time to learn to classify its input data.

Xilinx Spartan-6 FPGA (XC6SLX-45)

Combinational logic performed the arithmetic needed to update the neuron binary values as the network solved the classification problem. Each neuron's binary value depends on every other neuron in the network, so computing a new value for one neuron requires reading the current values of all the others. This all-to-all dependence, which takes the form of a matrix multiplication, proved a significant design challenge that our initial understanding of the project had not anticipated: a full matrix multiplication is difficult to implement efficiently in hardware.
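The all-to-all dependence can be sketched in C as a matrix-vector product: producing one neuron's next value reads the current value of every neuron. The weight matrix `q`, state vector `x`, and network size are illustrative placeholders, not the actual synaptic weights or fixed-point representation used on the FPGA.

```c
#include <stddef.h>

/* One update step reads ALL neuron values to produce each new value:
 * y = Q * x. This is the matrix multiplication the FPGA must compute
 * every iteration. (Illustrative floating-point sketch; the hardware
 * operates on binary fixed-point values.) */
void matvec(size_t n, const double q[n][n], const double x[n], double y[n])
{
    for (size_t i = 0; i < n; i++) {
        double acc = 0.0;
        for (size_t j = 0; j < n; j++)   /* depends on every neuron j */
            acc += q[i][j] * x[j];
        y[i] = acc;
    }
}
```

The nested loop makes the cost explicit: for N neurons, each iteration performs N×N multiply-accumulates, which is why the hardware implementation required a dedicated matrix multiplication core.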

Finite State Machines (FSMs) written in VHDL generated the control signals for the FPGA implementation. The high-level algorithm was as follows: load values into the matrix multiplication core, perform the matrix multiplication, normalize the output data, and determine which neurons spiked. This process was repeated for a specified number of iterations, and the output spiking data was available on sixteen output pins of the FPGA board.
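The control sequence above can be sketched as a state-transition function. The state names and the iteration counter here are illustrative stand-ins for the actual VHDL control signals, which sequence hardware cores rather than function calls.

```c
/* Sketch of the control flow the VHDL FSMs implement: load -> multiply
 * -> normalize -> detect spikes, looping until the iteration budget is
 * exhausted. (State names are illustrative, not from the VHDL source.) */
enum state { LOAD, MULTIPLY, NORMALIZE, DETECT, DONE };

enum state next_state(enum state s, int iters_left)
{
    switch (s) {
    case LOAD:      return MULTIPLY;                     /* fill the core */
    case MULTIPLY:  return NORMALIZE;                    /* Q * x done    */
    case NORMALIZE: return DETECT;                       /* bound values  */
    case DETECT:    return iters_left > 0 ? LOAD : DONE; /* loop or stop  */
    default:        return DONE;
    }
}
```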

We originally intended to transfer data between the FPGA and Raspberry Pi over a USB 2.0 connection, which offers a high bit rate and is supported by both devices. Due to time constraints, however, we were unable to complete this aspect of the project; it remains an area for future work.

Software (Raspberry Pi) Design

For our secondary neural network implementation, we chose a Raspberry Pi 3. It is relatively inexpensive and well documented, and it conveniently provides an HDMI port for driving the monitor as well as Ethernet and USB 2.0 ports for future communication with the FPGA. The Raspberry Pi has an ARM processor running Linux; we used C and Python as our programming languages.

Raspberry Pi 3

The Raspberry Pi C implementation follows the same procedure as the MATLAB implementation: matrix multiplication, normalization, and spike detection. It also displays spiking activity on the monitor in two ways: a raster plot of spikes over time, and the classification boundary of the data set.
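A minimal sketch of the normalization and spike-detection steps that follow each matrix multiplication is given below. The L1 normalization and the fixed spike threshold are assumptions chosen for illustration, not the exact growth-transform update rule from the paper; the per-iteration spike flags are what the raster plot accumulates.

```c
#include <math.h>
#include <stddef.h>

/* Normalize the neuron values and flag which neurons spiked this
 * iteration. The L1 normalization and fixed threshold are illustrative
 * stand-ins for the actual growth-transform update. */
void normalize_and_detect(size_t n, double x[n], int spiked[n],
                          double threshold)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += fabs(x[i]);
    for (size_t i = 0; i < n; i++) {
        if (sum > 0.0)
            x[i] /= sum;                  /* keep values bounded */
        spiked[i] = x[i] > threshold;     /* 1 = spike event for the raster */
    }
}
```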

In addition to the FPGA board and Raspberry Pi, our technical approach required two AC power adapters (one per board); a 32 GB microSD card to store the Raspberry Pi's operating system and data processing programs; and an HDMI cable to connect the Raspberry Pi to an external monitor.

Data Acquisition

The dataset used is a publicly available heart disease dataset from the Cleveland Clinic Foundation (https://archive.ics.uci.edu/ml/datasets/Heart+Disease) that consists of 13 different attributes of 297 patients’ (anonymized) medical information, including age, sex, cholesterol level, and resting heart rate. We prepared this dataset for our use, selecting 80% of the dataset (238 samples) for training purposes; the remaining 20% of the dataset was reserved for verification. Additional datasets for classification problems can be obtained from the UC Irvine Machine Learning Repository (http://archive.ics.uci.edu/ml/).
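The 80/20 split described above can be sketched as follows. The shuffle with a fixed seed is an assumption for reproducibility; the source does not specify how the 238 training samples were actually selected.

```c
#include <stddef.h>

/* Split 297 sample indices into 238 training (~80%) and 59 verification
 * indices. The Fisher-Yates shuffle with a fixed LCG seed is an
 * illustrative assumption, not the project's actual preprocessing. */
#define N_SAMPLES 297
#define N_TRAIN   238

void split_indices(size_t train[N_TRAIN], size_t verify[N_SAMPLES - N_TRAIN])
{
    size_t idx[N_SAMPLES];
    for (size_t i = 0; i < N_SAMPLES; i++)
        idx[i] = i;

    unsigned long r = 12345;                 /* fixed seed: reproducible */
    for (size_t i = N_SAMPLES - 1; i > 0; i--) {
        r = r * 6364136223846793005UL + 1442695040888963407UL;
        size_t j = r % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    for (size_t i = 0; i < N_TRAIN; i++)
        train[i] = idx[i];                   /* first 238 -> training */
    for (size_t i = N_TRAIN; i < N_SAMPLES; i++)
        verify[i - N_TRAIN] = idx[i];        /* last 59 -> verification */
}
```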