The goal of this project was to develop a brain-computer interface (BCI) capable of flying a drone in response to commands communicated solely through motor imagery. Using a 32-channel electroencephalogram (EEG) headset developed by g.tec, brain activity recorded from each team member was passed through a pipeline that pre-processed the data, extracted features, classified those features, and generated commands that were sent to the drone controller, which flew the drone using state feedback and a Wiener filter. Originally, the pipeline was planned to support real-time control of the drone; however, due to the expense of the required software, it operates on pre-recorded data instead. Pre-processing applied a bandpass filter between 1 and 30 Hz, a Laplacian montage, and the Infomax Independent Component Analysis (ICA) algorithm. For feature extraction, the Power Spectral Density (PSD) of the C3 and C4 channels was computed by implementing a Fast Fourier Transform (FFT) over a 1-second window, split into 1 Hz frequency bands between 1 and 30 Hz; the bands between 8 and 13 Hz were then selected as the feature vector. Lastly, a support vector machine (SVM) with a Gaussian kernel classified the feature vector. Using pre-recorded data, the drone executed the desired commands with 72.3% accuracy.
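The classification stage described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the sampling rate, trial shapes, and labels are assumed, Welch's method stands in for the plain windowed-FFT PSD described in the report, and scikit-learn's RBF kernel corresponds to the Gaussian-kernel SVM.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC

FS = 256  # assumed sampling rate in Hz (not stated in the report)

def bandpass(trial, low=1.0, high=30.0, fs=FS, order=4):
    """Zero-phase 1-30 Hz bandpass, as in the pre-processing step."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, trial)

def psd_features(trial, fs=FS):
    """PSD with a 1-second window (1 Hz resolution); keep the 8-13 Hz bands."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[..., band].reshape(-1)  # 6 bins per channel, C3 and C4 stacked

# Synthetic stand-in for pre-recorded data: trials of shape (C3/C4, samples)
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 2, 2 * FS))   # 40 two-second trials
y = rng.integers(0, 2, size=40)                # imagined movement labels

X = np.array([psd_features(bandpass(t)) for t in X_raw])
clf = SVC(kernel="rbf")                        # Gaussian-kernel SVM
clf.fit(X, y)
pred = clf.predict(X)
```

With a 1-second window the FFT bins fall on integer frequencies, so selecting 8–13 Hz inclusive yields 6 bins per channel, i.e. a 12-dimensional feature vector for C3 plus C4.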

Group Members

Eva Cheung: B.S. in Electrical Engineering
Jennifer Fleites: B.S. in Electrical Engineering & Systems Science Engineering
John Harry Wagner: B.S. in Systems Science Engineering


Professor ShiNung Ching: Electrical and Systems Engineering Department
Professor Dorothy Wang: Electrical and Systems Engineering Department