To get real-time data out of the Muse headband, we first need to familiarize ourselves with the device. The Muse headband has four EEG channels, named “FP1”, “FP2”, “TP9”, and “TP10”, and three accelerometer channels, named “ACC1”, “ACC2”, and “ACC3”. Since the raw EEG data arrive at 220 Hz and the accelerometer data at 50 Hz, even a short recording produces a large amount of data (in our design, we collect over 5000 samples per channel within 30 seconds). We therefore believe that MATLAB, whose core strength is matrix manipulation, is the most suitable programming language for this project.
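As a quick sanity check on these rates, the expected per-channel sample counts for a 30-second run can be computed as follows (a Python sketch for illustration; the project code itself is written in MATLAB):

```python
# Per-channel sample counts for a 30 s recording at the stated rates.
EEG_RATE_HZ = 220  # raw EEG sampling rate
ACC_RATE_HZ = 50   # accelerometer sampling rate
DURATION_S = 30

eeg_per_channel = EEG_RATE_HZ * DURATION_S  # samples per EEG channel
acc_per_channel = ACC_RATE_HZ * DURATION_S  # samples per accelerometer channel
print(eeg_per_channel, acc_per_channel)  # 6600 1500
```

At 6600 EEG samples per channel, the "over 5000 samples within 30 seconds" figure above is comfortably met by the EEG stream alone.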
At the same time, we also focus on how to stream live data into a PC and convert it to a usable format. Because the easiest way to get data out of the headband is through the provided developer kit, we choose Muse-io, a driver that extracts real-time data from the Muse headband over Open Sound Control (OSC). To bring the data from Muse-io into MATLAB, we first launch Muse-io from MATLAB and then set up a Transmission Control Protocol (TCP) connection between Muse-io and MATLAB. After reading the size of the incoming OSC data and saving the bytes into a matrix, we process and categorize them with a sub-function, splitOscMessage. In other words, we identify whether a given span of OSC bytes belongs to a path message, a tag message, or the actual data. The path string comes first and is null-terminated (x00), so in every loop we find the first zero byte and assign everything before it to the path category. The path is followed by a type-tag string that begins with a comma (ASCII 44, x2C), so we locate the tag message by this comma byte. Finally, the contents after the tag message are the EEG/accelerometer data we want. Once we have the raw data, another sub-function, oscFormat, converts the identified OSC data into a format readable in MATLAB.
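The byte-level splitting described above can be sketched as follows. This is an illustrative Python version of the parsing logic (the actual splitOscMessage sub-function is written in MATLAB), assuming messages carry only float32 ('f') arguments, as the Muse EEG/accelerometer messages do:

```python
import struct

def split_osc_message(msg: bytes):
    """Split one raw OSC message into (path, type_tags, values).

    A sketch under the assumption that all arguments are big-endian
    float32 ('f'), which holds for Muse EEG/accelerometer messages.
    """
    # The address path is a null-terminated string, padded with nulls
    # to a multiple of 4 bytes.
    end = msg.index(b"\x00")
    path = msg[:end].decode("ascii")
    pos = (end + 4) & ~3  # skip to the next 4-byte boundary

    # The type-tag string starts with a comma (ASCII 44) and is also
    # null-terminated and padded to a 4-byte boundary.
    assert msg[pos:pos + 1] == b","
    tag_end = msg.index(b"\x00", pos)
    tags = msg[pos + 1:tag_end].decode("ascii")
    pos = (tag_end + 4) & ~3

    # Everything after the tag string is the actual data: one
    # big-endian float32 per 'f' tag.
    values = [struct.unpack_from(">f", msg, pos + 4 * i)[0]
              for i, t in enumerate(tags) if t == "f"]
    return path, tags, values
```

For example, a message with path "/muse/eeg", tags ",ffff", and four float values parses into those three components directly.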
Since this project only looks at the EEG and accelerometer channels, we can easily route each sample to either the EEG array or the accelerometer array based on the tag messages. For plotting, we update these data arrays in every loop. Meanwhile, we also save all EEG data and all accelerometer data in two arrays for the fast Fourier transform (FFT) and for synchronization with the sound signal.
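A minimal sketch of this routing step (again in Python for illustration; the project code is MATLAB, and the paths shown are the Muse-io defaults):

```python
# Route each parsed OSC message into the matching buffer. The full
# history kept in these lists is what later feeds the FFT and the
# sound-signal synchronization.
eeg_data = []  # one 4-value row per EEG message
acc_data = []  # one 3-value row per accelerometer message

def dispatch(path, values):
    """Append values to the EEG or accelerometer buffer by path."""
    if "eeg" in path:
        eeg_data.append(values)
    elif "acc" in path:
        acc_data.append(values)
```

Appending whole rows keeps each buffer a simple sample-by-channel matrix, which maps directly onto MATLAB's matrix operations.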
To synchronize the sound input with the EEG data, we first need to record all sound information. After considering several strategies, we settle on the simplest one: we start the sound recording before starting the EEG listener, and at every EEG data point we record the index of the current audio sample in a separate array. This index array gives us a one-to-one mapping from the sound data to the EEG data over the EEG recording period.
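An idealized sketch of the resulting mapping (in the real system the audio index is read at runtime as each EEG sample arrives; this Python version assumes both streams start at the same instant, and the 44.1 kHz audio rate is an illustrative assumption, not a value from the project):

```python
AUDIO_RATE_HZ = 44100  # assumed sound-card sampling rate (illustrative)
EEG_RATE_HZ = 220      # raw EEG sampling rate

def build_index_map(n_eeg_samples):
    """For each EEG sample, the index of the audio sample recorded
    at (approximately) the same instant, assuming a common start."""
    return [round(i * AUDIO_RATE_HZ / EEG_RATE_HZ)
            for i in range(n_eeg_samples)]
```

Each EEG sample thus points at roughly every 200th audio sample, which is the one-to-one mapping used for synchronization.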
We should mention that we initially tried to stream the raw EEG data directly to a Raspberry Pi 2. This failed because we could not establish the connection between the Muse headband and the Raspberry Pi 2. Specifically, the way we extract OSC data from the Muse headband depends on Muse-io, which is part of the Muse SDK; unfortunately, the SDK is not available for the Raspberry Pi 2's ARM architecture. One solution we tried was to use a virtual machine to emulate an x86 architecture on the Raspberry Pi 2; as an alternative, we also tried installing a Windows system on it. We eventually succeeded in running Muse-io on the Raspberry Pi 2. However, because we lacked sufficient knowledge of network communication, we failed to set up a TCP server on the Raspberry Pi 2 and were therefore unable to get the data into a common programming language.