Despite our success with simulated data, our algorithm performed noticeably worse on real data recorded from the microphones. At least some of this disparity can be attributed to the equipment we used. Our sound sources were pure tones generated by a free iPhone application that was by no means industrial-quality software. When we analyzed the received microphone signals with FFTs, we found a 5-10 Hz discrepancy between the frequency the app reported sending and the frequency actually present in the recorded data.
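The frequency check above amounts to locating the peak of the magnitude spectrum. A minimal sketch of that check, written in Python/NumPy rather than MATLAB for illustration (the 1000 Hz nominal tone and 7 Hz offset are assumed example values, not our measurements):

```python
import numpy as np

def dominant_frequency(x, fs):
    """Estimate the dominant frequency of a real signal from its FFT peak."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 44100                         # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)   # 1 s window -> 1 Hz bin spacing
# A nominally 1000 Hz tone that is actually 7 Hz high, mimicking the
# kind of offset we saw between the app's setting and the recording
x = np.sin(2 * np.pi * 1007.0 * t)
print(dominant_frequency(x, fs))   # -> 1007.0, not 1000.0
```

With a one-second window the FFT bins are spaced 1 Hz apart, which is fine enough to resolve a 5-10 Hz offset.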
Within our algorithm itself, one issue we know caused unwanted destructive interference was how time delays were applied to the real microphone data. With simulated data, we could phase-shift the pure tones by an exact number of radians simply by inserting the phase shift into the sine/cosine command in MATLAB. With real data this was not possible: a sampled signal cannot be shifted by a non-integer number of samples in the time domain, so we had to round each delay to an integer number of samples. The resulting quantization error left the channels slightly misaligned in phase.
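The gap between the two approaches can be sketched as follows (Python/NumPy for illustration; the 440 Hz tone, 8 kHz sample rate, and 0.3 rad target shift are assumed example values). The residual phase error after rounding is what causes the imperfect alignment:

```python
import numpy as np

fs = 8000.0                        # sample rate (Hz), assumed
f = 440.0                          # tone frequency (Hz), assumed
t = np.arange(0, 0.05, 1.0 / fs)

# Simulated data: insert the exact phase directly into the sine command
phase = 0.3                        # desired shift in radians
ideal = np.sin(2 * np.pi * f * t + phase)

# Real data: the shift must be rounded to a whole number of samples
exact_samples = phase / (2 * np.pi * f) * fs   # ~0.87 samples here
delay_samples = int(round(exact_samples))      # -> 1 sample
x = np.sin(2 * np.pi * f * t)
shifted = np.roll(x, -delay_samples)           # circular shift, for illustration

# Phase error left over after quantizing the delay
residual_phase = phase - 2 * np.pi * f * delay_samples / fs
```

The residual error is bounded by half a sample of phase, `pi * f / fs` radians, which grows with the tone frequency.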
The other area of our project that did not perform as well as we had hoped was the GSC algorithm. The first problem was that the GSC output was regularly out of phase with the predicted output by pi radians, as shown in Figure 6.
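One simple way to detect such an inversion is to check the sign of the normalized correlation between the GSC output and the predicted output: it is near +1 when the signals are in phase and near -1 at a pi offset. A sketch with a synthetic stand-in for the real output (Python/NumPy; the signal parameters are assumed):

```python
import numpy as np

fs = 8000.0
f = 440.0
t = np.arange(0, 0.1, 1.0 / fs)
predicted = np.sin(2 * np.pi * f * t)
gsc_out = -predicted               # stand-in for an output inverted by pi

# Normalized inner product: +1 in phase, -1 when inverted by pi
rho = np.dot(gsc_out, predicted) / (
    np.linalg.norm(gsc_out) * np.linalg.norm(predicted))
inverted = rho < 0                 # True -> flip the sign of the output
```

When `inverted` is true the output can be corrected by multiplying it by -1 before comparison.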
Another major problem was that the adjustments our GSC algorithm made to the conventional delay-sum beamformer were insignificant. The side-lobe-cancelling signal, yk, had real and imaginary components that were extremely small, so the output barely differed from what conventional beamforming alone would have produced. Nonetheless, comparing Figure 6 to Figure 7 shows that our algorithms were still a significant improvement over the real-data results obtained without beamforming.
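To make "insignificant" concrete: the GSC output is the fixed delay-sum beam minus the cancelling signal, so when yk is tiny the relative change in the output is tiny in the same proportion. A sketch with synthetic data (Python/NumPy; the 1e-6 magnitude is illustrative, not our measured value):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Stand-in for the conventional delay-sum beamformer output
y_conventional = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# Side-lobe-cancelling signal with very small real and imaginary parts,
# mimicking what our adaptive path produced
y_k = 1e-6 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

y_gsc = y_conventional - y_k       # GSC output: fixed beam minus cancelling path
relative_change = (np.linalg.norm(y_gsc - y_conventional)
                   / np.linalg.norm(y_conventional))
# relative_change is on the order of 1e-6: effectively no adjustment
```

A relative change this small means the adaptive path contributed essentially nothing beyond the conventional beamformer.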