Research efforts in the lab are focused on quantifying aspects of brain and behavior relevant for complex sensory processing, such as speech sounds. Methods over the years have ranged from neurophysiology in animals and humans to brain training video games. Current efforts are largely focused on developing active machine learning methods to optimize individual inference for informative psychometric tests.

General Research Areas

Novel tests of perception and cognition

We are using modern principles of machine learning to improve upon longstanding behavioral test formats while also devising completely new ones. The new tests are substantially more efficient and more informative than current tests. As a result, more integrated, more ecologically valid, and more predictive tests of hearing, vision, and cognitive variables are steadily becoming available.

Active machine learning for medical inference

Precision medicine exploits large amounts of patient data to extract meaningful trends about disease and effective treatments. While effective, this approach has fundamental limitations, one being the need for large data sets encompassing many patients. In cases where disease complexity is high relative to the number of patients, as with rare diseases or even common diseases of high variability such as COVID-19, these methods may fall short. We develop individualized modeling procedures to inform diagnostic and therapeutic decisions about individuals even in the face of sparse data and/or high disease complexity.

Equitable personalized education tools

Education represents a long, directed process of induced neuroplasticity that is subject to optimization. Too many attempts to optimize it have relied upon group-level inference from ethnic majority populations that often applies poorly to minority students. In the worst cases, these inequities lead to systematic biases that permanently derail the educational opportunities of the most vulnerable students. We draw from our efforts designing individualized medical workup procedures to help individualize assessments and lessons for students.

Game-based auditory training

Cognitive training software provides exercises whose completion strengthens certain cognitive processes. We seek to develop listening training software in the form of compelling video games playable on smartphones that naturally encourage individuals to complete their auditory training. The goal of this work is to optimize the function of hearing assist devices such as hearing aids and cochlear implants, as well as to enable individuals with a newly corrected hearing deficit to learn to communicate effectively.

Brain repair via induced neural plasticity

The lab has investigated the principles behind “forward-engineering” novel brain function by rewiring native cortical brain networks to implement new algorithms. Following brain injury such as a stroke, some function is lost and the brain network is pathologically disrupted. The principles of system theory and neuroplasticity are applied toward developing brain-computer interfaces that can rewire brain networks using strategic neurostimulation and thus potentially recover the lost function. Collectively, this research represents a combination of neuroscientific and neuroengineering endeavors that have the potential to alleviate focal losses of nervous system function such as in stroke.

Encoding of complex sounds in the auditory system

One of the major thrusts of the laboratory is toward understanding how complex sounds such as species-specific vocalizations are represented robustly in the auditory system. Robust representation implies that behaviorally relevant acoustic features can be extracted under a variety of environmental conditions. In particular, we examine conditions of variable sound intensity, variations in temporal context of sounds and different mixtures of sounds. “Reverse-engineering” the neural encoding of sounds under these variable conditions, as opposed to the static experimental conditions typically studied, may lead to both improved understanding of normal auditory function in natural environments as well as improved engineering of devices intended to process relevant sounds. The latter include hearing aids, cochlear implants and computers capable of automatically recognizing speech.

Human language processing for brain-computer interfaces

Animal models of sound processing face inherent limitations when one attempts to explore the richness of human speech. In recent years functional magnetic resonance imaging (fMRI) has been used successfully to probe directly the human brain activity underlying speech and language tasks at high spatial resolution. This method suffers from low temporal resolution, however, so the dynamic nature of brain activity is mostly inaccessible to it. We use recording electrodes placed directly upon the brain, termed electrocorticography (ECoG), to examine rapidly evolving brain activity responsible for the processing of both simple and complex linguistic tasks. These experiments can lead to new insights into how dynamic, coordinated brain activity results in human speech processing. Additionally, these findings may ultimately enable individuals to control external devices by thinking particular words or phrases and having their brain activity decoded by a computer.

Ongoing Research Projects

Defining the field of computational audiology

Audiology has a long history of deploying advanced electronic devices and sophisticated signal processing algorithms for improving patient outcomes. The cochlear implant, for example, is a functional cure for deafness—the first home run in the field of neuroengineering. A new era is emerging with more data, more capable devices, and more informative algorithms. For this reason, audiology occupies an enviable position at the vanguard of 21st century medicine. Establishing the nature of the field of Computational Audiology prepares the way for technologically-enabled, patient-centered advances in audiology particularly and in healthcare more generally.

Performance and Potential of Machine Learning Audiometry. Barbour, D. L., & Wasmann, J.-W. (2021). The Hearing Journal, 74(3), 40–44.

Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age. Wasmann, J.-W. A., Lanting, C. P., Huinck, W. J., Mylanus, E. A. M., van der Laak, J. W. M., Govaerts, P. J., Swanepoel, D. W., Moore, D. R., & Barbour, D. L. (2021). Ear and Hearing, 42(6), 1499–1507.

Emerging Hearing Assessment Technologies For Patient Care. Wasmann, J.-W. & Barbour, D. L. (2021). The Hearing Journal, 74(3), 44–45.

Wasmann, J.W. (2019). Computational Audiology and the Series of VCCA Conferences. Computational Audiology. https://computationalaudiology.com/

The Promise of Computational Audiology. Nalley, C. (2021). The Hearing Journal, 74(12), 16.

Pure-tone audiometry via Bayesian active learning

Traditional psychometrics, upon which audiometry is based, has always approached detection threshold estimation by estimating the probability that someone will hear a tone delivered very near their threshold. This view of probability is inherently frequentist: probabilities of tone detection have always been estimated directly by sampling. We have developed a novel, purely Bayesian estimation procedure that is considerably more efficient than conventional methods because it does not rely upon sampling theory to generate its estimates. As a consequence, the time required for this new audiometric test is dramatically reduced.
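The contrast with frequentist fixed-level sampling can be illustrated with a minimal sketch. Everything below is hypothetical (a logistic psychometric function, a grid posterior over a single-frequency threshold, illustrative slope and lapse parameters) and is not the lab's published algorithm: each tone level is chosen to minimize the expected posterior entropy, and the posterior is updated by Bayes' rule.

```python
# Toy Bayesian active threshold estimation (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
levels = np.linspace(0, 80, 81)           # candidate tone levels (dB HL)
thetas = np.linspace(0, 80, 161)          # candidate thresholds
posterior = np.full(thetas.size, 1.0 / thetas.size)  # flat prior

def p_detect(level, theta, slope=0.5, guess=0.02, lapse=0.02):
    """Logistic psychometric function with guess and lapse rates."""
    core = 1.0 / (1.0 + np.exp(-slope * (level - theta)))
    return guess + (1.0 - guess - lapse) * core

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_level(posterior):
    """Pick the level whose outcome is expected to shrink entropy most."""
    best, best_h = None, np.inf
    for lv in levels:
        pd = p_detect(lv, thetas)
        p_yes = np.sum(posterior * pd)    # predictive prob of "heard"
        post_yes = posterior * pd
        post_no = posterior * (1 - pd)
        h = (p_yes * entropy(post_yes / post_yes.sum())
             + (1 - p_yes) * entropy(post_no / post_no.sum()))
        if h < best_h:
            best, best_h = lv, h
    return best

true_threshold = 35.0
for _ in range(15):                       # 15 adaptive trials
    lv = next_level(posterior)
    heard = rng.random() < p_detect(lv, true_threshold)
    like = p_detect(lv, thetas) if heard else 1 - p_detect(lv, thetas)
    posterior = posterior * like
    posterior /= posterior.sum()

estimate = np.sum(posterior * thetas)     # posterior mean threshold
```

Because each trial is placed where it is expected to be most informative, a usable threshold estimate emerges within a dozen or so trials rather than the many dozens a fixed-level procedure can require.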

Fast, Continuous Audiogram Estimation Using Machine Learning. Song, X. D.; Wallace, B. M.; Gardner, J. R.; Ledbetter, N. M.; Weinberger, K. Q.; and Barbour, D. L. Ear and Hearing, 36(6): e326–335. December 2015. 

Psychophysical detection testing with Bayesian active learning. Gardner, J. R.; Song, X.; Weinberger, K. Q.; Barbour, D. L.; and Cunningham, J. P. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, of UAI’15, pages 286–297, Amsterdam, Netherlands, July 2015. AUAI Press 

Bayesian Active Model Selection with an Application to Automated Audiometry. Gardner, J.; Malkomes, G.; Garnett, R.; Weinberger, K. Q.; Barbour, D. L.; and Cunningham, J. P. Advances in Neural Information Processing Systems, 28: 2386–2394. 2015. 

Bayesian active probabilistic classification for psychometric field estimation. Song, X. D.; Sukesan, K. A.; and Barbour, D. L. Attention, Perception & Psychophysics, 80(3): 798–812. April 2018. 

Online Machine Learning Audiometry. Barbour, D. L.; Howard, R. T.; Song, X. D.; Metzger, N.; Sukesan, K. A.; DiLorenzo, J. C.; Snyder, B. R. D.; Chen, J. Y.; Degen, E. A.; Buchbinder, J. M.; and Heisey, K. L. Ear and Hearing, 40(4): 918–926. August 2019. 

Concurrent Bilateral Audiometric Inference. Heisey, K. L.; Buchbinder, J. M.; and Barbour, D. L. Acta Acustica united with Acustica, 104(5): 762–765. September 2018. 

Conjoint psychometric field estimation for bilateral audiometry. Barbour, D. L.; DiLorenzo, J. C.; Sukesan, K. A.; Song, X. D.; Chen, J. Y.; Degen, E. A.; Heisey, K. L.; and Garnett, R. Behavior Research Methods, 51(3): 1271–1285. 2019. 

Dynamically Masked Audiograms With Machine Learning Audiometry. Heisey, K. L.; Walker, A. M.; Xie, K.; Abrams, J. M.; and Barbour, D. L. Ear and Hearing. June 2020. 

Accelerating Psychometric Screening Tests With Bayesian Active Differential Selection. Larsen, T. J.; Malkomes, G.; and Barbour, D. L.; arXiv:2002.01547 [cs, stat]. February 2020. arXiv: 2002.01547 

Accelerating Psychometric Screening Tests with Prior Information. Larsen, T.; Malkomes, G.; and Barbour, D. L. In Shaban-Nejad, A.; Michalowski, M.; and Buckeridge, D. L., editor(s), Explainable AI in Healthcare and Medicine: Building a Culture of Transparency and Accountability, of Studies in Computational Intelligence, pages 305–311. Springer International Publishing, Cham, 2021. 

Community hearing screens in South Africa

Many Africans live far from healthcare providers without access to timely and inexpensive diagnostic procedures. As a result, clinical disorders that are not painful or obvious often go undiagnosed. This is an acute unmet need in African health management, especially for audiological services. This project evaluates the usability, speed and effectiveness of a sophisticated smartphone-based hearing screen in community populations in South African townships. The method uses extremely low bandwidths compatible with 2G cellular networks. The proposed technology collects the minimal data necessary to arrive at an informed clinical referral or treatment plan. Anticipated efficiency gains over conventional procedures and successful mobile implementation will enable expanded community-based hearing healthcare for underserved populations in Africa.

Efficient visual function estimation

Visual fields are important as the only ground-truth measure of disease severity for such disorders as glaucoma and macular degeneration. Estimating visual fields is costly in terms of time and equipment, however, giving rise to the need for more efficient methods. We have applied active machine learning to visual field estimation and shown that it is at least as efficient as current methods while carrying all the advantages demonstrated for our similar tests of hearing. Similar approaches are being applied to improve visual contrast sensitivity function estimation.

Visual Field Estimation by Probabilistic Classification. Chesley, B. & Barbour, D. L. (2020). IEEE Journal of Biomedical and Health Informatics, 24(12), 3499–3506.

Generalizing precision medicine

The goal of precision medicine is to deliver “the right treatment to the right person at the right time in the right dose.” Progress toward this goal uses the empirical construction of evidence-based medicine, combined with new analytics, on ever-growing data sets. By far the primary source of data growth is the addition of more patients, which is fundamentally limited. Work in the lab has focused on generalizing the principles of precision medicine: allowing Big Data to drive decisions when available, while also accommodating theory and probabilistic models of individual patients. This framework should continue to make inferential advances even after most patients have been quantified.

Advanced Inferential Medicine℠. Barbour, D. L. Technical Report OSF Preprints, December 2017. 

Formal Idiographic Inference in Medicine. Barbour, D. L. JAMA otolaryngology– head & neck surgery, 144(6): 467–468. 2018. 

Precision medicine and the cursed dimensions. Barbour, D. L. NPJ digital medicine, 2: 4. 2019. 

Precision Clinical Trials: A Framework for Getting to Precision Medicine for Neurobehavioural Disorders. Lenze, E. J., Nicol, G. E., Barbour, D. L., Kannampallil, T., Wong, A. W. K., Piccirillo, J., Drysdale, A. T., Sylvester, C. M., Haddad, R., Miller, J. P., Low, C. A., Lenze, S. N., Freedland, K. E., & Rodebaugh, T. L. (2021). Journal of Psychiatry & Neuroscience: JPN, 46(1), E97–E110.

Optimized, equitable math education

Individualized inference tools drawn from our work on rational medical workups are being applied toward tracking the cognitive preparation of middle school math students for math lessons on any given school day. Optimal lessons are then selected for that day. Because the inference tools do not reference data from other students, these individualized methods are expected to lead to more equitable educational experiences and more effective learning.

Previous Research Projects

Neuronal population coding

Neurons work together in concert to encode sensory stimuli. New machine learning methods can quantify the high-dimensional representation of stimuli in the dynamic neuronal spiking traces. Many neurons are active at the onset of a stimulus, but most of their activity decays away. A variety of decoders can extract consistent stimulus information from both the onset and the sustained portions of this spiking. This finding implies that overall population rate is a good proxy for the speed of decoding. Furthermore, this kind of dense/sparse coding pattern enables populations of neurons to encode other, novel stimuli that may come later.
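The onset/sustained decoding claim can be illustrated with a toy simulation. The firing rates, tuning values, and nearest-centroid decoder below are all invented for illustration and are not drawn from the published analyses:

```python
# Toy population decoding from dense onset vs. sparse sustained windows.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 40

# Stimulus-specific tuning for two stimuli; rates scale with the window.
tuning = rng.uniform(0.5, 1.5, size=(2, n_neurons))
onset_gain, sustained_gain = 20.0, 5.0    # dense onset, sparser sustain

def simulate(window_gain):
    """Poisson spike counts for each stimulus, trial, and neuron."""
    X, y = [], []
    for stim in range(2):
        X.append(rng.poisson(window_gain * tuning[stim],
                             size=(n_trials, n_neurons)))
        y.append(np.full(n_trials, stim))
    return np.vstack(X), np.concatenate(y)

def nearest_centroid_accuracy(X, y):
    """Toy decoder: assign each trial to the closest class-mean response."""
    centroids = np.array([X[y == s].mean(axis=0) for s in range(2)])
    d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
    return float(np.mean(d.argmin(axis=1) == y))

acc_onset = nearest_centroid_accuracy(*simulate(onset_gain))
acc_sustained = nearest_centroid_accuracy(*simulate(sustained_gain))
```

In this toy model both windows support accurate decoding; the sustained window simply trades a lower overall rate for the same tuning structure, which echoes the idea that population rate relates to how quickly, not whether, a decoder can read out the stimulus.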

Rate, not selectivity, determines neuronal population coding accuracy in auditory cortex. Sun, W.; and Barbour, D. L. PLoS Biology, 15(11): e2002459. November 2017. 

Engaging and disengaging recurrent inhibition coincides with sensing and unsensing of a sensory stimulus. Saha, D.; Sun, W.; Li, C.; Nizampatnam, S.; Padovano, W.; Chen, Z.; Chen, A.; Altan, E.; Lo, R.; Barbour, D. L.;  and Raman, B. Nature Communications, 8: 15413. 2017. 

Population Responses Represent Vocalization Identity, Intensity, and Signal-to-Noise Ratio in Primary Auditory Cortex. Ni, R.; Bender, D. A.; and Barbour, D. L. bioRxiv, 2019.12.21.886101. December 2019. Publisher: Cold Spring Harbor Laboratory Section: New Results

Adaptive processes in auditory cortex

The auditory system is unique in that a large fraction of its neurons are tuned to respond best at a particular sound intensity: both louder and softer sounds relative to their best intensities result in a decreased response. This observation is 60 years old and has been widely interpreted to reflect a recoding of sounds in the brain, relative to the ear, that makes it easier for neurons to encode sounds at different intensities. In an extensive series of experiments we demonstrated for the first time that the intensity tuning of auditory neurons is strongly correlated with their short-term adaptive processes. The trends we discovered indicate that the strong inhibition active at higher sound intensities actually shields these neurons from the desensitization that usually accompanies intense stimuli. By not adapting much in response to loud sounds, these neurons remain more sensitive to softer sounds that follow immediately. This encoding process dramatically expands the overall dynamic range over which the auditory system can operate at short time scales and consequently enables robust encoding of real-world dynamic stimuli when the acoustic environment is relatively unstable or unpredictable.

Specialized neuronal adaptation for preserving input sensitivity. Watkins, P. V.; and Barbour, D. L.. Nature Neuroscience, 11(11): 1259–1261. November 2008. 

Level-tuned neurons in primary auditory cortex adapt differently to loud versus soft sounds. Watkins, P. V.; and Barbour, D. L. Cerebral Cortex (New York, N.Y.: 1991), 21(1): 178–190. January 2011. 

Representation of noisy vocalizations in auditory cortex

Vocalizations typically occur in and must be decoded from complex acoustic environments containing other competing sounds and environmental noise.  Biological auditory systems are expert at extracting usable information from such an environment, but engineered systems typically fail. Our studies of the neural encoding of noisy vocalizations have revealed a variety of individual neuronal responses to mixtures of these sounds. The population of neurons responds most accurately to the vocalizations, but some respond to everything and a few respond better to the noise. Linking basic neuronal response characteristics to the behavior of the same neurons in response to complex acoustics will elucidate the important features of the auditory system for real-world listening. Furthermore, the insights gained from this work may lead to improved engineered systems intended to process sounds with interference.

Contextual effects of noise on vocalization encoding in primary auditory cortex. Ni, R.; Bender, D. A.; Shanechi, A. M.; Gamble, J. R.; and Barbour, D. L. Journal of Neurophysiology, 117(2): 713–727. 2017. 

Population Responses Represent Vocalization Identity, Intensity, and Signal-to-Noise Ratio in Primary Auditory Cortex. Ni, R.; Bender, D. A.; and Barbour, D. L. bioRxiv, 2019.12.21.886101. December 2019. Publisher: Cold Spring Harbor Laboratory Section: New Results

Representation of sound intensity in the auditory system

The auditory system is unique in that a large fraction of its neurons are tuned to respond best at a particular sound intensity: both louder and softer sounds relative to their best intensities result in a decreased response. We have thoroughly documented the properties of these neurons in primary auditory cortex, finding that they are easily the most sensitive neurons (i.e., have the lowest response thresholds) of all central auditory neurons. Their best intensities are also strongly skewed toward lower sound intensities, further implying that they preferentially encode the softest sounds. How the responses of these neurons and others are combined to create robust encoding across the wide range of sound levels found in the environment is the subject of continuing investigation.

Rate-level responses in awake marmoset auditory cortex. Watkins, P. V.; and Barbour, D. L. Hearing Research, 275(1-2): 30–42. May 2011. 

Intensity-invariant coding in the auditory system. Barbour, D. L. Neuroscience and Biobehavioral Reviews, 35(10): 2064–2072. November 2011. 

Decoding sound level in the marmoset primary auditory cortex. Sun, W.; Marongelli, E. N.; Watkins, P. V.; and Barbour, D. L. Journal of Neurophysiology, 118(4): 2024–2033. 2017.

Hidden Hearing Loss: Mixed Effects of Compensatory Plasticity. Barbour, D. L. (2020). Current Biology, 30(23), R1433–R1436.

Feature mapping in cortical areas

While the cerebral cortex of the brain in higher mammals is a three-dimensional organ, one of its primary organizational features is a two-dimensional (2D) arrangement of neuronal collections called columns. Any particular cortical area has a 2D arrangement of neurons that tends to place neurons near one another that connect together, which is a feature that tends to keep the amount of interneuronal wiring relatively low. These “feature maps” can take on a wide variety of forms that often provide clues to neurophysiologists about the significance and internal organization of various neuronal response features. The maps of visual and somatosensory cortical areas have been extensively worked out over the years, while maps of auditory cortical areas have been more challenging to discern for unclear reasons. We pursued a series of modeling studies aimed at defining principles underlying functional mapping in auditory cortex and discovered that the reason functional maps are unclear and even variable in auditory cortex relates to the nature of sound encoding. Visual and somatosensory (touch) stimuli are inherently 2D (on the retina and body, respectively), while the cochlea of the ear fundamentally encodes only the frequency of sounds. The resulting 1D maps do not map so readily onto the 2D cortical surface. These maps are strongly influenced by individual variability in the shapes of the areas as well as brain development, more so than corresponding maps in other sensory areas. Understanding this distinction allows experimenters to improve their investigation of auditory cortical function with electrical recordings and functional imaging.
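One way to build intuition for why a 1D feature maps awkwardly onto a 2D sheet is a generic self-organizing map. This is not the lab's published model, and all parameters below are illustrative: a 1D stimulus feature (e.g., frequency) fit onto a 2D grid yields a locally smooth map that must fold, and the folds depend on random initialization, echoing the individual variability of auditory cortical maps.

```python
# Generic self-organizing map: a 1D feature on a 2D "cortical" sheet.
import numpy as np

rng = np.random.default_rng(2)
grid = 10                                 # 10 x 10 sheet of units
w = rng.random((grid, grid))              # preferred feature of each unit
ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

for t in range(3000):
    x = rng.random()                      # 1D stimulus feature in [0, 1]
    # Best-matching unit: the unit whose preference is closest to the input.
    bi, bj = np.unravel_index(np.argmin(np.abs(w - x)), w.shape)
    sigma = 3.0 * np.exp(-t / 1000.0)     # shrinking neighborhood radius
    lr = 0.5 * np.exp(-t / 1500.0)        # decaying learning rate
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma**2))
    w += lr * h * (x - w)                 # pull neighborhood toward input

smoothness = np.mean(np.abs(np.diff(w, axis=0)))  # neighbors agree locally
coverage = np.ptp(w)                              # feature range covered
```

The trained sheet is far smoother than its random initialization yet still covers the full feature range, so the 1D axis necessarily snakes across the 2D surface in an initialization-dependent way.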

A computational framework for topographies of cortical areas. Watkins, P.; Chen, T.; and Barbour, D. L. Biol Cybern, 100(3): 231–48. 2009. 

Theoretical limitations on functional imaging resolution in auditory cortex. Chen, T. L.; Watkins, P. V.; and Barbour, D. L. Brain Research, 1319: 175–189. March 2010. 

Evaluation of techniques used to estimate cortical feature maps. Katta, N.; Chen, T. L.; Watkins, P. V.; and Barbour, D. L. Journal of Neuroscience Methods, 202(1): 87–98. October 2011. 

Intensity-invariant coding in the auditory system. Barbour, D. L. Neuroscience and Biobehavioral Reviews, 35(10): 2064–2072. November 2011. 

Human speech decoding with a brain-computer interface

Electrocorticography (ECoG) recording electrodes placed directly upon the brain can reliably reveal rapidly evolving brain activity at reasonably high spatial resolutions. Using ECoG, we have recorded brain activity of human subjects performing speech perception and production tasks. The brain areas active during these tasks were consistent with findings of functional imaging studies. Because ECoG preserves more timing information than functional imaging studies can, the relative activation sequence of the brain areas involved in hearing and speaking can be extracted. While a very similar collection of brain areas is active in both of these tasks, their order of activation is essentially complementary. Extensions of this type of experiment will allow dynamic brain network configurations in a variety of tasks to be analyzed.

Nonuniform High-Gamma (60-500 Hz) Power Changes Dissociate Cognitive Task and Anatomy in Human Cortex. Gaona, C. M., Sharma, M., Freudenburg, Z. V., Breshears, J. D., Bundy, D. T., Roland, J., Barbour, D. L., Schalk, G., & Leuthardt, E. C. (2011). The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31(6), 2091–2100.

Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. Pei, X., Barbour, D. L., Leuthardt, E. C., & Schalk, G. (2011). Journal of Neural Engineering, 8(4), 046028.

Temporal Evolution of Gamma Activity in Human Cortex During an Overt and Covert Word Repetition Task. Leuthardt, E. C., Pei, X.-M., Breshears, J., Gaona, C., Sharma, M., Freudenberg, Z., Barbour, D. L., & Schalk, G. (2012). Frontiers in Human Neuroscience, 6, 99.

Towards a Speech BCI Using ECoG. Leuthardt, E. C., & Cunningham, J. (2013). In Barbour, D. L., C. Guger, B. Z. Allison, & G. Edlinger (Eds.), Brain-Computer Interface Research: A State-of-the-Art Summary (pp. 93–110). Springer.

Network effects of spike timing induced plasticity

A fundamental property of many neural networks, including the cerebral cortex, is that neurons active at the same time become more easily activated together in the future. This type of network modification appears to be instrumental in forming new memories and acquiring new skills. We are using computational and neurophysiological models to probe systematically the effects of synaptic “learning rules” upon small-, medium- and large-scale neural network behavior. We have observed systematic changes across these networks when small subnetworks are manipulated. By working out the rules governing such network modification, we anticipate developing novel techniques for making targeted changes to biological neural networks following injury, which can be critical to optimizing the functional repair of damaged neural tissue.
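A minimal sketch of one such learning rule is the classic pair-based spike-timing-dependent plasticity (STDP) update. The amplitudes and time constants below are generic textbook values, not parameters from the studies cited here:

```python
# Pair-based STDP: weight change depends on relative spike timing.
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:                                  # pre before post: potentiate
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)    # post before pre: depress

# Drive one synapse with repeated causal pairings (pre 10 ms before post)
# and another with anti-causal pairings, clipping weights to [0, 1].
w_causal, w_anticausal = 0.5, 0.5
for _ in range(100):
    w_causal = float(np.clip(w_causal + stdp_dw(+10.0), 0.0, 1.0))
    w_anticausal = float(np.clip(w_anticausal + stdp_dw(-10.0), 0.0, 1.0))
```

Repeated causal pairings saturate the weight at its upper bound while anti-causal pairings drive it to the lower bound; this elementary mechanism is what the network-level simulations build upon.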

Modeling of topology-dependent neural network plasticity induced by activity-dependent electrical stimulation. Ni, R.; Ledbetter, N. M.; and Barbour, D. L. International IEEE/EMBS Conference on Neural Engineering: [Proceedings]. International IEEE EMBS Conference on Neural Engineering, 831–834. 2013.

Spike-timing computation properties of a feed-forward neural network model. Sinha, D. B.; Ledbetter, N. M.; and Barbour, D. L. Frontiers in Computational Neuroscience, 8: 5. 2014. 

In Vitro Assay for the Detection of Network Connectivity in Embryonic Stem Cell-Derived Cultures. Gamble, J. R.; Zhang, E. T.; Iyer, N.; Sakiyama-Elbert, S.; and Barbour, D. L. bioRxiv, 377689. July 2018. Publisher: Cold Spring Harbor Laboratory Section: New Results

Directing new neuronal growth in the brain

Practical proposals do not exist to induce and maintain an artificial concentration gradient of a diffusible agent parallel to an organ surface without completely encapsulating the organ. All diffusible drug delivery systems rely upon diffusion of molecules from a high-concentration source to areas of lower concentration within an organ of interest. Short of cutting into an organ and inserting a drug delivery system, manipulating concentration gradients along organ boundaries does not seem possible. Yet creating such a concentration gradient of growth factors in the brain or spinal cord could be of profound practical value for inducing the extension of neural processes to repair damage. To achieve this result, we propose a novel drug delivery system termed discrete controlled release (DCR). DCR is achieved by creating multiple release points on small rods arranged in a grid that is inserted into the brain parenchyma. Proper adjustment of drug loading and release parameters should enable a consistent growth factor concentration gradient parallel to the brain’s surface within the confines of the grid, promoting neural process extension over longer distances than simple diffusion alone could achieve. As a result, options for designing neural repair mechanisms are expanded.
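A toy one-dimensional diffusion model can illustrate the DCR idea. The grid size, diffusivity, and release rates below are arbitrary and bear no relation to the published computational model: graded release rates at discrete points, with clearance at the boundaries, sustain a concentration that rises across the interior span of release points.

```python
# 1D diffusion with discrete release points (illustrative parameters).
import numpy as np

n, D, dx, dt = 101, 1.0, 1.0, 0.2       # grid points, diffusivity, steps
c = np.zeros(n)                         # concentration along gradient axis
sources = {20: 1.0, 40: 2.0, 60: 3.0, 80: 4.0}  # rod position -> rate

for _ in range(20000):                  # march toward steady state
    lap = np.zeros(n)
    lap[1:-1] = (c[:-2] - 2.0 * c[1:-1] + c[2:]) / dx**2
    c += dt * D * lap                   # explicit diffusion update
    for pos, rate in sources.items():
        c[pos] += dt * rate             # discrete controlled release points
    c[0] = c[-1] = 0.0                  # absorbing boundaries model clearance

# Graded loading yields a rising concentration across the interior rods.
rising_interior = bool(c[60] > c[40] > c[20] > 0.0)
```

Near the absorbing boundaries clearance pulls the profile down, so the edge rods would need further tuning of their release rates; this is exactly the kind of parameter adjustment the DCR design space allows.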

Designing in vivo concentration gradients with discrete controlled release: a computational model. Walker, E. Y.; and Barbour, D. L. Journal of Neural Engineering, 7(4): 046013. August 2010. 

Earlier projects