The Tavoni Laboratory investigates the fundamental principles underlying brain function, from sensing the environment to forming memories and making decisions. We focus on how information is represented and processed within brain networks to optimize behavior in dynamic settings. Our theories offer normative predictions about the organization and function of neural circuits across various systems, and we integrate these theories with biophysically inspired network models and neural data. Ultimately, cracking the principles of the neural code is crucial for understanding how these principles are disrupted in brain diseases.
Areas of focus in the lab include:
Context-dependent sensory coding and perceptual learning
Context has a powerful effect on the neuronal representation of sensory inputs. Accordingly, perception adapts to top-down information that is learned through experience, such as the salience or valence of external stimuli, and reflects sensory integration across different modalities. We use information-theoretic approaches, along with statistical and biophysical models, to unravel how sensory information is encoded in the brain and the mechanisms that underpin perceptual learning. Currently, the lab is focusing on the olfactory system, where sensory and contextual inputs are processed in a highly compact, integrated network, and where diverse plasticity mechanisms (including adult neurogenesis) adapt the neural code to changing environments. We are interested in identifying physical principles of context-dependent optimal coding in this system, and the function of different plasticity rules in forming dynamic representations. Our theories make predictions that can be tested in experiments and aim to establish a conceptual foundation for studying sensory coding associated with behavior.
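To give a flavor of the information-theoretic side of this work, the sketch below estimates the mutual information between a discrete stimulus and a discretized neural response from paired samples. It is a minimal illustration rather than our analysis pipeline; the toy data and the plug-in estimator (which is biased for small sample sizes) are simplifying assumptions.

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1)        # joint counts n(s, r)
    joint /= joint.sum()                       # empirical p(s, r)
    p_s = joint.sum(axis=1, keepdims=True)     # marginal p(s)
    p_r = joint.sum(axis=0, keepdims=True)     # marginal p(r)
    nz = joint > 0                             # skip empty cells (log 0)
    return np.sum(joint[nz] * np.log2(joint[nz] / (p_s @ p_r)[nz]))

# Toy usage: a binary stimulus relayed through an 80%-faithful channel,
# for which the true value is 1 - H(0.2), roughly 0.28 bits.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, 5000)
r = np.where(rng.random(5000) < 0.8, s, 1 - s)
print(f"I(S;R) ~ {mutual_information(s, r):.3f} bits")
```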
Network analyses of neural circuits for economic decisions
The goal of this project, conducted in collaboration with the Padoa-Schioppa lab, is to shed light on the principles that determine how the values of goods are encoded in the brain and how these values are compared in neural networks to generate economic decisions. We use inference techniques based on maximum entropy and non-stationary models to reconstruct the functional circuits in the orbitofrontal cortex (OFC) that are central to economic decision-making. In parallel, we develop and test theories of how the OFC network may support the optimal coding of values and choices. Since economic choice behavior is specifically disrupted in a variety of clinical disorders, understanding the neural mechanisms underlying this behavior is important from both a scientific and a medical perspective.
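To make the maximum entropy approach concrete, here is a minimal sketch of fitting a pairwise maximum entropy (Ising-like) model to binarized population activity by gradient ascent on the likelihood. For the handful of toy units used here, the model averages can be computed exactly by enumerating all states; real recordings require Monte Carlo or approximate methods. The data and parameter choices are illustrative assumptions, not OFC recordings.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 5                                                # number of toy neurons
data = (rng.random((4000, N)) < 0.3).astype(float)   # fake binary spike words
emp_mean = data.mean(axis=0)                         # <s_i> in the data
emp_corr = data.T @ data / len(data)                 # <s_i s_j> in the data

states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
h = np.zeros(N)                                      # fields
J = np.zeros((N, N))                                 # symmetric couplings

for _ in range(2000):
    # Exact model averages over all 2**N states (feasible only for small N).
    energy = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(energy)
    p /= p.sum()
    mod_mean = p @ states                            # <s_i> under the model
    mod_corr = states.T @ (states * p[:, None])      # <s_i s_j> under the model
    # Gradient ascent: match means and pairwise correlations.
    h += 0.1 * (emp_mean - mod_mean)
    dJ = 0.1 * (emp_corr - mod_corr)
    np.fill_diagonal(dJ, 0.0)                        # s_i^2 = s_i is fixed by h
    J += dJ

print("max mean mismatch:", np.abs(emp_mean - mod_mean).max())
print("max corr mismatch:", np.abs(emp_corr - mod_corr)[~np.eye(N, dtype=bool)].max())
```

By construction, the fitted model is the least-structured (maximum entropy) distribution consistent with the measured firing rates and pairwise correlations, which is what makes it a useful null model for functional connectivity.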
Bayesian and complexity theories of high-level cognition
We continuously gather noisy data through our senses to make inferences about past, present, and future states of the world. Accessible information, time, and resources are limited, constraining the accuracy and complexity of viable inference strategies. The lab develops normative theories to understand how efficient inference processes adapt their complexity to environmental uncertainty and task demands. We recently identified a hierarchical (nested) organization in a wide range of models, from Bayesian probabilistic strategies to simpler, heuristic update processes that are often described as implementing a ‘model-free’ form of learning. By studying this hierarchy, we identified two fundamental principles: (a) a power law of diminishing returns, whereby increasing computational complexity and cognitive effort yields progressively smaller gains in accuracy, and (b) a non-monotonic relationship between cognitive demands and statistical uncertainty in the world, such that complex inference strategies are necessary only in a relatively narrow range of intermediate-noise environments.
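The toy example below illustrates the flavor of this hierarchy (it is our own minimal construction, not a result from the work above): for estimating a fixed Gaussian mean under a flat prior, the Bayesian posterior mean reduces to a delta rule whose learning rate decays as 1/t, while the cheaper ‘model-free’ heuristic keeps the learning rate fixed. The noise level and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma = 1.0, 2.0                 # hidden mean and observation noise
obs = rng.normal(mu_true, sigma, 200)

bayes, delta = 0.0, 0.0
alpha = 0.1                               # fixed learning rate of the heuristic
for t, x in enumerate(obs, start=1):
    # With a flat prior and known noise, the Bayesian posterior mean is the
    # running average: a delta rule whose gain decays as 1/t.
    bayes += (x - bayes) / t
    # The heuristic keeps a fixed gain: cheaper, and nearly as accurate here.
    delta += alpha * (x - delta)

print(f"error  Bayesian: {abs(bayes - mu_true):.3f}"
      f"   delta rule: {abs(delta - mu_true):.3f}")
```

In this static environment, the extra complexity of exact inference buys little; conversely, when the hidden mean drifts, a fixed-gain heuristic can track it while the decaying-gain rule cannot, echoing the idea that elaborate inference strategies pay off only in an intermediate regime of uncertainty.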
Statistical physics approaches to memory consolidation and retrieval
To acquire knowledge, we need effective ways to store and access information in the brain. Since the Hopfield model, the theory of spin glasses has offered a conceptual framework for studying how memories are encoded and retrieved in attractor networks. The quantification of the memory capacity of these networks has been the subject of an extensive literature, which has also shed light on optimal synaptic learning rules for storing information. Currently, our lab is interested in the complementary problem of memory retrieval. We use statistical physics approaches to study the properties of the basins of attraction in neural networks and to identify stimulation protocols that optimize memory recall. This research can also help elucidate the mechanisms of memory consolidation, which rely on the repeated activation of neural attractors by external inputs. Finally, we hope that these studies will help identify specific therapeutic targets for Alzheimer’s disease and other conditions that cause memory decline.
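For a concrete handle on these objects, the sketch below implements the textbook Hopfield model (standard Hebbian storage, not the lab’s models) and probes the basin of attraction of one stored memory by corrupting the retrieval cue with an increasing number of bit flips. Network size, load, and trial counts are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 5                              # neurons, stored patterns (low load)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N            # Hebbian couplings
np.fill_diagonal(W, 0.0)                   # no self-connections

def recall(state, sweeps=20):
    """Asynchronous sign updates until (approximately) converged."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

target = patterns[0]
for n_flip in (10, 40, 80, 120):
    hits = 0
    for _ in range(20):
        cue = target.copy()
        idx = rng.choice(N, n_flip, replace=False)
        cue[idx] *= -1                     # corrupt the cue
        hits += np.array_equal(recall(cue), target)
    print(f"{n_flip:3d} bits flipped -> recall rate {hits/20:.2f}")
```

The recall rate stays near one for mild corruption and collapses beyond a critical level, giving an operational estimate of the basin’s size; the retrieval questions described above concern how such basins are shaped, and how stimulation can steer dynamics back into them.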