Sloan-Swartz Centers for Theoretical Neurobiology

Annual Meeting 2008

 

Princeton University, July 19-22, 2008

 

Program (as of 19 July 2008)


 

Monday, July 21

 

9:00 NYU Posters Overview: Bob Shapley 

 

9:15 Invited Speaker: Asif Ghazanfar (Princeton) Primate communication through coupled oscillations

 

9:55 Break

 

10:10 Talk Sessions: Motor Control

 

10:10 Maurice Smith (Harvard)

Basis Elements for Motor Adaptation Reflect Neural Representations of Motion

 

Our internal neural representations of the world provide the framework for interpreting sensory information and forming motor responses. The neural representation of limb movement and posture integrates position and velocity information rather than processing them through separate channels. Furthermore, the responses to these variables are correlated such that a particular neuron's response to one of these features substantially predicts its response to the other. For example, muscle spindle afferents, the neural stretch sensors in our muscles, are driven simultaneously by the amount and rate of stretch, resulting in positively correlated responses to these variables. Similarly, the firing rates of neurons in primary motor and parietal cortex are simultaneously tuned to both the position and velocity of voluntary limb movements. However, little is understood about how this property shapes the learning of new motor skills. Here we make the first direct measurements of how the temporal patterns of force output produced by the motor system evolve during the learning of novel physical dynamics. The evolution of this temporal pattern reveals that the primitive elements underlying motor adaptation display correlated responses to position and velocity. These primitive elements are the computational building blocks for motor adaptation. They determine how the motor system forms motor commands and adapts them to altered dynamics, such as when one is fatigued or manipulating a tool. We find that the combined tuning of these 'motor primitives' to position and velocity dictates not only how quickly and to what extent several different types of dynamics are learned, but also why initial stages of learning are often rapid and stereotyped, whereas later stages are slower and more stimulus-specific. This work suggests that neural representations of motion shape the computational primitives for motor learning, which in turn determine the ability to learn different motor skills.
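A minimal sketch (not the authors' model) of the kind of adaptation scheme described above: hypothetical motor primitives with Gaussian tuning over position and velocity, whose preferred values are positively correlated, and whose weights adapt by an error-driven rule while learning a novel velocity-dependent force field. All parameter values are illustrative.

```python
# Sketch: adaptation with motor primitives jointly tuned to position and velocity.
# Assumed form: Gaussian tuning with correlated preferred position/velocity;
# weights adapt to reduce the error between produced and required force.
import numpy as np

rng = np.random.default_rng(0)

n_prim = 200
rho = 0.7                                     # assumed position-velocity correlation
cov = [[1.0, rho], [rho, 1.0]]
prefs = rng.multivariate_normal([0.0, 0.0], cov, size=n_prim)   # (pos, vel) centers

def activation(pos, vel, width=0.5):
    """Gaussian activation of each primitive for the current movement state."""
    d2 = ((pos - prefs[:, 0]) ** 2 + (vel - prefs[:, 1]) ** 2) / (2 * width ** 2)
    return np.exp(-d2)

def required_force(pos, vel, b=1.0):
    """Novel dynamics to be learned: a viscous force field, F = b * velocity."""
    return b * vel

w = np.zeros(n_prim)          # adaptive weights on the primitives
lr = 0.05                     # learning rate

for trial in range(200):
    # one sampled state along a movement (toy stand-in for a reach trajectory)
    pos, vel = rng.normal(0, 1), rng.normal(0, 1)
    g = activation(pos, vel)
    f_out = w @ g                              # force produced by the primitives
    err = required_force(pos, vel) - f_out     # force error drives adaptation
    w += lr * err * g                          # delta-rule update of the weights
```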

 

10:30 Heather Dean (NYU)

Reaction time correlations as a measure of eye-hand coordination

 

Despite the importance of hand-eye coordination, the behavioral signatures of coordinated actions remain poorly understood. Reaction time (RT), the time to respond to a cue, is widely used as a measure of movement preparation. We have proposed that RTs for eye and arm movements may also reflect preparation for coordinated movements. Previously, we found that saccade and reach RTs were highly correlated before a look-reach movement. These correlations did not depend on a common sensory modality, as we found strong correlations using either visual cues to initiate both movements or a visual cue for one effector and an auditory cue for the other. These observations indicate that the RT correlations we have observed are mainly due to movement preparation and not sensory processing. Here, we manipulate coordinated actions by explicitly controlling the relative timing of the movements and measuring the effect of this manipulation on RT correlations. A monkey was trained to perform a stimulus onset asynchrony (SOA) task in which we varied the timing between movements by delaying the reach with respect to the saccade. We varied the timing of eye and hand movements by triggering each using separate sensory cues. The monkey started each trial by fixating a central red LED and touching a central green LED. Both a reach and a saccade were made to a single peripheral yellow LED. Saccades were triggered by the onset of the peripheral yellow target and the offset of the central red LED. Reaches were triggered by the onset of an auditory cue. SOA was defined as the time from the saccade go cue to the reach go cue and varied from 0 to 1500 ms. We examined the effect of the SOA delay on RT correlations. At 0 ms SOA delay, RT correlations were highly significant (r = 0.47, p < 0.0001). We found that RT correlations remained highly significant (p < 0.0001) but dropped off quickly with increasing SOA delays (SOA = 100 ms, r = 0.29; SOA = 200 ms, r = 0.16). RT correlations were not significantly different from zero for SOA delays greater than 300 ms (p = 0.99). Since the average saccade RT is 280 ms and correlations are not significant for SOA delays greater than 300 ms, these correlations cannot be attributed to trial-by-trial changes in arousal and motivation; rather, they are linked to the overlap in time of preparation for a saccade and a reach. Our results support RT correlations as a measure of behavioral coordination. They also provide evidence for a class of models for coordination in which interacting races underlie the preparation of coordinated saccades and reaches. These race models are consistent with our data and predict that coordination will generate progressively stronger RT correlations when there is more time for the two races to influence each other.
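A minimal sketch of the correlation analysis described above, run on synthetic saccade and reach RTs rather than the monkey data: Pearson r and its p-value are computed separately for each SOA condition, with a shared "preparation" component that (by assumption) weakens as the SOA grows.

```python
# Sketch (synthetic data): Pearson correlation between saccade and reach RTs,
# computed separately for each SOA condition.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_trials = 300

for soa in [0, 100, 200, 400]:                       # ms, example SOA delays
    shared = rng.normal(0, 30, n_trials)             # shared preparation component
    k = max(0.0, 1.0 - soa / 300.0)                  # assumed decay of sharing with SOA
    sacc_rt = 280 + k * shared + rng.normal(0, 30, n_trials)
    reach_rt = 350 + k * shared + rng.normal(0, 30, n_trials)
    r, p = pearsonr(sacc_rt, reach_rt)
    print(f"SOA {soa:4d} ms: r = {r:.2f}, p = {p:.2g}")
```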

 

10:45 Bernstein Posters Overview: Julia Veit

 

11:00 Break  

 

11:20 Salk Posters Overview: Terry Sejnowski

 

11:35 Arpan Banerjee (NYU)

Spatiotemporal emergence of movement plans in the posterior parietal cortex during eye-hand coordination

 

Movement plans are formed after sensory stimuli have been discriminated and before movements are executed. The onset of the movement planning stage, which we call the movement selection time, is the time at which neural activity reliably discriminates which movement the animal will select. Identifying the movement selection time from spiking and local field potential (LFP) activity recorded simultaneously on multiple electrodes is a challenging problem. This is, in part, due to the different statistical properties of these measures of neuronal activity, which can introduce biases in the distribution of movement selection times, resulting in potentially misleading comparisons between measurements. Here, we present a unifying framework for the multivariate analysis of spiking and LFP activity and apply it to the problem of characterizing stages of processing when subjects perform coordinated eye-hand movements. The key idea in this framework is to develop probabilistic models of spiking and LFP activity and use them to obtain time-varying likelihood ratios for the selection of each movement direction. The firing rate dynamics of responsive single units are modeled as inhomogeneous Poisson processes, one for each movement direction. Using these models, time-varying likelihood ratios for each movement direction are obtained from new data. LFPs are modeled as autoregressive processes on single trials in the training data to decode the time-varying input to the neural tissue where the recordings are performed. Subsequently, the input distribution is modeled as a Gaussian process to obtain the time-varying likelihood ratios for the selection of each movement direction from new data. Statistical thresholds imposed on the accumulation of the log-likelihood ratios allow us to determine the movement selection times from spikes and fields. As a proof of concept, we apply this framework to simulated data and to neural recordings from multiple electrodes in the posterior parietal cortex of awake, behaving monkeys performing reaches and saccades to either an instructed or a freely chosen location.
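A minimal sketch of the spiking side of such a framework, under stated assumptions: two hypothetical fitted rate profiles (one per movement direction) define inhomogeneous Poisson models; on a new trial, the per-bin log-likelihood ratio is accumulated and the movement selection time is read out when the accumulated ratio crosses a threshold. Rate profiles, threshold, and bin size are illustrative, not taken from the abstract.

```python
# Sketch: movement-selection time from an accumulated log-likelihood ratio
# under two inhomogeneous Poisson rate models (one per movement direction).
import numpy as np

dt = 0.001                                    # 1-ms bins
t = np.arange(0, 1.0, dt)

# Hypothetical fitted rate profiles (spikes/s) for two movement directions
rate_A = 10 + 40 / (1 + np.exp(-(t - 0.4) / 0.05))    # ramps up for direction A
rate_B = np.full_like(t, 10.0)                         # stays near baseline for B

# Simulate one new trial in which direction A is actually selected
rng = np.random.default_rng(2)
spikes = rng.poisson(rate_A * dt)

# Per-bin Poisson log-likelihood ratio: log p(spikes | A) - log p(spikes | B)
llr = spikes * (np.log(rate_A * dt) - np.log(rate_B * dt)) - (rate_A - rate_B) * dt
cum_llr = np.cumsum(llr)

threshold = 3.0                                # assumed statistical threshold
crossings = np.nonzero(cum_llr > threshold)[0]
selection_time = t[crossings[0]] if crossings.size else None
print("movement selection time (s):", selection_time)
```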

 

11:50 Flip Sabes and UCSF Posters Overview

Normative models and networks: sensorimotor learning from two perspectives

 

A variety of sensorimotor phenomena are well described by normative statistical (maximum likelihood or Bayesian) models of sensory estimation or learning. We have found that by adding Hebbian learning to simple networks with line attractor properties, we can model several of these behavioral phenomena equally well. Here we focus on one example: the influence of recent experience on a simple visually guided reaching task. When repeated movements are made to the same target, movement variability drops for reaches to that target, and reaches to nearby targets are biased toward the repeated target. These effects are described by an adaptive Bayesian prior over the target or movement vector. We show that the effects can also be achieved in a simple line-attractor model for reach planning. When Hebbian plasticity is added to the model, regions along the "attractor" gain or lose strength with experience. Changes in the energy landscape along the attractor mimic the effects of a Bayesian prior. These and similar results from our lab point to a potential mechanistic model of Bayesian estimation in the brain and, more generally, support the view that the neural circuits for sensorimotor integration are constantly evolving in response to recent experience.
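A minimal sketch (not the authors' model) of the mechanism described: a one-dimensional line of units with local excitatory recurrence holds a bump of activity at the planned target; Hebbian strengthening after repeated plans to one target reshapes the landscape along the attractor, so later plans to nearby targets drift toward the repeated one, mimicking an adaptive prior. All parameters are illustrative.

```python
# Sketch: line-attractor reach planning with Hebbian plasticity.
import numpy as np

n = 100                                    # units tiling target direction
x = np.linspace(-1, 1, n)
W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))
W = 0.9 * W / W.sum(axis=1, keepdims=True)         # normalized local recurrence

def plan(target, W, steps=200, dt=0.1):
    """Relax the network with a weak cue at `target`; return the bump center and rates."""
    cue = 0.2 * np.exp(-(x - target) ** 2 / (2 * 0.1 ** 2))
    r = np.zeros(n)
    for _ in range(steps):
        r += dt * (-r + np.maximum(W @ r + cue, 0.0))
    return np.sum(x * r) / np.sum(r), r

repeated_target, probe_target = 0.0, 0.3
before, _ = plan(probe_target, W)

# Repeated plans to one target: Hebbian strengthening along that region of the attractor
for _ in range(20):
    _, r = plan(repeated_target, W)
    W = W + 1e-3 * np.outer(r, r)                   # simple Hebbian update
    W = 0.9 * W / W.sum(axis=1, keepdims=True)      # renormalize to keep dynamics stable

after, _ = plan(probe_target, W)
# A shift of the probe bump toward the repeated target (0.0) mimics the prior-induced bias.
print(f"probe-target bump center before: {before:.3f}, after repetition: {after:.3f}")
```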

 

12:10 Director's Lunch

 

12:30 Lunch

 

2:30 Brandeis Posters Overview: Eve Marder

 

2:45 Talk Sessions: Multiscale Human Electrophysiology

 

2:45 Scott Makeig (UCSD)

Modeling large-scale and neural-scale dynamics in multi-resolution human data

 

Brain dynamics are inherently multiscale, their operant structures at different spatial scales (e.g., cortical regions, columns, neurons, synapses, molecules) having different dynamics and connectivity structure. Currently, the most pressing questions in brain dynamics concern the coordination of these dynamics across multiple space and time scales, a subject long neglected during the experimental era dominated by single-microelectrode recording and analysis. Paradoxically or not, the imaging modalities that view brain activity from the greatest distance (e.g., from the scalp) are also the best suited to studying distributed dynamics. However, scalp EEG (or MEG) can only see the far-field projections of locally synchronized field activity across (largely) cortical domains of currently unknown size and spatiotemporal dynamics. Ideal measures of cortical field dynamics, therefore, need to be multi-resolution. A unique window of opportunity into human brain dynamics is afforded by the current clinical practice of invasive monitoring of cortical (and/or sub-cortical) activity in subjects with complex cases of intractable epilepsy for the purpose of planning remediative brain surgery. In collaboration with Dr. Greg Worrell (MD, PhD) at the Mayo Clinic (Rochester, MN), colleagues at the Swartz Center and I have been analyzing the first multi-resolution data from Dr. Worrell's practice. These consist of up to 30 scalp EEG electrodes recorded synchronously with electrocorticographic (ECoG) grids or strips of 1-cm-spaced subdural electrodes. Recently, Dr. Worrell has begun to intersperse additional 40-um wire-tip electrodes among these, and has used the resulting field recordings to uncover a new human epileptic phenomenon -- microseizures. Adequate joint analysis of simultaneous recordings at these three, and optimally still more intervening, spatial scales is not simple, however. Greg Worrell, Zeynep Akalin Acar (UCSD), Maxim Bazhenov (UC Riverside), Tanya Baker (Salk), and I are beginning an ambitious project to address more comprehensively the problem of multiscale brain dynamic modeling. I argue this must involve no less than a seven-layer data collection and analysis model, which I will describe. Zeynep and Maxim will then discuss the modeling problem from the 'field-data-down' and 'neural-model-up' perspectives.

 

3:05 Maxim Bazhenov (UC Riverside)

Large-scale neural modeling of multi-resolution human data

 

Intrinsic neuronal and network properties control the spatiotemporal patterns of neural activity that are involved in sensory processing, memory formation, and cognitive tasks. Modeling such dynamics requires computationally efficient single-neuron models capable of displaying realistic response properties. We developed a set of reduced models based on difference equations (map-based models) to simulate the intrinsic dynamics of biological neurons. These phenomenological models were designed to capture the main response properties of specific types of neurons while ensuring realistic model behavior across a sufficient dynamic range of inputs. Drawing on results obtained using large-scale networks of map-based neurons, we reveal a critical role played by cortical network geometry in achieving precise long-range synchronization in the gamma (20-80 Hz) frequency band. We found that the presence of many independent synaptic pathways in a two-dimensional network facilitates precise phase synchronization of gamma oscillations with nearly zero phase delays between remote network sites. Our results predict a common mechanism of precise oscillatory synchronization in neuronal networks.
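A minimal sketch of a map-based (difference-equation) neuron of the general kind described above, here in the style of the two-variable Rulkov map; the parameter values are illustrative and are not taken from the abstract or fit to data.

```python
# Sketch: a map-based neuron (Rulkov-style), updated once per discrete time step.
import numpy as np

def rulkov_map(n_steps=2000, alpha=4.5, mu=0.001, sigma=0.0, x0=-1.0, y0=-3.0):
    """Iterate the two-variable Rulkov map and return the fast variable x."""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]        # fast (spiking) variable
        y[n + 1] = y[n] - mu * (x[n] + 1.0) + mu * sigma   # slow (adaptation) variable
    return x

trace = rulkov_map()
# Count 'spikes' as upward crossings of x = 0 (illustrative threshold)
n_spikes = int(np.sum((trace[1:] > 0.0) & (trace[:-1] <= 0.0)))
print("number of spikes in the simulated trace:", n_spikes)
```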

 

3:30 Zeynep Akalin Acar (UCSD)

Neuroelectromagnetic Forward Modeling Toolbox and cortical source localization in epilepsy

  

This study introduces the Neuroelectromagnetic Forward Modeling Toolbox, which runs under MATLAB (The MathWorks, Inc.). The purpose of the toolbox is to generate realistic head models from available data (MRI and/or electrode locations) for numerically solving the forward problem of electromagnetic source imaging. As an application of the toolbox, a realistic head model for an epilepsy patient undergoing pre-surgical evaluation at the epilepsy center of the Mayo Clinic (Rochester, MN) is presented. The toolbox includes tools for segmenting scalp, skull, cerebrospinal fluid (CSF), and brain tissues from T1-weighted magnetic resonance (MR) images. After extracting the segmented tissue volumes, mesh generation can be performed using deformable models. When MR images are not available, it is possible to warp a template head model to measured electrode locations to obtain a better-fitting realistic model. The Boundary Element Method (BEM) is used for the numerical solution of the forward problem. Toolbox functions can be called either from a graphical user interface or from the command line. Function help messages and a tutorial are included. The toolbox is freely available under the GNU Public License for non-commercial use and open-source development. The accuracy of the head model used in electrical source imaging (ESI) significantly affects the accuracy of the source localization. For pre-surgical evaluation of epilepsy patients, a section of the skull is removed and a plastic sheet with embedded subdural electrodes is placed on the cortex. For these studies, the influence of post-surgical defects in the skull and of the plastic sheet cannot be neglected. To overcome this problem, we generated a BEM head model containing the scalp, the skull with a large craniotomy, and the plastic sheet layers. The multi-scale EEG data acquired simultaneously from scalp (sEEG) and intracranial subdural (iEEG) electrodes are analyzed using independent component analysis (ICA), which identifies and isolates independent signal components from multi-channel recordings. Using this approach, we investigated the relationship between noninvasive and invasive localization of electrical sources in the human brain. We observed a difference of 1 cm between sources estimated from sEEG and iEEG measurements. This difference may be due to the low spatial sampling of the scalp data, the registration of the scalp electrodes to the model, and the exclusion of the CSF layer from the model.
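A minimal sketch (synthetic data, not the toolbox itself) of the two ingredients described above: the linear EEG forward model V = L s, with a stand-in lead-field matrix L in place of one computed from a BEM head model, followed by ICA to unmix the multichannel recording into independent components. The source waveforms, channel counts, and noise level are all assumptions.

```python
# Sketch: linear EEG forward model plus ICA unmixing on synthetic data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_channels, n_sources, n_samples = 32, 3, 5000
t = np.arange(n_samples) / 500.0                      # 500 Hz sampling, 10 s

# Hypothetical cortical sources: an oscillation, a sparse spiking source, noise
s = np.vstack([
    np.sin(2 * np.pi * 10 * t),                       # 10 Hz oscillation
    (rng.random(n_samples) < 0.01).astype(float),     # sparse spike-like source
    rng.laplace(size=n_samples),                      # heavy-tailed noise source
])

L = rng.normal(size=(n_channels, n_sources))          # stand-in lead field (BEM-derived in practice)
V = L @ s + 0.05 * rng.normal(size=(n_channels, n_samples))   # simulated scalp recording

ica = FastICA(n_components=n_sources, random_state=0)
components = ica.fit_transform(V.T).T                 # recovered sources (up to order and scale)
print("recovered component array shape:", components.shape)
```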

 

3:50 Makeig, Bazhenov, & Acar: Joint discussion  

 

4:00 Break  

 

4:30 Talk Sessions: Oscillations and Spikes

 

4:30 Dave Markowitz (Princeton) 

Rate-specific synchrony: Using noisy oscillations to detect equally active neurons

 

Gamma oscillations (30-100 Hz) are observed in field potential recordings from many brain areas. These oscillations are typically noisy, exhibiting fluctuations in amplitude and a broad frequency distribution. In vitro experiments using cortical brain slices have demonstrated that gamma oscillations can be produced by sustained activation of networks of inhibitory neurons, which in turn produce highly correlated rhythmic membrane potential oscillations in the local population of pyramidal cells. This raises the question of what role common noisy oscillations might play in producing synchronous action potentials across a population of neurons. Such a relationship would be functionally important, because precise spike timing influences synaptic activation of postsynaptic targets, timing-dependent short-term synaptic plasticity, and pattern recognition. Previous work has shown that weakly correlated input to a pair of neurons produces an output correlation proportional to the geometric mean of their firing rates. We explore the opposite regime of highly correlated noisy input, consistent with in vivo paired intracellular recordings and the synaptic drive expected from gamma-producing inhibitory networks. We ask how this common input interacts with more slowly varying independent inputs to the neurons that modify mean firing rates. Using whole-cell recordings from cortical neurons in vitro, we show that similarity of firing rate is a major determinant of synchrony under common noisy oscillatory input: precise spike synchrony at similar rates (within 3 spikes/sec) gives way to substantial desynchronization at larger firing rate differences. Analysis of this rate-specific synchrony phenomenon reveals distinct spike timing fingerprints at different firing rates, which emerge through a combination of phase shifting and abrupt changes in spike patterns. During periods of synchrony, individual neurons show highly irregular, Poisson-like interspike interval distributions, as observed in vivo. We further demonstrate that rate-specific synchrony permits robust detection of rate similarity in a population of neurons through synchronous activation of a postsynaptic neuron, supporting the biological plausibility of Many Are Equal (MAE) computation. Our results reveal that spatially coherent noisy oscillations, which are common throughout the brain, generate previously unknown relationships between neural rate codes, noisy interspike intervals, and precise spike synchrony codes. All of these can coexist in a self-consistent manner because of rate-specific synchrony.
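A minimal sketch (a simulation, not the authors' in vitro experiment) of the setup described above: two leaky integrate-and-fire neurons share a noisy ~40 Hz drive, while independent biases and weak private noise set each neuron's firing rate; spike coincidences within a few milliseconds are then counted as the rate difference grows. Model form, parameters, and the coincidence window are all assumptions.

```python
# Sketch: spike synchrony of two LIF neurons under common noisy oscillatory input.
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-4, 10.0                        # 0.1-ms steps, 10 s
n = int(T / dt)
tau, v_th, v_reset = 0.02, 1.0, 0.0       # LIF parameters (arbitrary units)

# Shared noisy ~40 Hz drive: a 40 Hz sinusoid plus white noise
common = np.sin(2 * np.pi * 40 * np.arange(n) * dt) + 0.5 * rng.normal(size=n)

def lif_spikes(bias, noise_sd=0.5):
    """One LIF neuron driven by the shared input, a constant bias, and weak private noise."""
    indep = rng.normal(scale=noise_sd, size=n)        # independent input to this neuron
    v, spikes = 0.0, []
    for i in range(n):
        v += dt / tau * (-v + bias + common[i] + indep[i])
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

def coincidences(s1, s2, window=0.002):
    """Fraction of neuron-1 spikes with a neuron-2 spike within +/- window seconds."""
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    idx = np.clip(np.searchsorted(s2, s1), 1, len(s2) - 1)
    nearest = np.minimum(np.abs(s2[idx] - s1), np.abs(s2[idx - 1] - s1))
    return float(np.mean(nearest < window))

ref = lif_spikes(bias=1.1)
for bias in [1.1, 1.2, 1.4]:              # increasing rate difference from the reference
    other = lif_spikes(bias=bias)
    print(f"bias {bias:.1f}: rate {len(other)/T:5.1f} Hz, "
          f"coincidence fraction {coincidences(ref, other):.2f}")
```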

 

4:50 Zoltan Nadasdy (Caltech)

Information coding with the phases of action potentials

 

5:15 Haim Sompolinsky (Harvard)

 

5:30 Poster Setup and Browsing

 

6:00 Dinner

 

8:00 – 10:00 Poster Session

 

Continued...

  
