Carnegie Mellon University

L-R: Jonathan Gould, Linda Moya, Thomas Summe, Jack Burgess, William Cole, Helen Feibes, Matthew Clapp, Chloe Chen, Chris Weinberger, Leya Luo, Avyi Hill, Sol Markman, Aniekan Umoren, Raina Vin, Nikolas McNeal, Clayton Washington.


2021 Summer Undergraduate Research Program in Neural Computation (uPNC) class roster

Chloe Chen

Undergraduate Institution: Carnegie Mellon University

Mentor: Bard Ermentrout
University and Department: University of Pittsburgh

Project Description: 

Experimental data indicate higher gamma frequencies in the dorsolateral prefrontal cortex (DLPFC) than in the posterior parietal cortex (PPC) - two regions that interact during working memory processes. Under the mentorship of Dr. Ermentrout, Chloe Chen and Avyi Hill investigated potential underlying causes of this difference. They simulated a network of 50 excitatory and 20 inhibitory neurons using a quadratic integrate-and-fire (QIF) model. In XPPAUT, they manipulated parameters related to excitatory synaptic coupling, spike adaptation, and magnesium level. Additionally, they reduced the QIF network to a system of 8 ordinary differential equations using the Montbrió reduction method. They found that increased excitatory synaptic coupling increases the frequency and power of gamma rhythms and could account for the difference in frequencies between the DLPFC and PPC, while higher levels of spike adaptation and magnesium decrease gamma frequency and power. Future plans are to explore the cause of bands in the power spectra in the presence of magnesium and to apply the Montbrió model to working memory questions.
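The quadratic integrate-and-fire dynamics can be illustrated with a minimal single-neuron sketch (in Python rather than XPPAUT, for a lone neuron rather than the 50-excitatory/20-inhibitory network; all parameter values here are illustrative, not the project's):

```python
def qif_spike_times(I=100.0, v_reset=-10.0, v_peak=10.0, dt=1e-4, t_max=1.0):
    """Euler-integrate a single QIF neuron, dv/dt = v^2 + I, resetting
    to v_reset whenever v crosses v_peak. Returns the spike times."""
    v, t, spikes = v_reset, 0.0, []
    while t < t_max:
        v += dt * (v * v + I)   # quadratic intrinsic dynamics plus drive I
        t += dt
        if v >= v_peak:         # spike: record the time and reset
            spikes.append(t)
            v = v_reset
    return spikes

spikes = qif_spike_times()
print(f"{len(spikes)} spikes in 1 s of model time")
```

In the actual network model each neuron would also receive synaptic input from the rest of the population, with the coupling, adaptation, and magnesium terms entering the drive.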


William Cole

Undergraduate Institution: University of Massachusetts Amherst

Mentor: Rob Kass
University and Department: Carnegie Mellon University

Project Description: 

Clay Washington and William Cole worked together to analyze local field potentials (LFPs) recorded with multi-electrode Neuropixels probes in the mouse visual system in response to different stimuli (data available from the Allen Institute for Brain Science). They applied logistic regression and support vector classifiers to 150 repeated trials of data from several distinct brain areas in order to discriminate light from dark full-field flashes. Classifiers based on LFPs from area V1 discriminated these stimuli very effectively, while data from other cortical areas were less discriminative. They identified particular time intervals following stimulus onset when LFP data were most informative, and found that LFP data from the parts of the thalamus that project to primary visual cortex were more informative during the period just after the flashing stimulus was terminated than during stimulus presentation. Classifiers trained on one mouse remained effective when tested on different mice, and nested cross-validation was used to provide confidence intervals for classifier accuracy.
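The cross-validated accuracy estimate can be sketched as follows. This is a toy version under loud assumptions: synthetic Gaussian "trials" stand in for the LFP data, a nearest-centroid rule stands in for the logistic regression and support vector classifiers, and only the outer loop of the nested scheme is shown (the inner hyperparameter-tuning loop would sit where the model is fit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data: 150 trials x 20 features, two
# stimulus classes (light vs. dark flash) separated by a mean shift.
X = rng.normal(size=(150, 20))
y = rng.integers(0, 2, size=150)
X[y == 1] += 1.0

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(model, X):
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in (0, 1)])
    return d.argmin(axis=0)

# Outer 5-fold loop: each held-out fold yields one accuracy estimate.
idx = rng.permutation(len(y))
folds = np.array_split(idx, 5)
accs = []
for k, test in enumerate(folds):
    train = np.concatenate([f for i, f in enumerate(folds) if i != k])
    model = fit_centroids(X[train], y[train])   # inner tuning loop would go here
    accs.append(float((predict(model, X[test]) == y[test]).mean()))

m = float(np.mean(accs))
s = float(np.std(accs, ddof=1))
print(f"mean accuracy {m:.2f}, "
      f"approx 95% CI ({m - 2*s/5**0.5:.2f}, {m + 2*s/5**0.5:.2f})")
```

The spread of accuracies across outer folds is what supplies the confidence interval; the normal approximation above is the simplest choice.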


Helen Feibes

Undergraduate Institution: Amherst College

Mentor: Eric Yttri
University and Department: Carnegie Mellon University

Project Description: 

Naturalistic, unrestrained behaviors are crucial to understanding the relationship between the brain and behavior, but they are often difficult to define and track, as they continuously evolve over time and space and display highly variable kinematics. While existing methods are well suited to studying restrained, task-related behaviors alongside specific brain areas of interest, new machine learning methods may offer a way to extract complex naturalistic behaviors concurrently with their neural correlates across the brain. Helen Feibes combined pose estimation, behavior extraction, and electrophysiology to reveal the neural dynamics underlying locomotion, rearing, face/head grooming, and body grooming in mouse primary motor cortex and striatum.

To track mouse body poses over video frames, Helen used DeepLabCut, training a deep convolutional neural network on video frames with manually labeled body parts. After finding that the model performed well for the snout but not for other body parts, she moved from the original unsupervised approach to behavior extraction with B-SOiD (developed by the Yttri lab; Nature Communications) to a supervised method using snout information and user-defined behavior onset times. Behavior onsets were aligned with neural spiking data recorded simultaneously with a chronic Neuropixels electrode implant. Primary motor cortex and striatal neurons displayed strong positive and negative modulation prior to and during the onset of locomotion, rearing, and face/head grooming, while largely demonstrating only negative modulation prior to and during the onset of body grooming. To determine whether the modulation was behavior-specific, the pre-movement activity of neurons was plotted across pairs of behaviors; these plots reveal locomotion-specific and face/head-grooming-specific neurons. Together, these results suggest roles for primary motor cortex and striatum in the onset of various naturalistic behaviors and point to neurons involved in specific behavioral representations.
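The onset-alignment step amounts to building a peri-event time histogram around each behavior onset. A minimal sketch, with synthetic spike and onset times standing in for the Neuropixels recordings and user-defined onsets:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: spike times from one unit and behavior onset
# times over a 10-minute session (seconds).
spikes = np.sort(rng.uniform(0.0, 600.0, size=5000))
onsets = np.arange(30.0, 570.0, 30.0)
edges = np.linspace(-2.0, 2.0, 41)        # 100 ms bins around onset

counts = np.zeros(len(edges) - 1)
for t0 in onsets:
    # Spike times re-expressed relative to this behavior onset.
    rel = spikes[(spikes >= t0 - 2.0) & (spikes < t0 + 2.0)] - t0
    counts += np.histogram(rel, edges)[0]

rate = counts / (len(onsets) * 0.1)       # trial-averaged firing rate (Hz)
```

Comparing such onset-aligned rates across behaviors (e.g., locomotion versus body grooming) is what exposes positive versus negative pre-movement modulation.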


Jonathan Gould

Undergraduate Institution: New York University

Mentor: Byron Yu
University and Department: Carnegie Mellon University

Project Description: 

Dimensionality reduction methods are integral to our current understanding of multichannel electrophysiological recordings. Until recently, existing methods have struggled to tease apart the signals concurrently relayed between two neural populations. A novel dimensionality reduction method, Delayed Latents Across Groups (DLAG; Gokcen et al., Cosyne 2020) has been proposed to address this challenge. The original implementation of this method, however, is in MATLAB, a proprietary scientific computing tool. To improve DLAG's accessibility to the scientific community, Jonathan worked to translate DLAG from MATLAB to Python, an open-source and free programming language.


Avyi Hill

Undergraduate Institution: Wheaton College

Mentor: Bard Ermentrout
University and Department: University of Pittsburgh

Project Description: 

Experimental data indicate higher gamma frequencies in the dorsolateral prefrontal cortex (DLPFC) than in the posterior parietal cortex (PPC) - two regions that interact during working memory processes. Under the mentorship of Dr. Ermentrout, Avyi Hill and Chloe Chen investigated potential underlying causes of this difference. They simulated a network of 50 excitatory and 20 inhibitory neurons using a quadratic integrate-and-fire (QIF) model. In XPPAUT, they manipulated parameters related to excitatory synaptic coupling, spike adaptation, and magnesium level. Additionally, they reduced the QIF network to a system of 8 ordinary differential equations using the Montbrió reduction method. They found that increased excitatory synaptic coupling increases the frequency and power of gamma rhythms and could account for the difference in frequencies between the DLPFC and PPC, while higher levels of spike adaptation and magnesium decrease gamma frequency and power. Future plans are to explore the cause of bands in the power spectra in the presence of magnesium and to apply the Montbrió model to working memory questions.


Leya Luo

Undergraduate Institution: University of Chicago

Mentor: Chengcheng Huang
University and Department: University of Pittsburgh

Project Description: 

Leya Luo used a recurrent neural network model of V1 to analyze excitatory neuron dynamics. The network was presented with Gabor images, and its response to a single Gabor image was compared to its response to the sum of two orthogonal Gabor images. The network’s response to the summed orthogonal Gabor images was close to the sum of its responses to the individual Gabor images. Additionally, Leya calculated the tuning curves for excitatory neurons in the network and used them to determine how the spike count correlation of a pair of neurons depends on their preferred orientations. The network's activity was less correlated when it was shown the composite image than when it was shown the individual Gabors that made up the composite.
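The stimulus construction can be sketched in a few lines: a Gabor patch is an oriented sinusoidal grating under a Gaussian envelope, and the composite image is the pixel-wise sum of two orthogonal patches. All parameter values below are illustrative, not those used in the network model:

```python
import numpy as np

def gabor(size=64, theta=0.0, freq=0.15, sigma=8.0):
    """Gabor patch: grating at orientation theta under a Gaussian envelope."""
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

g_vertical = gabor(theta=0.0)
g_horizontal = gabor(theta=np.pi / 2)   # orthogonal orientation
composite = g_vertical + g_horizontal   # summed stimulus shown to the network
```

Presenting `g_vertical`, `g_horizontal`, and `composite` separately and comparing the responses is the linearity test described above.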

Sol Markman

Undergraduate Institution: Washington University in St. Louis

Mentor: Marlene Cohen
University and Department: University of Pittsburgh

Project Description: 

When navigating the natural world, animals must select relevant behavioral tasks to solve based on incomplete information from past experiences in a dynamic environment. In the traditional view of decision-making, task-belief and perceptual judgement are often assumed to be sequential and independent computations. In recent work, population recordings from cortical areas 7a and V1, made as subjects performed a two-feature discrimination task with a randomly switching task rule, showed that task-belief and perceptual decision-making interact on a trial-by-trial basis - a bi-directional interaction observed at both the behavioral and neuronal levels (Xue, Kramer, Cohen 2021). Sol Markman’s summer project aimed to use recurrent neural network (RNN) models to learn about the network structure underlying the interaction between task-belief and perceptual decisions. Models were trained to perform a version of the two-feature discrimination task, constrained by behavioral data to varying degrees, so that the models’ behavior could be directly compared to the subjects’. While the models’ performance measures on the task were similar to the subjects’, the bi-directional relationship between task-belief and perception was not fully reproduced: the model conditioned task updates on perceptual difficulty, but perceptual performance was unaffected by task uncertainty. This suggests that the correlation between task-belief and perception arises from learned priors or biophysical properties that are not naturally captured by a simple RNN model. Future directions include increasing the complexity of the modeled task and exploring the effects of pre-training networks on a variety of tasks, as well as continuing an alternate approach that compares models that predict subjects’ errors with models that predict correct choices, to characterize the computations underlying suboptimal behavioral strategies.


Nikolas McNeal

Undergraduate Institution: Ohio State University

Mentor: Tai-Sing Lee
University and Department: Carnegie Mellon University

Project Description: 

PredNet [1,2] is a generative neural network trained via self-supervised learning to perform next-frame prediction. Self-supervised learning is an unsupervised framework which uses unlabelled data to automatically generate supervision signals; PredNet uses the difference between its predictions and the ground-truth frames in its objective function. PredNet was inspired by the predictive coding principle: the brain creates a hierarchical internal model of the world to explain away sensory inputs. The network implements this idea by continuously generating predictions of future sensory input via a top-down path and sending prediction errors via its bottom-up path. This principle allows the model to reproduce salient phenomena in the visual cortex such as illusory percepts and single-unit dynamics [2]. The representations of neurons in PredNet are not known; insofar as PredNet is a useful model of the primate visual cortex, learning its internal representations can provide insights into the functioning of the visual system. Visualizing the stimuli that elicit a strong or weak response gives insight into PredNet’s internal representations.
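The explain-away loop can be caricatured in a few lines: a top-down prediction is subtracted from the input, and only the resulting error drives the update of the internal state. This is a one-layer toy with an identity readout, not PredNet's convolutional recurrent stack:

```python
import numpy as np

rng = np.random.default_rng(2)

signal = rng.normal(size=16)   # a static "frame" the model must explain
state = np.zeros(16)           # internal representation
errors = []
for _ in range(100):
    prediction = state              # top-down prediction of the input
    error = signal - prediction     # bottom-up prediction error
    state = state + 0.1 * error     # state moves to explain away the error
    errors.append(float(np.linalg.norm(error)))

# The prediction error shrinks as the input is progressively explained away.
```

In PredNet this loop runs at every level of the hierarchy, with each layer predicting the activity of the layer below.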


Thomas Summe

Undergraduate Institution: Brown University

Mentor: Leila Wehbe
University and Department: Carnegie Mellon University

Project Description: 

Thomas Summe’s research this summer focused on finding neural correlates of language processing while subjects listened to stories. With MEG’s relatively high temporal resolution compared to fMRI, neural activity linked to intra-word language events can be better analyzed. Using data from one subject listening to five stories, Thomas built encoding models that used phoneme occurrence and spectrogram sound frequency as feature spaces to predict activity in each of 306 MEG channels. The encoding models were linear models fit with cross-validated ridge regression. He found that all encoding models were best at predicting channels around the auditory cortex, but that the sound-frequency encoding model was a better predictor of the amplitude of activity around the auditory cortex than the phoneme encoding models. He inspected the encoding models to study the receptive fields of different channels and found that similar phonemes had a similar impact in predicting MEG channel activity in the auditory cortex.
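A minimal version of such an encoding model, with a synthetic feature matrix standing in for the phoneme/spectrogram features, a single simulated channel standing in for the 306 MEG channels, and a single held-out split standing in for the full cross-validation:

```python
import numpy as np

rng = np.random.default_rng(1)

# 500 time points, 40 stimulus features (e.g. phoneme indicators or
# spectrogram bins), one simulated MEG channel.
T, F = 500, 40
X = rng.normal(size=(T, F))
w_true = rng.normal(size=F)
y = X @ w_true + rng.normal(scale=2.0, size=T)

def ridge_fit(X, y, alpha):
    """Closed-form ridge solution: w = (X^T X + alpha*I)^-1 X^T y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# Choose the penalty on held-out data, scoring by prediction correlation.
train, val = np.arange(400), np.arange(400, 500)
scores = {}
for alpha in (0.1, 1.0, 10.0, 100.0):
    w = ridge_fit(X[train], y[train], alpha)
    scores[alpha] = float(np.corrcoef(X[val] @ w, y[val])[0, 1])
best_alpha = max(scores, key=scores.get)
```

The held-out prediction correlation per channel is also the natural map for locating which sensors (here, those over auditory cortex) each feature space explains best.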

Aniekan Umoren

Undergraduate Institution: Massachusetts Institute of Technology

Mentor: Tai-Sing Lee
University and Department: Carnegie Mellon University

Project Description: 

PredNet [1,2] is a generative neural network trained via self-supervised learning to perform next-frame prediction. Self-supervised learning is an unsupervised framework which uses unlabelled data to automatically generate supervision signals; PredNet uses the difference between its predictions and the ground-truth frames in its objective function. PredNet was inspired by the predictive coding principle: the brain creates a hierarchical internal model of the world to explain away sensory inputs. The network implements this idea by continuously generating predictions of future sensory input via a top-down path and sending prediction errors via its bottom-up path. This principle allows the model to reproduce salient phenomena in the visual cortex such as illusory percepts and single-unit dynamics [2]. The representations of neurons in PredNet are not known; insofar as PredNet is a useful model of the primate visual cortex, learning its internal representations can provide insights into the functioning of the visual system. Visualizing the stimuli that elicit a strong or weak response gives insight into PredNet’s internal representations.


Raina Vin

Undergraduate Institution: Carnegie Mellon University

Mentor: Marlene Behrmann
University and Department: Carnegie Mellon University

Project Description: 

Functional magnetic resonance imaging (fMRI) studies have repeatedly demonstrated that word recognition is associated with activation of the visual word form area (VWFA) in the left hemisphere and that the left VWFA is connected to the major language areas, Broca’s area and Wernicke’s area. Raina Vin’s project began by asking three questions: first, is it only the VWFA in the left hemisphere that is engaged in word recognition, or is the right hemisphere also engaged? Second, is the VWFA connected only to Broca’s and Wernicke’s areas, or is there broader connectivity to other, more minor language regions? And third, is the observed connectivity profile modulated by the nature of the input - in other words, is the connectivity between the areas stronger the more word-like the input? To answer these questions, the Behrmann Lab acquired 3T functional MRI data from 28 right-handed, college-age individuals who performed a 1-back recognition task on words, inverted words, and letter strings.

Raina first performed two primary analyses: selectivity and functional connectivity. Selectivity measures the strength of activation in a brain region of interest (ROI) in response to a stimulus category (here, words, inverted words, and letter strings). The selectivity analysis showed that not only the left hemisphere but both the left and right hemispheres are activated, and thus both are involved in word perception and recognition. Functional connectivity analyses revealed that (1) in addition to Broca’s and Wernicke’s areas, the precentral gyrus and the superior temporal sulcus and gyrus (STS+G) are involved in word recognition, and (2) functional connectivity between these language ROIs showed differential strengths across the three types of stimuli. Additionally, the results suggested that the VWFA, Broca’s area, Wernicke’s area, the STS+G, and the precentral gyrus form a distributed network of interconnected regions, both within and across hemispheres. Raina then used graph-theoretical methods to study the network properties of these regions and possible hubs in the network. Initial analyses suggested that the STS+G and precentral gyrus are important hubs in the language network, along with the VWFA, Broca’s area, and Wernicke’s area. Moreover, the VWFA showed significant task-condition differences (particularly between words and inverted words) for the degree and node-strength metrics.
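The graph-theoretical step can be sketched with a toy connectivity matrix. The ROI labels follow the text, but the connectivity values and the edge threshold below are invented for illustration, not the study's estimates:

```python
import numpy as np

rois = ["VWFA", "Broca", "Wernicke", "STS+G", "precentral"]
# Hypothetical symmetric functional-connectivity matrix over the 5 ROIs.
C = np.array([
    [0.0, 0.6, 0.5, 0.7, 0.4],
    [0.6, 0.0, 0.3, 0.5, 0.2],
    [0.5, 0.3, 0.0, 0.6, 0.1],
    [0.7, 0.5, 0.6, 0.0, 0.3],
    [0.4, 0.2, 0.1, 0.3, 0.0],
])

threshold = 0.35
degree = (C > threshold).sum(axis=1)   # number of suprathreshold edges
strength = C.sum(axis=1)               # summed connectivity weight per node
for r, d, s in zip(rois, degree, strength):
    print(f"{r:10s} degree={d} strength={s:.1f}")
```

Nodes with high degree and strength relative to the rest of the network are the candidate hubs; computing these metrics separately per task condition is what reveals condition differences such as words versus inverted words.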


Clayton Washington

Undergraduate Institution: Ohio State University

Mentor: Rob Kass
University and Department: Carnegie Mellon University

Project Description: 

Clay Washington and William Cole worked together to analyze local field potentials (LFPs) recorded with multi-electrode Neuropixels probes in the mouse visual system in response to different stimuli (data available from the Allen Institute for Brain Science). They applied logistic regression and support vector classifiers to 150 repeated trials of data from several distinct brain areas in order to discriminate light from dark full-field flashes. Classifiers based on LFPs from area V1 discriminated these stimuli very effectively, while data from other cortical areas were less discriminative. They identified particular time intervals following stimulus onset when LFP data were most informative, and found that LFP data from the parts of the thalamus that project to primary visual cortex were more informative during the period just after the flashing stimulus was terminated than during stimulus presentation. Classifiers trained on one mouse remained effective when tested on different mice, and nested cross-validation was used to provide confidence intervals for classifier accuracy.

Chris Weinberger

Undergraduate Institution: University of Minnesota-Twin Cities

Mentor: Greg Siegle
University and Department: University of Pittsburgh

Project Description: 

Emotion dysregulation is a feature shared among many psychopathologies; understanding this process would therefore allow for more precisely targeted interventions. This summer, Chris Weinberger worked with his mentor, Dr. Greg Siegle, to build dynamical connectionist models that simulate emotion regulation as interactions between the body and several brain regions. One of these models was a fully connected graph with four nodes representing the insula, amygdala, prefrontal cortex, and the body. The other implemented active-inference predictive coding in the insula, embedded in the same fully connected graph of regions. These models were then fit to patient and control fMRI data and will be used to find statistically significant differences in network connections between patient and control groups. So far, they have found that expected emotion-regulation dynamics, following James Gross’ process model, occur in a narrow band of the models’ parameter spaces, and that the active-inference model has more flexibility in its regulatory dynamics.
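The four-node fully connected model can be sketched as a small system of coupled rate units. The coupling weights and input below are hypothetical placeholders, not the lab's fitted parameters, and the active-inference variant is not shown:

```python
import numpy as np

labels = ["insula", "amygdala", "PFC", "body"]
# Hypothetical coupling matrix W[i, j]: influence of node j on node i.
W = np.array([
    [ 0.0,  0.8, -0.5,  0.6],
    [ 0.7,  0.0, -0.9,  0.5],
    [ 0.4,  0.6,  0.0,  0.2],
    [ 0.3,  0.9, -0.4,  0.0],
])
u = np.array([0.0, 0.5, 0.0, 0.2])   # external drive (e.g. an emotional stimulus)

def step(x, dt=0.01):
    """One Euler step of dx/dt = -x + tanh(W x + u): leaky sigmoidal units."""
    return x + dt * (-x + np.tanh(W @ x + u))

x = np.zeros(4)
for _ in range(2000):   # 20 s of model time, toward a settled state
    x = step(x)
```

Fitting a model of this shape to fMRI time series and comparing the estimated entries of `W` between groups is the kind of analysis the project's patient-versus-control comparison entails.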

The 2021 uPNC speaker schedule can be viewed here (pdf format).