Identification of cell types and connectivity from in-vivo activity recordings
Lu Mi
Shanahan Foundation Fellow
Allen Institute | University of Washington
Large-scale profiling of neural activity
https://github.com/sharminpathan/neuron-finding-in-calcium-imaging
http://www.steinmetzlab.net/
alleninstitute.org |
Single-cell RNA sequencing puts forth a very different view
Tasic et al., Nature, 2018
A molecularly defined and spatially resolved cell atlas
Zhang et al., bioRxiv, 2023
From spatial transcriptomics to functional activity
Towards a mechanistic understanding:
From spatial transcriptomics to function
Bugeon et al., 2022
A way forward: post-hoc spatial transcriptomics
Activity → Gene expression
LOLCAT
Schneider et al., Cell Reports, 2023
Challenges
Methodology
Uygar Sümbül
@ Allen Institute
Trung Le
PhD Student @ UW
Wuwei Zhang
Master's Student @ UW
Shanahan Postbac RA
Generalized and scalable framework to study multi-modal neural data
Lu Mi*, Trung Le*, Tianxing He, Eli Shlizerman, Uygar Sümbül. Learning Time-Invariant Representations for Individual Neurons from Population Dynamics (func2type), NeurIPS 2023 * equal contribution
An implicit model of neuronal dynamics
- activity of neuron i at time t
- activity of neurons that provide (synaptic or extra-synaptic) input to neuron i at time t
- time-invariant representation for neuron i
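The three ingredients above can be sketched in code. This is a minimal numpy illustration only: the actual model in func2type is a learned transformer, so the linear-tanh form, the function name, and all parameter names here are illustrative stand-ins.

```python
import numpy as np

def implicit_step(x_i_t, x_pop_t, e_i, W_pop, w_self, w_e):
    """Illustrative implicit dynamics step: predict neuron i's next
    activity from (1) its own activity at time t, (2) a summary of the
    activity of its input population, and (3) its time-invariant
    representation e_i. The linear + tanh form is a stand-in for the
    learned transformer in func2type."""
    pop_summary = W_pop @ x_pop_t            # summarize input-population activity
    drive = w_self * x_i_t + pop_summary.sum() + w_e @ e_i
    return np.tanh(drive)                    # bounded next-step activity

rng = np.random.default_rng(0)
x_next = implicit_step(
    x_i_t=0.5,                               # activity of neuron i at time t
    x_pop_t=rng.normal(size=10),             # activity of input neurons at time t
    e_i=rng.normal(size=4),                  # time-invariant representation of neuron i
    W_pop=rng.normal(size=(3, 10)) * 0.1,
    w_self=0.8,
    w_e=rng.normal(size=4) * 0.1,
)
```

The key design point carried over from the slide: everything time-varying enters through the first two arguments, while `e_i` is constant across time and is what gets read out for cell-type identification.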
Transferability across experiments and animals
Problem: the number and order of recorded neurons vary across experiments and animals
Self-supervised learning for pretraining
ChatGPT: large language models with self-supervised pretraining
predict next token → foundation model → finetuning → downstream tasks
Vaswani et al., 2017; Devlin et al., 2018
NeuPRINT: a self-supervised representation learning framework
running speed, pupil diameter
learn via prediction loss
Dynamical model: transformer with causal attention
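Causal attention is what makes the transformer usable as an autoregressive dynamical model: position t may attend only to positions ≤ t. A self-contained numpy sketch (single head, no learned projections, so Q = K = V = the input; these simplifications are mine, not the paper's):

```python
import numpy as np

def causal_self_attention(X):
    """Single-head self-attention with a causal mask. Each time step
    attends only to itself and earlier steps, so the output at time t
    can be trained to predict activity at t + 1 without peeking ahead."""
    T, d = X.shape
    scores = X @ X.T / np.sqrt(d)                     # pairwise similarities
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # future positions
    scores[mask] = -np.inf                            # forbid attending ahead
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ X, weights

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))            # 5 time steps, 8-dimensional tokens
out, A = causal_self_attention(X)      # A is strictly lower-triangular + diagonal
```

Note that the attention matrix `A` is exactly the object the later func2graph slides inspect as a candidate readout of connectivity.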
Lightweight downstream supervised learning
Classifier 1
Classifier 2
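"Lightweight" here means the pretrained per-neuron embeddings are frozen and only a small readout is fit per task. As a sketch, a nearest-centroid rule stands in for whatever small supervised head is used (the choice of classifier and all data below are illustrative, not from the paper):

```python
import numpy as np

def fit_nearest_centroid(embeddings, labels):
    """Fit a minimal downstream classifier on frozen embeddings:
    one centroid per class (e.g. E vs. I, or inhibitory subclass)."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(classes, centroids, embeddings):
    """Assign each embedding to the class of its nearest centroid."""
    d = ((embeddings[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Synthetic stand-in for pretrained embeddings of two cell classes.
rng = np.random.default_rng(2)
emb = np.concatenate([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
lab = np.array([0] * 20 + [1] * 20)
classes, cents = fit_nearest_centroid(emb, lab)
acc = (predict(classes, cents, emb) == lab).mean()
```

The point of the design is that if the self-supervised embeddings already separate cell types, even a readout this simple suffices, which is what makes the approach viable in data-limited label regimes.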
Lightweight downstream supervised learning
Regressor 1
Regressor 2
Baselines
Schneider et al., 2023
Self-supervised NeuPRINT demonstrates SOTA accuracy in data-limited scenarios
Bugeon et al., 2022
Transformer outperforms other implicit dynamical models
Permutation-invariant summary of population dynamics enhances the time-invariant representation
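A permutation-invariant summary is what resolves the earlier problem that the number and ordering of recorded neurons differ across experiments: order-independent statistics of the population look the same no matter how the neurons are shuffled. A minimal sketch, assuming mean and standard deviation across neurons as the summary statistics (the actual summary used in NeuPRINT may differ):

```python
import numpy as np

def population_summary(X):
    """Permutation-invariant summary of population activity X
    (neurons x time): per-time-step mean and std across neurons.
    Any reordering or resampling of the neuron axis yields the
    same summary, so downstream models transfer across recordings."""
    return np.stack([X.mean(axis=0), X.std(axis=0)], axis=-1)  # (T, 2)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 100))       # 50 neurons, 100 time steps
perm = rng.permutation(50)
s1 = population_summary(X)
s2 = population_summary(X[perm])     # identical under any neuron ordering
```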
NeuPRINT demonstrates robustness across visual stimulus settings
NeuPRINT demonstrates robustness across mice
TRAIN → TEST
Inhibitory subclass, E vs. I
Limitations
Enhancing the identifiability and interpretability of computational modeling with mechanistic understanding
Wuwei Zhang, Trung Le, Eli Shlizerman, Hao Wang, Uygar Sümbül, Lu Mi. Self-Attention Represents Functional Connectivity in a Network-Model of Population Dynamics (func2graph), under review
In-silico simulation model
connectivity W
Campagnola et al., 2022
Patch Clamp
In-silico simulation model
x(t+1) = φ(W x(t) + b) + ε(t)
- x(t+1), x(t): population activities at times t + 1 and t
- φ: nonlinearity
- W: connectivity weight
- b: baseline
- ε(t): Gaussian noise at time t
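The in-silico model on this slide can be simulated in a few lines. A sketch assuming tanh as the (otherwise unspecified) nonlinearity; the particular parameter scales below are illustrative choices, not values from the paper:

```python
import numpy as np

def simulate(W, b, T, noise_std=0.1, seed=0):
    """Simulate the in-silico population model
        x(t+1) = tanh(W x(t) + b) + eps(t)
    with connectivity W, baseline b, and Gaussian noise eps(t).
    Returns a (T, n) array of population activities."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    X = np.zeros((T, n))
    for t in range(T - 1):
        X[t + 1] = np.tanh(W @ X[t] + b) + rng.normal(0, noise_std, n)
    return X

rng = np.random.default_rng(4)
n = 20
W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # random connectivity (stand-in)
b = rng.normal(0, 0.1, n)                   # baseline
X = simulate(W, b, T=500)
```

Because the ground-truth `W` is known by construction, simulations like this are what allow the later slides to ask whether the transformer's self-attention actually recovers connectivity.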
Addressing non-identifiability
Addressing non-identifiability
Self-attention in the transformer (after a global linear transformation) vs. ground-truth connectivity
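The comparison on this slide, matching attention to connectivity up to one global linear transformation, can be sketched as a least-squares fit over all matrix entries followed by a correlation score. The synthetic attention matrix below is a hypothetical stand-in for a trained model's attention:

```python
import numpy as np

def linear_fit_correlation(A, W):
    """Fit a single global linear map a*A + c to the ground-truth
    connectivity W (least squares over all entries), then report the
    Pearson correlation between the transformed attention and W."""
    a_flat, w_flat = A.ravel(), W.ravel()
    design = np.stack([a_flat, np.ones_like(a_flat)], axis=1)
    coef, *_ = np.linalg.lstsq(design, w_flat, rcond=None)
    fitted = design @ coef
    return np.corrcoef(fitted, w_flat)[0, 1]

rng = np.random.default_rng(5)
W = rng.normal(size=(30, 30))                       # hypothetical ground-truth connectivity
A = 2.0 * W + 1.0 + rng.normal(0, 0.1, W.shape)     # attention ~ linear in W (synthetic)
r = linear_fit_correlation(A, W)
```

A single global transformation (rather than per-entry fitting) is what makes the claim meaningful: the attention pattern itself, not a flexible post-hoc mapping, must carry the connectivity structure.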
Test on mouse visual cortex recording
Self-attention in the transformer (after a global linear transformation) fits the functional connectivity from Campagnola et al., 2022, on recordings from Bugeon et al., 2022
Comparison with other baselines and ablations
Takeaway: a transformer with self-supervised learning can be used as an identity and connectivity learner
Future Directions
Uygar Sümbül, Trung Le, Wuwei Zhang, Tianxing He, Hao Wang, Eli Shlizerman
Thank you