1 of 34

Identification of cell types and connectivity from in-vivo activity recordings

Lu Mi

Shanahan Foundation Fellow

Allen Institute University of Washington

2 of 34

Large-scale profiling of neural activity


https://github.com/sharminpathan/neuron-finding-in-calcium-imaging

http://www.steinmetzlab.net/

alleninstitute.org |

3 of 34

Single-cell RNA sequencing puts forth a very different view


  • Not hard to profile millions of neurons with scRNA-seq

Tasic et al., Nature, 2018


4 of 34

A molecularly defined and spatially resolved cell atlas


Zhang et al., bioRxiv, 2023


5 of 34

From spatial transcriptomics to functional activity



Towards a mechanistic understanding:

  • How is this activity generated?

  • What is the role of connectivity?

  • Do different cell types have different roles?


6 of 34

From spatial transcriptomics to function


Bugeon et al., 2022

  • In vivo pan-neuronal calcium imaging + post-hoc spatial transcriptomic identification


7 of 34

A way forward: post-hoc spatial transcriptomics


Activity 🡪 Gene expression

LOLCAT

Schneider et al., Cell Reports, 2023

  • Post-hoc profiling is costly (time, effort, money)
  • Goal: in vivo identification of cell types


8 of 34

Challenges


    • Time-invariant cell types vs. time-varying dynamics
      • Trial-to-trial stochasticity
      • Unknown cell-to-cell interactions
      • Diverse experimental stimuli and behaviors
      • Varied number of neurons recorded per session


9 of 34

Methodology


  • Generalized and scalable framework to study multi-modal neural data: population recordings, spatial transcriptomics, and animal behavior
    • Learning Time-Invariant Representations for Individual Neurons from Population Dynamics (func2type), NeurIPS 2023

  • Enhancing the identifiability and interpretability of computational modeling with mechanistic understanding
    • Self-Attention Represents Functional Connectivity in a Network-Model of Population Dynamics (func2graph), under review

Uygar Sümbül

@ Allen Institute

Trung Le

PhD Student @ UW

Wuwei Zhang

Master's Student @ UW

Shanahan Postbac RA


10 of 34

Generalized and scalable framework to study multi-modal neural data


Lu Mi*, Trung Le*, Tianxing He, Eli Shlizerman, Uygar Sümbül. Learning Time-Invariant Representations for Individual Neurons from Population Dynamics (func2type), NeurIPS 2023. (*equal contribution)

An implicit model of neuronal dynamics:

x_i(t) = f( x_{N(i)}(t), z_i )

where x_i(t) is the activity of neuron i at time t, x_{N(i)}(t) is the activity of the neurons that provide (synaptic or extra-synaptic) input to neuron i at time t, and z_i is the time-invariant representation for neuron i.
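The implicit model can be sketched in code, as a minimal NumPy stand-in: the shapes are hypothetical, and a single linear-plus-tanh layer stands in for the learned model f (which in the paper is a transformer).

```python
import numpy as np

rng = np.random.default_rng(0)

def implicit_step(x_inputs, z_i, W_in, W_z, b):
    """Stand-in for f: predict the activity of neuron i from the
    activity of its input neurons (x_inputs) and its time-invariant
    representation z_i."""
    return np.tanh(x_inputs @ W_in + z_i @ W_z + b)

n_inputs, d_embed = 8, 4
W_in = 0.1 * rng.normal(size=n_inputs)   # weights on input activities
W_z = 0.1 * rng.normal(size=d_embed)     # weights on the embedding
b = 0.0                                  # bias term

x_inputs = rng.normal(size=n_inputs)     # input activities at time t
z_i = rng.normal(size=d_embed)           # embedding of neuron i
x_i = implicit_step(x_inputs, z_i, W_in, W_z, b)
```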


11 of 34

Transferability across experiments and animals

11

x_{N(i)}(t): activity of neurons that provide input to neuron i at time t


Problem: the number and order of recorded neurons vary across sessions and animals

  • compute (permutation-invariant) statistics
  • incorporate indirect readouts of brain state (e.g., pupil diameter, running speed)
  • center-surround partition of the statistics
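The fixed-size, permutation-invariant summary can be sketched as follows (hypothetical center/surround split and statistics; the paper's exact choices may differ):

```python
import numpy as np

def population_summary(activity, center_idx):
    """Permutation-invariant summary (mean, std) of one time step of
    population activity, computed separately for a 'center' group of
    neurons near neuron i and the 'surround' (all remaining neurons).
    Output size is fixed regardless of how many neurons were recorded."""
    center = activity[center_idx]
    surround = np.delete(activity, center_idx)
    return np.array([center.mean(), center.std(),
                     surround.mean(), surround.std()])

rng = np.random.default_rng(1)
activity = rng.normal(size=50)                    # 50 recorded neurons
summary = population_summary(activity, np.arange(10))

# Reordering neurons within a group leaves the summary unchanged.
shuffled = activity.copy()
shuffled[:10] = activity[rng.permutation(10)]
summary_shuffled = population_summary(shuffled, np.arange(10))
```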


12 of 34

Self-supervised learning for pretraining

12

ChatGPT: large language models with self-supervised pretraining

  • predict next token → foundation model f → finetuning → downstream tasks

Vaswani et al., 2017; Devlin et al., 2018


13 of 34

NeuPRINT: a self-supervised representation learning framework



  • Behavioral state readouts: running speed, pupil diameter
  • Learned via prediction loss


14 of 34

Dynamical model: transformer with causal attention



 


15 of 34

Lightweight downstream supervised learning


  • Class (excitatory/inhibitory), subclass (Lamp5, Vip, Pvalb, Sst, Sncg) classifiers
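A lightweight downstream classifier on frozen embeddings can be sketched as follows (synthetic embeddings and a nearest-centroid rule, purely illustrative — the actual classifiers may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated frozen time-invariant embeddings for two classes (E vs. I).
n_per_class, d = 30, 4
z_exc = rng.normal(loc=+1.0, size=(n_per_class, d))
z_inh = rng.normal(loc=-1.0, size=(n_per_class, d))
Z = np.vstack([z_exc, z_inh])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Lightweight classifier: assign each neuron to the nearest class
# centroid in embedding space; no deep network is trained downstream.
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
dists = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
```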



16 of 34

Lightweight downstream supervised learning


  • Future extensions: lightweight downstream supervised learning for fine-grained regression of gene expression



17 of 34

Baselines


  • LOLCAT:
    • End-to-end supervised method
    • Hand-crafted input features
    • No time-invariant representation, leading to a generalization gap across time

Schneider et al., 2023


18 of 34

Self-supervised NeuPRINT demonstrates SOTA accuracy in data-limited scenarios


Bugeon et al., 2022


19 of 34

Transformer outperforms other implicit dynamical models



20 of 34

Permutation-Invariant Summary of Population Dynamics Enhances the Time-invariant Representation



21 of 34

NeuPRINT demonstrates robustness across visual stimulus settings



22 of 34

NeuPRINT demonstrates robustness across mice


Train on one mouse, test on held-out mice: E vs. I and inhibitory-subclass classification


23 of 34

Limitations


  • Highly predictive and generalizable, but…
  • Non-identifiability: no unique solution
  • Low interpretability: time-invariant embedding (GCaMP vs. intrinsic dynamics) & implicit dynamical model (connectivity)
  • Population summary statistics overlook neuron-to-neuron connectivity


24 of 34


Enhancing the identifiability and interpretability of computational modeling with mechanistic understanding

  • Self-supervised learning
  • Dynamical model for individual neurons vs. the neuron population
  • No downstream supervision

Wuwei Zhang, Trung Le, Eli Shlizerman, Hao Wang, Uygar Sümbül, Lu Mi. Self-Attention Represents Functional Connectivity in a Network-Model of Population Dynamics (func2graph), under review


25 of 34

In-silico simulation model


Ground-truth connectivity W from patch-clamp measurements (Campagnola et al., 2022)


26 of 34

In-silico simulation model


x(t+1) = φ( W x(t) + b ) + ε(t)

where φ is the nonlinearity, W the connectivity weight, x(t+1) and x(t) the population activities at times t+1 and t, b the baseline, and ε(t) Gaussian noise at time t.
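The simulator can be sketched directly from these terms; the choice of tanh for the nonlinearity and the exact noise placement are assumptions for illustration.

```python
import numpy as np

def simulate(W, b, x0, n_steps, noise_std=0.1, seed=0):
    """Simulate x(t+1) = phi(W x(t) + b) + noise, with phi = tanh and
    additive Gaussian noise, matching the slide's labels: connectivity
    weight W, baseline b, population activity x(t)."""
    rng = np.random.default_rng(seed)
    xs = [x0]
    for _ in range(n_steps):
        drive = np.tanh(W @ xs[-1] + b)
        xs.append(drive + noise_std * rng.normal(size=x0.shape))
    return np.stack(xs)

n = 20
rng = np.random.default_rng(3)
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # connectivity
b = np.zeros(n)                                      # baseline
traj = simulate(W, b, x0=rng.normal(size=n), n_steps=100)
```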


27 of 34

Addressing non-identifiability


  • Remove the nonlinear value mapping
  • Apply rank constraints, balancing the key dimension against the input dimension to match the neuron dimension
  • Remove the softmax activation from attention (which constrains weights to sum to one)


28 of 34

Addressing non-identifiability


Self-attention in the Transformer (after a global linear transformation) vs. ground-truth connectivity

  • Identifiable up to a global linear transformation
  • Preserves relative signs but not strengths
  • More consistent at the group level than at the individual level


29 of 34

Test on mouse visual cortex recording


Self-attention in the Transformer, fit to recordings from Bugeon et al., 2022 (after a global linear transformation), vs. functional connectivity from Campagnola et al., 2022

  • Identifiable up to a global linear transformation
  • Preserves relative signs but not strengths


30 of 34

Compare with other baselines & ablations


  • Outperforms RNN and Pearson-correlation baselines in both fitting dynamics and recovering connectivity
  • Close to the upper bound (oracle) on the simulator
  • Removing the attention nonlinearity improves both connectivity-recovery accuracy and dynamics fitting
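For reference, the Pearson-correlation baseline amounts to a one-line estimate (a minimal sketch on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(5)
activity = rng.normal(size=(10, 500))   # rows: neurons, cols: time

# Baseline connectivity estimate: pairwise Pearson correlation of the
# activity traces. The estimate is symmetric, so it cannot capture the
# direction of interactions, unlike the fitted attention matrix.
C = np.corrcoef(activity)
```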


31 of 34

Takeaway: a transformer with self-supervised learning can serve as both an identity and a connectivity learner


  • Combine identity and connectivity learning in one model? Yes:

  • Generalized and scalable framework to study multi-modal neural data: integrating population recordings, spatial transcriptomics, and animal behavior

  • Enhancing the identifiability and interpretability of computational modeling with mechanistic understanding


32 of 34

Future Directions


  • Study multiple upcoming functional-activity (two-photon imaging, Neuropixels) and cell-type (spatial transcriptomics, optotagging) datasets at the Allen Institute

  • Generalize across brain regions (motor cortex, visual cortex, midbrain)

  • Generalize across functions and tasks (visual processing, learning, decision making)
    • How do individual-neuron and connectivity representations change across tasks?
    • How do different cell types behave in different tasks?


33 of 34

Uygar Sümbül, Trung Le, Wuwei Zhang, Tianxing He, Hao Wang, Eli Shlizerman

Thank you
