plasticity and dynamics
the three stooges
what’s the big idea? 💡
What does computation?
Rall 1962
Olshausen & Field 2004
Representation as computation
Kindel et al. 2019 JoV
Behavior [in mammals] needs more than single neurons
(ignoring those little bugs and worms…)
Krakauer et al. 2017
Cowley et al. 2023
Representation as dynamics
Cunningham, Yu 2014 Nat. Neuro.
Koay et al. 2022 Neuron
Dynamics as computation
input
output
Driscoll et al. 2022
But how do we get those objects?
plasticity
yet there is currently a gulf between “computation” and “mechanisms”
and do we even have all the information (i.e. rules) we need?
Dim: high or low?
Mixed representation of task variables
Mante, Sussillo et al., 2013
Population responses lie on low-Dim manifold
Mante, Sussillo et al., 2013
Gallego, Perich et al., 2018
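The "low-dimensional manifold" claim can be illustrated with a toy PCA: hypothetical population activity generated from a handful of latent variables, then checked for how much variance a few principal components capture. All numbers and the random mixing below are assumptions for illustration, not data from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: N neurons driven by K latent variables
N, T, K = 100, 500, 3                          # neurons, time points, latents
latents = rng.standard_normal((T, K))          # K latent trajectories
mixing = rng.standard_normal((K, N))           # random readout onto neurons
activity = latents @ mixing + 0.05 * rng.standard_normal((T, N))  # + noise

# PCA: fraction of variance captured by the top K principal components
X = activity - activity.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]   # descending eigenvalues
explained = eigvals[:K].sum() / eigvals.sum()
print(f"variance explained by top {K} PCs: {explained:.3f}")
```

Despite 100 recorded neurons, essentially all the variance lives in 3 dimensions, which is the operational sense of "low-Dim" used in the experimental slides that follow.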
Low-Dim neural activity in experiments
monkey, DMFC
Sohn, Narain et al., 2019
mouse, PPC
Driscoll et al., 2017
Sussillo et al., 2015
monkey, M1
Russo et al., 2020
Low-Dim neural activity: but how did we get there?
> How are these low-dimensional representations learned, and which plasticity rules can give rise to them?
RNN population activity
neural population activity
Mante, Sussillo et al., 2013
Plasticity rules: why do we even care?
I will answer none of them!
Low-rank RNNs: link between low-dim activity & connectivity
low-rank component
rank-1 connectivity:
dynamics:
Mastrogiuseppe, Ostojic, 2018
population activity:
latent/collective variables
latent dynamical system:
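The labels above correspond to the standard rank-1 setup of Mastrogiuseppe & Ostojic (2018). A sketch of the usual form, with symbols m, n, κ, φ following that paper's conventions (the slide's exact normalizations may differ):

```latex
\begin{align}
J_{ij} &= \frac{m_i n_j}{N}
  && \text{(rank-1 connectivity)} \\
\tau \dot{x}_i &= -x_i + \sum_j J_{ij}\,\phi(x_j)
  && \text{(dynamics)} \\
x_i(t) &\approx \kappa(t)\, m_i
  && \text{(population activity)} \\
\tau \dot{\kappa} &= -\kappa + \frac{1}{N}\sum_j n_j\,\phi\!\big(\kappa\, m_j\big)
  && \text{(latent dynamical system)}
\end{align}
```

The key point is the last line: the N-dimensional network collapses to a one-dimensional latent/collective variable κ, which is the promised link between low-dimensional activity and low-rank connectivity.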
Can we find bio-plausible plasticity rules for low-rank RNNs to solve a task?
Learning plasticity rules: the naive approach
Learning plasticity rules: becoming a bit smarter
Pablito
Why low-rank?
Mastrogiuseppe, F., & Ostojic, S. (2018)
Connectivity
Connectivity Matrix = Rank-1 Matrix + Random Matrix
Connectivity (eigenspace)
Low-rank eigenvalue (λ)
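The low-rank eigenvalue λ can be seen numerically: build connectivity as random bulk plus rank-1 structure and look at the spectrum. A minimal sketch, assuming the homogeneous choice m = n = ones for the rank-1 part (so its eigenvalue is exactly λ; the slide's actual vectors may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400
g, lam = 0.5, 2.22   # values from the "Heterogeneous Stationary" slide

# Connectivity Matrix = Random Matrix + Rank-1 Matrix
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
J += lam * np.ones((N, N)) / N

eig = np.linalg.eigvals(J)
idx = np.argmax(eig.real)
outlier = eig[idx].real                          # set by the rank-1 part
bulk_radius = np.abs(np.delete(eig, idx)).max()  # rest fill a disk of radius ~g
print(f"outlier ≈ {outlier:.2f} (λ = {lam}), bulk radius ≈ {bulk_radius:.2f}")
```

The random part produces a disk of eigenvalues of radius ≈ g; the rank-1 part contributes a single outlier near λ, and it is this outlier (vs. the bulk radius g) that separates the four regimes on the next slide.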
Spontaneous dynamics of different network regimes
Heterogeneous Stationary
(g: 0.5, λ: 2.22)
Homogeneous Stationary
(g: 0.5, λ: 0.65)
Homogeneous Chaotic
(g: 2, λ: 0.65)
Heterogeneous Chaotic
(g: 2, λ: 2.22)
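The stationary-vs-chaotic distinction can be reproduced with a direct simulation of rate dynamics under random + rank-1 connectivity. A minimal sketch for the two homogeneous regimes; the all-ones choice for the rank-1 part and the Euler step size are assumptions:

```python
import numpy as np

def simulate(g, lam, N=300, steps=400, dt=0.1, seed=2):
    """Euler-integrate dx/dt = -x + J tanh(x), J = random bulk + rank-1 part."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N) + lam * np.ones((N, N)) / N
    x = rng.standard_normal(N)
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
    return x

x_stat = simulate(g=0.5, lam=0.65)   # Homogeneous Stationary: decays to rest
x_chaos = simulate(g=2.0, lam=0.65)  # Homogeneous Chaotic: sustained activity
print(np.abs(x_stat).max(), np.abs(x_chaos).max())
```

With g and λ both below threshold the activity relaxes to a fixed point; raising g above 1 puts the bulk outside the stability disk and spontaneous chaotic fluctuations persist.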
Plasticity ON
Hebbian Plasticity:
Effect of plasticity in low-rank RNNs
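The qualitative effect can be sketched with the textbook Hebbian rule, ΔJ_ij ∝ r_i r_j, i.e. each update adds a rank-1 outer product of the rates. The specific rule, rates, and learning constants below are assumptions (the talk's exact rule is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
N, eta, dt = 200, 0.1, 0.1

J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # random initial connectivity
x = rng.standard_normal(N)

# Classic Hebb: dJ_ij ∝ r_i r_j — a rank-1 outer-product update each step
for _ in range(500):
    r = np.tanh(x)
    x = x + dt * (-x + J @ r)
    J = J + (eta / N) * np.outer(r, r)

# The accumulated outer products align with the self-sustained activity
# pattern, so one singular value detaches from the random bulk:
# plasticity carves a low-rank component out of random connectivity.
s = np.linalg.svd(J, compute_uv=False)
print(f"top singular values: {s[0]:.2f}, {s[1]:.2f}")
```

Pure Hebb has no normalization, so the emergent low-rank component grows without bound; any biologically usable rule needs decay or homeostatic terms on top of this mechanism.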
Can we actually learn a task?
Naive Solution!
Can we actually learn a task?
Goal: find a plasticity rule which drives the network to a target dynamical state
How:
Simulation Based Inference:
Simulation Based Inference
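One simple flavor of simulation-based inference is rejection ABC: sample plasticity-rule parameters from a prior, simulate, and keep the parameters whose outcome lands near the observed target state. Everything below (the scalar simulator, prior range, tolerance) is a made-up toy standing in for the full network simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(eta):
    """Toy simulator: a scalar latent driven toward 1 at 'plasticity rate' eta;
    stand-in for running the plastic network and measuring a summary statistic."""
    k = 0.0
    for _ in range(50):
        k = k + eta * (1.0 - k)
    return k + 0.01 * rng.standard_normal()  # observation noise

target = simulate(0.05)  # pretend-observed summary statistic (true eta = 0.05)

# Rejection ABC: prior samples -> simulate -> keep near-matches
prior = rng.uniform(0.0, 0.2, size=2000)
sims = np.array([simulate(e) for e in prior])
posterior = prior[np.abs(sims - target) < 0.02]

print(f"posterior mean eta ≈ {posterior.mean():.3f} (true 0.05)")
```

Real SBI pipelines replace the rejection step with a learned density estimator over parameters, but the logic is the same: infer which plasticity rules are consistent with the target dynamical state, without ever writing down a likelihood.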
finito