1 of 37

plasticity and dynamics

the three stooges

2 of 37

what’s the big idea? 💡

3 of 37

What does computation?

Rall 1962

4 of 37

What does computation?

Olshausen & Field 2004

Rall 1962

5 of 37

Representation as computation

Kindel et al. 2019 JoV

6 of 37

Behavior needs more than single neurons

Krakauer et al. 2017

7 of 37

Behavior [in mammals] needs more than single neurons

ignoring those little bugs and worms…

Krakauer et al. 2017

Cowley et al. 2023

8 of 37

Representation as dynamics

Cunningham, Yu 2014 Nat. Neuro.

Koay et al. 2022 Neuron

9 of 37

Dynamics as computation

input

output

Driscoll et al. 2022

10 of 37

But how do we get those objects?

11 of 37

plasticity

12 of 37

But how do we get those objects?

yet there is currently a gulf between “computation” and “mechanisms”

and do we even have all the information (i.e. rules) we need?

13 of 37

Dim: high or low?

14 of 37

Mixed representation of task variables

Mante, Sussillo et al., 2013

15 of 37

Population responses lie on low-Dim manifold

Mante, Sussillo et al., 2013

Gallego, Perich et al., 2018

16 of 37

Low-Dim neural activity in experiments

monkey, DMFC

Sohn, Narain et al., 2019

mouse, PPC

Driscoll et al., 2017

Sussillo et al., 2015

monkey, M1

Russo et al., 2020

17 of 37

Low-Dim neural activity: but how did we get there?

> How are these low-dimensional representations learned, and which plasticity rules can give rise to them?

  • RNNs trained on the same task variables learn low-dimensional representations similar to those of neural populations. BUT:
  • RNNs are trained with gradient descent, i.e. they use global information for weight updates
  • Biology prefers local learning rules, i.e. rules that only use information available at the synapse
  • The relationship between connectivity and activity is not clear

RNN population activity

neural population activity

Mante, Sussillo et al., 2013

18 of 37

Plasticity rules: why do we even care ?

  • Do different rules result in the same learned representations?
  • Are different rules more favourable for learning different tasks?
  • Do they suffice to learn tasks, or is some third factor required?
  • Do they even matter?
  • If we find a rule that learns a task, what are we going to do with it?
  • Does a single synapse always grow according to the same rule?
  • ?

I will answer none of them!

19 of 37

Low-rank RNNs: link between low-dim activity & connectivity

rank-1 connectivity (the low-rank component): $J_{ij} = \frac{m_i n_j}{N}$

dynamics: $\tau\,\dot{x}_i = -x_i + \sum_j J_{ij}\,\phi(x_j)$

population activity: $x_i(t) \approx \kappa(t)\, m_i$, where $\kappa$ is the latent/collective variable

latent dynamical system: $\tau\,\dot{\kappa} = -\kappa + \frac{1}{N}\sum_i n_i\,\phi(\kappa\, m_i)$

Mastrogiuseppe, Ostojic, 2018
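A minimal NumPy sketch of this setup (the vectors m, n and all parameter values below are illustrative choices, not the ones from the paper): simulate a rank-1 network and read out the latent variable κ by projecting the activity onto m.

```python
import numpy as np

# Minimal rank-1 RNN; m, n and all parameters are illustrative choices.
rng = np.random.default_rng(0)
N, steps, dt, tau = 500, 2000, 0.1, 1.0

m = rng.standard_normal(N)                # output direction of the rank-1 structure
n = 2.0 * m + rng.standard_normal(N)      # selection vector, overlapping with m
J = np.outer(m, n) / N                    # rank-1 connectivity J_ij = m_i n_j / N

x = 0.1 * rng.standard_normal(N)
kappa = []
for _ in range(steps):
    x += dt / tau * (-x + J @ np.tanh(x))  # network dynamics
    kappa.append(m @ x / (m @ m))          # latent variable: projection onto m

print("final latent variable kappa:", kappa[-1])
```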

20 of 37

Can we find bio-plausible plasticity rules for low-rank RNNs to solve a task?

  • Assume a stationary plasticity process, same for all network interactions
  • Take your favourite family of rules
  • Pick your favourite task
  • Identify plasticity rules within the selected family for learning the task
  • But how?

21 of 37

Learning plasticity rules: the naive approach

  • Try to learn what gradient descent learns
  • Approximate the learning rule in a single step
  • We can use our favorite function approximation/regression/parameter inference method (see the sketch below), but we will get nothing out of it!
  • There are two coupled dynamical processes, and the one that interests us (the plasticity dynamics) evolves on a much slower timescale than the neuronal dynamics, while the neural dynamics are highly non-stationary -> simple averaging of the rates will not do it
  • We need to get smarter!
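For concreteness, a toy version of the naive approach (the data and the "true" rule below are invented purely for illustration): regress observed weight changes on time-averaged pre/post rate statistics with least squares.

```python
import numpy as np

# Toy version of the naive approach: least-squares fit of observed weight
# changes against pre/post rate features. Synthetic data, invented rule.
rng = np.random.default_rng(1)
n_syn = 1000
r_pre = rng.uniform(0.0, 1.0, n_syn)      # time-averaged presynaptic rates
r_post = rng.uniform(0.0, 1.0, n_syn)     # time-averaged postsynaptic rates
dW = 0.5 * r_pre * r_post - 0.1 * r_pre + 0.05 * rng.standard_normal(n_syn)

# Candidate features: [1, r_pre, r_post, r_pre * r_post]
X = np.column_stack([np.ones(n_syn), r_pre, r_post, r_pre * r_post])
coef, *_ = np.linalg.lstsq(X, dW, rcond=None)
print("fitted coefficients:", coef)
# With real data the rates are non-stationary and coupled to the weights,
# so this kind of simple averaging breaks down.
```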

22 of 37

Learning plasticity rules: becoming a bit smarter

  • More sophisticated strategy: Simulation Based Inference
  • Bayesian inference with intractable likelihood

23 of 37

Pablito

24 of 37

Why low-rank?

  • Task-tuned neural dynamics constrained to low-dimensional manifolds
  • Low-Rank RNNs provide a framework to produce low-dimensional dynamics
  • Analytical mapping from connectivity to dynamics (Mean-Field Theory)
  • RNNs show compositionality in multi-task learning under back-prop
  • Still open: the relationship between plasticity and connectivity (and, implicitly, dynamics)

Mastrogiuseppe, F., & Ostojic, S. (2018)

25 of 37

Connectivity

Rank-1 matrix + random matrix = connectivity matrix
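A small sketch of this decomposition (g and the vectors m, n are arbitrary choices for illustration):

```python
import numpy as np

# Connectivity = rank-1 matrix + random matrix.
rng = np.random.default_rng(0)
N, g = 1000, 0.5

m = rng.standard_normal(N)
n = rng.standard_normal(N)
P = np.outer(m, n) / N                               # rank-1 matrix
chi = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random matrix
J = P + chi                                          # full connectivity matrix

print("rank of the structured part:", np.linalg.matrix_rank(P))
```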

26 of 37

Connectivity (eigenspace)

Low-rank eigenvalue (λ)
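A sketch of the eigenspace picture: the random part fills a bulk of radius roughly g, while the rank-1 part adds an outlier eigenvalue λ ≈ nᵀm / N. The overlap value 2.22 below is only chosen to mimic the λ quoted on the following slides.

```python
import numpy as np

# Random bulk of radius ~ g plus a rank-1 outlier at lambda ~ (n . m) / N.
rng = np.random.default_rng(0)
N, g = 1000, 0.5

m = rng.standard_normal(N)
n = 2.22 * m + rng.standard_normal(N)
J = np.outer(m, n) / N + g * rng.standard_normal((N, N)) / np.sqrt(N)

eigvals = np.linalg.eigvals(J)
print("predicted outlier lambda:", n @ m / N)
print("largest eigenvalue magnitude:", np.abs(eigvals).max())
```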

27 of 37

Spontaneous dynamics of different network regimes

Heterogeneous Stationary

(g: 0.5, λ: 2.22)

Homogeneous Stationary

(g: 0.5, λ: 0.65)

Homogeneous Chaotic

(g: 2, λ: 0.65)

Heterogeneous Chaotic

(g: 2, λ: 2.22)
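A rough way to reproduce this sweep numerically; the network construction and the activity summary below are illustrative choices, not the exact setup shown on the slide.

```python
import numpy as np

def spontaneous_activity(g, lam, N=500, steps=3000, dt=0.1, seed=0):
    """Simulate spontaneous dynamics of a rank-1 + random network and return
    a crude activity summary (spread of the final firing rates)."""
    rng = np.random.default_rng(seed)
    m = rng.standard_normal(N)
    n = lam * m                       # sets the rank-1 eigenvalue close to lam
    J = np.outer(m, n) / N + g * rng.standard_normal((N, N)) / np.sqrt(N)
    x = 0.1 * rng.standard_normal(N)
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))
    return np.std(np.tanh(x))

for g, lam in [(0.5, 0.65), (0.5, 2.22), (2.0, 0.65), (2.0, 2.22)]:
    print(f"g = {g}, lambda = {lam}: rate spread = {spontaneous_activity(g, lam):.3f}")
```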

28 of 37

Spontaneous dynamics of different network regimes

Heterogeneous Stationary

(g: 0.5, λ: 2.22)

Homogeneous Stationary

(g: 0.5, λ: 0.65)

Homogeneous Chaotic

(g: 2, λ: 0.65)

Heterogeneous Chaotic

(g: 2, λ: 2.22)

29 of 37

Plasticity ON

Hebbian Plasticity: $\Delta J_{ij} \propto \phi(x_i)\,\phi(x_j)$
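A toy sketch of switching the rule on (learning rate, duration and network regime are arbitrary choices); it also checks how concentrated the accumulated weight update is across directions, which connects to the next slide.

```python
import numpy as np

# Hebbian plasticity switched on in a rank-1 + random network.
rng = np.random.default_rng(0)
N, g, dt, eta = 500, 2.0, 0.1, 5e-5

m, n = rng.standard_normal(N), rng.standard_normal(N)
J0 = np.outer(m, n) / N + g * rng.standard_normal((N, N)) / np.sqrt(N)
J = J0.copy()
x = 0.1 * rng.standard_normal(N)

for _ in range(1000):
    r = np.tanh(x)               # firing rates (fast neural dynamics)
    x += dt * (-x + J @ r)
    J += eta * np.outer(r, r)    # Hebbian update (slow plasticity dynamics)

# How concentrated is the accumulated weight update across directions?
s = np.linalg.svd(J - J0, compute_uv=False)
participation_ratio = (s**2).sum()**2 / (s**4).sum()
print(f"effective rank of the update: {participation_ratio:.1f} out of {N}")
```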

30 of 37

Effect of plasticity in low-rank RNNs

  • Connectivity updates are low-rank
  • Moderate weight updates lead to completely different dynamics
  • Low-dimensional dynamics (even in chaotic regimes)
  • Different plasticity rules find different update directions
  • External inputs influence plasticity, leading to changes in connectivity

31 of 37

Can we actually learn a task?

  • Hebbian learning is “task-incapable” (no error signal)
  • Find a plasticity rule $\Delta J_{ij} = f_\theta(\cdot)$, parametrized by $\theta$, s.t. the network solves the task

32 of 37

Can we actually learn a task?

  • Hebbian learning is “task-incapable” (no error signal)
  • Find a plasticity rule $\Delta J_{ij} = f_\theta(\cdot)$, parametrized by $\theta$, s.t. the network solves the task

Naive Solution!

33 of 37

Can we actually learn a task?

Goal:

Find a plasticity rule which drives the network to a target dynamical state

$\Delta J_{ij} = f_\theta(\cdot)$

34 of 37

Can we actually learn a task?

Goal:

Find a plasticity rule which drives the network to a target dynamical state

How:

  1. Train a network with gradient-based learning to solve a task; its dynamics define the target state
  2. Find $\theta$ such that the plasticity rule $\Delta J_{ij} = f_\theta(\cdot)$ drives the network to that state (see the sketch below)
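One concrete way to picture the parametrized family f_θ (the particular terms below, pre/post/product/weight-decay, are an assumption for illustration, not necessarily the rule family used in this work):

```python
import numpy as np

# Illustrative parametrized family of local plasticity rules.
def plasticity_update(theta, r, J):
    """Delta_J for rule parameters theta = (a, b, c, d), using only local
    quantities: post/pre rates and the current weight."""
    a, b, c, d = theta
    post, pre = r[:, None], r[None, :]
    return a * post * pre + b * post + c * pre + d * J

# Example of one plasticity step inside a simulation:
rng = np.random.default_rng(0)
N = 200
J = rng.standard_normal((N, N)) / np.sqrt(N)
r = np.tanh(rng.standard_normal(N))
J += 1e-3 * plasticity_update((1.0, 0.0, 0.0, -0.1), r, J)
```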

35 of 37

Simulation Based Inference:

  • Prior: $p(\theta)$ over the parameters of the plasticity rule

  • Simulator: $\theta \mapsto x$, i.e. run the plastic network under rule $\theta$ and record its final dynamics

  • Posterior estimation: Neural Posterior Estimation (NPE)

36 of 37

Simulation Based Inference

  1. Draw N samples from the prior $p(\theta)$, each of which defines a candidate plasticity rule
  2. Run N simulations and store the final dynamics
  3. Train NPE to build the posterior $p(\theta \mid x)$ (see the sketch after this list)
  4. Infer $\theta$ from the target dynamics and compare the resulting dynamics with the target

37 of 37

finito