Presenter | Date | Paper | Paper link
Group discussion | 4/22/2021 | Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously | https://arxiv.org/abs/2104.01490
 | 4/29/2021 | Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility | https://arxiv.org/abs/2104.01489
Group discussion | 5/6/2021 | If deep learning is the answer, what is the question? | https://www.nature.com/articles/s41583-020-00395-8
Amirozhan | 5/13/2021 | Performance-optimized hierarchical models predict neural responses in higher visual cortex | https://www.pnas.org/content/111/23/8619
Helen | 5/20/2021 | Project presentation |
Maxime | 6/3/2021 | Neural Turing Machines | https://arxiv.org/abs/1410.5401
Motahareh | 6/10/2021 | Task representations in neural networks trained to perform many cognitive tasks | https://www.nature.com/articles/s41593-018-0310-2
Xiaoxuan | 6/17/2021 | Is Activity Silent Working Memory Simply Episodic Memory? | https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(21)00005-X
Helen | 6/24/2021 | Neural population control via deep image synthesis | https://www.science.org/doi/10.1126/science.aav9436
Amirozhan | 7/1/2021 | Emergent organization of multiple visuotopic maps without a feature hierarchy | https://www.biorxiv.org/content/10.1101/2021.01.05.425426v1.full.pdf
Maxime | 7/8/2021 | Self-supervised learning through the eyes of a child | https://arxiv.org/abs/2007.16189
Takuya | 7/15/2021 | Attention Is All You Need | https://arxiv.org/abs/1706.03762
Maxime | 7/22/2021 | Generalization of Reinforcement Learners with Working and Episodic Memory | https://papers.nips.cc/paper/2019/hash/02ed812220b0705fabb868ddbf17ea20-Abstract.html
Xiaoxuan | 7/29/2021 | Decision Transformer: Reinforcement Learning via Sequence Modeling | https://arxiv.org/abs/2106.01345
Motahareh | 8/5/2021 | Cortical information flow during flexible sensorimotor decisions | https://science.sciencemag.org/content/348/6241/1352.abstract
Amirozhan | 8/12/2021 | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | https://arxiv.org/pdf/1803.03635.pdf
Helen | 8/19/2021 | Do Adversarially Robust ImageNet Models Transfer Better? | https://arxiv.org/abs/2007.08489
Motahareh | 8/26/2021 | Low-dimensional dynamics for working memory and time encoding | https://www.pnas.org/content/117/37/23021
 | 9/2/2021 | |
Xiaoxuan | 9/10/2021 | Cancelled |
(Guest) Pravish Sainath | 9/17/2021 | Recurrent models of the n-back task |
Amirozhan | 9/24/2021 | meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting | https://arxiv.org/pdf/1706.06197v5.pdf
Maxime | 10/1/2021 | The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation | https://www.sciencedirect.com/science/article/pii/S009286742031388X
Xiaoxuan | 10/8/2021 | Elucidating the neural mechanisms of Learning-to-Learn | https://www.biorxiv.org/content/10.1101/2021.09.02.455707v1
Guest (Yalda Mohsenzadeh) | 10/15/2021 | |
Motahareh | 10/22/2021 | Circuit mechanisms for the maintenance and manipulation of information in working memory | https://doi.org/10.1038/s41593-019-0414-3
Helen | 10/29/2021 | Adversarial Weight Perturbation Helps Robust Generalization | https://arxiv.org/abs/2004.05884
Amirozhan | 11/05/2021 | Beyond category-supervision: Computational support for domain-general pressures guiding human visual system representation | https://www.biorxiv.org/content/10.1101/2020.06.15.153247v3
Maxime | 11/12/2021 | When to retrieve and encode episodic memories: a neural network model of hippocampal-cortical interaction | https://www.biorxiv.org/content/10.1101/2020.12.15.422882v2
Xiaoxuan | 11/19/2021 | Neural knowledge assembly in humans and deep networks | https://www.biorxiv.org/content/10.1101/2021.10.21.465374v1
Helen | 11/26/2021 | Increasing neural network robustness improves match to macaque V1 eigenspectrum, spatial frequency preference and predictivity | https://www.biorxiv.org/content/10.1101/2021.06.29.450334v1.abstract
Xiaoxuan | 12/3/2021 | Lifelong Learning of Compositional Structures | https://arxiv.org/abs/2007.07732
Motahareh | 12/10/2021 | A backward progression of attentional effects in the ventral stream | https://www.pnas.org/content/107/1/361
Maxime | 12/17/2021 | Are place cells just memory cells? | https://www.biorxiv.org/content/10.1101/624239v3
Amirozhan | 01/13/2022 | Principles governing the topological organization of object selectivities in ventral temporal cortex | https://www.biorxiv.org/content/10.1101/2021.09.15.460220v1.full#F1
Maxime | 01/20/2022 | Adaptive posterior learning: Few-shot learning with a surprise-based memory module | https://openreview.net/pdf?id=ByeSdsC9Km
Xiaoxuan | 01/27/2022 | A dopamine gradient controls access to distributed working memory in the large-scale monkey cortex | https://doi.org/10.1016/j.neuron.2021.08.024
Motahareh | 02/03/2022 | The proprioceptive representation of eye position in monkey primary somatosensory cortex | https://www.nature.com/articles/nn1878
Mark | 02/10/2022 | A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex | https://www.pnas.org/content/pnas/117/47/29872.full.pdf
Amirozhan | 02/17/2022 | Cancelled |
Maxime | 02/24/2022 | Rapid task-solving in novel environments | https://arxiv.org/pdf/2006.03662.pdf
Xiaoxuan | 03/03/2022 | Probing variability in a cognitive map using manifold inference from neural dynamics | https://www.biorxiv.org/content/10.1101/418939v2
Motahareh | 03/10/2022 | How biological attention mechanisms improve task performance in a large-scale visual system model | https://elifesciences.org/articles/38105
Mark | 03/24/2022 | COG2 Environment |
Helen | 03/31/2022 | COSYNE poster |
Amirozhan | 04/07/2022 | Topographic deep artificial neural networks reproduce the hallmarks of the primate inferior temporal cortex face processing network | https://www.biorxiv.org/content/10.1101/2020.07.09.185116v1.full.pdf
Maxime | 04/14/2022 | Context-dependent representations of objects and space in the primate hippocampus during virtual navigation | https://www.nature.com/articles/s41593-019-0548-3
Xiaoxuan | 04/21/2022 | Determinants of human compositional generalization | https://psyarxiv.com/qnpw6
Motahareh | 04/28/2022 | Towards the next generation of recurrent network models for cognitive neuroscience | https://www.sciencedirect.com/science/article/pii/S0959438821001276
Mark | 05/05/2022 | Comparing continual task learning in minds and machines | https://www.pnas.org/doi/10.1073/pnas.1800755115
Xiaoxuan/Maxime | 05/12/2022 | GSD poster presentations |
Amirozhan | 05/19/2022 | A map of object space in primate inferotemporal cortex | https://www.nature.com/articles/s41586-020-2350-5
 | 05/26/2022 | Cancelled |
Maxime | 06/02/2022 | Metalearned Neural Memory | https://proceedings.neurips.cc/paper/2019/file/182bd81ea25270b7d1c2fe8353d17fe6-Paper.pdf
Xiaoxuan | 06/09/2022 | The geometry of domain-general performance monitoring in the human medial frontal cortex | https://www.science.org/doi/10.1126/science.abm9922
Motahareh | 06/16/2022 | Learning to combine top-down and bottom-up signals in Recurrent Neural Networks with Attention over Modules | https://arxiv.org/abs/2006.16981
Mark | 06/23/2022 | Compositional Attention: Disentangling Search and Retrieval | https://arxiv.org/pdf/2110.09419.pdf
Amirozhan | 06/30/2022 | Cortical response to naturalistic stimuli is largely predictable with deep neural networks | https://www.science.org/doi/10.1126/sciadv.abe7547
Tugce | 07/07/2022 | Invariant neural subspaces maintained by feedback modulation | https://elifesciences.org/articles/76096
Maxime | 07/14/2022 | Hypernetworks + Dynamic Predictive Coding | https://www.biorxiv.org/content/biorxiv/early/2022/06/24/2022.06.23.497415.full.pdf
Motahareh | 07/21/2022 | Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases | https://proceedings.neurips.cc/paper/2021/hash/37f0e884fbad9667e38940169d0a3c95-Abstract.html
Dr. Patrik Bey | 07/28/2022 | Research presentation: the whole-brain network modelling platform The Virtual Brain, some of the speaker's latest research with it, and potential future projects bridging the gap between modelling and learning |
Xiaoxuan | 08/04/2022 | Cancelled |
Mark | 08/11/2022 | Self-healing codes: How stable neural populations can track continually reconfiguring neural representations | https://www.pnas.org/doi/epdf/10.1073/pnas.2106692119
Amirozhan | 08/18/2022 | Cancelled |
Tugce | 08/25/2022 | Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments | https://arxiv.org/abs/2201.00042
Maxime | 09/01/2022 | A model of egocentric to allocentric understanding in mammalian brains | https://www.biorxiv.org/content/10.1101/2020.11.11.378141v2
Xiaoxuan | 09/06/2022 | Understanding deep learning requires rethinking generalization |
Matthew Riemer | 09/13/2022 | Research presentation: Continual Learning in Reinforcement Learning |
Amirozhan | 09/20/2022 | CORnet: Modeling the Neural Mechanisms of Core Object Recognition | https://www.biorxiv.org/content/10.1101/408385v1.full.pdf
Maxime | 09/27/2022 | Emergent Symbols through Binding in External Memory | https://openreview.net/pdf?id=LSFCEb3GYU7
Xiaoxuan | 10/04/2022 | From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction | https://arxiv.org/abs/1912.06207
Motahareh | 10/11/2022 | Cancelled |
Motahareh | 10/18/2022 | The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning | https://www.biorxiv.org/content/10.1101/2021.06.18.448989v3.full
Maxime | 10/25/2022 | Overparameterized neural networks implement associative memory | https://www.pnas.org/doi/10.1073/pnas.2005013117
Xiaoxuan | 11/01/2022 | Metrics for Deep Generative Models | https://arxiv.org/abs/1711.01204
Motahareh | 11/08/2022 | Research discussion |
Lucas Gomez | 11/15/2022 | Building Transformers from Neurons and Astrocytes | https://www.biorxiv.org/content/10.1101/2022.10.12.511910v1
Mingze Li | 11/22/2022 | Research discussion |
Maxime | 11/29/2022 | Memorizing Transformers | https://openreview.net/forum?id=TrjbxzRcnf-
Discussion (led by Andrew) | 12/06/2022 | Meta Learning Backpropagation and Improving It | https://arxiv.org/abs/2012.14905
Xiaoxuan | 12/20/2022 | Cancelled |
 | 12/27/2022 | Christmas |
 | 01/03/2023 | New Year |
Xiaoxuan | 01/10/2023 | Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion | https://www.nature.com/articles/s42256-022-00498-0
Motahareh | 01/17/2023 | Efficient inverse graphics in biological face processing | https://www.science.org/doi/10.1126/sciadv.aax5979
Dr. Nathan Kong | 01/24/2023 | Research presentation: Adversarial robustness of computational models of visual cortex |

Abstract: Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations, known as adversarial perturbations, can cause a CNN to make incorrect predictions. Here we suggest three properties that may underlie their brittleness: population response dimensionality, spatial frequency preference, and temporally discontinuous training inputs. Theory suggests that a system's tolerance to these perturbations is related to the power-law exponent (i.e., decay rate) of the eigenspectrum of its set of neural responses, where exponents closer to and larger than one indicate a system that is more tolerant to input perturbations. We find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology, and that robust models have eigenspectra that decay slightly faster, with higher power-law exponents, than those of non-robust models. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion preferred high spatial frequencies, and that robust models had preferred spatial frequency distributions more closely aligned with the distribution measured in macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Motivated by work in visual development showing that temporally continuous visual experience improves view-invariant object recognition, we trained models on SAYCam, a large dataset of videos collected from the perspective of infants, and evaluated their adversarial robustness. We found that models trained on SAYCam were more robust than those trained on ImageNet, and that incorporating temporal information further improved robustness.
Overall, although CNNs are the state-of-the-art models of ventral visual processing, their brittle nature suggests that there is room for improvement and that one could potentially take inspiration from biology to improve their robustness.
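The eigenspectrum power-law exponent discussed in the abstract above can be estimated directly from a stimulus-by-unit response matrix. The sketch below is an illustrative, hypothetical implementation, not the speaker's code: the function name, fit range, and synthetic data are assumptions. It centers the responses, obtains the covariance eigenspectrum via SVD, and fits a log-log line over a range of eigenvalue ranks; eigenvalues decaying as n^(-alpha) give slope -alpha.

```python
import numpy as np

def eigenspectrum_power_law_exponent(responses, fit_range=(1, 100)):
    """Estimate the power-law decay exponent alpha of the covariance
    eigenspectrum of a response matrix, so that eigenvalue_n ~ n**(-alpha).

    responses: (n_stimuli, n_units) array of neural or model activations.
    fit_range: inclusive (low, high) eigenvalue ranks used for the fit.
    """
    X = responses - responses.mean(axis=0, keepdims=True)  # center each unit
    # Covariance eigenvalues = squared singular values / (n_stimuli - 1),
    # returned by SVD already sorted in descending order.
    eigvals = np.linalg.svd(X, compute_uv=False) ** 2 / (X.shape[0] - 1)
    lo, hi = fit_range
    hi = min(hi, eigvals.size)
    ranks = np.arange(lo, hi + 1)
    lam = eigvals[lo - 1:hi]
    slope, _ = np.polyfit(np.log(ranks), np.log(lam), 1)  # log-log linear fit
    return -slope

# Synthetic check: independent units whose variances decay as n^(-1)
# should yield an estimated exponent close to 1.
rng = np.random.default_rng(0)
n_stim, n_units = 2000, 200
target_vars = np.arange(1, n_units + 1) ** -1.0
X = rng.standard_normal((n_stim, n_units)) * np.sqrt(target_vars)
print(eigenspectrum_power_law_exponent(X, (1, 100)))
```

Per the theory cited in the abstract, exponents near or above one would indicate a representation more tolerant to input perturbations, so this quantity can be compared across robust and non-robust models.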
Maxime | 01/31/2023 | The Forward-Forward Algorithm (by Geoffrey Hinton) | https://arxiv.org/abs/2212.13345
Xiaoxuan | 02/07/2023 | Modelling human behaviour in cognitive tasks with latent dynamical systems | https://www.nature.com/articles/s41562-022-01510-8
Motahareh | 02/14/2023 | Experimenting with Theoretical Motor Neuroscience | https://web.mit.edu/ajemian/www/Ajemian_Hogan_Falsification.pdf
Maxime | 02/21/2023 | Why can GPT learn in-context? Language models secretly perform gradient descent as meta-optimizers | https://arxiv.org/abs/2212.10559
Motahareh | 02/28/2023 | Abstract representations emerge naturally in neural networks trained to perform multiple tasks | https://www.nature.com/articles/s41467-023-36583-0
Xiaoxuan | 03/07/2023 | Cancelled |
Cosyne Workshops | 03/14/2023 | Cancelled |
Discussion | 03/21/2023 | Cosyne poster & talk discussion |
Ozhan | 03/28/2023 | Motor cortex signals for each arm are mixed across hemispheres and neurons yet partitioned within the population response | https://elifesciences.org/articles/46159
Xiaoxuan | 04/04/2023 | Working memory control dynamics follow principles of spatial computing | https://www.nature.com/articles/s41467-023-36555-4
Motahareh | 04/11/2023 | Research presentation: Modeling Visual Search in Humans |
Maxime | 04/18/2023 | Mock presentation for the PhD candidacy exam |