# | Presenter | Date | Paper | Paper link
---|---|---|---|---
2 | Group discussion | 4/22/2021 | Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously | https://arxiv.org/abs/2104.01490
3 | | 4/29/2021 | Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility | https://arxiv.org/abs/2104.01489
4 | Group discussion | 5/6/2021 | If deep learning is the answer, what is the question? | https://www.nature.com/articles/s41583-020-00395-8
5 | Amirozhan | 5/13/2021 | Performance-optimized hierarchical models predict neural responses in higher visual cortex | https://www.pnas.org/content/111/23/8619
6 | Helen | 5/20/2021 | Project presentation |
7 | Maxime | 6/3/2021 | Neural Turing Machines | https://arxiv.org/abs/1410.5401
8 | Motahareh | 6/10/2021 | Task representations in neural networks trained to perform many cognitive tasks | https://www.nature.com/articles/s41593-018-0310-2
9 | Xiaoxuan | 6/17/2021 | Is Activity Silent Working Memory Simply Episodic Memory? | https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(21)00005-X
10 | Helen | 6/24/2021 | Neural population control via deep image synthesis | https://www.science.org/doi/10.1126/science.aav9436
11 | Amirozhan | 7/1/2021 | Emergent organization of multiple visuotopic maps without a feature hierarchy | https://www.biorxiv.org/content/10.1101/2021.01.05.425426v1.full.pdf
12 | Maxime | 7/8/2021 | Self-supervised learning through the eyes of a child | https://arxiv.org/abs/2007.16189
13 | Takuya | 7/15/2021 | Attention is all you need | https://arxiv.org/abs/1706.03762
14 | Maxime | 7/22/2021 | Generalization of Reinforcement Learners with Working and Episodic Memory | https://papers.nips.cc/paper/2019/hash/02ed812220b0705fabb868ddbf17ea20-Abstract.html
15 | Xiaoxuan | 7/29/2021 | Decision Transformer: Reinforcement Learning via Sequence Modeling | https://arxiv.org/abs/2106.01345
16 | Motahareh | 8/5/2021 | Cortical information flow during flexible sensorimotor decisions | https://science.sciencemag.org/content/348/6241/1352.abstract
17 | Amirozhan | 8/12/2021 | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | https://arxiv.org/pdf/1803.03635.pdf
18 | Helen | 8/19/2021 | Do Adversarially Robust ImageNet Models Transfer Better? | https://arxiv.org/abs/2007.08489
19 | Motahareh | 8/26/2021 | Low-dimensional dynamics for working memory and time encoding | https://www.pnas.org/content/117/37/23021
20 | | 9/2/2021 | |
21 | Xiaoxuan | 9/10/2021 | Cancelled |
22 | (Guest) Pravish Sainath | 9/17/2021 | Recurrent models of n-back task |
23 | Amirozhan | 9/24/2021 | meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting | https://arxiv.org/pdf/1706.06197v5.pdf
24 | Maxime | 10/1/2021 | Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation | https://www.sciencedirect.com/science/article/pii/S009286742031388X
25 | Xiaoxuan | 10/8/2021 | Elucidating the neural mechanisms of Learning-to-Learn | https://www.biorxiv.org/content/10.1101/2021.09.02.455707v1
26 | Guest (Yalda Mohsenzadeh) | 10/15/2021 | |
27 | Motahareh | 10/22/2021 | Circuit mechanisms for the maintenance and manipulation of information in working memory | https://doi.org/10.1038/s41593-019-0414-3
28 | Helen | 10/29/2021 | Adversarial Weight Perturbation Helps Robust Generalization | https://arxiv.org/abs/2004.05884
29 | Amirozhan | 11/05/2021 | Beyond category-supervision: Computational support for domain-general pressures guiding human visual system representation | https://www.biorxiv.org/content/10.1101/2020.06.15.153247v3
30 | Maxime | 11/12/2021 | When to retrieve and encode episodic memories: a neural network model of hippocampal-cortical interaction | https://www.biorxiv.org/content/10.1101/2020.12.15.422882v2
31 | Xiaoxuan | 11/19/2021 | Neural knowledge assembly in humans and deep networks | https://www.biorxiv.org/content/10.1101/2021.10.21.465374v1
32 | Helen | 11/26/2021 | Increasing neural network robustness improves match to macaque V1 eigenspectrum, spatial frequency preference and predictivity | https://www.biorxiv.org/content/10.1101/2021.06.29.450334v1.abstract
33 | Xiaoxuan | 12/3/2021 | Lifelong Learning of Compositional Structures | https://arxiv.org/abs/2007.07732
34 | Motahareh | 12/10/2021 | A backward progression of attentional effects in the ventral stream | https://www.pnas.org/content/107/1/361
35 | Maxime | 12/17/2021 | Are place cells just memory cells? | https://www.biorxiv.org/content/10.1101/624239v3
36 | Amirozhan | 01/13/2022 | Principles governing the topological organization of object selectivities in ventral temporal cortex | https://www.biorxiv.org/content/10.1101/2021.09.15.460220v1.full
37 | Maxime | 01/20/2022 | Adaptive posterior learning: Few-shot learning with a surprise-based memory module | https://openreview.net/pdf?id=ByeSdsC9Km
38 | Xiaoxuan | 01/27/2022 | A dopamine gradient controls access to distributed working memory in the large-scale monkey cortex | https://doi.org/10.1016/j.neuron.2021.08.024
39 | Motahareh | 02/03/2022 | The proprioceptive representation of eye position in monkey primary somatosensory cortex | https://www.nature.com/articles/nn1878
40 | Mark | 02/10/2022 | A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex | https://www.pnas.org/content/pnas/117/47/29872.full.pdf
41 | Amirozhan | 02/17/2022 | Cancelled |
42 | Maxime | 02/24/2022 | Rapid task-solving in novel environments | https://arxiv.org/pdf/2006.03662.pdf
43 | Xiaoxuan | 03/03/2022 | Probing variability in a cognitive map using manifold inference from neural dynamics | https://www.biorxiv.org/content/10.1101/418939v2
44 | Motahareh | 03/10/2022 | How biological attention mechanisms improve task performance in a large-scale visual system model | https://elifesciences.org/articles/38105
45 | Mark | 03/24/2022 | COG2 Environment |
46 | Helen | 03/31/2022 | COSYNE poster |
47 | Amirozhan | 04/07/2022 | Topographic deep artificial neural networks reproduce the hallmarks of the primate inferior temporal cortex face processing network | https://www.biorxiv.org/content/10.1101/2020.07.09.185116v1.full.pdf
48 | Maxime | 04/14/2022 | Context-dependent representations of objects and space in the primate hippocampus during virtual navigation | https://www.nature.com/articles/s41593-019-0548-3
49 | Xiaoxuan | 04/21/2022 | Determinants of human compositional generalization | https://psyarxiv.com/qnpw6
50 | Motahareh | 04/28/2022 | Towards the next generation of recurrent network models for cognitive neuroscience | https://www.sciencedirect.com/science/article/pii/S0959438821001276
51 | Mark | 05/05/2022 | Comparing continual task learning in minds and machines | https://www.pnas.org/doi/10.1073/pnas.1800755115
52 | Xiaoxuan/Maxime | 05/12/2022 | GSD poster presentations |
53 | Amirozhan | 05/19/2022 | A map of object space in primate inferotemporal cortex | https://www.nature.com/articles/s41586-020-2350-5
54 | | 05/26/2022 | Cancelled |
55 | Maxime | 06/02/2022 | Metalearned Neural Memory | https://proceedings.neurips.cc/paper/2019/file/182bd81ea25270b7d1c2fe8353d17fe6-Paper.pdf
56 | Xiaoxuan | 06/09/2022 | The geometry of domain-general performance monitoring in the human medial frontal cortex | https://www.science.org/doi/10.1126/science.abm9922
57 | Motahareh | 06/16/2022 | Learning to combine top-down and bottom-up signals in Recurrent Neural Networks with Attention over Modules | https://arxiv.org/abs/2006.16981
58 | Mark | 06/23/2022 | Compositional Attention: Disentangling Search and Retrieval | https://arxiv.org/pdf/2110.09419.pdf
59 | Amirozhan | 06/30/2022 | Cortical response to naturalistic stimuli is largely predictable with deep neural networks | https://www.science.org/doi/10.1126/sciadv.abe7547
60 | Tugce | 07/07/2022 | Invariant neural subspaces maintained by feedback modulation | https://elifesciences.org/articles/76096
61 | Maxime | 07/14/2022 | Hypernetworks + Dynamic Predictive Coding | https://www.biorxiv.org/content/biorxiv/early/2022/06/24/2022.06.23.497415.full.pdf
62 | Motahareh | 07/21/2022 | Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases | https://proceedings.neurips.cc/paper/2021/hash/37f0e884fbad9667e38940169d0a3c95-Abstract.html
63 | Dr. Patrik Bey | 07/28/2022 | The whole-brain network modelling platform The Virtual Brain, some of the speaker's latest research on it, and potential future projects bridging the gap between modelling and learning | Research Presentation
64 | Xiaoxuan | 08/04/2022 | Cancelled |
65 | Mark | 08/11/2022 | Self-healing codes: How stable neural populations can track continually reconfiguring neural representations | https://www.pnas.org/doi/epdf/10.1073/pnas.2106692119
66 | Amirozhan | 08/18/2022 | Cancelled |
67 | Tugce | 08/25/2022 | Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments | https://arxiv.org/abs/2201.00042
68 | Maxime | 09/01/2022 | A model of egocentric to allocentric understanding in mammalian brains | https://www.biorxiv.org/content/10.1101/2020.11.11.378141v2
69 | Xiaoxuan | 09/06/2022 | Understanding deep learning requires rethinking generalization | https://arxiv.org/abs/1611.03530
70 | Matthew Riemer | 09/13/2022 | Continual Learning in Reinforcement Learning | Research Presentation
71 | Amirozhan | 09/20/2022 | CORnet: Modeling the Neural Mechanisms of Core Object Recognition | https://www.biorxiv.org/content/10.1101/408385v1.full.pdf
72 | Maxime | 09/27/2022 | Emergent Symbols through Binding in External Memory | https://openreview.net/pdf?id=LSFCEb3GYU7
73 | Xiaoxuan | 10/04/2022 | From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction | https://arxiv.org/abs/1912.06207
74 | Motahareh | 10/11/2022 | Cancelled |
75 | Motahareh | 10/18/2022 | The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning | https://www.biorxiv.org/content/10.1101/2021.06.18.448989v3.full
76 | Maxime | 10/25/2022 | Overparameterized neural networks implement associative memory | https://www.pnas.org/doi/10.1073/pnas.2005013117
77 | Xiaoxuan | 11/01/2022 | Metrics for deep generative models | https://arxiv.org/abs/1711.01204
78 | Motahareh | 11/08/2022 | Research Discussion |
79 | Lucas Gomez | 11/15/2022 | Building Transformers from Neurons and Astrocytes | https://www.biorxiv.org/content/10.1101/2022.10.12.511910v1
80 | Mingze Li | 11/22/2022 | Research Discussion |
81 | Maxime | 11/29/2022 | Memorizing Transformers | https://openreview.net/forum?id=TrjbxzRcnf-
82 | Discussion (led by Andrew) | 12/06/2022 | Meta Learning Backpropagation And Improving It | https://arxiv.org/abs/2012.14905
83 | Xiaoxuan | 12/20/2022 | Cancelled |
84 | | 12/27/2022 | Christmas |
85 | | 01/03/2023 | New Year |
86 | Xiaoxuan | 01/10/2023 | Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion | https://www.nature.com/articles/s42256-022-00498-0
87 | Motahareh | 01/17/2023 | Efficient inverse graphics in biological face processing | https://www.science.org/doi/10.1126/sciadv.aax5979
88 | Dr. Nathan Kong | 01/24/2023 | Adversarial robustness of computational models of visual cortex | Abstract: Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations, known as adversarial perturbations, can cause a CNN to make incorrect predictions. Here we suggest three properties that possibly lead to their brittleness: population response dimensionality, spatial frequency preference, and temporally-discontinuous training inputs. Theory suggests that the tolerance of a system to these perturbations could be related to the power law exponent (i.e., decay rate) of the eigenspectrum of its set of neural responses, where power law exponents closer to and larger than one would indicate a system that is more tolerant to input perturbations. We find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology and that robust models have eigenspectra that decay slightly faster and have higher power law exponents than those of non-robust models. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies and that robust models had preferred spatial frequency distributions more aligned with the spatial frequency distribution measured in macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Motivated by work in visual development showing that temporally-continuous visual experience improves view-invariant object recognition, we trained models on SAYCam, a large dataset of videos collected from the perspective of infants, and evaluated their adversarial robustness. We found that models trained on SAYCam were more robust than those trained on ImageNet and that incorporating temporal information further improved robustness. Overall, although CNNs are the state-of-the-art models of ventral visual processing, their brittle nature suggests that there is room for improvement and that one could potentially take inspiration from biology to improve their robustness.
89 | Maxime | 01/31/2023 | The forward-forward algorithm by Geoffrey Hinton | https://arxiv.org/abs/2212.13345
90 | Xiaoxuan | 02/07/2023 | Modelling human behaviour in cognitive tasks with latent dynamical systems | https://www.nature.com/articles/s41562-022-01510-8
91 | Motahareh | 02/14/2023 | Experimenting with Theoretical Motor Neuroscience | https://web.mit.edu/ajemian/www/Ajemian_Hogan_Falsification.pdf
92 | Maxime | 02/21/2023 | Why can GPT learn in-context? Language models secretly perform gradient descent as meta-optimizers | https://arxiv.org/abs/2212.10559
93 | Motahareh | 02/28/2023 | Abstract representations emerge naturally in neural networks trained to perform multiple tasks | https://www.nature.com/articles/s41467-023-36583-0
94 | Xiaoxuan | 03/07/2023 | Cancelled |
95 | Cosyne Workshops | 03/14/2023 | Cancelled |
96 | Discussion | 03/21/2023 | Cosyne Poster & Talk Discussion |
97 | Ozhan | 03/28/2023 | Motor cortex signals for each arm are mixed across hemispheres and neurons yet partitioned within the population response | https://elifesciences.org/articles/46159
98 | Xiaoxuan | 04/04/2023 | Working memory control dynamics follow principles of spatial computing | https://www.nature.com/articles/s41467-023-36555-4
99 | Motahareh | 04/11/2023 | Research Presentation: Modeling Visual Search in Humans |
100 | Maxime | 04/18/2023 | Mock Presentation for the PhD Candidacy Exam |