Pruning deep neural networks for lottery tickets (LTs)

| Presenter | Summary & Questions | Topic | Main paper | Secondary paper |
| --- | --- | --- | --- | --- |
| Shreyash | Pramila | 1) Iterative Magnitude Pruning (sketched below) | On the Predictability of Pruning Across Scales | Comparing Rewinding and Fine-tuning in Neural Network Pruning |
| | | 2) Learning rate | Comparing Rewinding and Fine-tuning in Neural Network Pruning | Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? |
| | | 3) Parameter initialization | A Signal Propagation Perspective for Pruning Neural Networks at Initialization | Robust Pruning at Initialization |
| Amrita | Nirav, Josephine Taniha | 4) Universal LTs | One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers | Universality of Winning Tickets: A Renormalization Group Perspective; On The Existence of Universal Lottery Tickets |
| Josephine Taniha | Pramila, Shreyash | 5) Sanity checking | Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? | Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot |
| Yamini | Siddhartha, Amrita | 6) Random pruning (baseline sketched below) | Are wider nets better given the same number of parameters? | The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training |
| | | 7) Dynamic sparse training | Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training |
| Rahul | Adarsh, Yamini | 8) Gradient flow | Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win | Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? |
| Adarsh | Siddhartha, Amrita | 9) Neuron sparsity | Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets | When to Prune? A Policy towards Early Structural Pruning |
| Pramila | Nirav, Josephine Taniha | 10) Pruning for strong lottery tickets | What's Hidden in a Randomly Weighted Neural Network? | Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask |
| Arthur Sanin | Rahul, Severin | 11) Theory: existence of LTs | Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient | Proving the Lottery Ticket Hypothesis: Pruning is All You Need |
| | | 12) Benchmarking | Plant 'n' Seek: Can You Find the Winning Ticket? | Pruning Neural Networks at Initialization: Why Are We Missing the Mark? |
| Severin | Rahul, Arthur Sanin | 13) Quantization | Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness |
| | | 14) Pruning with regularization | Winning the Lottery with Continuous Sparsification | Effective Sparsification of Neural Networks with Global Sparsity Constraint |
| Nirav | Arthur Sanin, Severin | 15) Core sets | Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks | Efficient Lottery Ticket Finding: Less Data is More |
| Siddhartha | Shreyash, Yamini | 16) Adversarial robustness | Can You Win Everything with A Lottery Ticket? | HYDRA: Pruning Adversarially Robust Neural Networks |
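The schedule's opening topic, iterative magnitude pruning (IMP), is the procedure most of the later sessions build on: train the dense network, prune the smallest-magnitude weights, rewind the surviving weights to their earlier values, and repeat. Below is a minimal sketch, assuming a PyTorch setup; the toy two-layer model, random data, prune fraction, and round count are illustrative placeholders, not the settings of any paper above.

```python
# Minimal IMP-with-rewinding sketch. Model, data, and hyperparameters
# are illustrative placeholders, not the setup of any listed paper.
import copy
import torch
import torch.nn as nn

def apply_masks(model, masks):
    """Zero out pruned weights in place."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

def train(model, masks, steps=100):
    """Placeholder training loop on random data; the mask is re-applied
    after every update so pruned weights stay at zero."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(32, 20)            # stand-in for a real batch
        y = torch.randint(0, 2, (32,))
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        apply_masks(model, masks)

def prune_by_magnitude(model, masks, fraction=0.2):
    """Remove the smallest-magnitude `fraction` of the still-unpruned
    weights, layer by layer (one common IMP variant)."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            surviving = p[masks[name].bool()].abs()
            k = int(fraction * surviving.numel())
            if k == 0:
                continue
            threshold = surviving.kthvalue(k).values
            masks[name] *= (p.abs() > threshold).float()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
rewind_state = copy.deepcopy(model.state_dict())     # weights to rewind to
masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
         if p.dim() > 1}                             # prune weight matrices only

for _ in range(5):                                   # prune/rewind rounds
    train(model, masks)
    prune_by_magnitude(model, masks, fraction=0.2)
    model.load_state_dict(rewind_state)              # rewind surviving weights
    apply_masks(model, masks)                        # keep the sparse pattern
train(model, masks)                                  # train the final ticket
```

Note the rewind step: the original lottery ticket procedure rewinds to the weights at initialization, while Comparing Rewinding and Fine-tuning in Neural Network Pruning rewinds to weights saved early in training, which in this sketch would just mean capturing `rewind_state` after a few epochs rather than at step 0.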
Additional topics:

| Presenter | Summary & Questions | Topic | Main paper |
| --- | --- | --- | --- |
| Siwen Chen | Adarsh | GNNs | A Unified Lottery Ticket Hypothesis for Graph Neural Networks |
| | | Probabilistic approach | A Probabilistic Approach to Neural Network Pruning |
| | | Sample complexity | Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks |
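For contrast, the random-pruning baseline from topic 6 skips mask selection entirely: draw a random mask per layer at the target sparsity and train the resulting sparse network from scratch. A minimal sketch under the same assumptions as the IMP code above; the uniform 80% layer-wise sparsity is an arbitrary illustrative choice, and the listed papers compare several layer-wise allocation schemes.

```python
# Random-pruning baseline: draw a random mask per layer at a target
# sparsity and train from scratch. The sparsity level and the uniform
# layer-wise allocation are illustrative, not from any specific paper.
import torch
import torch.nn as nn

def random_masks(model, sparsity=0.8):
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                    # prune weight matrices only
            scores = torch.rand_like(p)    # random score per weight
            k = int(sparsity * p.numel())  # number of weights to remove
            threshold = scores.flatten().kthvalue(k).values
            masks[name] = (scores > threshold).float()
    return masks

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
masks = random_masks(model, sparsity=0.8)
# ...then train exactly as in the IMP sketch, re-applying `masks` after
# every optimizer step; no rewinding or iterative pruning is needed.
```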