Pruning deep neural networks for lottery tickets (LTs)
Preference 1 | Preference 2 | Preference 3 | Topic | Main paper | Secondary paper
Shreyash | | | 1) Iterative Magnitude Pruning (sketched below the table) | On the Predictability of Pruning Across Scales | Comparing Rewinding and Fine-tuning in Neural Network Pruning
| | | 2) Learning rate | Comparing Rewinding and Fine-tuning in Neural Network Pruning | Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
Yamini | | | 3) Parameter initialization | A Signal Propagation Perspective for Pruning Neural Networks at Initialization | Robust Pruning at Initialization
Amrita, Josephine Taniha, Adarsh | Nirav, Pramila | | 4) Universal LTs | One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers | Universality of Winning Tickets: A Renormalization Group Perspective; On The Existence of Universal Lottery Tickets
Shreyash, Pramila | Amrita, Josephine Taniha | Siddhartha | 5) Sanity checking | Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? | Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Yamini | | | 6) Random pruning | Are wider nets better given the same number of parameters? | The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
| | | 7) Dynamic sparse training | Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training
Rahul | Yamini, Arthur Sanin, Adarsh | | 8) Gradient flow | Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win | Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?
Siddhartha | Adarsh | | 9) Neuron sparsity | Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets | When to Prune? A Policy towards Early Structural Pruning
Nirav | Josephine Taniha, Pramila | | 10) Pruning for strong lottery tickets | What's Hidden in a Randomly Weighted Neural Network? | Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
Arthur Sanin | Rahul | | 11) Theory - Existence of LTs | Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient | Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Severin | Rahul | | 12) Benchmarking | Plant 'n' Seek: Can You Find the Winning Ticket? | Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Severin | | | 13) Quantization | Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness
Severin | | | 14) Pruning with regularization | Winning the Lottery with Continuous Sparsification | Effective Sparsification of Neural Networks with Global Sparsity Constraint
Siwen Chen | Arthur Sanin | Nirav | 15) Core sets | Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks | Efficient Lottery Ticket Finding: Less Data is More
Siddhartha | Shreyash | | 16) Adversarial robustness | Can You Win Everything with A Lottery Ticket? | HYDRA: Pruning Adversarially Robust Neural Networks
Siwen Chen | | | GNNs | A Unified Lottery Ticket Hypothesis for Graph Neural Networks |
Siwen Chen | | | Prob. approach | A Probabilistic Approach to Neural Network Pruning |
| | | Sample complexity | Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks |
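For reference, the core loop behind topic 1, and the baseline most of the other topics build on or compare against, is iterative magnitude pruning (IMP) with weight rewinding. The sketch below is a minimal illustration under assumptions, not any of the listed papers' exact recipe: `Model`, `train`, the round count, and the per-round pruning rate are hypothetical placeholders.

```python
# Minimal sketch of iterative magnitude pruning with weight rewinding.
# `model` and `train` are assumed placeholders: `model.weights` is a list of
# numpy arrays, and `train(model, masks)` trains while keeping masked weights at 0.
import copy
import numpy as np

def global_magnitude_mask(weights, sparsity):
    """Keep the globally largest-magnitude weights; zero out the rest."""
    flat = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(flat, sparsity)  # e.g. sparsity=0.36 drops the bottom 36%
    return [(np.abs(w) > threshold).astype(w.dtype) for w in weights]

def iterative_magnitude_pruning(model, train, rounds=5, prune_per_round=0.2):
    """One IMP loop: train -> prune by magnitude -> rewind to init -> repeat."""
    init_weights = copy.deepcopy(model.weights)  # theta_0, reused for rewinding
    masks = [np.ones_like(w) for w in model.weights]
    sparsity = 0.0
    for _ in range(rounds):
        train(model, masks)  # placeholder training step under the current mask
        sparsity = 1.0 - (1.0 - sparsity) * (1.0 - prune_per_round)  # compound rate
        masks = global_magnitude_mask(model.weights, sparsity)
        # Rewind surviving weights to their initial values: mask + init = the ticket.
        model.weights = [w0 * m for w0, m in zip(init_weights, masks)]
    return masks, model.weights
```

Rewinding to the original initialization theta_0 is the original lottery ticket procedure; rewinding to weights from an early training iteration instead, as studied in "Comparing Rewinding and Fine-tuning in Neural Network Pruning" (topics 1 and 2 above), is a one-line change to what `init_weights` stores.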