| ID | Topic | Paper Title | Link |
| 1 | GNNs | Exphormer: Sparse Transformers for Graphs | https://arxiv.org/abs/2303.06147 |
| 4 | GNNs | Do Transformers Really Perform Bad for Graph Representation? | https://arxiv.org/abs/2106.05234 |
| 5 | GNNs | PlanE: Representation Learning over Planar Graphs | https://arxiv.org/abs/2307.01180 |
| 8 | Video Representation Learning | Conditional Object-Centric Learning from Video | https://arxiv.org/abs/2111.12594 |
| 9 | Multi-modal Tuning | MaPLe: Multi-modal Prompt Learning | https://arxiv.org/abs/2210.03117 |
| 11 | Audio | High-Fidelity Audio Compression with Improved RVQGAN | https://arxiv.org/pdf/2306.06546.pdf |
| 13 | Audio | From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion | https://arxiv.org/abs/2308.02560 |
| 15 | NLP | pNLP-Mixer: an Efficient all-MLP Architecture for Language | https://arxiv.org/abs/2202.04350 |
| 17 | Quantization | TernaryBERT: Distillation-aware Ultra-low Bit BERT | https://arxiv.org/pdf/2009.12812.pdf |
| 18 | Quantization | BitNet: Scaling 1-bit Transformers for Large Language Models | https://arxiv.org/pdf/2310.11453.pdf |
| 19 | Compression | Variational Image Compression with a Scale Hyperprior | https://arxiv.org/pdf/1802.01436.pdf |
| 20 | Compression | Nonlinear Transform Coding | https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9242247 |
| 21 | Compression | An Improved Upper Bound on the Rate-Distortion Function of Images | https://arxiv.org/pdf/2309.02574.pdf |
| 22 | Compression | Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding | https://arxiv.org/pdf/2010.01185.pdf |
| 23 | Compression | The Unreasonable Effectiveness of Deep Features as a Perceptual Metric | https://arxiv.org/pdf/1801.03924.pdf |
| 24 | Compression | Practical Lossless Compression with Latent Variables Using Bits Back Coding | https://arxiv.org/pdf/1901.04866.pdf |
| 26 | Compression | Improving Inference for Neural Image Compression | https://proceedings.neurips.cc/paper/2020/file/066f182b787111ed4cb65ed437f0855b-Paper.pdf |
| 27 | Representation Learning | Flow Factorized Representation Learning | https://arxiv.org/pdf/2309.13167.pdf |
| 29 | Representation Learning | Interventional Causal Representation Learning | https://proceedings.mlr.press/v202/ahuja23a.html |
| 30 | Representation Learning | Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles | https://arxiv.org/abs/2306.00989 |
| 31 | Representation Learning | HiViT: A Simpler and More Efficient Design of Hierarchical Vision Transformer | https://openreview.net/forum?id=3F6I-0-57SC |
| 32 | Architecture | OptNet: Differentiable Optimization as a Layer in Neural Networks | https://arxiv.org/abs/1703.00443 |
| 33 | Architecture | Neural Ordinary Differential Equations | https://proceedings.neurips.cc/paper_files/paper/2018/hash/69386f6bb1dfed68692a24c8686939b9-Abstract.html |
| 34 | Optimizer | AdaHessian: An Adaptive Second Order Optimizer for Machine Learning | https://ojs.aaai.org/index.php/AAAI/article/view/17275 |
| 37 | RL | Behavior Alignment via Reward Function Optimization | https://arxiv.org/abs/2310.19007 |
| 39 | OOD Generalization | Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization | https://arxiv.org/abs/2109.02934 |
| 40 | General ML | Git Re-Basin: Merging Models modulo Permutation Symmetries | https://arxiv.org/abs/2209.04836 |
| 41 | GNNs | WL meet VC | https://arxiv.org/abs/2301.11039 |
| 42 | GNNs | Exploring the Power of Graph Neural Networks in Solving Linear Optimization Problems | https://arxiv.org/abs/2310.10603 |
| 43 | GNNs | On the Markov Property of Neural Algorithmic Reasoning: Analyses and Methods | https://openreview.net/forum?id=Kn7tWhuetn |
| 44 | Normalization | Rethinking "Batch" in BatchNorm | https://arxiv.org/abs/2105.07576 |
| 45 | CNNs | A ConvNet for the 2020s | https://arxiv.org/abs/2201.03545 |