# | Title | Mean | Std | Scores | Decision 1 | Decision 2 | ✓ |
1 | Sequential Causal Imitation Learning with Unobserved Confounders | 7.00 | 0.71 | 8, 7, 7, 6 | Oral | Poster | ✔ |
2 | Sequential Causal Imitation Learning with Unobserved Confounders | 7.40 | 0.80 | 9, 7, 7, 7, 7 | Oral | Oral | ✔ |
3 | Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification | 8.00 | 0.71 | 8, 7, 9, 8 | Oral | Oral | ✔ |
4 | Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification | 7.33 | 0.47 | 8, 7, 7 | Oral | Poster | ✔ |
5 | Learning Frequency Domain Approximation for Binary Neural Networks | 7.00 | 0.82 | 6, 7, 8 | Oral | Poster | ✔ |
6 | Learning Frequency Domain Approximation for Binary Neural Networks | 7.75 | 0.43 | 7, 8, 8, 8 | Oral | Oral | ✔ |
7 | Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons | 6.67 | 2.62 | 3, 8, 9 | Oral | Poster | ✔ |
8 | Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons | 8.00 | 0.71 | 9, 8, 7, 8 | Oral | Oral | ✔ |
9 | Causal Identification with Matrix Equations | 6.00 | 0.00 | 6, 6, 6, 6 | Oral | Poster | ✔ |
10 | Causal Identification with Matrix Equations | 7.00 | 0.82 | 7, 6, 8 | Oral | Oral | ✔ |
11 | Variational Inference for Continuous-Time Switching Dynamical Systems | 7.25 | 0.43 | 7, 7, 7, 8 | Spotlight | Spotlight | ✔ |
12 | Variational Inference for Continuous-Time Switching Dynamical Systems | 6.50 | 0.87 | 8, 6, 6, 6 | Spotlight | Poster | ✔ |
13 | Statistical Query Lower Bounds for List-Decodable Linear Regression | 7.50 | 0.50 | 8, 8, 7, 7 | Spotlight | Spotlight | ✔ |
14 | Statistical Query Lower Bounds for List-Decodable Linear Regression | 7.75 | 0.43 | 8, 8, 8, 7 | Spotlight | Spotlight | ✔ |
15 | Sliced Mutual Information: A Scalable Measure of Statistical Dependence | 7.00 | 0.71 | 8, 7, 7, 6 | Spotlight | Spotlight | ✔ |
16 | Sliced Mutual Information: A Scalable Measure of Statistical Dependence | 6.00 | 1.22 | 7, 6, 7, 4 | Spotlight | Poster | ✔ |
17 | Sequence-to-Sequence Learning with Latent Neural Grammars | 6.75 | 0.83 | 6, 7, 8, 6 | Spotlight | Poster | ✔ |
18 | Sequence-to-Sequence Learning with Latent Neural Grammars | 7.50 | 0.87 | 9, 7, 7, 7 | Spotlight | Spotlight | ✔ |
19 | Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation | 5.25 | 0.43 | 5, 5, 6, 5 | Spotlight | Reject | ✔ |
20 | Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation | 6.75 | 0.43 | 7, 7, 6, 7 | Spotlight | Spotlight | ✔ |
21 | Program Synthesis Guided Reinforcement Learning for Partially Observed Environments | 4.80 | 1.17 | 5, 4, 4, 4, 7 | Spotlight | Reject | ✔ |
22 | Program Synthesis Guided Reinforcement Learning for Partially Observed Environments | 6.25 | 1.30 | 7, 7, 4, 7 | Spotlight | Spotlight | ✔ |
23 | On the Value of Infinite Gradients in Variational Autoencoder Models | 7.33 | 0.47 | 8, 7, 7 | Spotlight | Spotlight | ✔ |
24 | On the Value of Infinite Gradients in Variational Autoencoder Models | 5.75 | 0.43 | 6, 5, 6, 6 | Spotlight | Reject | ✔ |
25 | On the Power of Differentiable Learning versus PAC and SQ Learning | 7.00 | 0.00 | 7, 7, 7 | Spotlight | Spotlight | ✔ |
26 | On the Power of Differentiable Learning versus PAC and SQ Learning | 7.00 | 0.00 | 7, 7, 7, 7 | Spotlight | Poster | ✔ |
27 | Offline RL Without Off-Policy Evaluation | 6.25 | 1.92 | 8, 7, 7, 3 | Spotlight | Poster | ✔ |
28 | Offline RL Without Off-Policy Evaluation | 7.75 | 0.83 | 8, 9, 7, 7 | Spotlight | Spotlight | ✔ |
29 | Learning with Holographic Reduced Representations | 6.67 | 0.47 | 7, 6, 7 | Spotlight | Poster | ✔ |
30 | Learning with Holographic Reduced Representations | 7.00 | 0.71 | 7, 6, 7, 8 | Spotlight | Spotlight | ✔ |
31 | Learning Generalized Gumbel-max Causal Mechanisms | 6.50 | 0.87 | 5, 7, 7, 7 | Spotlight | Spotlight | ✔ |
32 | Learning Generalized Gumbel-max Causal Mechanisms | 4.33 | 0.94 | 5, 5, 3 | Spotlight | Reject | ✔ |
33 | Learning Disentangled Behavior Embeddings | 5.00 | 0.82 | 5, 6, 4 | Spotlight | Reject | ✔ |
34 | Learning Disentangled Behavior Embeddings | 7.25 | 0.43 | 7, 7, 7, 8 | Spotlight | Spotlight | ✔ |
35 | Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks | 6.67 | 0.47 | 7, 6, 7 | Spotlight | Spotlight | ✔ |
36 | Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks | 5.50 | 1.50 | 7, 6, 6, 3 | Spotlight | Poster | ✔ |
37 | Forster Decomposition and Learning Halfspaces with Noise | 7.25 | 0.83 | 8, 8, 6, 7 | Spotlight | Spotlight | ✔ |
38 | Forster Decomposition and Learning Halfspaces with Noise | 6.25 | 0.43 | 6, 6, 6, 7 | Spotlight | Poster | ✔ |
39 | Early-stopped neural networks are consistent | 6.25 | 0.43 | 7, 6, 6, 6 | Spotlight | Spotlight | ✔ |
40 | Early-stopped neural networks are consistent | 7.00 | 0.89 | 6, 8, 8, 6, 7 | Spotlight | Spotlight | ✔ |
41 | Diffusion Models Beat GANs on Image Synthesis | 7.50 | 1.12 | 9, 8, 7, 6 | Spotlight | Spotlight | ✔ |
42 | Diffusion Models Beat GANs on Image Synthesis | 7.50 | 0.87 | 9, 7, 7, 7 | Spotlight | Spotlight | ✔ |
43 | Coresets for Decision Trees of Signals | 6.50 | 1.50 | 8, 7, 7, 4 | Spotlight | Oral | ✔ |
44 | Coresets for Decision Trees of Signals | 6.67 | 0.47 | 7, 7, 6 | Spotlight | Poster | ✔ |
45 | Continuous vs. Discrete Optimization of Deep Neural Networks | 5.00 | 1.58 | 6, 7, 3, 4 | Spotlight | Reject | ✔ |
46 | Continuous vs. Discrete Optimization of Deep Neural Networks | 7.00 | 0.63 | 8, 7, 7, 6, 7 | Spotlight | Spotlight | ✔ |
47 | Collaborating with Humans without Human Data | 6.00 | 1.41 | 5, 8, 5 | Spotlight | Reject | ✔ |
48 | Collaborating with Humans without Human Data | 7.25 | 1.48 | 7, 8, 5, 9 | Spotlight | Spotlight | ✔ |
49 | Clustering Effect of Adversarial Robust Models | 6.00 | 0.82 | 6, 7, 5 | Spotlight | Poster | ✔ |
50 | Clustering Effect of Adversarial Robust Models | 6.33 | 0.47 | 6, 7, 6 | Spotlight | Spotlight | ✔ |
51 | Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems | 6.25 | 0.43 | 6, 6, 6, 7 | Spotlight | Poster | ✔ |
52 | Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems | 7.75 | 0.43 | 7, 8, 8, 8 | Spotlight | Spotlight | ✔ |
53 | Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization | 7.25 | 0.43 | 8, 7, 7, 7 | Spotlight | Poster | ✔ |
54 | Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization | 7.75 | 0.43 | 7, 8, 8, 8 | Spotlight | Spotlight | ✔ |
55 | Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning | 7.00 | 0.00 | 7, 7, 7, 7 | Spotlight | Spotlight | ✔ |
56 | Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning | 5.50 | 0.50 | 6, 5, 5, 6 | Spotlight | Reject | ✔ |
57 | An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence | 7.00 | 0.00 | 7, 7, 7, 7 | Spotlight | Spotlight | ✔ |
58 | An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence | 6.75 | 0.83 | 6, 8, 7, 6 | Spotlight | Poster | ✔ |
59 | Aligned Structured Sparsity Learning for Efficient Image Super-Resolution | 8.00 | 0.00 | 8, 8, 8 | Spotlight | Spotlight | ✔ |
60 | Aligned Structured Sparsity Learning for Efficient Image Super-Resolution | 4.33 | 0.47 | 4, 4, 5 | Spotlight | Reject | ✔ |
61 | α-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression | 4.33 | 0.47 | 4, 5, 4 | Poster | Reject | ✔ |
62 | α-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression | 6.71 | 0.45 | 7, 7, 7, 7, 7, 6, 6 | Poster | Poster | ✔ |
63 | Zero Time Waste: Recycling Predictions in Early Exit Neural Networks | 6.25 | 1.30 | 7, 5, 5, 8 | Poster | Poster | ✔ |
64 | Zero Time Waste: Recycling Predictions in Early Exit Neural Networks | 4.50 | 0.87 | 6, 4, 4, 4 | Poster | Reject | ✔ |
65 | XDO: A Double Oracle Algorithm for Extensive-Form Games | 5.50 | 0.50 | 5, 6, 6, 5 | Poster | Poster | ✔ |
66 | XDO: A Double Oracle Algorithm for Extensive-Form Games | 6.00 | 0.71 | 7, 6, 6, 5 | Poster | Poster | ✔ |
67 | Which Mutual-Information Representation Learning Objectives are Sufficient for Control? | 6.25 | 0.43 | 7, 6, 6, 6 | Poster | Poster | ✔ |
68 | Which Mutual-Information Representation Learning Objectives are Sufficient for Control? | 6.00 | 0.71 | 5, 6, 6, 7 | Poster | Reject | ✔ |
69 | When Expressivity Meets Trainability: Fewer than n Neurons Can Work | 6.75 | 0.83 | 6, 7, 8, 6 | Poster | Poster | ✔ |
70 | When Expressivity Meets Trainability: Fewer than n Neurons Can Work | 5.80 | 0.40 | 5, 6, 6, 6, 6 | Poster | Reject | ✔ |
71 | What training reveals about neural network complexity | 6.60 | 0.49 | 6, 7, 7, 7, 6 | Poster | Poster | ✔ |
72 | What training reveals about neural network complexity | 6.75 | 1.30 | 6, 5, 8, 8 | Poster | Poster | ✔ |
73 | Weak-shot Fine-grained Classification via Similarity Transfer | 4.25 | 0.83 | 3, 5, 5, 4 | Poster | Reject | ✔ |
74 | Weak-shot Fine-grained Classification via Similarity Transfer | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
75 | VoiceMixer: Adversarial Voice Style Mixup | 6.75 | 1.64 | 8, 8, 7, 4 | Poster | Poster | ✔ |
76 | VoiceMixer: Adversarial Voice Style Mixup | 6.50 | 0.87 | 7, 7, 5, 7 | Poster | Poster | ✔ |
77 | Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases | 6.75 | 1.09 | 8, 5, 7, 7 | Poster | Poster | ✔ |
78 | Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases | 7.00 | 0.82 | 8, 6, 7 | Poster | Poster | ✔ |
79 | VigDet: Knowledge Informed Neural Temporal Point Process for Coordination Detection on Social Media | 5.00 | 0.71 | 4, 5, 5, 6 | Poster | Reject | ✔ |
80 | VigDet: Knowledge Informed Neural Temporal Point Process for Coordination Detection on Social Media | 6.50 | 0.87 | 7, 7, 7, 5 | Poster | Poster | ✔ |
81 | Video Instance Segmentation using Inter-Frame Communication Transformers | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
82 | Video Instance Segmentation using Inter-Frame Communication Transformers | 4.50 | 0.87 | 5, 5, 3, 5 | Poster | Reject | ✔ |
83 | ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias | 5.33 | 0.47 | 5, 5, 6 | Poster | Reject | ✔ |
84 | ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias | 7.25 | 0.83 | 6, 8, 7, 8 | Poster | Poster | ✔ |
85 | User-Level Differentially Private Learning via Correlated Sampling | 7.20 | 0.40 | 7, 7, 7, 7, 8 | Poster | Poster | ✔ |
86 | User-Level Differentially Private Learning via Correlated Sampling | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
87 | Unsupervised Object-Based Transition Models For 3D Partially Observable Environments | 5.75 | 1.48 | 4, 5, 8, 6 | Poster | Reject | ✔ |
88 | Unsupervised Object-Based Transition Models For 3D Partially Observable Environments | 6.50 | 0.96 | 5, 7, 8, 6, 6, 7 | Poster | Poster | ✔ |
89 | Universal Approximation Using Well-Conditioned Normalizing Flows | 6.86 | 0.35 | 7, 7, 6, 7, 7, 7, 7 | Poster | Poster | ✔ |
90 | Universal Approximation Using Well-Conditioned Normalizing Flows | 6.00 | 0.00 | 6, 6, 6 | Poster | Poster | ✔ |
91 | Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization | 4.33 | 1.11 | 3, 3, 4, 6, 5, 5 | Poster | Reject | ✔ |
92 | Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
93 | Understanding Bandits with Graph Feedback | 6.50 | 0.50 | 7, 6, 7, 6 | Poster | Poster | ✔ |
94 | Understanding Bandits with Graph Feedback | 6.00 | 0.71 | 5, 6, 6, 7 | Poster | Poster | ✔ |
95 | Uncertainty-Driven Loss for Single Image Super-Resolution | 4.50 | 0.50 | 4, 4, 5, 5 | Poster | Reject | ✔ |
96 | Uncertainty-Driven Loss for Single Image Super-Resolution | 6.67 | 0.47 | 7, 7, 6 | Poster | Poster | ✔ |
97 | Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution | 6.50 | 0.50 | 6, 7, 6, 7 | Poster | Poster | ✔ |
98 | Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution | 6.00 | 1.22 | 6, 8, 5, 5 | Poster | Poster | ✔ |
99 | Towards a Theoretical Framework of Out-of-Distribution Generalization | 6.25 | 1.09 | 6, 5, 8, 6 | Poster | Poster | ✔ |
100 | Towards a Theoretical Framework of Out-of-Distribution Generalization | 5.00 | 0.71 | 5, 4, 6, 5 | Poster | Reject | ✔ |
101 | Towards Multi-Grained Explainability for Graph Neural Networks | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
102 | Towards Multi-Grained Explainability for Graph Neural Networks | 5.75 | 1.30 | 5, 7, 7, 4 | Poster | Reject | ✔ |
103 | Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
104 | Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples | 4.00 | 0.71 | 4, 4, 3, 5 | Poster | Reject | ✔ |
105 | Topology-Imbalance Learning for Semi-Supervised Node Classification | 6.00 | 0.82 | 7, 5, 6 | Poster | Poster | ✔ |
106 | Topology-Imbalance Learning for Semi-Supervised Node Classification | 5.33 | 0.47 | 5, 6, 5 | Poster | Reject | ✔ |
107 | The Hardness Analysis of Thompson Sampling for Combinatorial Semi-bandits with Greedy Oracle | 6.25 | 0.43 | 6, 7, 6, 6 | Poster | Poster | ✔ |
108 | The Hardness Analysis of Thompson Sampling for Combinatorial Semi-bandits with Greedy Oracle | 5.00 | 1.22 | 4, 5, 4, 7 | Poster | Reject | ✔ |
109 | Support Recovery of Sparse Signals from a Mixture of Linear Measurements | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Reject | ✔ |
110 | Support Recovery of Sparse Signals from a Mixture of Linear Measurements | 7.00 | 0.00 | 7, 7, 7 | Poster | Poster | ✔ |
111 | Structure learning in polynomial time: Greedy algorithms, Bregman information, and exponential families | 6.20 | 0.75 | 7, 7, 6, 6, 5 | Poster | Poster | ✔ |
112 | Structure learning in polynomial time: Greedy algorithms, Bregman information, and exponential families | 5.75 | 1.48 | 4, 8, 6, 5 | Poster | Reject | ✔ |
113 | Stronger NAS with Weaker Predictors | 5.80 | 0.98 | 4, 6, 7, 6, 6 | Poster | Poster | ✔ |
114 | Stronger NAS with Weaker Predictors | 6.80 | 0.75 | 6, 7, 6, 7, 8 | Poster | Poster | ✔ |
115 | Streaming Belief Propagation for Community Detection | 6.00 | 1.22 | 5, 5, 6, 8 | Poster | Reject | ✔ |
116 | Streaming Belief Propagation for Community Detection | 6.67 | 0.47 | 7, 6, 7 | Poster | Poster | ✔ |
117 | Stochastic Online Linear Regression: the Forward Algorithm to Replace Ridge | 7.25 | 0.43 | 8, 7, 7, 7 | Poster | Spotlight | ✔ |
118 | Stochastic Online Linear Regression: the Forward Algorithm to Replace Ridge | 5.25 | 0.43 | 5, 5, 6, 5 | Poster | Reject | ✔ |
119 | Stochastic Anderson Mixing for Nonconvex Stochastic Optimization | 6.25 | 1.30 | 7, 7, 4, 7 | Poster | Poster | ✔ |
120 | Stochastic Anderson Mixing for Nonconvex Stochastic Optimization | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
121 | Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space | 6.67 | 1.49 | 9, 8, 5, 6, 5, 7 | Poster | Poster | ✔ |
122 | Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space | 6.25 | 0.43 | 7, 6, 6, 6 | Poster | Poster | ✔ |
123 | Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
124 | Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation | 6.33 | 0.47 | 6, 6, 7 | Poster | Poster | ✔ |
125 | Skipping the Frame-Level: Event-Based Piano Transcription With Neural Semi-CRFs | 7.00 | 0.00 | 7, 7, 7 | Poster | Poster | ✔ |
126 | Skipping the Frame-Level: Event-Based Piano Transcription With Neural Semi-CRFs | 5.00 | 0.71 | 5, 6, 5, 4 | Poster | Reject | ✔ |
127 | Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning | 7.00 | 1.63 | 7, 5, 9 | Poster | Poster | ✔ |
128 | Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning | 7.00 | 0.71 | 8, 7, 7, 6 | Poster | Poster | ✔ |
129 | Shared Independent Component Analysis for Multi-Subject Neuroimaging | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
130 | Shared Independent Component Analysis for Multi-Subject Neuroimaging | 5.25 | 0.83 | 6, 4, 5, 6 | Poster | Poster | ✔ |
131 | Self-Supervised Bug Detection and Repair | 7.00 | 1.63 | 5, 9, 7 | Poster | Poster | ✔ |
132 | Self-Supervised Bug Detection and Repair | 6.67 | 0.47 | 6, 7, 7 | Poster | Poster | ✔ |
133 | Scaling Vision with Sparse Mixture of Experts | 6.75 | 0.83 | 6, 8, 6, 7 | Poster | Poster | ✔ |
134 | Scaling Vision with Sparse Mixture of Experts | 7.25 | 1.09 | 7, 9, 7, 6 | Poster | Poster | ✔ |
135 | Scaling Neural Tangent Kernels via Sketching and Random Features | 6.60 | 0.49 | 7, 6, 7, 7, 6 | Poster | Poster | ✔ |
136 | Scaling Neural Tangent Kernels via Sketching and Random Features | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
137 | Scalars are universal: Equivariant machine learning, structured like classical physics | 6.25 | 0.83 | 5, 7, 6, 7 | Poster | Poster | ✔ |
138 | Scalars are universal: Equivariant machine learning, structured like classical physics | 5.80 | 0.98 | 4, 6, 7, 6, 6 | Poster | Poster | ✔ |
139 | Scalable Thompson Sampling using Sparse Gaussian Process Models | 6.25 | 0.83 | 7, 7, 6, 5 | Poster | Poster | ✔ |
140 | Scalable Thompson Sampling using Sparse Gaussian Process Models | 5.75 | 1.09 | 4, 6, 7, 6 | Poster | Poster | ✔ |
141 | Sample Selection for Fair and Robust Training | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
142 | Sample Selection for Fair and Robust Training | 4.50 | 0.50 | 5, 4, 4, 5 | Poster | Reject | ✔ |
143 | STEP: Out-of-Distribution Detection in the Presence of Limited In-Distribution Labeled Data | 4.25 | 1.30 | 6, 3, 5, 3 | Poster | Reject | ✔ |
144 | STEP: Out-of-Distribution Detection in the Presence of Limited In-Distribution Labeled Data | 7.50 | 1.12 | 9, 8, 6, 7 | Poster | Poster | ✔ |
145 | STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning | 5.75 | 0.43 | 6, 6, 5, 6 | Poster | Reject | ✔ |
146 | STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning | 6.67 | 0.94 | 6, 6, 8 | Poster | Poster | ✔ |
147 | SSMF: Shifting Seasonal Matrix Factorization | 6.25 | 0.83 | 6, 7, 5, 7 | Poster | Poster | ✔ |
148 | SSMF: Shifting Seasonal Matrix Factorization | 4.67 | 0.47 | 5, 4, 5 | Poster | Reject | ✔ |
149 | SNIPS: Solving Noisy Inverse Problems Stochastically | 4.50 | 0.50 | 4, 4, 5, 5 | Poster | Reject | ✔ |
150 | SNIPS: Solving Noisy Inverse Problems Stochastically | 6.75 | 1.09 | 7, 7, 8, 5 | Poster | Spotlight | ✔ |
151 | SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
152 | SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios | 5.25 | 0.43 | 6, 5, 5, 5 | Poster | Reject | ✔ |
153 | SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark | 6.25 | 1.30 | 4, 7, 7, 7 | Poster | Reject | ✔ |
154 | SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark | 6.75 | 0.43 | 7, 7, 6, 7 | Poster | Poster | ✔ |
155 | SBO-RNN: Reformulating Recurrent Neural Networks via Stochastic Bilevel Optimization | 4.75 | 0.43 | 5, 5, 5, 4 | Poster | Reject | ✔ |
156 | SBO-RNN: Reformulating Recurrent Neural Networks via Stochastic Bilevel Optimization | 7.00 | 0.71 | 7, 8, 6, 7 | Poster | Poster | ✔ |
157 | SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization | 6.75 | 1.09 | 5, 7, 8, 7 | Poster | Poster | ✔ |
158 | SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization | 7.50 | 1.12 | 7, 6, 9, 8 | Poster | Poster | ✔ |
159 | SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL | 5.00 | 0.71 | 4, 5, 5, 6 | Poster | Reject | ✔ |
160 | SADGA: Structure-Aware Dual Graph Aggregation Network for Text-to-SQL | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
161 | Row-clustering of a Point Process-valued Matrix | 6.25 | 0.43 | 6, 7, 6, 6 | Poster | Poster | ✔ |
162 | Row-clustering of a Point Process-valued Matrix | 5.25 | 1.09 | 4, 7, 5, 5 | Poster | Reject | ✔ |
163 | Robust Counterfactual Explanations on Graph Neural Networks | 6.00 | 0.71 | 5, 6, 7, 6 | Poster | Poster | ✔ |
164 | Robust Counterfactual Explanations on Graph Neural Networks | 4.67 | 1.25 | 6, 3, 5 | Poster | Reject | ✔ |
165 | Risk-Averse Bayes-Adaptive Reinforcement Learning | 6.50 | 0.50 | 6, 7, 7, 6 | Poster | Poster | ✔ |
166 | Risk-Averse Bayes-Adaptive Reinforcement Learning | 5.60 | 1.02 | 6, 7, 5, 4, 6 | Poster | Reject | ✔ |
167 | Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning | 6.75 | 0.83 | 6, 8, 7, 6 | Poster | Poster | ✔ |
168 | Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning | 6.00 | 0.89 | 5, 6, 7, 7, 5 | Poster | Reject | ✔ |
169 | Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation | 5.50 | 0.50 | 5, 6, 6, 5 | Poster | Reject | ✔ |
170 | Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
171 | Reusing Combinatorial Structure: Faster Iterative Projections over Submodular Base Polytopes | 6.25 | 0.83 | 5, 6, 7, 7 | Poster | Poster | ✔ |
172 | Reusing Combinatorial Structure: Faster Iterative Projections over Submodular Base Polytopes | 5.25 | 0.83 | 6, 5, 4, 6 | Poster | Reject | ✔ |
173 | Residual2Vec: Debiasing graph embedding with random graphs | 4.75 | 0.83 | 6, 4, 5, 4 | Poster | Reject | ✔ |
174 | Residual2Vec: Debiasing graph embedding with random graphs | 6.75 | 0.83 | 7, 8, 6, 6 | Poster | Spotlight | ✔ |
175 | Representation Costs of Linear Neural Networks: Analysis and Design | 6.00 | 0.00 | 6, 6, 6 | Poster | Reject | ✔ |
176 | Representation Costs of Linear Neural Networks: Analysis and Design | 7.00 | 0.71 | 8, 6, 7, 7 | Poster | Poster | ✔ |
177 | Reliable Post hoc Explanations: Modeling Uncertainty in Explainability | 6.67 | 0.47 | 7, 6, 7 | Poster | Poster | ✔ |
178 | Reliable Post hoc Explanations: Modeling Uncertainty in Explainability | 6.50 | 2.06 | 8, 8, 3, 7 | Poster | Poster | ✔ |
179 | Reliable Decisions with Threshold Calibration | 6.25 | 1.79 | 4, 8, 5, 8 | Poster | Poster | ✔ |
180 | Reliable Decisions with Threshold Calibration | 6.75 | 1.79 | 9, 4, 7, 7 | Poster | Poster | ✔ |
181 | Relational Self-Attention: What's Missing in Attention for Video Understanding | 6.67 | 0.47 | 7, 6, 7 | Poster | Spotlight | ✔ |
182 | Relational Self-Attention: What's Missing in Attention for Video Understanding | 6.00 | 0.71 | 6, 5, 7, 6 | Poster | Reject | ✔ |
183 | Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection | 6.00 | 1.00 | 7, 5, 7, 5 | Poster | Poster | ✔ |
184 | Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection | 5.33 | 0.47 | 6, 5, 5 | Poster | Reject | ✔ |
185 | Regret Minimization Experience Replay in Off-Policy Reinforcement Learning | 6.50 | 2.29 | 8, 9, 3, 6 | Poster | Poster | ✔ |
186 | Regret Minimization Experience Replay in Off-Policy Reinforcement Learning | 6.00 | 1.87 | 6, 5, 4, 9 | Poster | Reject | ✔ |
187 | Referring Transformer: A One-step Approach to Multi-task Visual Grounding | 5.67 | 0.94 | 5, 7, 5 | Poster | Reject | ✔ |
188 | Referring Transformer: A One-step Approach to Multi-task Visual Grounding | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
189 | Recursive Causal Structure Learning in the Presence of Latent Variables and Selection Bias | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
190 | Recursive Causal Structure Learning in the Presence of Latent Variables and Selection Bias | 6.33 | 0.94 | 5, 7, 7 | Poster | Poster | ✔ |
191 | Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks | 4.25 | 0.43 | 5, 4, 4, 4 | Poster | Reject | ✔ |
192 | Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks | 6.50 | 1.12 | 5, 6, 8, 7 | Poster | Spotlight | ✔ |
193 | Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training | 5.00 | 1.22 | 5, 7, 4, 4 | Poster | Reject | ✔ |
194 | Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
195 | QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | 6.25 | 0.43 | 7, 6, 6, 6 | Poster | Poster | ✔ |
196 | QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | 6.00 | 1.00 | 5, 7, 5, 7 | Poster | Poster | ✔ |
197 | Provably efficient, succinct, and precise explanations | 6.75 | 1.30 | 8, 8, 5, 6 | Poster | Reject | ✔ |
198 | Provably efficient, succinct, and precise explanations | 6.67 | 0.47 | 7, 6, 7 | Poster | Poster | ✔ |
199 | Provably Strict Generalisation Benefit for Invariance in Kernel Methods | 6.00 | 1.22 | 8, 5, 5, 6 | Poster | Poster | ✔ |
200 | Provably Strict Generalisation Benefit for Invariance in Kernel Methods | 7.00 | 1.41 | 7, 9, 5, 7 | Poster | Poster | ✔ |
201 | Provably Efficient Reinforcement Learning with Linear Function Approximation under Adaptivity Constraints | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
202 | Provably Efficient Reinforcement Learning with Linear Function Approximation under Adaptivity Constraints | 5.50 | 0.50 | 5, 6, 6, 5 | Poster | Reject | ✔ |
203 | Probability Paths and the Structure of Predictions over Time | 5.50 | 0.87 | 7, 5, 5, 5 | Poster | Reject | ✔ |
204 | Probability Paths and the Structure of Predictions over Time | 5.75 | 0.43 | 6, 6, 5, 6 | Poster | Poster | ✔ |
205 | Probabilistic Entity Representation Model for Reasoning over Knowledge Graphs | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
206 | Probabilistic Entity Representation Model for Reasoning over Knowledge Graphs | 4.75 | 0.43 | 5, 4, 5, 5 | Poster | Reject | ✔ |
207 | Private and Non-private Uniformity Testing for Ranking Data | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
208 | Private and Non-private Uniformity Testing for Ranking Data | 6.67 | 0.47 | 7, 6, 7 | Poster | Poster | ✔ |
209 | Post-processing for Individual Fairness | 5.50 | 1.12 | 4, 6, 5, 7 | Poster | Reject | ✔ |
210 | Post-processing for Individual Fairness | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
211 | Post-Training Quantization for Vision Transformer | 5.75 | 1.64 | 3, 7, 6, 7 | Poster | Poster | ✔ |
212 | Post-Training Quantization for Vision Transformer | 5.25 | 0.83 | 4, 6, 6, 5 | Poster | Reject | ✔ |
213 | Pooling by Sliced-Wasserstein Embedding | 5.67 | 1.70 | 5, 8, 4 | Poster | Reject | ✔ |
214 | Pooling by Sliced-Wasserstein Embedding | 6.25 | 0.83 | 6, 7, 5, 7 | Poster | Poster | ✔ |
215 | PolarStream: Streaming Object Detection and Segmentation with Polar Pillars | 5.00 | 0.00 | 5, 5, 5 | Poster | Reject | ✔ |
216 | PolarStream: Streaming Object Detection and Segmentation with Polar Pillars | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
217 | PlayVirtual: Augmenting Cycle-Consistent Virtual Trajectories for Reinforcement Learning | 6.67 | 0.47 | 6, 7, 7 | Poster | Poster | ✔ |
218 | PlayVirtual: Augmenting Cycle-Consistent Virtual Trajectories for Reinforcement Learning | 6.25 | 0.43 | 6, 7, 6, 6 | Poster | Poster | ✔ |
219 | Play to Grade: Testing Coding Games as Classifying Markov Decision Process | 7.00 | 1.41 | 9, 6, 6 | Poster | Poster | ✔ |
220 | Play to Grade: Testing Coding Games as Classifying Markov Decision Process | 7.00 | 0.82 | 8, 6, 7 | Poster | Poster | ✔ |
221 | Pipeline Combinators for Gradual AutoML | 6.00 | 1.22 | 4, 7, 6, 7 | Poster | Poster | ✔ |
222 | Pipeline Combinators for Gradual AutoML | 5.00 | 1.87 | 3, 8, 5, 4 | Poster | Reject | ✔ |
223 | Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning | 5.25 | 0.83 | 6, 5, 4, 6 | Poster | Reject | ✔ |
224 | Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning | 7.00 | 1.00 | 8, 6, 6, 8 | Poster | Poster | ✔ |
225 | Periodic Activation Functions Induce Stationarity | 6.75 | 0.43 | 7, 6, 7, 7 | Poster | Poster | ✔ |
226 | Periodic Activation Functions Induce Stationarity | 6.75 | 0.43 | 7, 7, 6, 7 | Poster | Poster | ✔ |
227 | Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling | 6.50 | 0.50 | 7, 7, 6, 6 | Poster | Poster | ✔ |
228 | Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling | 6.00 | 0.71 | 6, 5, 7, 6 | Poster | Reject | ✔ |
229 | Parallelizing Thompson Sampling | 6.25 | 1.48 | 7, 6, 4, 8 | Poster | Poster | ✔ |
230 | Parallelizing Thompson Sampling | 4.75 | 1.09 | 6, 3, 5, 5 | Poster | Reject | ✔ |
231 | ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
232 | ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions | 5.50 | 0.87 | 7, 5, 5, 5 | Poster | Reject | ✔ |
233 | POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples | 5.75 | 1.09 | 6, 7, 6, 4 | Poster | Poster | ✔ |
234 | POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples | 5.00 | 0.71 | 5, 6, 5, 4 | Poster | Reject | ✔ |
235 | Optimizing Reusable Knowledge for Continual Learning via Metalearning | 5.50 | 0.50 | 5, 6, 5, 6 | Poster | Poster | ✔ |
236 | Optimizing Reusable Knowledge for Continual Learning via Metalearning | 5.75 | 0.83 | 7, 6, 5, 5 | Poster | Reject | ✔ |
237 | Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning | 5.00 | 1.58 | 4, 7, 6, 3 | Poster | Reject | ✔ |
238 | Optimization-Based Algebraic Multigrid Coarsening Using Reinforcement Learning | 5.33 | 0.94 | 6, 6, 4 | Poster | Poster | ✔ |
239 | Open Rule Induction | 5.75 | 0.83 | 7, 5, 5, 6 | Poster | Reject | ✔ |
240 | Open Rule Induction | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
241 | Online Knapsack with Frequency Predictions | 6.75 | 0.83 | 7, 6, 8, 6 | Poster | Spotlight | ✔ |
242 | Online Knapsack with Frequency Predictions | 5.00 | 1.41 | 4, 4, 7 | Poster | Reject | ✔ |
243 | Online Adaptation to Label Distribution Shift | 6.00 | 0.71 | 5, 6, 6, 7 | Poster | Poster | ✔ |
244 | Online Adaptation to Label Distribution Shift | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
245 | One More Step Towards Reality: Cooperative Bandits with Imperfect Communication | 5.33 | 0.47 | 6, 5, 5 | Poster | Reject | ✔ |
246 | One More Step Towards Reality: Cooperative Bandits with Imperfect Communication | 6.33 | 0.94 | 5, 7, 7 | Poster | Poster | ✔ |
247 | One Explanation is Not Enough: Structured Attention Graphs for Image Classification | 4.50 | 0.50 | 4, 5, 5, 4 | Poster | Reject | ✔ |
248 | One Explanation is Not Enough: Structured Attention Graphs for Image Classification | 5.75 | 1.09 | 6, 7, 6, 4 | Poster | Poster | ✔ |
249 | On the Theory of Reinforcement Learning with Once-per-Episode Feedback | 5.50 | 1.12 | 4, 7, 6, 5 | Poster | Poster | ✔ |
250 | On the Theory of Reinforcement Learning with Once-per-Episode Feedback | 5.00 | 0.71 | 4, 5, 5, 6 | Poster | Reject | ✔ |
251 | On the Role of Optimization in Double Descent: A Least Squares Study | 4.33 | 0.47 | 5, 4, 4 | Poster | Reject | ✔ |
252 | On the Role of Optimization in Double Descent: A Least Squares Study | 7.33 | 0.47 | 7, 7, 8 | Poster | Poster | ✔ |
253 | On learning sparse vectors from mixture of responses | 4.60 | 0.49 | 5, 5, 4, 4, 5 | Poster | Reject | ✔ |
254 | On learning sparse vectors from mixture of responses | 6.00 | 1.41 | 8, 7, 5, 6, 4 | Poster | Poster | ✔ |
255 | On Success and Simplicity: A Second Look at Transferable Targeted Attacks | 6.00 | 1.22 | 7, 6, 4, 7 | Poster | Reject | ✔ |
256 | On Success and Simplicity: A Second Look at Transferable Targeted Attacks | 5.75 | 1.09 | 6, 4, 7, 6 | Poster | Poster | ✔ |
257 | On Locality of Local Explanation Models | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
258 | On Locality of Local Explanation Models | 5.50 | 1.50 | 7, 6, 6, 3 | Poster | Reject | ✔ |
259 | On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources | 5.75 | 1.09 | 7, 6, 4, 6 | Poster | Reject | ✔ |
260 | On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources | 6.00 | 0.71 | 5, 6, 6, 7 | Poster | Poster | ✔ |
261 | Non-asymptotic convergence bounds for Wasserstein approximation using point clouds | 7.50 | 1.12 | 7, 6, 8, 9 | Poster | Spotlight | ✔ |
262 | Non-asymptotic convergence bounds for Wasserstein approximation using point clouds | 5.75 | 0.83 | 5, 5, 7, 6 | Poster | Reject | ✔ |
263 | Neural Tangent Kernel Maximum Mean Discrepancy | 4.50 | 1.50 | 7, 4, 4, 3 | Poster | Reject | ✔ |
264 | Neural Tangent Kernel Maximum Mean Discrepancy | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
265 | Neural Routing by Memory | 7.00 | 0.71 | 8, 6, 7, 7 | Poster | Poster | ✔ |
266 | Neural Routing by Memory | 4.25 | 1.09 | 4, 3, 6, 4 | Poster | Reject | ✔ |
267 | Neural Architecture Dilation for Adversarial Robustness | 6.00 | 1.41 | 8, 6, 4, 6 | Poster | Reject | ✔ |
268 | Neural Architecture Dilation for Adversarial Robustness | 6.00 | 0.71 | 6, 5, 7, 6 | Poster | Poster | ✔ |
269 | Nested Variational Inference | 6.50 | 0.50 | 7, 7, 6, 6 | Poster | Poster | ✔ |
270 | Nested Variational Inference | 5.33 | 0.47 | 6, 5, 5 | Poster | Reject | ✔ |
271 | Nearly-Tight and Oblivious Algorithms for Explainable Clustering | 7.00 | 0.71 | 8, 7, 6, 7 | Poster | Spotlight | ✔ |
272 | Nearly-Tight and Oblivious Algorithms for Explainable Clustering | 7.00 | 0.71 | 7, 6, 7, 8 | Poster | Poster | ✔ |
273 | NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform | 7.00 | 1.22 | 6, 9, 6, 7 | Poster | Poster | ✔ |
274 | NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform | 6.50 | 0.50 | 7, 6, 7, 6 | Poster | Poster | ✔ |
275 | Multilingual Pre-training with Universal Dependency Learning | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
276 | Multilingual Pre-training with Universal Dependency Learning | 5.50 | 1.12 | 6, 7, 5, 4 | Poster | Reject | ✔ |
277 | Multiclass Boosting and the Cost of Weak Learning | 6.50 | 0.87 | 7, 7, 5, 7 | Poster | Poster | ✔ |
278 | Multiclass Boosting and the Cost of Weak Learning | 6.33 | 0.47 | 6, 7, 6 | Poster | Poster | ✔ |
279 | Multi-Person 3D Motion Prediction with Multi-Range Transformers | 6.00 | 1.41 | 6, 3, 7, 7, 7, 6 | Poster | Poster | ✔ |
280 | Multi-Person 3D Motion Prediction with Multi-Range Transformers | 5.25 | 0.83 | 6, 5, 4, 6 | Poster | Reject | ✔ |
281 | Multi-Objective Meta Learning | 4.00 | 0.71 | 3, 4, 4, 5 | Poster | Reject | ✔ |
282 | Multi-Objective Meta Learning | 6.25 | 1.30 | 5, 8, 5, 7 | Poster | Poster | ✔ |
283 | Model-Based Episodic Memory Induces Dynamic Hybrid Controls | 5.75 | 0.43 | 6, 5, 6, 6 | Poster | Poster | ✔ |
284 | Model-Based Episodic Memory Induces Dynamic Hybrid Controls | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
285 | Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization | 4.75 | 1.30 | 6, 4, 6, 3 | Poster | Reject | ✔ |
286 | Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization | 6.67 | 0.94 | 8, 6, 6 | Poster | Poster | ✔ |
287 | Meta-Adaptive Nonlinear Control: Theory and Algorithms | 6.75 | 0.43 | 7, 6, 7, 7 | Poster | Poster | ✔ |
288 | Meta-Adaptive Nonlinear Control: Theory and Algorithms | 5.00 | 1.00 | 4, 6, 6, 4 | Poster | Reject | ✔ |
289 | Machine versus Human Attention in Deep Reinforcement Learning Tasks | 6.50 | 0.87 | 5, 7, 7, 7 | Poster | Poster | ✔ |
290 | Machine versus Human Attention in Deep Reinforcement Learning Tasks | 4.75 | 1.48 | 7, 5, 3, 4 | Poster | Reject | ✔ |
291 | Machine learning structure preserving brackets for forecasting irreversible processes | 4.75 | 1.09 | 5, 5, 6, 3 | Poster | Reject | ✔ |
292 | Machine learning structure preserving brackets for forecasting irreversible processes | 7.00 | 0.82 | 8, 6, 7 | Poster | Spotlight | ✔ |
293 | Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems | 5.25 | 0.43 | 5, 5, 6, 5 | Poster | Reject | ✔ |
294 | Low-Rank Extragradient Method for Nonsmooth and Low-Rank Matrix Optimization Problems | 7.00 | 1.41 | 9, 6, 6 | Poster | Poster | ✔ |
295 | Low-Rank Constraints for Fast Inference in Structured Models | 6.00 | 0.71 | 5, 7, 6, 6 | Poster | Poster | ✔ |
296 | Low-Rank Constraints for Fast Inference in Structured Models | 5.50 | 1.12 | 5, 6, 4, 7 | Poster | Reject | ✔ |
297 | Low-Fidelity Video Encoder Optimization for Temporal Action Localization | 5.33 | 0.47 | 5, 6, 5 | Poster | Reject | ✔ |
298 | Low-Fidelity Video Encoder Optimization for Temporal Action Localization | 6.75 | 0.83 | 6, 7, 8, 6 | Poster | Poster | ✔ |
299 | Locally private online change point detection | 6.00 | 0.00 | 6, 6, 6 | Poster | Poster | ✔ |
300 | Locally private online change point detection | 5.25 | 1.48 | 3, 7, 5, 6 | Poster | Reject | ✔ |
301 | Locally differentially private estimation of functionals of discrete distributions | 6.00 | 0.71 | 6, 6, 7, 5 | Poster | Poster | ✔ |
302 | Locally differentially private estimation of functionals of discrete distributions | 5.60 | 0.80 | 6, 7, 5, 5, 5 | Poster | Reject | ✔ |
303 | Localization with Sampling-Argmax | 7.50 | 0.87 | 7, 7, 7, 9 | Poster | Poster | ✔ |
304 | Localization with Sampling-Argmax | 5.25 | 1.09 | 5, 7, 5, 4 | Poster | Reject | ✔ |
305 | Linear-Time Probabilistic Solution of Boundary Value Problems | 6.50 | 0.87 | 6, 6, 6, 8 | Poster | Poster | ✔ |
306 | Linear-Time Probabilistic Solution of Boundary Value Problems | 4.00 | 0.00 | 4, 4, 4 | Poster | Reject | ✔ |
307 | Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training | 5.75 | 0.43 | 6, 5, 6, 6 | Poster | Poster | ✔ |
308 | Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training | 6.00 | 0.00 | 6, 6, 6 | Poster | Poster | ✔ |
309 | Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces | 6.25 | 0.83 | 7, 5, 7, 6 | Poster | Poster | ✔ |
310 | Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces | 6.50 | 0.50 | 6, 7, 7, 6 | Poster | Poster | ✔ |
311 | Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds | 6.50 | 0.50 | 7, 7, 6, 6 | Poster | Poster | ✔ |
312 | Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds | 5.75 | 0.43 | 6, 6, 5, 6 | Poster | Poster | ✔ |
313 | Learning where to learn: Gradient sparsity in meta and continual learning | 6.25 | 0.83 | 5, 7, 6, 7 | Poster | Poster | ✔ |
314 | Learning where to learn: Gradient sparsity in meta and continual learning | 5.75 | 0.43 | 6, 5, 6, 6 | Poster | Poster | ✔ |
315 | Learning to Schedule Heuristics in Branch and Bound | 6.00 | 1.22 | 4, 7, 6, 7 | Poster | Poster | ✔ |
316 | Learning to Schedule Heuristics in Branch and Bound | 6.75 | 0.43 | 6, 7, 7, 7 | Poster | Poster | ✔ |
317 | Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer | 6.00 | 0.71 | 7, 5, 6, 6 | Poster | Poster | ✔ |
318 | Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer | 5.75 | 1.30 | 7, 7, 4, 5 | Poster | Poster | ✔ |
319 | Learning to Generate Visual Questions with Noisy Supervision | 6.75 | 0.43 | 7, 7, 6, 7 | Poster | Spotlight | ✔ |
320 | Learning to Generate Visual Questions with Noisy Supervision | 4.75 | 0.43 | 5, 5, 4, 5 | Poster | Reject | ✔ |
321 | Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering | 5.25 | 0.43 | 5, 5, 6, 5 | Poster | Reject | ✔ |
322 | Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering | 5.75 | 0.83 | 6, 5, 7, 5 | Poster | Poster | ✔ |
323 | Learning Tree Interpretation from Object Representation for Deep Reinforcement Learning | 5.50 | 0.87 | 6, 6, 6, 4 | Poster | Reject | ✔ |
324 | Learning Tree Interpretation from Object Representation for Deep Reinforcement Learning | 6.00 | 0.82 | 6, 5, 7 | Poster | Poster | ✔ |
325 | Learning Student-Friendly Teacher Networks for Knowledge Distillation | 6.00 | 0.71 | 6, 5, 7, 6 | Poster | Poster | ✔ |
326 | Learning Student-Friendly Teacher Networks for Knowledge Distillation | 4.50 | 0.50 | 4, 4, 5, 5 | Poster | Reject | ✔ |
327 | Learning Riemannian metric for disease progression modeling | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
328 | Learning Riemannian metric for disease progression modeling | 6.00 | 1.00 | 5, 7, 7, 5 | Poster | Reject | ✔ |
329 | Learning Graph Models for Retrosynthesis Prediction | 4.33 | 0.47 | 4, 5, 4 | Poster | Reject | ✔ |
330 | Learning Graph Models for Retrosynthesis Prediction | 5.75 | 1.48 | 6, 8, 5, 4 | Poster | Poster | ✔ |
331 | Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding | 5.75 | 0.43 | 6, 6, 5, 6 | Poster | Poster | ✔ |
332 | Learnable Fourier Features for Multi-dimensional Spatial Positional Encoding | 6.83 | 1.34 | 8, 7, 9, 6, 5, 6 | Poster | Poster | ✔ |
333 | Lattice partition recovery with dyadic CART | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
334 | Lattice partition recovery with dyadic CART | 6.00 | 0.00 | 6, 6, 6 | Poster | Poster | ✔ |
335 | Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models | 5.50 | 0.50 | 5, 6, 5, 6 | Poster | Poster | ✔ |
336 | Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models | 4.25 | 1.92 | 2, 5, 3, 7 | Poster | Reject | ✔ |
337 | Integrating Tree Path in Transformer for Code Representation | 5.00 | 1.41 | 4, 4, 7 | Poster | Reject | ✔ |
338 | Integrating Tree Path in Transformer for Code Representation | 5.75 | 1.09 | 6, 7, 6, 4 | Poster | Poster | ✔ |
339 | Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression | 5.25 | 1.64 | 4, 4, 5, 8 | Poster | Reject | ✔ |
340 | Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression | 5.75 | 1.09 | 6, 7, 6, 4 | Poster | Poster | ✔ |
341 | Increasing Liquid State Machine Performance with Edge-of-Chaos Dynamics Organized by Astrocyte-modulated Plasticity | 6.00 | 1.22 | 4, 6, 7, 7 | Poster | Reject | ✔ |
342 | Increasing Liquid State Machine Performance with Edge-of-Chaos Dynamics Organized by Astrocyte-modulated Plasticity | 6.33 | 0.94 | 7, 5, 7 | Poster | Poster | ✔ |
343 | Improving Compositionality of Neural Networks by Decoding Representations to Inputs | 6.00 | 1.58 | 8, 7, 4, 5 | Poster | Poster | ✔ |
344 | Improving Compositionality of Neural Networks by Decoding Representations to Inputs | 7.33 | 0.47 | 7, 8, 7 | Poster | Reject | ✔ |
345 | Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning | 7.00 | 1.22 | 7, 6, 9, 6 | Poster | Spotlight | ✔ |
346 | Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning | 4.25 | 0.83 | 5, 3, 4, 5 | Poster | Reject | ✔ |
347 | Improved Regularization and Robustness for Fine-tuning in Neural Networks | 4.50 | 0.50 | 5, 4, 4, 5 | Poster | Reject | ✔ |
348 | Improved Regularization and Robustness for Fine-tuning in Neural Networks | 6.25 | 0.43 | 6, 7, 6, 6 | Poster | Poster | ✔ |
349 | Improved Regret Bounds for Tracking Experts with Memory | 6.67 | 0.47 | 7, 7, 6 | Poster | Poster | ✔ |
350 | Improved Regret Bounds for Tracking Experts with Memory | 6.40 | 0.49 | 7, 7, 6, 6, 6 | Poster | Poster | ✔ |
351 | Hyperbolic Procrustes Analysis Using Riemannian Geometry | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
352 | Hyperbolic Procrustes Analysis Using Riemannian Geometry | 6.00 | 0.71 | 7, 6, 6, 5 | Poster | Poster | ✔ |
353 | How does a Neural Network's Architecture Impact its Robustness to Noisy Labels? | 5.33 | 1.70 | 6, 7, 3 | Poster | Reject | ✔ |
354 | How does a Neural Network's Architecture Impact its Robustness to Noisy Labels? | 6.00 | 1.10 | 6, 5, 6, 8, 5 | Poster | Poster | ✔ |
355 | How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
356 | How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | 6.25 | 0.83 | 7, 6, 5, 7 | Poster | Poster | ✔ |
357 | How Fine-Tuning Allows for Effective Meta-Learning | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
358 | How Fine-Tuning Allows for Effective Meta-Learning | 6.00 | 0.71 | 6, 5, 6, 7 | Poster | Poster | ✔ |
359 | Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation | 6.00 | 0.71 | 6, 6, 5, 7 | Poster | Poster | ✔ |
360 | Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation | 4.40 | 0.49 | 5, 4, 5, 4, 4 | Poster | Reject | ✔ |
361 | Handling Long-tailed Feature Distribution in AdderNets | 5.00 | 0.71 | 5, 5, 6, 4 | Poster | Reject | ✔ |
362 | Handling Long-tailed Feature Distribution in AdderNets | 6.25 | 0.43 | 7, 6, 6, 6 | Poster | Poster | ✔ |
363 | Grounding Spatio-Temporal Language with Transformers | 5.50 | 1.12 | 4, 6, 7, 5 | Poster | Reject | ✔ |
364 | Grounding Spatio-Temporal Language with Transformers | 5.75 | 1.09 | 4, 7, 6, 6 | Poster | Poster | ✔ |
365 | Grammar-Based Grounded Lexicon Learning | 5.75 | 1.64 | 7, 3, 7, 6 | Poster | Poster | ✔ |
366 | Grammar-Based Grounded Lexicon Learning | 5.75 | 0.83 | 7, 5, 6, 5 | Poster | Poster | ✔ |
367 | Gradient-based Hyperparameter Optimization Over Long Horizons | 6.33 | 0.47 | 6, 6, 7 | Poster | Poster | ✔ |
368 | Gradient-based Hyperparameter Optimization Over Long Horizons | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
369 | Gradient Starvation: A Learning Proclivity in Neural Networks | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
370 | Gradient Starvation: A Learning Proclivity in Neural Networks | 5.20 | 1.60 | 4, 5, 7, 7, 3 | Poster | Reject | ✔ |
371 | GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
372 | GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
373 | Generalized Linear Bandits with Local Differential Privacy | 6.75 | 0.43 | 7, 6, 7, 7 | Poster | Poster | ✔ |
374 | Generalized Linear Bandits with Local Differential Privacy | 6.33 | 0.47 | 6, 7, 6 | Poster | Poster | ✔ |
375 | Generalized DataWeighting via Class-Level Gradient Manipulation | 5.00 | 0.00 | 5, 5, 5, 5 | Poster | Reject | ✔ |
376 | Generalized DataWeighting via Class-Level Gradient Manipulation | 6.00 | 1.58 | 4, 7, 8, 5 | Poster | Poster | ✔ |
377 | Gaussian Kernel Mixture Network for Single Image Defocus Deblurring | 5.25 | 1.48 | 3, 6, 7, 5 | Poster | Poster | ✔ |
378 | Gaussian Kernel Mixture Network for Single Image Defocus Deblurring | 5.67 | 1.25 | 7, 4, 6 | Poster | Reject | ✔ |
379 | GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement | 5.50 | 1.12 | 7, 6, 5, 4 | Poster | Reject | ✔ |
380 | GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement | 6.25 | 0.43 | 7, 6, 6, 6 | Poster | Poster | ✔ |
381 | Finite Sample Analysis of Average-Reward TD Learning and Q-Learning | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
382 | Finite Sample Analysis of Average-Reward TD Learning and Q-Learning | 5.67 | 0.94 | 5, 5, 7 | Poster | Reject | ✔ |
383 | Faster Non-asymptotic Convergence for Double Q-learning | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
384 | Faster Non-asymptotic Convergence for Double Q-learning | 5.50 | 1.12 | 6, 7, 5, 4 | Poster | Reject | ✔ |
385 | Fast Pure Exploration via Frank-Wolfe | 6.50 | 0.50 | 7, 7, 6, 6 | Poster | Poster | ✔ |
386 | Fast Pure Exploration via Frank-Wolfe | 7.33 | 0.47 | 7, 8, 7 | Poster | Poster | ✔ |
387 | Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization | 6.25 | 0.83 | 6, 5, 7, 7 | Poster | Poster | ✔ |
388 | Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization | 5.50 | 1.12 | 7, 4, 5, 6 | Poster | Reject | ✔ |
389 | Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems | 6.75 | 0.83 | 7, 6, 6, 8 | Poster | Poster | ✔ |
390 | Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems | 6.50 | 0.87 | 7, 7, 5, 7 | Poster | Poster | ✔ |
391 | Fast Certified Robust Training with Short Warmup | 5.67 | 0.47 | 6, 5, 6 | Poster | Poster | ✔ |
392 | Fast Certified Robust Training with Short Warmup | 6.50 | 0.50 | 7, 6, 7, 6 | Poster | Poster | ✔ |
393 | Fast Algorithms for L∞-constrained S-rectangular Robust MDPs | 6.50 | 1.12 | 6, 7, 5, 8 | Poster | Poster | ✔ |
394 | Fast Algorithms for L∞-constrained S-rectangular Robust MDPs | 6.00 | 0.71 | 5, 7, 6, 6 | Poster | Poster | ✔ |
395 | Fairness via Representation Neutralization | 6.75 | 0.43 | 7, 7, 7, 6 | Poster | Poster | ✔ |
396 | Fairness via Representation Neutralization | 6.00 | 0.71 | 7, 5, 6, 6 | Poster | Reject | ✔ |
397 | FACMAC: Factored Multi-Agent Centralised Policy Gradients | 6.00 | 1.10 | 7, 4, 7, 6, 6 | Poster | Poster | ✔ |
398 | FACMAC: Factored Multi-Agent Centralised Policy Gradients | 5.00 | 2.83 | 7, 1, 7 | Poster | Reject | ✔ |
399 | Extending Lagrangian and Hamiltonian Neural Networks with Differentiable Contact Models | 4.80 | 0.98 | 4, 4, 6, 4, 6 | Poster | Reject | ✔ |
400 | Extending Lagrangian and Hamiltonian Neural Networks with Differentiable Contact Models | 7.00 | 0.63 | 7, 8, 7, 6, 7 | Poster | Poster | ✔ |
401 | Exponential Separation between Two Learning Models and Adversarial Robustness | 5.33 | 0.47 | 5, 6, 5 | Poster | Reject | ✔ |
402 | Exponential Separation between Two Learning Models and Adversarial Robustness | 7.00 | 1.41 | 6, 9, 6 | Poster | Poster | ✔ |
403 | Exploring Cross-Video and Cross-Modality Signals for Weakly-Supervised Audio-Visual Video Parsing | 6.25 | 0.83 | 7, 5, 7, 6 | Poster | Poster | ✔ |
404 | Exploring Cross-Video and Cross-Modality Signals for Weakly-Supervised Audio-Visual Video Parsing | 4.75 | 0.83 | 6, 5, 4, 4 | Poster | Reject | ✔ |
405 | Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation | 6.00 | 1.22 | 7, 6, 4, 7 | Poster | Poster | ✔ |
406 | Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation | 5.50 | 0.87 | 6, 4, 6, 6 | Poster | Poster | ✔ |
407 | Exploiting Domain-Specific Features to Enhance Domain Generalization | 6.00 | 1.22 | 4, 7, 7, 6 | Poster | Poster | ✔ |
408 | Exploiting Domain-Specific Features to Enhance Domain Generalization | 5.50 | 1.12 | 5, 7, 6, 4 | Poster | Reject | ✔ |
409 | Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots | 6.00 | 1.58 | 5, 8, 4, 7 | Poster | Poster | ✔ |
410 | Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots | 6.33 | 1.25 | 6, 8, 5 | Poster | Reject | ✔ |
411 | Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi | 5.25 | 0.43 | 5, 5, 6, 5 | Poster | Reject | ✔ |
412 | Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi | 6.50 | 0.50 | 7, 6, 6, 7 | Poster | Poster | ✔ |
413 | Evaluating State-of-the-Art Classification Models Against Bayes Optimality | 5.50 | 0.50 | 5, 6, 6, 5 | Poster | Reject | ✔ |
414 | Evaluating State-of-the-Art Classification Models Against Bayes Optimality | 6.50 | 0.87 | 7, 7, 5, 7 | Poster | Poster | ✔ |
415 | ErrorCompensatedX: error compensation for variance reduced algorithms | 6.50 | 0.50 | 7, 7, 6, 6 | Poster | Poster | ✔ |
416 | ErrorCompensatedX: error compensation for variance reduced algorithms | 5.25 | 0.43 | 5, 6, 5, 5 | Poster | Reject | ✔ |
417 | Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration | 6.25 | 0.83 | 7, 5, 6, 7 | Poster | Poster | ✔ |
418 | Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration | 6.00 | 0.71 | 5, 6, 6, 7 | Poster | Poster | ✔ |
419 | Entropic Desired Dynamics for Intrinsic Control | 5.25 | 0.43 | 5, 5, 6, 5 | Poster | Reject | ✔ |
420 | Entropic Desired Dynamics for Intrinsic Control | 6.60 | 0.49 | 7, 6, 6, 7, 7 | Poster | Poster | ✔ |
421 | Emergent Discrete Communication in Semantic Spaces | 7.50 | 0.50 | 8, 7, 8, 7 | Poster | Poster | ✔ |
422 | Emergent Discrete Communication in Semantic Spaces | 4.75 | 0.83 | 4, 6, 4, 5 | Poster | Reject | ✔ |
423 | Efficient Training of Visual Transformers with Small Datasets | 5.75 | 0.43 | 6, 5, 6, 6 | Poster | Spotlight | ✔ |
424 | Efficient Training of Visual Transformers with Small Datasets | 5.40 | 1.02 | 5, 4, 7, 6, 5 | Poster | Reject | ✔ |
425 | Efficient Statistical Assessment of Neural Network Corruption Robustness | 4.33 | 0.94 | 5, 3, 5 | Poster | Reject | ✔ |
426 | Efficient Statistical Assessment of Neural Network Corruption Robustness | 6.33 | 0.94 | 5, 7, 7 | Poster | Poster | ✔ |
427 | Efficient Learning of Discrete-Continuous Computation Graphs | 5.25 | 1.09 | 4, 5, 7, 5 | Poster | Reject | ✔ |
428 | Efficient Learning of Discrete-Continuous Computation Graphs | 5.75 | 1.09 | 6, 4, 7, 6 | Poster | Poster | ✔ |
429 | Efficient Bayesian network structure learning via local Markov boundary search | 5.80 | 0.75 | 5, 7, 5, 6, 6 | Poster | Reject | ✔ |
430 | Efficient Bayesian network structure learning via local Markov boundary search | 6.00 | 0.71 | 7, 6, 5, 6 | Poster | Poster | ✔ |
431 | Efficient Active Learning for Gaussian Process Classification by Error Reduction | 5.00 | 0.00 | 5, 5, 5 | Poster | Reject | ✔ |
432 | Efficient Active Learning for Gaussian Process Classification by Error Reduction | 6.00 | 1.22 | 6, 5, 5, 8 | Poster | Poster | ✔ |
433 | EditGAN: High-Precision Semantic Image Editing | 4.25 | 0.83 | 5, 5, 3, 4 | Poster | Reject | ✔ |
434 | EditGAN: High-Precision Semantic Image Editing | 7.33 | 0.94 | 8, 6, 8 | Poster | Poster | ✔ |
435 | Edge Representation Learning with Hypergraphs | 6.00 | 0.71 | 7, 6, 5, 6 | Poster | Poster | ✔ |
436 | Edge Representation Learning with Hypergraphs | 4.00 | 0.00 | 4, 4, 4, 4 | Poster | Reject | ✔ |
437 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | 6.00 | 0.71 | 6, 7, 5, 6 | Poster | Poster | ✔ |
438 | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | 6.33 | 0.47 | 7, 6, 6, 6, 7, 6 | Poster | Poster | ✔ |
439 | Dual Progressive Prototype Network for Generalized Zero-Shot Learning | 5.50 | 0.50 | 6, 6, 5, 5 | Poster | Reject | ✔ |
440 | Dual Progressive Prototype Network for Generalized Zero-Shot Learning | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
441 | Does Knowledge Distillation Really Work? | 6.25 | 0.83 | 6, 7, 7, 5 | Poster | Poster | ✔ |
442 | Does Knowledge Distillation Really Work? | 6.40 | 0.80 | 5, 7, 7, 7, 6 | Poster | Poster | ✔ |
443 | Do Vision Transformers See Like Convolutional Neural Networks? | 5.25 | 0.83 | 5, 6, 6, 4 | Poster | Reject | ✔ |
444 | Do Vision Transformers See Like Convolutional Neural Networks? | 6.50 | 0.50 | 6, 7, 6, 7 | Poster | Spotlight | ✔ |
445 | Distributionally Robust Imitation Learning | 5.00 | 0.71 | 6, 5, 5, 4 | Poster | Reject | ✔ |
446 | Distributionally Robust Imitation Learning | 6.00 | 0.71 | 5, 6, 7, 6 | Poster | Poster | ✔ |
447 | Distributional Reinforcement Learning for Multi-Dimensional Reward Functions | 4.75 | 1.09 | 5, 3, 6, 5 | Poster | Reject | ✔ |
448 | Distributional Reinforcement Learning for Multi-Dimensional Reward Functions | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
449 | Distribution-free inference for regression: discrete, continuous, and in between | 6.00 | 1.58 | 8, 4, 7, 5 | Poster | Reject | ✔ |
450 | Distribution-free inference for regression: discrete, continuous, and in between | 6.80 | 0.98 | 7, 7, 7, 5, 8 | Poster | Poster | ✔ |
451 | Distributed Saddle-Point Problems Under Data Similarity | 5.75 | 0.43 | 6, 5, 6, 6 | Poster | Reject | ✔ |
452 | Distributed Saddle-Point Problems Under Data Similarity | 6.00 | 0.71 | 6, 5, 6, 7 | Poster | Poster | ✔ |
453 | Disrupting Deep Uncertainty Estimation Without Harming Accuracy | 6.67 | 0.47 | 7, 6, 7 | Poster | Spotlight | ✔ |
454 | Disrupting Deep Uncertainty Estimation Without Harming Accuracy | 5.67 | 0.94 | 7, 5, 5 | Poster | Reject | ✔ |
455 | Disentangled Contrastive Learning on Graphs | 6.33 | 1.25 | 8, 6, 5 | Poster | Poster | ✔ |
456 | Disentangled Contrastive Learning on Graphs | 6.00 | 0.71 | 5, 6, 7, 6 | Poster | Poster | ✔ |
457 | Directed Graph Contrastive Learning | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
458 | Directed Graph Contrastive Learning | 5.00 | 0.71 | 5, 4, 6, 5 | Poster | Reject | ✔ |
459 | Dimensionality Reduction for Wasserstein Barycenter | 5.33 | 1.70 | 3, 6, 7 | Poster | Reject | ✔ |
460 | Dimensionality Reduction for Wasserstein Barycenter | 6.75 | 0.43 | 6, 7, 7, 7 | Poster | Poster | ✔ |
461 | Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings | 6.25 | 1.48 | 6, 8, 4, 7 | Poster | Poster | ✔ |
462 | Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings | 6.67 | 0.47 | 7, 6, 7 | Poster | Spotlight | ✔ |
463 | Differentially Private Multi-Armed Bandits in the Shuffle Model | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
464 | Differentially Private Multi-Armed Bandits in the Shuffle Model | 6.75 | 0.43 | 7, 7, 7, 6 | Poster | Poster | ✔ |
465 | Differentially Private Federated Bayesian Optimization with Distributed Exploration | 6.75 | 0.43 | 6, 7, 7, 7 | Poster | Poster | ✔ |
466 | Differentially Private Federated Bayesian Optimization with Distributed Exploration | 5.50 | 0.50 | 5, 5, 6, 6 | Poster | Reject | ✔ |
467 | Deformable Butterfly: A Highly Structured and Sparse Linear Transform | 6.25 | 0.83 | 7, 5, 6, 7 | Poster | Poster | ✔ |
468 | Deformable Butterfly: A Highly Structured and Sparse Linear Transform | 5.75 | 1.09 | 6, 6, 7, 4 | Poster | Poster | ✔ |
469 | Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time | 7.00 | 0.82 | 6, 8, 7 | Poster | Poster | ✔ |
470 | Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time | 4.50 | 0.50 | 5, 4, 4, 5 | Poster | Reject | ✔ |
471 | Deep Networks Provably Classify Data on Curves | 6.50 | 1.66 | 8, 6, 4, 8 | Poster | Poster | ✔ |
472 | Deep Networks Provably Classify Data on Curves | 6.50 | 0.50 | 6, 7, 6, 7 | Poster | Poster | ✔ |
473 | Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis | 6.00 | 0.71 | 7, 5, 6, 6 | Poster | Poster | ✔ |
474 | Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis | 6.00 | 1.41 | 6, 6, 4, 8 | Poster | Poster | ✔ |
475 | Deep Learning Through the Lens of Example Difficulty | 6.00 | 1.22 | 6, 8, 5, 5 | Poster | Reject | ✔ |
476 | Deep Learning Through the Lens of Example Difficulty | 6.00 | 0.82 | 7, 5, 6 | Poster | Poster | ✔ |
477 | Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings | 6.75 | 0.83 | 6, 8, 6, 7 | Poster | Poster | ✔ |
478 | Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings | 6.50 | 0.50 | 6, 7, 6, 7 | Poster | Poster | ✔ |
479 | Deep Conditional Gaussian Mixture Model for Constrained Clustering | 5.75 | 1.30 | 7, 4, 7, 5 | Poster | Poster | ✔ |
480 | Deep Conditional Gaussian Mixture Model for Constrained Clustering | 5.75 | 1.09 | 7, 4, 6, 6 | Poster | Reject | ✔ |
481 | Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks | 6.67 | 0.47 | 7, 6, 7 | Poster | Poster | ✔ |
482 | Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks | 5.75 | 0.83 | 7, 5, 5, 6 | Poster | Reject | ✔ |
483 | Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP | 6.75 | 1.09 | 7, 8, 5, 7 | Poster | Poster | ✔ |
484 | Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP | 7.25 | 0.83 | 7, 8, 8, 6 | Poster | Poster | ✔ |
485 | Decoupling the Depth and Scope of Graph Neural Networks | 5.50 | 1.12 | 6, 5, 7, 4 | Poster | Reject | ✔ |
486 | Decoupling the Depth and Scope of Graph Neural Networks | 6.67 | 1.70 | 5, 9, 6 | Poster | Poster | ✔ |
487 | Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification | 5.33 | 0.94 | 6, 6, 4 | Poster | Reject | ✔ |
488 | Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification | 6.67 | 0.47 | 7, 7, 6 | Poster | Poster | ✔ |
489 | Dataset Distillation with Infinitely Wide Convolutional Networks | 5.50 | 0.50 | 6, 5, 6, 5 | Poster | Poster | ✔ |
490 | Dataset Distillation with Infinitely Wide Convolutional Networks | 5.67 | 0.47 | 5, 6, 6 | Poster | Reject | ✔ |
491 | Dangers of Bayesian Model Averaging under Covariate Shift | 6.75 | 0.43 | 6, 7, 7, 7 | Poster | Poster | ✔ |
492 | Dangers of Bayesian Model Averaging under Covariate Shift | 6.75 | 0.43 | 7, 7, 7, 6 | Poster | Poster | ✔ |
493 | DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples | 4.50 | 0.50 | 4, 4, 5, 5 | Poster | Reject | ✔ |
494 | DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
495 | DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer | 5.25 | 0.43 | 5, 5, 6, 5 | Poster | Reject | ✔ |
496 | DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer | 5.75 | 0.43 | 5, 6, 6, 6 | Poster | Poster | ✔ |
497 | Coupled Gradient Estimators for Discrete Latent Variables | 5.40 | 0.80 | 6, 5, 6, 4, 6 | Poster | Poster | ✔ |
498 | Coupled Gradient Estimators for Discrete Latent Variables | 6.50 | 0.50 | 7, 6, 7, 6 | Poster | Poster | ✔ |
499 | CorticalFlow: A Diffeomorphic Mesh Transformer Network for Cortical Surface Reconstruction | 4.25 | 0.83 | 5, 5, 3, 4 | Poster | Reject | ✔ |
500 | CorticalFlow: A Diffeomorphic Mesh Transformer Network for Cortical Surface Reconstruction | 6.75 | 0.43 | 6, 7, 7, 7 | Poster | Spotlight | ✔ |
501 | Continuous Mean-Covariance Bandits | 6.00 | 1.22 | 6, 4, 7, 7 | Poster | Reject | ✔ |
502 | Continuous Mean-Covariance Bandits | 6.67 | 0.47 | 7, 7, 6 | Poster | Poster | ✔ |
503 | Continuous Doubly Constrained Batch Reinforcement Learning | 5.50 | 0.50 | 6, 6, 5, 5 | Poster | Reject | ✔ |
504 | Continuous Doubly Constrained Batch Reinforcement Learning | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
505 | Contextual Recommendations and Low-Regret Cutting-Plane Algorithms | 5.67 | 2.05 | 6, 3, 8 | Poster | Reject | ✔ |
506 | Contextual Recommendations and Low-Regret Cutting-Plane Algorithms | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
507 | Constrained Two-step Look-Ahead Bayesian Optimization | 5.25 | 0.83 | 6, 4, 5, 6 | Poster | Reject | ✔ |
508 | Constrained Two-step Look-Ahead Bayesian Optimization | 6.25 | 0.83 | 5, 6, 7, 7 | Poster | Poster | ✔ |
509 | Conditional Generation Using Polynomial Expansions | 5.00 | 1.22 | 4, 5, 7, 4 | Poster | Reject | ✔ |
510 | Conditional Generation Using Polynomial Expansions | 7.00 | 0.71 | 6, 7, 8, 7 | Poster | Spotlight | ✔ |
511 | Compressed Video Contrastive Learning | 5.75 | 1.09 | 7, 6, 4, 6 | Poster | Poster | ✔ |
512 | Compressed Video Contrastive Learning | 4.50 | 0.50 | 5, 5, 4, 4 | Poster | Reject | ✔ |
513 | Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces | 5.25 | 1.30 | 6, 4, 7, 4 | Poster | Reject | ✔ |
514 | Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces | 6.00 | 0.71 | 6, 7, 5, 6 | Poster | Poster | ✔ |
515 | Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach | 5.60 | 1.50 | 3, 6, 7, 7, 5 | Poster | Reject | ✔ |
516 | Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach | 6.25 | 0.43 | 7, 6, 6, 6 | Poster | Poster | ✔ |
517 | CoAtNet: Marrying Convolution and Attention for All Data Sizes | 5.00 | 0.71 | 5, 6, 5, 4 | Poster | Reject | ✔ |
518 | CoAtNet: Marrying Convolution and Attention for All Data Sizes | 6.75 | 0.43 | 7, 7, 6, 7 | Poster | Poster | ✔ |
519 | Class-agnostic Reconstruction of Dynamic Objects from Videos | 3.75 | 0.43 | 4, 3, 4, 4 | Poster | Reject | ✔ |
520 | Class-agnostic Reconstruction of Dynamic Objects from Videos | 7.00 | 0.71 | 6, 7, 8, 7 | Poster | Poster | ✔ |
521 | Chebyshev-Cantelli PAC-Bayes-Bennett Inequality for the Weighted Majority Vote | 5.00 | 0.00 | 5, 5, 5, 5 | Poster | Reject | ✔ |
522 | Chebyshev-Cantelli PAC-Bayes-Bennett Inequality for the Weighted Majority Vote | 7.00 | 1.26 | 5, 7, 7, 7, 9 | Poster | Poster | ✔ |
523 | Charting and Navigating the Space of Solutions for Recurrent Neural Networks | 5.75 | 0.43 | 5, 6, 6, 6 | Poster | Poster | ✔ |
524 | Charting and Navigating the Space of Solutions for Recurrent Neural Networks | 4.33 | 0.47 | 5, 4, 4 | Poster | Reject | ✔ |
525 | CO-PILOT: COllaborative Planning and reInforcement Learning On sub-Task curriculum | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Poster | ✔ |
526 | CO-PILOT: COllaborative Planning and reInforcement Learning On sub-Task curriculum | 3.75 | 0.43 | 4, 4, 3, 4 | Poster | Reject | ✔ |
527 | CBP: backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
528 | CBP: backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method | 4.67 | 0.94 | 6, 4, 4 | Poster | Reject | ✔ |
529 | Biological key-value memory networks | 6.67 | 0.47 | 7, 7, 6 | Poster | Poster | ✔ |
530 | Biological key-value memory networks | 5.50 | 0.50 | 6, 5, 5, 6 | Poster | Reject | ✔ |
531 | Beyond Bandit Feedback in Online Multiclass Classification | 6.00 | 0.71 | 5, 6, 6, 7 | Poster | Reject | ✔ |
532 | Beyond Bandit Feedback in Online Multiclass Classification | 6.50 | 0.50 | 6, 7, 7, 6 | Poster | Poster | ✔ |
533 | Bandit Quickest Changepoint Detection | 5.75 | 0.43 | 5, 6, 6, 6 | Poster | Reject | ✔ |
534 | Bandit Quickest Changepoint Detection | 5.67 | 0.94 | 5, 7, 5 | Poster | Poster | ✔ |
535 | Bandit Learning with Delayed Impact of Actions | 6.50 | 0.50 | 7, 7, 6, 6 | Poster | Poster | ✔ |
536 | Bandit Learning with Delayed Impact of Actions | 6.80 | 0.40 | 7, 7, 7, 6, 7 | Poster | Spotlight | ✔ |
537 | Balanced Chamfer Distance as a Comprehensive Metric for Point Cloud Completion | 5.67 | 0.47 | 5, 6, 6 | Poster | Poster | ✔ |
538 | Balanced Chamfer Distance as a Comprehensive Metric for Point Cloud Completion | 5.00 | 0.00 | 5, 5, 5, 5 | Poster | Reject | ✔ |
539 | BCORLE(λ): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market | 6.67 | 0.94 | 6, 6, 8 | Poster | Poster | ✔ |
540 | BCORLE(λ): An Offline Reinforcement Learning and Evaluation Framework for Coupons Allocation in E-commerce Market | 5.50 | 1.12 | 4, 6, 7, 5 | Poster | Reject | ✔ |
541 | Average-Reward Learning and Planning with Options | 6.50 | 0.87 | 6, 6, 8, 6 | Poster | Poster | ✔ |
542 | Average-Reward Learning and Planning with Options | 6.25 | 0.83 | 7, 7, 5, 6 | Poster | Reject | ✔ |
543 | Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting | 6.00 | 0.00 | 6, 6, 6 | Poster | Poster | ✔ |
544 | Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting | 6.50 | 0.50 | 6, 7, 7, 6 | Poster | Poster | ✔ |
545 | Asymptotics of the Bootstrap via Stability with Applications to Inference with Model Selection | 6.25 | 1.92 | 3, 8, 7, 7 | Poster | Poster | ✔ |
546 | Asymptotics of the Bootstrap via Stability with Applications to Inference with Model Selection | 5.67 | 0.94 | 5, 5, 7 | Poster | Reject | ✔ |
547 | Asymptotically Best Causal Effect Identification with Multi-Armed Bandits | 4.25 | 0.83 | 5, 5, 4, 3 | Poster | Reject | ✔ |
548 | Asymptotically Best Causal Effect Identification with Multi-Armed Bandits | 5.33 | 1.25 | 7, 4, 5 | Poster | Poster | ✔ |
549 | Arbitrary Conditional Distributions with Energy | 6.00 | 0.00 | 6, 6, 6, 6 | Poster | Reject | ✔ |
550 | Arbitrary Conditional Distributions with Energy | 6.25 | 0.83 | 5, 7, 6, 7 | Poster | Poster | ✔ |
551 | An Uncertainty Principle is a Price of Privacy-Preserving Microdata | 5.75 | 0.83 | 5, 6, 7, 5 | Poster | Poster | ✔ |
552 | An Uncertainty Principle is a Price of Privacy-Preserving Microdata | 6.75 | 0.83 | 6, 6, 7, 8 | Poster | Spotlight | ✔ |
553 | An Empirical Study of Adder Neural Networks for Object Detection | 5.00 | 1.63 | 7, 5, 3 | Poster | Reject | ✔ |
554 | An Empirical Study of Adder Neural Networks for Object Detection | 5.00 | 0.71 | 6, 4, 5, 5 | Poster | Poster | ✔ |
555 | Adversarially Robust Change Point Detection | 6.50 | 1.50 | 7, 7, 4, 8 | Poster | Poster | ✔ |
556 | Adversarially Robust Change Point Detection | 6.67 | 0.94 | 8, 6, 6 | Poster | Poster | ✔ |
557 | Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions | 6.00 | 0.71 | 6, 7, 6, 5 | Poster | Poster | ✔ |
558 | Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
559 | Adversarial Robustness with Non-uniform Perturbations | 5.20 | 0.40 | 5, 5, 5, 6, 5 | Poster | Reject | ✔ |
560 | Adversarial Robustness with Non-uniform Perturbations | 6.50 | 0.50 | 7, 6, 6, 6, 7, 7 | Poster | Poster | ✔ |
561 | Adversarial Reweighting for Partial Domain Adaptation | 5.00 | 0.00 | 5, 5, 5 | Poster | Reject | ✔ |
562 | Adversarial Reweighting for Partial Domain Adaptation | 6.00 | 0.82 | 7, 5, 6 | Poster | Poster | ✔ |
563 | Adversarial Regression with Doubly Non-negative Weighting Matrices | 7.00 | 0.63 | 7, 8, 7, 6, 7 | Poster | Poster | ✔ |
564 | Adversarial Regression with Doubly Non-negative Weighting Matrices | 6.00 | 0.71 | 7, 6, 5, 6 | Poster | Reject | ✔ |
565 | Adversarial Feature Desensitization | 6.00 | 0.71 | 5, 6, 7, 6 | Poster | Poster | ✔ |
566 | Adversarial Feature Desensitization | 4.50 | 1.12 | 6, 3, 4, 5 | Poster | Reject | ✔ |
567 | Adversarial Attacks on Graph Classifiers via Bayesian Optimisation | 4.50 | 1.50 | 6, 5, 2, 5 | Poster | Reject | ✔ |
568 | Adversarial Attacks on Graph Classifiers via Bayesian Optimisation | 6.25 | 0.43 | 6, 6, 6, 7 | Poster | Poster | ✔ |
569 | Adaptive Machine Unlearning | 6.33 | 0.47 | 6, 7, 6 | Poster | Poster | ✔ |
570 | Adaptive Machine Unlearning | 6.50 | 0.50 | 6, 6, 7, 7 | Poster | Poster | ✔ |
571 | Adaptive Denoising via GainTuning | 4.50 | 0.50 | 5, 4, 4, 5 | Poster | Reject | ✔ |
572 | Adaptive Denoising via GainTuning | 5.75 | 0.43 | 5, 6, 6, 6 | Poster | Poster | ✔ |
573 | Active Learning of Convex Halfspaces on Graphs | 6.25 | 0.83 | 7, 7, 6, 5 | Poster | Poster | ✔ |
574 | Active Learning of Convex Halfspaces on Graphs | 6.75 | 0.43 | 7, 7, 6, 7 | Poster | Poster | ✔ |
575 | Achieving Rotational Invariance with Bessel-Convolutional Neural Networks | 5.25 | 1.64 | 5, 4, 8, 4 | Poster | Reject | ✔ |
576 | Achieving Rotational Invariance with Bessel-Convolutional Neural Networks | 6.75 | 0.43 | 7, 7, 7, 6 | Poster | Poster | ✔ |
577 | Accelerating Quadratic Optimization with Reinforcement Learning | 5.00 | 2.45 | 5, 1, 7, 7 | Poster | Poster | ✔ |
578 | Accelerating Quadratic Optimization with Reinforcement Learning | 6.67 | 0.47 | 7, 7, 6 | Poster | Poster | ✔ |
579 | A nonparametric method for gradual change problems with statistical guarantees | 5.75 | 0.43 | 6, 5, 6, 6 | Poster | Reject | ✔ |
580 | A nonparametric method for gradual change problems with statistical guarantees | 6.80 | 0.98 | 7, 7, 7, 5, 8 | Poster | Spotlight | ✔ |
581 | A mechanistic multi-area recurrent network model of decision-making | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
582 | A mechanistic multi-area recurrent network model of decision-making | 5.75 | 1.79 | 7, 8, 4, 4 | Poster | Reject | ✔ |
583 | A Stochastic Newton Algorithm for Distributed Convex Optimization | 6.25 | 0.43 | 6, 6, 7, 6 | Poster | Poster | ✔ |
584 | A Stochastic Newton Algorithm for Distributed Convex Optimization | 4.00 | 1.22 | 3, 4, 6, 3 | Poster | Reject | ✔ |
585 | A PAC-Bayes Analysis of Adversarial Robustness | 5.50 | 0.87 | 6, 6, 6, 4 | Poster | Reject | ✔ |
586 | A PAC-Bayes Analysis of Adversarial Robustness | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
587 | A Non-commutative Extension of Lee-Seung's Algorithm for Positive Semidefinite Factorizations | 7.00 | 0.00 | 7, 7, 7, 7 | Poster | Poster | ✔ |
588 | A Non-commutative Extension of Lee-Seung's Algorithm for Positive Semidefinite Factorizations | 4.33 | 0.47 | 4, 5, 4 | Poster | Reject | ✔ |
589 | A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning | 6.00 | 0.71 | 6, 6, 5, 7 | Poster | Poster | ✔ |
590 | A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning | 5.50 | 0.50 | 6, 5, 6, 5 | Poster | Reject | ✔ |
591 | A Highly-Efficient Group Elastic Net Algorithm with an Application to Function-On-Scalar Regression | 6.33 | 0.47 | 7, 6, 6 | Poster | Poster | ✔ |
592 | A Highly-Efficient Group Elastic Net Algorithm with an Application to Function-On-Scalar Regression | 5.75 | 1.09 | 7, 6, 6, 4 | Poster | Poster | ✔ |
593 | A Gaussian Process-Bayesian Bernoulli Mixture Model for Multi-Label Active Learning | 5.67 | 0.47 | 6, 6, 5 | Poster | Reject | ✔ |
594 | A Gaussian Process-Bayesian Bernoulli Mixture Model for Multi-Label Active Learning | 6.25 | 0.83 | 7, 7, 6, 5 | Poster | Poster | ✔ |
595 | A Computationally Efficient Method for Learning Exponential Family Distributions | 6.00 | 0.71 | 7, 6, 6, 5 | Poster | Poster | ✔ |
596 | A Computationally Efficient Method for Learning Exponential Family Distributions | 6.25 | 0.83 | 5, 6, 7, 7 | Poster | Poster | ✔ |