ICLR 2026 - Submissions



Summary Statistics

AI Content   Count      Avg Rating
0-10%        0 (0%)     N/A
10-30%       1 (100%)   2.50
30-50%       0 (0%)     N/A
50-70%       0 (0%)     N/A
70-90%       0 (0%)     N/A
90-100%      0 (0%)     N/A
Total        1 (100%)   2.50
Title: Scaling Laws for Parameter Pruning in LLMs
Avg Rating: 2.50
AI Content: 25%
Abstract: Scaling up model parameters and training data consistently improves the performance of large language models (LLMs), but at the cost of rapidly growing memory and compute requirements, which makes dep...