ICLR 2026 - Submissions


Summary Statistics

AI Content Quantity    Count       Avg Rating
0-10%                  1 (100%)    4.00
10-30%                 0 (0%)      N/A
30-50%                 0 (0%)      N/A
50-70%                 0 (0%)      N/A
70-90%                 0 (0%)      N/A
90-100%                0 (0%)      N/A
Total                  1 (100%)    4.00
Submission

Title: Extrapolating Large Models from the Small: Optimal Learning of Scaling Laws
Abstract: Evaluating large language models (LLMs) is increasingly critical yet prohibitively expensive as models scale to billions of parameters. Predictive evaluation via scaling laws has emerged as a cost-eff...
Avg Rating: 4.00
AI Content Quantity: 4%