ICLR 2026 - Submissions


Submissions

Summary Statistics

| AI Content Quantity | Count    | Avg Rating |
|---------------------|----------|------------|
| 0-10%               | 1 (100%) | 6.00       |
| 10-30%              | 0 (0%)   | N/A        |
| 30-50%              | 0 (0%)   | N/A        |
| 50-70%              | 0 (0%)   | N/A        |
| 70-90%              | 0 (0%)   | N/A        |
| 90-100%             | 0 (0%)   | N/A        |
| Total               | 1 (100%) | 6.00       |
| Title | Abstract | Avg Rating | AI Content Quantity |
|-------|----------|------------|---------------------|
| Revisiting the Scaling Properties of Downstream Metrics in Large Language Model Training | While scaling laws for Large Language Models (LLMs) traditionally focus on proxy metrics like pretraining loss, predicting downstream task performance has been considered unreliable. This paper challe... | 6.00 | 6% |