ICLR 2026 - Submissions


Summary Statistics

AI Content   Count      Avg Rating
0-10%        0 (0%)     N/A
10-30%       0 (0%)     N/A
30-50%       1 (100%)   4.50
50-70%       0 (0%)     N/A
70-90%       0 (0%)     N/A
90-100%      0 (0%)     N/A
Total        1 (100%)   4.50
Title: Making Slow Thinking Faster: Compressing LLM Chain-of-Thought via Step Entropy
Abstract: Large Language Models (LLMs) using Chain-of-Thought (CoT) prompting excel at complex reasoning but generate verbose thought processes with considerable redundancy, leading to increased inference costs...
Avg Rating: 4.50
AI Content: 40%