ICLR 2026 - Submissions


Summary Statistics

AI Content Quantity    Count       Avg Rating
0-10%                  0 (0%)      N/A
10-30%                 0 (0%)      N/A
30-50%                 1 (100%)    5.00
50-70%                 0 (0%)      N/A
70-90%                 0 (0%)      N/A
90-100%                0 (0%)      N/A
Total                  1 (100%)    5.00
Submission Details

Title: Self-Guidance: Training VQ-VAE Decoders to be Robust to Quantization Artifacts for High-Fidelity Neural Speech Codec
Abstract: Neural speech codecs, predominantly based on Vector-Quantized Variational Autoencoders (VQ-VAEs), serve as fundamental audio tokenizers for speech large language models (SLLMs). However, their reconst...
Avg Rating: 5.00
AI Content Quantity: 32%