UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity
Soundness: 3: good
Presentation: 3: good
Contribution: 2: fair
Rating: 6: marginally above the acceptance threshold
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
UniRestorer presents a strong universal image restoration framework that achieves impressive performance across single, mixed, OOD, and even real-world degradations. Its combination of multi-granularity degradation representations, hierarchical expert specialization, and uncertainty-aware routing enables both robustness and precision, outperforming prior all-in-one baselines and approaching single-task SOTA results.
However, the core mechanism raises conceptual concerns: the training pipeline is overly complex, and the single-expert activation strategy appears suboptimal and parameter-inefficient. Clarification on these design choices—especially how the MoE routing behaves under mixed degradations—would be crucial to fully assess the method’s contribution.
1. The paper’s fine-grained degradation representation learning is a clear strength. By retraining a DA-CLIP–based extractor with fine-grained textual labels (e.g., light/medium/heavy noise or haze) and contrastive supervision, the authors enable the model to capture not only degradation types but also intensity levels. Supplementary t-SNE visualizations show that these representations are separable at both coarse and fine granularity, providing a solid foundation for the subsequent hierarchical clustering and multi-granularity expert design.
2. Demonstrates strong cross-distribution generalization, maintaining stable PSNR/SSIM across single, mixed, OOD, and real-world degradations, with zero-shot gains on unseen types.
3. Uncertainty-aware hierarchical routing adaptively selects coarse or fine experts for robustness or precision, reducing routing ambiguity and representation conflict while matching or surpassing single-task performance.
4. Comprehensive evidence beyond parameter scaling: broad baselines and ablations show gains arise from division of labor and routing, with the LoRA variant retaining near full-model performance.
1. The inference scheme activates only a single expert at a time, which limits the expressive power of the MoE and introduces substantial parameter redundancy, as many experts remain unused for each input.
2. When the MoE system encounters mixed degradations, how frequently does the router fall back to the 0-th level (coarse) expert?
If this fallback occurs in most cases, it is unclear why the model significantly outperforms a single Restormer trained directly on mixed degradations. Conversely, if the router instead activates a fine-level expert (e.g., a “rain” or “haze” expert) for a mixed input such as rain-haze, it would contradict the intended degradation clustering principle and could potentially degrade performance. Clarification on the router’s behavior and expert selection under mixed-degradation inputs is needed.
3. The overall training pipeline is overly complex and resource-intensive. It requires first training a degradation extractor, then performing hierarchical clustering, followed by separate training for multiple experts and an additional router stage.
4. The paper does not clarify how data sufficiency is ensured for the fine-granularity experts. Since the hierarchical clustering process recursively divides the training set into smaller subsets, some fine-level clusters may contain only a limited number of samples. It is unclear whether the authors applied any method to avoid underfitting or data imbalance across experts.
The second weakness is the main issue that confuses me the most. I would appreciate a clear explanation, and I may consider raising my rating if it is addressed convincingly.
Moderately AI-edited
UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity
Soundness: 3: good
Presentation: 3: good
Contribution: 2: fair
Rating: 4: marginally below the acceptance threshold
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
The paper proposes UniRestorer, a universal all-in-one image restoration framework that bridges degradation-agnostic and degradation-aware paradigms. It performs hierarchical clustering on degradation representations to build a multi-granularity mixture-of-experts (MoE) model. Through degradation and granularity estimation, the router adaptively selects fine- or coarse-grained experts, achieving robustness against estimation errors. Experiments on single- and mixed-degradation benchmarks show that UniRestorer surpasses prior all-in-one methods and nearly matches single-task performance.
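To make the selection mechanism concrete, here is a rough sketch of the granularity-aware fallback as I understand it. All names (`route`, `level_probs`, `thresholds`) and the threshold rule itself are my own hypothetical simplification, not the authors' learned router:

```python
# Rough sketch of uncertainty-aware hierarchical routing as I read the paper.
# All names (route, level_probs, thresholds) are hypothetical; the actual
# router is learned end-to-end, not a hand-set threshold rule like this.

def route(level_probs, thresholds):
    """Walk from finest to coarsest level and return (level, expert_index).

    level_probs: per-level probability vectors over that level's experts,
                 ordered fine -> coarse.
    thresholds:  per-level confidence required to accept that level's choice.
    """
    for level, (probs, tau) in enumerate(zip(level_probs, thresholds)):
        confidence = max(probs)
        if confidence >= tau:  # degradation estimate is trusted at this level
            return level, probs.index(confidence)
    # otherwise fall back to the single coarse generalist expert
    return len(level_probs), 0

# A confident fine-level estimate picks a fine-grained expert:
print(route([[0.05, 0.92, 0.03], [0.6, 0.4]], [0.8, 0.5]))   # (0, 1)
# An ambiguous (e.g. mixed-degradation) input falls through to the generalist:
print(route([[0.4, 0.35, 0.25], [0.55, 0.45]], [0.8, 0.6]))  # (2, 0)
```

If the actual router behaves differently (e.g., soft mixing rather than a hard fallback), stating so explicitly in the paper would help readers.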
1. This paper introduces a multi-granularity degradation representation that unifies coarse- and fine-grained experts. The idea of granularity estimation to quantify degradation uncertainty and guide routing is novel and intuitive.
2. Comprehensive experiments: covers 7 single-degradation and 11 mixed-degradation settings, plus real-world and unseen tasks.
3. Paper is well-organized and technically detailed with intuitive figures (especially Fig. 3 illustrating routing).
4. The hierarchical MoE design could inspire cross-domain generalization and adaptive restoration architectures.
1. While granularity estimation is conceptually convincing, there’s no formal uncertainty-theoretic or probabilistic analysis of its behavior.
2. Each granularity level adds parameters and routing complexity; actual FLOPs and latency comparisons are limited. The LoRA variant helps, but trade-offs between full and LoRA experts could be better quantified.
3. The K-means–based clustering assumes a consistent degradation embedding space; sensitivity to clustering hyperparameters (number of clusters, feature normalization) is not reported.
4. Evaluation primarily uses synthetic degradations; more real-world degradation diversity (e.g., motion blur, ISP artifacts) would strengthen the claim of universality.
5. The effect of routing noise or errors in granularity estimation itself is underexplored; visualization of routing confidence would add interpretability.
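On point 5, one cheap, model-agnostic confidence score that could be visualized per input is the normalized entropy of the router's softmax. A hypothetical sketch (this helper is mine, not from the paper):

```python
import math

# One simple way to quantify the routing confidence asked about above:
# 1 minus the normalized entropy of the router's softmax over experts.
# Hypothetical helper; the paper does not, to my reading, report this.

def routing_confidence(logits):
    """Return 1 - normalized entropy: 1.0 = fully decisive, 0.0 = uniform."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - entropy / math.log(len(probs))

print(round(routing_confidence([8.0, 0.1, 0.2]), 3))  # near 1: decisive routing
print(round(routing_confidence([1.0, 1.0, 1.0]), 3))  # 0.0: maximally ambiguous
```

Plotting this score over a mixed-degradation test set would make the claimed robustness of granularity estimation directly inspectable.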
1. How sensitive is performance to the number of levels (l = 3) or cluster count per level? Could adaptive clustering improve robustness?
2. Can you visualize which experts are selected for different degradations and how granularity affects routing under ambiguous inputs?
3. Could the same hierarchical expert tree generalize to unseen degradation combinations (e.g., low-light + blur + noise) without retraining?
4. In hybrid usage, can user-provided degradation cues override granularity routing? Would it further close the gap with single-task models?
Fully AI-generated
UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity
Soundness: 3: good
Presentation: 3: good
Contribution: 3: good
Rating: 8: accept, good paper
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
UniRestorer proposes a multi-granularity MoE framework for all-in-one image restoration. It (1) hierarchically clusters degradation space, (2) trains multi-level MoE experts, and (3) uses joint degradation + granularity estimation to route inputs to the most suitable expert. Claims large gains over SOTA all-in-one models and narrows gap to single-task models. Code and models will be released.
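For readers unfamiliar with the pipeline, step (1) amounts to recursively partitioning the degradation-embedding space, roughly as below. This is a toy reconstruction on synthetic 1-D points; the paper clusters learned degradation features, and the branching factor and depth are hyperparameters, so every name here is a hypothetical stand-in:

```python
import random

# Toy reconstruction of the offline hierarchical clustering step: recursively
# k-means-split "degradation embeddings" so that each tree node defines one
# expert's training subset. Illustrative only; branching/depth are guesses.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            groups[nearest].append(p)
        centers = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

def build_tree(points, branching=2, depth=1):
    """Node = (training subset for this node's expert, child nodes)."""
    if depth == 0 or len(points) < branching:
        return (points, [])
    return (points, [build_tree(g, branching, depth - 1)
                     for g in kmeans(points, branching)])

# Two well-separated "degradation" blobs split cleanly at the first level,
# giving one coarse (root) expert and two finer child experts.
pts = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,), (5.2,)]
root = build_tree(pts, branching=2, depth=1)
print([sorted(child[0]) for child in root[1]])
```

Because this step is offline and static, the question below about adapting the tree to new degradations follows naturally.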
Granularity estimation elegantly handles degradation estimation noise — robust and practical.
Hierarchical clustering + MoE scales well across 15+ degradation types.
Large, consistent gains (e.g., +2.1 dB PSNR on mixed test sets).
Comprehensive ablations (granularity levels, routing loss, expert count).
Clean figures: t-SNE of degradation space, expert activation heatmaps.
Clustering is offline and static — no online adaptation to new/unseen degradations.
Granularity estimator adds overhead — no inference latency reported (vs. PromptIR, AirNet).
No theoretical justification for hierarchical clustering choice (e.g., why 3 levels?).
Evaluation limited to synthetic degradations — no real-world camera pipeline (e.g., RAW → ISP).
Is MoE training stable? There is no mention of a load-balancing loss or of safeguards against expert collapse.
Report inference FPS on RTX 3090 for 512×512 input — how much slower than PromptIR?
Test on real-world degradations (e.g., DND, SIDD, GoPro real blur).
Ablate dynamic clustering — can granularity be learned end-to-end without offline K-means?
Fully AI-generated
UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity
Soundness: 3: good
Presentation: 2: fair
Contribution: 2: fair
Rating: 6: marginally above the acceptance threshold
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Existing all-in-one restoration schemes are either degradation-agnostic or degradation-aware, leaving a clear performance gap to single-task experts. This paper proposes UniRestorer, which first hierarchically clusters the degradation space and trains a multi-granularity mixture-of-experts (MoE) network. At inference it jointly estimates degradation type and granularity to activate the most suitable expert. Extensive experiments show that UniRestorer significantly surpasses state-of-the-art all-in-one competitors and narrows the gap to dedicated single-task models.
1. The paper proposes UniRestorer, the first framework that simultaneously exploits degradation and granularity estimation to overcome the inherent limitations of both degradation-agnostic and degradation-aware restoration methods.
2. Extensive quantitative and qualitative experiments convincingly demonstrate the superiority of UniRestorer over existing all-in-one baselines and its competitive performance against task-specific models.
3. The idea of granularity-aware expert selection is clearly articulated and technically grounded; it offers a fresh insight that could inspire future work on robust universal image restoration.
1. The paper lacks a quantitative analysis of the proposed hierarchical degradation-clustering step.
2. No comparison or discussion is provided against alternative clustering strategies (e.g., the spectral clustering adopted in SEAL).
3. It is unclear whether the Restormer baseline in Table 1 was re-trained under exactly the same degradation protocol and parameter budget; an ablation that removes both degradation and granularity estimation while keeping the backbone capacity fixed would better isolate the gain of the proposed method.
Please see the weaknesses above.
Moderately AI-edited |