Self-Improved Prior for All-in-One Image Restoration
Soundness: 3: good
Presentation: 3: good
Contribution: 3: good
Rating: 4: marginally below the acceptance threshold
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
This research paper introduces a novel paradigm called Self-Improved Privilege Learning (SIPL) to address optimization instability and inter-task conflicts in all-in-one image restoration models when handling diverse and mixed degradations. Unlike conventional Privilege Learning, SIPL innovatively extends the utility of privileged information (PI) beyond the training phase into inference. Its core mechanism is the "Proxy Fusion" module, which incorporates a learnable Privileged Dictionary. During training, this dictionary distills high-quality priors from ground-truth features, and during inference, it leverages the model's preliminary outputs as pseudo-privileged signals for an iterative self-refinement loop. Experimental results demonstrate that SIPL significantly improves performance across various all-in-one image restoration benchmarks, particularly for composite degradation tasks, while offering broad applicability and computational efficiency.
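For concreteness, my reading of the training-time Proxy Fusion is roughly the sketch below: the learnable Privileged Dictionary acts as a set of prior tokens that are attended to jointly with ground-truth-derived features (the shapes, module names, and the cross-attention form are my assumptions for illustration, not the paper's code).

```python
# Minimal sketch of a Proxy Fusion module with a learnable Privileged Dictionary.
# All names, shapes, and the cross-attention fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyFusion(nn.Module):
    """Fuses restoration features with privileged (GT-derived) features via a
    learnable dictionary that serves as a shared set of prior tokens."""
    def __init__(self, dim=64, num_atoms=32):
        super().__init__()
        # Learnable Privileged Dictionary: num_atoms prior vectors of size dim.
        self.dictionary = nn.Parameter(torch.randn(num_atoms, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim)   # queries from degraded-image features
        self.to_k = nn.Linear(dim, dim)   # keys from privileged features + dictionary
        self.to_v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, feat, priv_feat=None):
        # feat:      (B, N, dim) tokens from the restoration backbone
        # priv_feat: (B, M, dim) tokens from the ground truth (training) or from
        #            the model's preliminary output (inference); may be None.
        b = feat.size(0)
        atoms = self.dictionary.unsqueeze(0).expand(b, -1, -1)      # (B, A, dim)
        context = atoms if priv_feat is None else torch.cat([priv_feat, atoms], dim=1)
        q, k, v = self.to_q(feat), self.to_k(context), self.to_v(context)
        attn = F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return feat + self.proj(attn @ v)                            # residual fusion
```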
1. SIPL breaks the limitations of traditional Privilege Learning by extending privileged information from the training phase to inference, enabling self-improvement at test time, which is a significant innovation.
2. Experimental results demonstrate that SIPL achieves substantial PSNR improvements across various image restoration tasks, including composite degradation, deraining, dehazing, and denoising, performing exceptionally well on complex composite degradations.
3. The SIPL framework, particularly the Proxy Fusion module, is designed to be seamlessly integrated with diverse backbone architectures (e.g., PromptIR, Restormer, NAFNet, AdaIR), enhancing its versatility and practicality.
4. The paper provides comprehensive ablation studies, deeply analyzing the contributions and performance of individual SIPL components, which enhances the credibility of the conclusions.
1. Deeper theoretical understanding needed: the paper itself acknowledges that it lacks a deeper theoretical understanding of the optimization dynamics of Privilege Learning in this setting. While stability is validated empirically, the absence of a solid theoretical foundation may limit further optimization and insight.
2. Additional training cost and inference latency: this method requires retraining the baseline with the plugged-in module, and this retraining is needed for each new model. The iterative refinement process also increases latency compared to single-pass baselines.
3. A native Privilege Learning pipeline, an important baseline, is missing. The main performance table only compares SIPL with the base model (PromptIR) and omits a comparison with a standard Privilege Learning pipeline.
4. The learned dictionary, the core component of the method, lacks the necessary analyses. Additional studies are encouraged, e.g., on the impact of the dictionary size.
Please refer to the weaknesses section.
Lightly AI-edited

Self-Improved Prior for All-in-One Image Restoration
Soundness: 3: good
Presentation: 2: fair
Contribution: 2: fair
Rating: 4: marginally below the acceptance threshold
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
This paper introduces Self-Improved Privilege Learning (SIPL), a novel framework for all-in-one image restoration that extends privilege learning (PL) into the inference stage. The key idea is to enable models to iteratively refine their outputs by using their own initial restorations as pseudo-privileged information. The authors propose a Proxy Fusion module with a learnable Privileged Dictionary (PD) to retain high-quality priors from privileged (ground-truth-derived) features during training and reuse them during inference. The method is claimed to be architecture-agnostic and can be integrated into various backbones like PromptIR. Extensive experiments across multiple benchmarks (three-task, five-task, deweathering, and composite degradation) show notable PSNR/SSIM gains and strong qualitative improvements.
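My understanding of the inference-time self-refinement loop, in sketch form (the `model` and `encode_priv` interfaces and the number of steps are illustrative placeholders, not the authors' API):

```python
# Sketch of the iterative inference procedure described above; the model's own
# output is re-encoded and fed back as pseudo-privileged information.
import torch

@torch.no_grad()
def sipl_inference(model, encode_priv, degraded, num_steps=2):
    """Iteratively reuse the model's output as pseudo-privileged information."""
    # Step 0: plain forward pass without privileged features.
    restored = model(degraded, priv_feat=None)
    for _ in range(num_steps):
        # Treat the current restoration as a pseudo-ground-truth and re-encode it
        # into privileged features consumed by the Proxy Fusion module.
        pseudo_priv = encode_priv(restored)
        restored = model(degraded, priv_feat=pseudo_priv)
    return restored
```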
1. The paper presents a creative extension of Privilege Learning by introducing an inference-time reuse mechanism. The idea of “self-refinement through pseudo-privileged signals” is conceptually elegant and distinct from test-time adaptation or self-ensembling.
2. The proposed Proxy Fusion and Privileged Dictionary are well-motivated and described with clear mathematical formulation (Eqs. 2–4). The training and inference procedures are systematically explained, and the iterative refinement mechanism (Eqs. 5–7) is logically sound.
3. Experiments cover diverse benchmarks with consistent improvements over strong baselines.
1. The framework lacks a formal analysis of why the Privileged Dictionary enables stable self-refinement. The paper acknowledges this as a limitation, but providing some theoretical intuition (e.g., gradient variance reduction) would strengthen the work.
2. While the paper differentiates SIPL from test-time adaptation, the boundary remains somewhat blurred. A more rigorous comparison (quantitative or procedural) to self-distillation or self-training methods could better situate SIPL in the broader landscape.
3. Although the overhead is smaller than that of ensembling, repeated refinement still doubles inference time. The practical trade-off between latency and improvement could be quantified more thoroughly (e.g., FLOPs, runtime); see the measurement sketch after this list.
4. All benchmarks are synthetic; it remains unclear whether pseudo-privileged refinement helps under real-world degradations (e.g., RAW noise, ISP artifacts).
5. The paper treats the PD as a black box; no visualization of learned atoms or similarity between retrieved priors and ground truth is shown.
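A simple harness of the following form would make the latency/quality trade-off explicit and would also answer how quickly refinement saturates (the model, data loader, and metric helpers are placeholders; this is not the authors' code):

```python
# Sketch: sweep the number of refinement steps, reporting PSNR and per-image latency.
import time
import torch

@torch.no_grad()
def profile_refinement(model, encode_priv, loader, psnr_fn, max_steps=4, device="cuda"):
    for steps in range(max_steps + 1):            # 0 = single-pass baseline
        total_psnr, total_time, n = 0.0, 0.0, 0
        for degraded, clean in loader:
            degraded, clean = degraded.to(device), clean.to(device)
            if device == "cuda":
                torch.cuda.synchronize()
            t0 = time.perf_counter()
            restored = model(degraded, priv_feat=None)
            for _ in range(steps):
                restored = model(degraded, priv_feat=encode_priv(restored))
            if device == "cuda":
                torch.cuda.synchronize()
            total_time += time.perf_counter() - t0
            total_psnr += psnr_fn(restored, clean).item()
            n += 1
        print(f"steps={steps}: PSNR={total_psnr / n:.2f} dB, "
              f"latency={1000 * total_time / n:.1f} ms/image")
```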
1. What do PD entries represent visually or statistically? Can you visualize top-activated atoms or measure entropy / usage distribution?
2. How many refinement iterations are typically needed before saturation? Does performance ever degrade with more steps? Any evidence of oscillation?
3. If a PD is trained with one backbone (PromptIR), can it accelerate or improve another (Restormer) without retraining? This would demonstrate universality.
4. Have you tested SIPL on real-capture datasets (e.g., RainDS, LOL-V2, AIM-RealRain)? Does pseudo-privileged refinement still improve perceptual metrics (LPIPS/MUSIQ)?
Fully AI-generated

Self-Improved Prior for All-in-One Image Restoration
Soundness: 3: good
Presentation: 2: fair
Contribution: 3: good
Rating: 6: marginally above the acceptance threshold
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
This paper proposes a new image restoration framework, Self-Improved Privilege Learning (SIPL), designed to address optimization instability and inter-task conflicts in all-in-one image restoration models. Built upon the concept of Privilege Learning (PL), the authors extend its use beyond training to inference via a lightweight module called Proxy Fusion, which incorporates a learnable Privileged Dictionary (PD). At inference, the model uses its own outputs as pseudo-privileged information, enabling iterative self-refinement. The method is shown to be architecture-agnostic, efficient, and broadly applicable, with strong results on multiple benchmarks, including multi-task, composite degradation, and out-of-distribution scenarios.
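As I understand it, the claimed architecture-agnostic, plug-and-play integration amounts to something like the wrapper below, with the fusion module inserted between an existing encoder and decoder (all module names and the insertion point are assumptions on my part, not the paper's implementation):

```python
# Illustrative wrapper for attaching a Proxy Fusion module to an arbitrary backbone
# (e.g., PromptIR, Restormer, NAFNet); interfaces are hypothetical.
import torch.nn as nn

class SIPLWrapper(nn.Module):
    def __init__(self, backbone_encoder, backbone_decoder, proxy_fusion):
        super().__init__()
        self.encoder = backbone_encoder   # existing restoration encoder
        self.decoder = backbone_decoder   # existing restoration decoder
        self.fusion = proxy_fusion        # Proxy Fusion with the Privileged Dictionary

    def forward(self, degraded, priv_feat=None):
        feat = self.encoder(degraded)
        # priv_feat comes from the ground truth during training, or from the
        # model's own previous output (pseudo-privileged) during inference.
        feat = self.fusion(feat, priv_feat)
        return self.decoder(feat)
```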
1. The extension of PL into test-time self-refinement using a learned dictionary is both conceptually interesting and practically effective.
2. The proposed Proxy Fusion mechanism is lightweight, plug-and-play, and well-motivated. It introduces minimal overhead while providing measurable improvements.
3. Comprehensive experiments across four challenging benchmarks (Three-Task, Five-Task, Deweathering, Composite Degradation) convincingly demonstrate SIPL’s effectiveness. Improvements are consistently reported across various restoration tasks, with particularly notable gains in composite degradation scenarios.
4. The paper includes detailed ablations dissecting PL and SIPL contributions, multi-step refinement behavior, and efficiency vs. performance trade-offs. These analyses are thorough and help clarify SIPL’s practical value.
1. The paper is difficult to read in many parts due to heavy terminology and dense writing.
2. While the empirical results are strong, the paper lacks a deeper theoretical analysis of why the proposed self-refinement via pseudo-PI is stable and effective. A formal justification or insight into training dynamics under SIPL would strengthen the claims.
3. The performance of SIPL at inference appears to depend heavily on the quality of the initial model output. If the initial restoration is poor, the pseudo-privileged signal may be too noisy to guide useful correction. This limitation is acknowledged but not explored further.
4. The denoising numbers in Table 2 appear to be inconsistent: the PSNR of 31.45 is incorrectly bolded, although the corresponding SSIM is indeed higher.
1. How does the performance of SIPL degrade if the initial restoration is poor? Are there failure cases?
2. Is there any benefit to fine-tuning the Privileged Dictionary during inference, or is it always fixed?
3. What is the performance of SIPL under real-world degradations not covered by the benchmark datasets?
Fully AI-generated