ICLR 2026 - Reviews


Reviews

Summary Statistics

| EditLens Prediction | Count | Avg Rating | Avg Confidence | Avg Length (chars) |
|---|---|---|---|---|
| Fully AI-generated | 1 (25%) | 4.00 | 3.00 | 1157 |
| Heavily AI-edited | 0 (0%) | N/A | N/A | N/A |
| Moderately AI-edited | 2 (50%) | 2.00 | 4.50 | 3997 |
| Lightly AI-edited | 1 (25%) | 2.00 | 5.00 | 2336 |
| Fully human-written | 0 (0%) | N/A | N/A | N/A |
| Total | 4 (100%) | 2.50 | 4.25 | 2872 |
Advanced Image Forensics: Detecting Tampered and AI-Generated Images with Adversarial Learning

Soundness: 1: poor
Presentation: 1: poor
Contribution: 1: poor
Rating: 2: reject
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.

Summary:
This paper introduces a framework for unified image forensics, designed to simultaneously detect both tampered images and AI-generated content (AIGC). The core of the proposed method is an adversarial learning architecture featuring a generator and a tri-branch discriminator. The three discriminator branches are specialized to identify distinct forensic traces: one for edge-based tampering artifacts, one for noise-based inconsistencies, and a third for features characteristic of AI-generated images. Through adversarial training, the generator learns to produce increasingly realistic forgeries that challenge the discriminator, which in turn enhances the discriminator's ability to detect subtle manipulations. To facilitate evaluation, the authors construct a new benchmark named the "Tampered AIGC images Dataset," which contains AI-generated images that have been subsequently manipulated. The experimental results indicate that the proposed framework outperforms existing AIGI detection and IMDL methods across various datasets, including standard tampering benchmarks (CASIA V2, IDM), GAN-generated datasets (ProGAN, StyleGAN, CycleGAN), and the authors' custom combined dataset.

Strengths:
The primary merit of this paper lies in its focus on a highly relevant and pressing issue. The authors correctly identify the need for a unified framework that can simultaneously detect traditional image tampering and AI-generated content, a scenario that is increasingly common in real-world applications such as social media analysis.
Weaknesses:
- Severely flawed and incomprehensible methodological description: The paper's core methodology, particularly the formulation of its loss functions and the adversarial training procedure, is riddled with inconsistencies, ambiguities, and conceptual errors. This makes the proposed method impossible to understand or reproduce, and it casts serious doubt on the validity of the implementation used to generate the results. For example, the authors define the adversarial loss L_adv in two conflicting ways: Equation (2) presents the standard GAN minimax objective, while Equation (12) (and Algorithm 1) redefines L_adv as a weighted sum of the discriminators' binary cross-entropy (BCE) losses. There is also a severe disconnect between the architectural diagram, the mathematical formulation, and the actual experimental implementation: the AIGC branch is presented as a segmentation/localization module but appears to have been trained and evaluated as a simple image classifier.
- Outdated and incomplete experimental scope: The evaluation of AIGC detection is exclusively focused on GAN-based models (ProGAN, StyleGAN, CycleGAN). For a submission targeting ICLR 2026, the complete absence of experiments on diffusion models (e.g., Stable Diffusion, Midjourney), the dominant class of generative models for the past several years, is a critical oversight.
- Unreasonable experimental setup: As a method that integrates AIGI and IMDL detection, the paper retrains other AIGI and IMDL baselines on mixed datasets for comparison, causing those methods to lose their basic domain-specific capabilities. For example, on GAN-generated datasets, methods like CNNSpot should achieve 90%+ accuracy. This undermines fairness. In addition, the paper lacks a basic comparison that fuses two domain-specific detectors via a combined decision, which makes its fusion strategy unconvincing and raises doubts about whether the problem formulation is meaningful in the first place.
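To make the loss-definition conflict concrete, the two formulations the review contrasts can be sketched as follows (a hedged reconstruction: the symbols \( D_i \), \( \lambda_i \), and labels \( y_i \) are assumptions, since the paper's exact equations are not reproduced in this review). A standard GAN minimax objective has the form

\[
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\]

whereas a weighted sum of per-branch BCE losses over three discriminators would read

\[
\mathcal{L}_{\mathrm{adv}} = \sum_{i=1}^{3} \lambda_i \, \mathcal{L}_{\mathrm{BCE}}\big(D_i(\hat{I}_t), \, y_i\big).
\]

The first is a saddle-point objective optimized jointly by \( G \) and \( D \); the second is a plain supervised loss, so presenting both as the same quantity \( \mathcal{L}_{\mathrm{adv}} \) leaves the training procedure underspecified.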
Questions:
Please refer to "Weaknesses" for details.

EditLens Prediction: Moderately AI-edited
Advanced Image Forensics: Detecting Tampered and AI-Generated Images with Adversarial Learning

Soundness: 2: fair
Presentation: 2: fair
Contribution: 2: fair
Rating: 2: reject
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Summary:
The paper claims to employ adversarial learning with a generator \( G \) and three discriminators \( D \), where \( G \) aims to produce an image \( \hat{I}_t \) that appears "real" to deceive the discriminators.

Strengths and Weaknesses:
Please refer to the "Questions" section directly.

Questions:
- In image forensics (e.g., as in [1]), generators are typically used to simulate forgery processes, thereby enriching the training data for discriminators with diverse synthetic manipulations. In contrast, the generator here appears to "repair" tampered or AI-generated inputs by transforming them into realistic-looking images. This approach contradicts the core objective of forgery detection, which is to learn discriminative features of manipulation rather than to synthesize or restore realistic content. Consequently, the framework's motivation is unclear, and its effectiveness for detection tasks is questionable.
  [1] Zhuo et al., Self-Adversarial Training Incorporating Forgery Attention for Image Forgery Localization, IEEE TIFS, 2022.
- The paper proposes a unified framework for detecting both tampered and AI-generated images. While the goal is commendable, several key aspects require clarification:
  a) Ambiguous detection target: The introduction highlights "Tampered AIGC" as a distinct category requiring detection. However, it is unclear whether the model's target for such images is to identify them as AI-generated, as tampered, or as a combined state. The output lacks explicit indicators of which task (tampering vs. AI generation) a detection applies to, making the "simultaneous detection" objective ambiguous.
  b) Unclear labeling for the Tampered AIGC dataset: Table 2 presents pixel-level metrics (ACC, F1) for the "Tampered AIGC" dataset. It is crucial to clarify whether the ground-truth labels for this dataset are at the image level (indicating whether the image is tampered/AI-generated) or at the pixel level (marking specific tampered regions). This directly affects the interpretation of the reported results.
  c) Inconsistent metric interpretation in Table 1: Table 1 reports ACC and F1 scores across diverse datasets (tampered, GAN-generated, combined). Given that Section 4 specifies pixel-level evaluation for tampered regions and image-level evaluation for AI-generated content, it is ambiguous whether the F1/ACC values in Table 1 refer to pixel-level or image-level performance for each dataset type. This inconsistency hinders clear comparison.
  d) Lack of visual examples: The paper mentions applying "adversarial tampering" to create the Tampered AIGC dataset but provides no visual examples. Including sample images would significantly aid readers in understanding the nature and subtlety of these specific manipulations.
- The paper aims to address the joint detection of tampered and AI-generated images, yet the evaluation is limited to only two tampering datasets (CASIA V2 and IDM) and three GAN-based synthetic datasets (ProGAN, StyleGAN, CycleGAN). Given that AI-generated images now encompass diverse generation paradigms, especially diffusion models, the exclusive focus on GAN-generated content undermines the generalizability of the proposed method. To strengthen the evaluation, the authors could align their test sets with those used in recent state-of-the-art methods such as TruFor [2], which includes comprehensive tampering benchmarks (e.g., Columbia, Coverage, NIST, DSO), and with modern AI-generated image datasets such as GenImage [3], covering a broader spectrum of generative models.
  [2] Guillaro et al., TruFor: Leveraging All-Round Clues for Trustworthy Image Forgery Detection and Localization, CVPR 2023.
  [3] Zhu et al., GenImage: A Million-Scale Benchmark for Detecting AI-Generated Images, NeurIPS 2023.
- Other issues: (a) Table 4 does not specify the evaluation metric used, making the reported values ambiguous. (b) The decision threshold for computing F1 scores in Tables 1–3 is not stated. (c) While Appendices A.7.1 and A.7.3 define the pixel-level F1 score, the image-level F1 and accuracy (ACC) metrics referenced in Section 4 are not formally described; providing their definitions would improve reproducibility and clarity. (d) Table 1 is not referenced in the main text. (e) The methods "TruFor" and "DFVT" listed in Table 1 are not discussed or cited anywhere in the manuscript, and the reference for "DFVT" is missing.

EditLens Prediction: Moderately AI-edited
Advanced Image Forensics: Detecting Tampered and AI-Generated Images with Adversarial Learning

Soundness: 1: poor
Presentation: 1: poor
Contribution: 1: poor
Rating: 2: reject
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.

Summary:
This paper proposes a joint image forensics task that integrates tamper detection and AIGC detection, and presents a simple model with two separate branches for the two tasks. The model incorporates adversarial training to improve detection accuracy. Finally, a small mixed dataset containing both tampered and AIGC images is constructed, on which the model demonstrates certain performance advantages.

Strengths:
- Proposes a joint image forensics task by integrating tamper detection and AIGC detection.
- Presents a joint tamper/AIGC detection model based on adversarial training.
- Some experimental results show improvements over existing work.

Weaknesses:
- Unreasonable motivation: The paper states that "Existing solutions often focus on either tampered or AIGC images, yet both can coexist in various contexts," but fails to explain why tamper detection and AIGC detection need to be addressed jointly. What is the correlation between them? If tampering mainly refers to manual Photoshop manipulation, its artifacts and the generative artifacts of AIGC belong to two different categories; why is it beneficial to learn them simultaneously? The authors should analyze the problem background in more depth. Simply combining the two categories is not sufficient to support a contribution.
- Insufficient technical innovation: The use of adversarial learning to improve robustness/generalization is not novel, and the network modules, loss functions, and other components in the method section are all based on existing work. What is the authors' core original contribution?
- The constructed dataset (Tampered AIGC images Dataset) lacks novelty: it is created simply by applying copy-move, splicing, inpainting, and other tampering operations to 10,000 AIGC images. The workload involved is relatively simple, and compared with existing datasets it does not demonstrate any unique attributes.
- Unclear experimental setup: It is impossible to determine which training set the authors used. Furthermore, in Table 1, comparison methods such as TruFor and CNNSpot were trained on different training sets, making direct comparison unfair.

Questions:
See weaknesses.

EditLens Prediction: Lightly AI-edited
Advanced Image Forensics: Detecting Tampered and AI-Generated Images with Adversarial Learning

Soundness: 3: good
Presentation: 2: fair
Contribution: 2: fair
Rating: 4: marginally below the acceptance threshold
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Summary:
The paper proposes a dual-branch network that jointly performs AIGC detection and tampering localization via cross-attention and feature fusion. A new Tampered AIGC Dataset is introduced, built by applying copy-move, splicing, and inpainting to ProGAN/StyleGAN/CycleGAN outputs. The method outperforms existing forensic baselines, achieving 79% accuracy and a 0.76 F1 score on a mixed-domain test set.

Strengths:
- The technical approach is well motivated. The dual-branch architecture with cross-attention enables mutual learning between global authenticity cues (AIGC vs. real) and local inconsistency signals (tampering boundaries).
- The paper is clearly written and easy to follow.

Weaknesses:
- The dataset uses GANs (ProGAN/StyleGAN) but omits modern diffusion-based generators (e.g., Stable Diffusion), limiting practical relevance.
- The edits are traditional (copy-move, etc.), not reflecting current AI-assisted editing tools (e.g., Generative Fill and inpainting).
- Misses comparisons to recent AIGC-aware forensics (e.g., UniversalForensics, GenImage).
- Lacks localization metrics (e.g., IoU).
- **Excessive length: The paper exceeds the 9-page limit.**

Questions:
Please refer to Weaknesses.

EditLens Prediction: Fully AI-generated
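On the missing-localization-metrics point above: IoU (intersection over union) is the standard region-overlap score for tampering masks. A minimal sketch, assuming binary masks (hypothetical helper, not from the paper):

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    # Intersection over union between a predicted and a ground-truth
    # binary tampering mask. By convention, two empty masks score 1.0.
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = int(np.sum(pred | gt))
    return int(np.sum(pred & gt)) / union if union else 1.0
```

Unlike image-level accuracy, IoU directly penalizes predictions that flag the right image but the wrong region, which is why reviewers commonly ask for it alongside pixel-level F1.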