ICLR 2026 - Reviews



Summary Statistics

| EditLens Prediction | Count | Avg Rating | Avg Confidence | Avg Length (chars) |
|---|---|---|---|---|
| Fully AI-generated | 0 (0%) | N/A | N/A | N/A |
| Heavily AI-edited | 0 (0%) | N/A | N/A | N/A |
| Moderately AI-edited | 0 (0%) | N/A | N/A | N/A |
| Lightly AI-edited | 2 (67%) | 4.00 | 4.00 | 2508 |
| Fully human-written | 1 (33%) | 6.00 | 3.00 | 2181 |
| Total | 3 (100%) | 4.67 | 3.67 | 2399 |
Review 1

Title: Target Before You Perturb: Enhancing Locally Private Graph Learning via Task-Oriented Perturbation

Soundness: 3: good
Presentation: 4: excellent
Contribution: 3: good
Rating: 6: marginally above the acceptance threshold
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Summary:
This paper presents the Task-Oriented Graph Learning (TOGL) framework for locally private graph learning under Local Differential Privacy (LDP) constraints. The paper argues that, instead of perturbing randomly chosen dimensions of the node attributes, which provides privacy at the cost of utility, one should first identify task-specific features. To this end, the authors introduce the notion of "target, then perturb" for LDP. TOGL follows a three-stage pipeline: in the first stage, node features are perturbed locally by an LDP mechanism to satisfy the privacy requirement, and the server then denoises the perturbed features through neighborhood aggregation. In the second stage, the server identifies the top-m task-relevant feature dimensions from the denoised representations using either Fisher Discriminant Analysis (FDA) or Sparse Model Attribution (SMA). In the third stage, a second round of LDP perturbation is performed to balance privacy and utility. The authors evaluate on six small-to-medium-scale datasets in the main paper and two additional datasets in the appendix.

Strengths:
1. The introduction flows well and clearly motivates the proposed framework; overall, the paper is well motivated and nicely written.
2. The three-stage framework is intuitive and easy to understand.
3. TOGL demonstrates strong utility improvements over baseline LDP methods.
4. The method performs well across various GNN architectures.

Weaknesses:
1. The authors should at least mention the experiments on large-scale datasets and the robustness evaluations in the main text.
2. The method relies on access to task-specific signals, which may not always be practical in real-world scenarios.
3. The motivations for using FDA and SMA as feature-selection modules should be discussed, along with an analysis of how sensitive the algorithm is to this choice.
4. Could the authors also provide fairness evaluations for the baselines in Table 6?
5. There should be a discussion of how to select the hyperparameter $\rho$ in practical deployments.
6. The neighborhood aggregation used for denoising may be detrimental on heterophilic datasets.

Questions:
See weaknesses.

EditLens Prediction: Fully human-written
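For concreteness, the FDA-based selection the summary refers to (stage 2: scoring feature dimensions and keeping the top m) could look like the following sketch. This is an illustration of a standard Fisher-score criterion, not the authors' actual implementation; the function names and the exact normalization are assumptions.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-dimension Fisher score: between-class variance of the class
    means divided by the pooled within-class variance (an FDA-style
    relevance criterion)."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        n_c = Xc.shape[0]
        mu_c = Xc.mean(axis=0)
        between += n_c * (mu_c - overall_mean) ** 2
        within += n_c * Xc.var(axis=0)
    return between / (within + 1e-12)  # small constant avoids division by zero

def top_m_dims(X, y, m):
    """Indices of the m most task-relevant feature dimensions."""
    return np.argsort(fisher_scores(X, y))[::-1][:m]
```

On synthetic data where only one dimension carries the label signal, this selector recovers that dimension, which is the behavior a task-oriented perturbation scheme would rely on.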
Review 2

Title: Target Before You Perturb: Enhancing Locally Private Graph Learning via Task-Oriented Perturbation

Soundness: 2: fair
Presentation: 3: good
Contribution: 2: fair
Rating: 4: marginally below the acceptance threshold
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Summary:
This paper studies graph neural networks under local differential privacy and, for the first time, introduces a task-relevant optimization mechanism. The authors compare their approach with existing LDP methods that protect node features and demonstrate a better trade-off between privacy and utility.

Strengths:
1. In terms of novelty, the paper is the first to propose a multi-stage perturbation mechanism guided by task-relevant feature selection.
2. The proposed method achieves a superior privacy–utility trade-off compared to existing approaches.
3. The paper is well structured and clearly written, and the experimental results are easy to follow.

Weaknesses:
1. The proposed method includes an additional server-side aggregation step that merges the results from two rounds of perturbation, whereas the baselines do not. It is therefore unclear whether the observed improvement in the privacy–utility trade-off arises from the proposed LDP mechanism itself or from the server-side aggregation.
2. The authors compute task-relevant features from the first-round perturbed data and then perturb these features again in the second round. Intuitively, this means the most important features are perturbed twice, so the noise magnitude is larger than in existing single-shot mechanisms. The authors should clarify why this design leads to higher utility rather than degradation.
3. The main text lacks a clear definition of what is protected under LDP, only stating in the experiments that feature privacy is considered. Since GNNs may involve protecting features, edges, or labels, the lack of an explicit scope may cause confusion.
4. The paper does not evaluate resistance to feature inference attacks, which is an important aspect of verifying practical privacy protection. Such experiments are strongly recommended.

Questions:
See the issues discussed in the "Weaknesses" section above.

EditLens Prediction: Lightly AI-edited
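The server-side aggregation questioned in weakness 1 can be illustrated numerically. The sketch below, a toy model and not the paper's implementation, applies client-side Gaussian noise (standing in for a calibrated LDP mechanism; the $(\epsilon, \delta)$ calibration is omitted) and then averages each node's perturbed features over its neighborhood, which is pure post-processing and incurs no extra privacy cost.

```python
import numpy as np

def gaussian_perturb(X, sigma, rng):
    """Client-side Gaussian perturbation (stand-in for an LDP mechanism;
    calibrating sigma to an (epsilon, delta) budget is omitted here)."""
    return X + rng.normal(scale=sigma, size=X.shape)

def neighbor_denoise(X_priv, adj):
    """Server-side denoising: average each node's perturbed features over
    itself and its neighbours. Averaging K independent noisy copies shrinks
    the noise standard deviation by roughly 1/sqrt(K)."""
    A = adj + np.eye(adj.shape[0])        # add self-loops
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X_priv) / deg
```

On a graph whose neighbors share similar true features, the denoised error is far below the raw perturbation error, which is exactly why the reviewer asks whether this step, rather than the perturbation design, drives the reported gains.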
Review 3

Title: Target Before You Perturb: Enhancing Locally Private Graph Learning via Task-Oriented Perturbation

Soundness: 3: good
Presentation: 2: fair
Contribution: 2: fair
Rating: 4: marginally below the acceptance threshold
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Summary:
This paper presents a new locally private graph learning framework from a task-oriented graph learning perspective (TOGL). It contains three phases: locally private feature perturbation, task-relevant attribute analysis, and task-oriented private learning. Extensive experiments demonstrate TOGL's substantial utility improvements over existing baselines.

Strengths:
1. Well structured and clearly written.
2. The paper emphasizes the urgent need to connect local differential privacy (LDP) with downstream tasks to achieve better utility, and empirically demonstrates its importance.
3. The paper provides fundamental theoretical proofs and analysis, showing the correctness of its use of LDP.

Weaknesses:
1. The paper makes no contribution to the LDP mechanism itself; it only designs a task-oriented attribute-selection mechanism on the server to benefit downstream tasks. Phase I is a one-time perturbation, no different from LPGNN (Sajadmanesh & Gatica-Perez, 2021).
2. The presentation of Phase III in Figure 2 is misleading. According to Algorithm 2, the selected attributes $S^*$ and the hyperparameter $\rho$ do not directly affect the LDP mechanism; rather, they rely on LDP's post-processing invariance, which ensures strict privacy guarantees for the subsequent processing.
3. There is no summary of task-oriented methods. Is LPGNN a task-oriented method? If not, why not, and what special adjustments are needed for different tasks (node classification and link prediction) compared to the baselines? If it is, the contribution of this paper is diminished. Overall, the method is similar in approach to LPGNN, as both utilize embeddings and labels to constrain task performance.
4. The LDP mechanisms PM, MB, and SW lack an explanation of the coefficient $\delta$, which is described only for the Gaussian mechanism.
5. The accuracy in Figure 6 is normalized, which may overemphasize the differences between methods. It is recommended to show the actual ablation results.
6. The interpretation of Figure 9 is weak, casting doubt on the module's utility. The results show that random feature selection achieves near-suboptimal results at $\rho = 0$, indicating that random diversity is already helpful; at $\rho = 1$, the algorithm relies entirely on task-relevant features (approximately 30%) and almost loses its inference ability on downstream tasks, suggesting that this module contributes little.
7. Attack experiments are lacking to demonstrate that the method's privacy guarantees are not compromised, which is needed to address the second challenge in line 88.
8. The code is not open-sourced and reproducibility is insufficient, which reduces the credibility of this work.

Questions:
1. How is Equation 7 used in Algorithm 2 to implement the SMA mechanism?
2. Are the six LDP mechanisms implemented by swapping the perturbation mechanism within the LPGNN framework? Please clarify.
3. Do the LDP mechanisms share the same hyperparameters (e.g., $K$, $\rho$) on a given dataset?
4. Which mechanism is reported as the state of the art (SOTA) in Figures 4, 5, and 6?
5. Why is the analysis of the parameter $K$ presented as an ablation study in Figure 7?

EditLens Prediction: Lightly AI-edited
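The privacy accounting behind a two-round scheme of this kind can be made concrete. The sketch below assumes, purely for illustration, that $\rho$ splits a total budget $\epsilon$ between the two perturbation rounds under sequential composition, and uses the Laplace mechanism's noise scale $b = \Delta/\epsilon$; how $\rho$ actually enters the paper's mechanism is exactly what the reviewers ask the authors to clarify.

```python
def split_budget(epsilon, rho):
    """Hypothetical budget split: a fraction rho of the total budget goes to
    the second, task-oriented round. Sequential composition keeps the overall
    guarantee at eps1 + eps2 = epsilon."""
    eps1 = (1.0 - rho) * epsilon  # round 1: initial feature perturbation
    eps2 = rho * epsilon          # round 2: task-oriented perturbation
    return eps1, eps2

def laplace_scale(sensitivity, eps):
    """Laplace-mechanism noise scale b = sensitivity / epsilon."""
    return sensitivity / eps
```

Under this reading, an even split ($\rho = 0.5$) doubles the per-round noise scale relative to spending the whole budget in a single shot, which quantifies reviewer 2's concern that twice-perturbed features receive more noise than single-shot baselines.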