Hierarchical Multi-Stage Recovery Framework for Kronecker Compressed Sensing
Soundness: 3: good
Presentation: 3: good
Contribution: 2: fair
Rating: 6: marginally above the acceptance threshold
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
The paper studies the performance of Kronecker compressed sensing (CS) for three different hierarchical/structured sparsity patterns. New algorithms that combine reshaping of the signal, observation, and sensing tensors with joint-sparsity algorithms applied to tensor slices yield performance improvements over structure-agnostic baselines.
The paper covers theoretical analysis, algorithmic contributions, and numerical performance for several different signal setups.
The analytical results extend existing contributions in Kronecker CS by proposing and leveraging custom restricted isometry properties, obtaining better scaling laws for the number of measurements required by random CS matrix constructions. The results also include the usual robustness to noise in the measurements.
The proposed algorithms exploit the structure of Kronecker matrices for CS and sparsity to reduce the computational complexity of recovery vs. their agnostic counterparts.
The numerical comparisons are convincing.
It is not always clear whether the proposed sparsity structures arise in readily available applications where Kronecker CS is both feasible and an improvement over baselines. All experiments are based on synthetic data.
There appears to be no discussion of the role of the sparsifying transformation, which implicitly would also have to follow a Kronecker-product formulation.
Are there applications where the newly proposed hierarchical sparsity structures arise naturally?
Fully human-written
Hierarchical Multi-Stage Recovery Framework for Kronecker Compressed Sensing |
Soundness: 2: fair
Presentation: 2: fair
Contribution: 3: good
Rating: 6: marginally above the acceptance threshold
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
The authors consider the problem of recovering sparse vectors $x$ from Kronecker measurements, i.e., from a measurement matrix of the form $H=H_1\otimes \dots \otimes H_N$. The overarching idea is to perform the recovery in stages: by unfolding the measurement $y=Hx$ in different ways, the recovery problem can be viewed as a collection of multiple-measurement-vector (MMV) problems of the form $Y_j = H_j U_j$ that can be solved in an iterative fashion. They coin this class of algorithms multi-stage recovery (MSR). Importantly, their method is improved if $x$ is also assumed to have a Kronecker-sparse structure, since the MMV problems become coupled.
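To make the unfolding concrete, the vec identity underlying this MMV view can be checked numerically. This is a minimal sketch for two Kronecker levels with illustrative dimensions; variable names are mine, not the authors':

```python
import numpy as np

# Identity: (H2 kron H1) vec(X) = vec(H1 X H2^T), with column-major vec.
# Dimensions below are illustrative only.
rng = np.random.default_rng(0)
M1, N1, M2, N2 = 3, 5, 4, 6
H1 = rng.standard_normal((M1, N1))
H2 = rng.standard_normal((M2, N2))
X = rng.standard_normal((N1, N2))            # sparse signal reshaped into a matrix

y = np.kron(H2, H1) @ X.flatten(order="F")   # full Kronecker measurement
Y = H1 @ X @ H2.T                            # staged (unfolded) computation

assert np.allclose(y, Y.flatten(order="F"))
# Y = H1 (X H2^T): each column is a measurement through H1 alone, i.e.,
# an MMV problem with a shared sensing matrix H1 and jointly sparse columns.
```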
They derive a convergence guarantee for their method that is valid under the assumption that each $H_i$ has the RIP. Their bound can be applied to different types of sparsity in $x$, and in particular improves when $x$ is Kronecker-sparse. The method may, however, suffer from error propagation between the unfolded stages.
The main strength of the paper is that it proposes a method which performs better for Kronecker-based sparsity than for general hierarchical sparsity, which, with the exception of He and Joseph (2025), seems to be novel.
The recovery guarantee that the authors provide is interesting in that it only requires each constituent matrix to have a constant RIP ($\delta_{s_i}(H_i)<1/\sqrt{3}$) for convergence in the noiseless case. This is actually a lot better than previous guarantees -- using the bound in Theorem 1 to get an RIC (which is what is needed for e.g. the HiHTP to converge), one essentially needs the sum of the $\delta_{s_i}(H_i)$ to be less than $1/\sqrt{3}$, which intuitively means that each constant must be smaller than roughly $1/(I\sqrt{3})$, which will be very low for a high number of Kronecker levels. It is also interesting to see that their technique, through Corollary 1, improves on the previously quite weak bounds for standard sparsity.
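For reference, the standard product bound for Kronecker RICs (the form I assume the sum argument refers to; cf. the Duarte--Baraniuk analysis of Kronecker CS) reads

$$1+\delta_{s}(H_I\otimes\cdots\otimes H_1)\;\le\;\prod_{i=1}^{I}\bigl(1+\delta_{s_i}(H_i)\bigr),$$

which to first order gives $\delta_{s}\lesssim\sum_{i=1}^{I}\delta_{s_i}(H_i)$ and hence the per-factor requirement sketched above.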
The paper is sometimes hard to follow, mainly due to the complexity of the notation related to tensor algebra and the different sparsity notions. I think that refinement of the illustrations would go a long way toward alleviating this. The color coding in Figure 4 is, in my opinion, more confusing than helpful.
The RIC analysis offers some new interesting perspectives, but it seems very close to the one performed in [Roth et al.; 2020] and [Roth et al.; 2018]. The relation between the two should be better explained, see questions below.
The superiority of the proposed methods over the SOTA, both for hierarchically sparse and Kronecker-sparse vectors, is mainly argued for via a comparison of run-times. This metric depends heavily on the specifics of the implementation. Could something more objective be measured and presented, such as the number of iterations? I am particularly wary of this since the discussion of time complexity on page 5 seems to contain mistakes for HiHTP, see below.
**Relation of RIP analysis to [Roth et al.; 2020]** Although the formulations of the results are different, it seems that the formulation and proof of Theorem 1 in the manuscript at hand are essentially the same as those of Theorem 4. Can the authors comment on this, and in particular pinpoint exactly how the analyses differ?
**Complexity discussion and HiHTP** In the complexity discussion, the authors claim that the space- and time-complexity for HiHTP ($I=2$) are both $M^2N^2$. However, [Roth et al.; 2020] claim in their paper that the per-iteration time-complexity of their algorithm (for generic measurement matrices) is (in this notation) $MN^2$. Can the authors clarify?
Fully human-written |
Hierarchical Multi-Stage Recovery Framework for Kronecker Compressed Sensing |
Soundness: 3: good
Presentation: 2: fair
Contribution: 2: fair
Rating: 4: marginally below the acceptance threshold
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
This paper proposes a multi-stage sparse recovery framework for Kronecker compressed sensing, tailored to three sparsity models, with theoretical guarantees and simulations showing comparable performance and faster runtime than state-of-the-art methods.
1) Developed a versatile multi-stage sparse recovery algorithmic framework tailored to three different sparsity models (standard, hierarchical, and Kronecker-supported) for the Kronecker compressed sensing problem.
2) Provided theoretical recovery guarantees by analyzing the restricted isometry property of Kronecker product matrices under different sparsity models, and demonstrated comparable recovery performance with significantly reduced run time through simulations.
1) Compared to traditional compressed sensing, Kronecker compressed sensing (KCS) has not garnered sufficient attention, primarily due to its complexity (challenges in measurement-matrix construction, signal reconstruction, and hardware implementation) combined with the lack of performance advantages significant enough to justify that complexity. One notable strength of the method proposed in this paper is its reduced reconstruction complexity. However, how it compares to traditional methods in terms of runtime and reconstruction accuracy is of particular interest, yet the paper appears to lack the relevant experimental evaluations.
2) The construction of measurement matrices remains a major challenge in KCS. While the paper claims to improve the Restricted Isometry Property (RIP) conditions, it appears to offer no concrete method for constructing matrices that satisfy these enhanced conditions.
Besides the two major concerns mentioned above, I have the following questions:
1) The paper introduces three sparsity models, but it does not specify which types of real-world data these models are tailored to. Despite asserting broad applicability, the paper lacks simulations on real datasets.
2) As previously mentioned, a critical comparison with traditional compressed sensing methods in terms of reconstruction accuracy is essential, given the high computational complexity of KCS.
3) In the experiments, the analysis is limited to cases with at most three stages, i.e., $I\leq 3$. Why was this restriction imposed? Ideally, performance variation across different numbers of stages should have been explored.
Lightly AI-edited
Hierarchical Multi-Stage Recovery Framework for Kronecker Compressed Sensing |
Soundness: 3: good
Presentation: 3: good
Contribution: 4: excellent
Rating: 8: accept, good paper
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
In the Kronecker compressed sensing (KCS) framework, the measurement matrix is formed as the Kronecker product of multiple factor matrices, $H = H_I \otimes \cdots \otimes H_1$. While this allows for efficient measurement of signals with multidimensional structure, the dimensionality grows rapidly (e.g., $O(N^I)$), which makes existing methods computationally expensive. These methods also fail to fully exploit the individual factor-matrix structures and hierarchical sparsity.
This paper addresses these difficulties by introducing a "hierarchical view".
The contribution of this paper consists of the following three parts.
First, it is theoretically demonstrated that the Kronecker product structure induces a "hierarchical block partition" and that each factor matrix $H_i$ measures sparsity at a different hierarchical level. Exploiting the Kronecker structure, an algorithm, Multi-Stage Recovery (MSR), that performs reconstruction sequentially (or in parallel) at each level is proposed. This reduces the computational cost of signal reconstruction from $O((MN)^I)$ to $O(MN^I)$.
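The source of this cost reduction is that the Kronecker measurement can be applied stage-by-stage via mode products, without ever forming $H$ explicitly. A minimal sketch with illustrative sizes (here $I=3$; the helper `mode_product` is mine, not from the paper):

```python
import numpy as np

def mode_product(T, H, mode):
    # Multiply tensor T along axis `mode` by matrix H (mode-n product).
    return np.moveaxis(np.tensordot(H, T, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(1)
N, M = 4, 3  # illustrative sizes, not from the paper
Hs = [rng.standard_normal((M, N)) for _ in range(3)]
X = rng.standard_normal((N, N, N))

# Explicit Kronecker operator: O((MN)^I) time/memory for the matvec.
H = np.kron(np.kron(Hs[2], Hs[1]), Hs[0])
y_full = H @ X.flatten(order="F")

# Staged application, one factor matrix per hierarchical level:
Y = X
for i, Hi in enumerate(Hs):
    Y = mode_product(Y, Hi, i)

assert np.allclose(y_full, Y.flatten(order="F"))
```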
Second, a theoretical guarantee is developed by introducing a new type of RIP-based condition.
Finally, for all three sparsity models, the MSR methods (MSOMP, MSHTP, MSSBL, etc.) show comparable or superior accuracy to existing methods (KroOMP, HiHTP, KroSBL, etc.), while reducing computation time by 1–3 orders of magnitude.
By introducing a new perspective into the Kronecker compressed sensing framework, the paper proposes a signal recovery algorithm that significantly reduces the required computation compared to conventional methods. It also provides a novel performance analysis with theoretical guarantees and demonstrates the practical usefulness of the proposed algorithm through numerical experiments.
Kronecker compressed sensing is only applicable when the observed signal has multidimensional structural sparsity or a separable statistical structure, which limits the range of practical applications in which the proposed method is useful.
This paper refers to radar imaging and wireless communication as example applications where the proposed method is useful, because the sparsity of each signal can be separated across dimensions. Can the usefulness of this method be verified on real data from these fields?
Fully human-written |