Tuesday, November 03, 2015

Sequential Information Guided Sensing / Info-Greedy sequential adaptive compressed sensing

 

Adaptively choosing compressive measurements based on the information gathered from previous measurements is what these two preprints are looking into:

Sequential Information Guided Sensing by Ruiyang Song, Yao Xie, Sebastian Pokutta

We study the value of information in sequential compressed sensing by characterizing the performance of sequential information guided sensing in practical scenarios when information is inaccurate. In particular, we assume the signal distribution is parameterized as a Gaussian or Gaussian mixture model with estimated mean and covariance matrices, and measurements are taken compressively through noisy linear projections or using one-sparse vectors, i.e., observing one entry of the signal each time. We establish a set of performance bounds for the bias and variance of the signal estimator via posterior mean, by capturing the conditional entropy (which is also related to the size of the uncertainty), and the additional power required due to inaccurate information to reach a desired precision. Based on this, we further study how to estimate the covariance from direct samples or from covariance sketching. Numerical examples also demonstrate the superior performance of Info-Greedy Sensing algorithms compared with their random and non-adaptive counterparts.
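As a concrete illustration of the covariance sketching mentioned in the abstract, here is a minimal Python sketch (not the paper's estimator): it forms one-dimensional quadratic sketches b_i = (a_i^T x_i)^2 of independent zero-mean samples, whose expectation is a_i^T Sigma a_i, and recovers Sigma by plain least squares over its vectorization. The function name, the reuse of samples across sketches, and the least-squares solver are all illustrative assumptions.

    import numpy as np

    def sketch_covariance(samples, num_sketches, rng):
        # samples: (m, n) array of independent zero-mean draws with covariance Sigma.
        # Each quadratic sketch b_i = (a_i^T x_i)^2 satisfies
        # E[b_i] = a_i^T Sigma a_i = <a_i a_i^T, Sigma>, so Sigma can be
        # recovered (in expectation) by least squares over its vectorization.
        m, n = samples.shape
        A = np.empty((num_sketches, n * n))
        b = np.empty(num_sketches)
        for i in range(num_sketches):
            a = rng.normal(size=n)            # random sketching vector
            x = samples[i % m]                # one signal sample per sketch
            A[i] = np.outer(a, a).ravel()     # vectorized rank-one sensing matrix
            b[i] = (a @ x) ** 2               # scalar quadratic sketch
        sigma_vec, *_ = np.linalg.lstsq(A, b, rcond=None)
        S = sigma_vec.reshape(n, n)
        return 0.5 * (S + S.T)                # symmetrize the estimate

With roughly n(n+1)/2 or more sketches the symmetric matrix Sigma is identifiable from such rank-one measurements, and the estimate sharpens as the number of independent samples grows.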
and, earlier:
 
Info-Greedy sequential adaptive compressed sensing by Gabor Braun, Sebastian Pokutta, Yao Xie

We present an information-theoretic framework for sequential adaptive compressed sensing, Info-Greedy Sensing, where measurements are chosen to maximize the extracted information conditioned on the previous measurements. We lower bound the expected number of measurements for a given accuracy by drawing a connection between compressed sensing and the complexity theory of sequential optimization, and derive various forms of Info-Greedy Sensing algorithms under different signal and noise models, as well as under the sparse measurement vector constraint. We also show the Info-Greedy optimality of the bisection algorithm for k-sparse signals, as well as that of the iterative algorithm which measures using the maximum eigenvector of the posterior for Gaussian signals. For Gaussian mixture model (GMM) signals, a greedy heuristic is nearly Info-Greedy optimal compared to the gradient descent approach based on the minimum mean square error (MMSE) matrix. Numerical examples demonstrate the good performance of the proposed algorithms using simulated and real data.
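To make the Gaussian case concrete, here is a minimal Python sketch of the eigenvector rule described in the abstract: each measurement is taken along the maximum eigenvector of the current posterior covariance, and the posterior is then updated by standard Gaussian conditioning (a Kalman-style rank-one update). The variable names and the i.i.d. Gaussian noise model are assumptions for illustration.

    import numpy as np

    def info_greedy_gaussian(x, mu, Sigma, noise_std, num_measurements, rng):
        # Sequentially sense the unknown Gaussian signal x along the maximum
        # eigenvector of the posterior covariance, then condition the posterior
        # on the noisy scalar observation y = a^T x + w, w ~ N(0, noise_std^2).
        for _ in range(num_measurements):
            _, eigvecs = np.linalg.eigh(Sigma)        # eigenvalues in ascending order
            a = eigvecs[:, -1]                        # direction of largest posterior variance
            y = a @ x + noise_std * rng.normal()      # noisy linear projection
            s = a @ Sigma @ a + noise_std ** 2        # innovation variance
            k = (Sigma @ a) / s                       # gain vector
            mu = mu + k * (y - a @ mu)                # posterior mean update
            Sigma = Sigma - np.outer(k, Sigma @ a)    # posterior covariance update
        return mu, Sigma

    # Toy usage: recover a 50-dimensional Gaussian signal from 10 adaptive measurements.
    rng = np.random.default_rng(0)
    n = 50
    L = rng.normal(size=(n, n))
    Sigma0 = L @ L.T / n                              # a random PSD prior covariance
    x_true = rng.multivariate_normal(np.zeros(n), Sigma0)
    mu_hat, _ = info_greedy_gaussian(x_true, np.zeros(n), Sigma0, 0.1, 10, rng)
    print(np.linalg.norm(mu_hat - x_true) / np.linalg.norm(x_true))

For a Gaussian signal this direction maximizes the information gained per measurement, which is why the abstract calls the iterative eigenvector algorithm Info-Greedy optimal.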
 
 
