Wednesday, October 22, 2014

Compressed Manifold Modes for Mesh Processing - implementation -


Kiran Varanasi mentioned this work on the Google+ Community. He also said:


Compressed manifold modes are compressed eigenfunctions of the Laplace-Beltrami operator on 3D manifold surfaces. They constitute a novel functional basis, called the compressed manifold basis, where each function has local support. We derive a method, based on ADMM, for computing compressed manifold modes (CMMs) for discrete polyhedral 3D meshes. We show that CMMs identify key shape features, yielding an intuitive understanding of the basis for a human observer, where a shape can be processed as a collection of parts. We demonstrate various applications in 3D geometry processing. Our paper is published at the Symposium on Geometry Processing (SGP) 2014. We release the source code for the method.

Please also refer to "Compressed modes for variational problems in mathematics and physics" - Ozolins, Lai, Caflisch & Osher, PNAS 2013.

Prof. Stan Osher's talk on their PNAS paper (and more) can be found here:

http://nuit-blanche.blogspot.fr/2013/08/videos-and-slides-sahd-2013-duke.html


Thanks Kiran ! Here is the paper and attendant implementation:
 
Compressed Manifold Modes for Mesh Processing by Thomas Neumann, Kiran Varanasi, Christian Theobalt, Marcus Magnor, and Markus Wacker


This paper introduces compressed eigenfunctions of the Laplace-Beltrami operator on 3D manifold surfaces. They constitute a novel functional basis, called the compressed manifold basis, where each function has local support. We derive an algorithm, based on the alternating direction method of multipliers (ADMM), to compute this basis on a given triangulated mesh. We show that compressed manifold modes identify key shape features, yielding an intuitive understanding of the basis for a human observer, where a shape can be processed as a collection of parts. We evaluate compressed manifold modes for potential applications in shape matching and mesh abstraction. Our results show that this basis has distinct advantages over existing alternatives, indicating high potential for a wide range of use-cases in mesh processing.
The project page with implementation is here.
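
For the curious, here is a minimal dense-matrix sketch of the optimization behind these modes, min tr(Phi^T L Phi) + μ ||Phi||_1 subject to Phi^T Phi = I, solved with an ADMM splitting in the spirit of the papers above. To be clear, this is not the released code: it assumes an identity mass matrix and a small dense Laplacian, and the parameter values (mu, rho) and the path-graph toy problem are placeholder choices of mine.

import numpy as np

def soft(V, t):
    # elementwise soft-thresholding, the proximal operator of the l_1 norm
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def compressed_modes(L, k, mu=0.05, rho=10.0, iters=500):
    """ADMM sketch for min tr(X^T L X) + mu*||X||_1  s.t.  X^T X = I."""
    n = L.shape[0]
    rng = np.random.default_rng(0)
    X, _ = np.linalg.qr(rng.normal(size=(n, k)))   # random orthonormal start
    E = X.copy(); S = X.copy()                     # splitting variables X = E, X = S
    UE = np.zeros_like(X); US = np.zeros_like(X)   # scaled dual variables
    M = 2.0 * L + rho * np.eye(n)                  # system matrix for the E-update
    for _ in range(iters):
        # X-update: closest orthonormal matrix to the average of the two splits
        A = 0.5 * ((E - UE) + (S - US))
        U, _, Vt = np.linalg.svd(A, full_matrices=False)
        X = U @ Vt
        # E-update: carries the smooth Dirichlet energy tr(E^T L E)
        E = np.linalg.solve(M, rho * (X + UE))
        # S-update: carries the l_1 sparsity term
        S = soft(X + US, mu / rho)
        # dual ascent on both consensus constraints
        UE += X - E
        US += X - S
    return X

# toy usage: Laplacian of a path graph with 100 nodes
n = 100
Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Phi = compressed_modes(Lap, k=6)

On the toy path graph, the columns of Phi come out as compactly supported bumps, which is the qualitative behavior the paper describes on meshes.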
 

Tuesday, October 21, 2014

Books: "Foundations of Signal Processing" and "Fourier and Wavelet Signal Processing"

Looks like Martin Vetterli, Jelena Kovacevic and Vivek Goyal went through the monumental undertaking of writing a book on the foundations of signal processing aptly titled Foundations of Signal Processing. You can get it directly from Cambridge University Press, from Amazon, or from another retailer.


Let us note that a trimmed-down version is available in PDF from the book's website:


 
and let us also note that the book has a companion book called Fourier and Wavelet Signal Processing that is not yet finished. That second book gets to compressive sensing at the very end: a situation not unlike that of certain last sentences in Isaac Asimov's Foundation series.


 
PS: Until I looked it up, I did not know that Asimov had been a professor at Boston University, where Vivek, one of the authors, also teaches.
 
 

On Sparse Representation in Fourier and Local Bases - implementation -

Following up on the entry "On Sparse Representation in Fourier and Local Bases", Pier Luigi sent along an implementation of ProSparse and the code that produces the figures in that article:
Dear Igor,

thanks for the prompt answer.

In the attachment [ http://www.commsp.ee.ic.ac.uk/~pld/software/ ] you'll find the MATLAB files to reproduce Figures 2 and 3 of the paper.

Note that you still need to have the CVX package installed in your MATLAB distribution (http://cvxr.com/cvx/) to run the files.

You have to execute the script “show_counter_examples.m”. A variable at line 9 of show_counter_examples.m controls which result to reproduce:
- IsMultipleSolution = 0; ====> reproduces Figure 2
- IsMultipleSolution = 1; ====> reproduces Figure 3


Please note that these routines were written for the sole purpose of reproducing the results in the figures and should not be treated as routines that can easily be adapted to alternative settings.

The zip file is also available at the following link
http://www.commsp.ee.ic.ac.uk/~pld/software/


all the best,
Pier Luigi
Pier Luigi was kind enough to provide a deep link, but in my experience a better policy is to direct interested parties to a nice landing page, which in this case is his software page.
 

Monday, October 20, 2014

Thesis: Approximate Message Passing Algorithms for Generalized Bilinear Inference, Jason Parker


Here is a new thesis: Approximate Message Passing Algorithms for Generalized Bilinear Inference by Jason Parker

Recent developments in compressive sensing (CS) combined with increasing demands for effective high-dimensional inference techniques across a variety of disciplines have motivated extensive research into algorithms exploiting various notions of parsimony, including sparsity and low-rank constraints. In this dissertation, we extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of CS, to handle several classes of bilinear inference problems. First, we consider a general form of noisy CS where there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm called Matrix Uncertain GAMP (MU-GAMP) whose goal is minimization of mean-squared error of the signal estimates in the presence of these uncertainties, without attempting to estimate the uncertain measurement matrix itself. Next, we extend GAMP to the generalized-bilinear case, in which the measurement matrix is estimated jointly with the signals of interest, enabling its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. We derive this Bilinear GAMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. We then discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters. Finally, we propose a parametric extension known as P-BiG-AMP, which recovers BiG-AMP as a special case, that relaxes the assumption of statistically independent matrix entries by introducing parametric models for the two matrix factors. The resulting algorithm is rigorously justified for random affine parameterizations and constructed to allow its use with an even wider class of non-linear parameterizations, enabling numerous potential applications.
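
For a flavor of the message passing machinery the thesis builds on, here is a minimal sketch of the basic AMP iteration for sparse linear regression with a soft-threshold denoiser, i.e., the starting point that GAMP and then BiG-AMP generalize. This is not the thesis' code: the threshold rule (a multiple of the residual energy) and all parameter values are illustrative choices.

import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, iters=30, alpha=1.5):
    """Basic AMP for y = A x + w with a soft-threshold denoiser."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)   # threshold from residual energy
        x_new = soft(x + A.T @ z, tau)
        # Onsager correction: (number of nonzeros / m) times the previous residual
        z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
        x = x_new
    return x

# toy usage: A with i.i.d. N(0, 1/m) entries and a 10-sparse signal
rng = np.random.default_rng(0)
m, n, k = 100, 400, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)
print(np.linalg.norm(amp(A, y) - x0) / np.linalg.norm(x0))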

Jason is one of the Map Makers.

Sunday, October 19, 2014

Watching Closely: Asteroids, Comets



Twenty years ago, we watched a comet hit Jupiter using mostly Earth-based or orbiting telescopes. Today, a comet will fly past Mars, watched by our robots on and around Mars. In less than a month, we will be harpooning another comet at a landing spot that still needs a name, while Hubble continues to narrow down the Kuiper Belt objects that one of our spacecraft could visit next. We live in interesting times.


[Embedded video: live stream from eurospaceagency at livestream.com]

More in-depth discussions can be found in the videos from "The Comet Siding Spring and Its Close Approach to Mars Observer's Workshop" (4 Sessions).



Sunday Morning Insight: Crossing into P territory

 



To recap: in compressive sensing, it has been known for a while that solutions to certain combinatorial (NP) problems can be found through their l_1 relaxations, which are solvable in polynomial time (P). In fact, the whole field of compressive sensing took off when people realized one could be on the P side most of the time.
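
To make that concrete, the l_1 relaxation of the combinatorial sparse recovery problem is a linear program, hence solvable in polynomial time. Below is a small SciPy sketch using the standard reformulation x = u - v with u, v >= 0; the problem sizes are toy values I picked for illustration.

import numpy as np
from scipy.optimize import linprog

# min ||x||_1 subject to A x = y, written as an LP over (u, v) with x = u - v
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.normal(size=(m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x0

c = np.ones(2 * n)                         # objective: sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x0))          # ~0 on this easy instance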

In genome sequencing, the latest long-read technologies have enabled the whole field to move from NP territory into one where polynomial-time (P) algorithms do well. The threshold to cross is a read length of about 2K (2,000 bases). Here is what we can read about the PacBio technology:



When you cross into P territory, many things change; here is one:

and here is what people say about the Oxford Nanopore technology.
 
 
 
 

Friday, October 17, 2014

On Sparse Representation in Fourier and Local Bases

I just received the following from Pier Luigi:
Dear Igor,

I hope things are good with you and sorry for this email out of the blue.

I am writing to you because, together with Yue Lu from Harvard University, we have revisited the classical problem of finding the sparse representation of a signal in a pair of bases, specifically Fourier and local bases, and introduced a polynomial-complexity algorithm, ProSparse, that solves the problem under the unicity constraint only. Thus the algorithm also works when l_1 fails. In this way we implicitly demonstrate that the sparse representation problem for these specific structured dictionaries is not NP-hard.

This is a result that applies, currently, to a limited number of cases, but given that it looks at the sparse representation problem in a way different from the traditional l_1 approach, I thought it might intrigue the Nuit Blanche community. And this is why I'm contacting you. If you think it is worth it, you may consider mentioning it in your blog.

The paper is open-access and is available at:


all the best,
Pier Luigi
Thanks Pier Luigi ! Here is the paper:
 
 
 
 
We consider the classical problem of finding the sparse representation of a signal in a pair of bases. When both bases are orthogonal, it is known that the sparse representation is unique when the sparsity K of the signal satisfies K < 1/μ(D), where μ(D) is the mutual coherence of the dictionary. Furthermore, the sparse representation can be obtained in polynomial time by Basis Pursuit (BP), when K < 0.91/μ(D). Therefore, there is a gap between the unicity condition and the one required to use the polynomial-complexity BP formulation. For the case of general dictionaries, it is also well known that finding the sparse representation under the only constraint of unicity is NP-hard. In this paper, we introduce, for the case of Fourier and canonical bases, a polynomial complexity algorithm that finds all the possible K-sparse representations of a signal under the weaker condition that K < √2/μ(D). Consequently, when K < 1/μ(D), the proposed algorithm solves the unique sparse representation problem for this structured dictionary in polynomial time. We further show that the same method can be extended to many other pairs of bases, one of which must have local atoms. Examples include the union of Fourier and local Fourier bases, the union of discrete cosine transform and canonical bases, and the union of random Gaussian and canonical bases.
An implementation can be found here.
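
To see the thresholds of the abstract in action, here is a small sketch that builds the union of a Fourier and a canonical basis and computes its mutual coherence, which equals 1/√n for this pair. The dimension n = 64 is an arbitrary choice of mine, not something from the paper.

import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT (Fourier basis)
D = np.hstack([F, np.eye(n)])               # union dictionary [Fourier | canonical]

G = np.abs(D.conj().T @ D)                  # inner products between atoms
np.fill_diagonal(G, 0.0)                    # ignore each atom with itself
mu = G.max()                                # mutual coherence, here 1/sqrt(n)

print(mu, 1 / np.sqrt(n))                   # ~0.125 for n = 64
print("uniqueness:        K <", 1 / mu)     # unique K-sparse representation
print("Basis Pursuit:     K <", 0.91 / mu)  # BP provably succeeds
print("ProSparse (paper): K <", np.sqrt(2) / mu)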
 

Thursday, October 16, 2014

Around The Blogs in 78 Summer Hours

There were quite a few blog entries of interest recently; here is a sample that caught my attention:

Anna
Muthu
Alex
Dirk
Dustin
Tianyi
Afonso
Djalil
Laurent
Hal
Suresh
Andrew
Pip
Carson
Stephen
Danny
Patrick
John
While on Nuit Blanche, we had:


 Photo Credits: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA
 

Wednesday, October 15, 2014

Tonight: Paris Machine Learning #2 Season 2: Learning Causality, Words, the Higgs & more.







Tonight's event will be sponsored by ANEO. Here is the lineup for tonight's Paris Machine Learning Meetup: 
The abstracts so far:

Emanuela Boros, "Learning word representations for event extraction from text", http://emanuelaboros.ro

Description: We propose a solution to the event extraction task that relies on learning word representations with a neural network. Domain-relevant distributed word representations are learned automatically from a domain-specific unlabeled corpus, without complex linguistic processing, and are then fed to a supervised classifier. With only these word embeddings representing events, the system outperforms previous state-of-the-art event extraction models.
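
As a toy illustration of that pipeline (pretrained word embeddings fed to a supervised classifier), here is a hedged sketch: the random vectors stand in for embeddings a real system would learn from an unlabeled corpus, and the labels and context windows are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

# stand-in embeddings; a real system would learn these from an unlabeled corpus
rng = np.random.default_rng(0)
vocab = ["earthquake", "struck", "the", "city", "a", "concert", "was", "held"]
emb = {w: rng.normal(size=50) for w in vocab}

def featurize(window):
    """Represent a candidate event trigger by the mean of its context vectors."""
    return np.mean([emb[w] for w in window if w in emb], axis=0)

X = np.stack([featurize(["earthquake", "struck", "the", "city"]),
              featurize(["a", "concert", "was", "held"])])
y = np.array([1, 0])   # 1 = disaster event trigger, 0 = other

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))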

Stay tuned to this entry for more abstracts, the presentations, and video of the meetup.
 
 

Tuesday, October 14, 2014

A Bird's Eye View of Compressive Sensing, Advanced Matrix Factorization, Randomized Numerical Linear Algebra, Big Data and more ...

  
As one can see from reading Nuit Blanche, it is becoming clear that compressive sensing has affected, in some fashion or another, the way we think about and solve problems when there is plenty of data. Subjects like Advanced Matrix Factorization or Randomized Numerical Linear Algebra (RandNLA) are examples of that. Today, we have a few review papers that provide this bird's-eye view. They are as useful to newcomers as they are to practicing researchers and machine learning professionals. Some of the subjects mentioned here are barely touched on in textbooks, but eventually will be.
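
As one example of the randomized linear algebra ideas these surveys touch on, here is a minimal randomized SVD in the style of Halko, Martinsson and Tropp. The oversampling and power-iteration parameters are common defaults, not prescriptions from the papers below.

import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    """Rank-k truncated SVD from a random sketch of the range of A."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.normal(size=(A.shape[1], k + oversample))  # sample the range of A
    for _ in range(power_iters):                           # power iterations sharpen spectral decay
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                                 # orthonormal basis for the sampled range
    B = Q.T @ A                                            # small (k + oversample) x n matrix
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]

# toy check on a matrix of exact rank 20
rng = np.random.default_rng(1)
A = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 300))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # ~1e-14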


Modeling and Optimization for Big Data Analytics by Konstantinos Slavakis, Georgios B. Giannakis, and Gonzalo Mateos
With the pervasive sensors continuously collecting and storing massive amounts of information, there is no doubt this is an era of data deluge. Learning from these large volumes of data is expected to bring significant science and engineering advances along with improvements in quality of life. However, with such a big blessing come big challenges. Running analytics on voluminous data sets by central processors and storage units seems infeasible, and with the advent of streaming data sources, learning must often be performed in real time, typically without a chance to revisit past entries. “Work-horse” signal processing (SP) and statistical learning tools have to be re-examined in today’s high-dimensional data regimes. This article contributes to the ongoing cross-disciplinary efforts in data science by putting forth encompassing models capturing a wide range of SP-relevant data analytic tasks, such as principal component analysis (PCA), dictionary learning (DL), compressive sampling (CS), and subspace clustering. It offers scalable architectures and optimization algorithms for decentralized and online learning problems, while revealing fundamental insights into the various analytic and implementation tradeoffs involved. Extensions of the encompassing models to timely data-sketching, tensor- and kernel-based learning tasks are also provided. Finally, the close connections of the presented framework with several big data tasks, such as network visualization, decentralized and dynamic estimation, prediction, and imputation of network link load traffic, as well as imputation in tensor-based medical imaging are highlighted. 

Convex Optimization for Big Data: Scalable, randomized, and parallel algorithms for big data analytics
by Volkan Cevher, Stephen Becker, and Mark Schmidt

This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.


Mathematics of sparsity (and a few other things) by Emmanuel Candès

In the last decade, there has been considerable interest in understanding when it is possible to find structured solutions to underdetermined systems of linear equations. This paper surveys some of the mathematical theories, known as compressive sensing and matrix completion, that have been developed to find sparse and low-rank solutions via convex programming techniques. Our exposition emphasizes the important role of the concept of incoherence.

Less is more: compressive sensing in optics and image science by Damber Thapa, Kaamran Raahemifar and Vasudevan Lakshminarayanan
We review an emerging sampling theory, namely compressive sampling, which reproduces signals or images from a much smaller set of samples than that required by the Nyquist–Shannon criterion. This review article outlines the main theoretical concepts surrounding compressive sensing and discusses the relationship among compressive sensing, signal sparsity, and sensing modalities. We also describe the applications of compressive sensing in the field of optics and image science.

 

 
 
