
Saturday, August 01, 2015

Saturday Morning Video: Dark Knowledge, Geoff Hinton

Dark Knowledge by Geoff Hinton

 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Saturday Morning Video: A Mouse Brain

From Science's "An incredibly detailed tour through the mouse brain"
 
 
 

Saturday Morning Video: Reinforcement Learning, David Silver

 
 

Saturday Morning Video: Machine Learning Summer School Sydney 2015 (Slides and videos)


The program booklet can be downloaded here.
The lab instructions are available here.

Intro to Machine Learning (Webers)

Probabilistic Graphical Models (Domke)

Optimization (Schmidt)

Boosting (Nock)

Computational Information Geometry and Machine Learning (Nielsen)

ML for Recommender Systems (Karatzoglou)

Structured Prediction for Computer Vision (Gould)

Bayesian inference and MCMC (Carpenter)

Approximate Inference (Ihler)

Stan Hands-on (Carpenter)

Bayesian non-parametrics: Gaussian Processes (Lawrence)

Bayesian non-parametrics: Dirichlet Processes and friends (Adams)


Natural Language Processing (Johnson)

Bayesian non-parametric methods for unsupervised models (Buntine)


Deep Learning (Qu)


Prediction Markets (Reid)
 
 

Friday, July 31, 2015

Nuit Blanche in Review ( July 2015 )

Since the last Nuit Blanche in Review ( June 2015 ), we saw Pluto up close while COLT and ICML occurred in France. As a result, we had quite a few implementations made available by their respective authors. Enjoy!

Implementations:
In-depth:
 
Job:
Conferences:
Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute
 

The Spectral Norm of Random Inner-Product Kernel Matrices

When dealing with covariance matrices, one always wonders whether thresholding the coefficients will yield a matrix with similar properties. Today, we have some answers in that area:



We study the spectra of p×p random matrices K with off-diagonal (i,j) entry equal to n^{-1/2} k(X_i^T X_j / n^{1/2}), where the X_i's are the rows of a p×n matrix with i.i.d. entries and k is a scalar function. It is known that under mild conditions, as n and p increase proportionally, the empirical spectral measure of K converges to a deterministic limit μ. We prove that if k is a polynomial and the distribution of the entries of X_i is symmetric and satisfies a general moment bound, then K is the sum of two components, the first with spectral norm converging to ||μ|| (the maximum absolute value of the support of μ) and the second a perturbation of rank at most two. In certain cases, including when k is an odd polynomial function, the perturbation is 0 and the spectral norm of K converges to ||μ||. If the entries of X_i are Gaussian, we also prove that ||K|| converges to ||μ|| for a large class of odd non-polynomial functions k. In general, the perturbation may contribute spike eigenvalues to K outside of its limiting support, and we conjecture that they have deterministic limiting locations as predicted by a deformed GUE model. Our study of such matrices is motivated by the analysis of statistical thresholding procedures to estimate sparse covariance matrices from multivariate data, and our results imply an asymptotic approximation to the spectral norm error of such procedures when the population covariance is the identity.
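The random-matrix model in the abstract is easy to simulate. Below is a small sketch (not from the paper) that builds such a kernel matrix for an odd polynomial kernel and computes its spectral norm; the dimensions and the choice k(t) = t^3 - 3t (the Hermite polynomial He3) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 600, 300                      # proportional regime: p/n stays fixed
X = rng.standard_normal((p, n))      # rows X_i with i.i.d. entries

def k(t):
    # An odd polynomial kernel (Hermite polynomial He3); illustrative choice.
    return t**3 - 3.0 * t

G = (X @ X.T) / np.sqrt(n)           # rescaled inner products X_i^T X_j / n^{1/2}
K = k(G) / np.sqrt(n)                # entrywise kernel with n^{-1/2} scaling
np.fill_diagonal(K, 0.0)             # the model only specifies off-diagonal entries

spectral_norm = np.linalg.norm(K, 2)
print(f"p/n = {p/n:.2f}, ||K|| = {spectral_norm:.3f}")
```

Repeating this for growing n and p (with p/n fixed) is one way to watch the spectral norm settle toward its deterministic limit.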
 
 
 
 

A Perspective on Future Research Directions in Information Theory



Here is the table of contents:
  1. What is Information Theory?
  2. Communications
  3. Networks and Networked Systems
  4. Control Theory
  5. Neuroscience
  6. Signal Processing
  7. Statistics and Machine Learning
  8. Genomics and Molecular Biology
  9. Theoretical Computer Science
  10. Physics
  11. Economics and Finance


A Perspective on Future Research Directions in Information Theory by Jeffrey G. Andrews, Alexandros Dimakis, Lara Dolecek, Michelle Effros, Muriel Medard, Olgica Milenkovic, Andrea Montanari, Sriram Vishwanath, Edmund Yeh, Randall Berry, Ken Duffy, Soheil Feizi, Saul Kato, Manolis Kellis, Stuart Licht, Jon Sorenson, Lav Varshney, Haris Vikalo

Information theory is rapidly approaching its 70th birthday. What are promising future directions for research in information theory? Where will information theory be having the most impact in 10-20 years? What new and emerging areas are ripe for the most impact, of the sort that information theory has had on the telecommunications industry over the last 60 years? How should the IEEE Information Theory Society promote high-risk new research directions and broaden the reach of information theory, while continuing to be true to its ideals and insisting on the intellectual rigor that makes its breakthroughs so powerful? These are some of the questions that an ad hoc committee (composed of the present authors) explored over the past two years. We have discussed and debated these questions, and solicited detailed inputs from experts in fields including genomics, biology, economics, and neuroscience. This report is the result of these discussions.
 
 

Thursday, July 30, 2015

Dimensionality Reduction for k-Means Clustering and Low Rank Approximation

Random projections are rank-k projection-cost preserving sketches (Theorem 12)! From the paper:
We start by noting that both problems are special cases of a general constrained k-rank approximation problem [DFK+04], which also includes problems related to sparse and nonnegative PCA [PDK13, YZ13, APD14]. Then, following the coreset definitions of [FSS13], we introduce the concept of a projection-cost preserving sketch, an approximation à where the sum of squared distances of Ã's columns from any k-dimensional subspace (plus a fixed constant independent of the subspace) is multiplicatively close to that of A. This ensures that the cost of any k-rank projection of A is well approximated by Ã, and thus we can solve the general constrained k-rank approximation problem approximately for A using Ã. Next, we give several simple and efficient approaches for obtaining projection-cost preserving sketches with (1 + ϵ) relative error. All of these techniques simply require computing an SVD, multiplying by a random projection, random sampling, or some combination of the three.
Without further ado:

 

Dimensionality Reduction for k-Means Clustering and Low Rank Approximation by Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, Madalina Persu

We show how to approximate a data matrix A with a much smaller sketch à that can be used to solve a general class of constrained k-rank approximation problems to within (1+ϵ) error. Importantly, this class of problems includes k-means clustering and unconstrained low rank approximation (i.e. principal component analysis). By reducing data points to just O(k) dimensions, our methods generically accelerate any exact, approximate, or heuristic algorithm for these ubiquitous problems.
For k-means dimensionality reduction, we provide (1+ϵ) relative error results for many common sketching techniques, including random row projection, column selection, and approximate SVD. For approximate principal component analysis, we give a simple alternative to known algorithms that has applications in the streaming setting. Additionally, we extend recent work on column-based matrix reconstruction, giving column subsets that not only 'cover' a good subspace for A, but can be used directly to compute this subspace.
Finally, for k-means clustering, we show how to achieve a (9+ϵ) approximation by Johnson-Lindenstrauss projecting data points to just O(log k/ϵ^2) dimensions. This gives the first result that leverages the specific structure of k-means to achieve dimension independent of input size and sublinear in k.
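To make the dimensionality-reduction idea concrete, here is a toy sketch (not the authors' code) of the Johnson-Lindenstrauss route: cluster the randomly projected points, then evaluate the k-means cost of the resulting assignment back in the original space. The synthetic data, the sketch dimension m, and the plain Lloyd's solver are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 well-separated Gaussian clusters in 50 dimensions.
k, d, n_per = 3, 50, 100
centers = 10.0 * rng.standard_normal((k, d))
A = np.vstack([c + rng.standard_normal((n_per, d)) for c in centers])

def kmeans_cost(A, labels, k):
    # Sum of squared distances from each point to its cluster centroid.
    return sum(((A[labels == j] - A[labels == j].mean(axis=0)) ** 2).sum()
               for j in range(k) if np.any(labels == j))

def lloyd(A, k, iters=25, seed=0):
    # Plain Lloyd's algorithm with random initialization; returns labels.
    r = np.random.default_rng(seed)
    C = A[r.choice(len(A), k, replace=False)]
    for _ in range(iters):
        labels = ((A[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        C = np.array([A[labels == j].mean(axis=0) if np.any(labels == j)
                      else C[j] for j in range(k)])
    return labels

def best_lloyd(A, k, restarts=5):
    # Keep the restart with the lowest cost to avoid bad local optima.
    runs = [lloyd(A, k, seed=s) for s in range(restarts)]
    return min(runs, key=lambda lab: kmeans_cost(A, lab, k))

# Random projection to m << d dimensions, then cluster in the sketch space.
m = 10
Pi = rng.standard_normal((d, m)) / np.sqrt(m)
labels_sketch = best_lloyd(A @ Pi, k)
cost_sketch = kmeans_cost(A, labels_sketch, k)   # cost measured in the ORIGINAL space
cost_full = kmeans_cost(A, best_lloyd(A, k), k)
print(f"cost ratio (sketch / full): {cost_sketch / cost_full:.3f}")
```

With well-separated clusters the ratio stays close to 1: the assignment found in 10 dimensions is nearly as good as one found in all 50.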
 
 

Wednesday, July 29, 2015

Training Very Deep Networks - implementation -

As each layer of a deep neural network can be viewed as an iteration of a reconstruction solver, one wonders whether and how to design layers so that they fit the generic behavior of traditional solvers (which use many iterations). In turn, this may provide some insight into how to design reconstruction solvers (i.e., allow some information from far-away iterations to come back into the loop). Here is the beginning of an answer in the following preprint:


Training Very Deep Networks by Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
An implementation of this algorithm can be found at: http://people.idsia.ch/~rupesh/very_deep_learning/
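A highway layer, as described in the abstract, mixes a nonlinear transform H(x) with the untouched input via a sigmoid "transform gate" T(x). Below is a minimal NumPy sketch (not the authors' implementation); the tanh choice for H, the shared weights across layers, and the exact bias value are illustrative assumptions, though a negative gate bias (so the network starts by carrying the input through) is in the spirit of the paper's initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

def highway_layer(x, W_H, b_H, W_T, b_T):
    # One highway layer: y = T(x) * H(x) + (1 - T(x)) * x.
    H = np.tanh(x @ W_H + b_H)                   # candidate transform
    T = 1.0 / (1.0 + np.exp(-(x @ W_T + b_T)))   # transform gate in (0, 1)
    return T * H + (1.0 - T) * x                 # gated mix of transform and carry

d = 8
x = rng.standard_normal((4, d))                  # small batch of inputs
W_H = 0.1 * rng.standard_normal((d, d))
W_T = 0.1 * rng.standard_normal((d, d))
b_H = np.zeros(d)
b_T = np.full(d, -2.0)   # gates start mostly closed: the input is carried through

y = x
for _ in range(50):      # stack 50 layers (weights shared here for brevity)
    y = highway_layer(y, W_H, b_H, W_T, b_T)
print(f"mean |y|: {np.abs(y).mean():.3f}")       # activations stay bounded
```

Because each output is a convex combination of a bounded transform and the previous activation, even a 50-layer stack neither explodes nor vanishes, which is the property that makes gradient descent workable at these depths.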


Tuesday, July 28, 2015

Compressive Sensing for #IoT: Photoplethysmography-Based Heart Rate Monitoring in Physical Activities via Joint Sparse Spectrum Reconstruction, TROIKA

 
 
Zhilin just sent me the following:
 
Hi Igor,

I hope all is well with you!

In recent years, I have been working on signal processing for wearable health monitoring, such as signal processing of vital signs in smart watch and other wearables. Particularly, I've applied compressed sensing to this area, and achieved some successes on heart rate monitoring for fitness tracking and health monitoring. So I think you and your blog's readers may be interested in the following work of my collaborators and me:
 
Zhilin Zhang, Zhouyue Pi, Benyuan Liu, TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type Photoplethysmographic Signals During Intensive Physical Exercise, IEEE Trans. on Biomedical Engineering, vol. 62, no. 2, pp. 522-531, February 2015
(preprint: http://arxiv.org/abs/1409.5181)

Zhilin Zhang, Photoplethysmography-Based Heart Rate Monitoring in Physical Activities via Joint Sparse Spectrum Reconstruction, IEEE Transactions on Biomedical Engineering, vol. 62, no. 8, pp. 1902-1910, August 2015
(preprint: http://arxiv.org/abs/1503.00688)


In fact, I think the problem of Photoplethysmography-based heart rate monitoring can be well formulated into various kinds of compressed sensing models, such as multiple measurement vector (MMV) model (as shown in my second paper), gridless compressed sensing model (also mentioned in my second paper), and time-varying sparsity model. Since the data are available online (the download link was given in my papers), I hope these data can encourage compressed sensing researchers to join this area, revealing potential values of compressed sensing in these real-life problems.


I would very much appreciate it if you could introduce my work on your blog.


Thank you!

Best regards,
Zhilin
Thanks Zhilin! And yes, I am glad to cover work on how compressive sensing and related techniques can make sense of IoT-type sensors (and work that includes datasets!). Without further ado:


TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type Photoplethysmographic Signals During Intensive Physical Exercise by Zhilin Zhang, Zhouyue Pi, Benyuan Liu

Heart rate monitoring using wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. So far few works have studied this problem. In this work, a general framework, termed TROIKA, is proposed, which consists of signal decomposiTion for denoising, sparse signal RecOnstructIon for high-resolution spectrum estimation, and spectral peaK trAcking with verification. The TROIKA framework has high estimation accuracy and is robust to strong motion artifacts. Many variants can be straightforwardly derived from this framework. Experimental results on datasets recorded from 12 subjects during fast running at the peak speed of 15 km/hour showed that the average absolute error of heart rate estimation was 2.34 beats per minute (BPM), and the Pearson correlation between the estimates and the ground-truth heart rate was 0.992. This framework is of great value to wearable devices such as smart-watches which use PPG signals to monitor heart rate for fitness.
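The "spectral peak tracking" stage of TROIKA can be illustrated with a toy sketch: in each frame, pick the strongest spectral peak within a small window around the previous heart-rate estimate. This is a simplification of the paper's tracking-with-verification stage; the window width, frequency grid, and synthetic spectra below are all assumptions.

```python
import numpy as np

def track_heart_rate(spectra, freqs, f0, max_step=0.2):
    """Pick, in each frame, the strongest spectral peak within +/- max_step Hz
    of the previous estimate. spectra: (n_frames, n_freqs) magnitude spectra;
    freqs: frequency grid in Hz; f0: initial heart rate in Hz."""
    est = [f0]
    for frame in spectra:
        idx = np.flatnonzero(np.abs(freqs - est[-1]) <= max_step)  # search window
        est.append(freqs[idx[np.argmax(frame[idx])]])              # strongest peak
    return 60.0 * np.array(est[1:])                                # Hz -> BPM

# Synthetic demo: a single spectral peak drifting slowly upward.
freqs = np.linspace(0.5, 4.0, 200)
true_f = 1.5 + 0.01 * np.arange(20)              # slowly rising heart rate (Hz)
spectra = np.exp(-((freqs[None, :] - true_f[:, None]) / 0.05) ** 2)
bpm = track_heart_rate(spectra, freqs, f0=1.5)
print(bpm[:3])
```

Restricting the search to a window around the previous estimate is what keeps the tracker from jumping to a strong motion-artifact peak elsewhere in the spectrum.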
 

Photoplethysmography-Based Heart Rate Monitoring in Physical Activities via Joint Sparse Spectrum Reconstruction by Zhilin Zhang

Goal: A new method for heart rate monitoring using photoplethysmography (PPG) during physical activities is proposed. Methods: It jointly estimates spectra of PPG signals and simultaneous acceleration signals, utilizing the multiple measurement vector model in sparse signal recovery. Due to a common sparsity constraint on spectral coefficients, the method can easily identify and remove spectral peaks of motion artifact (MA) in PPG spectra. Thus, it does not need any extra signal processing module to remove MA as in some other algorithms. Furthermore, seeking spectral peaks associated with heart rate is simplified. Results: Experimental results on 12 PPG datasets sampled at 25 Hz and recorded during subjects' fast running showed that it had high performance. The average absolute estimation error was 1.28 beats per minute and the standard deviation was 2.61 beats per minute. Conclusion and Significance: These results show that the method has great potential to be used for PPG-based heart rate monitoring in wearable devices for fitness tracking and health monitoring.
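The MMV formulation mentioned in the abstract can be sketched as follows: stack a PPG window and a simultaneous acceleration window as two measurement vectors sharing one frequency dictionary, and recover a row-sparse coefficient matrix, so that frequencies active in both channels stand out as motion-artifact candidates. The toy example below (not the paper's algorithm; it uses simultaneous OMP as the row-sparse solver) keeps the paper's 25 Hz sampling rate, but the grid, window length, and synthetic signals are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, N = 25, 200                          # 25 Hz sampling, 8-second window (toy)
t = np.arange(N) / fs
f_grid = np.linspace(0.5, 4.0, 128)      # candidate frequencies (30-240 BPM)

# Overcomplete dictionary of sinusoids on the frequency grid.
Phi = np.hstack([np.cos(2 * np.pi * f_grid * t[:, None]),
                 np.sin(2 * np.pi * f_grid * t[:, None])])

# Synthetic signals: heart rate at 1.8 Hz, motion artifact at 2.5 Hz.
# The artifact appears in BOTH the PPG and the acceleration channel.
ppg = (np.sin(2 * np.pi * 1.8 * t) + 0.8 * np.sin(2 * np.pi * 2.5 * t)
       + 0.05 * rng.standard_normal(N))
acc = np.sin(2 * np.pi * 2.5 * t) + 0.05 * rng.standard_normal(N)
Y = np.column_stack([ppg, acc])

def somp(Phi, Y, n_atoms):
    # Simultaneous OMP: greedy row-sparse recovery for the MMV model Y = Phi X.
    R, support = Y.copy(), []
    for _ in range(n_atoms):
        corr = np.linalg.norm(Phi.T @ R, axis=1)   # joint correlation over channels
        corr[support] = 0
        support.append(int(np.argmax(corr)))
        sub = Phi[:, support]
        X_s, *_ = np.linalg.lstsq(sub, Y, rcond=None)
        R = Y - sub @ X_s                          # residual shared by both channels
    return support, X_s

support, X_s = somp(Phi, Y, n_atoms=4)
# Map atoms back to grid frequencies (cos/sin pairs share a frequency).
freqs = sorted({round(f_grid[s % len(f_grid)], 2) for s in support})
print("jointly active frequencies (Hz):", freqs)
```

The recovered support contains frequencies near both 1.8 Hz (heart rate, PPG only) and 2.5 Hz (artifact, present in both channels); comparing the per-channel coefficient energies in X_s is then one way to flag and discard the shared artifact peak.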
 
 
