Wednesday, August 20, 2014

Compressive Sparse Light Deflection Maps and Imaging for Telescopic Systems

Two instances of CS hardware today:


Light rays incident on a transparent object of uniform refractive index undergo deflections, which uniquely characterize the surface geometry of the object. Associated with each point on the surface is a deflection map which describes the pattern of deflections in various directions; this map tends to be sparse when the object surface is smooth. This article presents a novel method to efficiently acquire and reconstruct sparse deflection maps using the framework of Compressed Sensing (CS). To this end, we use a particular implementation of a schlieren deflectometer, which provides linear measurements of the underlying maps via optical comparison with programmable spatial light modulation patterns. To optimize the number of measurements needed to recover the map, we base the design of the modulation patterns on the principle of spread spectrum CS. We formulate the map reconstruction task as a linear inverse problem and provide a complete characterization of the proposed method, both on simulated data and on experimental deflectometric data. The reconstruction techniques are designed to incorporate various types of prior knowledge about the deflection spectrum. Our results show the capability and advantages of using a CS based approach for deflectometric imaging.
Further, we present a method to characterize deflection spectra that captures their essence in a few parameters. We demonstrate that these parameters can be extracted directly from a few compressive measurements, without needing any costly reconstruction procedures, thereby saving a large amount of computation. Then, a connection is made between the evolution of these parameters across spatial locations and the optical characteristics of the objects under study. The experimental results with simple plano-convex lenses and multifocal intra-ocular lenses show how a quick characterization of the objects can be obtained using CS.
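The sensing model above can be mimicked numerically. The sketch below is a toy stand-in, not the authors' pipeline: a sparse "deflection map" is measured with a spread-spectrum style operator (random sign pre-modulation followed by a subsampled Hadamard transform) and recovered with plain Orthogonal Matching Pursuit. All sizes and the choice of OMP as solver are mine, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 5        # map size, measurements, sparsity (toy values)

# Sparse "deflection map": a few active deflection directions.
x = np.zeros(n)
support_true = rng.choice(n, k, replace=False)
x[support_true] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))

# Spread-spectrum style sensing: random +/-1 pre-modulation, then a randomly
# subsampled orthonormal transform (a Sylvester-Hadamard matrix here).
H = np.ones((1, 1))
while H.shape[0] < n:
    H = np.block([[H, H], [H, -H]])
signs = rng.choice([-1.0, 1.0], n)
rows = rng.choice(n, m, replace=False)
A = H[rows] * signs / np.sqrt(n)   # rows are orthonormal
y = A @ x                          # compressive measurements

# Recovery with Orthogonal Matching Pursuit (one of many possible solvers).
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
```

In this noiseless toy, OMP typically recovers the map exactly from half as many measurements as unknowns; the paper's point is that such structured modulation patterns are also optically implementable on a spatial light modulator.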



Complementary compressive imaging for the telescopic system by Wen-Kai Yu, Xue-Feng Liu, Xu-Ri Yao, Chao Wang, Yun Zhai and Guang-Jie Zhai
Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device, while the light reflected in the other direction goes uncollected. In fact, the sampling in these two reflection orientations is correlated, in view of which we propose a sampling concept of complementary compressive imaging, for the first time to our knowledge. We use this method in a telescopic system and acquire images of a target at about 2.0 km range with 20 cm resolution, with the variance of the noise decreasing by half. The influence of the sampling rate and the integration time of the photomultiplier tubes on the image quality is also investigated experimentally. This technique offers a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, while needing fewer measurements in total than any single-arm sampling; it can thus be used to improve the performance of all compressive imaging schemes and opens up possibilities for new applications in remote sensing.

I note from their conclusion:

In summary, our results represent the first complementary compressive imaging of a target at about 2.0 km range with 20 cm resolution, realizing large FOV imaging over a long distance.
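To see why the two DMD arms help, here is a small numerical sketch (my own toy model, not the authors' setup): the "0" and "1" mirror states send light to two detectors, the two records are differenced to form effective ±1 patterns, and the scene is recovered from the differential data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 200               # scene pixels, measurements (oversampled on purpose)
x = rng.random(n)            # toy scene (flattened image)

P = rng.integers(0, 2, size=(m, n)).astype(float)   # DMD 0/1 patterns
sigma = 0.01
y_plus  = P @ x + sigma * rng.standard_normal(m)        # arm 1: mirrors "on"
y_minus = (1 - P) @ x + sigma * rng.standard_normal(m)  # arm 2: mirrors "off"

# The two arms are complementary: their sum is the (constant) total flux,
# and their difference senses the scene with effective +/-1 patterns,
# doubling the signal swing relative to either single arm.
A = 2 * P - 1
x_hat, *_ = np.linalg.lstsq(A, y_plus - y_minus, rcond=None)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Here a plain least-squares solve stands in for the CS reconstruction; the point is only that the second arm's data comes for free and improves the effective measurement model.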

Tuesday, August 19, 2014

Videos and Slides: Next-Generation Sequencing Technologies - Elaine Mardis (2014)

Last time I mentioned Elaine Mardis' video giving a summary of next-generation sequencing, the video shot up and garnered more than 127,893 viewers. Coincidence? I think not :-)

Here is her new survey on Next Generation Sequencing, in which she talks about the current PacBio and Nanopore technology add-ons to the lab. I note that the biology people like the term "massively parallel" sequencing. Anyway, those third-generation technologies are very interesting because instead of cutting DNA strands into small pieces and trying to put them back together, they output very long reads (up to 50 Kbp, from 200 bp for earlier next-generation sequencing technology), thereby removing much of the guesswork in read alignment. The current downside of those technologies is their large error rates. PacBio, for instance, with its SMRT technology, has about a 15% error rate for a single strand of DNA, but that error goes away when combining several reads of the same strand, down to 0.01% overall. Nanopore, according to Elaine, is in the 30% realm, but one would have to check with people looking into it to be more accurate on that figure. Irrespective, the longer reads with oversampling mean that one can get a much nicer view of regions of the genome that were, for chemical reasons, not reachable otherwise. I also note that the PacBio approach uses fluorescence and hence camera technology, one of the steamrollers. Fluorescence is not unheard of in compressive sensing, and maybe an improvement of the technique might provide enhanced accuracy. The question is how the algorithms featured here on Nuit Blanche can help in realizing A Second Inflection Point in Genome Sequencing. (For those confused: Next-Generation Sequencing refers to second-generation sequencing, while third-generation sequencing refers to the newer sensors.) More on that later.
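As a back-of-the-envelope check on those error figures, here is a toy consensus model (assuming independent errors and a simple majority vote, which is cruder than PacBio's actual circular-consensus calling): per-base accuracy improves rapidly with the number of passes over the same molecule.

```python
from math import comb

def consensus_error(p, k):
    """Probability that a majority vote over k independent passes, each with
    per-base error rate p, still miscalls the base (k odd; toy model only --
    it ignores error correlations and the real consensus algorithm)."""
    assert k % 2 == 1
    return sum(comb(k, j) * p**j * (1 - p)**(k - j)
               for j in range(k // 2 + 1, k + 1))

# A 15% single-pass error rate drops fast with more passes over the strand:
for k in (1, 11, 21):
    print(k, consensus_error(0.15, k))
```

Even this crude model shows how oversampling a long molecule can push a 15% raw error rate down by orders of magnitude, consistent in spirit with the 0.01% consensus figure quoted above.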

Without further ado, here is a summary of sequencing technology as of July 30, 2014.




The slides are in Color or Grayscale

This lecture was part of a larger series of talks in Current Topics in Genome Analysis 2014



Also of interest: PBSuite, Software for Long-Read Sequencing Data from PacBio


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

A compressed sensing perspective of hippocampal function

This is a theory, but the main question really is: how do we go about using fMRI and similar technologies to figure out whether this is really what is going on? Food for thought.


The hippocampus is one of the most important information-processing units in the brain. Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned out back to the cortex. Here, we review evidence for the hypothesis that information flow and processing in the hippocampus complies with the principles of Compressed Sensing (CS). The CS theory comprises a mathematical framework that describes how, and under which conditions, restricted sampling of information (a data set) can lead to condensed, yet concise, forms of the initial, subsampled information entity (i.e., of the original data set). In this work, hippocampus-related regions and their respective circuitry are presented as a CS-based system whose different components collaborate to realize efficient memory encoding and decoding processes. This proposition introduces a unifying mathematical framework for hippocampal function and opens new avenues for exploring coding and decoding strategies in the brain.

Monday, August 18, 2014

IPAM workshop on Computational Photography and Intelligent Cameras, February 4 - 6, 2015

Yohann Tendero let me know of this IPAM workshop, entitled Computational Photography and Intelligent Cameras, that is set to take place in LA on February 4 - 6, 2015. I wonder if we are going to see some Data Driven Sensor Design / Zero Knowledge Sensor Design. Anyway, you can also apply for financial assistance to attend the workshop. From the page:



Organizing Committee

Amit Agrawal (Amazon Lab126) 

Richard Baraniuk (Rice University, Electrical and Computer Engineering) 

Lawrence Carin (Duke University) 
Oliver Cossairt (Northwestern University) 
Stanley Osher (University of California, Los Angeles (UCLA)) 

Yohann Tendero (University of California, Los Angeles (UCLA))


Scientific Overview
Until recently, digital photography had mostly just replaced the traditional film with a silicon sensor, without substantial changes to the interface or the capabilities of a still camera. However, as the computational power of cameras, cell phones, and other mobile or embedded systems has increased, computation can now be coupled much more tightly with the act of photography. Computational photography is a new area of computer graphics and vision, seeking to create new types of photographs and to allow photographers to acquire better images or images they never could observe before. This involves research into new software algorithms for fusing data from multiple images, video streams, or other types of sensors, as well as into new hardware architectures for capturing the data needed for the software and numerical processing. Applications of computational photography paradigms include compressed sensing cameras, extended depth of field/refocusing, high dynamic range images, invertible motion blurs, and plenoptic cameras, and mathematics is an important tool for inventing and optimizing these new cameras. This workshop will serve as a gathering place for all those interested in theories, algorithms, methodologies, hardware designs, and experimental studies in computational photography.
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.


Confirmed Speakers

Amit Agrawal (Amazon Lab126) 
Richard Baraniuk (Rice University) 
David Brady (Duke University) 
Robert Calderbank (Duke University) 
Lawrence Carin (Duke University) 
Ayan Chakrabarti (Harvard University) 
Oliver Cossairt (Northwestern University) 
Kristin Dana (Rutgers University) 
Paolo Favaro (Universität Bern) 
Carlos Fernandez-Granda (Stanford University) 
Mohit Gupta (Columbia University) 
Wolfgang Heidrich (King Abdullah University of Science and Technology (KAUST)) 
Kevin Kelly (Rice University) 
Pascal Monasse (École Nationale des Ponts-et-Chaussées (ENPC)) 
Kari Pulli (Stanford University) 
Ramesh Raskar (Massachusetts Institute of Technology) 
Neus Sabater (Technicolor) 
Guillermo Sapiro (Duke University) 
Sabine Susstrunk (École Polytechnique Fédérale de Lausanne (EPFL)) 
Yohann Tendero (University of California, Los Angeles (UCLA)) 
Pauline Trouvé (Onera) 
Jack Tumblin (Northwestern University) 
Ashok Veeraraghavan (Rice University)



Application/Registration
An application/registration form is available at:

The application form is for those requesting financial support to attend the workshop. We urge you to apply as early as possible. Applications received by Wednesday, December 10, 2014 will receive fullest consideration. Letters of reference may be sent to the address or email address below. Successful applicants will be notified as soon as funding decisions are made. If you do not need or want to apply for funding, you may simply register. IPAM will close registration if we reach capacity; for this reason, we encourage you to register early.
We have funding especially to support the attendance of recent PhD's, graduate students, and researchers in the early stages of their career; however, mathematicians and scientists at all levels who are interested in this area are encouraged to apply for funding. Encouraging the careers of women and minority mathematicians and scientists is an important component of IPAM's mission and we welcome their applications.


Contact Us:
Institute for Pure and Applied Mathematics (IPAM)
Attn: CP2015
460 Portola Plaza
Los Angeles CA 90095-7121
Phone: 310 825-4755
Fax: 310 825-4756
Email: 





Thanks Yohann !


Fastfood: Approximate Kernel Expansions in Loglinear Time - The Paper -

Compressive sensing is not the only place where multiplying a vector or a matrix by a Gaussian matrix is a big deal for large-scale problems (see the recent Random Matrices Are Too Damn Large ! and Another Comment on "Random Matrices Are Too Damn Large !"). If you recall, this is also a problem for Random Kitchen Sinks, a randomized version of AdaBoost (a connection with compressive sensing is mentioned here). There, the training set in Machine Learning is used as a dictionary in order to learn a function. Those dictionaries are, however, too large, and the authors of the paper resort to fast random projections to learn the function faster. 



We have had the talk for over a year now; here is the attendant paper. There is actually more in the paper than what was shown in the earlier presentation: Fastfood: Approximate Kernel Expansions in Loglinear Time by Quoc Viet Le, Tamas Sarlos, Alexander Johannes Smola

Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in the Random Kitchen Sinks proposed by Rahimi and Recht (2009), thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy.
Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction.
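The core trick is easy to demo numerically. The sketch below is a minimal single-machine rendition of a Fastfood block V = S H G Π H B (the sizes, seeds, and the chi-based row rescaling reflect my reading of the construction, so treat it as illustrative): only diagonals and a permutation are stored, each matrix-vector product costs O(d log d) via the fast Walsh-Hadamard transform, and the resulting random features approximate the RBF kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform, O(d log d)."""
    x = x.copy()
    h, d = 1, len(x)
    while h < d:
        for i in range(0, d, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def fastfood_block(d, sigma):
    """One block V = (1/(sigma*sqrt(d))) S H G Pi H B: only diagonals and a
    permutation are stored; H is never formed explicitly."""
    B = rng.choice([-1.0, 1.0], d)            # random signs
    Pi = rng.permutation(d)                   # random permutation
    G = rng.standard_normal(d)                # diagonal Gaussian
    # S rescales rows so their norms are chi-distributed, like Gaussian rows.
    S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)
    def apply(x):
        v = fwht(B * x)[Pi]
        v = fwht(G * v)
        return S * v / (sigma * np.sqrt(d))
    return apply

def rbf_features(x, blocks):
    """Random Fourier features: phi(x) . phi(y) ~ exp(-||x-y||^2 / (2 sigma^2))."""
    w = np.concatenate([blk(x) for blk in blocks])
    return np.concatenate([np.cos(w), np.sin(w)]) / np.sqrt(len(w))

d, sigma = 64, 1.0                         # d must be a power of two
blocks = [fastfood_block(d, sigma) for _ in range(16)]   # 16*64 projections

x = rng.standard_normal(d)
y = x + np.eye(d)[0]                       # ||x - y|| = 1
k_exact = np.exp(-1.0 / (2 * sigma**2))    # exact RBF kernel value
k_est = rbf_features(x, blocks) @ rbf_features(y, blocks)
```

The estimate concentrates around the exact kernel value as more blocks are stacked, which is the paper's O(n log d)-time substitute for a dense Gaussian projection.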


Related:


[6] Uniform Approximation of Functions with Random Bases, Ali Rahimi and Benjamin Recht
[8] Nystrom Method vs Random Fourier Features: A Theoretical and Empirical Comparison, Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou
[9] Pruning random features with correlated kitchen sinks (poster), Brian McWilliams and David Balduzzi

Sunday, August 17, 2014

Sunday Morning Insight: Self-Configuring Universal Linear Optical Components

Figure from [1]


I discovered David Miller's work on Dan Piponi's recent G+ feed; here is how Dan summarized it:

I really like this paper: http://www-ee.stanford.edu/~dabm/430.pdf 
Imagine you have an incoming laser beam. It will have some kind of spatial profile, for example it might be a gaussian beam that's brighter in a hump in the middle and falls off towards the edges. Or it might have two "humps". Now imagine you want to perform an arbitrary linear transformation, M, on this profile. Like converting a beam with one hump to one with two, one with two humps to one with one, and leaving anything orthogonal to those unchanged. Can you build an optical component to do this? 
Any complex matrix M has a singular value decomposition as U*DV where U and V are unitary and D is diagonal. This elegant paper shows how to physically realise this decomposition as a pair of "matrices" of mirrors and phase shifters with a spatial light modulator acting as the diagonal matrix. (Mirrors and phase shifters are unitary.) Even better, it shows how if you have dynamically adjustable mirrors and phase shifters you can use a method akin to a matrix factorization algorithm to configure everything by using a "training set" of beams without having to calculate a thing yourself. 
It hasn't been built yet, but everything is buildable in principle. There's lots of related work here: http://www-ee.stanford.edu/~dabm/Selfalign.html


So quite clearly one could conceive of several ways these systems could be used. The first idea is to perform a calibration on a random scattering medium. With a two-way system like the one mentioned in [2], one can evaluate one half of the full scattering matrix (instead of one quarter). Obviously, this can be done not just for one wavelength but for several, and one can even perform multiplexing in time [3]. I am also wondering what it would mean to set each of the parameters at random instead of picking them one by one, or to use the system to perform some SVDs with light... It's a whole new world. Thank you Dan for the pointer!
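The factorization Dan describes is just the singular value decomposition, and it is easy to check numerically that any coupling matrix splits into two lossless (unitary) stages plus per-channel amplitudes, which is exactly what the mirror meshes and the spatial light modulator implement. A quick sanity check, with my own toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# M = U diag(d) Vh: U and Vh are unitary (realizable with lossless meshes of
# mirrors and phase shifters); diag(d) is a per-channel amplitude (the SLM).
U, d, Vh = np.linalg.svd(M)
reconstructed = U @ np.diag(d) @ Vh

unitary_U = np.allclose(U.conj().T @ U, np.eye(4))
unitary_V = np.allclose(Vh @ Vh.conj().T, np.eye(4))
```

Since a passive diagonal stage can only attenuate, the physically realizable maps are those with singular values at most one, i.e. any M up to an overall scaling.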


Figure from [2]

Figure from [3]

References:

We show how to design an optical device that can perform any linear function or coupling between inputs and outputs. This design method is progressive, requiring no global optimization. We also show how the device can configure itself progressively, avoiding design calculations and allowing the device to stabilize itself against drifts in component properties and to continually adjust itself to changing conditions. This self-configuration operates by training with the desired pairs of orthogonal input and output functions, using sets of detectors and local feedback loops to set individual optical elements within the device, with no global feedback or multiparameter optimization required. Simple mappings, such as spatial mode conversions and polarization control, can be implemented using standard planar integrated optics. In the spirit of a universal machine, we show that other linear operations, including frequency and time mappings, as well as nonreciprocal operation, are possible in principle, even if very challenging in practice, thus proving there is at least one constructive design for any conceivable linear optical component; such a universal device can also be self-configuring. This approach is general for linear waves, and could be applied to microwaves, acoustics, and quantum mechanical superpositions.
[2] Establishing optimal wave communication channels automatically, David Miller


We show how multiple optimal orthogonal channels for communicating or interconnecting with waves between two objects can be aligned and optimized automatically using controllable beamsplitters, detectors and simple local feedback loops, without moving parts, without device calibration, without fundamental beam splitting loss, and without calculations. Optical applications include multiple simultaneous orthogonal spatial communication channels in free space or multimode optical fibers, automatically focused power delivery with waves, multiple channel communication through scattering or lossy media, and real-time-optimized focused channels to and from multiple moving objects. The approach physically implements automatic singular value decomposition of the wave coupling between the objects, and is equivalent in its effect to the beam forming in a laser resonator with phase-conjugate mirrors with the additional benefits of allowing multiple orthogonal channels to be formed simultaneously and avoiding the need for any nonlinear optical materials.

[3] Self-configuring universal linear optical component, David Miller


We show how to design an optical device that can perform any linear function or coupling between inputs and outputs. This design method is progressive, requiring no global optimization. We also show how the device can configure itself progressively, avoiding design calculations and allowing the device to stabilize itself against drifts in component properties and to continually adjust itself to changing conditions. This self-configuration operates by training with the desired pairs of orthogonal input and output functions, using sets of detectors and local feedback loops to set individual optical elements within the device, with no global feedback or multiparameter optimization required. Simple mappings, such as spatial mode conversions and polarization control, can be implemented using standard planar integrated optics. In the spirit of a universal machine, we show that other linear operations, including frequency and time mappings, as well as non-reciprocal operation, are possible in principle, even if very challenging in practice, thus proving there is at least one constructive design for any conceivable linear optical component; such a universal device can also be self-configuring. This approach is general for linear waves, and could be applied to microwaves, acoustics and quantum mechanical superpositions.

Here is also a video of David Miller explaining the concepts:


Saturday, August 16, 2014

Foundations of Compressed Sensing

Here is a nice overview: Foundations of Compressed Sensing by Mike Davies



I will shortly add it to the Learning Compressive Sensing page.

Saturday Morning Videos: Intelligent Machines, Simon Knowles

Thanks to Andrew Davison for pointing out this talk. Andrew's remarks are here.

Saturday Morning Videos: Two Talks on Synthetic Biology

h/t Socrates Logos


Andrew Hessel, pioneer in synthetic biology, discusses the similarities between computing and biology during a talk at Singularity University.

Methods for engineering biology: Nina DiPrimio discusses tools for design and construction and plans for future projects. Presented by Counter Culture Labs, this video is designed as an accessible introduction to the basic tools of synthetic biology for bio-maker spaces. Future projects, such as rose-scented E. coli, are detailed in a how-to of genetic modification. The safety of genetic modification is also discussed.



Friday, August 15, 2014

Data Driven Sensor Design: Learning to be a Depth Camera for Close-Range Human Capture and Interaction

Here is an instance of Data Driven Sensor Design or Zero Knowledge Sensor Design. Let us remind our readers what Data Driven Sensor Design is:

  1. Pick a sensor, 
  2. Discover its transfer function with supervised learning, 
  3. You're done.
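As a cartoon of those three steps (entirely made-up numbers, with a plain nearest-neighbor regressor standing in for the paper's classification-regression forests): pick a toy NIR intensity sensor, learn its intensity-to-depth transfer function from calibration pairs, and read out depth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: pick a sensor. Here, a toy NIR intensity sensor whose response
# follows an inverse-square light fall-off (unknown to the learner).
def sensor(depth):
    return 1.0 / np.asarray(depth, dtype=float)**2 \
           + 0.01 * rng.standard_normal(np.shape(depth))

# Step 2: discover its transfer function with supervised learning -- here a
# plain 1-nearest-neighbor regressor on calibration pairs (intensity -> depth).
depth_train = np.linspace(0.2, 2.0, 500)
intensity_train = sensor(depth_train)

def predict_depth(intensity):
    return depth_train[np.abs(intensity_train - intensity).argmin()]

# Step 3: you're done -- the sensor now reads out depth directly.
true_depth = 1.3
est = predict_depth(sensor(true_depth))
```

The real paper learns a far richer per-pixel mapping from near-infrared images, but the workflow is the same: calibration data plus supervised learning replaces an explicit physical sensor model.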



Learning to be a Depth Camera for Close-Range Human Capture and Interaction by Sean Fanello, Cem Keskin, Shahram Izadi, Pushmeet Kohli, David Kim, David Sweeney, Antonio Criminisi, Jamie Shotton, Sing Bing Kang, and Tim Paek


We present a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera, with minor hardware modifications. Our approach targets close-range human capture and interaction where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near infrared intensity images to absolute, metric depth in real-time. We demonstrate a variety of human-computer interaction and capture scenarios. Experiments show an accuracy that outperforms a conventional light fall-off baseline, and is comparable to high-quality consumer depth cameras, but with a dramatically reduced cost, power consumption, and form factor.





Of note, this site: http://www.eigenimaging.com/DIY/NexusDYI shows how to modify a smartphone camera module for Near Infrared (NIR) imaging, enabling applications such as night vision and multispectral imaging.

