Friday, November 28, 2014

Two Postdoc positions (KU Leuven): ERC Advanced Grant A-DATADRIVE-B, 1-year, extendable, at ESAT-STADIUS

Marco Signoretto just sent me the following:
Dear Igor,

I hope everything is fine with you. Can you please post the following announcement [on] Nuit Blanche?

regards

Marco
Sure thing, Marco. Here is the announcement:

2 Postdoc positions (1-year, extendable) on the ERC Advanced Grant A-DATADRIVE-B at ESAT-STADIUS.

The research group KU Leuven ESAT-STADIUS is currently offering 2 Postdoc positions (1-year, extendable) within the framework of the ERC Advanced Grant A-DATADRIVE-B (PI: Johan Suykens) on Advanced Data-Driven Black-box modelling.
The research positions relate to the following possible topics:
  1. Prior knowledge incorporation
  2. Kernels and tensors
  3. Modelling structured dynamical systems
  4. Sparsity
  5. Optimization algorithms
  6. Core models and mathematical foundations
  7. Next generation software tool
Interested?
The research group ESAT-STADIUS at the university KU Leuven Belgium provides an excellent research environment being active in the broad area of mathematical engineering, including systems and control theory, neural networks and machine learning, nonlinear systems and complex networks, optimization, signal processing, bioinformatics and biomedicine.
The research will be conducted under the supervision of Prof. Johan Suykens. Interested candidates having a solid mathematical background and PhD degree can on-line apply at the website by including CV and motivation letter. For further information on these positions you may contact johan.suykens@esat.kuleuven.be.

--
dr. Marco Signoretto

FWO research fellow,
ESAT - STADIUS,
Katholieke Universiteit Leuven,
Kasteelpark Arenberg 10, B-3001 LEUVEN - HEVERLEE (BELGIUM)
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Matrix Completion on Graphs - implementation -


Vassilis Kalofolias just sent me the following:

Dear Igor, 
I am a student of Pierre Vandergheynst at EPFL and we recently had our paper on "Matrix Completion on Graphs" accepted at NIPS workshop on robustness of high dimensional data.

We have published Matlab code for our algorithm here, I think it could fit nicely somewhere in your blog space :)

https://lts2research.epfl.ch/software/matrix-completion-on-graphs/

I am also attaching the tar file with the code.

Thanks!

best regards,
Vassilis Kalofolias
PhD candidate,
LTS2, EPFL

Matrix Completion on Graphs by Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, Pierre Vandergheynst

The problem of finding the missing values of a matrix given a few of its entries, called matrix completion, has gathered a lot of attention in recent years. Although the problem under the standard low rank assumption is NP-hard, Candès and Recht showed that it can be exactly relaxed if the number of observed entries is sufficiently large. In this work, we introduce a novel matrix completion model that makes use of proximity information about rows and columns by assuming they form communities. This assumption makes sense in several real-world problems like in recommender systems, where there are communities of people sharing preferences, while products form clusters that receive similar ratings. Our main goal is thus to find a low-rank solution that is structured by the proximities of rows and columns encoded by graphs. We borrow ideas from manifold learning to constrain our solution to be smooth on these graphs, in order to implicitly force row and column proximities. Our matrix recovery model is formulated as a convex non-smooth optimization problem, for which a well-posed iterative scheme is provided. We study and evaluate the proposed matrix completion on synthetic and real data, showing that the proposed structured low-rank recovery model outperforms the standard matrix completion model in many situations.
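To make the model concrete, here is a minimal proximal-gradient sketch of graph-regularized matrix completion in the spirit of the abstract: a quadratic fit on observed entries, nuclear-norm shrinkage for low rank, and Laplacian smoothness terms for the row and column graphs. This is a simplified stand-in written for illustration, not the authors' algorithm (their Matlab code is at the link above); the function names and parameter choices are my own assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def complete_on_graphs(Y, mask, Lr, Lc, gam_n=1.0, gam_r=0.01, gam_c=0.01,
                       step=0.1, n_iter=500):
    """Minimize 0.5*||mask*(M - Y)||_F^2 + gam_r*tr(M.T Lr M)
    + gam_c*tr(M Lc M.T) + gam_n*||M||_* by proximal gradient:
    a gradient step on the smooth terms, then singular value thresholding."""
    M = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = mask * (M - Y) + 2.0 * gam_r * (Lr @ M) + 2.0 * gam_c * (M @ Lc)
        M = svt(M - step * grad, step * gam_n)
    return M
```

Here `Lr` and `Lc` would be the Laplacians of the row and column graphs encoding the community structure the abstract mentions.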


 

CS Internship in France: Compatibilité de l'apprentissage par représentations parcimonieuses et de la compression avec pertes / Compatibility between sparse machine learning and lossy compression

Laurent Duval just put out this announcement (in French, to be translated into English soon):
Internship (2015): learning / compression compatibility

Intitulé de stage

Compatibilité de l'apprentissage par représentations parcimonieuses et de la compression avec pertes [français] (5-6 mois, premier ou second semestre 2015)
 

Internship subject title

Compatibility between sparse machine learning and lossy compression [English, Coming soon] (5-6 months, 2015, first or second semester)

Information

  • Description
    Many experiments acquire, as continuous streams or in sequences, signals or images characteristic of a particular phenomenon. Examples at IFPEN include seismic data, passive monitoring via acoustic emission (corrosion phenomena, battery diagnostics), engine test benches (in-cylinder pressure signals, high-speed cameras), and high-throughput experimentation in chemistry. Very often, these data are analyzed in a standardized fashion through indicators fixed a priori. Comparisons between different experiments (differences, classifications) are most often carried out on the computed indicators, without going back to the original measurements. The growing volume of this type of data, the variability of sensors and sampling schemes, and the fact that the data may have undergone different processing raise two distinct problems: managing and accessing these volumes (the "big data" aspect) and exploiting them optimally through dimensionality-reduction and learning methods, supervised or not (the "data science" aspect). This project aims to jointly analyze the possibilities of compressed representations of the data and the extraction of relevant indicators at different characteristic scales, along with the impact of the first aspect (degradation due to compression) on the second (accuracy/robustness of the extracted indicators).

    The internship objective is twofold. The first part will focus in particular on representing signals/images with convolutional networks built on multiscale (wavelet) techniques, known as scattering networks, whose descriptors (or signatures) have good invariance properties with respect to translation, rotation and scale. These descriptors will be used for classification and detection purposes. The second part will evaluate the impact of lossy compression techniques on the preceding results, and potentially develop sparse representations serving both aspects (compression and learning).
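For intuition, a one-layer scattering-type transform can be sketched in a few lines: band-pass the signal with wavelets, take the modulus, then locally average with a low-pass filter; the averaging is what gives the (approximate) translation invariance mentioned above. This is a toy illustration with Haar-like filters of my own choosing, not the internship's actual tooling.

```python
import numpy as np

def scattering_1d(x, wavelets, phi):
    """Zeroth- and first-order scattering coefficients of a 1-D signal:
    S0 = x * phi and S1_j = |x * psi_j| * phi (* denotes convolution)."""
    feats = [np.convolve(x, phi, mode="same")]          # zeroth order: low-pass
    for psi in wavelets:
        u = np.abs(np.convolve(x, psi, mode="same"))    # modulus of band-pass
        feats.append(np.convolve(u, phi, mode="same"))  # average -> invariance
    return np.concatenate(feats)
```

Deeper scattering networks iterate the wavelet-modulus step before averaging, recovering information lost to the low-pass filter at each layer.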
  • Encadrement/supervision : Camille Couprie, Laurent Duval (Rueil-Malmaison, 92)
    • Application: email resume/cover letter to laurent(dod)duval(ad)ifpen(dod)fr and camille(dod)couprie(ad)ifpen(dod)fr


Thursday, November 27, 2014

Upcoming CSJobs and more ....

Yves Wiaux, who will soon have many more friends, just sent me the following:


  Hi Igor

Jason McEwen and I have just been awarded £2M from UK research councils for research on compressive imaging in astronomy and medicine, along with Mike Davies and other colleagues.

Press releases about the funded initiative may be found on the Heriot-Watt University (Edinburgh) website at http://www.hw.ac.uk/news-events/news/pioneering-work-helps-join-dots-across-known-19548.htm.



Do not hesitate to post [this] on Nuit Blanche if you find it suitable.

Multiple positions will open very soon.


Cheers

Yves
___________________________________
Dr Yves Wiaux, Assoc. Prof., BASP Director
Institute of Sensors, Signals & Systems
School of Engineering & Physical Sciences
Heriot-Watt University, Edinburgh
 
Fantastic news, Yves!
One small note: while the conversation piece seems to have actively removed several instances of it, one instance of the word "partial" remained in the write-up, which, in my view, was not necessary.
 

Wednesday, November 26, 2014

Stable Autoencoding: A Flexible Framework for Regularized Low-Rank Matrix Estimation

In the Matrix Factorization Jungle Page, there is a section on subspace clustering that reads:

Subspace Clustering: A = AX  with unknown X, solve for sparse/other conditions on X 


Recently, subspace clustering algorithms have focused on conditions requiring X to have a zero main diagonal and sparse entries otherwise. In the following paper, however, the authors seek a low-rank X instead and call that matrix/operator a linear autoencoder.
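As a concrete reference point, the zero-diagonal self-expressive model can be sketched column by column; here with a ridge penalty as a simple stand-in for the sparse (SSC-style) or low-rank (LRR-style) regularizers actually used in the literature. The function name and the value of `lam` are illustrative assumptions.

```python
import numpy as np

def self_expressive(A, lam=1e-2):
    """Solve min_X ||A - A X||_F^2 + lam*||X||_F^2  s.t. diag(X) = 0,
    one column at a time: each data point is regressed on all the others."""
    n = A.shape[1]
    X = np.zeros((n, n))
    for j in range(n):
        idx = [k for k in range(n) if k != j]   # exclude a_j itself
        Aj = A[:, idx]
        x = np.linalg.solve(Aj.T @ Aj + lam * np.eye(n - 1), Aj.T @ A[:, j])
        X[idx, j] = x
    return X
```

With data drawn from a union of independent subspaces, the nonzero pattern of X tends to respect subspace membership, which is what spectral clustering on |X| + |X|ᵀ then exploits.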


Let us note that AX now provides an evaluation of the SVD of X. Following this line of thought, subspace clustering could also be construed as another instance of a linear autoencoder. Can this help design better noisy autoencoders, as we recently saw [1]? Without further ado, here is: Stable Autoencoding: A Flexible Framework for Regularized Low-Rank Matrix Estimation by Julie Josse and Stefan Wager

We develop a framework for low-rank matrix estimation that allows us to transform noise models into regularization schemes via a simple parametric bootstrap. Effectively, our procedure seeks an autoencoding basis for the observed matrix that is robust with respect to the specified noise model. In the simplest case, with an isotropic noise model, our procedure is equivalent to a classical singular value shrinkage estimator. For non-isotropic noise models, however, our method does not reduce to singular value shrinkage, and instead yields new estimators that perform well in experiments. Moreover, by iterating our stable autoencoding scheme, we can automatically generate low-rank estimates without specifying the target rank as a tuning parameter.  
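To see the singular-value-shrinkage connection, consider the simplest linear autoencoder with a ridge penalty (an illustrative instance, not necessarily the authors' exact estimator): B = argmin ||Y − YB||²_F + λ||B||²_F has a closed form that keeps the SVD basis of Y and shrinks each singular value s to s³/(s²+λ).

```python
import numpy as np

def ridge_autoencode(Y, lam):
    """Closed form of min_B ||Y - Y B||_F^2 + lam*||B||_F^2, applied to Y:
    Y B = U diag(s^3 / (s^2 + lam)) V^T, i.e. singular value shrinkage."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ ((s**3 / (s**2 + lam))[:, None] * Vt)
```

Non-isotropic noise models change the penalty and break this simple per-singular-value form, which is precisely the regime where the paper's estimators differ from plain shrinkage.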




Let us note that in the subspace clustering approach, if diag(Z) = 0 then Tr(Z) = 0, which means the transformation is volume preserving (see Lie algebra), and one wonders whether, as in fluid mechanics, we should aim for a mixed decomposition for the autoencoders: a volume-preserving transformation (really quantifying the deformation) and one that quantifies (low) volume change (a low-rank matrix, as in the paper featured today). Let us also note that while the aim of these subspace clustering algorithms is a zero diagonal, the regularizer may yield a solution that is close enough but also low rank (see LRR, for instance). Let us hope that this approach can shed some light on how to devise nonlinear autoencoders!
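The volume-preserving remark can be made precise: a zero diagonal forces a zero trace, and the flow generated by a traceless matrix has unit determinant,

```latex
\operatorname{diag}(Z) = 0 \;\Rightarrow\; \operatorname{tr}(Z) = 0
\;\Rightarrow\; \det\!\left(e^{tZ}\right) = e^{\,t\,\operatorname{tr}(Z)} = 1,
```

so Z lies in the Lie algebra $\mathfrak{sl}(n)$ of volume-preserving linear flows.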

Relevant:

 

Tuesday, November 25, 2014

NIPS 2014 proceedings are out, what paper did you fancy ?

The NIPS 2014 proceedings are out, what paper did you fancy ?

Advances in Neural Information Processing Systems 27 (NIPS 2014)

The papers below appear in Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence and K.Q. Weinberger.
They are proceedings from the conference Neural Information Processing Systems 2014.
 
 
 
 
