NEWS
- Our group has moved to the University of Tuebingen. Go here for our new webpage (under construction).
- Two papers accepted at NeurIPS 2019
- Provably robust boosted decision stumps and trees against adversarial attacks
M. Andriushchenko, M. Hein
- Generalized Matrix Means for Semi-Supervised Learning with Multilayer Graphs
P. Mercado, F. Tudisco, M. Hein
- Our work on ``Sparse and Imperceivable Adversarial Attacks'' has been accepted at ICCV 2019.
- Our paper ``Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace--Beltrami operator'' has been accepted at FOCM (Foundations of Computational Mathematics).
- "Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks" has been accepted at IJCV.
- Our new sparse and imperceivable white-box attack (a PGD variant) and black-box attack (CornerSearch) have been accepted at ICCV 2019.
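For illustration, a minimal sketch of a plain L-infinity PGD loop in PyTorch; this is only the generic scheme such white-box attacks build on, not the sparse/imperceivable variant itself, and it assumes a classifier model(x) returning logits for inputs in [0, 1]:

    import torch
    import torch.nn.functional as F

    def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # standard L-infinity PGD: ascend the cross-entropy loss and project
        # back onto the eps-ball around the clean input after every step
        x = x.detach()
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()                    # signed gradient step
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
                x_adv = x_adv.clamp(0, 1)                              # keep a valid image range
        return x_adv.detach()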
- Our new fast adaptive boundary (FAB) attack improves upon the best reported results on the Madry robust CIFAR-10 network and for the robust TRADES model on MNIST and CIFAR-10.
- Our two papers on Perron-Frobenius theory of multi-homogeneous mappings and its application to tensor spectral problems got accepted at SIMAX. Congratulations, Antoine and Francesco!
- Our CVPR paper ``Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem''
has been featured in the CVPR daily magazine.
- Our paper ``Spectral Clustering of Signed Graphs via Matrix Power Means'' has been accepted at ICML 2019.
- Two papers accepted at CVPR 2019
- Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
M. Hein, M. Andriushchenko, J. Bitterwolf
- Disentangling Adversarial Robustness and Generalization
D. Stutz, M. Hein, B. Schiele
- Our paper ``Provable Robustness of ReLU networks via Maximization of Linear Regions'' has been accepted at AISTATS 2019.
- Our paper ``On the loss landscape of a class of deep neural networks with no bad local valleys'' has been accepted at ICLR 2019.
- The paper ``Logit Pairing Methods Can Fool Gradient-Based Attacks'' has been presented at the NeurIPS 2018 Workshop on Security in Machine Learning.
- GCPR Honorable Mention Award for our paper "A new randomized gradient free attack on ReLU networks". Congratulations, Francesco!
- Paper accepted at GCPR 2018 - a new randomized gradient free attack on ReLU networks
- Our work on the use of nonlinear eigenproblems in modularity optimization has been accepted at the SIAM Journal on Applied Mathematics.
- Two papers accepted at ICML 2018
- Optimization Landscape and Expressivity of Deep CNNs
Quynh Nguyen, Matthias Hein
- Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
Quynh Nguyen, Mahesh Mukkamala, Matthias Hein
- Neural networks should be wide enough to learn disconnected decision regions - new paper on arxiv.
- New paper on spectral convergence of the graph Laplacian towards the Laplace-Beltrami operator.
- New papers on Perron Frobenius Theory for multihomogeneous mappings and its application to spectral problems of nonnegative tensors available on arxiv here and here.
- Antoine Gautier and Francesco Tudisco organize the Mini-Symposium ``Nonlinear Perron-Frobenius Theory and Applications'' at the SIAM Conference on Applied Linear Algebra 2018 in Hong Kong.
- Our paper ``The Power Mean Laplacian for Multilayer Graph Clustering'' has been accepted at AISTATS 2018
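For context, a rough numerical sketch of the underlying idea, assuming the layers are given as symmetric (normalized) Laplacians; this shows only the matrix power mean itself, not the scalable algorithm or the exact diagonal shift used in the paper:

    import numpy as np

    def sym_matrix_power(A, t):
        # fractional power of a symmetric positive semi-definite matrix
        w, V = np.linalg.eigh(A)
        return (V * np.clip(w, 0.0, None) ** t) @ V.T

    def power_mean_laplacian(laplacians, p=1.0, shift=1e-6):
        # matrix power mean M_p = ((1/T) * sum_i (L_i + shift*I)^p)^(1/p);
        # the small diagonal shift keeps negative powers well defined
        n = laplacians[0].shape[0]
        acc = sum(sym_matrix_power(L + shift * np.eye(n), p) for L in laplacians)
        return sym_matrix_power(acc / len(laplacians), 1.0 / p)

    # clustering then amounts to embedding with the eigenvectors belonging to the
    # k smallest eigenvalues of M_p and running k-means on the rows of the embedding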
- Two workshop contributions at NIPS 2017
- Talk at the workshop ``Synergies in Geometric Data Analysis'' on ``The power mean Laplacian for multilayer graph clustering''
- Poster at the workshop ``Machine Learning and Computer Security'' on ``Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation''
- Matthias Hein was the introductory speaker for the area of machine learning at the Frontiers of Science meeting jointly organized by the National Academy of Sciences (NAS), the Alexander von Humboldt Foundation and the Japan Society for the Promotion of Science.
- Our paper ``Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation'' has been accepted at NIPS 2017.
- Presentation of our work on formal guarantees against adversarial manipulation at the Dagstuhl Workshop on Machine Learning and Formal Methods, August 28 - September 1, 2017.
- Invited talk at the Workshop on Learning Theory, July 13-15, 2017, at the Foundations of Computational Mathematics conference in Barcelona.
- Invited talk at the Google Deep Learning Workshop in Zuerich, July 4-5, 2017, on formal guarantees on the robustness of a classifier against adversarial manipulation.
- New paper giving formal guarantees on the robustness of a classifier against adversarial manipulation. Moreover, the guarantees motivate a new form of regularization which increases robustness while yielding better or similar prediction performance. Check the paper here.
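A short PyTorch sketch of a gradient-difference penalty in the spirit of the Cross-Lipschitz regularization proposed in the paper (the exact weighting and the kernel-method variant are omitted; model is assumed to return logits of shape (batch, classes)):

    import torch
    import torch.nn.functional as F

    def cross_lipschitz_penalty(model, x):
        # penalize squared norms of pairwise differences of the input gradients of
        # the class logits, i.e. a surrogate for the local cross-Lipschitz constants
        x = x.detach().requires_grad_(True)
        logits = model(x)                        # shape (B, K)
        B, K = logits.shape
        grads = []
        for k in range(K):
            g, = torch.autograd.grad(logits[:, k].sum(), x,
                                     create_graph=True, retain_graph=True)
            grads.append(g.reshape(B, -1))
        G = torch.stack(grads, dim=1)            # shape (B, K, D)
        diff = G.unsqueeze(1) - G.unsqueeze(2)   # pairwise gradient differences
        return (diff ** 2).sum(dim=-1).mean()

    # training objective (lam is the regularization weight):
    # loss = F.cross_entropy(model(x), y) + lam * cross_lipschitz_penalty(model, x)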
- Two papers accepted at ICML 2017
- The loss surface of deep and wide neural networks
Quynh Nguyen Ngoc, Matthias Hein
- Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
Mahesh Chandra Mukkamala, Matthias Hein
- New seminar on Advanced Topics in Machine Learning starting May 5, 2017.
- Panel discussion on 3rd of May at the NaWik Symposium in Karlsruhe on ``Wissenschaftskommunikation im Spiegel politischer Debatten und Entscheidungen – am Beispiel Künstlicher Intelligenz'' (science communication as reflected in political debates and decisions, using artificial intelligence as an example)
- One paper accepted at CVPR 2017, ``Simple Does It: Weakly Supervised Instance and Semantic Segmentation'' by A. Khoreva, R. Benenson, J. Hosang, M. Hein and B. Schiele
- Our paper ``A nodal domain theorem and a higher-order Cheeger inequality for the graph p-Laplacian'' by Francesco Tudisco and Matthias Hein has been accepted at Journal of Spectral Theory
- Our paper ``MeDeCom: discovery and quantification of latent components of heterogeneous methylomes'' by Pavlo Lutsik, Martin Slawski, Gilles Gasparoni, Nikita Vedeneev, Matthias Hein and Joern Walter has been accepted at Genome Biology.
- Two workshop papers at the NIPS 2016 workshops ``Optimization for Machine Learning'' and ``Learning in High Dimensions with Structure''
- Two papers accepted at NIPS 2016!
- Globally Optimal Training of Generalized Polynomial Neural Networks with Nonlinear Spectral Methods
Antoine Gautier, Quynh Nguyen Ngoc, Matthias Hein
- Clustering Signed Networks with the Geometric Mean of Laplacians
Pedro Mercado Lopez, Francesco Tudisco, Matthias Hein
- Our paper ``An Efficient Multilinear Optimization Framework for Hypergraph Matching'' by Quynh Nguyen Ngoc, Francesco Tudisco, Antoine Gautier and Matthias Hein has been accepted at PAMI
- Francesco Tudisco is co-organizing the invited mini-workshop ``Matrix Methods in Network Analysis'' at the 20th Conference of ILAS (International Linear Algebra Society) in Leuven (July, 11-15)
- Our paper ``Tensor norm and maximal singular vectors of non-negative tensors - a Perron-Frobenius theorem, a Collatz-Wielandt characterization and a generalized power method'' by Antoine Gautier and Matthias Hein has been accepted at Linear Algebra and its Applications
- Three papers accepted at CVPR 2016 (two spotlights, one poster)
- Three papers accepted at NIPS 2015 (two posters, one spotlight)
- Together with Gabor Lugosi and Lorenzo Rosasco we organize a Dagstuhl seminar from 31.8.2015 to 4.9.2015 on "Mathematical and Computational Foundations of Learning Theory".
- One oral and one poster accepted at CVPR 2015 - final papers are online.
- Together with Daniel Lenz and Delio Mugnolo we organize a mini workshop from 8.2.2015 to 14.2.2015 at Oberwolfach on "Discrete p-Laplacians: Spectral Theory and Variational Methods in Mathematics and Computer Science".
- Our paper "Tight Continuous Relaxation of the Balanced k-Cut Problem" by Syama Sundar Rangapuram, Pramod Mudrakarta and Matthias Hein has been accepted at NIPS 2014.
- Paper "Robust PCA: Optimization of the Robust Reconstruction Error on the Stiefel Manifold" accepted as oral and
"Learning Must-Link Constraints for Video Segmentation based on Spectral Clustering" as poster at GCPR 2014.
- Code for the team formation problem is now available on our code webpage.
- Workshop "Mathematical Foundations of Learning Theory" from June 17-19, 2014, at CRM, Barcelona.
- The code for computing normalized or Cheeger hypergraph cuts according to our NIPS 2013 paper is finally online - see the code webpage.
- "Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov Random fields" by Martin Slawski and Matthias Hein
has been accepted at Linear Algebra and its Applications.
- "Hitting and Commute Times in Large Random Neighborhood Graphs" by Ulrike von Luxburg, Agnes Radl and Matthias Hein has been accepted at JMLR.
- "Scalable Multitask Representation Learning for Scene Classification" by Maksim Lapin, Matthias Hein and Bernt Schiele has been accepted at CVPR 2014.
- The paper "Learning Using Privileged Information: SVM+ and Weighted SVM" by Maksim Lapin, Matthias Hein and Bernt Schiele discussing the relationship of
SVM+ and weighted SVM has been accepted at Neural Networks.
- The paper "Non-negative least squares for high-dimensional linear models : consistency and sparse recovery without regularization" by Martin Slawski and Matthias Hein has been accepted at Electronic Journal of Statistics.
- Two papers accepted as spotlights (acceptance rate <5%) for NIPS 2013
- The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited
Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram
- Matrix Factorization with Binary Components
Martin Slawski, Matthias Hein, Pavlo Lutsik
- We have improved 274 out of 816 possible best cuts in the graph partitioning benchmark of Chris Walshaw.
- We are organizing the 35th German Conference on Pattern Recognition (GCPR), 3.9.2013 - 6.9.2013, in Saarbruecken together with Joachim Weickert and Bernt Schiele.
- "Towards Realistic Team Formation in Social Networks based on Densest Subgraphs" by Shyam Rangapuram, Thomas Buehler, and Matthias Hein has been accepted at WWW 2013
- IPAM Workshop Convex Relaxation Methods for Geometric Problems in Scientific Computing at UCLA organized by Xavier Bresson, Antonin Chambolle, Tony Chan, Daniel Cremers, Stanley Osher, Thomas Pock and Gabriele Steidl.
- "Constrained fractional set programs and their application in local clustering and community detection" by Thomas Buehler, Shyam Rangapuram, Simon Setzer and Matthias Hein has been accepted at ICML 2013
- Martin's paper on non-negative least squares for deconvolution in peptide mass spectrometry has been accepted at Bioinformatics
- Matthias Hein receives ERC starting grant for his project NOLEPRO - Nonlinear Eigenproblems for Data Analysis. PhD and Postdoc positions available - more details can be found here.
- Ulrike von Luxburg and Matthias Hein organize a minisymposium on "Machine Learning'' at the Annual Meeting of the German Mathematical Society in Saarbruecken (17-21.9.2012).
- Minisymposium "Optimization methods in imaging and learning: From continuous to discrete and reverse'' organized by N. Thorstensen and O. Scherzer at ECCOMAS 2012 in Vienna (10-14.9.2012).
- Oberwolfach Workshop Learning Theory and Approximation (24.6.-30.6.) organized by Kurt Jetter, Steve Smale and Ding-Xuan Zhou.
- Minisymposium Modern matrix methods for large scale data and networks organized by David Gleich at the 2012 SIAM Conference on Applied Linear Algebra.
- Paper "How the result of graph clustering methods depends on the construction of the graph'' of Markus Maier, Ulrike von Luxburg and Matthias Hein accepted at ESAIM: Probability and Statistics.
- Joint paper on "An integer linear programming approach for finding deregulated subgraphs in regulatory networks'' by C. Backes, A. Rurainski, G.W. Klau, O. Müller, D. Stöckel, A. Gerasch, J. Küntzer, D. Maisel, N. Ludwig, M. Hein, A. Keller, H. Burtscher, M. Kaufmann, E. Meese, H.-P. Lenhof accepted at Nucleic Acids Research.
- Sparse PCA Code by Thomas Buehler and Matthias Hein based on our NIPS 2010 paper "An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA'' can be downloaded here.
- Paper on "Constrained 1-Spectral Clustering" by Shyam Rangapuram and Matthias Hein accepted at AISTATS 2012. Paper and Matlab Code will be available soon.
- Workshop paper on "Sparse matrix factorizations with simplex constraints'' by Qinqing Zheng, Martin Slawski and Matthias Hein accepted at the NIPS 2011 Workshop "Sparse Representation and Low-rank Approximation''.
- We have achieved 16 new best cuts in the bipartitioning task of the graph partitioning benchmark of Chris Walshaw using our new algorithms for balanced graph partitioning. The code for the Cheeger cut can be downloaded here; code for other balanced graph cuts will follow soon.
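For reference, the Cheeger cut value of a bipartition (here with cardinality balancing; other balanced cuts use different balancing terms), assuming a symmetric weighted adjacency matrix W and a boolean label vector:

    import numpy as np

    def cheeger_cut(W, labels):
        # cut(A, A^c) / min(|A|, |A^c|) for the bipartition encoded by labels
        A = np.asarray(labels, dtype=bool)
        cut = W[A][:, ~A].sum()          # total weight of the edges crossing the partition
        return cut / min(A.sum(), (~A).sum())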
- Matthias Hein receives the first German Pattern recognition award ("Deutscher Mustererkennungspreis'') of the DAGM (former Olympus Prize). This is the highest German award in the area of pattern recognition, computer vision and machine learning.
- Two papers accepted at NIPS 2011
- Martin Slawski and Matthias Hein: Sparse recovery by thresholded non-negative least squares
- Matthias Hein and Simon Setzer: Beyond Spectral Clustering - Tight Relaxations of Balanced Graph Cuts
- Dagstuhl seminar from 17.07.11 - 22.07.11
Mathematical and Computational Foundations of Learning Theory
organized by Matthias Hein, Gabor Lugosi, Lorenzo Rosasco and Steve Smale.
- New: Code homepage
- Code online for Nonlinear eigenproblems and their application in 1-spectral Clustering and sparse PCA from our paper:
M. Hein and T. Buehler
An Inverse Power Method for Nonlinear Eigenproblems with Applications in 1-Spectral Clustering and Sparse PCA, NIPS 2010.
- Code for the amplified commute kernel from our paper
U. von Luxburg, A. Radl and M. Hein
Getting lost in space: Large sample analysis of the commute distance, NIPS 2010.
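The paper shows that the raw commute distance becomes uninformative on large graphs, which is what the amplified kernel corrects; for context, a sketch of the standard commute distance via the pseudoinverse of the unnormalized graph Laplacian (not the amplified kernel itself):

    import numpy as np

    def commute_times(W):
        # C_ij = vol(G) * (Lp_ii + Lp_jj - 2 * Lp_ij), where Lp is the
        # Moore-Penrose pseudoinverse of the unnormalized Laplacian L = D - W
        d = W.sum(axis=1)
        L = np.diag(d) - W
        Lp = np.linalg.pinv(L)
        dg = np.diag(Lp)
        return d.sum() * (dg[:, None] + dg[None, :] - 2.0 * Lp)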