
Machine Learning Group
Department of Mathematics and Computer Science - Saarland University

MACHINE LEARNING

Winter semester 2015/2016

EVALUATION

The evaluation can be accessed here.

LECTURE MATERIAL

Lecture notes: PDF. The notes are largely stable, but new material may be added during the semester.

The practical exercises will be in Matlab.

The Google group of the lecture can be accessed here.

SLIDES AND EXERCISES

21.10. - Introduction Exercise 0 Solution 0
23.10. - Recap: Probability Theory    
28.10. - Recap: Probability (ctd) Exercise 1 Solution 1
30.10. - Bayesian Decision Theory     Matlab Decision Boundary Demo
04.11. - Bayesian Decision Theory (ctd)/Empirical Risk Min. Exercise 2 Solution 2
06.11. - Emp. Risk Min/Linear Regression    
11.11. - Linear Regression Exercise 3 Solution 3 Data/Material for Exercise 7
13.11. - Smooth Optimization    
18.11. - Opt(cont)/Linear Classification Exercise 4 Solution 4 Material for Problem 9
20.11. - Linear Classification      
25.11. - Linear SVM Exercise 5 Solution 5 Data for Problem 12
27.11. - Kernel Methods
02.12. - Kernels II Exercise 6 Solution 6 Data for Problem 13
04.12. - Kernels II      
09.12. - Lecture canceled Exercise 7 Solution 7 Data for Problem 15
11.12. - Evaluation, ROC-Curve, AUC      
16.12. - Tests, Model selection      
18.12. - Tests, Model selection (cont.)      
06.01. - Feature selection I Exercise 8 Solution 8 Data for Problem 16
08.01. - Feature selection II      
13.01. - Boosting Exercise 9 Solution 9 Data for Problem 17
15.01. - Boosting/Decision Trees      
20.01. - Large Scale Learning Exercise 10 Solution 10 Data for Problem 21
22.01. - Neural Networks aka Deep Learning      
27.01. - Semi-supervised Learning Exercise 11 Solution 11  
29.01. - K-Means and Spectral Clustering      
03.02. - Hierarchical Clustering      
05.02. - Dimensionality Reduction      
10.02. - Statistical Learning Theory      
12.02. - Statistical Learning Theory      

TIME AND LOCATION

Lecture:

  • We, 14-16, HS I, E2 5 (on December 9 and 16, the lecture will be held in HS II, E2 5)
  • Fr, 10-12, HS 002, E1 3

Exercise Groups:

  • Group A: Th, 12-14, SR 015, E1 3, Tutor: Nikita Vedeneev
  • Group B: Th, 14-16, SR 015, E1 3, Tutor: Yongqin Xian
  • Group C: Th, 14-16, SR 107, E1 3, Tutor: Pedro Mercado Lopez
  • Group D: Fr, 14-16, SR 015, E1 3, Tutor: Russa Biswas

  • Submitting copies of previous years' solutions counts as plagiarism. The first time this happens, you receive zero points for the full sheet; if it happens again, you are excluded from the course.

EXAMS AND GRADING

Exam: 22.2., 14.00-17.00. Re-exam: 1.4., 14.00-17.00.

Grading:

  • 50% of the points in the exercises (up to that point) are needed to take part in the exams (end-term/re-exam). In addition, to be admitted to the end-term and re-exam, you need to have properly presented a solution once in the exercise groups.
  • An exam is passed if you get at least 50% of the points.
  • The grading is based on the better result of the end-term and the re-exam.

LECTURER

Prof. Dr. Matthias Hein

Office Hours: Mo, 16-18, and Th, 16-18

Organization:
Pedro Mercado Lopez
Office Hours: Mo, 15-16, and Tu, 13-14

GENERAL INFORMATION

From a broader perspective, machine learning tries to automate the process of the empirical sciences, namely extracting knowledge about natural phenomena from measured data, with the goal of either better understanding the underlying processes or making good predictions. Machine learning methods are therefore widely used in many fields: bioinformatics, computer vision, information retrieval, computational linguistics, robotics, ...

The lecture gives a broad introduction to machine learning methods. After the lecture, students should be able to solve and analyze learning problems.

List of topics (tentative)

  • Reminder of probability theory
  • Maximum Likelihood/Maximum A Posteriori Estimators
  • Bayesian decision theory
  • Linear classification and regression
  • Kernel methods
  • Model selection and evaluation of learning methods
  • Feature selection
  • Nonparametric methods
  • Boosting, Decision trees
  • Neural networks
  • Structured Output
  • Semi-supervised learning
  • Unsupervised learning (Clustering, Independent Component Analysis)
  • Dimensionality Reduction and Manifold Learning
  • Statistical learning theory

Previous knowledge of machine learning is not required. Participants should be familiar with linear algebra, analysis, and probability theory at the level of the local `Mathematics for Computer Scientists I-III' lectures. In particular, attendees should be familiar with

  • Discrete and continuous probability theory (marginals, conditional probability, random variables, expectation etc.)
    The first three chapters of L. Wasserman: All of Statistics, Springer, (2004) provide the necessary background.
  • Linear algebra (rank, linear systems, eigenvalues, eigenvectors (in particular for symmetric matrices), singular values, determinant)
    A quick reminder of the basic ideas of linear algebra can be found in the tutorial of Mark Schmidt (I did not check it for correctness!). Apart from the LU factorization, this summarizes, in a non-formal way, everything that is used in the lecture.
  • Multivariate analysis (integrals, gradient, Hessian, extrema of multivariate functions); a short Matlab sketch of these operations follows this list
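
For orientation, the following minimal Matlab sketch runs through the kind of operations listed above (Matlab is also the language of the practical exercises); the matrix and the function are illustrative examples, not course material.

    % Symmetric matrix: rank, determinant, linear system, eigen/singular values
    A = [2 1; 1 3];               % a symmetric, positive definite matrix
    r = rank(A);                  % rank
    d = det(A);                   % determinant
    [V, D] = eig(A);              % eigenvectors (columns of V), eigenvalues (diag of D)
    s = svd(A);                   % singular values
    x = A \ [1; 2];               % solution of the linear system A*x = [1; 2]

    % Multivariate function: gradient, Hessian, extremum
    f    = @(x) x(1)^2 + 3*x(2)^2;    % minimum at the origin
    grad = @(x) [2*x(1); 6*x(2)];     % gradient, computed by hand
    H    = [2 0; 0 6];                % Hessian (constant for this quadratic)
    xmin = fminsearch(f, [1; 1]);     % numerical minimizer, close to (0, 0)
    fprintf('rank=%d, det=%.1f, minimizer=(%.4f, %.4f)\n', r, d, xmin(1), xmin(2))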

Type: Core lecture (Stammvorlesung), 9 credit points. The course counts as a core lecture in both computer science and mathematics; e.g., it can be used as a mathematics lecture if you study computer science with mathematics as your minor.

LITERATURE AND OTHER RESOURCES

The lecture is based partly on the following books and partly on recent research papers:

  • R. O. Duda, P. E. Hart, and D. G. Stork: Pattern Classification, Wiley, (2000).
  • B. Schölkopf and A. J. Smola: Learning with Kernels, MIT Press, (2002).
  • J. Shawe-Taylor and N. Cristianini: Kernel Methods for Pattern Analysis, Cambridge University Press, (2004).
  • C. M. Bishop: Pattern Recognition and Machine Learning, Springer, (2006).
  • T. Hastie, R. Tibshirani, and J. Friedman: The Elements of Statistical Learning, Springer, second edition, (2008).
  • L. Devroye, L. Györfi, and G. Lugosi: A Probabilistic Theory of Pattern Recognition, Springer, (1996).
  • L. Wasserman: All of Statistics, Springer, (2004).
  • S. Boyd and L. Vandenberghe: Convex Optimization, Cambridge University Press, (2004).

NEWS

The re-exam on April 1, 2016 takes place in lecture hall HS 002 in E1 3.

Code changed: The piece of Matlab code for coordinate ascent has been updated (26.11.).
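
For readers without access to the course file, a generic coordinate ascent loop in Matlab could look like the sketch below. This is an illustrative stand-in for the technique, not the lecture's actual code: it maximizes the concave quadratic f(x) = c'*x - 0.5*x'*Q*x by exactly optimizing one coordinate at a time.

    % Coordinate ascent on f(x) = c'*x - 0.5*x'*Q*x with Q positive definite.
    Q = [3 1; 1 2];
    c = [1; 1];
    x = zeros(2, 1);
    for iter = 1:100
        x_old = x;
        for i = 1:numel(x)
            % Setting df/dx_i = c(i) - Q(i,:)*x to zero and solving for x(i):
            x(i) = (c(i) - Q(i,:)*x + Q(i,i)*x(i)) / Q(i,i);
        end
        if norm(x - x_old) < 1e-10    % stop when no coordinate moves anymore
            break
        end
    end
    xs = Q \ c;                       % closed-form maximizer for comparison
    fprintf('coordinate ascent: (%.4f, %.4f), exact: (%.4f, %.4f)\n', ...
            x(1), x(2), xs(1), xs(2))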

Modified lecture notes: added the softmax loss for the multiclass case, clarified the model assumptions in MAP estimation, and added the equivalence of least squares with an affine function class and the LDA model.

Added a link to the Google group to the lecture material section.