Machine Learning I - Probabilistic Unsupervised Learning (WS23/24)

Instructor: J. Lücke
Exercises: H. Mousavi, V. Boukun, F. Panagiotou and T. Kahlke
Language: English

Time and place:
Tuesday: 12:15 - 13:45, W02 1-148 (Lectures)
Tuesday: 16:15 - 17:45, W06 0-008, W32 1-112 and W16A 015/016 (Exercises Ü1, Ü2 and Ü3)

Supplementary Material

The Matrix Cookbook: A helpful guide for typical matrix and vector operations (free online).

Further Reading

  • Pattern Recognition and Machine Learning, C. M. Bishop, ISBN: 978-0-387-31073-2, Springer, 2006.
  • Machine Learning: A Probabilistic Perspective, K. P. Murphy, MIT Press, 2012.
  • Theoretical Neuroscience – Computational and Mathematical Modeling of Neural Systems, P. Dayan and L. F. Abbott, ISBN: 0-262-04199-5, MIT Press, 2001.
  • Information Theory, Inference, and Learning Algorithms, D. MacKay, ISBN-10: 0521642981, Cambridge University Press, 2003. (online available)
  • Computational Cognitive Neuroscience, RC O’Reilly and Y Munakata, ISBN-10: 0262650541, MIT Press, 2000.


Prerequisites

Basic knowledge of higher mathematics as taught in first degrees in Physics, Mathematics, Statistics, Engineering or Computer Science (basic linear algebra and analysis), and basic programming skills (the course supports MATLAB and Python). The course has many relations to statistical physics, statistics, probability theory, and stochastics, but its content will be developed independently of detailed prior knowledge in these fields.


The field of Machine Learning develops and provides methods for the analysis of data and signals. Typical application domains are computer hearing, computer vision, general pattern recognition and large-scale data analysis (recently often termed "Big Data"). Furthermore, Machine Learning methods serve as models for information processing and learning in humans and animals, and are often considered as part of artificial intelligence approaches.

This course gives an introduction to unsupervised learning methods, i.e., methods that extract knowledge from data without requiring explicit labels for individual data points. We will introduce a common probabilistic framework for learning and a methodology to derive learning algorithms for different types of tasks. Derived examples include algorithms for clustering, classification, component extraction, feature learning, blind source separation, and dimensionality reduction. Relations to neural network models and learning in biological systems will be discussed where appropriate.
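As a flavor of the kind of algorithm the probabilistic framework yields (this is an illustrative sketch, not the course's own derivation), the classic expectation-maximization (EM) procedure for clustering with a two-component Gaussian mixture can be written in a few lines of NumPy. The toy data, initialization, and iteration count below are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D data drawn from two well-separated clusters (illustrative only)
X = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(3.0, 0.5, 100)])

# Initial parameters of a 2-component Gaussian mixture
pi = np.array([0.5, 0.5])    # mixing proportions
mu = np.array([-1.0, 1.0])   # component means
var = np.array([1.0, 1.0])   # component variances

for _ in range(50):
    # E-step: posterior responsibilities p(c | x_n) for each data point
    dens = (pi / np.sqrt(2 * np.pi * var)) * \
           np.exp(-(X[:, None] - mu) ** 2 / (2 * var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters as responsibility-weighted statistics
    Nk = resp.sum(axis=0)
    pi = Nk / len(X)
    mu = (resp * X[:, None]).sum(axis=0) / Nk
    var = (resp * (X[:, None] - mu) ** 2).sum(axis=0) / Nk
```

After convergence, the estimated means recover the two cluster centers; the course develops how such E- and M-steps follow from a general variational principle rather than being postulated ad hoc.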



The students will acquire advanced knowledge about mathematical models of data and sensory signals, and they will learn how such models can be used to derive algorithms for data and signal processing. They will learn about the typical scientific challenges associated with algorithms for unsupervised knowledge extraction, including clustering, dimensionality reduction, compression, and signal enhancement. Typical examples will include applications to computer vision and computer hearing.
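To make "dimensionality reduction" concrete, here is a minimal sketch (not course material) of principal component analysis via the singular value decomposition: synthetic 3-D data that varies mostly along one direction is centered and projected onto its first principal component. The data-generating direction and noise level are arbitrary assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 3-D points generated from a single 1-D latent direction plus noise
latent = rng.normal(size=(500, 1))
W = np.array([[2.0, 1.0, 0.5]])           # assumed generating direction
X = latent @ W + 0.05 * rng.normal(size=(500, 3))

# Center the data, then obtain principal directions from the SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:1].T   # 1-D projection onto the first principal component
```

The squared singular values measure the variance captured by each component; here the first component accounts for nearly all of it, which is the sense in which the 3-D data is effectively one-dimensional.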


Video recordings of the lectures can be found here (credentials needed).

(Changed: 29 May 2024)