Machine Learning I - Probabilistic Unsupervised Learning - WS22/23

Instructor: J. Lücke

Exercises: H. Mousavi, D. Velychko, F. Panagiotou and T. Kahlke
Language: English
Time and place:

Tuesday: 14:15 - 17:45, W32 1-112, W32 1-113 and W01 0-012 (Exercises Ü1, Ü2 and Ü3)
Wednesday: 10:15 - 11:45, W32 0-005 (Lectures)

Supplementary Material

  • The Matrix Cookbook: A helpful guide for typical matrix and vector operations. (freely available online)

Further Reading

  • Pattern Recognition and Machine Learning, C. M. Bishop, ISBN: 978-0-387-31073-2, Springer, 2006.
  • Machine Learning: A Probabilistic Perspective, K. P. Murphy, MIT Press, 2012.
  • Theoretical Neuroscience – Computational and Mathematical Modeling of Neural Systems, P. Dayan and L. F. Abbott, ISBN: 0-262-04199-5, MIT Press, 2001.
  • Information Theory, Inference, and Learning Algorithms, D. MacKay, ISBN-10: 0521642981, Cambridge University Press, 2003. (available online)
  • Computational Cognitive Neuroscience, R. C. O’Reilly and Y. Munakata, ISBN-10: 0262650541, MIT Press, 2000.

Content

The field of Machine Learning develops and provides methods for the analysis of data and signals. Typical application domains are computer hearing, computer vision, general pattern recognition and large-scale data analysis (recently often termed "Big Data"). Furthermore, Machine Learning methods serve as models for information processing and learning in humans and animals, and are often considered part of artificial intelligence approaches.

This course gives an introduction to unsupervised learning methods, i.e., methods that extract knowledge from data without requiring explicit information (such as labels) about individual data points. We will introduce a common probabilistic framework for learning and a methodology to derive learning algorithms for different types of tasks. The derived examples include algorithms for clustering, classification, component extraction, feature learning, blind source separation and dimensionality reduction. Relations to neural network models and learning in biological systems will be discussed where appropriate.
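
To give a flavour of the kind of algorithm derived in the course, the following minimal Python sketch fits a Gaussian mixture model with the Expectation-Maximization algorithm, a standard probabilistic clustering method. The sketch is not part of the official course material; all function names, the initialization, and the toy data are illustrative assumptions.

import numpy as np

def em_gmm(X, K, n_iters=50, seed=0):
    """Illustrative EM for a K-component Gaussian mixture on data X (N x D)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialization (illustrative): uniform mixing weights, randomly chosen
    # data points as means, one spherical variance per component.
    pi_k = np.full(K, 1.0 / K)
    mu = X[rng.choice(N, size=K, replace=False)]
    var = np.full(K, X.var())

    for _ in range(n_iters):
        # E-step: posterior responsibilities p(c = k | x_n, Theta).
        sq_dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
        log_r = np.log(pi_k) - 0.5 * D * np.log(2 * np.pi * var) - 0.5 * sq_dist / var
        log_r -= log_r.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixing weights, means, and variances.
        Nk = r.sum(axis=0)
        pi_k = Nk / N
        mu = (r.T @ X) / Nk[:, None]
        var = (r * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)).sum(axis=0) / (D * Nk)

    return pi_k, mu, var

if __name__ == "__main__":
    # Toy data: two well-separated 2-D clusters.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2.0, 0.5, size=(100, 2)),
                   rng.normal(2.0, 0.5, size=(100, 2))])
    print(em_gmm(X, K=2))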

Outcome

The students will acquire advanced knowledge about mathematical models of data and sensory signals, and they will learn how such models can be used to derive algorithms for data and signal processing. They will learn the typical scientific challenges associated with algorithms for unsupervised knowledge extraction, including clustering, dimensionality reduction, compression and signal enhancement. Typical examples will include applications to computer vision and computer hearing.

Videos

Video recordings of the lectures can be found here (credentials needed). The videos correspond to WS20/21.
