Head of lab
23 Nov 2021
Our paper "A Variational EM Acceleration for Efficient Clustering at Very Large Scales" (Hirschberger et al.) has been accepted for publication by IEEE Trans. on Pattern Analysis and Machine Intelligence.
12 Oct 2021
Hamid Mousavi successfully defended his doctoral thesis "Non-Linear Latent Variable Models for Inference and Learning from Non-Gaussian Data". Congratulations!
15 Sept 2021
Our paper "Phase transition for parameter learning of hidden Markov models" (Rau et al.) has been accepted by Physical Review E (see here).
15 July 2021
Our paper "Generalizable dimensions of human cortical auditory processing of speech in natural soundscapes: A data-driven ultra high field fMRI approach" (Boos et al.) has been published by NeuroImage.
16 June 2021
The new postdoc Dr. Dmytro Velychko has joined the lab - welcome!
16 June 2021
Our grant "On the Convergence of Variational Deep Learning to Sums of Entropies" has been positively evaluated and is funded by the German Research Foundation (DFG). The grant is a joint project with Asja Fischer (RUB). Through this grant, we are very happy to be part of the DFG priority program 2298 "Theoretical Foundations for Deep Learning".
29 April 2021
Our paper "Inference and Learning in a Latent Variable Model for Beta Distributed Interval Data" (Mousavi et al.) has been published by Entropy.
25 Feb 2021
We have published a bioRxiv paper on our first results in the BMBF special funds grant on COVID-19 research. See the paper "Visualization of SARS-CoV-2 Infection Scenes by ‘Zero-Shot’ Enhancements of Electron Microscopy Images" (Drefs et al.). See here for an example image, and here for a report on the University website.
4 February 2021
We hosted a hands-on session on our PyTorch software framework for Truncated Variational Expectation Maximization with members from the Haefner Lab from the University of Rochester.
22 December 2020
Our paper "Evolutionary Variational Optimization of Generative Models" (Drefs et al.) is now available on arXiv.
27 November 2020
Our paper "Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents" (Guiraud et al.) is now available on arXiv.
Based on first theoretical principles, our group develops novel, efficient learning algorithms for standard and novel data models. The resulting algorithms are applied to a range of domains including acoustic data, visual data, medical data, and data from general pattern recognition tasks. Alongside theoretical and practical algorithm development, we investigate advanced Machine Learning methods as models for neural information processing and, vice versa, use ideas and insights from the neurosciences to motivate novel research directions in Machine Learning.
We pursue projects on efficient generative models (including deep generative models) for large-scale unsupervised and semi-supervised learning, autonomous learning, and data enhancement.
We are part of the Cluster of Excellence Hearing4all and the Department of Medical Physics and Acoustics at the School of Medicine and Health Sciences.
For any inquiries, please contact Jörg Lücke.
(The complete list can be found here.)
F. Hirschberger*, D. Forster*, J. Lücke (2021).
A Variational EM Acceleration for Efficient Clustering at Very Large Scales
IEEE Trans. on Pattern Analysis and Machine Intelligence, accepted.
J. Lücke, D. Forster, Z. Dai (2021).
The Evidence Lower Bound of Variational Autoencoders Converges to a Sum of Three Entropies.
arXiv preprint arXiv:2010.14860 (arXiv)
A. S. Sheikh*, N. S. Harper*, J. Drefs, Y. Singer, Z. Dai, R. E. Turner and J. Lücke (2019).
STRFs in primary auditory cortex emerge from masking-based statistics of natural sounds.
PLOS Computational Biology 15(1): e1006595 (online access, bibtex)
*joint first authorship.
R. Holca-Lamarre, J. Lücke* and K. Obermayer* (2017).
Models of Acetylcholine and Dopamine Signals Differentially Improve Neural Representations.
Frontiers in Computational Neuroscience, 11:54 (online access, bibtex)
*joint senior authorship.
T. Monk, C. Savin and J. Lücke (2016).
Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics.
Advances in Neural Information Processing Systems (NIPS), 29: 4278-4286. (online access, bibtex)
Z. Dai and J. Lücke (2014).
Autonomous Document Cleaning – A Generative Approach to Reconstruct Strongly Corrupted Scanned Texts.
IEEE Transactions on Pattern Analysis and Machine Intelligence 36(10): 1950-1962. (online access, bibtex)