Software is available for three areas of research:
Several algorithms using variational acceleration are available. They are applicable to essentially any data on which k-means is expected to work well, and they typically improve on k-means in quality, speed, or both. Improvements in quality can especially be expected for data with strongly overlapping clusters. Improvements in speed are most pronounced for large-scale data, i.e., data with high-dimensional data spaces, many data points, and many clusters. In these cases, speedups of one to two orders of magnitude are possible.
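The core idea behind this kind of acceleration can be sketched as follows: instead of comparing every data point against all k cluster centers in each iteration, each point only searches over a short list of candidate clusters. The sketch below is a minimal illustration of this truncated search, not the lab's actual implementation; the function name, the candidate-list size, and the refresh strategy are all assumptions for illustration.

```python
import numpy as np

def truncated_kmeans(X, k, n_candidates=3, iters=20, seed=0):
    """Illustrative k-means with a truncated cluster search: each point
    is only compared against its n_candidates nearest clusters instead
    of all k, which is where the speedup for many clusters comes from."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), k, replace=False)]
    # one full distance computation to initialise the candidate lists
    d = np.linalg.norm(X[:, None] - means[None], axis=2)
    cand = np.argsort(d, axis=1)[:, :n_candidates]          # (N, G)
    for _ in range(iters):
        # restricted E-step: nearest mean among the candidates only
        dc = np.linalg.norm(X[:, None] - means[cand], axis=2)
        assign = cand[np.arange(len(X)), np.argmin(dc, axis=1)]
        # M-step: update each cluster mean from its assigned points
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                means[j] = pts.mean(axis=0)
        # refresh candidate lists; a real implementation would do this
        # cheaply (e.g. via neighbouring clusters), not with a full pass
        d = np.linalg.norm(X[:, None] - means[None], axis=2)
        cand = np.argsort(d, axis=1)[:, :n_candidates]
    return means, assign
```

Because each point only evaluates a handful of clusters per iteration, the per-iteration cost scales with the candidate-list size rather than with k, which matters for data with many clusters.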
Sparse coding algorithms aim at extracting the elementary constituents of data. We make available a range of algorithms that assume binary, discrete, or mixed binary-continuous constituents in the data generation process. As is customary, the observed data itself is assumed to be continuous.
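To make the generative assumption concrete, the following is a minimal sketch of the binary-latent case: each data point is a superposition of a few dictionary elements switched on by Bernoulli latents, plus continuous Gaussian observation noise. The function name, the dictionary W, the sparsity pi, and the noise level are hypothetical choices for illustration, not part of the released software.

```python
import numpy as np

def sample_binary_sparse(W, pi, n, noise_std=0.1, seed=0):
    """Sample from a binary sparse coding model:
    latents   s_h ~ Bernoulli(pi)            (binary constituents)
    data      y   = W s + Gaussian noise     (continuous observations)
    W is a (D x H) dictionary; each column is one elementary constituent."""
    rng = np.random.default_rng(seed)
    H = W.shape[1]
    S = (rng.random((n, H)) < pi).astype(float)     # binary causes, sparse if pi is small
    Y = S @ W.T + rng.normal(0.0, noise_std, (n, W.shape[0]))
    return Y, S
```

The discrete and binary-continuous variants differ only in the prior on the latents s; the continuous observation model stays the same.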
We offer an algorithm based on a generative neural network that allows learning from large datasets with few labels. The aim is to learn classifiers of as high a quality as possible from as few labels as possible. Typical application domains of the approach are hand-written symbols (MNIST, NIST, etc.) and text document classification.
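The general few-label principle can be illustrated with a much simpler stand-in than a generative neural network: learn structure from the unlabeled data, then use the handful of labels only to name that structure. The sketch below (function name and the k-means stand-in are illustrative assumptions, not the lab's method) clusters the unlabeled data and assigns each cluster the majority label of the few labeled points that fall into it.

```python
import numpy as np

def few_label_classifier(X_unlab, X_lab, y_lab, k, iters=30, seed=0):
    """Illustrative semi-supervised scheme: cluster structure is learned
    from unlabeled data; the few labels only name the clusters."""
    rng = np.random.default_rng(seed)
    X = np.vstack([X_unlab, X_lab])
    means = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):                       # plain Lloyd iterations
        assign = np.argmin(np.linalg.norm(X[:, None] - means[None], axis=2), axis=1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                means[j] = pts.mean(axis=0)
    # map each cluster to the majority label among the labeled points
    lab_assign = np.argmin(np.linalg.norm(X_lab[:, None] - means[None], axis=2), axis=1)
    cluster_label = {}
    for j in range(k):
        ys = y_lab[lab_assign == j]
        if len(ys):
            cluster_label[j] = int(np.bincount(ys).argmax())
    def predict(Xq):
        a = np.argmin(np.linalg.norm(Xq[:, None] - means[None], axis=2), axis=1)
        return np.array([cluster_label.get(j, -1) for j in a])
    return predict
```

With well-separated classes, a single labeled example per class can suffice to label every cluster; a generative neural network plays the same role as the clustering step here, but with a far richer model of the data.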