Computational Intelligence Lab
Welcome to the Computational Intelligence Lab at the University of Oldenburg.
Towards AGI: Cognitive Prompting
Cognitive prompting is our approach to guide problem-solving in large language models (LLMs) through structured, human-like cognitive operations, such as goal clarification, decomposition, filtering, abstraction, and pattern recognition. By employing systematic, step-by-step reasoning, cognitive prompting enables LLMs to tackle complex, multi-step tasks more effectively. We introduce three variants: a deterministic sequence of cognitive operations, a self-adaptive variant in which the LLM dynamically selects the sequence of cognitive operations, and a hybrid variant that uses generated correct solutions as few-shot chain-of-thought prompts. Experiments with LLaMA, Gemma, and Qwen models, each in two sizes, on the arithmetic reasoning benchmark GSM8K demonstrate that cognitive prompting significantly improves performance compared to standard question answering.
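The deterministic variant can be sketched as a fixed sequence of cognitive operations wrapped around the question before it is sent to the model. The operation wordings and the function name below are illustrative, not taken from the paper:

```python
# Minimal sketch of the deterministic cognitive-prompting variant.
# The exact phrasing of each operation is an assumption for illustration.

COGNITIVE_OPERATIONS = [
    "Goal clarification: restate what the problem asks for.",
    "Decomposition: break the problem into sub-problems.",
    "Filtering: discard information irrelevant to the goal.",
    "Abstraction: identify the underlying structure or formula.",
    "Pattern recognition: relate the problem to known solution patterns.",
]

def build_cognitive_prompt(question: str) -> str:
    """Wrap a question in a fixed sequence of cognitive operations."""
    steps = "\n".join(f"{i}. {op}" for i, op in enumerate(COGNITIVE_OPERATIONS, 1))
    return (
        "Solve the problem by working through the following cognitive "
        f"operations in order:\n{steps}\n\nProblem: {question}"
    )

prompt = build_cognitive_prompt(
    "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
)
print(prompt)
```

In the self-adaptive variant, the list of operations would not be fixed; instead, the model itself would be asked to choose the next operation at each step.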
Computational Biology: Sequence-Based Protein Pocket Prediction via ProtT5 Embeddings and Spatial Sampling
Protein-ligand binding pockets can be identified using ProtT5-based sequence embeddings, trained on known complexes from the PDBBind database. To compensate for the lack of negative examples, a spatial sampling strategy generates non-pocket sequences for supervised training. The method achieves classification accuracies up to 0.90, with SVM models performing best. A complete pocket detection pipeline enables prediction directly from protein structure. A case study on the c-Src kinase demonstrates the practical value of this method for identifying binding sites relevant to drug discovery.
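The supervised step can be illustrated with a small sketch: an SVM is trained to separate pocket from non-pocket residue embeddings. Here random Gaussian vectors stand in for ProtT5 embeddings (the real pipeline would embed pocket sequences from PDBBind and spatially sampled non-pocket sequences); the dimensions, class means, and sample counts are assumptions for illustration only:

```python
# Sketch of the pocket vs. non-pocket classification step.
# Synthetic vectors stand in for ProtT5 residue embeddings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim = 1024  # ProtT5 embedding dimensionality

# Stand-ins: "pocket" embeddings and spatially sampled "non-pocket" negatives.
pocket = rng.normal(loc=0.5, scale=1.0, size=(200, dim))
non_pocket = rng.normal(loc=-0.5, scale=1.0, size=(200, dim))

X = np.vstack([pocket, non_pocket])
y = np.array([1] * 200 + [0] * 200)  # 1 = pocket, 0 = non-pocket
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SVMs performed best in the study; an RBF kernel is a reasonable default.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

With real embeddings the classes are far less separable than in this toy setup, which is where the spatial sampling strategy for negatives matters.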
Evolutionary Algorithms: Enhancing Evolutionary Algorithms through Meta-Evolution Strategies
A meta-evolutionary strategy dynamically adjusts key hyperparameters of CMA-ES, PSO, and DE by using a (1+1)-ES with Rechenberg’s 1/5-rule. Each optimizer adapts two specific parameters critical to performance while using restarts and dimension-aware budgets to counter stagnation. Results across benchmark functions reveal that algorithm performance is highly problem-dependent: PSO performs well on simple landscapes, CMA-ES on complex ones, and DE shows robust overall fitness. The approach highlights the power of meta-evolution for adaptive hyperparameter control.
Books
Blogposts on Towards Data Science
SPIEGEL Podcast - Moreno+1: »Eine starke KI? Halte ich für wahrscheinlich«
AI professor Oliver Kramer works on algorithms that exhibit "human-like cognitive performance." He is convinced that machines will become more intelligent than humans, and that we should think about what that means.
Link to the podcast.