Computational Intelligence Lab
Welcome to the Computational Intelligence Lab at the University of Oldenburg, where we advance research in artificial intelligence (AI). Our work spans key areas including evolution strategies for optimization, transformer-based architectures, and language models. By integrating these technologies, we aim to push the boundaries of machine learning and AI systems. Join us as we explore the future of computational intelligence and the path to AGI.
Research Topics
- Cognitive Models for Large Language Models (LLMs)
- LLMs and Evolutionary Computing
- Evolution Strategies (Bio-Inspired Optimization)
- Protein Language Models and Evolutionary Molecule Design
- Dimensionality Reduction
- Renewable Energy and Machine Learning
- Adversarial Attacks
Algorithms developed by the CI Lab
- Cognitive Prompting (for human-like “thinking” in LLMs)
- Evolution Path Bias (for evolution strategies)
- Unsupervised Nearest Neighbors (a dimensionality reduction algorithm)
- Transformers for Molecule Generation
- Rake Selection (evolutionary multi-objective optimization)
- Two Sexes Evolution Strategy (evolutionary constraint handling)
AGI Research: Cognitive Prompting
Cognitive prompting is a novel approach to guiding problem-solving in large language models (LLMs) through structured, human-like cognitive operations, such as goal clarification, decomposition, filtering, abstraction, and pattern recognition. By employing systematic, step-by-step reasoning, cognitive prompting enables LLMs to tackle complex, multi-step tasks more effectively. We introduce three variants: a deterministic sequence of cognitive operations, a self-adaptive variant in which the LLM dynamically selects the sequence of cognitive operations, and a hybrid variant that uses generated correct solutions as few-shot chain-of-thought prompts. Experiments with LLaMA, Gemma 2, and Qwen models, each in two sizes, on the arithmetic reasoning benchmark GSM8K demonstrate that cognitive prompting significantly improves performance compared to standard question answering.
Link to the paper.
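To give a flavor of the deterministic variant, here is a minimal sketch in Python. The wording of the operations and the `ask_llm` helper are illustrative assumptions, not the exact prompts used in the paper.

```python
# Deterministic cognitive prompting: a fixed sequence of cognitive
# operations assembled into one instruction (operation names from the
# paper; phrasing is our own illustration).
COGNITIVE_OPERATIONS = [
    "Goal clarification: restate what the problem asks for.",
    "Decomposition: break the problem into smaller sub-problems.",
    "Filtering: identify which given facts are relevant.",
    "Abstraction: express the relevant facts in general terms.",
    "Pattern recognition: relate the structure to known problem types.",
]

def cognitive_prompt(question: str) -> str:
    """Build a prompt that walks an LLM through a fixed sequence of
    human-like cognitive operations before it answers."""
    steps = "\n".join(f"{i + 1}. {op}" for i, op in enumerate(COGNITIVE_OPERATIONS))
    return (
        "Solve the following problem by working through these cognitive "
        f"operations in order, labeling each step:\n{steps}\n\n"
        f"Problem: {question}\n"
        "Finally, state the result on a line starting with 'Answer:'."
    )

# Usage with any chat-completion client (hypothetical ask_llm wrapper):
# answer = ask_llm(cognitive_prompt("A GSM8K-style word problem ..."))
```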
Further papers from the CI Group.
AGI Research: Conceptual Metaphor Theory as a Prompting Paradigm for Large Language Models
Conceptual Metaphor Theory (CMT) can be used as a framework for enhancing LLMs through cognitive prompting in complex reasoning tasks. CMT leverages metaphorical mappings to structure abstract reasoning, improving a model's ability to process and explain intricate concepts. By incorporating CMT-based prompts, we guide LLMs toward more structured and human-like reasoning patterns. To evaluate this approach, we compare four base models (Llama3.2, Phi3, Gemma2, and Mistral) against their CMT-augmented counterparts on benchmark tasks spanning domain-specific reasoning, creative insight, and metaphor interpretation. Responses were automatically evaluated by the Llama3.3 70B model. Experimental results indicate that CMT prompting significantly enhances reasoning accuracy, clarity, and metaphorical coherence, outperforming the baseline models across all evaluated tasks.
Link to the paper.
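A minimal sketch of what a CMT-style prompt might look like in Python; the journey metaphor and the step wording are assumptions for illustration, not the prompts evaluated in the paper.

```python
def cmt_prompt(question: str, source_domain: str = "a journey") -> str:
    """Ask the model to reason via a conceptual metaphor: map the abstract
    problem onto a concrete source domain, reason there, then map back."""
    return (
        f"Understand the following problem through the metaphor of {source_domain}.\n"
        "1. Map the problem's key concepts onto elements of the source domain.\n"
        "2. Reason about the problem within the source domain.\n"
        "3. Map your conclusions back to the original problem and answer it.\n\n"
        f"Problem: {question}"
    )

# Example: cmt_prompt("Why do startups fail?", source_domain="a sea voyage")
```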
COVID Research: Evolutionary Multi-objective Design of SARS-CoV-2 Protease Inhibitor Candidates
Computational drug design based on artificial intelligence is an emerging research area. At the time of writing this paper, the world suffers from an outbreak of the coronavirus SARS-CoV-2. A promising way to stop virus replication is protease inhibition. We propose an evolutionary multi-objective algorithm (EMOA) to design potential protease inhibitors for SARS-CoV-2's main protease. Based on the SELFIES representation, the EMOA maximizes the binding of candidate ligands to the protein using the docking tool QuickVina 2, while at the same time taking into account further objectives such as drug-likeness and the fulfillment of filter constraints. The experimental part analyzes the evolutionary process and discusses the inhibitor candidates.
Link to the paper.
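The sketch below shows the general shape of such a loop, assuming the `selfies` and `rdkit` packages. The `docking_score` stub is a hypothetical stand-in for an external QuickVina 2 run, and the simple archive-based loop is an illustration rather than the EMOA from the paper.

```python
import random
import selfies as sf                       # molecule string representation
from rdkit import Chem
from rdkit.Chem import QED                 # drug-likeness objective

ALPHABET = list(sf.get_semantic_robust_alphabet())

def mutate(individual: str) -> str:
    """Replace one SELFIES token; SELFIES robustness keeps molecules valid."""
    tokens = list(sf.split_selfies(individual))
    i = random.randrange(len(tokens))
    tokens[i] = random.choice(ALPHABET)
    return "".join(tokens)

def docking_score(smiles: str) -> float:
    """Hypothetical stub for an external QuickVina 2 run (lower = better)."""
    return random.uniform(-12.0, -4.0)     # replace with a real docking call

def objectives(individual: str) -> tuple[float, float]:
    """Two minimization objectives: docking score and negated drug-likeness."""
    smiles = sf.decoder(individual)
    mol = Chem.MolFromSmiles(smiles)
    if mol is None or mol.GetNumAtoms() == 0:   # defensive guard
        return float("inf"), float("inf")
    return docking_score(smiles), -QED.qed(mol)

def dominates(a, b) -> bool:
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Simple evolutionary loop that keeps a non-dominated archive.
archive = []
parent = sf.encoder("CC(=O)Oc1ccccc1C(=O)O")   # seed ligand: aspirin
for _ in range(200):
    child = mutate(parent)
    f = objectives(child)
    if not any(dominates(g, f) for _, g in archive):
        archive = [(s, g) for s, g in archive if not dominates(f, g)]
        archive.append((child, f))
        parent = child
```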
Research: A Fast and Simple Evolution Strategy with Covariance Matrix Estimation
With the rise of AI methods, the demand for efficient optimization methods that are easy to implement and use is growing. This paper introduces a simple method for numerical black-box optimization: covariance matrix estimation for the (1+1)-ES with Rechenberg's step size control. Experiments on a small set of benchmark functions demonstrate that the approach outperforms its isotropic variant, achieving competitive convergence on problems with scaled and correlated dimensions.
Link to the paper.
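A minimal sketch of the idea in NumPy, under a simplified reading of the method: a (1+1)-ES with Rechenberg's 1/5th success rule whose mutation covariance is estimated from a window of recent successful solutions. This is an illustration, not the authors' reference implementation.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, generations=2000, window=20, tau=0.5):
    """(1+1)-ES with 1/5th-rule step size control and covariance estimation."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, float)
    fx = f(x)
    n = x.size
    cov = np.eye(n)                        # isotropic until enough data
    successes, archive = [], [x.copy()]
    for _ in range(generations):
        child = x + sigma * rng.multivariate_normal(np.zeros(n), cov)
        fc = f(child)
        success = fc <= fx
        successes.append(success)
        if success:
            x, fx = child, fc
            archive.append(child.copy())
            if len(archive) > window:
                archive.pop(0)
            if len(archive) >= n + 1:      # estimate covariance of recent bests
                cov = np.cov(np.array(archive).T) + 1e-8 * np.eye(n)
        if len(successes) == window:       # Rechenberg's 1/5th success rule
            rate = sum(successes) / window
            sigma *= 1 / tau if rate > 0.2 else tau
            successes = []
    return x, fx

# Example: a correlated, scaled quadratic where isotropic mutation struggles.
f = lambda z: z[0] ** 2 + 100 * (z[1] - z[0]) ** 2
best, val = one_plus_one_es(f, np.array([5.0, -5.0]))
```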
Book: Genetic Algorithm Essentials
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. It avoids much of the usual formalism and thus opens the subject to a broader audience than manuscripts overloaded with notation and equations.
The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants: multimodal, constrained, and multi-objective problems. Lastly, the third part briefly introduces theoretical tools for GAs, their intersections and hybridizations with machine learning, and highlights selected promising applications.
Link to the book.
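In the spirit of the book's introductory part, here is a tiny genetic algorithm showing the basic evolutionary operators on the classic OneMax toy problem; all parameter settings are illustrative, not taken from the book.

```python
import random

N_BITS, POP, GENS, P_MUT = 32, 30, 100, 1 / 32

def fitness(ind):                # OneMax: count the ones
    return sum(ind)

def tournament(pop):             # binary tournament selection
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):           # one-point crossover
    cut = random.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutate(ind):                 # independent bit flips
    return [1 - bit if random.random() < P_MUT else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
best = max(pop, key=fitness)
```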
Blog: Towards Data Science article "Convolutional Neural Networks: An Introduction"
One may rightly wonder why another introduction to convolutional neural networks is necessary when there are numerous introductions to the same topic on the web. However, this article takes the reader from the simplest neural network, the perceptron, to the deep learning networks ResNet and DenseNet, (hopefully) in an understandable, but definitely in a concise way, covering many of the basics of deep learning in a few steps. So here we go, if you want.
Link to the blog post.
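As a taste of the article's starting point, here is a minimal perceptron in NumPy, trained with the classic perceptron update rule on the logical AND function; this sketch is ours and not taken from the article.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Train a single linear unit; y must be labeled +1 / -1."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: move the hyperplane
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
pred = np.sign(X @ w + b)                # reproduces the AND labels
```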
All TDS blog posts.
SPIEGEL Podcast - Moreno+1: »Eine starke KI? Halte ich für wahrscheinlich« ("Strong AI? I consider it likely")
AI professor Oliver Kramer works on algorithms that exhibit “human-like cognitive performance.” He is convinced that machines will become more intelligent than humans, and that this is something we should think about.
Link to the podcast.