Adversarial Resilience Learning

Dr.-Ing. Eric Veith

Department of Computing Science

Research Mission Statement

Our research aims to create learning agent systems that are fit to control critical national infrastructures. Our concern is to support human operators: our agents learn from domain-expert knowledge, give behavioral guarantees, and incorporate known-good controllers. Our agents can counter unforeseen events (“black swan events”), from forecast deviations to cyber attacks, and thereby provide resilience to critical infrastructures. We aim to advance the state of the art in deep reinforcement learning, neuroevolutionary reinforcement learning, offline learning, and explainable reinforcement learning until a generalized agent architecture makes the AI expert superfluous in day-to-day operations: such an agent can then be deployed in critical infrastructures to support domain experts directly.

Research Topics

Autocurricula for Critical Infrastructures

The core of our methodology is an autocurriculum setup: during training, our agent is always paired with an adversary agent. This aids exploration and favors the development of more robust strategies. This autocurriculum setup, as the methodological basis for learning resilient strategies for complex cyber-physical systems, is the source of our name: Adversarial Resilience Learning.
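
As a rough illustration of the co-adaptation dynamic, consider the sketch below. Everything in it is a hypothetical stand-in: `ToyVoltageEnv`, the random-search “policies”, and the zero-sum reward are simplifications invented for this example, not our actual agents or grid simulation.

```python
import random

class ToyVoltageEnv:
    """Toy stand-in for a grid simulation: the defender tries to keep
    a bus voltage near 1.0 p.u. while the attacker perturbs it."""
    def __init__(self):
        self.voltage = 1.0

    def step(self, defender_action, attacker_action):
        self.voltage += attacker_action - defender_action
        deviation = abs(self.voltage - 1.0)
        # Zero-sum: the defender minimises the deviation,
        # the attacker maximises it.
        return -deviation, deviation

def episode(defender, attacker, steps=20):
    env = ToyVoltageEnv()
    d_ret = a_ret = 0.0
    for _ in range(steps):
        d_r, a_r = env.step(defender(), attacker())
        d_ret += d_r
        a_ret += a_r
    return d_ret, a_ret

def make_policy(strength):
    # "Policies" are just parameterised random perturbations here;
    # in the real setup each side would be a full DRL agent.
    return lambda: random.uniform(0.0, strength)

defender_strength = attacker_strength = 0.1
for generation in range(50):
    # Alternate: improve the defender against the current attacker ...
    defender_strength = max(
        (random.uniform(0.0, 0.3) for _ in range(10)),
        key=lambda s: episode(make_policy(s),
                              make_policy(attacker_strength))[0])
    # ... then improve the attacker against the improved defender.
    attacker_strength = max(
        (random.uniform(0.0, 0.3) for _ in range(10)),
        key=lambda s: episode(make_policy(defender_strength),
                              make_policy(s))[1])
```

Each side’s progress raises the bar for the other, which is exactly the curriculum effect the pairing is meant to produce.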

Offline Deep Reinforcement Learning

Deep reinforcement learning is resource-intensive. Especially when complex critical infrastructures are the target, simulations can consume a great deal of compute power. However, a wealth of domain knowledge already exists, and agents should not need to rediscover it. Our research enables agents to learn from previously modelled use cases and misuse cases.
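
The sketch below illustrates the offline idea in its simplest form: tabular Q-learning over a fixed log of (state, action, reward, next state) transitions, such as one might record from a known-good controller. The dataset and the state encoding are invented for this example; our actual work targets modelled use and misuse cases with deep function approximators.

```python
from collections import defaultdict

# A logged "use case": (state, action, reward, next state) transitions
# recorded from an existing controller; no live simulation is needed.
# States are integers 0..4, actions move the state by -1, 0, or +1,
# and the reward prefers staying near state 2.
ACTIONS = (-1, 0, 1)
dataset = [(s, a, -abs(s + a - 2), max(0, min(4, s + a)))
           for s in range(5) for a in ACTIONS]

GAMMA, ALPHA = 0.9, 0.1
q = defaultdict(float)

# Offline (batch) Q-learning: sweep the fixed dataset repeatedly
# instead of interacting with an environment.
for _ in range(200):
    for s, a, r, s_next in dataset:
        target = r + GAMMA * max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])

# Greedy policy extracted from the learned Q-values.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(5)}
print(policy)  # moves every state toward state 2, then holds
```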

eXplainable Reinforcement Learning

Deep reinforcement learning agents are still largely black boxes. Whether an agent has learned a sensible strategy, or simply got “lucky” during tests because the simulation setup provided supportive situations that were easy to exploit, cannot be validated by simulation alone. Even large-scale simulation setups leave a trace of doubt, especially when the agent is transferred into another, real environment. This makes such agents unfit for deployment in critical infrastructures. Our research advances the state of the art to seamlessly provide equivalent representations of DRL policy networks; these make the agent analyzable and enable us to give behavioral guarantees or to verify the effect of our autocurriculum setup.
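
A common way to obtain an analyzable stand-in for a policy network, sketched below, is to distill it into a small decision tree. A caveat: distillation yields only an approximate surrogate, whereas our NN2EQCDT work derives equivalent representations. The toy `policy` function and its two features are invented for this example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for a trained DRL policy: maps a (voltage, load)
# observation to one of three discrete control actions.
def policy(obs):
    voltage, load = obs
    if voltage > 1.05:
        return 0  # tap down
    if voltage < 0.95 and load > 0.5:
        return 2  # tap up
    return 1      # hold

# Sample the policy on observations drawn from its input domain.
rng = np.random.default_rng(0)
observations = rng.uniform([0.9, 0.0], [1.1, 1.0], size=(5000, 2))
actions = np.array([policy(obs) for obs in observations])

# Fit a shallow decision tree as a human-readable surrogate,
# then print its rules and how faithfully it imitates the policy.
surrogate = DecisionTreeClassifier(max_depth=3).fit(observations, actions)
print(export_text(surrogate, feature_names=["voltage", "load"]))
print("fidelity:", surrogate.score(observations, actions))
```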

Neuroevolutionary Deep Reinforcement Learning

Every algorithm in machine learning or reinforcement learning has hyperparameters, and deep reinforcement learning additionally requires a neural network to be constructed. All of this depends on the task and environment at hand. We envision a system in which no researcher or DRL expert is required to fine-tune the hyperparameters of an agent and the learning algorithms it uses: the agent should do so automatically.

This part of our research is still in its infancy.
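
Even so, a minimal sketch can convey the direction: a simple evolutionary loop over hyperparameters, in which the hypothetical `fitness` function stands in for a full training-and-evaluation run.

```python
import random

def fitness(hp):
    """Stand-in for a full training run: scores a hyperparameter set.
    In practice this would train an agent and report its evaluation
    return; here a synthetic optimum near lr=3e-4, hidden=128 is used."""
    return -(abs(hp["lr"] - 3e-4) * 1e4 + abs(hp["hidden"] - 128) / 16)

def mutate(hp):
    child = dict(hp)
    child["lr"] = max(1e-5, hp["lr"] * random.uniform(0.5, 2.0))
    child["hidden"] = max(8, hp["hidden"] + random.choice((-16, 0, 16)))
    return child

# Truncation-selection evolution: keep the best half of the
# population, refill it with mutated copies of the survivors.
population = [{"lr": random.uniform(1e-5, 1e-2),
               "hidden": random.choice((32, 64, 128, 256))}
              for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents))
                            for _ in range(10)]

print("best:", max(population, key=fitness))
```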

Combined Agent Architecture

The aforementioned modules must interact with each other sensibly, without disruptive side effects. An all-encompassing architecture is at the heart of the Adversarial Resilience Learning research. It has two main features: First, a discriminator tracks the efficiency of existing rules (e.g., from the NN2EQCDT algorithm) and of the DRL policy, enabling the agent to react to unknown situations and leverage the power of deep reinforcement learning while still being able to give guarantees. Second, the rules extractor, the rules repository, and the rules policy form a full cycle in which the agent codifies learnt strategies, allows them to be inspected, and can even use them in a simple rehearsal approach to counter catastrophic forgetting.
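
The sketch below conveys the discriminator idea under simplifying assumptions: a hypothetical rules policy that only answers for observations its rules cover, a DRL policy that always answers, and a running score per sub-policy that decides which one acts.

```python
import random

class Discriminator:
    """Tracks a running performance estimate for each sub-policy and
    routes every decision to the best-trusted applicable one."""
    def __init__(self, policies, decay=0.95):
        self.policies = policies  # name -> callable(obs) -> action or None
        self.scores = {name: 0.0 for name in policies}
        self.decay = decay
        self.last = None

    def act(self, obs):
        proposals = {n: p(obs) for n, p in self.policies.items()}
        applicable = {n: a for n, a in proposals.items() if a is not None}
        # Among the sub-policies that can act here, pick the one
        # with the highest tracked score.
        self.last = max(applicable, key=self.scores.__getitem__)
        return applicable[self.last]

    def feedback(self, reward):
        # Exponentially weighted running score for the policy that acted.
        old = self.scores[self.last]
        self.scores[self.last] = self.decay * old + (1 - self.decay) * reward

# Hypothetical sub-policies: the rule covers only part of the state
# space (returning None elsewhere); the DRL policy always answers.
def rules_policy(obs):
    return "tap_down" if obs > 1.05 else None

def drl_policy(obs):
    return random.choice(("tap_down", "hold", "tap_up"))

agent = Discriminator({"rules": rules_policy, "drl": drl_policy})
print(agent.act(1.08))      # the rule applies and wins the tie
agent.feedback(reward=1.0)  # reinforces trust in the rules policy
```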

Software

We create free/libre open source software! Our agent architecture’s reference implementation is being developed completely in the open.

  https://gitlab.com/arl2/arl

We’re also part of the core development team of palaestrAI, a training ground for autonomous agents and the framework for sound experimentation that we use to verify our claims.

  https://gitlab.com/arl2/palaestrai
