Contact

University of Oldenburg
Faculty II - Department of Computer Science
Safety-Security-Interaction Group
26111 Oldenburg

Secretariat

Ingrid Ahlhorn

A03 2-208

+49 (0) 441 - 798 2426

Safety-Security-Interaction

Welcome to the Safety-Security-Interaction Group!

The Safety-Security-Interaction group develops theoretically sound technologies for maintaining the security of IT systems in the context of safety-critical systems and the Internet of Things. The focus is on security solutions that are tailored to context-specific conditions and that take into account various types of user interaction as well as the functional safety of the systems to be protected.

News

Article at DIMVA 2024

Our paper "Inferring Recovery Steps from Cyber Threat Intelligence Reports" was accepted at DIMVA 2024!

Short summary:

Within the constantly changing threat landscape, Security Operations Centers are overwhelmed by suspicious alerts that require manual investigation. Nonetheless, given the impact and severity of modern threats, it is crucial to mitigate and respond to potential incidents quickly. Currently, security operators respond to incidents using predefined sets of actions from so-called playbooks. However, these playbooks need to be manually created and updated for each threat, again increasing the operators' workload.

In this work, we research approaches to automate the inference of recovery steps by automatically identifying the steps taken by threat actors in Cyber Threat Intelligence reports and translating them into recovery steps that can be defined in playbooks. Our insight is that by analyzing the text describing a threat, we can effectively infer the corresponding recovery actions. To this end, we first design and implement a semantic approach based on traditional Natural Language Processing techniques, and we then study a generative approach based on recent Large Language Models (LLMs). Our experiments show that even though the LLMs were not designed to solve domain-specific problems, they outperform the precision of the semantic approach by up to 45%. We also evaluate factuality, showing that the LLMs produce up to 90 factual errors over the entire dataset.
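To illustrate the general idea behind such a semantic approach, here is a minimal, self-contained Python sketch. It is not the paper's implementation: the keyword patterns, recovery actions, and the function name infer_recovery_steps are all hypothetical placeholders, standing in for the NLP pipeline described above.

```python
import re

# Hypothetical keyword-to-recovery mapping; these rules are illustrative
# only and are NOT taken from the paper.
RECOVERY_RULES = {
    r"scheduled task": "Remove attacker-created scheduled tasks",
    r"registry (run )?key": "Delete malicious registry entries",
    r"credentials? (dump|theft|dumping)": "Reset compromised credentials",
    r"command[- ]and[- ]control|c2 server": "Block C2 domains/IPs at the perimeter",
}

def infer_recovery_steps(report_text: str) -> list[str]:
    """Return recovery steps whose trigger pattern occurs in the report."""
    lowered = report_text.lower()
    return [
        recovery
        for pattern, recovery in RECOVERY_RULES.items()
        if re.search(pattern, lowered)
    ]

if __name__ == "__main__":
    report = (
        "The actor created a scheduled task for persistence, performed "
        "credential dumping, and contacted a command-and-control server."
    )
    for step in infer_recovery_steps(report):
        print("-", step)
```

A generative variant would replace the fixed rule table with a prompt asking an LLM to propose recovery steps for each extracted attacker action; the paper's evaluation compares the precision and factuality of these two directions.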

» Publications
