Prof. Dr. Ernst-Rüdiger Olderog,
Department of Computing Science, FK II, University of Oldenburg,
D-26111 Oldenburg, Germany
In the following we list topics for new PhD projects in SCARE-2, each one with an acronym (for referring to it), the name of the main supervisor who proposes the topic, an abstract, and a description of the planned research. The topics appear in the alphabetic order of the names of the supervisors.
PhD candidates interested in a topic may consult the supervisor for more detailed information. The research direction may be influenced and changed by the doctoral candidates, both at the beginning of and during the project. Once a PhD project has started, a second supervisor will be appointed in agreement with the doctoral candidate.
- Synthesis of Remorse Free Strategies for Traffic Sequence Charts (WD1)
- Probably Approximately Correct Automated Formal Modelling (MF1)
- Impact of Fault Models on Realizability and Costs of Region-adherent, Self-stabilizing Distributed Algorithms (OT2)
- Functional Verification of Cyber-Physical Systems Containing Machine-Learning Components (MF3)
- Delayed Hybrid Systems (MF4)
- Synthesis of Safe Car Controllers (ERO1)
Main supervisor: Werner Damm
Abstract. Traffic Sequence Charts (TSCs) are an extension of Live Sequence Charts (LSCs) offering a concise specification of manoeuvres for highly automated driving. Formally, they can be seen as a visual dialect of a first-order linear-time temporal logic, whose visually defined predicates describe traffic snapshots in a bounded environment around the ego car. Message-passing elements of LSCs are inherited to describe Car2X communication. The thesis investigates conditions on Traffic Sequence Charts and the underlying world model which allow for the synthesis of remorse-free dominant strategies for the ego car, possibly involving cooperation with other vehicles, to perform the manoeuvres specified in TSCs.
A notion of world models will be defined taking into account the inherent openness, dynamicity, and partial informedness of applications for autonomous highway driving, including dynamic vehicle models. This forms the basis for defining the key concept of remorse-free dominant strategies for TSCs, also taking into account the key notion of robustness. The research will first assume full informedness and investigate cases for which the existence of robust remorse-free dominant strategies is decidable under full information, building on results in AVACS on quasi-decidability of safety properties in non-linear hybrid systems, and derive synthesis procedures for such cases. The work will then follow one of two research directions: to provide approximate, computationally efficient methods for synthesizing remorse-free strategies by performing synthesis on predicate abstractions of the world model, or to investigate the relaxation of full informedness by guaranteeing sufficient levels of environment observation based on realistic sensor models and Car2X communication. These two directions could lead to two related PhD projects.
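The idea behind remorse-free dominance can be illustrated in a deliberately finite setting: a strategy is dominant if, against every environment behaviour, it performs at least as well as any alternative strategy, even when no strategy wins outright. The sketch below, including the function name and the traffic-scenario labels, is purely illustrative and not part of the project description.

```python
def dominant_strategies(payoff):
    """Finite analogue of remorse-free dominance: strategy s is (weakly)
    dominant if for EVERY environment behaviour e it does at least as well
    as every alternative strategy s2, i.e. payoff[s][e] >= payoff[s2][e].
    'payoff' maps strategy name -> {environment behaviour -> payoff}."""
    return [s for s, row in payoff.items()
            if all(row[e] >= other[e]
                   for other in payoff.values() for e in row)]

# Hypothetical example: no strategy wins in a traffic jam, yet "cautious"
# is dominant -- the driver can never feel remorse for having chosen it.
payoff = {"cautious":   {"jam": 0,  "free": 1},
          "aggressive": {"jam": -1, "free": 1}}
print(dominant_strategies(payoff))
```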
Main supervisor: Martin Fränzle
Abstract. The traditional formal verification cycle assumes a fixed formal model of the system under design and, if applicable, of its environment. System verification is then pursued based on this “golden device” supplied by the design engineer. This classical notion of design verification, however, lags noticeably behind current trends in cyber-physical system design, where systems become adaptive and may, e.g., learn part of their behaviour only after deployment in situ. Conventional suggestions for dealing with this issue are either to safeguard the system by built-in, fixed interlocking mechanisms (which, being fixed, remain amenable to traditional design verification) that override any adaptive behaviour, or to try to verify the adaptation procedures as such. As neither approach seems able to keep pace with the rapidly increasing integration depth of learning and behavioural adaptation in cyber-physical systems, we here suggest a complementary new approach: automated mining of formal models with rigorously guaranteed epistemic validity, in order to facilitate automatic run-time (re-)verification in the presence of substantial behavioural adaptation.
The PhD project will generalise the initial, purpose-specific PAC formal symbolic model learning technique of Fränzle et al. (2015) into a general scheme. It will start from analysing the proposed use cases of learning and adaptive behaviour in highly automated driving functions, in cooperation with the pertinent groups at OFFIS and DLR Brunswick. Based on this, the candidate will develop a generic formal model comprising fixed parts (e.g., the car kinematics) and variable parts subject to learning (e.g., a driving strategy for a certain manoeuvre like overtaking). The candidate will then develop techniques for learning an instance of this generic formal model from observations, provide mathematical techniques akin to those of Fränzle et al. (2015) for rigorously quantifying the epistemic validity of the model thus mined, and provide techniques based on quantitative SMT-based verification for robust automatic run-time verification reflecting the statistical nature of the model.
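The flavour of such probably-approximately-correct validity guarantees can be conveyed, in a much simplified finite setting, by the classical sample-complexity bound for a finite hypothesis class. The function below is a textbook illustration only, not the technique of Fränzle et al. (2015).

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Classical PAC bound for a finite hypothesis class (realizable case):
    m >= (1/epsilon) * (ln |H| + ln(1/delta)) i.i.d. samples suffice so
    that, with probability >= 1 - delta, every hypothesis consistent with
    the samples has true error <= epsilon."""
    m = (math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon
    return math.ceil(m)

# e.g. 10^6 candidate models, 1% error tolerance, 99% confidence:
print(pac_sample_bound(10**6, 0.01, 0.01))
```

The bound grows only logarithmically in the number of candidate models, which is what makes rigorous statistical guarantees for symbolically represented model classes feasible at all.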
Main supervisor: Ernst-Rüdiger Olderog
Abstract. This PhD project aims at increasing the level of automation in the design of controllers for safe traffic manoeuvres on various types of roads. It will be based on spatio-temporal reasoning, abstracting from the continuous dynamics. Specifications will be given as formulas of a spatio-temporal logic. Inspiration will be drawn from controller synthesis in discrete event systems and from bounded real-time model checking.
This PhD project will pursue the synthesis of car controllers for traffic manoeuvres with the aim of (1) maintaining safety (no collisions) and (2) achieving goals such as overtaking within a given time interval if possible (time-bounded liveness). To reduce the complexity of the synthesis, we shall exploit the separation of spatial reasoning from the car dynamics. The starting point will be an approach by Larsen et al. (2015) and the work by Bochmann et al. (2015), where algorithms are sketched but automation is still missing. As specifications for which the controllers are synthesised, we start from the EMLSL formulas of S. Linker (2015).
This research is complementary to PhD project WD1 in that our approach emphasises the decomposition of spatial and dynamic reasoning, whereas WD1 emphasises the need to have a quality measure for strategies even when no winning strategies exist (and then analyses conditions under various levels of informedness for synthesising these).
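The game-theoretic core of such a synthesis task can be sketched, on a finite abstraction, as the standard greatest-fixpoint computation of the states from which the controller can maintain safety forever. The encoding below is a minimal illustration under assumed turn-based semantics, not the algorithm planned in the project.

```python
def safe_controllable(states, controller_moves, env_moves, safe):
    """Greatest-fixpoint solution of a turn-based safety game.
    controller_moves / env_moves map each state to its successor list;
    a state belongs to the controller iff it appears in controller_moves.
    A state survives an iteration if it is safe and
      - controller states: SOME move stays inside the current set,
      - environment states: ALL moves stay inside the current set."""
    W = {s for s in states if s in safe}
    changed = True
    while changed:
        changed = False
        for s in list(W):
            if s in controller_moves:          # controller to move
                ok = any(t in W for t in controller_moves[s])
            else:                              # environment to move
                ok = all(t in W for t in env_moves[s])
            if not ok:
                W.discard(s)
                changed = True
    return W

# Tiny hypothetical abstraction: state 3 is a collision state; from state 2
# the environment can force the collision, so only {0, 1} are controllable.
cm = {0: [1, 2], 3: [3]}
em = {1: [0], 2: [3]}
print(safe_controllable({0, 1, 2, 3}, cm, em, {0, 1, 2}))
```

A winning controller can then be read off from the fixpoint: in each controller state, pick any move that stays inside the computed set.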
Main supervisor: Oliver Theel
Abstract. Self-stabilization and region adherence are fault tolerance concepts for building extremely dependable distributed systems: whereas self-stabilization eventually recovers a failed system, region adherence postpones total system failure. Region adherence is a concept that has recently been introduced by the research group of the main supervisor. In the scope of the PhD project, we are interested in analyzing the impact of different fault models on the realization of region-adherent and self-stabilizing systems. We are particularly interested in identifying fault models that "lie in the intersection," i.e., fault models under which systems that are region-adherent and, at the same time, self-stabilizing can be realized, and in determining the resulting costs. Knowing this class of fault models will answer the research question of which contexts are necessary for the deployment of self-stabilizing, region-adherent systems. On the way to this goal, additional interesting research questions arise, such as: (1) How can region-adherent systems be designed? (2) How can distributed algorithms be combined such that they result in region-adherent systems? (3) How can a region-adherent system be refined such that it either operates more cost-efficiently or with improved reliability? In this sketched research area, a potential candidate may define his/her own research focus.
This PhD project will evaluate the ramifications of different fault models on the realizability and costs of distributed algorithms with respect to region adherence, self-stabilization, and, in particular, their co-occurrence. In self-stabilization the impact of specific fault models is often not considered, i.e., faults are unconstrained and may modify a system's state arbitrarily. For region adherence the fault model is of major importance: e.g., less constrained fault models render a system less region-adherent. We plan to investigate the following research questions: (1) How do different classes of fault models affect the cost of self-stabilization? (2) For which characteristics of applications can self-stabilizing, region-adherent algorithms be constructed? (3) How can knowledge of the fault model be exploited when designing self-stabilizing, region-adherent distributed algorithms? (4) How can we identify a sweet spot between region adherence and self-stabilization for a particular application?
Main supervisor: Martin Fränzle
Abstract. Functional architectures of cyber-physical systems, like autonomous cars, increasingly comprise components that are generated by training and machine learning rather than by more traditional engineering approaches. The validation and functional verification of such systems, as necessary in safety-critical application domains, poses various unsolved challenges. The three most prominent ones are:
1. Machine learning is particularly attractive for system functions lacking a detailed functional specification, like object classification tasks. While it is easy to formulate at a general level what an object classifier ought to do, a detailed functional description is currently impossible to achieve, as it would require exact definitions of vague terms like "an image showing a human". The specification to be used in a component verification thus remains imprecise.
2. While most forms of machine learning can be expressed in terms of an optimization problem, the AI components' optimization goals are hitherto hardly aligned with the safety goals pertinent to an overall system architecture in a safety-critical application. As an example, again take an object classification task within an automatic driving function. Its generally used training goal of minimizing the frequency of misclassification unfortunately is agnostic to error propagation through the overall driving function, and thus is only vaguely related to overall safety.
3. Commonly used computational structures underlying machine learning, like deep neural networks, lack scalable automatic verification support.
The objective of the proposed PhD project is to address these issues.
The PhD project will build on the technological lead Oldenburg has acquired in both contract-based methods for system design and constraint solving in rich fragments of arithmetic paired with complex Boolean structure. While the former is a technology permitting the rigorous derivation of a functional architecture and corresponding functional specifications for components from safety requirements for an overall system, the latter provides an approach to the automatic verification of neural networks. Depending on the preferences of the candidate, the project can focus on either of these two aspects. In one instance, it would address the open problems 1 and 2 mentioned above by investigating the error propagation (or, vice versa, masking) properties of a concrete functional architecture used in automated driving in order to derive component-level requirements for an AI component involved, thus systematizing its training by use of a safety-oriented loss metric. In the other instance, it would lift Oldenburg's class-leading arithmetic constraint solving algorithm iSAT (Fränzle et al., 2007-) to a dedicated solver for the automatic verification of formal properties of the input-output function of a trained neural network.
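To give a feel for the second aspect: sound, though incomplete, bounds on a trained network's input-output function can already be obtained by simple interval bound propagation. The sketch below is a toy stand-in for a dedicated solver such as the envisaged iSAT extension, which would decide much richer properties.

```python
def ibp_relu_net(layers, lo, hi):
    """Interval bound propagation through a feed-forward ReLU network.
    'layers' is a list of (W, b) pairs with W[i][j] = weight from input j
    to neuron i; ReLU is applied on all but the last layer. Returns sound
    element-wise output bounds valid for ALL inputs in the box [lo, hi]
    (a cheap, incomplete verification method: bounds may be loose)."""
    for k, (W, b) in enumerate(layers):
        new_lo, new_hi = [], []
        for i, row in enumerate(W):
            l = h = b[i]
            for j, w in enumerate(row):
                if w >= 0:
                    l += w * lo[j]; h += w * hi[j]
                else:
                    l += w * hi[j]; h += w * lo[j]
            new_lo.append(l); new_hi.append(h)
        if k < len(layers) - 1:            # ReLU on hidden layers only
            new_lo = [max(0.0, v) for v in new_lo]
            new_hi = [max(0.0, v) for v in new_hi]
        lo, hi = new_lo, new_hi
    return lo, hi

# Hypothetical 2-input network; verify e.g. "output never exceeds 3 on the
# input box [0,1] x [0,1]" by checking the computed upper bound.
layers = [([[1.0, -1.0]], [0.0]), ([[2.0]], [1.0])]
print(ibp_relu_net(layers, [0.0, 0.0], [1.0, 1.0]))
```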
Main supervisor: Martin Fränzle
Abstract. Delays in feedback loops arise naturally in modern forms of embedded digital control, like networked control systems or control schemes employing large networks of small low-power sensor nodes (a.k.a. "smart dust") for data harvesting. In such applications, the communication delay in the feedback loop can become significant. In continuous control it is well known that such delays can prompt oscillations and may thus cause a deterioration of control performance, invalidating both stability and safety properties that hold on delay-free abstractions of the feedback loop. This insight prompted the development of the model of delay differential equations (DDEs) as early as the 1960s, and, sparked by renewed industrial interest, the last three years have seen the development of the first automatic verification methods for dynamical properties of DDEs.
Surprisingly, although digital control schemes in practice are often multimodal, i.e., hybrid discrete-continuous, these developments for continuous control and DDEs have not yet been matched by corresponding developments for hybrid-state control. Neither an established notion of delayed hybrid systems nor corresponding verification methods exist. This PhD project shall address these issues.
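The destabilizing effect of feedback delay can be reproduced in a few lines of simulation. The following Euler scheme for the classical scalar DDE x'(t) = -k x(t - tau) is a toy illustration: for tau = 0 the loop decays monotonically, while for k*tau > pi/2 the same gain produces growing oscillation.

```python
def simulate_delayed_feedback(k, tau, t_end, dt=0.001):
    """Forward-Euler simulation of x'(t) = -k * x(t - tau) with the
    constant history segment x(t) = 1 for t <= 0 (DDE initial data are
    functions, not points). Returns the sampled trajectory."""
    n_delay = round(tau / dt)
    xs = [1.0]
    for i in range(round(t_end / dt)):
        # read the delayed value, falling back to the history segment
        delayed = xs[i - n_delay] if i >= n_delay else 1.0
        xs.append(xs[-1] - dt * k * delayed)
    return xs

# k = 1, tau = 2 gives k*tau = 2 > pi/2: the loop overshoots its
# equilibrium (dipping below -1) and oscillates instead of decaying.
oscillating = simulate_delayed_feedback(1.0, 2.0, 10.0)
print(min(oscillating))
```

Note how the history segment, not an initial point, determines the evolution; this function-valued state is exactly what complicates the discrete-continuous interface discussed below.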
The PhD project shall first establish a formal semantics, i.e., a mathematical model of hybrid-state dynamics subject to delay. While for the continuous flows this can easily be based on DDEs, it gets more intricate at the discrete-continuous interface. Both a reasonable notion of delayed triggering of discrete transitions, where the trigger condition may hold for less time than the triggering delay, and a notion of assignments to continuous quantities entering the essentially function-valued rather than scalar-valued state space of a DDE have to be defined. Based on this model, verification rules for dynamic properties of delayed hybrid systems shall be established.
On the practical side, the model shall be instantiated on and the verification rules applied to the case-study of string stability in automated platoon driving.