Three intertwined research tracks will establish and sustain the interdisciplinary character of the lab’s strategic research objectives:
This project will develop novel modeling and development methods to (1) master the complexity of modern embedded software and (2) eliminate bugs during development by adopting high-assurance modeling and verification methods. Logic and set theory will form the basis for achieving verified software with precise specifications and zero bugs. A novelty of the project is its focus on extra-functional properties, including maintainability and security. The key idea is to find adequate abstraction and refinement techniques that ensure and maintain these properties throughout a product’s life-cycle.
In order to master this complexity, we will research new combinations of design-time verification and runtime verification. For design-time verification, we will rely on model checking and theorem proving; for run-time verification and fault tolerance, we will investigate model-based testing and reasoning (including test-case generation and diagnosis) as well as shielding.
We aim for a systematic development process in which high-level requirements as well as assumptions about the environment are captured in abstract system-level specifications. These system models shall be refined in several steps towards design models, each step adding lower-level details. Formal verification shall ensure that lower-level designs correctly implement the higher-level system properties. The method shall be applicable in practice and scalable; therefore, we will not aim for fully verified code, but rely on run-time verification techniques at the implementation level. Model-based testing shall systematically generate test cases from verified models with coverage guarantees. Model-based diagnosis can help to isolate detected issues. In addition, active shields shall protect against violations of essential properties.
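To make the test-generation step concrete, the following minimal Python sketch derives test cases with transition coverage from a toy finite-state model; the traffic-light model, its inputs, and the coverage criterion are illustrative assumptions, not part of the project description.

```python
# Minimal sketch of model-based test-case generation with transition coverage.
# The traffic-light model and its transitions are purely illustrative.
from collections import deque

# Hypothetical design model: a finite state machine given as
# {state: {input: next_state}}.
MODEL = {
    "red":    {"go": "green"},
    "green":  {"caution": "yellow"},
    "yellow": {"stop": "red"},
}

def shortest_path(model, start, goal):
    """Breadth-first search for an input sequence driving the model from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for inp, nxt in model[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    raise ValueError(f"{goal} unreachable from {start}")

def generate_tests(model, initial):
    """Return input sequences that together cover every transition once."""
    tests = []
    for src, edges in model.items():
        for inp in edges:
            prefix = shortest_path(model, initial, src)  # reach the transition's source state
            tests.append(prefix + [inp])                 # then fire the transition
    return tests

if __name__ == "__main__":
    for case in generate_tests(MODEL, "red"):
        print(case)
```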
Artificial neural networks (ANNs) have been instrumental in solving a range of hard problems in AI, e.g., the ancient game of Go, image classification, and natural language processing. However, major concerns have been raised about the suitability of these techniques for safety- and security-critical systems, where faulty behavior carries the risk of endangering human lives or causing financial damage. Reinforcement learning can be understood as a trial-and-error approach. By interacting with the environment, the agent implicitly learns about the environmental dynamics and infers the value, in terms of long-term reward, of visiting a certain state. Hence, in order to learn a working policy, the agent is required to explore the unknown to a certain degree. In reinforcement-learning terms: with small probability, the agent picks a non-greedy (random) action over the greedy (optimal) one, as sketched below. In safety-critical applications, this exploration can lead to disaster.
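The exploration mechanism alluded to above can be illustrated with the common epsilon-greedy selection rule; the following minimal Python sketch uses placeholder Q-values and is not tied to any particular algorithm of the project.

```python
# Minimal sketch of epsilon-greedy action selection in reinforcement learning.
# With probability epsilon the agent explores (random action); otherwise it
# exploits the greedy action. The Q-values below are placeholders.
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick an action index: random with probability epsilon, greedy otherwise."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # exploratory (non-greedy) action
    return max(range(len(q_values)), key=q_values.__getitem__)    # greedy action

# Example: in a safety-critical setting, the random branch may select an
# action whose consequences the agent has never observed.
q = [1.2, 0.4, -3.0]   # hypothetical action values for one state
print("chosen action:", epsilon_greedy(q, epsilon=0.05))
```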
Another safety concern relates to the environmental dynamics. Although picking a greedy action might lead to a valid condition in the next state s′, it still might end in disaster if the dynamics of the environment requires considering a longer sequence of state-action pairs in the future. This is further aggravated by the fact that the environmental state transitions are in general stochastic; hence the next state s′ is sampled according to a conditional distribution p(s′|a, s). Unfortunately, this distribution is unknown in many cases and can also vary over time. Moreover, environmental states might not be directly observable, so the agent operates on a feature representation instead. This raises the question of state observability, which is not fully given for many problems.
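For reference, the dependence of the long-term reward on the unknown transition distribution p(s′|a, s) can be made explicit in the standard discounted formulation below; the policy π, reward r, and discount factor γ are standard notation assumed here, not fixed by the project text.

```latex
% Expected long-term reward (action value) under stochastic dynamics p(s'|a,s),
% in the standard discounted formulation with policy \pi, reward r, and
% discount factor \gamma (notation assumed for illustration).
\[
  Q^{\pi}(s,a)
  = \mathbb{E}_{s' \sim p(\cdot \mid a, s)}
    \Big[ r(s,a,s') + \gamma \, \mathbb{E}_{a' \sim \pi(\cdot \mid s')} \big[ Q^{\pi}(s',a') \big] \Big]
\]
```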
This project will research the application of formal methods based on logic and mathematics to the testing and verification of artificial intelligence components (FM4AI). The aim is to increase the dependability of machine-learned systems, including their safety and robustness. The idea is to combine more robust machine-learning techniques with fault-tolerant mechanisms (run-time enforcement) that protect against unintended AI behavior.
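As a minimal illustration of run-time enforcement, the Python sketch below wraps a stand-in learned controller with a shield that overrides actions violating a simple safety property; the speed-limit property, the controller, and the fallback action are hypothetical.

```python
# Minimal sketch of a run-time enforcement shield wrapping a learned controller.
# The safety property (speed limit) and the fallback action are illustrative only.
from dataclasses import dataclass

@dataclass
class State:
    speed: float          # current speed of the controlled system

SPEED_LIMIT = 30.0        # hypothetical essential safety property: speed <= limit

def learned_policy(state: State) -> float:
    """Stand-in for an AI controller; returns a proposed acceleration."""
    return 5.0            # placeholder decision

def shield(state: State, proposed_accel: float) -> float:
    """Override the proposed action if it would violate the safety property."""
    if state.speed + proposed_accel > SPEED_LIMIT:
        return max(0.0, SPEED_LIMIT - state.speed)   # safe fallback action
    return proposed_accel                            # proposed action is already safe

state = State(speed=28.0)
print("applied acceleration:", shield(state, learned_policy(state)))
```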
Furthermore, we will investigate the integration of AI techniques into formal methods. Here, our aim is to use automata-learning algorithms to learn formal models of a black-box system that can then be verified (AI4FM). With this technique we will try to mine models from ANNs in order to understand and verify what, and how well, an ANN has been trained.
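A minimal sketch of how a black-box system such as a trained ANN could be exposed to an automata-learning algorithm is shown below; the toy classifier, the alphabet, and the oracle interface are assumptions for illustration, and the actual learning loop (e.g., L*) is omitted.

```python
# Minimal sketch of exposing a black-box sequence classifier (e.g., an ANN)
# as a membership oracle for automata learning; the classifier is a toy
# stand-in and only the query interface is shown, not the learning loop.
ALPHABET = ["a", "b"]     # input alphabet the learning algorithm would enumerate over

def black_box_accepts(word: str) -> bool:
    """Stand-in for a trained sequence classifier: accepts words with an even number of 'a's."""
    return word.count("a") % 2 == 0

class MembershipOracle:
    """Answers the membership queries an automata-learning algorithm would pose."""
    def __init__(self, system):
        self.system = system
        self.queries = 0

    def query(self, word: str) -> bool:
        self.queries += 1
        return self.system(word)

oracle = MembershipOracle(black_box_accepts)
for w in ["", "a", "ab", "aab"]:
    print(w or "<empty>", "->", oracle.query(w))
print("membership queries asked:", oracle.queries)
```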
Hence, in this project we will exploit the advantages of two research areas, AI and Formal Methods, to deliver more dependable software at lower costs. Here, AI shall lead to a higher degree of automation, while formal methods will provide strong guarantees.
Robustness is one of the key aspects of Dependable Systems and will be the main topic of this project. It is of particular interest in the context of neural networks, which have been shown to be powerful assets in the design of embedded systems. That is, with neural networks (NNs) we can move much of the computational complexity to design time (structure and learning), while the computations at runtime are very fast. As we will discuss, NNs are, however, prone to robustness issues in several directions. In our project’s three work packages we will thus investigate three major topics in this context.
For AI applications at the edge, low-power solutions are often necessary. To that end, specialized so-called “neuromorphic” chips have been designed that implement power-efficient versions of deep learning systems. These systems typically rely on neural networks implemented in analog hardware or on spiking neural networks (SNNs). SNNs are inspired by biological neuronal networks and use short voltage pulses (spikes) for communication instead of analog values. Recent work has shown that SNNs can be trained efficiently with gradient descent. In safety-critical applications, the robustness of these systems is important. Here, robustness may refer to general robustness against distorted inputs (e.g., parts of an input image are occluded), or to robustness against adversarial attacks (i.e., inputs that are specially designed to fool the network). It is well known that deep ANNs are vulnerable to adversarial attacks. Addressing this concern, studies indicate that SNNs can be more robust to crafted adversarial examples and to distorted inputs in reinforcement learning tasks. Similarly, sparsity and model compactness have been found to exhibit intriguing properties that relate to the robustness of deep learning mechanisms.
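As a minimal illustration of the spike-based computation underlying SNNs, the following Python sketch simulates a single leaky integrate-and-fire neuron; the leak, threshold, and input current are illustrative parameters, not values used in the project.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of
# many SNN models; all parameters (leak, threshold, input current) are illustrative.
import numpy as np

def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Integrate the input over time, leak the membrane potential, and emit spikes."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i              # leaky integration of the input
        if v >= threshold:            # threshold crossing -> spike
            spikes.append(1)
            v = 0.0                   # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.4, size=50)   # noisy input current
print("spike train:", simulate_lif(current))
```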
Another aspect of robustness relates to perturbations caused by inaccuracies in the evaluation at inference time. For example, this is the case when the NN is implemented with analog circuitry, where the reliability of the operations suffers from systematic and stochastic errors, potentially varying from device to device, over time, with temperature, etc. However, this is not the only case in which an approximate implementation of a NN might be interesting. For example, on a low-resource device it might be convenient to use low-precision floating-point operations, whereas the network might have been trained in a high-precision setting. The question then arises whether it is possible to train and/or design NNs in a way such that they are more robust against this type of perturbation.
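The following minimal Python sketch illustrates how such inference-time perturbations could be probed: the weights of a toy, high-precision layer are quantized or perturbed with multiplicative noise, and the resulting output deviation is measured; the network, data, and noise model are assumptions for illustration.

```python
# Minimal sketch of probing a network's robustness to inference-time
# perturbations: weights trained at high precision are quantized or perturbed
# with noise, and the output deviation is measured. Network and data are toy.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))            # "trained" high-precision weights (toy)
x = rng.normal(size=(100, 16))          # sample inputs

def forward(weights):
    return np.tanh(x @ weights)         # a single toy layer

baseline = forward(W)

# Low-precision evaluation: round weights to a coarse fixed-point grid.
W_quant = np.round(W * 16) / 16
# Analog-style evaluation: multiplicative stochastic error on each weight.
W_noisy = W * (1 + rng.normal(scale=0.05, size=W.shape))

for name, Wp in [("quantized", W_quant), ("noisy", W_noisy)]:
    deviation = np.abs(forward(Wp) - baseline).mean()
    print(f"{name} weights: mean output deviation = {deviation:.4f}")
```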