Promotor: prof.dr. J.C. van de Pol (UT)
Date: 9 May
Our modern society relies increasingly on the correct functioning of digital systems. Guaranteeing that these systems actually behave according to their specification is not a trivial task, yet it is essential for mission-critical systems like auto-pilots, (nuclear) power-plant controllers and your car’s ABS.
The highest degree of certainty about a system’s correctness can be obtained via mathematical proof: formally describing the system’s behavior and then analyzing it. Especially the latter step is tedious and requires the creativity of a mathematician to demonstrate that certain properties are preserved under the strict mathematical rule system. With the invention of “model checking”, this step became automated, by letting a computer exhaustively explore the behavior of the system.
However, the size of the systems that can be “model checked” is severely limited by the available computational resources. This limit is caused by the so-called state explosion, a consequence of the fact that a machine can only perform small mechanized computations and lacks the creativity to make generalizing (thinking) steps. The goal of the current thesis is therefore to enable the full use of the computational power of modern multi-core computers for model checking. The parallel model checking procedures that we present utilize all available processor cores and obtain a speedup proportional to the number of cores, i.e. they are “scalable”.
The current thesis achieves efficient parallelization of a broad set of model checking problems in three steps, each described in one part of the thesis:
First, we adapt lockless hash tables for multi-core, explicit-state reachability, the underlying search method that realizes the exhaustive exploration of the system’s behavior. With a concurrent tree data structure we realize state compression and reduce memory requirements significantly. Incremental updates to this tree ensure performance and scalability similar to the lockless hash table, while the combination with a compact hash table yields small compressed sizes of around 4 bytes per state, even when storing more than 10 billion states. Empirical evidence shows that the compression rates most often lie within 110% of this optimum.
Second, we devise parallel nested depth-first search algorithms to support model checking of LTL properties in linear time. Building on the multi-core reachability, we let worker threads progress semi-independently through the search space. This swarm-based technique keeps communication costs low through the use of optimistic, yet possibly redundant, work scheduling. It could therefore become even more important in future multi-core systems, where communication costs rise with the increasing steepness of memory hierarchies. Experiments on current hardware already demonstrate little redundancy and good scalability.
Third, to support the verification of real-time systems as well, we extend multi-core reachability and LTL checking to the domain of timed automata. We develop a lockless multimap to record time-abstracted states, and present algorithms that handle coarse subsumption abstraction during LTL verification, enabling the solution of larger problem instances. Scalability, memory compression and performance are all maintained in the timed setting, and experiments show great gains with respect to the state-of-the-art timed model checker UPPAAL.
The above techniques were all implemented in the model checking toolset LTSmin, which is language-independent, allowing a direct comparison to other model checkers. We present an experimental comparison with the state-of-the-art explicit-state model checkers SPIN and DiVinE. Both implement multi-core algorithms, while DiVinE also heavily focuses on distributed verification. These experiments show that our proposed techniques offer significant improvements in terms of scalability, absolute performance and memory usage.
Current trends and future predictions tell us that the number of available processing cores grows exponentially over time (a corollary of Moore’s Law). Hence, our results stand to gain from this trend. Whether our proposed methods will withstand the ravages of time remains to be seen, but so far the speedup of our algorithms has kept up with the 3-fold increase in cores that we have witnessed during this 4-year project.