Tom van Dijk
Promotor: Prof.dr. J.C. van de Pol (UT)
Date: 13 July 2016, 16:30
Decision diagrams are fundamental in computer science. They are extensively used in various fields, such as symbolic model checking, logic synthesis, fault tree analysis, test generation, data mining, Bayesian network analysis and game theory. Binary decision diagrams, the most common type of decision diagram, are often used to represent Boolean functions, which lie at the core of computer science.
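To make this concrete, the following is a minimal illustrative sketch of a binary decision diagram (hypothetical names and representation; not how Sylvan stores nodes): a node tests one variable and branches to a "low" child (variable is false) or a "high" child (variable is true), and evaluating the represented Boolean function means following edges down to a leaf.

```python
# Minimal illustrative binary decision diagram (hypothetical representation).
# A node tests one variable; the leaves 0 and 1 are the function's results.

class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def evaluate(node, assignment):
    # Follow low/high edges according to the assignment until a leaf.
    while isinstance(node, Node):
        node = node.high if assignment[node.var] else node.low
    return node

# The Boolean function f(x, y) = x AND y as a two-node diagram.
y_node = Node("y", 0, 1)
f = Node("x", 0, y_node)

assert evaluate(f, {"x": True, "y": True}) == 1
assert evaluate(f, {"x": True, "y": False}) == 0
assert evaluate(f, {"x": False, "y": True}) == 0
```

In larger diagrams, equal subfunctions are shared between many parents, which is what makes the representation compact.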
A particular application that extensively uses decision diagrams is symbolic model checking. Model checking studies the properties of models of, for example, software, hardware, communication protocols and automated systems. Typical properties are whether a program is implemented correctly, what the failure rate of a system is, or how long the average person has to wait at a traffic light. Models typically consist of states and transitions: the system is in a certain state (for example, all lights are red, or some lights are green) and can transition from one state to another state.
A major challenge in model checking is that the computation time and the amount of memory required to compute interesting properties typically increase exponentially when the size of the models increases, when models are combined, or when we want to check more complex properties. One method to tackle this fundamental challenge is to consider sets of states instead of individual states, and to use Boolean functions, represented by binary decision diagrams, to encode these sets of states. Hence, symbolic model checking often consists mostly of operations on binary decision diagrams.
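The idea of encoding a set of states as a Boolean function can be sketched as follows (illustrative only; the `char_fn` helper is hypothetical, and a real symbolic model checker stores this function as a binary decision diagram rather than as a closure):

```python
# A set of states is represented by its characteristic function: a Boolean
# function over the state bits that is true exactly for states in the set.

def char_fn(states):
    frozen = {tuple(s) for s in states}
    return lambda state: tuple(state) in frozen

# Three states of a system with two Boolean state variables.
reachable = char_fn([(0, 0), (0, 1), (1, 1)])

assert reachable((0, 1)) is True   # in the set
assert reachable((1, 0)) is False  # not in the set
```

When the characteristic function is a decision diagram, set operations such as union and intersection become the diagram operations OR and AND, so a huge set can be manipulated without ever enumerating its elements.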
A major goal in computing is to perform ever larger calculations and to improve their performance and efficiency. The processing power of computers increases according to Moore's law, but as physical constraints limit the opportunities for higher clock speeds, the increases in processing power of modern chips mostly come from parallel processing. Efficient parallel algorithms are thus required to make effective use of the processing power of modern chips. Hence, research and development of scalable parallel algorithms is a key aspect of computing in the 21st century.
The main contribution of this thesis is the reusable multi-core decision diagram library Sylvan. Sylvan implements parallelized operations on various types of decision diagrams, in particular binary decision diagrams, multi-terminal binary decision diagrams and list decision diagrams. A particular strength is that it can replace existing non-parallel decision diagram libraries, bringing the processing power of multi-core machines to otherwise sequential applications.
In this thesis, we study the design of Sylvan and its two main ingredients: the parallel hash tables that store the decision diagram nodes and the results of operations (the operation cache), and the parallel framework Lace that we developed, which executes tasks in parallel using a work-stealing algorithm for load balancing and a novel double-ended queue to store the tasks of each thread. Furthermore, we study how Sylvan performs in two applications: state space exploration and bisimulation minimisation.
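The purpose of the operation cache can be illustrated with a small memoized BDD conjunction. This is a sketch under simplified assumptions: a node is a plain tuple `(var, low, high)`, leaves are the integers 0 and 1, there is no unique node table, and the cache is an ordinary Python dict, whereas Sylvan uses concurrent hash tables shared by all worker threads.

```python
# Sketch of an operation cache: each (operation, arguments) pair is computed
# once and its result reused on later calls with the same arguments.

cache = {}

def bdd_and(a, b):
    # Terminal cases on the leaves.
    if a == 0 or b == 0:
        return 0
    if a == 1:
        return b
    if b == 1:
        return a
    key = ("and", a, b)
    if key in cache:                 # cache hit: reuse the earlier result
        return cache[key]
    var = min(a[0], b[0])            # split on the topmost variable
    def cofactor(n, value):          # restrict node n to var = value
        if n[0] == var:
            return n[2] if value else n[1]
        return n
    result = (var,
              bdd_and(cofactor(a, 0), cofactor(b, 0)),
              bdd_and(cofactor(a, 1), cofactor(b, 1)))
    cache[key] = result
    return result

x = (0, 0, 1)  # the function "x" (variable index 0)
y = (1, 0, 1)  # the function "y" (variable index 1)
assert bdd_and(x, y) == (0, 0, (1, 0, 1))  # the diagram for x AND y
```

Without the cache, shared subdiagrams would be recomputed over and over; with it, each distinct pair of arguments is computed only once, which is essential because decision diagrams share structure aggressively.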
We study state space exploration on a large benchmark set, using the existing model checking toolset LTSmin with Sylvan as a backend. We first use only the parallel operations offered by Sylvan in an otherwise sequential program, and obtain a parallel speedup. We then also parallelize the state space exploration algorithm itself, exploiting the partitioning of the transition relation in LTSmin, and obtain an improved parallel speedup.
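The exploration itself is a fixed-point computation: repeatedly apply the transition relation to the set of reachable states until nothing new is found. In the sketch below, plain Python sets stand in for the decision diagrams used by LTSmin and Sylvan, the partitioned transition relation appears as a list of groups, and all names are hypothetical.

```python
# Sketch of state space exploration as a fixed-point loop over a
# partitioned transition relation (explicit sets instead of diagrams).

def explore(initial, relation_groups):
    reachable = set(initial)
    frontier = set(initial)
    while frontier:
        successors = set()
        for rel in relation_groups:          # each group could run in parallel
            successors |= {t for (s, t) in rel if s in frontier}
        frontier = successors - reachable    # keep only newly found states
        reachable |= frontier
    return reachable

# Toy model: two transition groups over states 0..3.
g1 = {(0, 1), (1, 2)}
g2 = {(0, 2), (2, 3)}
assert explore({0}, [g1, g2]) == {0, 1, 2, 3}
```

Because the groups are independent, their successor computations can be issued as parallel tasks, which is the kind of algorithm-level parallelism the improved speedup comes from.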
Bisimulation minimisation computes the smallest model equivalent to a given input model according to some notion of equivalence, such as strong or branching bisimulation. For this application, we develop two new decision diagram algorithms for bisimulation minimisation and show that they significantly improve the efficiency of the minimisation, on top of the parallelism obtained by Sylvan.
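One common way to compute such a minimisation is partition refinement based on signatures: states belong to the same block of the partition exactly when they can reach the same blocks. The sketch below works on explicit sets and dictionaries with unlabeled transitions, whereas the thesis performs the corresponding steps with decision diagram operations; the function name is hypothetical.

```python
# Sketch of signature-based partition refinement: start with one block and
# split blocks until the partition is stable. A state's signature is the
# set of blocks it has a transition into.

def minimise(states, transitions):
    block = {s: 0 for s in states}          # initial partition: one block
    while True:
        signatures = {
            s: frozenset(block[t] for (u, t) in transitions if u == s)
            for s in states
        }
        # Give each distinct signature its own block number.
        numbering = {sig: i
                     for i, sig in enumerate(sorted(set(signatures.values()),
                                                    key=sorted))}
        new_block = {s: numbering[signatures[s]] for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block                # stable: no block was split
        block = new_block

# States 0 and 2 both step into the same block; 1 and 3 are deadlocked.
blocks = minimise([0, 1, 2, 3], {(0, 1), (2, 1)})
assert blocks[0] == blocks[2] and blocks[1] == blocks[3]
```

Symbolically, the signature computation becomes a relational product on decision diagrams, and the renumbering of blocks is exactly where specialised diagram algorithms pay off.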
In the experimental results presented in this thesis, we obtain a good parallel speedup: up to 29x with 48 threads when relying purely on Sylvan for parallel execution, and up to 38x with 48 threads when the program that uses Sylvan is also parallelized. In a study comparing Sylvan with popular decision diagram implementations, Sylvan is competitive with the other implementations when running on a single core (3rd place of 6 implementations), and faster than the other implementations when more threads are used.