This page should give you an idea of the types of graduation projects you could do under my supervision. Note that the list on this page is by no means exhaustive: if you would like to do a graduation project on a specific algorithmic topic that interests you, you are welcome to come and see me to discuss whether I could be your advisor on that project (or whether I know another professor whom you should talk to).
Many of the projects are done in the form of an internship with the algorithms group here at the TU Eindhoven. However, looking at the list below, you will see that graduation projects may also be done at companies and/or abroad.
From an elevation model of a terrain one can compute visibility maps: which parts of the terrain can be seen from a given viewpoint? Such maps are useful, for example, when planning the construction of buildings that should have a good view over the terrain (such as watch towers) or rather the opposite (they should not spoil the view). Whether a point p is visible depends on the elevation profile of the terrain along the line from the viewpoint to p. This profile has to be estimated by interpolating between the elevation values of nearby grid points. In many cases, however, we cannot afford to do this very precisely: because of the huge size of the data sets we have to strike a balance between accuracy and efficiency.
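To make this concrete, here is a minimal line-of-sight test in Python. It is only a sketch under simplifying assumptions: a small in-memory grid, bilinear interpolation, and a fixed sampling density along the sight line; the function names are my own, not from any particular GIS package.

```python
def bilinear(dem, x, y):
    """Elevation at fractional position (x, y), interpolated between the
    four surrounding grid points."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(dem) - 1)
    y1 = min(y0 + 1, len(dem[0]) - 1)
    fx, fy = x - x0, y - y0
    top = dem[x0][y0] * (1 - fx) + dem[x1][y0] * fx
    bottom = dem[x0][y1] * (1 - fx) + dem[x1][y1] * fx
    return top * (1 - fy) + bottom * fy

def visible(dem, view, p, eye_height=0.0, samples_per_cell=2):
    """True if p can be seen from view: no sampled point on the elevation
    profile rises above the straight sight line from view to p."""
    (vx, vy), (px, py) = view, p
    h_view = dem[vx][vy] + eye_height
    h_p = dem[px][py]
    n = max(2, int(samples_per_cell * max(abs(px - vx), abs(py - vy))))
    for i in range(1, n):
        t = i / n
        ground = bilinear(dem, vx + t * (px - vx), vy + t * (py - vy))
        sight = h_view + t * (h_p - h_view)  # sight-line height at parameter t
        if ground > sight:
            return False
    return True
```

The parameter samples_per_cell is exactly the accuracy/efficiency knob discussed above: more samples per grid cell means a more faithful profile but more work per visibility query.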
The goal of a graduation project could be to analyse and improve the state of the art in the accuracy/efficiency balance of visibility map algorithms. In fact, what is often needed is not the exact shape of the viewshed of a single point, but the sizes of the viewsheds of many points. In this graduation project, we could try to push the accuracy/efficiency balance further by exploiting the fact that viewsheds of nearby points are often similar. The project would be a combination of theoretical work and experiments.
Computer memory is not completely reliable. When running computations on large amounts of data, occasionally there will be a fault that corrupts a small amount of data. If full integrity of the data is not a primary concern, hardware or software solutions that detect and correct all errors may not be worth the expense. That means, however, that any algorithms operating on our data must be able to tolerate small amounts of corrupted data. For example, if some data gets corrupted while running a sorting algorithm, the algorithm should nevertheless manage to sort the remaining uncorrupted data (more or less) correctly.
In the literature, various models of faulty machines have been proposed, along with algorithms to cope with the faults. However, not all models are equally realistic or applicable in all situations. One should consider carefully how the design and analysis of proposed algorithms are affected by context-dependent and possibly unrealistic assumptions underlying the model. In a graduation project we could analyse and compare algorithms under different machine models to see how the choice of faulty-machine model affects the relative performance of the algorithms. The project would be a combination of literature study, experimental research, and/or development of theoretical algorithms.
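A toy experiment (my own illustration, not a model from the literature) shows why ordinary algorithms are fragile: if one stored key is silently overwritten while an insertion sort is still running, later binary searches compare against the corrupted entry and can misplace keys that were never corrupted themselves.

```python
import bisect

def count_inversions(seq):
    """Number of out-of-order pairs: a simple measure of how unsorted a
    sequence is."""
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

def faulty_insertion_sort(keys, fault_at, fault_value=-1):
    """Insert keys one by one into a sorted list by binary search.  After
    `fault_at` insertions, one stored key is silently overwritten with
    `fault_value` (the simulated memory fault)."""
    out = []
    for i, k in enumerate(keys):
        if i == fault_at:
            out[len(out) // 2] = fault_value  # the fault strikes here
        bisect.insort(out, k)
    return out

result = faulty_insertion_sort([10, 20, 30, 40, 50, 15, 25, 35, 45], fault_at=5)
survivors = [k for k in result if k != -1]
# count_inversions(survivors) is positive: an uncorrupted key got misplaced.
```

A resilient sorting algorithm would guarantee that the uncorrupted keys end up (nearly) in order regardless of when and where such faults strike.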
I am generally happy to supervise or co-supervise graduation projects that are partially or completely done in a research group located abroad and/or working in a completely different discipline such as, for example, cognitive psychology, geography and cartography, or law. Depending on your preferred topic, location or discipline, I may be able to help you with drafting a project proposal that can be accepted by the exam committee, or I may even be able to help you with concrete ideas for projects and/or contacts abroad. Please feel free to come and talk to me about what opportunities there might be.
Maps of public transportation networks (train, bus, metro) are usually schematized: the shapes of lines are simplified and geographic distances are often heavily distorted to maximize the number of stations that can be shown on a given sheet of paper or handheld device. Well-designed maps of this type clearly show how the different lines of the network are connected, and thus help the map user to quickly see how to avoid changing trains more often than necessary. However, due to the distortion, schematic maps can be quite misleading when it comes to travel times, and the fastest connection is sometimes hard to find.
Around a recent workshop on schematic mapping, several ideas were developed about how one could make fast connections easier to recognize on schematic maps. Some (but not all) of these ideas are described in a manuscript. The implementation and experimental evaluation of these ideas requires tackling interesting algorithmic challenges. Michel Cijsouw and Roel van Happen are attacking these challenges in a graduation project involving theoretical work and a proof-of-concept implementation.
Most computer scientists know Dijkstra's algorithm to compute the shortest path from one point to another in a graph. Unfortunately, if the graph is too large to fit in main memory and nodes and edges have to be read from disk during the computation, Dijkstra's algorithm accesses the disk in a way that can be very inefficient in the worst case. Solutions that are theoretically efficient with respect to disk access were known to be complicated or inefficient with respect to computation time. Wilco van Maanen investigated several simple alternatives and found that these solve the problem in practice, at least for graphs with a regular grid structure.
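For reference, here is the textbook algorithm the project starts from (a standard heap-based formulation, with the adjacency list as a plain dictionary). The comments point at the problem: in memory this is fine, but if the graph lived on disk, consecutive iterations could settle nodes stored far apart, causing up to one random disk access per edge.

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra with a binary heap.  Each iteration settles the
    closest remaining node and relaxes its outgoing edges.  When `adj`
    must be read from disk, the order in which nodes are settled bears no
    relation to where they are stored, hence the poor access pattern."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry; u was settled with a smaller distance
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```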
On maps of public transportation networks, metro lines are usually drawn with straight line segments. However, lines are often easier to follow when drawn as smooth curves. How can we draw clear and beautiful maps with smooth curves automatically? In his Master's project, Mathijs Miermans developed a new approach to do this with a force-directed method. The method starts from a drawing that is easy to obtain but is not so nice, and improves the map by iteratively applying forces between different parts of objects on the map that result in pushing and pulling the map into a better shape. Mathijs's results were presented at the 2014 Schematic Mapping Workshop in England.
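To give a feeling for the force-directed idea (this is my own minimal illustration, not the force model from Mathijs's project): a single "force" that pulls every interior vertex of a polyline toward the midpoint of its two neighbours already turns a jagged line into a smooth curve after a few iterations.

```python
def smooth(polyline, strength=0.5, iterations=50):
    """Iteratively pull every interior vertex toward the midpoint of its
    two neighbours, keeping the endpoints fixed.  Repeated application
    flattens sharp corners into a smooth curve."""
    pts = [list(p) for p in polyline]
    for _ in range(iterations):
        for i in range(1, len(pts) - 1):
            mx = (pts[i - 1][0] + pts[i + 1][0]) / 2
            my = (pts[i - 1][1] + pts[i + 1][1]) / 2
            pts[i][0] += strength * (mx - pts[i][0])
            pts[i][1] += strength * (my - pts[i][1])
    return [tuple(p) for p in pts]
```

A real map-drawing method balances many competing forces (curve smoothness, station spacing, avoiding overlaps), which is where the algorithmic challenges come in.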
Given a point q in a terrain, we would like to be able to compute the watershed of q, that is, the set of points in the terrain from which water flows to q, assuming that water always follows the direction of steepest descent. We assume that our terrains are specified by a triangulated network of vertices. The longitude and latitude of each vertex are known exactly, but the elevation of each vertex is an estimate with a certain error margin. This leads to uncertainty about which points lie in the watershed of q: some points can never lie in the watershed of q, some points will definitely lie in the watershed of q, and some points may lie in the watershed of q depending on the unknown exact elevations of the terrain vertices. Marlous Theunissen investigated the theoretical properties and practical performance of algorithms to compute the minimum and maximum extents of watersheds in this challenging setting.
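In the simplest setting, with exact elevations and a grid instead of a triangulated terrain, a watershed can be computed by tracing steepest-descent paths (a sketch of my own; the hard part of the project above is precisely that real elevations are uncertain and terrains are triangulated):

```python
def watershed(elev, q):
    """All grid cells whose steepest-descent path ends at cell q
    (4-neighbour grid, ties broken by scan order, elevations exact)."""
    rows, cols = len(elev), len(elev[0])

    def downhill(cell):
        r, s = cell
        best, best_h = None, elev[r][s]
        for dr, ds in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, ns = r + dr, s + ds
            if 0 <= nr < rows and 0 <= ns < cols and elev[nr][ns] < best_h:
                best, best_h = (nr, ns), elev[nr][ns]
        return best  # None at a local minimum

    basin = set()
    for r in range(rows):
        for s in range(cols):
            path, cell = [], (r, s)
            while cell is not None and cell != q and cell not in basin:
                path.append(cell)
                cell = downhill(cell)
            if cell == q or cell in basin:
                basin.update(path)  # the whole path drains to q
    basin.add(q)
    return basin
```

Because elevation strictly decreases along each traced path, the tracing always terminates; cells already known to drain to q serve as shortcuts for later paths.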
Nicky Gerritsen developed, implemented and tested algorithms to compute minimum spanning trees and shortest paths on large graphs that have a simple grid structure. His algorithms are optimized for efficient disk access and to exploit the parallel computing power of graphics processing units.
Marvin Raaijmakers did his graduation project in the research laboratories of Audi in Germany, while keeping in touch with me by weekly “meetings” using Skype. Marvin studied how a moving car can detect other cars that are overtaking or driving next to it. After his graduation project, Marvin continued to work for Audi to do scientific research and to publish about it.
In geographic information systems, digital terrain models are mostly based on a regular grid of squares. Ed Schouten investigated whether, for certain computations, higher precision could be obtained by using a grid of hexagons, and he developed techniques to store and traverse such grids. After his graduation project, Ed went to work for Google in München.
Space-filling curves are continuous surjective functions that map the unit interval [0,1] to a higher-dimensional region, such as the unit (hyper-)cube. Their inverse can be used to define a linear order on high-dimensional points; this order can then be used, for example, to decide how to store the points in a data structure in memory or on disk. Not all curves are equally effective at optimizing, for example, the disk access patterns of queries in such data structures. Simon Sasburg developed and implemented algorithms to compute quality measures of certain classes of curves that may help in assessing how effective they are. After his graduation project, Simon went to work for MagnaView.
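As an illustration, the widely used Hilbert curve can be inverted with a standard bit-twiddling routine (this is the classic formulation, shown here to make the "linear order on points" idea concrete):

```python
def hilbert_index(order, x, y):
    """Position of grid point (x, y) along a Hilbert curve filling the
    2**order by 2**order grid (standard bit-manipulation formulation)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant so the recursion lines up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sorting points by their Hilbert index gives a locality-preserving order:
points = [(3, 1), (0, 0), (1, 2), (2, 3)]
points.sort(key=lambda p: hilbert_index(2, *p))
```

Points that are close along the curve are close in the plane, which is why such orders tend to keep related data in the same disk blocks; how well different curves achieve this is exactly what the quality measures try to capture.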
Robert Leeuwestein did his project at Prodrive in Eindhoven. Robert developed and tested algorithms to optimize the paths followed by two consecutive solder machines that fix large electronic components on a circuit board—the bottleneck in the process of manufacturing these boards.
Quadtrees are data structures that can be used to store, for example, planar subdivisions. However, the traditional top-down algorithms to construct quadtrees are rather inefficient, especially when the planar subdivision is too big to fit in memory during the construction process. Joery Mens studied algorithms to build such trees efficiently bottom-up, using a few simple and efficient sorting and scanning passes.
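The key ingredient of such bottom-up approaches is the Z-order (Morton) code: sorting cells by the interleaved bits of their coordinates lays them out in exactly the order in which a depth-first quadtree traversal visits them, so the tree can be assembled from the sorted sequence in a linear scan. A minimal sketch of the code computation:

```python
def morton(x, y, bits=16):
    """Interleave the bits of x and y into a Z-order (Morton) code.
    Bit i of x lands at position 2*i, bit i of y at position 2*i + 1."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

# Sorting grid cells by Morton code groups each quadtree quadrant into a
# contiguous run, at every level of the tree at once:
cells = [(2, 3), (0, 1), (3, 0), (1, 1)]
cells.sort(key=lambda c: morton(*c))
```

Since sorting and scanning both have efficient external-memory implementations, this avoids the random accesses of top-down construction.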
Suppose a set of villages is built on a newly created artificial island, and you get to build a road network of a certain total length. The budget is not sufficient to build a direct road between each pair of villages, so typically, the shortest route from one village to another has to take a detour. Where should you build the roads, such that the maximum detour (distance along the road divided by distance as the crow flies) between any pair of villages is as small as possible? Peter Kooijmans, and, in a follow-up project, Marco Verstege, studied this problem for certain settings with a small number of villages, implementing clever search strategies to find the optimal solution among a huge number of combinatorially different candidate solutions. After his graduation project, Marco followed the postgraduate programme of the 3TU.School for Technological Design.
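Evaluating a single candidate network is straightforward; the hard part is searching the space of candidates. A brute-force evaluator for the maximum detour (my own sketch, only suitable for tiny instances with distinct village locations):

```python
import math

def max_detour(villages, roads):
    """Maximum over all pairs of villages of (distance along the network)
    divided by (distance as the crow flies).  Uses Floyd-Warshall, so it
    takes cubic time in the number of villages."""
    n = len(villages)
    INF = float('inf')
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j in roads:  # each road is a straight segment between two villages
        w = math.dist(villages[i], villages[j])
        d[i][j] = d[j][i] = min(d[i][j], w)
    for k in range(n):  # all-pairs shortest path distances
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return max(d[i][j] / math.dist(villages[i], villages[j])
               for i in range(n) for j in range(i + 1, n))
```

For example, connecting the four corners of a unit square by three sides gives a maximum detour of 3: the two ends of the open side are at straight-line distance 1 but network distance 3.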
Freek van Walderveen did his graduation project at the University of Aarhus in Denmark, while keeping in touch with me by weekly “meetings” using Skype. Oil companies use sonar to get an accurate picture of the ocean floor, which they need for the maintenance of oil pipes. Freek developed, implemented and tested algorithms that can automatically remove noise from huge clouds of sonar measurements of the ocean floor—something which, until then, was a very time-consuming job mostly done by hand. Freek published and presented his work at ACMGIS, one of the world's top conferences on computer science for geographic information systems. After his graduation project, Freek stayed in Aarhus and got a PhD there, and then continued to work for Scalgo.
Sjoerd Cranen did his graduation project at the University of Sheffield in England, while keeping in touch with me by weekly “meetings” using Skype. Sjoerd redesigned an experimental software package for speech recognition, so that new, previously infeasible, techniques for speech recognition in noisy circumstances could be implemented and tested. After his graduation project, Sjoerd became a PhD student at the TU Eindhoven.
Jeffrey Janssen developed, implemented and tested algorithms to compute the flow of water on digital elevation models which are given as huge grids of elevation samples. His algorithms turned out to be an order of magnitude faster than the state of the art at the time, reducing computation times of days to hours. After his graduation project, Jeffrey went to work for Realworld Systems.
Ummar Abbas implemented and tested the logarithmic priority-R-tree, a data structure that can store rectangles and points in the plane. Theoretically, the data structure was known to support insertions, deletions and queries efficiently, but nobody had ever implemented it. Ummar filled in the gaps in the published description of the data structure and did experiments to find out what the theoretical performance guarantees were really worth.