Master Projects

Are you looking for a Master’s thesis topic that

  • solves a challenging real-world problem,
  • allows you to become a specialist in the area of processes and information systems,
  • gives you cutting-edge knowledge to solve today’s problems regarding processes and their use in all kinds of organizations and companies?

The AIS group offers you thesis topics and supervision in a group

  • that has invented and developed cutting-edge technologies that have been applied in over 100 different companies and are now being used in various industrial tools,
  • that has set and contributed to various standards in the area of process-aware information systems,
  • that actively cooperates with many large companies in the area of process-aware information systems in the Netherlands and worldwide, as well as with universities and researchers from all over the world (we are currently running 17 projects with different partners),
  • that gives you clear and regular guidance through the challenges of a Master’s thesis project with the freedom to develop your own ideas, and
  • that, above all, is a fun and supportive group to work with.

Possible assignments

The AIS group conducts research, development, and experiments in the area of Process Analytics and Process Mining. You can join these endeavors by developing your own Process Analytics or Process Mining solution. We offer challenging assignments on industrially relevant topics, personal guidance in developing your own cutting-edge solution in concept and in realization, and evaluation and experiments on real-life data.

The following assignments are a selection of possible topics and are available at all times. If you have a question about a particular topic, please contact the lecturer offering the topic. If you have ideas for your own topic, please contact Dr. Dirk Fahland.

  • Event Data Management in Graph databases. Process mining assumes event data to be stored in an event log, which is technically either a relational table (attributes as columns) or a stream of events (attribute value pairs). These data formats limit the way process mining analysis can be done in various dimensions such as shared resources, shared data, or multiple interacting cases. The objective of this assignment is to develop concepts for storing, retrieving, and managing event data in graph databases such as Neo4j, to develop new process mining use cases enabled by this format, and to showcase their application in a proof-of-concept implementation. Contact: Dr. Dirk Fahland.
  • Performance pattern analysis. A recent visual analytics technique called the “Performance Spectrum” allows us to gain more fine-grained insights into performance behavior and changes over time. But the current “Performance Spectrum” is just a picture. The objective of this research project is to employ statistical techniques and machine learning techniques to identify patterns in the performance spectrum, and to contextualize the performance patterns in the control-flow model of the process in a proof-of-concept implementation. Contact: Dr. Dirk Fahland.
  • Efficient unsupervised event context detection for event log clustering, outlier detection, and pre-processing. We recently developed a technique to detect the context of events from an event log in an efficient way through sub-graph matching. This makes it possible to identify events and parts of event logs that are similar or different from each other, allowing us to cluster traces, detect outliers, and improve event log quality, as illustrated in this Demo video. As sub-graph matching is an NP-complete problem, the technique currently does not scale well (even with the existing heuristic). The objective of this assignment is to improve the efficiency of the matching algorithm and the quality of the results for better clustering and outlier detection on real-life event logs, demonstrated in a proof-of-concept implementation. Contact: Dr. Dirk Fahland.
  • Smart event log pre-processing. The quality of process mining results highly depends on the quality of the input data: noise, infrequent behaviors, log incompleteness, or many different variants undercut the assumptions of process discovery algorithms and lead to low-quality results. ProM provides numerous event log pre-processing and filtering options, but they require expert knowledge to understand when which pre-processing step is advised to reach a particular outcome. The objective of this assignment is to develop a comprehensive framework for event log pre-processing and a recommender that suggests adequate pre-processing steps to the user depending on the data properties and the analysis task. Ideas are to be realized in a proof-of-concept implementation. Contact: Dr. Dirk Fahland.
  • Log Data Anonymization. In the context of process mining, we are often confronted with companies willing to share their data if we can sufficiently anonymize it. However, to date, there are no well-defined plugins to do such anonymizations. Therefore, we are looking for a Master student who is willing to help us with this. Part of the project will be an in-depth investigation of existing anonymization techniques: which techniques are available, and what analysis properties do they preserve? More importantly, how difficult is it to de-anonymize the data? Another important part is the implementation of an anonymization framework in our toolset ProM. Good programming skills in Java are therefore important, no matter if you are a CSE or a Data Science student. For more information, contact prof. Boudewijn van Dongen.
  • Handling noise and/or duplicate tasks in Log Skeletons. In the Process Discovery Contest of 2017, the Log Skeletons were the only approach to classify all 200 traces correctly. However, to achieve this, some manual steps were required, like removing noisy traces and splitting duplicate tasks. Without these manual steps, only 194 traces would be classified correctly. If we could automatically remove the noisy traces, we could improve this to 196, and if we could automatically split duplicate tasks (like splitting task “a” into “a.0” and “a.1”), we could improve this to 197. For more information, contact Eric Verbeek.
  • Adding heuristics to the Block Layout. The Block Layout can be used to create a layout for a process graph. For this, it uses well-known Petri-net-based reduction rules to reduce the entire net into a single place. For nicely structured process graphs, this layout works quite well, but for more complex structured graphs the resulting layout needs to be improved. Either good heuristics are needed for deciding (in case we have a choice) which way to go, or new reduction rules that can cope with more complex structured graphs. For more information, contact Eric Verbeek.
  • N-out-of-M patterns in alignments. Aligning structured process models to event logs is a far from trivial task. In complex modelling languages, inclusive OR-split/join patterns play an important role and they are known to be notoriously difficult to align to event logs due to their large state-spaces.
    The known Petri net translations of OR-joins rely either on token coloring or on structured combinations of so-called tau transitions to route the tokens in the Petri net, mimicking the OR-join semantics. In this project, we investigate alternative translations of OR-joins/splits to Petri nets which are optimized for alignment computations. This means we are not looking for sound translations, but rather relaxed or easy sound ones. However, the close relation between the Petri net, its incidence matrix, and the heuristic function in alignments needs to be taken into consideration.
    Finally, we aim to generalize the classical OR-join (skip at most M-1 out of M options) to an N-out-of-M version, where the requirement is to execute N out of M submodels. Contact prof. Boudewijn van Dongen.
  • Generating non block-structured models and corresponding logs. For experimenting with process discovery and Petri nets, scientists often rely on experiments with artificial models and logs. More often than not, these models are block-structured, as it is easy to generate such models by simply building a random process tree and translating that into a Petri net. However, Petri nets allow for more intricate structures which may heavily affect the experimental results, i.e. the representational bias of block-structured models is too limited for state-of-the-art process mining technology.
    We are therefore looking for a tool to generate sound, but not block-structured, Petri nets and corresponding event logs with various characteristics in terms of deviations and decision point behaviour. Contact prof. Boudewijn van Dongen.
  • Petri net reduction rules for replay. Replaying event logs on Petri nets, either through token-replay or using alignments, is a complex task. Especially when models become larger and have more labels, the size of the models becomes a problem. In Petri net theory, many reduction rules exist for reducing Petri nets while retaining, for example, soundness of the model. Can we derive such a set of rules that are “replay-preserving”, i.e. can we make the problem of replaying logs on models simpler by doing a structural reduction on the input? Contact prof. Boudewijn van Dongen.
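To give a flavour of the kind of analysis several of these assignments involve, here is a minimal sketch of computing the “segments” that underlie a performance spectrum: pairs of directly-following activities with their start and end times. The toy event log and activity names are made up for illustration only.

```python
from collections import defaultdict

# Hypothetical toy event log: (case id, activity, timestamp in minutes).
events = [
    ("c1", "register", 0), ("c1", "check", 10), ("c1", "decide", 25),
    ("c2", "register", 5), ("c2", "check", 30), ("c2", "decide", 40),
]

# Group events by case and sort each trace by timestamp.
traces = defaultdict(list)
for case, act, ts in events:
    traces[case].append((ts, act))
for trace in traces.values():
    trace.sort()

# A "segment" is a pair of directly-following activities; for each
# segment we collect (start time, end time) observations over all cases.
segments = defaultdict(list)
for trace in traces.values():
    for (t1, a1), (t2, a2) in zip(trace, trace[1:]):
        segments[(a1, a2)].append((t1, t2))

# A first statistic over the spectrum: the mean duration per segment.
for seg, obs in sorted(segments.items()):
    mean = sum(t2 - t1 for t1, t2 in obs) / len(obs)
    print(seg, mean)
```

Pattern analysis, as in the assignment above, would then apply statistics or machine learning to these per-segment observations rather than just printing averages.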

Possible external assignments

In addition to the above assignments by the AIS group, we also offer supervision on graduation topics together with external partners.

Data-driven Product Design at Philips Healthcare (2 Master projects)

The study of user behaviour is an important part of the product design process. This is particularly difficult when dealing with products that require very complex user interaction. Therefore, obtaining as much product usage information as possible is needed, since it can reveal patterns in the user behaviour that indicate a misalignment between the use intended by product designers and the actual use. Most often, many assumptions made during design are based either on domain knowledge or on personal expertise. Traditional techniques such as interviews or reviewing customer complaints provide a restricted view of the behaviour of a small set of users.

Philips Healthcare is using data-driven techniques, in particular the analysis of event data of product usage, to improve both the design of products and the design process itself.

A prior Master project by the AIS group and Philips Healthcare showed first promising results. Philips Healthcare wants to continue research in this area with two new Master projects.


Performance-aware Conformance analysis for logistics systems (Vanderlande)

Vanderlande focuses on the optimization of its customers’ business processes and competitive positions. Through close cooperation, we strive for the improvement of our customers’ operational activities and the expansion of their logistical achievements using our Material Handling solutions. A key concern is to ensure that all materials flow “optimally” through the system. The objective of conformance checking in Material Handling Systems is to identify where material flows, such as a group of bags on their way to a flight, deviate from the ideal routing and processing - leading to a negative impact on processing times for the bags, or even suboptimal system performance. The objective of this project is to develop a scalable, performance-aware conformance analysis technique for logistics systems in the context of the joint Process Mining in Logistics research project.


Automatically Matching Requirements to Process Models: Determining the Impact of Change (ACCHA)

Target audience: Computer Science students with a data science/machine learning/NLP background.

Task description: The main task of this Master thesis is to develop and implement a technique that is able to automatically link textual requirements to model-based representations of business processes (so-called process models). By doing so, it will be possible to quantify the impact of a set of requirements (e.g. specifying a new piece of software) on the currently running business processes of an organization. The main challenge associated with this task is to successfully leverage Natural Language Processing techniques to deal with the rather short language fragments that are typically used in process models. Real-world data will be provided for both developing and testing the technique.
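To give an impression of the direction, here is a minimal sketch of linking a requirement to activity labels via bag-of-words cosine similarity. The requirement and labels are made up, and a real solution would use far richer NLP (stemming, stop words, embeddings) to cope with short labels.

```python
import math
import re
from collections import Counter

def tokens(text):
    # Lowercase word tokens as a bag of words; very crude on purpose.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors (Counters).
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical requirement and activity labels from a process model.
requirement = "The system shall verify the identity of the customer"
labels = ["register customer", "verify customer identity", "ship order"]

# Link the requirement to the most similar activity label.
best = max(labels, key=lambda l: cosine(tokens(requirement), tokens(l)))
print(best)  # → verify customer identity
```

The thesis challenge is precisely where such a naive matcher fails: process model labels are short, so overlap-based similarity quickly becomes unreliable.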


Business Process Mining and Modeling at Amsterdam municipality (4 positions)

Amsterdam is a dynamic metropolis with great ambitions: Creating an excellent urban climate of living, working and leisure, but also a decisive and effective government. The city is a 'living lab', where metropolitan tasks are both a challenge and an opportunity to develop and apply new insights, technologies and practices.

In this context, Amsterdam municipality is looking for 4 thesis interns (WO) Business Process Mining and Modeling who are able to gain insight into existing process equipment and systems in the municipal information chains. Dutch version, English version


Using Process Mining to find lead indicators of Quality Issues

Using process mining in combination with (more regular) data analysis to predict the chance of part failure during EUV assembly and testing.


Detecting root causes of complaints and investigating the continuation within the customer journey

In the Dutch health care system, health care insurance is obligatory for all residents. The government sets the basis package, and insurers compete based on price and service. Customer service is therefore very important for every health insurance company, especially in the fast-changing digital world. As a result, customer satisfaction is the most important KPI. Complaints have a highly negative effect on customer satisfaction, so it is crucial to prevent customers from having any complaints. Customers do not submit a complaint out of the blue: a complaint is a reaction to one or more occurrences in the past. The best way to understand the triggers, or root causes, is to understand the touchpoints prior to a complaint.

CZ aims for a method to prevent complaints (in order to increase customer satisfaction and lower churn). When taking the touchpoints of the customer journey into account, complaint root causes can be detected and CZ can prevent future complaints. Moreover, the touchpoints after the complaint give insights about an optimal follow up after the complaint.
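As a first illustration of touchpoint-based root-cause detection, here is a minimal sketch (with made-up journeys and touchpoint names) that counts which touchpoints occur shortly before a complaint:

```python
from collections import Counter

# Hypothetical customer journeys as ordered lists of touchpoints.
journeys = [
    ["website", "call", "invoice", "complaint"],
    ["app", "invoice", "complaint"],
    ["website", "chat"],
]

# Count which touchpoints occur in the window of 2 touchpoints
# directly before a complaint, as a first root-cause indicator.
window = 2
before = Counter()
for journey in journeys:
    for i, tp in enumerate(journey):
        if tp == "complaint":
            before.update(journey[max(0, i - window):i])

print(before.most_common())  # → [('invoice', 2), ('call', 1), ('app', 1)]
```

A real analysis would of course control for how often each touchpoint occurs overall, and also look at the touchpoints after the complaint to study the follow-up.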

Prior research development

Underlined works together with several companies like CZ, Aegon and SNS to build a generic framework in which all customer contacts are brought together as a unique dataset, which is further enriched by Underlined with relevant analyses that can be linked to customer events. Underlined co-created customer journey mining algorithms together with the TU/e (Bart Hompes and Joos Buijs from the Architecture of Information Systems group). This research showed that it is possible to distinguish the different journeys per customer. Moreover, in collaboration with the TiU econometrics department, a driver model has recently been developed to distinguish relevant drivers and to predict the NPS score.


Advanced Process Mining techniques in Practice (several Master projects with ProcessGold)

ProcessGold is a software supplier bringing together Process Mining and Business Intelligence, driven by highly skilled ICT entrepreneurs and backed by a wealth of experience. ProcessGold recently released a new Process Mining platform, the ProcessGold Enterprise Platform, that combines data extraction, process mining techniques, and visual analytics in order to produce dynamic visual reports which are easy to monitor and analyze for process stakeholders. These reports form the basis for deeper, fact-driven analysis and continuous process improvement projects.

In this context, ProcessGold is constantly offering graduation internships for investigating new techniques and methodologies in Process Mining and their application in a business context. A few example topics are given below - the specific graduation project and its scope will be further developed in mutual agreement.


Philips HUE Product Evolution Using Stream Mining of Customer Journey

Philips HUE is a connected personal lighting system. It is controlled by a range of apps and smart home devices.

To acquire Philips HUE, one starts with a starter kit that consists of a few lamps and a bridge. Subsequently, consumers decide to expand their system with additional lamps and/or physical sensors. About 50 lamps can be controlled in one system.

An example of a HUE system is shown in the figure below. It consists of a bridge, three color lamps and a dimmer switch.

Every consumer chooses a device to interact with the lights. By default, all consumers start with the HUE app or a third-party app. There are also options in the HUE app to create routines to control the lights automatically. For instance, one can create a wake-up routine so that the lights in the bedroom go on naturally in the morning. HUE can also be controlled by Philips physical sensors (motion sensor, tap, dimmer switch), voice control and other third-party smart home devices. Amazon Echo, Google Home, Eneco Toon and HomeKit are some of the smart home devices that can be linked to HUE. In addition, a traditional wall switch can also be used to control the lights. For further information, see the Philips Lighting website.

The master thesis focuses on developing a process mining model to understand the product evolution of consumers in their journey and to identify the paths that contribute to a high Customer Lifetime Value (CLV). Moreover, the thesis will also explore data mining techniques that can be applied to the outcome of the process mining model.


When Portfolio Management Meets Process Mining: Challenges and Opportunities

FLIGHTMAP is Bicore’s flagship software solution for portfolio management. Since its launch in 2010, a growing group of international clients, such as DAF, Océ, and Fokker, have implemented FLIGHTMAP. With this tooling, they can perform roadmapping, budget and resource planning, scenario analysis, planning and tracking, and more. More information about FLIGHTMAP is available online. The figure above shows a screenshot of the tool obtained after a portfolio analysis.


Data Science: Developing a Self-Standing Dynamic Reporting Tool

A huge amount of (transaction) data is generated on a daily basis in ASML’s Development and Engineering department. The data is scattered across different sources. The challenge is to extract data from the relevant sources and create a self-standing dynamic reporting tool (dashboard) demonstrating the performance of (Supplier Quality) Engineers at different granularity levels (Department, Section, Individual) based on a set of pre-defined KPIs.

Are you a Master student in Software Engineering (Data Science) with a passion for real-life data challenges? Then we are looking for you!


Real-Time Model Discovery of the Service Order Process Using Stream Process Mining

Kropman Installatietechniek is a Dutch company established in 1934 that has become one of the leading companies in the Dutch installation industry. With about 800 employees, 12 regional locations and an annual turnover of more than 100 million Euro, Kropman is an integral service provider with a multidisciplinary approach. Kropman is mainly active in office buildings, health care and industry. It offers design, construction and maintenance in the field of facility installations. Kropman also has a separate business for process installations and cleanrooms: Kropman Contamination Control. Maintenance (services) is a fast-growing business line. The order process is fully supported within an ERP environment. The service order (SO) process is not a trouble-free process: the process takes too long and deviates more often than necessary. The company aims at increasing the throughput of the SO process and decreasing the number of process deviations by applying process mining and data mining techniques.

To have a better overview of the process, Kropman is aiming at:

  • Exposing bottlenecks happening in the SO process
  • Detecting deviations from the intended model in real time, as they happen, using stream process mining techniques
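As a sketch of the streaming idea behind these goals, the following toy example incrementally maintains directly-follows counts over an event stream rather than over a finished log (case ids and activity names are made up):

```python
from collections import defaultdict

class StreamingDFG:
    """Incrementally maintain directly-follows counts from an event stream."""

    def __init__(self):
        self.last = {}                  # last activity seen per case
        self.counts = defaultdict(int)  # (a, b) -> frequency of "a then b"

    def observe(self, case, activity):
        # On each arriving event, update the directly-follows relation
        # for its case, then remember this activity as the case's last one.
        if case in self.last:
            self.counts[(self.last[case], activity)] += 1
        self.last[case] = activity

# Hypothetical stream of (case, activity) events for service orders.
stream = [("o1", "create"), ("o2", "create"), ("o1", "plan"),
          ("o1", "execute"), ("o2", "plan"), ("o2", "execute")]

dfg = StreamingDFG()
for case, act in stream:
    dfg.observe(case, act)

print(dict(dfg.counts))  # → {('create', 'plan'): 2, ('plan', 'execute'): 2}
```

Real stream process mining additionally has to forget old cases (bounded memory) and detect when the counts drift away from the intended model, which is where the deviation detection above comes in.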


Wrong Media Detection at Océ

Océ is a Netherlands-based company that develops, manufactures and sells printing and copying hardware and related software. Its cut-sheet production printers are used to produce millions of prints on a daily basis. Print jobs may involve printing on different paper stock, which is loaded in the various paper trays of the machine. Print operators are mainly busy with loading the printer with new paper and unloading the printed paper stacks. As this is a human task, errors are made in the process, especially in loading the paper trays with the correct paper stock.

Using the wrong media for a print job may lead to print quality issues and is most often unacceptable for the print buyer. For these reasons, a timely and automatic detection that the wrong media is loaded is needed to guarantee the overall production quality.

Your assignment is to use machine learning to determine whether the loaded paper media in a paper tray actually matches the medium specified on the printer. As input, a set of data can be used that is logged during the printer operation.


Process Oriented Query Language

Looking for a master project? Are you interested in topics like Process Mining, Process Querying and Data Extraction? Take a look at our proposals below. If you are interested, do not hesitate to contact Eduardo González or Hajo Reijers.


Do CHANGE project with Onmi Design

Onmi Design is currently working on the Do CHANGE project, funded in the Horizon 2020 program of the European Commission. In this project, they are developing a health ecosystem that provides real-time behaviour change coaching based on input from various sensors. The student would assist in the development of analysis algorithms and the integration of these algorithms in the system. For more information, please visit the Do CHANGE project website.


Procedure for Master Projects

  • Within the AIS group, we offer Master projects in several different areas and also on concrete topics. We then tailor the Master project assignments for each individual student, with that student’s skills in mind. To create an individual assignment, you have to find and commit to a supervisor who helps you in this process.
  • If you already know with which supervisor/on which topic you want to graduate, contact the potential supervisor of your choice directly, and as early as possible (at the end of the first year, after you have obtained 40 to 50 credits).
  • If you first want to get an overview of available projects and supervisors, please contact Dirk Fahland. He can direct you to a potential supervisor in your area of interest, and you can then talk with the potential supervisor about possible Master projects.
  • At some point, the student has to commit to a particular supervisor. Only then can an actual assignment be defined, either internal or external. In particular, external assignments must be discussed and defined together with the supervisor to ensure that the defined assignment fits the student, the company, and the study program.


Research Lines

Assignments for master projects typically fall in one of the three main research lines of AIS:


The research group distinguishes itself in the Information Systems discipline by its fundamental focus on modelling, understanding, analyzing, and improving processes. Essentially, every time two or more activities are performed to reach a certain goal, fundamental principles of processes apply. Processes take place on the level of individual actors, groups of actors, entire organizations, and networks of organizations. Processes, in one form or another, can be seen everywhere:

In traditional workflow management settings, the idea behind a process is that there is a single notion of a “case” flowing through a “process” according to a pre-defined route, i.e. a “process model”. However, the notion of processes and the principles behind them are much broader. In many situations, processes are not explicitly defined (there is no procedure for a customer clicking through a website) or are highly flexible (typically in knowledge intensive processes such as designing products or deciding about visa and immigration). Also, many processes have interactions with other processes: for example, there are many interlinked processes behind every patient in a hospital or behind the supply chain of any multi-national manufacturer. There is a growing concern within organizations, governments, and society as a whole that processes need to be governed properly and efficiently. The availability of large amounts of data on the one hand and a fundamental understanding of the process notion on the other shape the opportunity for researchers in our group to contribute to this societal challenge.

Background: Business Process Management

Within organizations, the management of processes has been an important topic since the introduction of conveyor belts in the early 1920s. Since 2000, business process management has been the research field focusing on agility in organizations and continuous (business) process improvement through a BPM life-cycle of designing, modelling, executing, monitoring and optimizing processes.

Lately, there has been growing attention in the field of business process management for embedding process analytics into (process-aware) information systems, i.e. BPM provides the context in which our analytics are developed.

Fundamental to the research group at the Eindhoven University of Technology is the choice of Petri nets as the language to precisely describe process dynamics, also in complex settings, at a foundational level. The choice of this language is what distinguishes our research group from more industrial-engineering-oriented information systems groups.

Process Analytics

One of the foundations of computer science today is data. The omnipresence of increasingly large volumes of data has become a key driver for many innovations and new research directions in computer science. Specifically in information systems, data - and the analytics developed on top of this data - have transformed the field from expert-driven to evidence-based, which in turn massively broadens the applicability of results to more and larger contexts. Many advanced process analysis tools and techniques developed in the AIS group over the last 15 years are available today in over 25 commercial packages.

The research in our group continues to expand outward from a “classical” situation of data with clear case notions in the context of explicitly structured processes to a broad, multi-faceted field, where processes are less structured or consist of many interacting artifacts and where case notions in the data become more fluid or form complex, multi-dimensional networks.

The figure above shows this research field of process analytics.

Impact and Societal Relevance

When selecting a Master project, it is advisable to consider the track record of the research group supervising the project. Therefore, we briefly discuss the impact and societal relevance of AIS’s research.

The work of the AIS group is world renowned, especially in the fields of

  1. the modeling and analysis of workflow processes (cf. workflow nets and the seminal soundness notion),
  2. workflow patterns (the DAPD paper on the workflow patterns is the most cited paper in the BPM domain and the Workflow Patterns web site is the most visited web site on workflow management over the last decade), and
  3. process mining (e.g., we established an international process mining community).

The impact of our work is reflected by the many citations of the publications of the AIS group. For example, Wil van der Aalst is the highest-ranked European computer scientist based on Google Scholar. His work has been cited more than 63,000 times and his Hirsch index is 120. During the last evaluation of all computer science groups in the Netherlands, the AIS group got the highest marks possible: 5-5-5-5-5 (i.e., a perfect score). The workflow patterns have had a very positive effect on commercial WFM/BPM products. Today, the patterns are widely used to describe workflow functionality in a language/system-independent manner. In addition, the patterns are also highly visible. The Workflow Patterns web site has been one of the most visited web sites in the field of BPM, averaging more than 300 unique visitors per working day over the last decade. Several vendors changed their tools to support more patterns and some have provided wizards based on the patterns. For example, IBM recently added wizard-like functionality to their WebSphere product inspired by the patterns. Standardization efforts were also influenced by the patterns, see for example BPMN.


Staff involved

Wil van der Aalst

Position: HGL
Room: MF 7.064
Tel (internal): 4295
Projects: Core, DeLiBiDa, DSC/e & NWO Graduate Program, Philips Flagship, Process Mining in Logistics, RISE BPM
Links: Personal home page, Google scholar page, Scopus page, ORCID page, TU/e employee page, DSC/e
Wil van der Aalst is a full professor of the Process and Data Science (PADS) group at the RWTH in Aachen (Germany) and a part-time professor in the AIS group. His personal research interests include process mining, business process management, workflow management, Petri nets, process modeling, and process analysis.

Joos Buijs

Position: UD
Room: MF 7.062
Tel (internal): 3661
Projects: CoseLoG, EDSA, Philips Flagship, RISE BPM, TACTICS
Courses: 2IMI20
Links: Personal home page, Google scholar page, Scopus page, ORCID page, TU/e employee page, DSC/e
Joos Buijs' current research interests include process mining in healthcare and learning analytics. Next to these research topics, Joos is also involved in MOOC creation. Related to learning analytics, we also create MOOCs on the topic of process mining. There is the Coursera MOOC "Process Mining: Data Science in Action", and on July 11, 2016 the first session of the FutureLearn MOOC "Process mining with ProM" will launch, a hands-on MOOC where you learn how to apply process mining to your own data!

Boudewijn van Dongen

Position: UHD
Room: MF 7.103
Tel (internal): 2181
Projects: 3TU.BSR, BPR4GDPR, DeLiBiDa, Process Mining in Logistics
Courses: 2IMC92, 2IMC97, 2IMI05, 2IMI30, JBG010, Big Data Honors Track
Links: Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Boudewijn’s research focuses on conformance checking. Conformance checking is considered to be anything where observed behavior needs to be related to already modeled behavior. Conformance checking is embedded in the larger contexts of Business Process Management and Process Mining. Boudewijn aims to develop techniques and tools to analyze databases and logs of large-scale information systems for the purpose of detecting, isolating, diagnosing and predicting misconformance in the business processes supported by these systems. The notion of alignments plays a seminal role in conformance checking, and the AIS group is world-leading in the definition of alignments for various types of observed behavior and for various modelling languages.

Massimiliano de Leoni

Position: UD
Room: MF 7.059
Tel (internal): 8430
Projects: Core
Courses: 2IIC0, 2IHI10, 2IMI35, 2IOC0
Links: Personal home page, Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Dr. Massimiliano de Leoni is assistant professor at the AIS group. His research focuses on the areas of Process-aware Information Systems and Business Process Management, predominantly on multi-perspective process mining, process-aware decision support systems, as well as on visualization techniques for business process management and analysis. He has a genuine interest in the practical applications of his research in real-life settings, which has led him to concretely develop his ideas in terms of software tools and apply them with a large number of organizations world-wide.

Dirk Fahland

Position: UD
Room: MF 7.066
Tel (internal): 4804
Projects: Process Mining in Logistics
Courses: 2IIC0, 2IHI10, 2IMC92, 2IMC97, 2IMI10, 2IOC0, JBG030, JBG040, JBG050
Links: Personal home page, Google scholar page, Scopus page, DBLP page, TU/e employee page
Dirk is Assistant Professor (UD) in the AIS group. He completed his PhD summa cum laude at Humboldt-Universität zu Berlin and Eindhoven University of Technology in 2010. His research interests include distributed processes and systems built from distributed components, for which he investigates modeling systems (using process modeling languages, Petri nets, or scenario-based techniques), analyzing systems for errors or non-conformances (through verification or simulation), and process mining/specification mining techniques for discovering system models from event logs. He particularly focuses on distributed systems with multi-instance characteristics and their synchronizing and interacting behaviors. Dirk has published his research results in over 40 articles at international conferences and in journals and has implemented them in a number of software tools.

Marwan Hassani

Position: UD
Room: MF 7.097A
Tel (internal): 3887
Projects: BPR4GDPR
Courses: 2IAB0, 2IIC0, 2IHI10, Big Data Honors Track, JM0210
Links: Personal home page, Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Dr. Marwan Hassani is assistant professor at the AIS group. His research interests include streaming process mining, stream data mining, sequential pattern mining of multiple streams, efficient anytime clustering of big data streams and exploration of evolving graph data. Marwan received his PhD (2015) from RWTH Aachen University. He coauthored more than 45 scientific publications and serves on several program committees.

Farideh Heidari

Position: UD
Room: MF 7.119
Tel (internal): 2257
Links: Personal home page, Google scholar page, Scopus page, ORCID page, TU/e employee page
Farideh has a multi-disciplinary educational and professional background: math and physics, mechanical and industrial engineering, and a PhD in information systems. Her unique blend of academic and industrial experience has given her an original point of view and a broad perspective. This enables Farideh to bring industry and research together to provide organisations with effective solutions for their businesses.

Wim Nuijten

Position: HGL
Room: Flex
Tel (internal): 3877 (2733)
Projects: DAIPEX
Links: Scopus page, TU/e employee page
Prof. Wim Nuijten is a part-time professor at the AIS group focusing on Intelligent Information Systems. He is an expert in the area of constraint programming and advanced planning and scheduling. Currently, he is working for Quintiq, a leading company in the area of advanced planning and scheduling. Before that, he worked at IBM and ILOG.

Hajo Reijers

Position: HGL
Room: Flex
Tel (internal): 3629
Projects: CoSeLoG, TACTICS
Courses: 2IMI00
Links: Personal home page, Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Hajo Reijers is a part-time full professor of Information Systems at the Technische Universiteit Eindhoven (TU/e). He is also a full professor in Business Informatics at VU University Amsterdam and is affiliated with the TiasNimbas Business School, where he is one of the core lecturers in the Executive Master of Operations and Supply Chain Excellence (MOS) program. Hajo Reijers is one of the founders of the Business Process Management Forum, a Dutch platform for the development and exchange of knowledge between industry and academia. He received a PhD degree in Computer Science (2002), an MSc in Computer Science (1994), and an MSc in Technology and Society (cum laude, 1994), all from TU/e. He wrote his PhD thesis while he was a manager with Deloitte. Previously, he also worked for Bakkenist Management Consultants and Accenture. As a consultant, he has been involved in various reengineering projects and workflow system implementations, particularly for governmental agencies and organizations offering financial services. From 2012 to 2014, he headed the BPM R&D group of Perceptive Software. The focus of his academic research is on business process redesign, workflow management, conceptual modeling, process mining, and simulation. On these topics, he has published over 150 scientific papers, chapters in edited books, and articles in professional journals.

Natalia Sidorova

Position: UD
Room: MF 7.105
Tel (internal): 3705
Projects: Philips Flagship, TACTICS
Courses: 2IAB0, 2IMI15, 2IMI35, DASU20
Links: Personal home page, Google scholar page, Scopus page, DBLP page, TU/e employee page
Dr. Natalia Sidorova is assistant professor at the AIS group. She actively works on topics related to process modeling and verification. The application domains include business processes and distributed systems. She has published more than 70 conference and journal papers. She is active in the Health and Wellbeing Action Line of EIT ICT Labs, taking the lead in projects towards the development of innovative services for disease prevention that make use of modern sensor technologies together with mining, conformance analysis, prediction, and recommendation techniques.

Example projects

Bram in 't Groen

VDSEIR - A graphical layer on top of the Octopus toolset


In his work, Bram in 't Groen introduces a graphical representation for DSEIR (a language used in the Octopus toolset for designing embedded systems) called Visual DSEIR (VDSEIR). By using VDSEIR, users of the toolset can create specifications in DSEIR by creating graphical models, removing the need for those users to know how to program against the Octopus API. Bram in 't Groen provides a model transformation from VDSEIR to DSEIR that makes use of an intermediate generator model and a parser that is automatically generated from an annotated JavaCC grammar. The graphical representation for DSEIR consists of several perspectives and contains a special form of syntactic sugar, namely hierarchy. It is possible to create hierarchical models in the graphical representation without having support for hierarchy in the original DSEIR language, because these hierarchical models can be translated into non-hierarchical DSEIR models. This way, additional modeling convenience is created for the user without modifying the underlying toolset.


AIS / External / ESI

Borana Luka

Model merging in the context of configurable process models


While the role of business process models in the operation of modern organizations becomes more and more prominent, configurable process models have recently emerged as an approach that can facilitate their reuse, thereby helping to reduce costs and effort. Configurable models incorporate the behavior of several model variants into one model, which can be configured and individualized as necessary. The creation of configurable models is a complicated process, and tool support for it is in its early stages. In her thesis, Borana Luka evaluates two existing tool-supported approaches to process model merging and tests an approach to model merging based on the similarity between models. Borana’s work resulted in a paper presented at the 2011 International Workshop on Process Model Collections.


AIS / Internal / CoSeLoG project involving 10 municipalities

Staff involved

Cosmina Cristina Niculae

Guided configuration of industry reference models


Configurable process models are compact representations of process families, capturing both the similarities and the differences of business processes and allowing for the individualization of such processes in line with particular requirements. Such a representation of business processes can be adopted in the consultancy sector and especially in the ERP market, as ERP systems are general solutions applicable to a range of industries and need further configuration before being implemented in particular organizations. Configurable process models can potentially bring several benefits when used in practice, such as faster delivery times in project implementations or standardization of business processes. Cosmina Niculae conducted her project within To-Increase B.V., a company that specializes in ERP implementations. She developed an approach that makes configuration much easier, implemented it, and tested it on real-life cases within To-Increase.


AIS / External / To-Increase

Staff involved

Dennis Schunselaar

Configurable Declare


Declarative languages are becoming more popular for modeling business processes with a high degree of variability. Unlike procedural languages, where the model defines what is to be done, a declarative model specifies what behavior is not allowed, using constraints on process events. In his thesis, Dennis Schunselaar studies how to support configurability in such a declarative setting. He takes Declare as an example of a declarative process modeling language and introduces Configurable Declare. Configurability is achieved by using configuration options for event hiding and constraint omission. He illustrates the approach with a case study based on the process models of ten Dutch municipalities, constructing a Configurable Declare model that supports the variations within these municipalities.
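The two configuration options can be illustrated with a minimal sketch. This is only an illustration under simplifying assumptions: constraints are modeled as plain tuples, the constraint names and events are made up, and real Declare semantics (which are LTL-based) are much richer.

```python
# Illustrative only: a Declare-style constraint is modeled as a tuple
# (template, *events); the templates and events below are hypothetical.
constraints = {
    ("response", "register", "archive"),   # 'register' must eventually be followed by 'archive'
    ("precedence", "verify", "decide"),    # 'decide' requires a preceding 'verify'
}

def configure(constraints, omit=frozenset(), hide=frozenset()):
    """Apply the two configuration options: constraint omission and event hiding."""
    return {c for c in constraints
            if c not in omit                      # constraint omission
            and not (set(c[1:]) & set(hide))}     # drop constraints over hidden events

# A variant that hides the event 'verify' loses the precedence constraint.
variant = configure(constraints, hide={"verify"})
```

Each municipality's variant would then correspond to one choice of omitted constraints and hidden events applied to the shared configurable model.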


AIS / Internal

Staff involved

Erik Nooijen

Artifact-Centric Process Analysis, Process discovery in ERP systems


In his thesis, Erik Nooijen developed an automated technique for discovering process models from enterprise resource planning (ERP) systems. In such systems, several different processes interact to maintain the resources of a business, and all information about these resources is stored in a very large relational database. The challenge in discovering processes from ERP system data is to identify from arbitrary tables how many different processes exist in the system and to extract event data for each instance of each process. Erik Nooijen identified a number of data mining techniques that can solve these challenges and integrated them in a software tool, so that for each process of the ERP system an event log containing all events of that process can be extracted automatically. Classical process discovery techniques can then show the different process models of the system. Erik conducted his project within Sligro, where he is actively using his software to improve the company's processes. The thesis resulted in a workshop paper presented at the International Conference on Business Process Management 2012 in Tallinn, Estonia.
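The core of the extraction step can be sketched in a few lines (the table columns and data below are hypothetical, not taken from the thesis): once a case identifier, an activity, and a timestamp have been identified in the relational data, grouping rows by case and ordering them by time yields the traces of an event log.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical rows from an ERP table: (case id, activity, timestamp).
rows = [
    ("order-1", "create order", "2012-01-01T10:00"),
    ("order-2", "create order", "2012-01-01T10:05"),
    ("order-1", "ship order",   "2012-01-02T09:00"),
    ("order-1", "send invoice", "2012-01-03T14:30"),
]

# Group events by case, then order each case's events by time.
by_case = defaultdict(list)
for case_id, activity, ts in rows:
    by_case[case_id].append((datetime.fromisoformat(ts), activity))

traces = {case: [act for _, act in sorted(events)]
          for case, events in by_case.items()}
# Each trace is the time-ordered sequence of activities of one case,
# the input expected by classical process discovery techniques.
```

The hard part of the thesis is precisely what this sketch assumes away: determining, for arbitrary ERP tables, which columns play the role of case identifier, activity, and timestamp for each of the system's processes.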


AIS / External / Sligro

Staff involved

Irina-Maria Ailenei

Process mining tools: A comparative analysis


In her thesis, Irina-Maria Ailenei proposes an evaluation framework for assessing the strengths and weaknesses of process mining tools. She applied the framework in practice to evaluate four process mining systems: ARIS PPM, Flow, Futura Reflect, and ProcessAnalyzer. The framework is based on a collection of use cases, where a use case is a typical application of process mining functionality in a practical situation. The set of use cases was collected based on the functionality available in ProM and was validated through a series of semi-structured interviews with process mining users and a survey. The validated collection of use cases formed the basis of her tool evaluation. The project was conducted within Fluxicon, based on a request from Siemens. The work also resulted in a paper presented at the BPI workshop in Clermont-Ferrand in 2011.


AIS / External / Fluxicon

Staff involved

Stefania Rusu

Discovery and analysis of field service engineer process using process mining


In her thesis, Stefania Rusu applied process mining techniques to event logs from Philips Healthcare to gain insight into the workflow of Field Service Engineers. To this end, Stefania created hierarchical models that abstract from the low level at which events are stored to higher-level activities, based on the role of the analyst. Performance analysis based on process mining techniques helped to identify bottlenecks and thereby generated ideas for improving the work of Field Service Engineers within Philips Healthcare. Stefania proposed a new approach to annotate process maps with performance information extracted from event logs. To support the approach and apply it in practice, she implemented a new plug-in in the ProM framework.
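The kind of performance information used to annotate a process map can be sketched as follows (the activities and timestamps are made up for illustration, not taken from the thesis): for each pair of consecutive events in a trace, the elapsed time is the figure one would attach to the corresponding edge of the map.

```python
from datetime import datetime

# A hypothetical trace of one field-service visit: (activity, timestamp).
trace = [
    ("dispatch", datetime(2016, 7, 11, 8, 0)),
    ("repair",   datetime(2016, 7, 11, 9, 30)),
    ("close",    datetime(2016, 7, 11, 10, 0)),
]

# Minutes elapsed between consecutive activities: the numbers that
# would annotate the edges 'dispatch -> repair' and 'repair -> close'.
edge_minutes = {
    (a1, a2): (t2 - t1).total_seconds() / 60
    for (a1, t1), (a2, t2) in zip(trace, trace[1:])
}
```

Aggregating such durations over all traces (e.g. as means or percentiles per edge) gives the performance annotations that expose bottlenecks in the process map.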


AIS / External / Philips Healthcare

Staff involved

Erasmus Mundus Joint Master in the field of Big Data Management and Analytics

The department of Computer Science at Technische Universiteit Eindhoven (TU/e), in a consortium of five European universities, has been awarded EU co-funding to run an Erasmus Mundus Joint Master Degree (EMJMD) in the field of Big Data Management and Analytics (BDMA).

The programme curriculum is jointly delivered by Université libre de Bruxelles (ULB), Belgium, Universitat Politècnica de Catalunya (UPC), Spain, Technische Universität Berlin (TUB), Germany, Technische Universiteit Eindhoven (TU/e), the Netherlands, and Université François Rabelais Tours (UFRT), France. Academic partners around the world and partners from leading industries in BI and BD, private R&D companies, excellence centres, service companies, start-up incubators, public research institutes, and public authorities will contribute to the programme by giving lectures, training students, and providing software, course material, and internships or job placements. The EMJMD in “Big Data Management and Analytics” (BDMA) is designed to provide understanding, knowledge, and skills in the broad scope of fields underlying Business Intelligence (BI) and Big Data (BD). Its main objective is to train computer scientists who have an in-depth understanding of the stakes, challenges, and open issues of gathering and analysing large amounts of heterogeneous data for decision-making purposes. The programme will prepare graduates to meet the professional challenges of our data-driven society through a strong connection with industry, and also to pursue their studies in doctorate programmes through a strong connection with research and innovation.

BDMA is a 2-year (120 ECTS) programme. The first two semesters are devoted to fundamentals of BI and BD, delivered by ULB and UPC. Then, all students participate in the European Business Intelligence and Big Data Summer School (eBISS). In the third semester, students choose one of the three specialisations delivered by TUB, TU/e, and UFRT. The fourth semester is dedicated to the master's thesis and can be carried out as an internship in industry or in a research laboratory at any full or associated partner. Finally, all students attend the Final Event devoted to the master's theses defences and the graduation ceremony. The tuition language is English. The programme targets students with a Bachelor of Science (or a level equivalent to 180 ECTS) with a major in Computer Science, as well as English proficiency corresponding to level B2 of CEFR. The programme will deliver a joint degree to graduates following the mobility ULB, UPC, and UFRT, and three degrees from ULB, UPC, and the university of the specialisation (UFRT, TUB, or TU/e) to graduates following the other mobilities.

More information can be found on the UFRT BDMA site, which will open soon.

If you have a master project item for this page, which may include possible master project assignments, on-going master projects, and completed master projects, please send it to Eric Verbeek.