Information for students

This page is deprecated and has been replaced by this projects page. Please update your bookmarks accordingly.

Master Projects

Are you looking for a Master’s thesis topic that

  • solves a challenging real-world problem,
  • allows you to become a specialist in the area of processes and information systems,
  • gives you cutting-edge knowledge to solve today’s problems regarding processes and their use in all kinds of organizations and companies?

The AIS group offers you thesis topics and supervision in a group

  • that has invented and developed cutting-edge technologies that have been applied in over 100 different companies and are now being used in various industrial tools,
  • that has set and contributed to various standards in the area of process-aware information systems,
  • that actively cooperates with many large companies in the area of process-aware information systems in the Netherlands and worldwide, as well as with universities and researchers from all over the world (we are currently running 17 projects with different partners),
  • that gives you clear and regular guidance through the challenges of a Master’s thesis project with the freedom to develop your own ideas, and
  • that, above all, is fun and supportive to work with.

How to do a graduation project with the AIS group?

  1. We expect you to take our specialization courses in Process Mining:
    1. 2IMI35 Introduction to Process Mining,
    2. 2IMI20 Advanced Process Mining, and our
    3. 2IMI00 Seminar “Analytics for Information Systems” (or show equivalent knowledge).
  2. We can supervise you if you are a student in Computer Science or Data Science at the department of Mathematics & Computer Science or within JADS.
  3. You need to find a topic and a supervisor matching your interests and skills.
    1. Our Procedure for Master projects helps you find the right topic for you.
  4. Let us know about your interests so we can match you to the right supervisor.

Possible assignments

The AIS group is conducting research, development, and experiments in the area of Process Analytics and Process Mining. You can join these endeavors by developing your own Process Analytics or Process Mining solution. We offer challenging assignments on industrially relevant topics, personal guidance in developing your own cutting-edge solution in concept and in realization, and evaluation and experiments on real-life data.

The following assignments are a selection of possible topics and are available at all times. If you have a question about a particular topic, please contact the lecturer offering the topic. If you have ideas for your own topic, please contact Dr. Dirk Fahland.

  • Process Discovery using Generative Adversarial Neural Networks. Process Discovery is an unsupervised learning problem: the task is to discover, from sequences (or graphs) of event data, a graph-based model that describes the data best. Generative Adversarial Neural Networks (GANNs) are a type of neural network used to learn structures in an unsupervised fashion. The objective of this project is to explore the potential of applying GANNs to process discovery. Besides surveying state-of-the-art GANNs, the project will require generating training data, training and evaluating existing GANN techniques to discover process models, and comparing the outcomes to models produced by current algorithm-based process discovery techniques. Contact: Dr. Dirk Fahland.
  • Process Mining on Event Graph Databases (multiple projects). Process mining assumes event data to be stored in an event log, which is technically either a relational table (attributes as columns) or a stream of events (attribute-value pairs). Recently, we developed a new technique to store event data in a graph database such as Neo4j. This makes it possible to do process mining over various dimensions, such as shared resources, shared data, or multiple interacting cases, allowing for a more complete view of an entire organization. The objective of this assignment is to develop new process mining use cases and techniques enabled by this format and to showcase their application in a proof-of-concept implementation. Contact: Dr. Dirk Fahland.
  • Mining processes, social networks, and queues (multiple projects). A recent visual analytics technique called the “Performance Spectrum” https://github.com/processmining-in-logistics/psm allows us to gain more fine-grained insights into performance behavior and changes over time. A TU/e Master student showed that it is possible to mine the synchronization of cases from performance spectrum data, demonstrating that the behavior of a case depends on the mechanisms and actors that drive it. The objective of this research is to investigate the interplay of processes, actors (social networks), and work-organization mechanisms (e.g., queues) in more detail. Contact: Dr. Dirk Fahland.
  • Process Mining with Textual Data. In many application domains, a process execution is captured in natural language. Think of medical records, customer complaints, or legal records. The same holds for process models: medical guidelines, user manuals, and legal regulations are typical examples of process models captured as text. Such data poses a new challenge for the process mining field, since traditional process mining techniques need to be augmented with Natural Language Processing (NLP) technologies. The first steps in this direction were made in a recent Master thesis, “Local Process Model Extraction from Textual Data” by Fatemeh Shafiee, in which process fragments were mined from separate sentences of the text. The next challenge is to build larger fragments from the elementary process fragments already mined. Contact: Dr. Natalia Sidorova.
  • Real-Time Process Mining for Customer Journey Data. Available process discovery techniques have been tested in the customer journey context only under offline settings. However, recent online process discovery approaches, such as https://ieeexplore.ieee.org/document/7376771, bring a lot of added value for real-time customer journey optimization. The objective of this assignment is to use two different customer journey datasets to test the effectiveness of such approaches for the real-time optimization of customer journeys. Contact: Dr. ing. Marwan Hassani.
  • Finding Patterns in Evolving Graphs. The analysis of the temporal evolution of dynamic graphs, like social networks, is a key challenge for understanding complex processes hidden in graph-structured data. Graph evolution rules capture such processes on the level of small subgraphs by describing frequently occurring structural changes within a network. Existing rule discovery methods make restrictive assumptions about the change processes present in networks. The objective of this assignment is to track particular patterns under the natural evolution of a graph, which includes both the insertion and deletion of nodes. Contact: Dr. ing. Marwan Hassani.
  • Using Sequential Pattern Mining to Detect Drifts in Streaming Data. BFSPMiner, an effective and efficient batch-free algorithm for mining sequential patterns over data streams, was published recently: https://link.springer.com/article/10.1007/s41060-017-0084-8. An implementation of the algorithm is available here: https://github.com/Xsea/BFSPMiner. As BFSPMiner has proven to be effective (see Figures 10-14 of the paper) in different domains (see Table 1 in the paper), we would like to use it for detecting drifts in the underlying stream. This can be done, for instance, by observing considerable statistical deviations in the discovered top-k patterns. Additionally, to better adapt to streaming settings, a new method is needed to adaptively change the optimal value of the maximal pattern length. Contact: Dr. ing. Marwan Hassani.
  • Efficient unsupervised event context detection for event log clustering, outlier detection, and pre-processing. We recently developed a technique to detect the context of events in an event log efficiently through sub-graph matching. This makes it possible to identify events and parts of event logs that are similar to or different from each other, which allows us to cluster traces, detect outliers, and improve event log quality, as illustrated in this Demo video. The objective of this assignment is to improve the efficiency of the matching algorithm and the quality of the results through graph summarization and data mining techniques. Contact: Dr. Dirk Fahland.
  • Smart event log pre-processing. The quality of process mining results highly depends on the quality of the input data: noise, infrequent behaviors, log incompleteness, or many different variants undercut the assumptions of process discovery algorithms and lead to low-quality results. ProM provides numerous event log pre-processing and filtering options, but they require expert knowledge to understand which pre-processing step is advised when, to reach a particular outcome. The objective of this assignment is to develop a comprehensive framework for event log pre-processing and a recommender that suggests adequate pre-processing steps to the user depending on the data properties and the analysis task. Ideas are to be realized in a proof-of-concept implementation. Contact: Dr. Dirk Fahland.
  • Log Data Anonymization. In the context of process mining, we are often confronted with companies willing to share their data if we can sufficiently anonymize it. However, to date, there are no well-defined plugins to do such anonymizations. Therefore, we are looking for a Master student who is willing to help us with this. Part of the project will be an in-depth investigation of existing anonymization techniques: which techniques are available, and what analysis properties do they preserve? More importantly, how difficult is it to de-anonymize the data? Another important part is the implementation of an anonymization framework in our toolset ProM. Good programming skills in Java are therefore important, no matter whether you are a CSE or a Data Science student. For more information, contact prof. Boudewijn van Dongen.
  • Adding heuristics to the Block Layout. The Block Layout can be used to create a layout for a process graph. For this, it uses well-known Petri-net-based reduction rules to reduce the entire net into a single place. For nicely structured process graphs, this layout works quite well, but for graphs with a more complex structure, the resulting layout needs to be improved. Either good heuristics are needed for deciding (in case we have a choice) which way to go, or new reduction rules that can cope with such graphs. For more information, contact Eric Verbeek.
  • N-out-of-M patterns in alignments. Aligning structured process models to event logs is a far from trivial task. In complex modelling languages, inclusive OR-split/join patterns play an important role and they are known to be notoriously difficult to align to event logs due to their large state-spaces.
    The known Petri net translations of OR-joins rely either on token coloring or on structured combinations of so-called tau transitions to route the tokens in the Petri net, mimicking the OR-join semantics. In this project, we investigate alternative translations of OR-joins/splits to Petri nets which are optimized for alignment computations. This means we are not looking for sound translations, but rather relaxed-sound or easy-sound ones. However, the close relation between the Petri net, its incidence matrix, and the heuristic function in alignments needs to be taken into consideration.
    Finally, we aim to generalize the classical OR-join (skip at most M-1 out of M options) to an N-out-of-M version, where the requirement is to execute N out of M submodels. Contact prof. Boudewijn van Dongen.
  • Generating non block-structured models and corresponding logs. For experimenting with process discovery and Petri nets, scientists often rely on experiments with artificial models and logs. More often than not, these models are block-structured, as it is easy to generate such models by simply building a random process tree and translating that into a Petri net. However, Petri nets allow for more intricate structures, which may heavily affect experimental results; i.e., the representational bias of block-structured models is too limited for state-of-the-art process mining technology.
    We are therefore looking for a tool to generate sound, but not block structured, Petri nets and corresponding event logs with various characteristics in terms of deviations and decision point behaviour. Contact prof. Boudewijn van Dongen
  • Petri net reduction rules for replay. Replaying event logs on Petri nets, either through token-replay or using alignments, is a complex task. Especially when models become larger and have more labels, the size of the models becomes a problem. In Petri net theory, many reduction rules exist for reducing Petri nets while retaining, for example, the soundness of the model. Can we derive a similar set of rules that are “replay-preserving”, i.e., can we make the problem of replaying logs on models simpler by applying a structural reduction to the input? Contact prof. Boudewijn van Dongen.
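To make the idea behind the “Process Mining on Event Graph Databases” assignment above concrete, the following minimal Python sketch derives directly-follows edges over different entity dimensions from a small in-memory event set. The events, attribute names, and entity dimensions are invented for illustration; a real implementation would store events as nodes in a graph database such as Neo4j and derive such edges with graph queries.

```python
# Sketch: storing events as graph nodes and deriving "directly-follows"
# edges per entity (case, resource), as in multi-dimensional event graphs.
# All event attributes below are illustrative placeholders.

from collections import defaultdict

events = [
    {"id": 1, "activity": "Register", "case": "c1", "resource": "Ann", "time": 1},
    {"id": 2, "activity": "Check",    "case": "c1", "resource": "Bob", "time": 2},
    {"id": 3, "activity": "Register", "case": "c2", "resource": "Ann", "time": 3},
    {"id": 4, "activity": "Decide",   "case": "c1", "resource": "Ann", "time": 4},
]

def df_edges(events, entity):
    """Directly-follows edges between events correlated to the same entity."""
    per_entity = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        per_entity[e[entity]].append(e["id"])
    return {(a, b) for ids in per_entity.values() for a, b in zip(ids, ids[1:])}

case_edges = df_edges(events, "case")          # classic single-case view
resource_edges = df_edges(events, "resource")  # resource dimension

print(sorted(case_edges))      # [(1, 2), (2, 4)]
print(sorted(resource_edges))  # [(1, 3), (3, 4)]
```

The point of the sketch is that the same event set yields different process views depending on which correlation dimension the edges follow.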
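The drift-detection idea from the “Using Sequential Pattern Mining to Detect Drifts in Streaming Data” assignment above can likewise be sketched. The snippet below flags a drift when the top-k patterns of the current window overlap too little with those of the previous window; Jaccard similarity is only one plausible deviation measure, and the patterns shown are placeholders rather than actual BFSPMiner output.

```python
# Sketch: flag a drift when the top-k sequential patterns of the current
# window deviate strongly from those of the previous window.

def jaccard(a, b):
    """Set-overlap similarity between two pattern collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def drift_detected(prev_topk, curr_topk, threshold=0.5):
    """Signal drift when pattern overlap drops below the threshold."""
    return jaccard(prev_topk, curr_topk) < threshold

# Placeholder top-k patterns from two consecutive stream windows.
prev = [("a", "b"), ("b", "c"), ("a", "c")]
curr = [("a", "b"), ("c", "d"), ("d", "e")]

print(drift_detected(prev, curr))  # True: only 1 of 5 distinct patterns is shared
```

A real solution would also have to deal with the adaptive maximal pattern length mentioned in the assignment, which this toy threshold check does not address.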

Possible external assignments

In addition to the above assignments by the AIS group, we also offer supervision on graduation topics together with external partners.

Resolving kpn Customer Journey Variances through a Suitable Similarity Measure

The customer journey approach is quickly becoming the game changer within KPN www.kpn.com to become a customer-centric service provider and to improve the customer experience to an un-telco like level. Within this context, we have already connected various touch points of the customer, including calls, chats, store visits, online visits and engineer visits, and we are continuously adding more touch points to our customer journey dataset. In addition to these touch points also metadata, such as cost, is added to the data set. For research purposes, we use only data from customers who have given permission for this.

We are starting to harvest the benefits from a customer journey approach. For example, we use insights about follow-up touch points to direct our efforts to improve business process towards a more efficient and self-service oriented user portal.

A major obstacle in finding opportunities for improvement, however, is the huge variance within the customer journey. A customer has countless ways to reach the same goal. To improve our opportunity finding, we need to bring order to this chaos. In this project, we want to achieve that by clustering journeys according to customer intent. Therefore, a similarity measure between the observed variants needs to be developed. First, available distance measures (e.g. for trace clustering) should be explored. Then, using domain knowledge and unsupervised evaluation measures, one would like to identify the best available similarity measure and optimize it.

Finally, by using this similarity measure, the customer journey variants can be clustered, where each cluster should then represent a specific customer intent. This final clustering result should help KPN in opportunity finding through better sizing and prioritization of our efforts, as well as in creating better solutions by having a more complete picture of the average journey per customer intent.
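As a first illustration of what such a similarity measure could look like, the hypothetical Python sketch below compares two journeys, represented as sequences of touchpoints, using a normalized edit (Levenshtein) distance. The touchpoint names are invented, and the actual project would explore and evaluate several candidate measures rather than commit to this one.

```python
# Sketch: one candidate similarity measure for customer journey variants,
# a normalized edit (Levenshtein) distance over touchpoint sequences.

def edit_distance(s, t):
    """Classic dynamic-programming Levenshtein distance (one-row variant)."""
    d = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, b in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (a != b))
    return d[len(t)]

def similarity(s, t):
    """1.0 for identical journeys, 0.0 for completely different ones."""
    if not s and not t:
        return 1.0
    return 1.0 - edit_distance(s, t) / max(len(s), len(t))

j1 = ["online", "chat", "call"]   # invented touchpoint sequences
j2 = ["online", "call"]
print(round(similarity(j1, j2), 2))  # 0.67: one touchpoint deleted out of three
```

Given such a pairwise measure, standard clustering techniques (e.g. hierarchical clustering on the distance matrix) could then group variants by presumed customer intent.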

KPN is one of the largest Dutch telecommunication companies with a market share of approximately 40% for fixed services, with a customer base of about 3 million broadband customers and 5.5 million postpaid mobile customers. We deliver our services through several channels with a myriad of touchpoints.

→ Read more...

Enriching Customer Journey Prediction in kpn with Context Data

The customer journey approach is quickly becoming the game changer within KPN www.kpn.com to become a customer-centric service provider and to improve the customer experience to an un-telco like level. Within this context, we have already connected various touch points of the customer, including calls, chats, store visits, online visits and engineer visits, and we are continuously adding more touch points to our customer journey dataset. In addition to these touch points also metadata, such as cost, is added to the data set, see figure below. For research purposes, we use only data from customers who have given permission for this.

We are starting to harvest the benefits from a customer journey approach. For example, we use insights about follow-up touch points to direct our efforts to improve the business process towards a more efficient and self-service oriented user portal. With these improvements in mind, we are now ready for the next step, with the following questions. Can we predict, given the contact history and the context such as demographic and performance data, what the next touch point(s) will be? If this is possible, how early can we get this prediction?

The final prediction model should be good enough to pro-actively help the customer in their journey towards a satisfactory solution. First, this will reduce customer effort and boost customer satisfaction. Second, it will allow us to be more efficient by combining touch points and optimizing planning. The effectiveness of the prediction approach will be tested in a real customer-satisfaction setting.
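As a minimal illustration of next-touchpoint prediction, the sketch below trains a first-order Markov baseline on invented journeys: it predicts the most frequent successor of the current touchpoint. The actual project would enrich such a baseline with the context features mentioned above (demographics, performance data) and evaluate how early in the journey predictions become reliable.

```python
# Sketch: a first-order Markov baseline for next-touchpoint prediction.
# The journeys and touchpoint names below are invented for illustration.

from collections import Counter, defaultdict

journeys = [
    ["online", "chat", "call"],
    ["online", "chat", "store"],
    ["online", "call"],
]

# Count transitions between consecutive touchpoints.
transitions = defaultdict(Counter)
for journey in journeys:
    for cur, nxt in zip(journey, journey[1:]):
        transitions[cur][nxt] += 1

def predict_next(touchpoint):
    """Most frequent successor seen in training, or None if unseen."""
    counts = transitions.get(touchpoint)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("online"))  # chat (seen twice, vs. call once)
```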

KPN is one of the largest Dutch telecommunication companies with a market share of approximately 40% for fixed services, with a customer base of about 3 million broadband customers and 5.5 million postpaid mobile customers. We deliver our services through several channels with a myriad of touchpoints.

→ Read more...

Data-driven Product Design at Philips Healthcare (3 Master projects)

The study of user behaviour is an important part of the product design process. This is particularly difficult for products that require very complex user interaction. Therefore, obtaining as much product usage information as possible is needed, since it can reveal patterns in user behaviour that indicate a misalignment between the use intended by product designers and the actual use. Most often, many assumptions made during design are based either on domain knowledge or on personal expertise. Traditional techniques such as interviews or reviewing customer complaints provide only a restricted view of the behaviour of a small set of users.

Philips Healthcare is using data-driven techniques, in particular the analysis of event data of product usage, to improve both the design of products and the design process itself.

A prior Master project by the AIS group and Philips Healthcare showed first promising results. Philips Healthcare wants to continue research in this area with 3 new Master projects.

→ Read more...

Automatically Matching Requirements to Process Models to Determine the Impact of Change (ACCHA)

Target audience: Computer Science students with a data science/machine learning/NLP background.

Task description: The main task of this Master thesis is to develop and implement a technique that is able to automatically link textual requirements to model-based representations of business processes (so-called process models). By doing so, it will be possible to quantify the impact of a set of requirements (e.g. specifying a new piece of software) on the currently running business processes of an organization. The main challenge associated with this task is to successfully leverage Natural Language Processing techniques to deal with the rather short language fragments that are typically used in process models. Real-world data will be provided for both developing and testing the technique.

→ Read more...

Business Process Mining and Modeling at Amsterdam municipality (4 positions)

Amsterdam is a dynamic metropolis with great ambitions: Creating an excellent urban climate of living, working and leisure, but also a decisive and effective government. The city is a 'living lab', where metropolitan tasks are both a challenge and an opportunity to develop and apply new insights, technologies and practices.

In this context, Amsterdam municipality is looking for 4 thesis interns (WO) in Business Process Mining and Modeling who are able to gain insight into the existing process equipment and systems in the municipal information chains. Nederlandse versie, English version.

→ Read more...

Using Process Mining to find lead indicators of Quality Issues

Using process mining in combination with (more regular) data analysis to predict the chance of part failure during EUV assembly and testing.

→ Read more...

Detecting root causes of complaints and investigating the continuation within the customer journey

In the Dutch health care system, health care insurance is mandatory for all residents. The government sets the basis package, and insurers compete on price and service. Customer service is therefore very important for every health insurance company, especially in the fast-changing digital world. As a result, customer satisfaction is the most important KPI. Complaints are closely related to customer satisfaction, because they have a highly negative effect on it. Therefore, it is crucial to prevent customers from having any complaints. Customers do not submit a complaint out of the blue: a complaint is a reaction to one or more occurrences in the past. The best way to understand the triggers, or root causes, is to understand the touchpoints prior to a complaint.

CZ aims for a method to prevent complaints (in order to increase customer satisfaction and lower churn). By taking the touchpoints of the customer journey into account, complaint root causes can be detected and CZ can prevent future complaints. Moreover, the touchpoints after a complaint give insights into an optimal follow-up.

Prior research development

Underlined works together with several companies, like CZ, Aegon and SNS, to build a generic framework in which all customer contacts are brought together in a unique dataset, which is further enriched by Underlined with relevant analyses that can be linked to customer events. Underlined co-created customer journey mining algorithms together with TU/e (Bart Hompes and Joos Buijs from the Architecture of Information Systems group). This research showed that it is possible to distinguish the different journeys per customer. Moreover, in collaboration with the TiU econometrics department, a driver model has recently been developed to distinguish relevant drivers and to predict the NPS score.

→ Read more...

Advanced Process Mining techniques in Practice (several Master projects with ProcessGold)

ProcessGold is a software supplier bringing together Process Mining and Business Intelligence, driven by highly skilled ICT entrepreneurs and backed by a wealth of experience. ProcessGold recently released a new Process Mining platform, the ProcessGold Enterprise Platform, that combines data extraction, process mining techniques, and visual analytics in order to produce dynamic visual reports which are easy to monitor and analyze for process stakeholders. These reports form the basis for deeper, fact-driven analysis and continuous process improvement projects.

In this context, ProcessGold is constantly offering graduation internships for investigating new techniques and methodologies in Process Mining and their application in a business context. A few example topics are given below; the specific graduation project and its scope will be further developed in mutual agreement.

→ Read more...

Philips HUE Product Evolution Using Stream Mining of Customer Journey

Philips HUE is a connected personal lighting system. It is controlled by a range of apps and smart home devices.

To acquire Philips HUE, one starts with a starter kit that consists of a few lamps and a bridge. Subsequently, consumers decide to expand their system with additional lamps and/or physical sensors. About 50 lamps can be controlled in one system.

An example of a HUE system is shown in the figure below. It consists of a bridge, three color lamps and a dimmer switch.

Every consumer chooses a device to interact with the lights. By default, all consumers start with the HUE app or a third-party app. There are also options in the HUE app to create routines to control the lights automatically. For instance, one can create a wake-up routine so that the lights in the bedroom go on naturally in the morning. HUE can also be controlled by Philips physical sensors (motion sensor, tap, dimmer switch), voice control, and other third-party smart home devices. Amazon Echo, Google Home, Eneco Toon, and HomeKit are some of the smart home devices that can be linked to HUE. In addition, a traditional wall switch can also be used to control the lights. For further information, see the Philips Lighting website.

The Master thesis focuses on developing a process mining model to understand the product evolution of consumers in their journey and to identify the paths that contribute to a high Customer Lifetime Value (CLV). Moreover, the thesis will also explore data mining techniques that can be applied to the outcome of the process mining model.

→ Read more...

Real-Time Model Discovery of the Service Order Process Using Stream Process Mining

Kropman Installatietechniek is a Dutch company, established in 1934, that has become one of the leading companies in the Dutch installation industry. With about 800 employees, 12 regional locations and an annual turnover of more than 100 million Euro, Kropman is an integral service provider with a multidisciplinary approach. Kropman is mainly active in office buildings, health care and industry. It offers design, construction and maintenance in the field of facility installations. Kropman also has a separate business for process installations and cleanrooms: Kropman Contamination Control. Maintenance (services) is a fast-growing business line. The order process is fully supported within an ERP environment. The service order process is not a trouble-free process: it takes too long and deviates from the intended flow more often than necessary. The company aims to increase the throughput of the SO process and to decrease the number of process deviations by applying process mining and data mining techniques.

To have a better overview of the process, Kropman is aiming at:

  • Exposing bottlenecks happening in the SO process
  • Detecting deviations from the intended model in real time, as they happen, using stream process mining techniques

→ Read more...

Do CHANGE project with Onmi Design

Onmi Design is currently working on the Do CHANGE project, funded under the European Commission’s Horizon 2020 programme. In this project, they are developing a health ecosystem that provides real-time behaviour-change coaching based on input from various sensors. The student would assist in the development of analysis algorithms and the integration of these algorithms into the system. For more information on the project, please visit www.do-change.eu.

→ Read more...

Procedure for Master Projects

We offer Master projects in several different areas as well as on concrete topics. We then tailor the Master project assignment to each individual student, with that student’s skills in mind. To create an individual assignment, you have to find and commit to a supervisor who helps you in this process. In particular, external assignments must be discussed and defined together with the supervisor to ensure that the defined assignment fits the student, the company, and the study program.

We help you find an assignment and a supervisor through the following steps (explained in detail below):

  • Steps 1 and 2: You identify a short-list of possible topics and supervisors.
  • Step 3: You enter your interests (and some other information about yourself) in an online form. We will match you to a possible supervisor who can supervise your topic/your interests.
  • Step 4: You then define and run your graduation project together with your supervisor.

→ Read more...

Research Lines

Assignments for Master projects typically fall into one of the three main research lines of AIS:

Processes

The research group distinguishes itself in the Information Systems discipline by its fundamental focus on modelling, understanding, analyzing, and improving processes. Essentially, every time two or more activities are performed to reach a certain goal, fundamental principles of processes apply. Processes take place on the level of individual actors, groups of actors, entire organizations, and networks of organizations. Processes, in one form or another, can be seen everywhere:

In traditional workflow management settings, the idea behind a process is that there is a single notion of a “case” flowing through a “process” according to a pre-defined route, i.e. a “process model”. However, the notion of processes and the principles behind them are much broader. In many situations, processes are not explicitly defined (there is no procedure for a customer clicking through a website) or are highly flexible (typically in knowledge intensive processes such as designing products or deciding about visa and immigration). Also, many processes have interactions with other processes: for example, there are many interlinked processes behind every patient in a hospital or behind the supply chain of any multi-national manufacturer. There is a growing concern within organizations, governments, and society as a whole that processes need to be governed properly and efficiently. The availability of large amounts of data on the one hand and a fundamental understanding of the process notion on the other shape the opportunity for researchers in our group to contribute to this societal challenge.

Background: Business Process Management

Within organizations, the management of processes has been an important topic since the introduction of conveyor belts in the early 1920s. Since 2000, business process management has been the research field focusing on agility in organizations and continuous (business) process improvement through a BPM life-cycle of designing, modelling, executing, monitoring, and optimizing processes.

Lately, there has been growing attention in the field of business process management for embedding process analytics into (process-aware) information systems, i.e., BPM provides the context in which our analytics are being developed.

Fundamental to the research group at Eindhoven University of Technology is the choice of Petri nets as the language to precisely describe process dynamics, also in complex settings, at a foundational level. This choice of language is what distinguishes our research group from more industrial-engineering-oriented information systems groups.
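The core of Petri net dynamics can be stated in a few lines: a transition is enabled when each of its input places holds a token, and firing it consumes those tokens and produces tokens on its output places. The following Python sketch (a toy workflow net, not code from the group's tools) illustrates this firing rule:

```python
# Minimal sketch of Petri net dynamics: enabledness and the firing rule.
# The net and all names below form a toy example.

def enabled(marking, inputs):
    """A transition is enabled when every input place carries a token."""
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, inputs, outputs):
    """Return the marking reached by firing a transition."""
    m = dict(marking)
    for p in inputs:
        m[p] -= 1              # consume one token from each input place
    for p in outputs:
        m[p] = m.get(p, 0) + 1  # produce one token on each output place
    return m

# A tiny workflow net: start -> [register] -> p1 -> [decide] -> end
net = {"register": (["start"], ["p1"]), "decide": (["p1"], ["end"])}
m = {"start": 1}
for t in ["register", "decide"]:
    ins, outs = net[t]
    assert enabled(m, ins)
    m = fire(m, ins, outs)
print(m)  # {'start': 0, 'p1': 0, 'end': 1}
```

Soundness, as studied for workflow nets, then amounts to properties of the markings reachable under this rule (e.g., always being able to reach the marking with a single token on the final place).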

Process Analytics

One of the foundations of computer science today is data. The omnipresence of increasingly large volumes of data has become a key driver for many innovations and new research directions in computer science. Specifically in information systems, data, and the analytics developed on top of this data, have transformed the field from expert-driven to evidence-based, which in turn massively broadens the applicability of results to more and larger contexts. Many of the advanced process analysis tools and techniques developed in the AIS group over the last 15 years are available today in over 25 commercial packages.

The research in our group continues to expand outward from a “classical” situation of data with clear case notions in the context of explicitly structured processes to a broad, multi-faceted field where processes are less structured or consist of many interacting artifacts, and where case notions in the data become more fluid or form complex, multi-dimensional networks.

The figure above shows this research field of process analytics.

Impact and Societal Relevance

When selecting a Master project, it is advisable to consider the track record of the research group supervising the project. Therefore, we briefly discuss the impact and societal relevance of AIS’s research.

The work of the AIS group is world renowned, especially in the fields of

  1. the modeling and analysis of workflow processes (cf. workflow nets and the seminal soundness notion),
  2. workflow patterns (the DAPD paper on the workflow patterns is the most cited paper in the BPM domain and the Workflow Patterns web site is the most visited web site on workflow management over the last decade), and
  3. process mining (e.g., we established an international process mining community).

The impact of our work is reflected by the many citations of the AIS group’s publications. For example, Wil van der Aalst is the highest-ranked European computer scientist based on Google Scholar. His work has been cited more than 63,000 times and his Hirsch index is 120. During the last evaluation of all computer science groups in the Netherlands, the AIS group received the highest marks possible: 5-5-5-5-5 (i.e., a perfect score). The workflow patterns have had a very positive effect on commercial WFM/BPM products. Today, the patterns are widely used to describe workflow functionality in a language- and system-independent manner. The patterns are also highly visible: the Workflow Patterns web site has been one of the most visited web sites in the field of BPM, averaging more than 300 unique visitors per working day over the last decade. Several vendors changed their tools to support more patterns, and some have provided wizards based on the patterns; for example, IBM recently added wizard-like functionality to their WebSphere product inspired by the patterns. Standardization efforts were also influenced by the patterns, see for example BPMN.

→ Read more...

Staff involved

Boudewijn van Dongen
Position: Full professor
Room: MF 7.103
Tel (internal): 2181
Projects: 3TU.BSR, BPR4GDPR, DeLiBiDa, Process Mining in Logistics
Courses: 2IIH0,2IMI05, 2IMI35
Links: Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Boudewijn’s research focuses on conformance checking: any setting in which observed behavior needs to be related to previously modeled behavior. Conformance checking is embedded in the larger contexts of Business Process Management and Process Mining. Boudewijn aims to develop techniques and tools to analyze databases and logs of large-scale information systems for the purpose of detecting, isolating, diagnosing, and predicting non-conformance in the business processes supported by these systems. The notion of alignments plays a seminal role in conformance checking, and the AIS group is world-leading in the definition of alignments for various types of observed behavior and various modeling languages.
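As a hedged illustration of the alignment idea (not the group's actual algorithms, which search over all runs of a Petri net), the sketch below aligns an observed trace against a single fixed model run using edit-distance dynamic programming. `>>` marks a "no move": a pair like `('c', '>>')` is a log move (observed but not allowed by the model) and `('>>', 'b')` a model move (required by the model but not observed).

```python
# Hedged sketch: align one observed trace with one model run.
# Synchronous moves cost 0; log moves and model moves cost 1 each.
def align(trace, model_run, skip=">>"):
    n, m = len(trace), len(model_run)
    # cost[i][j] = cheapest alignment of trace[:i] with model_run[:j]
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i  # only log moves possible
    for j in range(1, m + 1):
        cost[0][j] = j  # only model moves possible
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = cost[i-1][j-1] if trace[i-1] == model_run[j-1] else float("inf")
            cost[i][j] = min(sync, cost[i-1][j] + 1, cost[i][j-1] + 1)
    # backtrack into (log move, model move) pairs
    moves, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and trace[i-1] == model_run[j-1]
                and cost[i][j] == cost[i-1][j-1]):
            moves.append((trace[i-1], model_run[j-1])); i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i-1][j] + 1:
            moves.append((trace[i-1], skip)); i -= 1
        else:
            moves.append((skip, model_run[j-1])); j -= 1
    return list(reversed(moves)), cost[n][m]

moves, c = align(["a", "c", "d"], ["a", "b", "d"])
print(moves)  # [('a', 'a'), ('>>', 'b'), ('c', '>>'), ('d', 'd')]
print(c)      # 2
```

The cost of 2 reflects one model move and one log move; an optimal alignment minimizes this cost and thereby pinpoints exactly where the observed behavior deviates from the model.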
Dirk Fahland
Position: Associate-Professor (UHD)
Room: MF 7.066
Tel (internal): 4804
Projects: Process Mining in Logistics
Courses: 2IMI00, 2IMI05, 2IMI20
Links: TU/e profile page, Personal home page, Google scholar page, Scopus page, DBLP page
Dirk is Associate-Professor (UHD) in the AIS group. He completed his PhD summa cum laude at Humboldt-Universität zu Berlin and Eindhoven University of Technology in 2010. His research interests include distributed processes and systems built from distributed components, for which he investigates modeling systems (using process modeling languages, Petri nets, or scenario-based techniques), analyzing systems for errors or non-conformance (through verification or simulation), and process mining/specification mining techniques for discovering system models from event logs. He particularly focuses on distributed systems with multi-instance characteristics and their synchronizing and interacting behavior. Dirk published his research results in over 40 articles at international conferences and in journals and implemented them in a number of software tools.
Marwan Hassani
Position: UD
Room: MF 7.097A
Tel (internal): 3887
Projects: BPR4GDPR
Courses: 2IAB0, JBG030, JM0210
Links: Personal home page, Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Dr. Marwan Hassani is assistant professor at the AIS group. His research interests include streaming process mining, stream data mining, sequential pattern mining of multiple streams, efficient anytime clustering of big data streams and exploration of evolving graph data. Marwan received his PhD (2015) from RWTH Aachen University. He coauthored more than 45 scientific publications and serves on several program committees.
Renata Medeiros de Carvalho
Position: UD
Room: MF 7.059
Tel (internal):
Projects: BPR4GDPR
Courses: 2IAB0, 2IMC93, 2IMC98, JM0200
Links: Scopus page, DBLP page, TU/e employee page
Hajo Reijers
Position: HGL
Room: Flex
Tel (internal): 3629
Projects: CoSeLoG, TACTICS
Courses:
Links: Personal home page, Google scholar page, Scopus page, ORCID page, DBLP page, TU/e employee page
Prof.dr.ir. Hajo Reijers is a part-time full professor of Information Systems at the Technische Universiteit Eindhoven (TU/e). He is also a full professor in Business Informatics at VU University Amsterdam and is affiliated with the TiasNimbas Business School, where he is one of the core lecturers in the Executive Master of Operations and Supply Chain Excellence (MOS) program. Hajo Reijers is one of the founders of the Business Process Management Forum, a Dutch platform for the development and exchange of knowledge between industry and academia. Hajo Reijers received a PhD degree in Computer Science (2002), an MSc in Computer Science (1994), and an MSc in Technology and Society (cum laude) (1994), all from TU/e. Hajo Reijers wrote his PhD thesis while he was a manager with Deloitte. Previously, he also worked for Bakkenist Management Consultants and Accenture. As a consultant, he has been involved in various reengineering projects and workflow system implementations, particularly for governmental agencies and organizations offering financial services. From 2012 to 2014, Hajo Reijers headed the BPM R&D group of Perceptive Software. The focus of Hajo Reijers’ academic research is on business process redesign, workflow management, conceptual modeling, process mining, and simulation. On these topics, he has published over 150 scientific papers, chapters in edited books, and articles in professional journals.
Natalia Sidorova
Position: UD
Room: MF 7.105
Tel (internal): 3705
Projects: Philips Flagship, TACTICS
Courses: 2IAB0, 2IMI15, DASU20
Links: Personal home page, Google scholar page, Scopus page, DBLP page, TU/e employee page
Dr. Natalia Sidorova is assistant professor at the AIS group. She actively works on topics related to process modeling and verification. Her application domains include business processes and distributed systems. She has published more than 70 conference and journal papers. She is active in the Health and Wellbeing Action Line of EIT ICT Labs, leading projects towards the development of innovative services for disease prevention that combine modern sensor technologies with mining, conformance analysis, prediction, and recommendation techniques.

Example projects

Bram in 't Groen

VDSEIR - A graphical layer on top of the Octopus toolset

Description

In his work, Bram in 't Groen introduces a graphical representation for DSEIR (a language used in the Octopus toolset for designing embedded systems) called Visual DSEIR (VDSEIR). Using VDSEIR, users of the toolset can create DSEIR specifications by building graphical models, removing the need for those users to program against the Octopus API. Bram in 't Groen provides a model transformation from VDSEIR to DSEIR that makes use of an intermediate generator model and a parser automatically generated from an annotated JavaCC grammar. The graphical representation for DSEIR consists of several perspectives and contains a special form of syntactic sugar, namely hierarchy. It is possible to create hierarchical models in the graphical representation without having support for hierarchy in the original DSEIR language, because these hierarchical models can be translated into non-hierarchical DSEIR models. This way, the user gains additional modeling convenience without modifications to the underlying toolset.

Type

AIS / External / ESI

Borana Luka

Model merging in the context of configurable process models

Description

While the role of business process models in the operation of modern organizations becomes more and more prominent, configurable process models have recently emerged as an approach that facilitates their reuse, thereby helping to reduce costs and effort. Configurable models incorporate the behavior of several model variants into one model, which can be configured and individualized as necessary. Creating configurable models is a complicated process, and tool support for it is in its early stages. In her thesis, Borana Luka evaluates two existing, tool-supported approaches to process model merging and tests an approach to model merging based on the similarity between models. Borana’s work resulted in a paper presented at the 2011 International Workshop on Process Model Collections.

Type

AIS / Internal / CoSeLoG project involving 10 municipalities

Links
Staff involved
Cosmina Cristina Niculae

Guided configuration of industry reference models

Description

Configurable process models are compact representations of process families, capturing both the similarities and differences of business processes and allowing for the individualization of such processes in line with particular requirements. Such a representation of business processes can be adopted in the consultancy sector and especially in the ERP market, as ERP systems are general solutions applicable to a range of industries that need further configuration before being implemented at particular organizations. Configurable process models can potentially bring several benefits when used in practice, such as faster delivery times in project implementations or standardization of business processes. Cosmina Niculae conducted her project within To-Increase B.V., a company that specializes in ERP implementations. She developed an approach that makes configuration much easier, implemented it, and tested it on real-life cases within To-Increase.

Type

AIS / External / To-Increase

Links
Staff involved
Dennis Schunselaar

Configurable Declare

Description

Declarative languages are becoming more popular for modeling business processes with a high degree of variability. Unlike procedural languages, where the models define what is to be done, a declarative model specifies what behavior is not allowed, using constraints on process events. In his thesis, Dennis Schunselaar studies how to support configurability in such a declarative setting. He takes Declare as an example of a declarative process modeling language and introduces Configurable Declare. Configurability is achieved by using configuration options for event hiding and constraint omission. He illustrated the approach using a case study based on process models of ten Dutch municipalities: a Configurable Declare model was constructed that supports the variations within these municipalities.
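To make the declarative style concrete, here is a hedged, minimal sketch (the constraint templates are standard Declare, but the event names and the `configure` helper are hypothetical, not Dennis's actual formalization) of two common Declare templates together with a configuration step that applies the two options named above: event hiding and constraint omission.

```python
# Hedged sketch of Declare-style constraints and a simple configuration step.
def response(a, b):
    # response(a, b): every occurrence of a must eventually be followed by a b
    def check(trace):
        pending = False
        for e in trace:
            if e == a:
                pending = True
            elif e == b:
                pending = False
        return not pending
    return check

def precedence(a, b):
    # precedence(a, b): b may only occur after some a has occurred
    def check(trace):
        seen_a = False
        for e in trace:
            if e == a:
                seen_a = True
            elif e == b and not seen_a:
                return False
        return True
    return check

def configure(constraints, trace, hidden=(), omitted=()):
    trace = [e for e in trace if e not in hidden]        # event hiding
    active = {n: c for n, c in constraints.items() if n not in omitted}
    return {n: c(trace) for n, c in active.items()}      # constraint omission

constraints = {
    "response(register, decide)": response("register", "decide"),
    "precedence(register, pay)": precedence("register", "pay"),
}
trace = ["pay", "register", "decide"]
print(configure(constraints, trace))
# the precedence constraint is violated: 'pay' occurs before 'register'
print(configure(constraints, trace, omitted={"precedence(register, pay)"}))
print(configure(constraints, trace, hidden={"pay"}))
```

Each configuration of hidden events and omitted constraints yields one variant of the process family, which is how a single Configurable Declare model can cover the variations across several municipalities.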

Type

AIS / Internal

Links
Staff involved
Erik Nooijen

Artifact-Centric Process Analysis, Process discovery in ERP systems

Description

In his thesis, Erik Nooijen developed an automated technique for discovering process models from enterprise resource planning (ERP) systems. In such systems, several different processes interact to maintain the resources of a business, and all information about these resources is stored in a very large relational database. The challenge in discovering processes from ERP data is to identify, from arbitrary tables, how many different processes exist in the system and to extract event data for each instance of each process. Erik Nooijen identified a number of data mining techniques that solve these challenges and integrated them in a software tool that automatically extracts, for a given process of the ERP system, an event log containing all events of that process. Classical process discovery techniques can then show the different process models of the system. Erik conducted his project within Sligro, where he is actively using his software to improve the company’s processes. The thesis resulted in a workshop paper presented at the International Conference on Business Process Management 2012 in Tallinn, Estonia.
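The extraction step can be illustrated with a hedged, minimal sketch (the rows, case identifiers, and activity names are hypothetical, and the actual tool additionally discovers case notions across many tables with data mining techniques): rows carrying a case identifier, activity, and timestamp are grouped per case and ordered by time to form an event log.

```python
# Hedged sketch: from relational rows to an event log of traces per case.
from collections import defaultdict

rows = [  # (case id, activity, timestamp) as they might sit in an ERP table
    ("PO-2", "create order",  "2012-03-01T10:02"),
    ("PO-1", "create order",  "2012-03-01T09:15"),
    ("PO-1", "approve order", "2012-03-01T11:40"),
    ("PO-2", "approve order", "2012-03-02T08:30"),
    ("PO-1", "receive goods", "2012-03-05T14:00"),
]

def extract_log(rows):
    grouped = defaultdict(list)
    for case, activity, ts in rows:
        grouped[case].append((ts, activity))
    # order each trace by timestamp; keep only the activity sequence
    return {case: [a for _, a in sorted(events)]
            for case, events in grouped.items()}

log = extract_log(rows)
print(log["PO-1"])  # ['create order', 'approve order', 'receive goods']
```

Once the data is in this trace-per-case form, classical process discovery algorithms can be applied directly to produce a process model.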

Type

AIS / External / Sligro

Links
Staff involved
Irina-Maria Ailenei

Process mining tools: A comparative analysis

Description

In her thesis, Irina-Maria Ailenei proposes an evaluation framework for assessing the strengths and weaknesses of process mining tools. She applied the framework in practice to evaluate four process mining systems: ARIS PPM, Flow, Futura Reflect, and ProcessAnalyzer. The framework is based on a collection of use cases, where a use case is a typical application of process mining functionality in a practical situation. The set of use cases was collected based on the functionality available in ProM and was validated through a series of semi-structured interviews with process mining users and a survey. The validated collection of use cases formed the basis of her tool evaluation. The project was conducted within Fluxicon and originated from a request by Siemens. The work also resulted in a paper presented at the BPI workshop in Clermont-Ferrand in 2011.

Type

AIS / External / Fluxicon

Links
Staff involved
Stefania Rusu

Discovery and analysis of field service engineer process using process mining

Description

In her thesis, Stefania Rusu applied process mining techniques to event logs from Philips Healthcare to gain insight into the workflow of Field Service Engineers. To this end, Stefania created hierarchical models that abstract from the low level at which events are stored to higher levels of activities, depending on the role of the analyst. Performance analysis based on process mining techniques helped to identify bottlenecks and thereby generated ideas for improving the work of Field Service Engineers within Philips Healthcare. Stefania proposed a new approach to annotate process maps with performance information extracted from event logs. To support the approach and apply it in practice, she implemented a new plug-in in the ProM framework.
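One abstraction step of such a hierarchy can be sketched as follows (the event names and the mapping are hypothetical illustrations, not Philips's actual data): low-level events are lifted to higher-level activities, and consecutive repeats of the same activity are merged so the analyst sees the process at the chosen granularity.

```python
# Hedged sketch: lifting low-level events to high-level activities.
from itertools import groupby

ABSTRACTION = {  # low-level event -> high-level activity
    "open case":      "Diagnose",
    "run self-test":  "Diagnose",
    "read error log": "Diagnose",
    "order part":     "Repair",
    "replace part":   "Repair",
    "close case":     "Wrap-up",
}

def abstract(trace, mapping):
    high = [mapping.get(e, e) for e in trace]  # lift each event
    return [a for a, _ in groupby(high)]       # merge consecutive repeats

low = ["open case", "run self-test", "order part", "replace part", "close case"]
print(abstract(low, ABSTRACTION))  # ['Diagnose', 'Repair', 'Wrap-up']
```

Different analyst roles can be served simply by supplying different mappings, yielding coarser or finer views of the same underlying log.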

Type

AIS / External / Philips Healthcare

Links
Staff involved

Erasmus Mundus Joint Master in the field of Big Data Management and Analytics

The department of Computer Science at Technische Universiteit Eindhoven (TU/e), in a consortium of five European universities, has been awarded EU co-funding to run an Erasmus Mundus Joint Master Degree (EMJMD) in the field of Big Data Management and Analytics (BDMA).

The programme curriculum is jointly delivered by Université libre de Bruxelles (ULB), Belgium, Universitat Politècnica de Catalunya (UPC), Spain, Technische Universität Berlin (TUB), Germany, Technische Universiteit Eindhoven (TU/e), the Netherlands, and Université François Rabelais Tours (UFRT), France. Academic partners around the world and partners from leading industries in BI and BD, private R&D companies, excellence centres, service companies, start-up incubators, public research institutes, and public authorities will contribute to the programme by giving lectures, training students, and providing software, course material, and internships or job placements. The EMJMD in “Big Data Management and Analytics” (BDMA) is designed to provide understanding, knowledge, and skills in the broad scope of fields underlying Business Intelligence (BI) and Big Data (BD). Its main objective is to train computer scientists who have an in-depth understanding of the stakes, challenges, and open issues of gathering and analysing large amounts of heterogeneous data for decision-making purposes. The programme will prepare graduates to answer the professional challenges of our data-driven society through a strong connection with industry, but also to pursue their studies in doctorate programmes through a strong connection with research and innovation.

BDMA is a 2-year (120 ECTS) programme. The first two semesters are devoted to fundamentals of BI and BD, delivered by ULB and UPC. Then, all students participate in the European Business Intelligence and Big Data Summer School (eBISS). In the third semester, students choose one of the three specialisations delivered by TUB, TU/e, and UFRT. The fourth semester is dedicated to the master's thesis and can be carried out as an internship in industry or in a research laboratory at any full or associated partner. Finally, all students attend the Final Event devoted to the master's thesis defences and the graduation ceremony. The tuition language is English. The programme targets students with a Bachelor of Science (or a level equivalent to 180 ECTS) with a major in Computer Science, as well as English proficiency corresponding to level B2 of CEFR. The programme will deliver a joint degree to graduates following the mobility ULB, UPC, and UFRT, and three degrees from ULB, UPC, and the university of the specialisation (UFRT, TUB, or TU/e) to graduates following the other mobilities.

More information can be found on the UFRT BDMA site, which will open soon.


If you have a master project item for this page, which may include possible master project assignments, on-going master projects, and completed master projects, please send it to Eric Verbeek.

Process mining of collaborative healthcare data - VitalHealth Software