Honors, Awards and Grants

  • 2018-2021
    H2020 BPR4GDPR

    The EU General Data Protection Regulation (GDPR) replaces the Data Protection Directive 95/46/EC and was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens' data privacy, and to reshape the way organizations across the region approach data privacy.

    The goal of BPR4GDPR (Business Process Re-engineering and functional toolkit for GDPR compliance) is to provide a holistic framework able to support end-to-end GDPR-compliant intra- and interorganisational ICT-enabled processes at various scales, while also being generic enough, fulfilling operational requirements covering diverse application domains.

    The BPR4GDPR project has received funding from the European Union’s Horizon 2020 innovation programme under grant agreement No. 787149 (Innovation Action) and is coordinated by CAS SOFTWARE AG.

  • 2016-2018
    University Affairs Committee Grant - Xerox
  • 2015
    NSERC Engage Grant
  • 2010-2016
    PhD scholarship - CNPq
    Scholarship for scientific and technological development, granted by the Brazilian National Council for Scientific and Technological Development (CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico).
  • 2005
    Eclipse Innovation Grant - IBM
  • 2005-2006
    Undergraduate scholarship - CNPq

Research Projects

  • Predicting the need of Nursing or Care Home and Home Care

    In this project, the aim is to predict, at an early stage, how many patients will need to go to a VVT institution after admission to the hospital. The goal is twofold: to predict the number of patients, so that the hospital can arrange places for them in advance, and to make predictions at the patient level, indicating which points of the process were important in determining the need for VVT.

    The St. Antonius hospital in Nieuwegein is an ambitious top clinical training hospital. Every year, more than 40,000 patients are admitted, of whom 6,000 go home through Care Mediation with home care or move to a nursing or care home. This outflow is very erratic, with seasonal influences. At the moment, patients who go to a nursing/care home sometimes spend too long in the hospital because no place is available yet at a VVT institution. By indicating the desired capacity at the various VVT institutions in advance, St. Antonius wants to shorten unnecessary hospital stays and thereby improve patient flow (inflow, throughput, outflow) in the hospital.

    Research questions to be answered:

    Can you predict the capacity needed in a nursing/care home?

    How early can this be done, given a certain confidence level?

    Which factors in the care path of a patient lead to the need for a nursing/care home?
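
    To make the first two questions concrete, here is a minimal sketch of how such a forecast could be set up. It is an illustration only, not the project's actual model: the file name, column names, and quantile levels are all hypothetical.

      # Sketch: forecast daily VVT demand, with an upper bound at a chosen
      # confidence level, using quantile regression. All names are hypothetical.
      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor

      df = pd.read_csv("admissions_daily.csv")  # hypothetical daily summary
      X = df[["n_admissions", "mean_age", "weekday", "month"]]  # seasonal signal
      y = df["n_vvt_discharges"]  # patients who later needed a VVT place

      # Median forecast for planning, plus a 90%-quantile upper bound so that
      # capacity can be reserved with a stated confidence level.
      median_model = GradientBoostingRegressor(loss="quantile", alpha=0.5).fit(X, y)
      upper_model = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)

      today = X.tail(1)
      print("expected VVT demand:", median_model.predict(today)[0])
      print("90% upper bound:", upper_model.predict(today)[0])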

  • Emergency and Operating Rooms

    In this project, the aim is to work with either Emergency Room or Operating Room data to, among other things, investigate the factors that influence process performance, identify significant differences between scenarios, and build models for the early identification of bottlenecks.

    Two of the most important parts of a hospital are the Emergency Room and the Operating Room. HagaZiekenhuis deals with tens of thousands of patients every year. For each patient, everything is recorded in the Electronic Patient Record (ChipSoft HiX). Exploring the data recorded in both areas of the hospital opens possibilities to understand the processes around them, to identify and classify efficient aspects, and to find new ways to make things even more efficient and valuable. The students involved in these projects will be able to implement the results achieved whenever possible. Furthermore, the students should be able to communicate and discuss the achieved results with medical specialists, management, and the Executive Board of the institution. In short, these projects are expected to make an impact! And perhaps they are the nicest graduation projects that you can find …

  • Happy Nurses

    In this project we work on quantifying the workload in daily clinical nursing care, to provide insights that help management make better decisions on the number of nursing staff per shift and to help the hospital understand patterns in how workload evolves over time.

    The biggest problem Dutch hospitals face today is the lack of nursing staff. As a result, hospital beds are closed to maintain safety levels for the patients already admitted. However, patients on the waiting list or those already planned for elective care experience longer waiting times, and sometimes emergency care beds have to be closed, causing arriving patients to be transferred to other hospitals. If this trend continues, we risk losing the high quality of care, and even emergency care, reducing the chance for a sick patient to recover quickly.

    Patient stays have shortened substantially in Dutch hospitals over the last decade. This is good for patients, hospitals, and society alike. However, with shorter stays, patient turnover increases, and so does care acuity per patient per unit of time in the hospital; as a consequence, the workload for nurses increases. A higher workload is associated with higher levels of burnout (sickness absence) and work dissatisfaction. Both can be precursors to nurses voluntarily leaving clinical work, by retraining as a specialised nurse or nurse practitioner, or even leaving the profession. These voluntary departures in turn increase the workload for the nurses who stay behind if vacancies are not quickly filled with experienced nurses.

    Scientific research shows an association between patient outcomes and nurse autonomy, sufficient numbers of nursing staff, a good relationship between nurses and doctors, and the presence of nurses with a bachelor degree. This emerges through the influence of nurses on the care process (surveillance, continuity of care, patient centredness, and effective communication at discharge for the continuity of care at home or in a different location). Large studies across different departments and hospitals show that facilitating these factors reduces mortality, improves patient satisfaction, leads to less burnout among nurses, and causes fewer safety issues for both patients and nurses. [Vahey, 2004]

    The hospital needs to close beds more and more frequently due to the lack of nursing staff to take care of patients. Until now this has not caused problems for hospital liquidity, but the number of personnel will decrease further in the coming years. With the growing share of elderly people in the general population, an ageing nursing workforce, and a low influx of young nurses, this will create an imbalance between demand and supply. In this project you will work on quantifying the workload in daily clinical nursing care, providing insights for management to guide better decision making on the number of nursing staff per shift and helping the hospital understand patterns in how workload evolves over time.
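
    As a rough illustration of what quantifying workload per shift could look like, the sketch below computes a simple proxy (care minutes per nurse, per ward and shift) from hypothetical ward records. The real project may use richer acuity measures; all file and column names are invented.

      # Sketch: a simple workload proxy per ward and shift. Hypothetical data.
      import pandas as pd

      care = pd.read_csv("care_events.csv", parse_dates=["timestamp"])

      def shift_of(ts):
          # Assign each care activity to a day, evening, or night shift.
          h = ts.hour
          if 7 <= h < 15:
              return "day"
          if 15 <= h < 23:
              return "evening"
          return "night"

      care["shift"] = care["timestamp"].map(shift_of)
      care["date"] = care["timestamp"].dt.date

      # Workload proxy: minutes of care per nurse on duty, per ward and shift.
      load = (care.groupby(["ward", "date", "shift"])
                  .agg(care_minutes=("duration_min", "sum"),
                       nurses=("nurse_id", "nunique")))
      load["minutes_per_nurse"] = load["care_minutes"] / load["nurses"]
      print(load.sort_values("minutes_per_nurse", ascending=False).head())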

  • BPR4GDPR: Business Process Re-engineering and functional toolkit for GDPR compliance

    The goal of BPR4GDPR is to provide a holistic framework able to support end-to-end GDPR-compliant intra- and interorganisational ICT-enabled processes at various scales, while also being generic enough, fulfilling operational requirements covering diverse application domains.

    The EU General Data Protection Regulation (GDPR) replaces the Data Protection Directive 95/46/EC and was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens' data privacy, and to reshape the way organizations across the region approach data privacy.

    The BPR4GDPR project has received funding from the European Union’s Horizon 2020 innovation programme under grant agreement No. 787149 (Innovation Action) and is coordinated by CAS SOFTWARE AG.

  • Object-Centric Behavioral Constraint (OCBC) Modeling Language

    Object-Centric Behavioral Constraint (OCBC) modeling language is a novel language that combines ideas from declarative, constraint-based languages like Declare, and from data/object modeling techniques (ER, UML, or ORM).

    Cardinality constraints are used as a unifying mechanism to tackle data and behavioral dependencies, as well as their interplay. In other words, the language extends data models with a behavioral perspective. Data models can easily deal with many-to-many and one-to-many relationships. This is exploited to create process models that can also capture complex interactions between different types of instances. Classical multiple-instance problems are circumvented by using the data model for event correlation. The declarative nature of the language makes it possible to model behavioral constraints over activities, analogous to cardinality constraints in data models. The resulting object-centric behavioral constraint model is able to describe processes involving interacting instances and complex data dependencies. This model can then be used for conformance checking, i.e., diagnosing deviations between observed behavior (an event log) and modeled behavior (an OCBC model) for compliance, auditing, or risk analysis. Unlike existing approaches, instances are not considered in isolation, and cardinality constraints in the data/object model are taken into account. Hence, problems that would previously have remained undetected can now be detected.
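
    A toy example may help convey the core idea. The sketch below checks a single cardinality constraint between two activities, correlating events through a (here trivial) data-model relation instead of a single case identifier. The constraint, activities, and events are invented and greatly simplify the actual OCBC formalism.

      # Sketch: one OCBC-style cardinality constraint, checked over events that
      # are correlated via an object reference rather than a case id.
      events = [
          {"activity": "create order", "order": "o1"},
          {"activity": "create order", "order": "o2"},
          {"activity": "deliver items", "order": "o1"},
          {"activity": "deliver items", "order": "o1"},
      ]

      def check_cardinality(events, src, tgt, lo, hi):
          # Each 'src' event must relate to between lo and hi 'tgt' events
          # for the same order object.
          violations = []
          for e in (e for e in events if e["activity"] == src):
              n = sum(1 for f in events
                      if f["activity"] == tgt and f["order"] == e["order"])
              if not (lo <= n <= hi):
                  violations.append((e["order"], n))
          return violations

      # Order o2 was never delivered: a deviation that a purely case-based
      # view could miss.
      print(check_cardinality(events, "create order", "deliver items", 1, 2))
      # -> [('o2', 0)]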

Publications

Detecting Privacy, Data and Control-Flow Deviations in Business Processes

Azadeh S. Mozafari Mehr; Renata M. de Carvalho; Boudewijn van Dongen
Conference Paper Intelligent Information Systems. CAiSE 2021. Lecture Notes in Business Information Processing, vol 424. Springer, Cham.

Abstract

Existing access control mechanisms are not sufficient for data protection. They are only preventive and cannot guarantee that data is accessed for the intended purpose. This paper proposes a novel approach for multi-perspective conformance checking which considers the control-flow, data and privacy perspectives of a business process simultaneously to find the context in which data is processed. In addition to detecting deviations in each perspective, the approach is able to detect hidden deviations where non-conformity relates to either a combination of two or all three aspects of a business process. The approach has been implemented in the open source ProM framework and was evaluated through controlled experiments using synthetic logs of a simulated real-life process.

Protecting Citizens’ Personal Data and Privacy: Joint Effort from GDPR EU Cluster Research Projects

Renata M. de Carvalho, Camillo Del Prete, Yod Samuel Martin, Rosa M. Araujo Rivero, Melek Önen, Francesco Paolo Schiavo, Ángel Cuevas Rumín, Haralambos Mouratidis, Juan C. Yelmo, Maria N. Koukovini
Journal Paper SN Computer Science 1, Article number: 217

Abstract

Confidence in information and communication technology services and systems is crucial for the digital society which we live in, but this confidence is not possible without privacy-enhancing tools and technologies, nor without risks management frameworks that guarantee privacy, data protection, and secure digital identities. This paper provides information on ongoing and recent developments in this area in the European Union (EU) space. We start by providing an overview of EU’s General Data Protection Regulation (GDPR) and proceed by identifying challenges concerning GDPR implementation, either technical or organizational. For this, we consider the work currently being done by a set of EU projects on the H2020 DS-08-2017 topic, namely BPR4GDPR, DEFeND, SMOOTH, PDP4E, PAPAYA and PoSeID-on, which address and aim at providing specific, operational solutions for the identified challenges. We briefly present these solutions and discuss the ways in which the projects cooperate and complement each other. Finally, we identify guidelines for further research.

Facilitating GDPR Compliance: The H2020 BPR4GDPR Approach

Georgios V. Lioudakis; Maria N. Koukovini; Eugenia I. Papagiannakopoulou; Nikolaos Dellas; Kostas Kalaboukas; Renata Medeiros de Carvalho; Marwan Hassani; Lorenzo Bracciale; Giuseppe Bianchi; Adrian Juan-Verdejo; Spiros Alexakis; Francesca Gaudino; Davide Cascone; Paolo Barracano
Conference Paper Digital Transformation for a Sustainable Society in the 21st Century. I3E 2019. IFIP Advances in Information and Communication Technology, vol 573. Springer, Cham.

Abstract

This paper outlines the approach followed by the H2020 BPR4GDPR project to facilitate GDPR compliance. Its goal is to provide a holistic framework able to support end-to-end GDPR-compliant intra- and inter-organisational ICT-enabled processes at various scales, while also being generic enough, fulfilling operational requirements covering diverse application domains. To this end, solutions proposed by BPR4GDPR cover the full process lifecycle addressing major challenges and priorities posed by the Regulation.

A Model-based Framework to Automatically Generate Semi-real Data for Evaluating Data Analysis Techniques

Li, G.; de Carvalho, R.M.; van der Aalst, W.M.P.
Conference Paper 21st International Conference on Enterprise Information Systems (ICEIS), Crete, Greece, 2019, pp. 213-220.

Abstract

As data analysis techniques progress, the focus shifts from simple tabular data to more complex data at the level of business objects. Therefore, the evaluation of such data analysis techniques is far from trivial. However, due to confidentiality, most researchers face problems collecting real data to evaluate their techniques. One alternative is to use synthetic data instead of real data, which leads to unconvincing results. In this paper, we propose a framework to automatically operate information systems (supporting operational processes) to generate semi-real data (i.e., “operations related data” exclusive of images, sound, video, etc.). These data have the same structure as the real data and are more realistic than traditionally simulated data. A plugin is implemented to realize the framework for automatic data generation.

An Approach for Workflow Improvement based on Outcome and Time Remaining Prediction

Galdo Seara, L.; de Carvalho, R.M.
Conference Paper 7th International Conference on Model-Driven Engineering and Software Development (MODELSWARD), Prague, Czech Republic, 2019, pp. 475-482.

Abstract

Some business processes are critical to organizations. The efficiency with which the involved tasks are performed defines the quality of the organization. Detecting where bottlenecks occur during the process and predicting when to dedicate more resources to a specific case can help distribute the workload in a better way. In this paper we propose an approach to analyze a business process and to predict both the outcome of new cases and the time until their completion. The approach is based on a transition system. Two models are developed for each state of the transition system: one to predict the outcome and another to predict the time remaining until completion. We experimented with a real-life dataset from a financial department to demonstrate our approach.

Object-centric behavioral constraint models: a hybrid model for behavioral and data perspectives

Li, G.; de Carvalho, R.M.; van der Aalst, W.M.P.
Conference Paper 34th ACM/SIGAPP Symposium on Applied Computing, Limassol, Cyprus, 2019, pp. 48-56.

Abstract

In order to maintain a competitive edge, enterprises are driven to improve efficiency by modeling their business processes. Existing process modeling languages often only describe the lifecycles of individual process instances in isolation. Although process models (e.g., BPMN and Data-aware Petri nets) may include data elements, explicit connections to real data models (e.g., a UML class model) are rarely made. Therefore, the Object-Centric Behavioral Constraint (OCBC) modeling language was proposed to describe the behavioral and data perspectives, and the interplay between them in one single, hybrid diagram. In this paper, we describe OCBC models and introduce the extended interactions between the data and behavioral perspectives on the attribute level. We implement the approach in a plugin and evaluate it by a comparison with other models.

Process Mining in Social Media: Applying Object-Centric Behavioral Constraint Models

Li, G.; de Carvalho, R.M.
Journal Paper IEEE Access, vol. 7, pp. 84360-84373, 2019.

Abstract

The pervasive use of social media (e.g., Facebook, Stack Exchange, and Wikipedia) is providing unprecedented amounts of social data. Data mining techniques have been widely used to extract knowledge from such data, e.g., community detection and sentiment analysis. However, there is still much space to explore in terms of the event data (i.e., events with timestamps), such as posting a question, commenting on a tweet, and editing a Wikipedia article. These events reflect users' behavior patterns and operational processes in the media sites. Classical process mining techniques support discovering insights from event data generated by structured business processes. However, they fail to deal with social media data, which come from more flexible “media” processes and contain one-to-many and many-to-many relations. This paper employs a novel type of process mining techniques (based on object-centric behavioral constraint models) to derive insights from the event data in social media. Based on real-life data, process models are mined to describe users' behavior patterns. Conformance and performance are analyzed to detect the deviations and bottlenecks in the question-and-answer process on the Stack Exchange website.

Configurable Event Correlation for Process Discovery from Object-Centric Event Data

Li, G.; de Carvalho, R.M.; van der Aalst, W.M.P.
Conference Paper 2018 IEEE International Conference on Web Services (ICWS), San Francisco, USA, 2018.

Abstract

Many modern enterprises are employing service-oriented systems to execute their transactions. These systems generate an abundance of events, which can be analyzed to diagnose and improve business processes. However, the events are distributed over different data sources, as service-oriented systems are often implemented as Web services involving different departments in an organization. They need to be consolidated in a process view, since it makes no sense to analyze individual events. Existing approaches depend on an explicit case notion to correlate events. They work well on data from process-aware systems (e.g., BPM/WFM systems), which record events explicitly and attach each event to a case id (a global identifier) indicating its related process instance. However, most service-oriented systems (e.g., ERP and CRM) produce object-centric data (e.g., database tables), which record events implicitly (e.g., through redo logs) and separately, without a common case id. In this paper, we propose a novel approach to correlate events by the data perspective (e.g., by a so-called "object path"). Besides, we define the concepts of correlation patterns and evaluation metrics, which enable users to evaluate and select the best way to correlate events based on their needs.
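
To give a flavour of correlating events by the data perspective, consider the toy sketch below: events are recorded on different objects without a shared case id and are correlated by following an object reference. The tables, keys, and chosen path are invented; the paper's approach is more general, with correlation patterns and evaluation metrics.

    # Sketch: correlate order and payment events into process instances by
    # following an object reference ("object path"), not an explicit case id.
    order_events = [
        {"activity": "create order", "order_id": "o1", "customer_id": "c1"},
        {"activity": "create order", "order_id": "o2", "customer_id": "c2"},
    ]
    payment_events = [
        {"activity": "pay", "payment_id": "p1", "order_id": "o1"},
        {"activity": "pay", "payment_id": "p2", "order_id": "o1"},
    ]

    # Correlate on the path payment -> order: all events referencing the same
    # order end up in the same process instance ("case").
    cases = {}
    for e in order_events + payment_events:
        cases.setdefault(e["order_id"], []).append(e["activity"])
    print(cases)  # {'o1': ['create order', 'pay', 'pay'], 'o2': ['create order']}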

Dealing with artifact-centric systems: a process mining approach

Li, G.; de Carvalho, R.M.
Conference Paper 9th International Workshop on Enterprise Modeling and Information Systems Architectures (EMISA 2018), Rostock, Germany, 2018, pp.80-84.

Abstract

Process mining provides a series of techniques to analyze business processes based on execution data in enterprises. It has been successfully applied to classical processes on WFM/BPM systems, in which one process execution consists of events attached with the same case id. However, existing process mining techniques suffer from problems when dealing with artifact-centric systems, such as ERP and CRM, in which a business process involves a set of interacting artifacts and a case notion for the whole process is missing. Some typical problems are convergence and divergence in XES logs, and lost interactions between multiple instances in process models. Existing artifact-centric approaches try to address these problems, but have not yet solved them satisfactorily. For instance, one has to pick an instance notion in each artifact, the description of the end-to-end behavior is distributed over multiple diagrams, and the interactions between the data perspective and the behavioral perspective are not explicitly presented. This paper proposes a set of new techniques, such as a novel log format and a novel modeling language, to enable process mining for artifact-centric systems.

Extracting object-centric event logs to support process mining on databases

Li, G.; López de Murillas, E.G.; de Carvalho, R.M.; van der Aalst, W.M.P.
Conference Paper Information Systems in the Big Data Era (CAiSE 2018), Tallinn, Estonia, 2018.

Abstract

Process mining helps organizations to investigate how their operational processes are executed and how these can be improved. Process mining requires event logs extracted from information systems supporting these processes. The eXtensible Event Stream (XES) format is the current standard which requires a case notion to correlate events. However, it has problems to deal with object-centric data (e.g., database tables) due to the existence of one-to-many and many-to-many relations. In this paper, we propose an approach to extract, transform and store object-centric data, resulting in eXtensible Object-Centric (XOC) event logs. The XOC format does not require a case notion to avoid flattening multi-dimensional data. Besides, based on so-called object models which represent the states of a database, a XOC log can reveal the evolution of the database along with corresponding events. Dealing with object-centric data enables new process mining techniques that are able to capture the real processes much better.

Log skeletons: a classification approach to process discovery

Verbeek, H.M.W.; de Carvalho, R.M.
Report arXiv.org

Abstract

To test the effectiveness of process discovery algorithms, a Process Discovery Contest (PDC) has been set up. This PDC uses a classification approach to measure this effectiveness: the better the discovered model can classify whether or not a new trace conforms to the event log, the better the discovery algorithm is supposed to be. Unfortunately, even the state-of-the-art fully-automated discovery algorithms score poorly on this classification. Even the best of these algorithms, the Inductive Miner, scored only 147 correctly classified traces out of 200 on the PDC of 2017. This paper introduces the rule-based log skeleton model, which is closely related to the Declare constraint model, together with a way to classify traces using this model. Classification using log skeletons is shown to score better on the PDC of 2017 than state-of-the-art discovery algorithms: 194 out of 200. As a result, one can argue that the fully-automated algorithm to construct (or: discover) a log skeleton from an event log outperforms existing state-of-the-art fully-automated discovery algorithms.
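
To give a flavour of the classification idea (not the paper's actual algorithm: real log skeletons combine several relations, such as equivalence, always-before/after, never-together, and activity counters), the toy sketch below mines one always-before-style relation from a tiny invented log and uses it to classify new traces.

    # Sketch: mine "x always happens before y" relations from a log, then
    # classify a trace as fitting iff it satisfies all mined relations.
    from itertools import product

    log = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "b", "c"]]
    acts = {a for t in log for a in t}

    def always_before(log, x, y):
        # In every trace containing y, some x occurs before the first y.
        return all(x in t[: t.index(y)] for t in log if y in t)

    rules = {(x, y) for x, y in product(acts, acts)
             if x != y and always_before(log, x, y)}

    def classify(trace):
        return all(always_before([trace], x, y) for (x, y) in rules if y in trace)

    print(classify(["a", "c"]))       # True: fits the mined relations
    print(classify(["b", "a", "c"]))  # False: "b" occurs before any "a"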

Automatic Discovery of Object-Centric Behavioral Constraint Models

Li, G.; de Carvalho, R.M.; van der Aalst, W.M.P.
Conference Paper 20th International Conference on Business Information Systems (BIS 2017), Poznan, Poland, 2017, pp. 43-58.

Abstract

Process discovery techniques have successfully been applied in a range of domains to automatically discover process models from event data. Unfortunately, existing discovery techniques only discover a behavioral perspective of processes, where the data perspective is often treated as a second-class citizen. Besides, these discovery techniques fail to deal with object-centric data with many-to-many relationships. Therefore, in this paper, we propose a novel modeling language which combines data models with declarative models; the resulting object-centric behavioral constraint model is able to describe processes involving interacting instances and complex data dependencies. Moreover, we propose an algorithm to discover such models.

On the Analysis of CMMN Expressiveness: Revisiting Workflow Patterns

de Carvalho, R.M.; Mili, H.; Boubaker, A.; Gonzalez-Huerta, J.; Ringuette, S.
Conference Paper Enterprise Distributed Object Computing Workshop (EDOCW), 2016 IEEE 20th International, Vienna, Austria, 2016.

Abstract

Traditional business process modeling languages use an imperative style to specify all possible execution flows, leaving little flexibility to process operators. Such languages are appropriate for low-complexity, high-volume, mostly automated processes. However, they are inadequate for case management, which involves low-volume, high-complexity, knowledge-intensive work processes of today’s knowledge workers. OMG’s Case Management Model and Notation (CMMN), which uses a declarative style to specify constraints placed at a process execution, aims at addressing this need. To the extent that typical case management situations do include at least some measure of imperative control, it is legitimate to ask whether an analyst working exclusively in CMMN can comfortably model the range of behaviors s/he is likely to encounter. This paper aims at answering this question by trying to express the extensive collection of Workflow Patterns in CMMN. Unsurprisingly, our study shows that the workflow patterns fall into three categories: 1) the ones that are handled by CMMN basic constructs, 2) those that rely on CMMN’s engine capabilities and 3) the ones that cannot be handled by current CMMN specification. A CMMN tool builder can propose patterns of the second category as companion modeling idioms, which can be translated behind the scenes into standard CMMN. The third category is problematic, however, since its support in CMMN tools will break model interoperability.

On the Analysis of CMMN Expressiveness: Revisiting Workflow Patterns

de Carvalho, R.M.; Boubaker, A.; Gonzalez-Huerta, J.; Mili, H.; Ringuette, S.; Charif, Y.
Report Laboratoire de recherche sur les technologies du commerce électronique, Département d’informatique, Université du Québec à Montréal

Abstract

Traditional business process modeling languages use an imperative style to specify all possible execution flows, leaving little flexibility to process operators. Such languages are appropriate for low-complexity, high-volume, mostly automated processes. However, they are inadequate for case management, which involves low-volume, high-complexity, knowledge-intensive work processes of today’s knowledge workers. OMG’s Case Management Model and Notation (CMMN), which uses a declarative style to specify constraints placed at a process execution, aims at addressing this need. To the extent that typical case management situations do include at least some measure of imperative control, it is legitimate to ask whether an analyst working exclusively in CMMN can comfortably model the range of behaviors s/he is likely to encounter. This paper aims at answering this question by trying to express the extensive collection of Workflow Patterns in CMMN. Unsurprisingly, our study shows that the workflow patterns fall into three categories: i) the ones that are handled by CMMN basic constructs, ii) those that rely on CMMN’s engine capabilities, and iii) the ones that cannot be handled by the current CMMN specification. A CMMN tool builder can propose patterns of the second category as companion modeling idioms, which can be translated behind the scenes into standard CMMN. The third category is problematic, however, since its support in CMMN tools will break model interoperability.

Comparing ConDec to CMMN: towards a common language for flexible processes

de Carvalho, R.M.; Mili, H.; Gonzalez-Huerta, J.; Boubaker, A.; Leshob,A.
Conference Paper 5th International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2016), Rome, Italy, 2016.

Abstract

Flexible processes emerged to provide flexibility to business process execution. A flexible process is not static and can have several different executions, influenced by the current situation. In this context, decision-making is placed in the hands of the knowledge worker during execution, who decides which tasks will be executed and in which order. Two approaches to flexible processes are discussed in this paper: case management and declarative processes. In particular, we use the CMMN standard and the ConDec language for the two approaches, respectively. We compare them based on scope, model representation, formal semantics, and limitations. Our goal is to present commonalities and differences between the languages in order to identify potential extensions that would make them better suited to more flexible process examples.

REFlex: an entire solution to business process modeling

de Carvalho, R.M.; Silva, N.C.
Conference Paper Workshop on Methodologies for Robustness Injection into Business Processes (MRI-BP'15), Montréal, Canada, 2015.

Abstract

Some approaches have emerged to provide flexibility to business process (BP) execution. Among them, we can highlight the declarative one, whose purpose is to model what must be done without describing how it must be executed. Previously, i) we extended the definition of the business process to approach web service orchestration, besides the definition of activities and rules; and ii) we formalized the REFlex rule engine, allowing us to discuss the verification of the engine core properties and the liveness-enforcing mechanism properties. In this paper, we aim at discussing and presenting REFlex as an entire solution to BPM. We show that, using REFlex, a user has support to: i) model a business process; ii) trust a formalized semantics that does not allow the user to be guided to an unacceptable situation; and iii) link (some) activities to web services, (semi-)automating the process execution.

REFlex: An Efficient Graph-Based Rule Engine to Execute Declarative Processes

de Carvalho, R.M.; Silva, N.C.; Lima, R.M.F; Cornélio, M.L.
Conference Paper Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on, pp. 1379-1384, Manchester, England, 2013.

Abstract

Declarative Business Processes offer more flexibility to business processes by the use of business rules. Such business rules describe what must or must not be done during the process execution, but do not prescribe how. To fully experience the benefits of this modeling approach, companies need a rule engine capable of checking the rules and guiding the user through the execution of the process. The rule engines available today present several limitations that impair their use for this application. In particular, the well-known approach that employs Linear Temporal Logic (LTL) has the drawback of state space explosion as the process model grows. This paper proposes a novel graph-based rule engine that does not share the problems presented by other engines, being better suited to model declarative business processes than the techniques currently in use.

REFlex: An Efficient Web Service Orchestrator for Declarative Business Processes

Silva, N.C.; de Carvalho, R.M.; Oliveira, C.A.L.; Lima, R.M.F.
Conference Paper In Service-Oriented Computing, volume 8274 of Lecture Notes in Computer Science, pp. 222-236, Springer-Verlag Berlin Heidelberg, Berlin, Germany, 2013.

Abstract

Declarative business process modeling is a flexible approach to business process management in which participants can decide the order in which activities are performed. Business rules are employed to determine restrictions and obligations that must be satisfied during execution time. In this way, complex control-flows are simplified and participants have more flexibility to handle unpredicted situations. Current implementations of declarative business process engines focus only on manual activities. Automatic communication with external applications to exchange data and reuse functionality is barely supported. Such automation opportunities could be better exploited by a declarative engine that integrates with existing SOA technologies. In this paper, we introduce an engine that fills this gap. REFlex is an efficient, data-aware declarative web services orchestrator. It enables participants to call external web services to perform automated tasks. Different from related work, the REFlex algorithm does not depend on the generation of all reachable states, which makes it well suited to model large and complex business processes. Moreover, REFlex is capable of modeling data-dependent business rules, which provides unprecedented context awareness and modeling power to the declarative paradigm.

A solution to the state space explosion problem in declarative business process modeling

de Carvalho, R.M.; Silva, N.C.; Oliveira, C.A.L.; Lima, R.M.
Conference Paper 25th International Conference on Software Engineering and Knowledge Engineering (SEKE), Boston, USA, 2013.

Abstract

Declarative business process models focus on modeling what must be done but do not determine how. The existing engine for controlling the execution of declarative processes uses automata-based model checking. Unfortunately, the well-known state space explosion problem limits the ability to explore large processes through automata-based approaches. In this work, we propose a novel mechanism to control the execution of declarative business processes. Our approach has the advantage of not requiring the computation of all reachable states. This allows for the modeling and execution of larger business processes when compared to the automata-based approach.

Integrating Declarative Processes and SOA: A declarative web service orchestrator

Silva, N.C; de Carvalho, R.M.; Oliveira, C.A.L.; Lima, R.M.
Conference Paper 2013 International Conference on Semantic Web and Web Services, Las Vegas, USA, 2013.

Abstract

Service-Oriented Architecture (SOA) is a computing model that aims at building new software by assembling independent and loosely coupled services. Traditional web service orchestration is a mechanism for combining and coordinating different web services based on a predefined pattern. However, the orchestration requirements may evolve due to business needs. In the business context, the declarative approach has emerged to provide flexibility by modeling, through business rules, what must be done but not how it must be executed. When working with such a model, the results produced depend on the users’ preferences. It is therefore fundamental that orchestration mechanisms provide simple yet efficient ways to compose services dynamically. This paper proposes a web service orchestrator for declarative processes that performs service composition at runtime. The resulting business process obeys all the business rules. The composition is done as the user chooses the service to run, providing an application-aligned infrastructure that can be scaled based on the needs of each business process, since it is described using a declarative strategy.

An efficient algorithm for static task scheduling in parallel applications

de Carvalho, R.M.; Lima, R.M.F.; de Oliveira, A.L.I.
Conference Paper Systems, Man, and Cybernetics (SMC), 2011 IEEE International Conference on, pp. 2313-2318, Anchorage, Alaska, 2011.

Abstract

Scheduling is an important tool for optimizing the performance of parallel systems. It aims at reducing the completion time of parallel applications by properly allocating the tasks to the processors. This work proposes a novel scheduling algorithm to parallelize tasks with dependence restrictions. The communication costs between processors and the computer architecture are parameters of the proposed algorithm, which explores the trade-off between process execution time and inter-process communication costs to optimize the system’s overall performance. The paper conducts an experiment to compare the performance of the proposed algorithm against six other scheduling algorithms. The experiment considered several execution scenarios. Although our algorithm does not present the best performance in any of the execution scenarios, it produces the best average execution time for the scenarios studied.

Identifying parallel jobs for multiphysics simulators scheduling

de Carvalho, R.M.; Lima, R.M.F.; de Oliveira, A.L.I; Santos, F.C.G.
Conference Paper Systems, Man, and Cybernetics (SMC), 2010 IEEE International Conference on, pp.923-930, Istanbul, Turkey, 2010.

Abstract

Simulations of real problems involving physical phenomena can demand a great deal of execution time. To improve the performance of these simulations, it is necessary to have an approach to parallelize the processes that compose the simulation. MPhyScaS (Multi-Physics and Multi-Scale Solver Environment) is an environment dedicated to the automatic development of simulators. Each MPhyScaS simulation demands a great amount of time. To parallelize MPhyScaS simulations, the approach used should define a hierarchical parallel structure. The aim of the work presented herein is to identify parallel jobs and dependent ones. The presented model is based on Coloured Petri Nets (CPN). This information serves as input to a scheduling algorithm.

Scheduling parallel jobs for multiphysics simulators

de Carvalho, R.M.; Lima, R.M.F.; de Oliveira, A.L.I.; Santos, F.C.G.
Conference Paper Evolutionary Computation (CEC), 2010 IEEE Congress on, pp. 1-8, Barcelona, Spain, 2010.

Abstract

Simulations of real problems involving physical phenomena can demand a great deal of execution time. To improve the performance of these simulations, it is necessary to have an approach to parallelize the processes that compose the simulation. MPhyScaS (Multi-Physics and Multi-Scale Solver Environment) is an environment dedicated to the automatic development of simulators. Each MPhyScaS simulation demands a great amount of time. To parallelize MPhyScaS simulations, the approach used should define a hierarchical parallel structure. The aim of the work presented herein is to improve the performance of clusters in the processing of MPhyScaS simulations, which are composed of a set of dependent tasks, by scheduling them. The presented model is based on Genetic Algorithms (GA) to schedule the parallel tasks following MPhyScaS architecture dependence restrictions. The communication between processors must also be considered in this scheduling. Therefore, a trade-off must be found between the execution of processes and the time necessary for these processes to communicate with each other.

Dynamic Interface for Multiphysics Simulators

Oliveira, C.; Rocha, F.; Medeiros, R.; Lima, R.; Soares, S.; Santos, F.; Santos, I.
Journal Paper International Journal of Modeling and Simulation for the Petroleum Industry, vol. 2, no. 1, 2008.

Abstract

Simulation is a well known technique to study complex systems. However, the implementation of a simulator may be more complex than the simulation itself. For instance, graphical user interface (GUI) development might consume around 50% of the software development time. Therefore, strategies and techniques to reduce costs in the development of GUI are mandatory in modern software engineering. In this paper we present a framework called GUI Generation Tool that dynamically constructs user interfaces based on specifications defined in XML files. This framework was defined to support automatic generation of simulators for multi-physics phenomena using software reuse techniques and software product lines concepts.

Applying XP to an Agile Inexperienced Software Development Team

Silva, L.; Santana, C.; Rocha, F.; Paschoalino, M.; Falconieri, G.; Ribeiro, L.; Medeiros, R.; Soares, S.; Gusmão, C.
Conference Paper Agile Processes in Software Engineering and Extreme Programming, volume 9 of Lecture Notes in Business Information Processing, pp. 114-126, Springer-Verlag Heidelberg, Limerick, Ireland, 2008.

Abstract

Agile methods are becoming a more and more frequently used alternative among software development organizations producing high-quality products in real-world projects. Despite this growth in industry, few academic institutions provide courses related to this new software development approach. This paper describes an initiative of introducing agile method concepts through a Master’s Degree course where the students had not experienced XP before. In spite of being MSc students, they had previous software development background in an industry environment. In this work we present how the issues found along the way may be, and have been, handled, as well as the benefits found; how the XP practices have been adapted and applied in a project with time, personnel, and skill constraints; and what hindered some principles from being fully effective. We also present real results and open problems for further studies from this experience. The study used a real-life application taken from a need of a real software development company.

Current Teaching

  • 2019-Present

    Data Analytics for Engineers

    Level: Bachelor level

    Role: Lecturer

    In this course, students gain insight into basic techniques for processing large amounts of data in an efficient, reliable, and consistent way. They develop skills in understanding, interpreting, and documenting data and information in the context of realistic scenarios. Students also learn to estimate the consequences of choices made for the other phases of data processing. The interpretation of results is considered in every phase of the analysis.

  • 2019-Present

    DBL Data Challenge

    Level: Bachelor level

    Role: Responsible Lecturer

    In this first Data Challenge, students get the opportunity to apply the methods and techniques acquired during the first year of the program to a large, complex dataset. The students will be given a large, structured dataset, several specific analysis questions about this dataset, and a proposed analysis approach for each question (i.e., particular analysis techniques to apply). The task for the students is to technically realize these analyses by identifying and familiarizing themselves with the right software tools, implementing the analysis in a repeatable form, and reflecting on the validity of their results and the suitability of their analysis approach. An important element in this course is the actual handling of large data stored in various formats (files, relational databases, object databases, etc.), the pre-processing of data to make it usable for the analysis, and the storage of analysis results in a suitable data format.

Teaching History

  • 2019

    Seminar Analytics for Information System

    Level: Master level

    Role: Lecturer

    This seminar combines teaching research methods (in preparation for a Master project) with exposing students to recent and ongoing research in the area of event data analysis and process analysis. We study recent research articles, book chapters, and Master theses on topics along the entire analysis life-cycle. Through presentations and group discussions, we work out how to approach, execute, evaluate, and discuss research questions. The aim of the seminar is to prepare students for their graduation project.

  • 2017-2019

    Data Analytics for Engineers

    Level: Bachelor level

    Role: Senior Supervisor

    In this course, students gain insight into basic techniques for processing large amounts of data in an efficient, reliable, and consistent way. They develop skills in understanding, interpreting, and documenting data and information in the context of realistic scenarios. Students also learn to estimate the consequences of choices made for the other phases of data processing. The interpretation of results is considered in every phase of the analysis.

  • 2018-2019

    Data Entrepreneurship in Action 2

    Program: Master in Data Science and Entrepreneurship

    Role: Responsible Lecturer

    This course is the second in the Data Entrepreneurship in Action series. In Data Entrepreneurship in Action II (DEiAII), students focus on detecting important business problems, translating them into research questions and fashioning data science solutions for them. The course enables students to find solutions to business problems in a scientific way, which will be helpful to them while launching start-ups or working in the industry.

  • 2017-2018

    Data Entrepreneurship in Action 3

    Program: Master in Data Science and Entrepreneurship

    Role: Responsible Lecturer

    Data Entrepreneurship in Action 3 is the last course in the series and familiarizes students with aspects of data-driven business intelligence. The focus is on learning how to understand processes from raw event data. Besides exposing the process as it happens in reality, understanding it allows a direct comparison to what is expected, the identification of problems, and/or the enrichment of the process model based on new insights.

  • 2015-2016

    Projet d'analyse et de modélisation (French)

    Level: Bachelor in "Informatique"

    Role: Assistant

    Students integrate the theoretical knowledge acquired in analysis and modeling by carrying out a substantial project in groups, and gain practical experience in applying a formal method used in industry. Topics include the planning, realization, and formal documentation of an information system project; step-by-step learning and use of a development methodology employed in industry for the analysis and design of systems; and practice of common software engineering working methods: presentations, structured reviews, etc.

  • 2015-2016

    Exigences et spécifications de systèmes logiciels (French)

    Program: Master in "Génie Logiciel"

    Role: Assistant

    Introduction to systems engineering; software requirements process models; stakeholders in the software requirements process; support and management of the software requirements process; quality and improvement of the software requirements process.

  • 2014

    Introduction to Computer Science (Portuguese)

    Level: Bachelor in Computer Science

    Role: Responsible Lecturer

    Computer Science history, basic computer architectures, data representation, data compression.

  • 2014

    Data Structures 2 (Portuguese)

    Level: Bachelor in Computer Science

    Role: Responsible Lecturer

    AVL Tree, Trie Tree, B Tree and variations, hashing, sorting algorithms.

  • 2014

    Object-Oriented Programming (Portuguese)

    Level: Bachelor in Computer Science

    Role: Responsible Lecturer

    Concepts of object-oriented programming, using Java language for practical examples and exercises.

  • 2014

    Paradigms of Programming Languages (Portuguese)

    Level: Bachelor in Computer Science

    Role: Responsible Lecturer

    Concepts and comparison between different paradigms of programming languages (imperative programming, object-oriented programming, functional programming, and logic programming).

Current Supervision

  • 2019-Present

    Azadeh Mozafari Mehr (PhD)

    Project: Compliance to data protection and purpose control using process mining techniques (within the scope of BPR4GDPR)

  • 2021-Present

    Stef van der Sommen (MSc)

    Project: Predicting the need of Nursing or Care Home and Home Care

Supervision History

At My Office

My office is located at TU/e Metaforum - MF 7.067. You will occasionally find me there.

If you need help finding Metaforum, please consult the TU/e map. It is building number 5 (column C, row 4).