The IPA Fall Days are an annual multi-day event dedicated to a specific theme of current interest to the research community of IPA. This year’s Fall Days are dedicated to Models in Software Engineering, fitting within the Real World Algorithmics and Models, Cyber Physical Systems, Software Analysis, and Model-Driven Software Engineering focus areas of the research school for the period 2013-2018.
The programme of the Fall Days is composed with the help of Arend Rensink (UT), Ulyana Tikhonova (CWI), and the IPA PhD council.
The Fall Days start with registration and lunch on Monday 29 October, and conclude with lunch on Friday 2 November. Monday afternoon is reserved for a special session around a keynote talk by Davy Landman, winner of the IPA Dissertation Award of 2017. The remaining days are devoted to various themes and topics in Models in Software Engineering, including model learning and mining; model analytics; model transformations; model representations and DSL design; model checking; and applications of models in academia and industry.
Monday October 29th
|13:35-14:30||Anders Møller (Aarhus University, Denmark)
|14:50-15:50||Serge Demeyer (University of Antwerp, Belgium)
Agile Software Engineering: Opportunities for Industry 4.0
Industry 4.0 is the current trend of automation and data exchange in manufacturing technologies. This trend forces the manufacturing industry to switch to a more agile way of working, and software engineering teams can and should take a leading role in that switch. This talk will explore the state of the art in agile software development and the opportunities it may present for Industry 4.0. In particular, it will address questions like: Will our test suite detect critical defects early? Where should we fix a defect? How long will it take to fix defects? Which team members get frustrated? Can we use bots to process easy issues?
|15:50-16:20||Paul Klint (CWI)
Rascal, from Research Idea to Startup Company
Reverse Engineering Source Code: Empirical Studies of Limitations and Opportunities
I will reflect on my thesis, how I got there, and which lessons I would love to share. I will summarize the three main research subjects and their most important results. Afterwards, I will talk about how we used this work to start a company, and what we have been doing for the past two years.
|20:00||First Social Event
Board and Card Games Night
It has long been a tradition for participants to bring board and card games to the IPA Fall Days, and groups of participants can often be seen playing such games into the late evenings. This is of course by no means meant to be limited to Monday evening, but putting it on the schedule serves as a reminder to please bring your favourite, new, or other board and/or card games with you!
Tuesday October 30th
|9:00-9:45||Maurice ter Beek (CNR, Pisa, Italy)
An Introduction to Software Product Lines and Modelling and Analysing Variability
Software Product Line Engineering (SPLE) is a software engineering methodology aimed at developing, in a cost-effective and time-efficient manner, a family of software-intensive, highly configurable systems by systematic reuse. Individual products (or variants) share an overall reference variability model of the family, but differ with respect to specific features, which are user-visible increments in functionality. The explicit introduction and management of feature-based variability in the software development cycle complicates the modelling and analysis of SPLs, because the number of possible individual products typically grows exponentially with the number of features. An important example concerns behavioural validation, i.e., guaranteeing that each product of the SPL satisfies a series of behavioural requirements. Variants of this problem include computing the set of products that do not satisfy the requirements, together with a justification. A common approach to assessing these requirements consists of building the system and testing it. However, if the system fails to meet the requirements, costly iterations are needed to improve it. Recent research has therefore focussed on lifting successful formal specification languages and formal verification techniques from single-system engineering to product line engineering, or highly configurable systems in general. In this lecture, after a general introduction to SPLE, we present a number of modelling and analysis frameworks that were introduced for the specification and verification of variability in behavioural models of SPLs.
These range from variants of Modal Transition Systems (MTSs) and Featured Transition Systems (FTSs), with their dedicated family-based model checkers VMC and SNIP/ProVeLines, to QFLan, a quantitative modelling and verification environment offering a probabilistic feature-oriented language and statistical model checking. Much of what I present is based on joint work with colleagues from the Formal Methods and Tools group in Pisa and with Axel Legay, Alberto Lluch Lafuente, and Andrea Vandin. In an accompanying lecture, Erik de Vink will focus on a feature mu-calculus over FTSs that allows family-based model checking with mCRL2.
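The exponential blow-up in the number of products can be made concrete with a toy sketch (the feature model and constraint below are assumed for illustration, not taken from the lecture):

```python
from itertools import product

# Hypothetical feature model: four optional features, so 2**4 candidate
# configurations before constraints.
features = ["encryption", "logging", "compression", "caching"]

# One cross-tree constraint, assumed for illustration:
# 'compression' requires 'caching'.
def valid(config):
    return not (config["compression"] and not config["caching"])

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))]
products = [c for c in configs if valid(c)]

print(len(configs))   # 16 candidate configurations
print(len(products))  # 12 valid products survive the constraint
```

Each extra optional feature doubles the candidate space, which is why analysing every product separately quickly becomes infeasible.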
|9:45-10:30||Erik de Vink (TU/e)
Model Checking of Variability with mCRL2
(joint work with Maurice ter Beek and Tim Willemse)
Family-based model checking targets the simultaneous verification of systems with feature-based variability. We present an approach based on the feature mu-calculus muLf, which combines modalities with feature expressions. This logic is interpreted over feature transition systems, which allows one to reason over the collective behavior of a number of variants (a family of products). Via an embedding into the modal mu-calculus with data, underpinned by the general-purpose mCRL2 toolset, off-the-shelf tool support for muLf is available. We illustrate the feasibility of our approach on an SPL.
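The family-based idea can be sketched in miniature (a toy featured transition system in Python, with feature expressions encoded extensionally as product sets; this is an assumed illustration, not the mCRL2 embedding itself):

```python
from itertools import product as cartesian

# Toy featured transition system (FTS); states, features, and guards are
# assumed for illustration.
FEATURES = ("f", "g")
ALL_PRODUCTS = frozenset(cartesian([False, True], repeat=len(FEATURES)))

def expr(pred):
    """A feature expression, represented as the set of products satisfying it."""
    return frozenset(p for p in ALL_PRODUCTS if pred(dict(zip(FEATURES, p))))

TRANSITIONS = [
    ("s0", "s1", expr(lambda c: c["f"])),       # only products with feature f
    ("s1", "err", expr(lambda c: not c["g"])),  # only products without g
    ("s0", "s2", ALL_PRODUCTS),                 # all products
]

def reach(init="s0"):
    """One family-based fixpoint: per state, the set of products reaching it."""
    reached = {init: ALL_PRODUCTS}
    changed = True
    while changed:
        changed = False
        for a, b, phi in TRANSITIONS:
            new = reached.get(a, frozenset()) & phi
            if not new <= reached.get(b, frozenset()):
                reached[b] = reached.get(b, frozenset()) | new
                changed = True
    return reached

bad = reach()["err"]  # products that can reach the error state
print(sorted(bad))    # only the product with f and without g
```

The point is that a single fixpoint over the FTS answers the reachability question for every product at once, instead of model checking each variant separately.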
|11:00-11:45||Önder Babur (TU/e)
Model Analytics with SAMOS
With the increased adoption of Model-Driven Engineering, the number of related artifacts in use, such as models and metamodels, greatly increases. To tackle this dimension of scalability in MDE, we propose to treat the artifacts as data, and apply various techniques – ranging from information retrieval and natural language processing to machine learning – to analyze and manage those artifacts in a holistic, scalable, and efficient way. We have developed the SAMOS model analytics framework to address these issues. In this talk we elaborate on our approach, discussing the major components of the SAMOS workflow, and present case studies involving model repository exploration and management, model variability mining, and clone detection.
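One basic ingredient of comparing models as data can be sketched as follows (a toy name-overlap measure, assumed purely for illustration; SAMOS itself combines far richer features):

```python
# Two hypothetical models, reduced to the sets of their element names.
model_a = {"Order", "Customer", "OrderLine", "Product"}
model_b = {"Order", "Client", "OrderLine", "Item"}

def jaccard(a, b):
    """Similarity of two models' vocabularies; 1.0 means identical name sets."""
    return len(a & b) / len(a | b)

print(jaccard(model_a, model_b))  # 2 shared names out of 6 -> 0.333...
```

Pairwise similarities like this feed directly into clustering and clone detection over large model repositories.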
|11:45-12:00||to be announced|
|14:00-15:00||Alfons Laarman (UL)
Model Checking: A Computational Perspective
I will provide an overview of different model checking techniques for coping with the state space explosion. The emphasis is on both enumerative and symbolic approaches, and their respective properties and combination.
|15:00-15:30||to be announced|
|16:00-max. 17:00||to be announced|
Wednesday October 31st
|9:00-10:00||Luís Ferreira Pires (UT)
Metamodelling-based model transformations: concepts, tools and applications
Model-driven engineering (MDE) is an approach to developing systems based on metamodelling and model transformations. MDE can be applied in many software development scenarios, e.g., code generation, interoperability, and reverse engineering. In this presentation, we will introduce the principles of metamodelling-based model transformation and the available tooling. We will also discuss some applications of model transformations found in the literature, aiming to give concrete information about when and how to use this technique in software development projects.
|10:30-11:15||Pieter Kwantes (UL)
On the synthesis of industry level process models from enterprise level process models
Production of goods and services is often organized as a collection of interacting enterprise-level business processes. The required interactions are typically subject to industry-level standardization, but implementation decisions regarding business processes are, as a rule, taken locally, at the enterprise level. Industry-level (global) behavior is therefore an emergent property resulting from local implementation decisions. Aligning local implementation decisions such that the emerging global behavior is compliant with industry standards is a significant and costly coordination problem for the enterprises involved. In this talk, a framework is presented for specifying the synthesis of a global process model from a set of local process models. This framework allows us to verify that the resulting model is compliant with a given blueprint (reference model) using local checks only. The reference model describes the intended interactions (requirements) between the enterprise processes in accordance with industry standards. Our approach allows compliance to be checked locally, per enterprise, without requiring information from other enterprise models. This implies in particular that individual enterprises need not disclose internal design information to the public domain. In our general set-up, we use Petri nets as the modelling framework for our process models, and labelled transition systems with silent actions as their semantics. Branching bisimilarity of their transition systems then serves as the criterion for compliance of process models.
|11:15-12:00||Stefano Schivo (OU)
Efficient domain-specific tool development with Model-Driven Engineering
We present an approach that provides a systematic way to build ‘software bridging tools’ (tools that act as a bridge between different domains and languages), such that these tools become easier to debug, extend, reuse, and maintain.
We demonstrate our approach by connecting the UPPAAL model checker to five different domains: cyber-physical systems, hardware-software co-design, cyber-security, reliability engineering and software timing analysis.
|14:00-15:00||Erwan Bousse (Université de Nantes, France)
Building Generic Tools for Domain-Specific Languages
Model-driven engineering advocates the separation of concerns through the use of models, each representing a different aspect of a system. Such models can be created using a variety of Domain-Specific Languages (DSLs), each engineered to enable the efficient definition of a specific aspect of a system. However, engineering a DSL implies the creation of a wide range of tools to allow the DSL user to define, analyze and use conforming models. To avoid building each of these tools manually, and hence to reduce the amount of DSL engineering effort, we will explore in this talk different strategies to create generic tools that can be reused from one DSL to another – i.e., a “build once and reuse everywhere” strategy. We will focus on two kinds of tools, each built with a different strategy: an execution trace constructor generated from the definition of the DSL, and an omniscient debugger that dynamically interprets the definition of the DSL. We will then discuss how useful it can be to enrich the definition of a DSL when applying such methods. We will finally discuss the limitations of such approaches, and how they could be improved in the future.
|15:30-16:15||Mauricio Verano Merino (TU/e)
Bacatá: A Notebook Generator for DSLs
Interactive notebooks, such as the ones provided by the Jupyter project, are gaining traction in different fields like scientific computing, data science, and machine learning. However, developing Jupyter kernel machinery for a new language requires significant effort.
In this talk, I will present Bacatá, a language-parametric notebook generator for DSLs with support for a selection of syntactic editor services (e.g., syntax highlighting and tab-completion) and interactive outputs. Bacatá reuses existing language components, such as parsers, code generators, and interpreters, and generates language kernels so that DSL end-users can interact with the language through a notebook interface.
|16:15-17:00||Jasper Denkers (TUD)
Modeling with textual DSLs in Spoofax
Domain-specific languages (DSLs) are effective tools for applying model-driven software engineering. However, developing DSLs, especially including IDE support, is non-trivial. Language workbenches are tools that ease developing DSLs and that provide IDE functionality for free.
In this talk, I will demonstrate Spoofax, a language workbench that integrates several meta-DSLs for declaratively defining textual languages.
In particular, we will define syntax and transformations for a small functional language.
Additionally, I will show two concrete example DSLs used for model-driven software engineering at Océ. These DSLs were migrated from custom implementations in XML and Python to Spoofax.
Thursday November 1st
|9:00-10:00||Jan Friso Groote (TU/e)
Design for verification
To make automated verification more effective, a lot of research goes into finding faster algorithms, better state space compression techniques, and employing multiple processors. But for the verification of industrial software this turns out to be insufficient. We need a different road, namely adapting our specification style such that verification is possible. In this talk we present a number of specification techniques that keep the state space small. We also show how such techniques are used in the Dezyne toolset of Verum, with which systems of many hundreds of components, each consisting on average of 1000 rule cases, can be verified completely automatically.
|10:30-12:00||Ramon Schiffelers (ASML and TU/e), Kousar Aslam (TU/e), and Nan Yang (TU/e)
Model Driven Development of TWINSCAN software, but not from scratch!
ASML, a high-tech company, aims at a cost-effective transition of its traditionally built software to Model Driven Engineered software. The existing software is vital to the company, as it contains important business logic and works perfectly fine. Constructing models for this software from scratch would be costly in terms of both time and effort. To address this issue, we apply existing dynamic software analysis techniques to retrieve the behavior of ASML’s software. Dynamic analysis techniques deal with the analysis of software in execution. Its sub-categories, active learning and passive learning, differ in the kind of input the techniques accept. In this talk, we will discuss our methodology to infer interface protocols of software components in the ASML context. The inferred interface protocols can then serve as a starting point for maintenance activities such as refactoring and re-engineering. Taking into account the complementary nature of active and passive learning, we will also talk about our approach of refining active learning with passive learning and execution logs. This increases the efficiency of the active learning process while guaranteeing minimal behavior coverage. We will also share the results of case studies applying our techniques at ASML.
|14:00-15:00||Arnd Hartmanns (UT)
Probabilities, Decisions and Guarantees: On Probabilistic Modelling and Statistical Model Checking
Markov chains are a basic model for random behaviour. Adding unquantified choices to represent implementation freedom or adversarial behaviour results in the model of Markov decision processes (MDP), which is well-known in operations research and forms the mathematical foundation for recently hyped achievements in artificial intelligence. I will present Markov chains and MDP from a verification perspective, where we are interested in providing guaranteed optimal or safe solutions via model checking. For complex systems, state space explosion limits the classical probabilistic model checking approach, with statistical methods gaining popularity as an alternative. I will highlight the challenges posed to statistical model checking by rare events and nondeterministic choices and explain two solutions implemented in the Modest Toolset, our toolbox for quantitative modelling and verification.
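The statistical approach can be illustrated with a minimal sketch (the chain and its probabilities below are assumed, not from the talk): estimate a reachability probability by sampling paths instead of solving the chain exactly.

```python
import random

# A tiny Markov chain, assumed for illustration: from s0 we loop with
# probability 0.5, reach 'goal' with 0.3, and reach 'fail' with 0.2.
CHAIN = {"s0": [("s0", 0.5), ("goal", 0.3), ("fail", 0.2)]}

def simulate(state="s0"):
    """Sample one path until an absorbing state is hit."""
    while state in CHAIN:
        r, acc = random.random(), 0.0
        for nxt, p in CHAIN[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return state

def estimate(runs=100_000):
    """Statistical model checking: estimate P(reach 'goal') by sampling."""
    hits = sum(simulate() == "goal" for _ in range(runs))
    return hits / runs

random.seed(42)
est = estimate()
print(est)  # close to the exact value 0.3 / (0.3 + 0.2) = 0.6
```

Sampling scales to models far beyond what exhaustive probabilistic model checking can handle, at the price of statistical rather than absolute guarantees; rare events and nondeterminism, as the talk discusses, make this harder.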
|15:30-16:30||Joshua Moerman (RUN)
Learning Register Automata
I will talk about results on (query) learning of register automata (RAs). These automata accept languages over an infinite alphabet and are strictly more powerful than ordinary automata, which makes learning them a challenge. This work was motivated by learning and verifying models of protocols, like TCP, which need sequence numbers. There are different techniques for learning RAs, such as abstraction, logic, and symmetries, each with its own advantages and disadvantages. I will also show a model equivalent to RAs, so-called nominal automata.
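To make "languages over an infinite alphabet" concrete, here is a minimal sketch of what a one-register automaton can do (a hypothetical example, not from the talk): accept exactly those words whose first symbol occurs again later.

```python
# A minimal register automaton simulator, assumed for illustration:
# one register, accepting words (over an infinite alphabet, e.g. all
# integers) in which the first symbol occurs again later.
def accepts(word):
    state, register = "init", None
    for symbol in word:
        if state == "init":
            register = symbol          # store the first symbol
            state = "seen_first"
        elif state == "seen_first" and symbol == register:
            state = "accept"           # the first symbol repeated
    return state == "accept"

print(accepts([7, 3, 7]))        # True: 7 repeats
print(accepts([7, 3, 9]))        # False
print(accepts(["a", "b", "a"]))  # True: works for any alphabet
```

No finite automaton over a fixed alphabet can express this language for all possible symbols, which is exactly what makes learning RAs harder than learning ordinary automata.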
|20:00-||Second social event, organised by the IPA PhD council|
Friday (morning) November 2nd
|9:00-10:00||Tijs van der Storm (CWI and RUG)
Live Modeling: Why and How
Live modeling aims to improve the developer experience when using domain-specific modeling languages by shortening the feedback loop between model and execution. In this talk I’ll motivate this concept using a number of examples and sketch language engineering techniques for realizing it in practice.
|10:00-10:45||Jouke Stoel (TU/e)
Constraint-based Run-time State Migration for Live Modeling
Live modeling enables modelers to incrementally update models as they are running and get immediate feedback about the impact of their changes. Problems arise, though, when changes make the model inconsistent with its current run-time state (e.g., deleting the current state in a state machine). Such changes often require the run-time to be reset or, worse, restarted, which breaks the ‘live’ experience.
In this talk, I will introduce an approach that enables automatic migration of such run-time state, based on declarative constraints. To illustrate the approach I’ll introduce Nextep, a meta-modeling language for defining invariants and migration constraints on run-time state models. When a model changes, Nextep employs model finding techniques, backed by the relational model finder AlleAlle, to automatically infer a new run-time model that satisfies the declared constraints.
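The flavour of constraint-based state migration can be sketched by brute force (everything below is an assumed toy; it does not use Nextep or AlleAlle, which solve this with a relational model finder):

```python
# Toy run-time state migration: the model changed and the state 'busy',
# which the running instance was in, no longer exists.
new_states = ["idle", "done", "error"]  # the model after 'busy' was deleted
old_current = "busy"                    # run-time state of the running model

# Declared migration constraints, in order of priority (assumed for
# illustration): keep the old state if it survives, else restart at 'idle'.
constraints = [
    lambda s: s == old_current,
    lambda s: s == "idle",
]

def migrate(states, constraints):
    """'Model finding' by enumeration: return the first state satisfying
    the highest-priority satisfiable constraint."""
    for wanted in constraints:
        for s in states:
            if wanted(s):
                return s
    return None

print(migrate(new_states, constraints))  # 'busy' is gone, so we fall back to 'idle'
```

A real model finder searches a much larger space of candidate run-time models against declarative invariants, but the goal is the same: a new state that keeps the system running without a restart.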
|11:15-12:00||Riemer van Rozen (CWI)
Registration closes on Thursday 18 October. Follow this link to register. As always, IPA PhD students can attend for free. For others, details on pricing will be made available upon request.
To make maximal use of the available capacity, we process applications on the following basis: registrations are treated “first come, first served”. In principle, all PhD students (IPA and non-IPA) will share a room. Others may also be asked to share if we run out of rooms. We will accept registrations until the deadline, as long as there is space left at the hotel!
New PhD students
PhD students new to the IPA Fall Days are expected to give a brief talk in a 15-minute slot. The purpose of these talks is to introduce your research and yourself to the IPA community. You can briefly introduce your research (research area, initial and future research directions), but also yourself (who you are, what your background is, and, e.g., what you like or don’t like to do in your spare time). The Fall Days provide a friendly, open, and informal atmosphere for doing this, and you will surely be welcomed into the IPA community and receive constructive feedback.
Reaching the venue by public transport involves a bus ride from one of the following stations: Nunspeet (line 112), Putten or Ede-Wageningen (line 107), or Amersfoort or Apeldoorn (line 102). Some of the buses run infrequently, take up to an hour, and/or require a walk from downtown Garderen to the hotel, so please use e.g. Google Maps or 9292ov.nl to find out which options work well for you.