Abstracts Lentedagen 2004



State of the art of software architecture in research and industry

Michiel Perdeck (LogicaCMG)

This talk will introduce the concept of Software Architecture and its relevance for Research and Industry. Software architecture is a very broad subject that encompasses both technology and issues such as business-IT alignment. Due to the background of the speaker, the focus in this talk will be on information systems used in large organisations rather than on embedded systems; most of what is said, however, will be relevant to both types of environment.

Software Architecture is a discipline that recognizes that there are many different views on a system and many different stakeholders with conflicting interests. We will show some models that capture these views and interests.

Research at universities has traditionally focused on the more technical issues, such as the problems around distributed systems and the formalization of architecture descriptions. Today, however, serious attention is also given to those aspects of software architecture that recognize that it deals with groups of human beings: it is a Wicked Problem.

The use of IT in large organizations is not always an undisputed success, and Software Architecture is the discipline best suited to investigate these problems. Some of the pressing questions, and ideas for solutions that exist in practice, will be discussed.


Model Driven Architecture: All you need are models

Anneke Kleppe (Klasse Objecten)

This presentation is an introduction to Model Driven Architecture (MDA). It will introduce the jargon and the key elements of MDA, and explain which aspects of MDA are really important and which are not. The different ways in which models are used within software development projects, called the Modeling Maturity Levels, will be covered. At Modeling Maturity Level 5, the highest level, the model takes the role of program code, and the code takes the role of compiler output.

Besides the modeling of software, metamodeling plays a very important role in MDA. The presentation will explain why this is the case, and what role metamodeling plays in language design and transformation design.


Model Transformations in MDA

Ivan Kurtev (UT)

Transforming models is a main activity in Model Driven Architecture (MDA). Model transformations are executed between models according to transformation definitions. In general, any language may be used to express transformation definitions. The Object Management Group (OMG) has adopted an approach based on domain-specific transformation languages for expressing transformation definitions. Requirements for such languages are formulated in the MOF 2.0 Queries/Views/Transformations Request for Proposals (QVT RFP).

This talk will address the main issues related to model transformations in the context of MDA. The MDA transformation pattern that governs model transformations will be introduced. In addition, the talk will briefly present and discuss the requirements formulated in the QVT RFP. Some examples of languages submitted in response to the QVT RFP will be presented.
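
As a rough illustration of what a transformation definition amounts to, the sketch below spells out the classic class-to-table mapping, often used as an example in this setting, directly in Java rather than in any submitted QVT language; all class and method names are hypothetical.

    // Hypothetical sketch, not a QVT submission: the classic "class to
    // table" mapping written out directly in Java. A QVT-style language
    // would express the same rule declaratively over MOF-based models.
    import java.util.ArrayList;
    import java.util.List;

    class UmlClass {                      // element of the source model
        String name;
        List<String> attributes = new ArrayList<>();
        UmlClass(String name) { this.name = name; }
    }

    class Table {                         // element of the target model
        String name;
        List<String> columns = new ArrayList<>();
        Table(String name) { this.name = name; }
    }

    class ClassToTable {
        // The transformation definition: one rule, applied to every source element.
        static Table transform(UmlClass c) {
            Table t = new Table(c.name.toUpperCase());
            t.columns.add("ID");                      // generated primary key
            for (String attr : c.attributes) {        // one column per attribute
                t.columns.add(attr.toUpperCase());
            }
            return t;
        }

        public static void main(String[] args) {
            UmlClass customer = new UmlClass("Customer");
            customer.attributes.add("name");
            customer.attributes.add("address");
            System.out.println(transform(customer).columns);   // [ID, NAME, ADDRESS]
        }
    }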

The talk will present several transformation scenarios observed in the context of the Meta Object Facility (MOF) meta-modeling architecture and will evaluate how current transformation technologies handle these scenarios.

Finally, the talk will focus on the problem of multiple instantiation relations defined by various modeling languages and their impact on model transformation languages.


Graph transformation for model transformation

Arend Rensink (UT)

A core concept of MDA is the transformation of models, where models are understood in the MDA sense of residing under some meta-model, itself an instance of the MOF. Such transformations are envisaged as occurring in many different scenarios within the software engineering process. The aim is to be able to define transformations easily and unambiguously; this, however, requires a language that can capture the concepts of arbitrary meta-models (provided they are MOF instances).

In this presentation we make a case for graph transformation as a core technology upon which to base such a model transformation language. We briefly recapitulate the concepts of graph transformation; then we discuss the requirements for its application as a model transformation language. We illustrate these ideas by showing how we have recently given a graph transformation-based semantics to an experimental object-oriented language.
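
As a rough sketch of the mechanics behind this idea (hypothetical rule and names, not the notation or tooling discussed in the talk), a graph can be represented as a set of labelled edges and a transformation as a rule that is applied for as long as its left-hand side matches:

    // Minimal sketch: a graph as a list of labelled edges, and one rewrite
    // rule applied until no match remains. The rule replaces every edge
    // a -uses-> b by an explicit, freshly created dependency node.
    import java.util.ArrayList;
    import java.util.List;

    class Edge {
        final String src, label, dst;
        Edge(String src, String label, String dst) { this.src = src; this.label = label; this.dst = dst; }
        public String toString() { return src + " -" + label + "-> " + dst; }
    }

    class GraphRewrite {
        static int fresh = 0;

        // Apply the rule to one match; report whether a match was found.
        static boolean step(List<Edge> graph) {
            for (Edge e : graph) {
                if (e.label.equals("uses")) {
                    String dep = "Dependency" + fresh++;       // fresh node (right-hand side)
                    graph.remove(e);                           // delete the matched edge
                    graph.add(new Edge(e.src, "has", dep));
                    graph.add(new Edge(dep, "on", e.dst));
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            List<Edge> g = new ArrayList<>();
            g.add(new Edge("A", "uses", "B"));
            g.add(new Edge("B", "uses", "C"));
            while (step(g)) { /* rewrite until no "uses" edge is left */ }
            g.forEach(System.out::println);
        }
    }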


Paradigm: Global behaviour and dynamic consistency

Luuk Groenewegen (UL)

Paradigm is a modelling language, originally meant for coordination situations. Its operational semantics provide a formal basis for verifying dynamic properties of a Paradigm model relevant for the particular coordination situation.

In such a model, which specifies a number of components and their coordination, Paradigm introduces zero or more global behavioural descriptions superposed on the given, unique detailed behavioural descriptions (state-transition diagrams, STDs) of the various components involved. A global behaviour is specified in terms of subprocesses and traps, Paradigm's key notions. The Paradigm language guarantees intra-component consistency between a component's detailed behaviour (realization) and all of its global behaviours (realizations), if any. A Paradigm modeller can then concentrate on global behaviours to specify the inter-component coordination that is required.
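
To give a flavour of these notions outside Paradigm's own notation, the hypothetical sketch below represents a (sub)process as a transition relation and checks the defining property of a trap: a set of states that, once entered, cannot be left within that subprocess.

    // Rough sketch only, not Paradigm itself: an STD as a transition
    // relation and a check whether a given set of states is a trap.
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    class Std {
        final Map<String, Set<String>> transitions = new HashMap<>();   // state -> successors

        void add(String from, String to) {
            transitions.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        }

        // A trap: no transition of this (sub)process leaves the given set of states.
        boolean isTrap(Set<String> trap) {
            for (String state : trap) {
                for (String next : transitions.getOrDefault(state, Set.of())) {
                    if (!trap.contains(next)) return false;   // an escape exists
                }
            }
            return true;
        }
    }

    class ParadigmSketch {
        public static void main(String[] args) {
            Std subprocess = new Std();
            subprocess.add("waiting", "working");
            subprocess.add("working", "done");
            subprocess.add("done", "done");
            System.out.println(subprocess.isTrap(Set.of("done")));      // true: cannot be left
            System.out.println(subprocess.isTrap(Set.of("waiting")));   // false: can still move on
        }
    }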

As it turns out, Paradigm's global behaviour descriptions allow for useful forms of flexibility, such as:


An experiment on the interpretation of inconsistent UML architecture descriptions

Michel Chaudron and Christian Lange (TU/e)

The Unified Modeling Language (UML) is the de facto standard for architecting and designing software systems. With a large variety of diagram types it offers a large degree of freedom. Mainly because of this freedom, there is a risk of consistency defects and incompleteness in UML models. Previous research has shown that a large number of defects occur in industrial UML models. The purpose of this study is to investigate to what extent implementers detect defects and to what extent defects cause different interpretations by different readers. To this end we performed two controlled experiments, with a large group of students and a smaller group of industrial practitioners respectively. The experiments' results show that defects often remain undetected and cause misinterpretations. As a result of this study we present a ranking of defect types according to detection rate and risk of misinterpretation. Additionally, we observed effects of using domain knowledge to compensate for defects. The results are generalizable to industrial UML users.


Software architecture assessment of usability

Eelke Folmer (RuG)

One of the qualities that has received increased attention in recent decades is usability. A software product with poor usability is likely to fail in a highly competitive market; therefore software developing organizations are paying more and more attention to ensuring the usability of their software. However, practice shows that product quality (which includes usability, among other attributes) is not as high as it could be. Studies of software engineering projects show that a large number of usability-related change requests are made after deployment. Fixing usability problems during the later stages of development often proves to be costly, since many of the necessary changes cannot be easily accommodated by the system's software architecture. Adding certain usability-improving solutions during late-stage development is to some extent restricted by the software architecture.

The quintessential example used to illustrate this restriction is adding undo to an application. Undo is the ability to reverse certain actions (such as reversing the deletion of some text in Word). Undo may significantly improve usability, as it allows a user to explore, make mistakes and easily go some steps back, which facilitates learning the application's functionality. Experience also shows, however, that it is often very hard to add undo to an existing application, because it requires large parts of the code to be rewritten. The problem is that software architects and HCI engineers are often not aware of the impact of usability-improving solutions on the software architecture. As a result, avoidable rework is frequently necessary. This rework leads to high costs, and because tradeoffs need to be made during design, for example between cost and quality, this leads to systems with less than optimal usability.
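
The sketch below (illustrative names only, not taken from the talk) shows the usual command-pattern remedy: every state-changing operation is captured as a reversible command. This is straightforward when designed in from the start, but hard to retrofit onto code that mutates state directly, which is exactly the architectural restriction described above.

    // Minimal undo sketch using the command pattern; all names are illustrative.
    import java.util.ArrayDeque;
    import java.util.Deque;

    interface Command {
        void execute();
        void undo();
    }

    class Editor {
        final StringBuilder text = new StringBuilder();
        final Deque<Command> history = new ArrayDeque<>();

        void apply(Command c) {        // every edit must go through here
            c.execute();
            history.push(c);
        }

        void undo() {                  // reverse the most recent edit
            if (!history.isEmpty()) history.pop().undo();
        }
    }

    class Insert implements Command {
        final Editor ed; final int pos; final String s;
        Insert(Editor ed, int pos, String s) { this.ed = ed; this.pos = pos; this.s = s; }
        public void execute() { ed.text.insert(pos, s); }
        public void undo()    { ed.text.delete(pos, pos + s.length()); }
    }

    class UndoDemo {
        public static void main(String[] args) {
            Editor ed = new Editor();
            ed.apply(new Insert(ed, 0, "Hello"));
            ed.apply(new Insert(ed, 5, " world"));
            ed.undo();
            System.out.println(ed.text);   // prints "Hello"
        }
    }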

The successful development of a usable software system therefore must include creating a software architecture that supports the right level of usability. Unfortunately, no documented evidence exists of architecture-level assessment techniques focusing on usability. To support software architects in creating a software architecture that supports usability, we present a scenario-based assessment technique that has been successfully applied in several cases. Explicit evaluation of usability during architectural design prevents part of the high costs incurred by adaptive maintenance activities once the system has been implemented, and leads to architectures with better support for usability.


Explicit assumptions enrich architectural models

Patricia Lago (VU)

Design for change is a well-known adage in software engineering. We separate concerns, employ well-designed interfaces, and the like to ease evolution of the systems we build. We model and build in changeability through parameterization and variability points (as in product lines).

These all concern places where we explicitly consider variability in our systems. We conjecture that it is helpful to also think of and explicitly model 'invariability': things in our systems and their environment that we assume will 'not' change. We give examples from the literature and our own experience to illustrate how evolution can be seriously hampered by tacit assumptions. In particular, we show how we can explicitly model assumptions in an existing product family. From this, we derive a metamodel to document assumptions. Finally, we show how this type of modeling adds to our understanding of the architecture and the decisions that led to it.

We conclude with an overview of some ongoing research on the broader concept of architectural driver.


Symphony: View-driven software architecture reconstruction

Arie van Deursen (TUD, CWI)

Authentic descriptions of a software architecture are required as a reliable foundation for any but trivial changes to a system. Far too often, architecture descriptions of existing systems are out of sync with the implementation. If they are, they must be reconstructed.

There are many existing techniques for reconstructing individual architecture views, but no information about how to select views for reconstruction, or about process aspects of architecture reconstruction in general. In this paper we describe Symphony, a view-driven process for reconstructing software architectures that fills this gap. To describe Symphony, we present and compare different case studies, thus serving a secondary goal of sharing real-life reconstruction experience.

The Symphony process incorporates the state of the practice, where reconstruction is problem-driven and uses a rich set of architecture views. Symphony provides a common framework for reporting reconstruction experiences and for comparing reconstruction approaches. Finally, it is a vehicle for exposing and demarcating research problems in software architecture reconstruction.


The task-resource architectural view

Bas Graaf (TUD)

Control of manufacturing systems can be implemented by considering these systems as task-resource systems, in which machine actions (tasks) are executed by electromechanical subsystems (resources) to carry out different manufacturing requests. This presentation considers the architectural implications of following such an approach for the control of complex manufacturing systems, such as a wafer scanner.

The scheduling of tasks and resource allocation to tasks can be done in a generic way using reusable scheduling functionality, leaving the choice of tasks, resources and their relations as the open design decisions for developing such a controller. We propose an architectural viewpoint that helps to understand the software of control systems in terms of these design decisions by explicitly showing tasks and resources. We define the viewpoint according to the templates proposed in the software architecture literature, explain the relationship with existing viewpoints such as the module-uses or pipe-and-filter views, and discuss a number of different applications of the new viewpoint, including conformance checking and support for migration to a generic task-resource approach.
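
A minimal sketch of this vocabulary, with hypothetical names: a generic, reusable scheduler that only knows about tasks and typed resources, while the concrete tasks, resources and their relations remain the product-specific design decisions.

    // Illustrative sketch of the task-resource idea, not the presented viewpoint itself.
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class Task {
        final String name, requiredType;
        Task(String name, String requiredType) { this.name = name; this.requiredType = requiredType; }
    }

    class Resource {
        final String name, type;
        Resource(String name, String type) { this.name = name; this.type = type; }
    }

    class GreedyScheduler {
        // Assign each task to the first free resource of the required type.
        static Map<String, String> schedule(List<Task> tasks, List<Resource> resources) {
            Map<String, String> plan = new LinkedHashMap<>();
            Set<Resource> busy = new HashSet<>();
            for (Task t : tasks) {
                for (Resource r : resources) {
                    if (!busy.contains(r) && r.type.equals(t.requiredType)) {
                        plan.put(t.name, r.name);
                        busy.add(r);
                        break;
                    }
                }
            }
            return plan;
        }

        public static void main(String[] args) {
            List<Task> tasks = Arrays.asList(new Task("expose", "stage"), new Task("measure", "sensor"));
            List<Resource> resources = Arrays.asList(new Resource("waferStage", "stage"),
                                                     new Resource("levelSensor", "sensor"));
            System.out.println(schedule(tasks, resources));   // {expose=waferStage, measure=levelSensor}
        }
    }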


Composing configurable Java components

Tijs van der Storm (CWI)

We present techniques to reason about the composition of configurable components and to automatically derive consistent compositions. The reasoning is achieved by describing components in a formal component description language that allows the description of component variability, dependencies and configuration actions. In addition to formal checks on the configuration space, the language is the starting point for the automatic, configuration-driven derivation of product instances. To illustrate the approach we instantiate the abstract component model for Java components (packages).
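
The sketch below is not the component description language itself; it merely illustrates, with hypothetical names, the simplest kind of consistency check such descriptions enable: a selected composition is acceptable only if every dependency is satisfied by a component in the selection.

    // Hypothetical sketch of a dependency-consistency check over a composition.
    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashSet;
    import java.util.Set;

    class Component {
        final String name;
        final Set<String> requires;
        Component(String name, String... requires) {
            this.name = name;
            this.requires = new HashSet<>(Arrays.asList(requires));
        }
    }

    class Composition {
        // A composition is consistent if all required components are present.
        static boolean consistent(Collection<Component> selection) {
            Set<String> present = new HashSet<>();
            for (Component c : selection) present.add(c.name);
            for (Component c : selection) {
                if (!present.containsAll(c.requires)) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            Component parser = new Component("parser", "lexer");
            Component lexer  = new Component("lexer");
            System.out.println(consistent(Arrays.asList(parser, lexer)));   // true
            System.out.println(consistent(Arrays.asList(parser)));          // false: lexer missing
        }
    }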


Software deployment with Nix

Eelco Dolstra (UU)

Software deployment is the act of transferring software components to the machines on which they are to be used. This is surprisingly hard to do correctly due to the difficulty in enforcing reliable specification of component dependencies, and the lack of support for multiple versions or variants of a component. This renders deployment operations such as upgrading or deleting components dangerous and unpredictable. A deployment system must also be flexible (i.e., policy-free) enough to support a variety of deployment policies.

In this presentation I will talk about the Nix system, which is a deployment system developed as part of the TraCE project (www.cs.uu.nl/groups/ST/Trace) that addresses these issues. Nix enables reliable dependency identification, side-by-side existence of component versions and variants, atomic upgrades and rollbacks, automatic garbage collection of unused components, transparent source/binary deployment, and lots of other nice features. These features follow from the simple technique of using cryptographic hashes to compute unique paths for component instances.
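
The sketch below illustrates the hashing idea only, and is not Nix's actual path algorithm: hash everything that went into building a component and use that hash in its installation path, so that different versions and variants of a component never collide and dependencies can be identified exactly.

    // Idea sketch: derive an installation path from a hash of all build inputs.
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    class StorePath {
        static String pathFor(String name, String... buildInputs) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (String input : buildInputs) {
                md.update(input.getBytes(StandardCharsets.UTF_8));   // sources, flags, dependencies
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) hex.append(String.format("%02x", b));
            // Identical inputs give the same path; any change in an input gives a new path.
            return "/store/" + hex.substring(0, 16) + "-" + name;
        }

        public static void main(String[] args) throws Exception {
            System.out.println(pathFor("hello-2.1.1", "hello-2.1.1.tar.gz", "gcc-3.3", "--enable-nls"));
            System.out.println(pathFor("hello-2.1.1", "hello-2.1.1.tar.gz", "gcc-3.4", "--enable-nls"));
        }
    }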


Generic programming for software evolution

Johan Jeuring (UU,OU)

Change is endemic to any large software system. Business, technology, and organization usually change in the life cycle of a software system. However, changing a large software system is difficult: localizing the code that is responsible for a particular part of the functionality of a system, changing it, and ensuring that the change does not lead to inconsistencies in other parts of the system or in the architecture or documentation is usually a challenging task. Software evolution is a fact of life in the software development industry, and leads to interesting research questions.

Current approaches to software evolution focus on the software development process, and on analyzing, visualizing, and refactoring existing software. Although all of these approaches support changing software, many still require a substantial amount of programming when modifying software. An important research theme of software evolution is to minimize the amount of programming needed for modifying software. Theories about constructing software that automatically adapts to a changing environment, without any programming at all, are scarce, and challenging.


In my talk I want to investigate using generic programming for software evolution. A generic program is a program that works for values of any data type within a large class of data types (or DTDs, schemas, class hierarchies). If data types change, or new data types are added to a piece of software, a generic program automatically adapts to the changed or new data types. Take as an example a generic program for calculating the total amount of salaries paid by an organization. If the structure of the organization changes, for example by removing or adding an organizational layer, the generic program still calculates the total amount of salaries paid.
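
The sketch below conveys the idea in Java, using reflection as a stand-in for a real generic programming language: it sums every field named "salary", whatever the shape of the organization's object structure, so removing or adding an organizational layer requires no change to the program. All class names are illustrative.

    // Illustrative sketch only: a "generic" salary sum over an arbitrary object structure.
    import java.lang.reflect.Field;
    import java.util.Arrays;
    import java.util.Collection;
    import java.util.List;

    class GenericSalary {
        static double totalSalary(Object node) throws IllegalAccessException {
            if (node == null) return 0;
            if (node instanceof Collection) {                        // recurse into collections
                double sum = 0;
                for (Object element : (Collection<?>) node) sum += totalSalary(element);
                return sum;
            }
            if (!node.getClass().getName().startsWith("Org")) return 0;   // only traverse our own classes
            double sum = 0;
            for (Field f : node.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                if (f.getName().equals("salary") && f.getType() == double.class) {
                    sum += f.getDouble(node);
                } else if (!f.getType().isPrimitive()) {
                    sum += totalSalary(f.get(node));                 // recurse into sub-structures
                }
            }
            return sum;
        }
    }

    class OrgEmployee { double salary; OrgEmployee(double s) { salary = s; } }
    class OrgDepartment { List<OrgEmployee> staff; OrgDepartment(OrgEmployee... e) { staff = Arrays.asList(e); } }
    class OrgCompany { List<OrgDepartment> departments; OrgCompany(OrgDepartment... d) { departments = Arrays.asList(d); } }

    class SalaryDemo {
        public static void main(String[] args) throws Exception {
            OrgCompany c = new OrgCompany(new OrgDepartment(new OrgEmployee(3000), new OrgEmployee(3500)),
                                          new OrgDepartment(new OrgEmployee(4000)));
            System.out.println(GenericSalary.totalSalary(c));   // 10500.0
        }
    }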


Managing variability with Koala

Rob van Ommering (Philips Research)

Koala is a software component model and an architectural description language designed to manage the diversity and complexity of software in consumer electronics products. Koala was created by Philips Research and is currently being used by Philips Consumer Electronics and Philips Semiconductors in most of their television products. Koala allows products to be built from components, where components have explicit provides, requires and diversity interfaces, which can be bound by third parties. The component model is recursive: components can consist of (instances of) other components. A product family is a composition graph of such components; reuse arises through sharing of components between products in this graph.
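
By way of illustration only (this is not Koala notation; Koala components are bound through the Koala language rather than in code), the sketch below shows a component with an explicit provides interface and a requires interface that is bound by a third party at composition time; all names are invented.

    // Illustration of provides/requires binding by a third party.
    interface ITuner   { void tuneTo(int frequency); }     // the component's provides interface
    interface IDisplay { void show(String text); }          // the component's requires interface

    class TunerComponent implements ITuner {
        private final IDisplay display;                      // requires interface, bound externally
        TunerComponent(IDisplay display) { this.display = display; }
        public void tuneTo(int frequency) {
            display.show("Tuned to " + frequency + " kHz");
        }
    }

    class Product {
        public static void main(String[] args) {
            // The product configuration (the "third party") binds requires to provides.
            IDisplay onScreenDisplay = text -> System.out.println("[OSD] " + text);
            ITuner tuner = new TunerComponent(onScreenDisplay);
            tuner.tuneTo(747);
        }
    }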

In my talk I will explain the requirements that have led to Koala. I will then introduce the model, and explain how it can be used to create a product family. One of the fundamental problems in component-based software engineering is to predict system properties in terms of the properties of the components that constitute the system. I will show with some examples how we are making (small) steps towards such predictability.


Build-level components

Merijn de Jonge (Philips Research)

Reuse between software systems is often not optimal. An important reason is that, while well-known modularization principles are applied at the functional level to structure functionality in modules, this is not the case at the build level, where files are structured in directories. This leads to a situation where files are entangled in directory hierarchies and build processes, making it hard to extract functionality and to make functionality suitable for reuse. Consequently, software may not become available for reuse at all, or only in rather large chunks of functionality, which may lead to extra software dependencies.

In my talk I will explain how this situation can be improved by applying component-based software engineering (CBSE) principles to the build level. First, I will explain how existing software systems break CBSE principles. Next, I will introduce the concept of build-level components and define rules for developing such components. Then, I will describe a reengineering process for semi-automatically transforming existing software systems into build-level components. If time permits, I will explain the relation with the Koala software component model as presented by Rob van Ommering.


Service-oriented architectures: Motivation and concepts

Johan Lukkien (TU/e)

The term "Service-oriented architectures" is rapidly gaining attention. There is a strong association with business applications and sometimes the term seems to be identical to web services and .Net technology.

In this presentation I introduce the concepts that underlie service-oriented architectures through examples from areas other than enterprise applications, viz., networked embedded systems. It will be shown that the concepts of service-oriented architectures are not new in themselves, but that their combination represents a new way of thinking about the way applications are built. I will discuss design alternatives for aspects of service-oriented architectures such as discovery, naming, and interaction styles, and will point out research directions in this area.
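
As a hypothetical sketch of one of these aspects, discovery, the code below lets a service register itself under a name so that clients look it up at run time instead of being wired to a fixed implementation; all names are invented for illustration.

    // Minimal discovery sketch: a registry mapping service names to implementations.
    import java.util.HashMap;
    import java.util.Map;

    interface TemperatureService { double currentCelsius(); }

    class Registry {
        private static final Map<String, Object> services = new HashMap<>();
        static void register(String name, Object service) { services.put(name, service); }
        static Object lookup(String name) { return services.get(name); }
    }

    class DiscoveryDemo {
        public static void main(String[] args) {
            // A networked sensor announces itself...
            Registry.register("livingroom/temperature", (TemperatureService) () -> 21.5);
            // ...and a client discovers it by name at run time.
            TemperatureService t = (TemperatureService) Registry.lookup("livingroom/temperature");
            System.out.println(t.currentCelsius());
        }
    }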


Methodological support for service-oriented design

Luis Ferreira Pires (UT)

Service-oriented computing is currently mainly technology-driven. Most service-oriented developments focus on the technology that enables enterprises to describe, publish and compose application services, and to communicate with applications of other enterprises according to their service descriptions. In this presentation, we argue that this focus on technology should be complemented with modelling languages, design methods and techniques supporting service-oriented design. We consider service-oriented design as the process of designing application support for business processes, using the service-oriented paradigm. We assume that service-oriented computing technology is used to implement application support. This talk presents two main contributions to the area of service-oriented design. First, it discusses a systematic service-oriented design approach, identifying generic design milestones and a method for assessing the conformance between application designs at related abstraction levels. Second, it introduces a conceptual model for service-oriented design that provides a common and precise understanding of the terminology used in service-oriented design. In order to express service-oriented designs we introduce the ISDL modelling language, which supports our conceptual model and is precise enough to allow formal validation and verification. The talk also includes an elaborate example to illustrate our ideas.

This presentation is based on [1] and other results from the Freeband A-Muse project (http://a-muse.freeband.nl). Freeband A-Muse is sponsored by the Dutch government under contract BSIK 03025.

[1] D.A.C. Quartel, R.M. Dijkman, and M.J. van Sinderen. Methodological Support for Service-oriented Design with ISDL. In: Proceedings of the 2nd ACM International Conference on Service Oriented Computing (ICSOC), New York City, NY, USA, pp. 1-10, 2004.

