Using basic instructions, termination, jumps and tests as a basis, we outline an algebra of programs, denoted PGA, which captures the crux of sequential programming. The meaning of the expressions of PGA is explained in terms of a behaviour extraction operator which assigns to each program a behaviour in the basic polarized process algebra BPPA. On top of PGA a small hierarchy of programming languages is developed which can be projected into PGA. This yields a semantics for "user-friendly" programs. As an instance of the set of basic instructions we consider the "molecular programming primitives" which model object-based programming systems.
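The flavour of PGA and of behaviour extraction can be sketched in a few lines of Haskell. This is a hedged illustration only: the constructor names, the depth cut-off, and the exact jump convention are ours, not the official definitions.

```haskell
-- Sketch of PGA instruction sequences and behaviour extraction
-- into BPPA. Names and details are illustrative assumptions.
data Instr a
  = Basic a          -- perform a, then continue with the next instruction
  | PosTest a        -- +a: on reply False, skip the next instruction
  | NegTest a        -- -a: on reply True, skip the next instruction
  | Jump Int         -- #k: jump k instructions ahead (#0 loops on itself)
  | Halt             -- !: terminate

-- BPPA behaviours: termination S, deadlock D, and postconditional
-- composition (do a; on reply True continue as p, on False as q).
data Behaviour a
  = S
  | D
  | PostCond a (Behaviour a) (Behaviour a)
  deriving (Eq, Show)

-- Behaviour extraction on a finite instruction sequence, cut off at
-- depth n (a finite approximation of the full behaviour).
extract :: Int -> [Instr a] -> Behaviour a
extract n _ | n <= 0 = D
extract _ []         = D   -- running out of instructions: deadlock
extract n (i:is) = case i of
  Halt      -> S
  Basic a   -> let next = extract (n-1) is in PostCond a next next
  PosTest a -> PostCond a (extract (n-1) is) (extract (n-1) (drop 1 is))
  NegTest a -> PostCond a (extract (n-1) (drop 1 is)) (extract (n-1) is)
  Jump 0    -> D   -- #0 jumps to itself: deadlock
  Jump k    -> extract n (drop (k-1) is)
```

For example, extracting the behaviour of `[PosTest 'a', Halt, Basic 'b', Halt]` yields: do `a`; on True terminate, on False do `b` and terminate.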
Research on program algebra has resulted in a hierarchy of program notations. The toolset must provide parsers, projections and simulators for these program notations. Further research will probably result in new program notations, and so the toolset must be easy to extend. We designed a generic parser and simulator that take a primitive and basic instruction set as parameters.
Program Algebra website: www.science.uva.nl/research/prog/projects/pga/
In OO programming several correctness issues arise that can be presented concisely and exactly in the context of a formal proof system.
We present a Hoare-style proof system with data induction on class invariants and deal with issues such as Callback, the Forward-Backward Problem, and the Fragile Base Class Problem.
An important issue here is dynamic binding (polymorphic method calls). We take the approach of behavioural subtyping as a starting point. This is not enough, however, if the same dependency relations are desired in the proof and specification as are present in the code. We introduce reinforced behavioural subtyping to remedy this.
To obtain the expressiveness required to apply reinforced behavioural subtyping we introduce dynamic contracts, which parallel at the specification and proof level the dynamic binding at the programming language level.
We present a new shell that provides the full basic functionality of a strongly typed lazy functional language, including overloading. The shell can be used for manipulating files, applications, data and processes at the command line. The shell does type checking and only executes well-typed expressions. Files are typed, and applications are simply files with a function type. The shell executes a command line by combining existing code of functions on disk.
We use the hybrid static/dynamic type system of Clean to do type checking/inference. Its dynamic linker is used to store and retrieve any expression (both data and code) with its type on disk. Our shell combines the advantages of interpreters (direct response) and compilers (statically typed, fast code). Applications (compiled functions) can be used, in a type safe way, in the shell, and functions defined in the shell can be used by any compiled application.
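Although the shell is built on Clean's dynamics, the key mechanism — pairing a value with a run-time representation of its static type, so that retrieval succeeds only at the right type — can be suggested in Haskell with `Data.Dynamic`. This is an analogy under assumed names, not the actual implementation.

```haskell
import Data.Dynamic

-- Store a value together with a run-time witness of its static type,
-- playing the role of Clean's dynamics on disk in this sketch.
store :: Typeable a => a -> Dynamic
store = toDyn

-- Retrieval refuses ill-typed uses: the analogue of the shell
-- rejecting a command line that does not type-check.
retrieve :: Typeable a => Dynamic -> Maybe a
retrieve = fromDynamic
```

Retrieving a stored `Int` at type `Int` succeeds; retrieving it at type `String` is rejected, just as the shell only executes well-typed expressions.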
A generic program is written once, and works on values of many data types. Generic Haskell is a recent extension of the functional programming language Haskell that supports generic programming. These talks discuss how Generic Haskell can be used to implement XML tools. We use generic programming to implement a data binding from XML to Haskell, and to implement XML tools whose behaviour depends on the DTD or Schema of the input XML document. Examples of such tools include XML editors, databases, and compressors. Generic Haskell is ideally suited for implementing XML tools:
Furthermore, we show how we can use type isomorphisms to integrate legacy code with code produced by the Haskell XML data binding.
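To suggest how a data binding pairs Haskell types with XML representations, here is a sketch using ordinary type classes in place of Generic Haskell's type-indexed functions; the class name and element names are illustrative assumptions, not the actual binding.

```haskell
-- A toy XML encoder, defined once per type constructor; a generic
-- (type-indexed) function would derive all of these instances at once.
class ToXml a where
  toXml :: a -> String

instance ToXml Int where
  toXml n = "<int>" ++ show n ++ "</int>"

instance ToXml a => ToXml [a] where
  toXml xs = "<list>" ++ concatMap toXml xs ++ "</list>"

instance (ToXml a, ToXml b) => ToXml (a, b) where
  toXml (x, y) = "<pair>" ++ toXml x ++ toXml y ++ "</pair>"
```

Encoding `(1, [2, 3])` then produces nested `pair`, `int` and `list` elements, mirroring the structure of the Haskell value.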
It is important to be able to program GUI applications in a fast and easy manner. Current GUI tools for creating visually attractive applications offer limited functionality. In this talk we introduce a new, easy to use method to program GUI applications in a pure functional language such as Clean or Generic Haskell. The method we use is a refined version of the model-view paradigm.
The basic component in our approach is the Graphical Editor Component (GEC-t) that can contain any value of any flat data type t and that can be freely used to display and edit its value. GECs can depend on others, but also on themselves. They can even be mutually dependent. With these components we can construct flexible, reusable and customizable editors. For the realization of the components we had to invent a new generic implementation technique for interactive applications.
Based on the theory of metric spaces, we give a denotational semantics for BPPA. The models of BPPA are considered as complete metric spaces of a suitable mathematical structure. We show that a space consisting of projective sequences is an appropriate model for BPPA. Furthermore, using Banach's fixed point theorem, we prove that the specification of a regular process in this space has a unique solution. This result suggests a model consisting of regular processes.
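The shape of the argument can be stated compactly. The notation below assumes the standard process metric, with \(\pi_n\) the \(n\)-th projection; it is a sketch of the usual construction, not necessarily the talk's exact definitions.

```latex
% Distance between behaviours via their finite projections:
d(p,q) \;=\; 2^{-\sup\{\, n \,\mid\, \pi_n(p) = \pi_n(q) \,\}},
\qquad \text{with } 2^{-\infty} := 0 .

% A guarded (regular) specification X = F(X) induces a contraction:
d\bigl(F(p), F(q)\bigr) \;\le\; \tfrac{1}{2}\, d(p,q) .

% Banach's fixed point theorem: a contraction on a non-empty complete
% metric space has a unique fixed point, hence the specification of a
% regular process has a unique solution.
```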
We discuss the situation in which a program may interact with a reactor, for instance a boolean register or a Turing tape. Reactors consume and process actions complementary to programs issuing actions. Reactors maintain a state and reply with a boolean response to each program-action in their interface.
Starting from PGA-notation for programs, program execution and interaction with reactors can be made precise. E.g., one can model the Halting Problem in this setting, or execution architectures or forecasting devices.
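As a hedged illustration of the reactor idea, here is a boolean register sketched in Haskell. The action names and the reply convention are our own, not PGA's official notation.

```haskell
-- Actions a program may issue at the boolean-register reactor.
data RegAction = SetTrue | SetFalse | Get
  deriving (Eq, Show)

-- A reactor step: given the current state and an action, produce the
-- boolean reply to the program and the next state of the reactor.
register :: Bool -> RegAction -> (Bool, Bool)
register _ SetTrue  = (True, True)    -- acknowledge, state becomes True
register _ SetFalse = (True, False)   -- acknowledge, state becomes False
register s Get      = (s, s)          -- reply with the stored value

-- Running a program's action sequence against the reactor yields the
-- sequence of boolean replies the program observes.
run :: Bool -> [RegAction] -> [Bool]
run _ []     = []
run s (a:as) = let (r, s') = register s a in r : run s' as
```

For instance, starting from state `False`, the sequence `[Get, SetTrue, Get]` is answered with `[False, True, True]`.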
Aspects provide a modular way to describe system concerns that cross-cut the normal modularity units, such as the class hierarchy in an object-oriented system. The separate description of aspects, and their subsequent integration (called weaving) into systems have the potential to provide new kinds of compositionality for complex system development. But how can we show that a collection of aspects really adds the functionality it is supposed to, and moreover does not invalidate desirable properties of the system to which the aspects are woven?
Here we suggest a methodology known as aspect validation, as an alternative both to traditional testing and to a full formal proof that the aspects are always correct. In aspect validation we prove separately that each new weaving of the aspects satisfies its specification and does not invalidate desirable properties of the underlying system. The approach uses an existing software model checker, and automatically generates several model checking tasks every time a collection of aspects is integrated into an underlying system. A collection of aspects that together augment a system, which we call a superimposition, is considered to be the most appropriate unit for modularity and specification. New "validation aspects" that augment systems with specifications are a crucial part of the approach, and thus show an interesting application of aspects for the purpose of validating other aspects.
Design patterns capture the essence of the structure of good designs.
The design patterns in Gamma et al. aim at flexibility with regard to change by introducing independence between parts of the software, exploiting OO concepts. With traditional specification methods these independencies do not carry over to proofs and specifications. We show that dynamic contracts (as introduced in the talk "A proof system with class invariants") can be used to preserve independencies.
Furthermore, design patterns are generic in that they accommodate various implementations and specifications - they capture a pattern. OO concepts can be used to achieve some of this genericity. Here dynamic contracts can be put to good use again, namely to provide a specification and proof on the level of the pattern rather than a specific instantiation.
The full genericity of the design pattern is currently achieved by informal descriptions. Ongoing research aims to formalize this genericity, for example by parameterizing specifications. Another approach is to describe the pattern at a higher level of abstraction. We show how to describe the observer pattern generically by using abstract states.
Using patterns in software development is becoming common practice. However, applying patterns may be difficult for at least two reasons. Firstly, most patterns incorporate some sort of crosscutting behavior. As a result, some features of patterns can be scattered over multiple elements of the applications. Generally, this hinders the traceability of patterns in software. Secondly, patterns generally define certain constraints, which have to be obeyed in the implementation. Due to scattering, however, enforcing the constraints of a pattern in software becomes a complex and error-prone task. Implementing patterns using aspect-oriented languages makes patterns too specific to the implementation context.
In this paper we propose an extension to the current pattern format with a language-independent crosscut specification. A role-model is used to represent the aspects of patterns, which are to be superimposed by the crosscut specification on the designated places in the implementation. The semantic constraints of a pattern can be expressed by its crosscut specification as well. This allows both tracing and verification of patterns in the implementation. The crosscut specification is formally specified, tested and implemented by using the composition-filters approach.
ISpec is a large-scale specification language that incorporates many different practical specification concepts such as types, pre-/postconditions, abstract sequential descriptions called action-clauses, several kinds of invariants and structural hierarchy. We show how all these concepts can be integrated into a single formal framework that is based on the calculus of (binary) relations.
The concepts are defined in such a way that we can simply use relation inclusion for the (partial) refinement relation that is considered natural. This means that input types and preconditions may be weakened, that output types, postconditions, action-clauses and invariants may be strengthened and that classes may be extended with new methods. The last of these required the development of a new relational construct called the conjoint sum. We show how the integration of pre-/postconditions and action-clauses enables us to define (partial) program correctness in terms of refinement, leading to a Hoare-like proof system without logical variables, purely based on relations. We also show how the conjoint sum enables us to define component composition in terms of refinement.
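The core idea — refinement as relation inclusion — can be made concrete with a toy model of finite relations; `refines` and `compose` below are illustrative names, not ISpec syntax.

```haskell
import qualified Data.Set as Set

-- A method modelled as a binary relation between input and output states.
type Rel a b = Set.Set (a, b)

-- Refinement is plain relation inclusion: the implementation exhibits
-- only behaviours permitted by the specification.
refines :: (Ord a, Ord b) => Rel a b -> Rel a b -> Bool
refines impl spec = impl `Set.isSubsetOf` spec

-- Relational composition, for composing method behaviours sequentially.
compose :: (Ord a, Ord b, Ord c) => Rel a b -> Rel b c -> Rel a c
compose r s = Set.fromList
  [ (x, z) | (x, y) <- Set.toList r, (y', z) <- Set.toList s, y == y' ]
```

A deterministic implementation that always maps 0 to 1 refines a nondeterministic specification that allows 0 to map to either 0 or 1, but not the other way around.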
Expressions in ISpec are evaluated non-strictly. A thorough investigation of the conjoint sum revealed several other interesting new constructs that enabled us to define a simple skeleton for non-strict, and even non-deterministic, expressions.
Recursion is incorporated into the framework using fixed point theory. When we extend the theory of refinement to the framework with recursion, we will see that binary relations are too abstract to serve as a model for systems with visible method calls. To tackle this issue, we introduce a generalisation of relations called protocols.
The Ideals project aims to develop a software design methodology that realizes the structured composition of software from separate modules, while handling system-wide interacting aspects of a problem domain. In this talk we will outline the research areas in the project and the co-operation with the industrial partner ASML.
In this talk we will argue for the need for research in the area of aspect mining, covering concepts, principles, methods and tools supporting the identification of aspects in object oriented software systems as well as the subsequent refactoring of such systems into aspect-oriented systems. We give an overview of the state of the art in this area which shows some of the research directions that have been considered up to now and describes ongoing efforts. We provide an initial assessment of how reverse engineering and software exploration techniques can help in aspect mining, identify promising research directions and pose a number of research questions that could help to advance the state of the art in aspect mining and refactoring.
This presentation is based on joint work with Marius Marin (Delft University of Technology) and Leon Moonen (Delft University of Technology and CWI).
We will give a general introduction in attribute grammars with the emphasis on enabling the separation of a computation over a tree into a set of different, loosely coupled aspects. In the second half of the talk we will explore three different ways in which we may make such aspect-descriptions first class citizens, so they can be constructed and combined dynamically.
In this second half we will make use of our work on "Typing Dynamic Typing". As an example we will show how one can use this mechanism to (re)define and extend not only the syntax of a language, but also its semantics without changing the core compiler. This can be done on a per program basis.
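A minimal sketch of the first idea — separately described aspects of a tree computation, combined dynamically into a single traversal — might look as follows in Haskell. This is an attribute-grammar-flavoured toy with illustrative names; real attribute grammars also support inherited attributes, which this sketch omits.

```haskell
data Tree = Leaf Int | Node Tree Tree

-- One "aspect" of a computation over a tree: what to do at a leaf,
-- and how to combine the results of the two children at a node.
type Alg a = (Int -> a, a -> a -> a)

runAlg :: Alg a -> Tree -> a
runAlg alg@(leaf, node) t = case t of
  Leaf n   -> leaf n
  Node l r -> node (runAlg alg l) (runAlg alg r)

-- Two loosely coupled aspects, described independently of each other.
minAlg :: Alg Int                       -- minimum of the leaf values
minAlg = (id, min)

depthAlg :: Alg Int                     -- depth of the tree
depthAlg = (const 1, \dl dr -> 1 + max dl dr)

-- Aspect descriptions are first-class values, so they can be
-- combined dynamically into one traversal computing both results.
combine :: Alg a -> Alg b -> Alg (a, b)
combine (la, na) (lb, nb) =
  ( \n -> (la n, lb n)
  , \(a1, b1) (a2, b2) -> (na a1 a2, nb b1 b2) )
```

Running the combined algebra over a tree computes the minimum and the depth in a single pass, even though the two aspects were written separately.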
As a result, we will provide the user with a framework in which he/she can define a set of syntactic macros, without the usual unfortunate effect that error messages resulting from compiling the transformed program are reported in terms of the transformed program instead of the original program.

Components for Language-Parametric Program Restructuring

Abstract of the talk given by Ralf Lämmel at the IPA autumn meeting. This talk relates to a joint CWI/VU project on Language-Parametric Program Restructuring.
Tools for program restructuring are crucial for life-cycle enabling. For example, refactoring tools help developers and maintainers to improve the design of their code. Also, in the context of software re-engineering, restructuring tools are needed to meet new requirements for a given code base, e.g., the migration from a text-based user interface to a GUI. Tool support is necessary to handle large code bases, where a manual approach does not scale.
The development of restructuring tools as well as reasoning about the provided transformations is challenged by the fact that there exists a myriad of languages or even language cocktails. This naturally leads to the question of whether reusable components or a kind of language-parametric framework can be identified. This would allow for the systematic construction of language-specific tools for program restructuring. The talk discusses exactly this concern.
At a more concrete level, generic programming in Haskell is employed for the specification of components for language-parametric program restructuring. This will clarify that the components of a language-parametric framework for program restructuring exhibit quite a rich structure. A typical transformation operator is highly parametrised to abstract from syntactical and semantical details.
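The idea of a parametrised transformation operator can be sketched in plain Haskell: a toy expression language, a generic `everywhere` combinator that applies a rewrite at every node, and one language-specific rewrite component. The names are our assumptions, in the spirit of generic traversal rather than any particular framework.

```haskell
-- A toy expression language standing in for a real syntax definition.
data Exp = Lit Int | Add Exp Exp | Mul Exp Exp
  deriving (Eq, Show)

-- A transformation operator parametrised by a node-level rewrite:
-- apply f bottom-up, everywhere in the term.
everywhere :: (Exp -> Exp) -> Exp -> Exp
everywhere f e = f (case e of
  Add l r -> Add (everywhere f l) (everywhere f r)
  Mul l r -> Mul (everywhere f l) (everywhere f r)
  _       -> e)

-- One reusable rewrite component: constant folding.
foldConst :: Exp -> Exp
foldConst (Add (Lit a) (Lit b)) = Lit (a + b)
foldConst (Mul (Lit a) (Lit b)) = Lit (a * b)
foldConst e = e
```

The point of the factoring is that `everywhere` knows nothing about constant folding and `foldConst` knows nothing about traversal; each can be reused with other partners.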
Reo is a paradigm for composition of software components based on the notion of mobile channels. It is a channel-based exogenous coordination model wherein complex coordinators, called connectors, are compositionally built out of simpler ones. The simplest connectors in Reo are a set of channels with well-defined behavior supplied by users.
Reo can be used as a language for coordination of concurrent processes, or as a "glue language" for compositional construction of connectors that orchestrate component instances in a component-based system. The emphasis in Reo is on connectors and their composition only, not on the entities that connect, communicate, and cooperate through these connectors. Each connector in Reo imposes a specific coordination pattern on the entities (e.g., components) that perform I/O operations through that connector, without the knowledge of those entities.
Channel composition in Reo is a surprisingly powerful mechanism for construction of connectors. We demonstrate the expressive power of connector composition in Reo through a number of examples. We show that exogenous coordination patterns that can be expressed as (meta-level) regular expressions over I/O operations can be composed in Reo out of a small set of only five primitive channel types.
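To give the flavour of channel composition, here is a deliberately naive sketch in Haskell that models a channel as a transformation of the finite sequence of data items passing through it. This abstracts away the timing and synchronisation constraints that real Reo connectors impose; the names are ours.

```haskell
-- A channel as a transformer of the sequence of items observed at its
-- source end into the sequence observed at its sink end.
type Channel a = [a] -> [a]

syncChan :: Channel a            -- Sync: passes items through unchanged
syncChan = id

fifo1 :: a -> Channel a          -- FIFO1 with initial content x:
fifo1 x xs = x : xs              -- each take yields the previously put item

filterChan :: (a -> Bool) -> Channel a   -- Filter: loses non-matching items
filterChan = filter

plug :: Channel a -> Channel a -> Channel a   -- connector composition
plug c1 c2 = c2 . c1
```

Plugging a filter channel into a FIFO1 already yields a small connector: `plug (filterChan even) (fifo1 0)` first discards odd items and then delivers them with a one-place buffer initialised to 0.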
We shall review the motivation underlying Farhad Arbab's Reo calculus of component connectors as circuits, (1) by briefly comparing component composition with digital circuits, and (2) by giving an elementary but entirely formal treatment of a related but much simpler class of circuits: signal flow graphs, in terms of coinductive stream calculus.
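The stream-calculus view of signal flow graphs transfers almost verbatim to a lazy functional language. The following sketch (our notation) defines streams coinductively, gives pointwise sum and the unit-delay register, and closes a feedback loop by guarded corecursion.

```haskell
-- Infinite streams of integers, defined coinductively (lazily).
data Stream = Cons Integer Stream

splus :: Stream -> Stream -> Stream        -- pointwise sum of two streams
splus (Cons x s) (Cons y t) = Cons (x + y) (splus s t)

reg :: Stream -> Stream                    -- register: one-step delay,
reg s = Cons 0 s                           -- initial content 0

-- A one-node signal flow graph with feedback: out = in + reg(out),
-- i.e. the running-sum circuit, well-defined because reg guards the loop.
sums :: Stream -> Stream
sums input = out where out = input `splus` reg out

takeS :: Int -> Stream -> [Integer]        -- observe a finite prefix
takeS n _ | n <= 0   = []
takeS n (Cons x s)   = x : takeS (n - 1) s

ones :: Stream                             -- the constant stream 1,1,1,...
ones = Cons 1 ones
```

Feeding the constant stream of ones into the feedback circuit produces the stream 1, 2, 3, 4, ..., exactly the behaviour the corresponding signal flow graph denotes in stream calculus.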