In this talk we will look at the motivations and rationale of OO, and why OO has failed to realize its original promises with respect to reuse and maintenance. Facing the challenges of the current Web-driven frenzy of software construction, we will discuss how classical notions of OO may be integrated with the new component-oriented technologies. Centering on the notions of software architecture and frameworks, we will delineate the research issues for the next century, as another attempt at taming the forces underlying the ever-present software crisis.
Based on Principles of OO Software Development (2nd edn, to appear with Addison-Wesley, 2000). Online at: http://www.cs.vu.nl/~eliens/online/talks/ipa99/contents.html
Jini is a framework for connecting distributed services implemented in hardware or software. The talk will cover the key concepts of Jini, including the main protocols and services. Jini provides protocols for discovering, joining, and looking up services. Services available in Jini include leasing, distributed events, transactions, and JavaSpaces.
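As an aside, the leasing idea can be illustrated with a toy registry (a hypothetical Python sketch; all names here are made up, and this is not the actual Jini API): a service joins a lookup service for a limited lease period and disappears unless it renews the lease, so a crashed service that stops renewing is cleaned up automatically.

```python
import time

class Registry:
    """A toy lookup service: services register under a name with a lease."""
    def __init__(self):
        self._services = {}  # name -> (service proxy, lease expiry time)

    def join(self, name, service, lease_seconds):
        # A service joins the registry for a limited lease period only.
        self._services[name] = (service, time.monotonic() + lease_seconds)

    def renew(self, name, lease_seconds):
        # Leaseholders must renew before expiry to remain registered.
        service, _ = self._services[name]
        self._services[name] = (service, time.monotonic() + lease_seconds)

    def lookup(self, name):
        # Expired leases are purged lazily: a service that stopped
        # renewing (e.g. because it crashed) simply disappears.
        entry = self._services.get(name)
        if entry is None:
            return None
        service, expiry = entry
        if time.monotonic() > expiry:
            del self._services[name]
            return None
        return service

registry = Registry()
registry.join("printer", "printer-proxy", lease_seconds=0.05)
assert registry.lookup("printer") == "printer-proxy"
time.sleep(0.1)                       # lease expires without renewal
assert registry.lookup("printer") is None
```

The point of the lease is that liveness is the default obligation of the service, not of the registry: no failure detection protocol is needed to remove dead services.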
XML is a metalanguage for the markup of documents. The structure of a class of documents may be described by a Document Type Definition (DTD), which is akin to an unambiguous context-free grammar. Parsing a document according to its DTD results in a tree structure, which may be stored externally and accessed through formally defined access methods, described by the Document Object Model (DOM).
Tags in documents may be supplemented by "attributes" with constants as values. Components of DTDs and documents are referenced by "entities", which may also describe other varieties of data, such as graphical data and links to positions within other documents or objects.
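To illustrate (a minimal sketch using Python's standard xml.dom.minidom module; the document, element, attribute, and entity names are all made up): parsing a document with an internal DTD yields a DOM tree, through which elements, attributes, and expanded entities can be accessed.

```python
from xml.dom.minidom import parseString

# A document whose internal DTD declares an element, an attribute
# with a default value, and a general entity.
doc_text = """<?xml version="1.0"?>
<!DOCTYPE note [
  <!ELEMENT note (#PCDATA)>
  <!ATTLIST note lang CDATA "en">
  <!ENTITY who "Alice">
]>
<note lang="nl">Hello &who;</note>"""

dom = parseString(doc_text)            # parsing yields a tree structure
note = dom.documentElement             # root of the document tree
print(note.tagName)                    # -> note
print(note.getAttribute("lang"))       # -> nl
# The internal entity reference &who; is expanded during parsing.
text = "".join(child.data for child in note.childNodes)
print(text)
```

Note that minidom is a non-validating parser: it processes the internal DTD subset (entity definitions, attribute defaults) but does not check the document against the grammar the DTD defines.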
XML is a recommendation of the W3C for the exchange of documents on the WWW. Accompanying (drafts of) recommendations describe the rendering (XSL), transformation (XSLT), and linking (XLL) of documents. All of these may be regarded as (more or less) subsets of comparable ISO standards (SGML, DSSSL, and HyTime), of which SGML has been widely used. Two of the design goals of SGML were to provide facilities (1) to separate logical structure from layout and (2) to let organizations define their own logical structure and markup of documents.
Because of the relation between DTDs and context-free grammars, there has always been an interest in applying core computer-science methodology, e.g. to parsing, parser generation, transformation, querying, inferencing, and revision detection. Currently the W3C is soliciting ideas for a recommendation on query languages for XML and related data structures. Also, the further integration of XML and its companions with standards for the infrastructure of the WWW continuously provokes new standardization activities.
The XML (and SGML) world is well organised. Newcomers are advised to start browsing with http://www.oasis-open.org/cover/sgml-xml.html.
In mission-critical systems, fault tolerance is often one of the primary requirements to ensure a sufficiently high level of reliability, availability, and safety during system operation. Replication of software components on commercial off-the-shelf hardware is a generally applicable and cost-effective way of implementing fault tolerance. However, guaranteeing correct system behaviour in an environment where replicated software components exist and may fail at arbitrary moments is a difficult problem that is often solved using application-specific techniques. Failing components may either produce erroneous data, for instance due to faulty communication links, or stop producing data altogether, because the component has crashed. In this paper, we focus in particular on replication techniques to recover from or mask crash failures.
We present a new software architecture in which replication of components and recovery from crash failures are transparent. The architecture is based on a shared data space through which components interact. We define the architecture by a denotational semantics in which the possible behaviours of a component are defined by a set of state-transition traces. Using the semantics, we derive some fundamental algebraic properties of the architecture that reflect transparent replication and crash recovery.
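A toy sketch of the idea (illustrative only; the paper defines the architecture by a denotational semantics, not by code like this): because components interact only through the shared data space, an identical write by a second active replica is harmless, and a replica restarted after a crash finds all relevant state in the space.

```python
class DataSpace:
    """A toy shared data space: components interact only via read/write."""
    def __init__(self):
        self._items = {}

    def write(self, key, value):
        # Writing the same (key, value) pair twice is idempotent, so two
        # active replicas producing identical results do not disturb
        # each other.
        self._items[key] = value

    def read(self, key, default=None):
        return self._items.get(key, default)

class Doubler:
    """A component that doubles its input; all state lives in the space."""
    def __init__(self, space):
        self.space = space

    def step(self):
        x = self.space.read("in")
        if x is not None:
            self.space.write("out", 2 * x)

space = DataSpace()
primary, backup = Doubler(space), Doubler(space)   # active replication
space.write("in", 21)
primary.step()
backup.step()             # the replica's identical write is harmless
assert space.read("out") == 42

# Crash recovery: a freshly created replica finds all state in the space,
# so no state transfer between primary and back-up is needed.
recovered = Doubler(space)
recovered.step()
assert space.read("out") == 42
```

The sketch shows why no explicit synchronisation between replicas is needed in this style: consistency lives in the data space, not in the components.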
More conventional approaches towards replication for fault-tolerance usually assume a message-passing (or client-server) model. Over the past years group communication has been developed as a framework in which replication of services is transparent to client applications. However, maintaining consistency among replicated services, using either active, passive, or semi-active replication, requires the implementation of complicated communication protocols.
In the architecture that we introduce, replication comes for free. Consequently, fault-tolerant services can be implemented fairly easily and with low run-time overhead, which is particularly important for time-critical applications. Synchronisation, either between active copies or between a primary process and its back-ups, is not required. Furthermore, the implementation does not depend on a perfect failure detector, because correct system behaviour is still retained if, in the case of passive or semi-active replication, one of the back-up processes is activated while the primary process is still running.
We refine the notion of embedding as a formal tool to compare the relative expressive power of different languages, by also taking into account the intended architectures on which the languages are executed. The new notion, called architectural embedding, is suitable for the comparison of different communication mechanisms, and gives rise to a natural notion of implementability. We will use this notion to present equivalence and difference results for several coordination models based on components that communicate either through an unordered broadcast, through an atomic broadcast, or through a synchronous broadcast.
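To give a flavour of the difference between the broadcast variants (an illustrative Python sketch, not the talk's formalism): with two senders, each receiver must see every sender's own messages in order, but an unordered broadcast allows each receiver its own interleaving of the two streams, whereas an atomic broadcast forces all receivers to agree on one and the same interleaving.

```python
def interleavings(a, b):
    """All delivery orders that preserve each sender's own message order."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    return ([[a[0]] + rest for rest in interleavings(a[1:], b)] +
            [[b[0]] + rest for rest in interleavings(a, b[1:])])

sender_a, sender_b = ["a1", "a2"], ["b1"]
possible = interleavings(sender_a, sender_b)
# Unordered broadcast: receivers may each observe a *different* order
# from this set. Atomic broadcast: all receivers observe the *same*,
# single order drawn from this set.
print(len(possible))   # -> 3
```

The expressiveness questions of the talk can be read as asking when one such set of admissible behaviours can be implemented on an architecture that only offers another.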
The basic principles of compositional verification are presented. A formal verification method is called compositional if it allows reasoning with the specifications of components without knowing their implementation. Whereas classical a posteriori verification requires the complete program text, compositionality makes it possible to verify design steps during the process of software development. Components can be (re)used on the basis of a formal specification of their external behaviour. The approach will be illustrated by a few simple examples of distributed real-time systems.
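As an illustration (a generic shape, not the talk's specific formalism), a typical compositional proof rule for parallel composition looks like:

```latex
\frac{C_1 \ \mathrm{sat}\ \varphi_1 \qquad
      C_2 \ \mathrm{sat}\ \varphi_2 \qquad
      \varphi_1 \wedge \varphi_2 \rightarrow \psi}
     {C_1 \parallel C_2 \ \mathrm{sat}\ \psi}
```

A property \(\psi\) of the composed system is derived from the component specifications \(\varphi_1\) and \(\varphi_2\) alone; the implementations of \(C_1\) and \(C_2\) are never inspected, which is exactly what allows verification of design steps before the code exists.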
The interface of a component defines the component's access points, and describes how it connects to other components. Specification (and verification) of interfaces is of utmost importance for components, because components are usually developed independently, without contact between provider and client. Also, when components are revised, they should continue to satisfy the same interface specification, in order to guarantee smooth upgrading. In practice no distinction is made between component interfaces and object interfaces. Therefore, the work in the LOOP group at Nijmegen University on the specification and verification of classes (with proof tools) is also relevant for components. This talk will present a discussion of interface specification for components, and also an overview of the LOOP project. It will conclude with a demo on verification of Java classes using the proof tool PVS.
For information on the LOOP project, see the URL: http://www.cs.kun.nl/~bart/LOOP/index.html
The use of components in software development is becoming widely accepted. Still, many of the components on the marketplace are limited to user-interface elements and are not aimed at solving specific business problems. FinCom+ is a company that made a well-considered choice to develop and market business components for use in the financial world. Based on practical experience, this presentation will explain why FinCom+ decided to develop these components, how developing components differs from developing a "regular" software application, and how FinCom+ assists its customers in ensuring that the components are deployed in the most effective way.
Within Philips, component-based development is taking place in a wide variety of product divisions, ranging from very complex, professional systems to only slightly less complex consumer electronics. Component-based development is not a goal in itself, but a means towards the cost-effective development of product families. In order to make full use of the possibilities that components offer, several aspects of the development need to be optimized towards them. This presentation will sketch how domain analysis, reference architectures, and decoupled development can work together to realize the promise of components.
This presentation stresses the importance of conceptual integrity in component-based software development, identifies common traps in achieving conceptual integrity, and explores means to overcome these.
It seems to be a common opinion that (1) objects/components that support encapsulation and (2) a suitable component infrastructure (including a standardized interface language) are necessary and sufficient conditions for the successful assembly of systems from standard components. This presentation shows that there is a third necessary condition: conceptual integrity.
Conceptual integrity of a system basically means that the system appears to have been conceived by a single mind: it is a "clean" system. This implies that the system conforms to general quality criteria, such as orthogonality, consistency, generality, and parsimony. A clean system is as easy as possible to develop, maintain, adapt, and use.
On the basis of a case study (component-based development at an insurance company), this presentation identifies three traps that may prevent conceptual integrity: use of standard packages, wrapping, and use of standard middleware platforms. The presentation analyses the causes and consequences of these traps.
Finally, the presentation identifies ways to achieve conceptual integrity in component-based development. These ways are related to current market developments and scientific research.