Research within IPA concentrates on Algorithmics and Complexity, Formal Methods, and Software Engineering and Technology. Within these larger fields IPA focusses on areas in which it expects important developments will happen in the near future, the so-called focus areas.
The current focus areas are listed below; click on a focus area to reveal a more detailed description.
- Real-World Algorithmics and Models. Computer science nowadays plays a pivotal role in life sciences, natural sciences, and other scientific disciplines, and in almost all technological developments. An important reason for this is the computing power that has become available for widespread use: standard laptops and desktops already have enormous computing power, and grids and supercomputers even more so. At the same time, however, the tasks that computers have to perform become more and more complicated, the desired accuracy of the computations becomes greater and greater, and the amount of data that needs to be processed and analysed explodes. Hence, many real-world problems still cannot be solved in a satisfactory manner. The theme Real-World Algorithmics and Models aims at applying the strengths of IPA in Algorithms and Complexity and in Formal Methods to increase our understanding of algorithms and models for real-world problems. Research in this direction includes the following topics.
- Spatial data. Modelling and analysing real-world phenomena often involves spatial data, usually in 2-dimensional or 3-dimensional space. There is an increased need to process and analyse such data. For example, with the advance of GPS and other tracking devices, more and more trajectory data — data describing the movement of people, cars, animals, etc — becomes available. We wish to develop new algorithmic techniques for the efficient processing of trajectory and other spatial data. An important question in this research is: how can we quantify the properties of real-world spatial data, and how can we exploit those properties when designing our algorithms? We also want to study optimisation problems on networks: how can we exploit the properties of, say, road networks to solve real-world optimisation problems more efficiently than is possible on abstract graphs?
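To make the trajectory theme concrete, the sketch below implements the classical Ramer-Douglas-Peucker simplification, which compresses a GPS track by dropping points that deviate little from its overall shape. This is a minimal illustrative Python sketch (the function names and the tolerance value are our own), not a description of IPA's actual algorithms.

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def simplify(track, eps):
    """Ramer-Douglas-Peucker: drop points that deviate less than eps
    from the line between a segment's endpoints."""
    if len(track) < 3:
        return list(track)
    # Find the point farthest from the chord between the endpoints.
    i_max, d_max = 0, 0.0
    for i in range(1, len(track) - 1):
        d = _point_line_dist(track[i], track[0], track[-1])
        if d > d_max:
            i_max, d_max = i, d
    if d_max <= eps:
        return [track[0], track[-1]]
    # Keep the farthest point and recurse on the two halves.
    left = simplify(track[:i_max + 1], eps)
    right = simplify(track[i_max:], eps)
    return left[:-1] + right

# A noisy, nearly straight GPS trace collapses to its endpoints:
trace = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0)]
print(simplify(trace, 0.5))  # [(0, 0), (4, 0)]
```

The tolerance `eps` is exactly the kind of parameter whose right value depends on properties of the real-world data, as discussed above.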
- Quantum computing. Despite the computing power currently available, there are still many important problems that cannot be solved efficiently. In fact, several of these problems — so-called NP-complete problems — are expected to be inherently unsolvable in reasonable (that is, polynomial) time. Sometimes this is unfortunate, but in other cases this can be used to our advantage. For example, security protocols rely on the fact that certain algorithmic problems are not solvable efficiently by traditional computers. Quantum computers may change all of this: with a quantum computer it may in the future be possible to solve various real-world problems that are out of reach with traditional computers. IPA research in this direction includes the investigation of the recently discovered link between combinatorial optimisation and quantum computing, and quantum network security (in particular position-based cryptography).
- Life sciences. Research in life sciences has changed dramatically over the past decades. Computer science now plays a major role in this area, in particular in the modelling and the simulation of biological processes. One of the great challenges in the study of biological systems is to understand them to such an extent that predictions can be made as to how they will react under certain circumstances. To this end, one has to model the system under study (with the help of the analysis of the available experimental data). We want to continue our investigation into how formal methods, for example from concurrency theory, may provide a good way of doing this. We also want to investigate efficient (parallel) algorithms that are needed to perform accurate simulations of biological processes; dealing with varying time scales is one of the challenges in this area.
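As a concrete illustration of simulating a biological process, the sketch below implements Gillespie's stochastic simulation algorithm for a hypothetical birth-death process (e.g. production and decay of a molecule). The rate constants and function names are invented for illustration only.

```python
import random

def gillespie(x0, birth, death, t_end, seed=0):
    """Gillespie's stochastic simulation algorithm for a birth-death
    process: molecules appear at rate `birth` and each decays at rate
    `death`. Returns the sampled (time, count) trajectory."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        rates = [birth, death * x]       # propensities of the two reactions
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)      # time until the next reaction event
        # Pick which reaction fires, weighted by its propensity.
        if rng.random() * total < rates[0]:
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

traj = gillespie(x0=0, birth=5.0, death=1.0, t_end=50.0)
# With birth rate 5 and per-molecule death rate 1, the molecule count
# fluctuates around the stationary mean birth/death = 5.
```

The varying-time-scales challenge mentioned above shows up here directly: when propensities differ by orders of magnitude, the exponential waiting times become tiny and naive simulation grinds to a halt.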
These topics are well aligned with the Dutch Top Sectors. For instance, the analysis of movement data plays an important role within our research on algorithms for spatial data, and this fits in well with the topics Monitoring and sensor networks and eLogistics from the ICT Roadmap for the Top Sectors. Life sciences itself is, in fact, one of the top sectors, so our research in this area is also well aligned with the national agenda. Our research on quantum computing is more long-term and speculative, but if successful it may have tremendous impact in many areas.
- Cyber-Physical Systems. The design of the next generation of high-tech systems requires a tight coordination between computation, communication and control elements (the cyber part) on the one hand, and physical processes such as heating, cooling, motion, vibrations, etc (the physical part) on the other hand. Examples of these so-called cyber-physical systems (CPS) are professional systems for medical imaging, lithography, smart electricity grids, intelligent transportation, electron microscopy, high-end printing, and sensor networking systems (monitoring and control, personal health care, maintenance systems). The corresponding scientific disciplines for designing cyber-physical systems have predominantly developed independently. This separation needs to be bridged urgently, as the design of CPSs faces major challenges: (i) exponential growth in complexity due to increasing numbers of sensors and actuators, increased performance requirements, increased (multi-)processing, increased network connectivity, and increased interaction between systems, (ii) increased uncertainty due to unreliable components and changing conditions in the physical environment. Generally, cyber-physical systems are ubiquitous in modern technological systems and should function in a dependable and safe way. Therefore the societal importance of this research can hardly be overestimated. The many and multifaceted scientific challenges can be naturally organised into three areas:
- Modelling and Simulation. Here the main quest is to create modelling formalisms that are applicable on an industrial scale. Such formalisms should have the flexibility to be turned into domain-dependent methods; compositionality is an important issue. Furthermore, clear relations and mappings should be established with other models like functional, timed, and stochastic models. Both mappings and formalisms should be supported by modelling and simulation tools.
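A minimal flavour of hybrid (cyber-physical) modelling and simulation: the sketch below combines continuous temperature dynamics with the discrete on/off modes of a thermostat controller, the standard textbook example of a hybrid system. All constants and names are invented for illustration.

```python
def simulate_thermostat(temp0, t_end, dt=0.01):
    """Hybrid model of a heater: discrete modes ('on'/'off') switching
    on top of continuous temperature dynamics (forward-Euler flow)."""
    temp, mode, t = temp0, "off", 0.0
    trace = []
    while t < t_end:
        # Continuous flow: relax toward 30 when heating, 10 when off.
        target = 30.0 if mode == "on" else 10.0
        temp += (target - temp) * dt
        # Discrete jumps: hysteresis keeps the heater from chattering.
        if mode == "off" and temp <= 19.0:
            mode = "on"
        elif mode == "on" and temp >= 21.0:
            mode = "off"
        t += dt
        trace.append((t, mode, temp))
    return trace

trace = simulate_thermostat(temp0=15.0, t_end=20.0)
# After a transient, the temperature stays inside the hysteresis band.
assert all(18.0 <= temp <= 22.0 for _, _, temp in trace[500:])
```

The interplay visible even in this toy — continuous flows, guarded mode switches, hysteresis to avoid Zeno-like chattering — is exactly what industrial-scale CPS formalisms must capture compositionally.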
- Analysis. The desired dependability of cyber-physical systems asks for mature analysis techniques that can assess key properties like correctness, stability, and robustness. A central topic in verification is the support of model checking by abstraction techniques that are parameterised by both model features and property domains.
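As a small illustration of the kind of analysis meant here, the sketch below performs explicit-state reachability checking of a safety property, returning a counterexample trace if a bad state is reachable. The model and all names are hypothetical; real model checkers add the abstraction techniques discussed above to cope with large state spaces.

```python
from collections import deque

def check_safety(init, step, bad, max_states=10**6):
    """Explicit-state reachability: explore all states reachable from
    `init` via `step` and report a counterexample path to a `bad` state,
    or None if the property holds on the explored state space."""
    parent = {s: None for s in init}
    queue = deque(init)
    while queue:
        s = queue.popleft()
        if bad(s):
            # Reconstruct the counterexample trace back to an initial state.
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in step(s):
            if t not in parent and len(parent) < max_states:
                parent[t] = s
                queue.append(t)
    return None

# Toy model: a counter mod 8 that may step by 1 or 2 and must never reach 5.
init = [0]
step = lambda s: [(s + 1) % 8, (s + 2) % 8]
print(check_safety(init, step, bad=lambda s: s == 5))  # [0, 1, 3, 5]
```

Breadth-first exploration guarantees the returned counterexample is a shortest one, which is what makes such traces useful for debugging models.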
- Control. Synthesis and verification of controllers become major technical challenges in the presence of discontinuities, distribution, the interplay between discrete and continuous aspects, and stochastic aspects. Hybrid controllers will have to address various (and often simultaneous) control objectives like functionality, stability, robustness, and optimality. Game theory is expected to be a promising paradigm in this field.
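The game-theoretic view of control can be illustrated with a safety game: the controller must keep the system inside a safe region whatever the environment does. The sketch below computes the controller's winning region as a greatest fixpoint; the arena and all names are invented for illustration.

```python
def winning_region(states, owner, succ, safe):
    """Safety game solver (greatest fixpoint): shrink the candidate set
    until every controller-owned state keeps some successor inside it
    and every environment-owned state keeps all successors inside it."""
    w = {s for s in states if safe(s)}
    changed = True
    while changed:
        changed = False
        for s in list(w):
            succs = [t for t in succ(s) if t in w]
            ok = bool(succs) if owner(s) == "ctrl" else len(succs) == len(succ(s))
            if not ok:
                w.discard(s)
                changed = True
    return w

# Toy arena: positions 0..3, moves to adjacent positions; position 3 is
# unsafe and the environment owns position 2.
states = [0, 1, 2, 3]
succ = lambda s: [t for t in (s - 1, s + 1) if 0 <= t <= 3]
owner = lambda s: "env" if s == 2 else "ctrl"
print(sorted(winning_region(states, owner, succ, safe=lambda s: s != 3)))
# [0, 1]: from position 2 the environment can force a move into 3.
```

From inside the winning region a controller strategy is simply "pick any successor that stays in the region", which is the synthesis step the text refers to.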
The Netherlands plays a leading role in the CPS domain through its very strong position in embedded and high-tech systems. The topic of CPS plays an important role within the Dutch top sectors: it occurs prominently within the Embedded Systems roadmap of HTSM and the ICT roadmap, and also within the Smart Grids roadmap for the top sector Energy. The IPA Focus Area CPS has close links with the IPA Focus Area on Real-World Algorithmics, since algorithm design is crucial for CPS.
- Security. As our dependence on the ICT infrastructure increases, so do concerns about its security. The growing complexity and interconnectivity of ICT systems imply that bugs and vulnerabilities are harder to avoid, creating new opportunities for increasingly sophisticated attackers. Indeed, cyber security and privacy issues are no longer limited to traditional computer systems, such as PCs and laptops. Rather, they surface everywhere an IT infrastructure is present, from electricity and water supply systems to the health service, from public transport to smart cars, from implants to supply chains, and from banking and logistics to the emergency services. Nowadays, “Computer” Security is a multidisciplinary area in which technical questions are deeply interleaved with societal, ethical, legal, behavioural and governance aspects. The focus area ‘Security’, which aims at further developing the strengths of IPA and at stimulating interaction with other disciplines, can be naturally organised into two application areas:
- Security and Trust of Citizens. This includes privacy protection, security of mobile services, data and policy management, and accountability.
- Security and Trustworthiness of Infrastructure. This includes malware detection and removal, intrusion detection and prevention, trustworthiness of networks and hardware, software security, security of SCADA/industrial control systems (ICS), and secure operating systems.
Underlying research areas of particular interest for IPA include: formal aspects of security (including security and privacy models), cryptography (including secret sharing, fast public-key cryptography, cryptanalysis), network security (intrusion detection, malware), access control and trust management, software security (design of secure code, code analysis, security types), security testing (fuzzing, penetration testing, etc), data security (searching encrypted data, proxy re-encryption, and other forms of privacy-preserving operations on sensitive data), and cyber security (measuring the effectiveness and the risk of security technology).
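As a small illustration of one of the cryptographic topics listed above, the sketch below shows additive secret sharing over a prime field: a secret is split into n shares, all of which must be combined to recover it, while any n-1 shares reveal nothing. Function names and the choice of prime are ours.

```python
import random

P = 2**61 - 1  # a Mersenne prime; shares live in Z_P

def share(secret, n, rng=None):
    """Additive secret sharing (sketch): n-1 uniformly random shares plus
    one chosen so that all n shares sum to the secret mod P."""
    rng = rng or random.SystemRandom()
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    # Only the sum of *all* shares recovers the secret.
    return sum(shares) % P

shares = share(42, n=5, rng=random.Random(0))
assert reconstruct(shares) == 42
```

Threshold schemes such as Shamir's (any k of n shares suffice) refine this idea with polynomial interpolation; the additive variant above is the simplest case and a common building block in privacy-preserving computation.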
These topics are fully in line with the (Dutch) National Cyber Security Research Agenda, the Security area in the ICT Roadmap, and the Security Roadmap within the HTSM top sector (Cyber-security chapter). The need for cyber-security research is also reflected in the emphasis it receives in the FP8 ICT programme and in the FP8 Security programme.
- Model-Driven Software Engineering. Our contemporary society would stop functioning without software. High-tech systems, embedded systems, communication technology, web-based systems, and information systems are invaluable assets of our modern society. There are hardly any products that do not contain software. On the one hand, this dependence on software places high demands not only on the overall correctness of systems, but also on the efficiency of software production. On the other hand, as the complexity of products continuously increases, the gap between the specification of system requirements and their implementation grows dramatically. Consequently, construction and implementation at such a magnitude cannot be performed efficiently and effectively using conventional techniques and general-purpose modelling approaches.

Model-Driven Software Engineering is an answer to this increasing demand for more efficient and effective development and maintenance of correct complex systems. Models provide flexibility and raise the level of abstraction: model analysis provides assurance of correctness, while model transformation enables model reusability and interoperability, as well as generation of executable code and synthesis of system implementations.

Model analysis based on formal methods has become mature, but is hardly adopted by industry due to its steep learning curve. Formalisms such as UML have been adopted by industry because of strong modelling tooling, but soon proved restrictive because of the broad range of diagrams, the lack of proper semantics, and the fact that they are too general-purpose. Industry is currently exploring domain-specific UML-like languages for designing its software artefacts, tailored to the specific application area and the needs of a particular domain. Domain-specific language engineering and domain-specific modelling offer the opportunity to integrate formal methods into industrial software engineering. We can identify the following main research challenges:
- The design and implementation of domain-specific languages is one of the major challenges. It is evident from current practice that very often only the syntactic part is properly addressed. The lack of a proper static and dynamic semantics for a DSL leads to ambiguities and subjective interpretation of the language, and therefore of models. Given the large number of people involved in product design and development, of software as well as hardware, inconsistencies are likely to occur between models used in various phases of production, or even between models used and developed by experts in the same domain. Furthermore, the definition of a good syntax and a proper semantics is necessary for defining proper model transformations and for performing model analysis. Language workbenches that support DSL design, development, maintenance and use have to provide seamless usage of formal model analysis tools and code generators. Furthermore, we are witnessing situations in which a number of DSLs for different domains are used in production. Coming up with a single DSL that covers all aspects of modern system design goes against the core idea of the domain-specific approach, and is therefore undesirable. Since different domain experts are involved in the design of modern systems, different DSLs have to be used in a consistent way to express different aspects. It is therefore necessary to bridge different DSLs in a semantically sound manner.
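A toy illustration of the syntax/semantics point: the sketch below defines a hypothetical one-transition-per-line state-machine DSL, a static semantic check, and an executable (dynamic) semantics. Everything here, including the language itself, is invented for illustration.

```python
def parse_fsm(src):
    """Parse a tiny hypothetical state-machine DSL with one transition
    per line, written as:  state -> event -> state"""
    trans = {}
    for line in src.strip().splitlines():
        s, event, t = (part.strip() for part in line.split("->"))
        if (s, event) in trans:
            # Static semantics: reject ambiguous (nondeterministic) models.
            raise ValueError(f"ambiguous transition on {event!r} in {s!r}")
        trans[(s, event)] = t
    return trans

def check_closed(trans):
    """Static semantic check: every target state must have outgoing
    transitions or be the explicit final state (here: 'done')."""
    sources = {s for s, _ in trans}
    for (s, event), t in trans.items():
        if t != "done" and t not in sources:
            raise ValueError(f"state {t!r} is reachable but undefined")

model = """
idle -> start -> running
running -> stop -> idle
running -> finish -> done
"""
trans = parse_fsm(model)
check_closed(trans)  # passes: the model is well-formed

def run(trans, state, events):
    # Dynamic semantics: fold the event sequence over the transition map.
    for e in events:
        state = trans[(state, e)]
    return state

print(run(trans, "idle", ["start", "stop", "start", "finish"]))  # done
```

Even in this toy, the static checks catch exactly the ambiguities and inconsistencies the text warns about; without them the "meaning" of a model would depend on whoever reads it.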
- Model transformations have become important artefacts in the software life-cycle. If a design model is proven correct, then the correctness of the model transformations guarantees the correctness of any model obtained by transformation of the design model. Thus, one of the challenges with respect to model transformations is the formal verification of their correctness. Furthermore, model transformations themselves have to be maintained, so quality, and measuring the quality, of model transformations is an area of great importance. Today, most systems are engineered in an evolutionary fashion: a new version of an existing product, new features, a design iteration, etc. The models evolve with the system. However, model (and code) reusability, though usually presented as a great advantage of MDSE, is not used to its full potential in practice, to a large extent due to the lack of proper model-based engineering methodologies that support evolutionary development. In particular, there are hardly any techniques for the reuse of model analysis results. In practice, any new (re-)design is analysed (simulated, tested or verified) from scratch.
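One lightweight way to gain confidence in the correctness of a model transformation is to check that it preserves semantics on many generated models. The sketch below does this for a hypothetical constant-folding transformation on expression trees; note that this is random testing, a weaker guarantee than the formal verification the challenge calls for. All names are ours.

```python
import random

def fold(e):
    """A tiny model transformation (hypothetical): constant-fold an
    expression tree of nested ('+', l, r) tuples, int and variable leaves."""
    if isinstance(e, (int, str)):
        return e
    op, l, r = e
    l, r = fold(l), fold(r)
    if isinstance(l, int) and isinstance(r, int):
        return l + r                     # both operands known: fold them
    return (op, l, r)

def evaluate(e, env):
    # Reference semantics of the source (and target) model language.
    if isinstance(e, int):
        return e
    if isinstance(e, str):
        return env[e]
    _, l, r = e
    return evaluate(l, env) + evaluate(r, env)

def rand_expr(rng, depth):
    if depth == 0:
        return rng.choice([rng.randint(0, 9), "x", "y"])
    return ("+", rand_expr(rng, depth - 1), rand_expr(rng, depth - 1))

# Semantics preservation: source and transformed model agree on all inputs.
rng = random.Random(0)
env = {"x": 3, "y": 4}
for _ in range(1000):
    e = rand_expr(rng, 3)
    assert evaluate(fold(e), env) == evaluate(e, env)
```

A formal proof would instead show, by induction on the tree, that `evaluate(fold(e), env) == evaluate(e, env)` for every expression and environment.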
- The rapid development of multicore systems also poses challenges for the MDSE community. A key question is whether behavioural properties are preserved when models are translated to multicore architectures. The translation itself is not trivial and may introduce undesired behaviour; this is related to the research challenge of the correctness of model transformations.
These topics are well aligned with the Dutch Top Sector High-Tech Systems and Materials. They are also closely related to topics in the focus area Software Analysis.
- Software Analysis. The ever increasing dependence of our society on software systems, and the high-profile failures of some of those systems, make it mandatory to invest in techniques that can measure various quality attributes of software in order to prevent such failures. Quality software can be characterised in many ways. Of course, software should first and foremost satisfy its functional requirements, but there are many non-functional quality aspects as well. For instance, availability, modifiability, performance, security, testability and usability are also important qualities of software. For each of these it is desirable to have analysis and measurement instruments to determine to what extent a software system satisfies them.

Analysis starts with extracting relevant facts from either the source code or other available sources of meta-data, such as versioning systems, bug repositories, execution traces or test runs. This is a detailed step that largely depends on the programming languages used during the construction of the source code, on the actual execution platform, and on the forms of analysis that are required. It is not uncommon that several languages have been used, and in those cases cross-language fact extraction and analysis are necessary. This is also the case when the results of different run-time monitoring tools have to be combined.

After fact extraction, direct analysis of these facts can be done for certain applications, such as call graph analysis, dead code removal, analysis of information flows, classification of execution patterns and the like. For other applications a more abstract model has to be constructed that describes certain domain-specific properties of the system implemented by the source code. This area includes, for instance, model-based testing, verification, protocol analysis, deadlock analysis, and the determination of certain security and performance properties.
It is crucial to guarantee that the abstract model faithfully represents relevant properties of the original source code. Achieving scalability and usability of the involved validation techniques is a major challenge.

A final aspect to consider is the way in which the results of software analysis are presented. It is important to develop new methods for information visualisation that will increase the usability of software analysis.

Software analysis is essential for gaining insight into the quality aspects of software. This is true for old software that has evolved over the years, but also for new software that has to be accepted from a third party. Software analysis will therefore become more and more important as the outsourcing or resourcing of software remains popular. It is also important in the area of open source software, where there is typically not a single producer that can be held responsible, but where (by the very nature of the development process) the source code itself is available, providing extensive opportunities for analysis. Software analysis may reveal certain defects in software that have to be repaired. It is therefore also a prerequisite for refactoring, transformation, renovation and other forms of software improvement. Research challenges are:
- Analyzing heterogeneous, cloud-based, systems.
- Obtaining language-parametric analysis techniques that can be used across different languages and execution platforms.
- Scaling advanced analysis techniques to industrial systems.
- Combining the results of static and dynamic analysis.
- Analyzing dynamically typed languages that are widely used in web-development.
- Seamless integration of software analysis and transformation.
- Obtaining software visualisations that give insight into the properties of a software system.
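As a small taste of fact extraction and direct analysis, the sketch below builds an approximate static call graph from Python source using the standard `ast` module and uses it to flag dead-code candidates. The heuristics are deliberately naive; this is an illustration, not a production analysis.

```python
import ast

def call_graph(source):
    """Approximate static call graph: map each top-level function to the
    names it calls. Only direct `name(...)` calls are resolved; methods
    and higher-order calls are out of scope for this sketch."""
    tree = ast.parse(source)
    graph = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = calls
    return graph

src = """
def helper(x):
    return x + 1

def unused():
    pass

def main():
    print(helper(41))
"""
graph = call_graph(src)
print(sorted(graph["main"]))         # ['helper', 'print']
# Dead-code candidates: defined functions never called from elsewhere
# (entry points like main show up too, so a real tool would whitelist them).
called = set().union(*graph.values())
print(sorted(set(graph) - called))   # ['main', 'unused']
```

This illustrates why fact extraction is language-dependent: the same "who calls whom" question requires a different front end, and different resolution rules, for every language in a multi-language system.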
Software Analysis is of general relevance for the Dutch Top Sectors High-Tech Systems and Materials, Energy, Logistics and Creative Industry. ICT is cross-cutting and software analysis is important for all the themes mentioned in the Roadmap ICT where quality of the software infrastructure is at stake, in particular “ICT one can rely on” and “ICT Systems for monitoring and control”. Software analysis is also needed when standards and open data are involved. In the EU Digital Agenda several topics require software analysis including “Standards operability testing and certification”, “Measurement techniques for energy performance of ICT systems”, and others.
The focus areas in the period 2007-2012 were as follows:
- Beyond Turing
- Algorithms & models for life sciences
- Hybrid systems
- Model-driven software engineering
- Software analysis
A detailed description can be found here.