Testing-related activities can consume a large part of the effort required during software development, and test code often constitutes a large part of the overall code base. The increasing maintenance overhead caused by large test suites can reduce the benefits testing offers. Therefore, in this dissertation we focus on supporting developers during test code comprehension, in order to improve test code and reduce maintenance overhead.
We especially look at modular and dynamic systems because of their specific characteristics, which complicate testing and test comprehension. Modular and dynamic systems can change at runtime and are conglomerates of sub-systems, often developed by different project teams and companies. In particular, we investigate plug-in-based systems, as they are a widespread form of modular and dynamic systems.
In an in-depth study, we reveal which barriers developers face when testing such systems, which test practices they adopt, and how they compensate for limited testing. In particular, we show that unit testing is the most widely adopted testing practice, whereas integration testing and system testing efforts are limited. Participation in the testing process by the community, consisting of other developers and end users, is essential to overcoming the challenges of integration and system testing.
In this dissertation we also present four techniques we developed, which use static analysis, dynamic analysis, or a combination of both to extract information from the system under study and its test suite, in order to support developers during test suite comprehension and maintenance. In our studies we show that our techniques and tools can support developers (1) in understanding plug-in test suites from an integration perspective, (2) in becoming familiar with previously unknown test suites, (3) in automatically detecting test anti-patterns, and (4) in improving and maintaining test suites.
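To give a flavor of what static anti-pattern detection can look like, the following is a minimal, hypothetical sketch (not the dissertation's actual tooling, whose analyses are considerably more sophisticated): it flags `@Test` methods whose body contains no assertion call, a well-known test smell sometimes called an assertion-free test. The class and method names are illustrative assumptions, and the regex-based matching only handles methods without nested braces.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical, simplified sketch of static test anti-pattern detection:
// flag @Test methods that contain no assertion call ("assertion-free test").
// A real detector would parse the AST; this regex handles only simple,
// non-nested method bodies and serves purely as an illustration.
public class AssertionlessTestDetector {

    // Matches a @Test annotation followed by a method signature and a
    // brace-delimited body without nested braces.
    private static final Pattern TEST_METHOD = Pattern.compile(
        "@Test\\s+public\\s+void\\s+(\\w+)\\s*\\([^)]*\\)\\s*\\{([^}]*)\\}");

    // Returns the names of @Test methods whose body never calls assert*.
    public static List<String> findAssertionlessTests(String source) {
        List<String> flagged = new ArrayList<>();
        Matcher m = TEST_METHOD.matcher(source);
        while (m.find()) {
            String methodName = m.group(1);
            String body = m.group(2);
            if (!body.contains("assert")) {
                flagged.add(methodName);
            }
        }
        return flagged;
    }

    public static void main(String[] args) {
        String source =
            "class FooTest {\n" +
            "  @Test public void testOk() { assertEquals(1, foo()); }\n" +
            "  @Test public void testSmoke() { foo(); }\n" +
            "}\n";
        // Only testSmoke lacks an assertion and is reported.
        System.out.println(findAssertionlessTests(source));
    }
}
```

Lexical matching like this is cheap but imprecise; the techniques summarized above combine static and dynamic information precisely because a single lightweight analysis cannot reliably capture such patterns on its own.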
We use a wide variety of research methods, such as grounded theory, interviews, surveys, case study research, and software repository mining. Often we use a mixed-method approach, combining several methods in one study to triangulate the findings. Empirical evaluation and the involvement of industry are key factors: by involving people, in particular knowledgeable practitioners, we are able to reveal the testing practices and problems experienced during plug-in testing in industry, to uncover challenges during test code comprehension, and to develop tools and techniques that are useful for practitioners. Case study research, mainly applied to open-source software systems, helps us evaluate the scalability, applicability, and accuracy of our techniques and tools.
While most of the techniques and tools developed in this dissertation focus on test comprehension, they often also give an indication of the quality of a given test suite. We think that investigating the extent to which these techniques can be used to assess test quality is an interesting area for future research. Further, the subject systems used in our studies were Java systems, often based on the Eclipse plug-in architecture. Extending our work to other dynamic and modular systems and communities, such as the Mozilla or Adobe plug-in ecosystems, would further shed light on the generalizability of our approaches and findings.