Software


TestEvoHound

TestEvoHound is a static analysis tool that analyzes multiple revisions of a software project for test fixture smells.

Overview of TestEvoHound

We developed TestEvoHound to analyze the evolution of fixture smells. It works with Git and SVN repositories and, when analyzing a project, executes four tasks.

During the Revision Checkout task, TestEvoHound checks out each revision of the project under analysis and, for each revision, starts the Build Process task. Here, the tool searches for Ant or Maven build files, initiates the build process, and compiles the source code (including tests). Then, the Test Fixture Smell Analysis task invokes the TestHound tool to analyze the current revision for smells and stores the outcome. Finally, when all revisions have been analyzed, the Trend Analysis task runs.

In this final task, TestEvoHound calculates trends and measurements across all revisions. This information is stored in comma-separated values (CSV) format for easy visualization in tools such as Excel or R.
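A minimal sketch of this pipeline in Java is shown below. The Repository, BuildRunner, and SmellDetector types, the SmellReport fields, and all method names are hypothetical stand-ins for illustration, not TestEvoHound's actual API.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of TestEvoHound's four-task pipeline; all types
    // and method names are hypothetical stand-ins, not the tool's real API.
    public class EvolutionAnalysis {

        interface Repository {                        // wraps a Git or SVN working copy
            List<String> listRevisions();
            void checkout(String revision);
            String workingCopy();
        }

        interface BuildRunner {                       // locates Ant/Maven build files and compiles
            boolean build(String workingCopy);
        }

        interface SmellDetector {                     // delegates to TestHound
            SmellReport analyze(String workingCopy);
        }

        record SmellReport(String revision, int generalFixtures, int deadFields) {}

        public static void run(Repository repo, BuildRunner builder,
                               SmellDetector testHound, String csvPath) throws IOException {
            List<SmellReport> reports = new ArrayList<>();
            for (String rev : repo.listRevisions()) {
                repo.checkout(rev);                                     // Task 1: Revision Checkout
                if (builder.build(repo.workingCopy())) {                // Task 2: Build Process
                    reports.add(testHound.analyze(repo.workingCopy())); // Task 3: Smell Analysis
                }
            }
            try (PrintWriter out = new PrintWriter(csvPath)) {          // Task 4: Trend Analysis
                out.println("revision,generalFixtures,deadFields");
                for (SmellReport r : reports) {
                    out.printf("%s,%d,%d%n", r.revision(), r.generalFixtures(), r.deadFields());
                }
            }
        }
    }

Hiding the version control system and the build system behind interfaces mirrors the tool's support for both Git and SVN repositories and for both Ant and Maven builds.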

What TestEvoHound solves

An important challenge in creating automated tests is how to design test fixtures, i.e., the setup code that initializes the system under test before the actual automated testing can start. Test designers have to choose between different setup approaches, trading off maintenance overhead against slow test execution. Over time, test code quality can erode and test smells can develop, such as overly general fixtures, obscure in-line code, and dead fields. In our study, we investigated how fixture-related test smells evolve over time by analyzing several thousand revisions of five open source systems. Our findings indicate that setup management strategies strongly influence the types of test fixture smells that emerge in code, and that several types of fixture smells often emerge at the same time. Based on these findings, we recommend guidelines for setup strategies and suggest how tool support can be improved, both to help avoid the emergence of such smells and to help refactor code when test smells do appear.

TestHound

Overview of TestHound

TestHound is a static analysis tool that analyzes test code, detects test smells, and guides developers and test engineers in improving and refactoring test code.

What TestHound solves

Designing automated tests is a challenging task. One important concern is how to design test fixtures, i.e., code that initializes and configures the system under test so that it is in an appropriate state for running particular automated tests. Test designers may have to choose between writing in-line fixture code for each test or refactoring fixture code so that it can be reused by other tests. Deciding on which approach to use is a balancing act, often trading off maintenance overhead against slow test execution. Additionally, over time, test code quality can erode and test smells can develop, such as overly general fixtures, obscure in-line code, and dead fields. Test smells are poor solutions to recurring implementation and design problems in test code. Until now, no support has been available to developers for analyzing and adjusting test fixtures.
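To make this trade-off concrete, here is a minimal JUnit 4 sketch contrasting the two setup styles; the Cart and Item classes are illustrative stand-ins, not code from any analyzed system.

    import static org.junit.Assert.assertEquals;

    import org.junit.Before;
    import org.junit.Test;

    // Contrasts a shared fixture (@Before) with an in-line fixture.
    public class CartTest {

        private Cart sharedCart;

        @Before
        public void setUp() {                  // shared fixture: reused by every test;
            sharedCart = new Cart();           // less duplication, but risks growing into
            sharedCart.add(new Item("book"));  // an overly general fixture over time
        }

        @Test
        public void sharedFixtureTest() {
            assertEquals(1, sharedCart.size());
        }

        @Test
        public void inlineFixtureTest() {
            Cart cart = new Cart();            // in-line fixture: easy to read locally,
            cart.add(new Item("pen"));         // but duplicated across tests and harder
            cart.add(new Item("book"));        // to maintain
            assertEquals(2, cart.size());
        }

        // Minimal stand-ins so the example compiles.
        static class Item {
            Item(String name) {}
        }

        static class Cart {
            private final java.util.List<Item> items = new java.util.ArrayList<>();
            void add(Item item) { items.add(item); }
            int size() { return items.size(); }
        }
    }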

To address this lack of support for analyzing and adjusting test fixtures, we developed TestHound. TestHound automatically analyzes test fixtures to detect fixture-related smells and guides improvement activities.

Publication

More details on TestHound can be found in the associated publication.

Automated Detection of Test Fixture Strategies and Smells
Michaela Greiler, Arie van Deursen, Margaret-Anne Storey
Proceedings of the International Conference on Software Testing, Verification and Validation, pages xxx-xxx.
IEEE, Luxembourg, March 2013.
Download: PDF

Download TestHound

Please feel free to download it here:
Latest Version: TestHound_v0.2.zip
Previous versions:

Eclipse Test Suite Exploration Tool (ETSE)

The Eclipse Test Suite Exploration Tool implements five architectural views that can be used to understand test suites for plug-in-based systems from an extensibility perspective. The views are created from static metadata and dynamic trace data. For more information on the views and the tool, please download the Technical Report.

Overview of ETSE's Architecture

[Figure: overview of ETSE's architecture (ETSEArchitecture.png)]

Publication

More details on ETSE can be found in the associated publications.

What your Plug-in Test Suites Really Test: An Integration Perspective on Test Suite Understanding
Michaela Greiler and Arie van Deursen
Empirical Software Engineering Journal, 2012, Springer.
Download: PDF

and

Understanding Plug-in Test Suites from an Extensibility Perspective (Best Paper Award)
Michaela Greiler, Hans-Gerhard Gross, Arie van Deursen
Working Conference on Reverse Engineering (WCRE), October 13-17, 2010, Boston, USA
Download: PDF

Download ETSE

Download and install ETSE via ETSE's Update site: Download via Update Site

ETSE is under development. Please check regularly for updates.

Test Similarity Correlator

Test Similarity Correlator (TestSim) supports test suite understanding by automatically deriving relations between test cases. In particular, it shows trace-based similarities between (high-level) end-to-end tests on the one hand and fine-grained unit tests on the other. TestSim implements three similarity metrics based on shared word count, Levenshtein distance, and pattern matching.
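As an illustration of the second metric, the sketch below computes a Levenshtein-based similarity over traces represented as sequences of method signatures. The normalization used here, one minus the edit distance divided by the length of the longer trace, is an assumption for illustration; TestSim's exact calculation may differ.

    import java.util.Arrays;
    import java.util.List;

    // Levenshtein-based trace similarity: traces are sequences of invoked
    // method signatures; similarity is 1 - editDistance / max(|a|, |b|).
    public class TraceSimilarity {

        public static double levenshteinSimilarity(List<String> a, List<String> b) {
            int[][] d = new int[a.size() + 1][b.size() + 1];
            for (int i = 0; i <= a.size(); i++) d[i][0] = i;   // delete all of a
            for (int j = 0; j <= b.size(); j++) d[0][j] = j;   // insert all of b
            for (int i = 1; i <= a.size(); i++) {
                for (int j = 1; j <= b.size(); j++) {
                    int cost = a.get(i - 1).equals(b.get(j - 1)) ? 0 : 1;
                    d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,    // deletion
                                                d[i][j - 1] + 1),   // insertion
                                       d[i - 1][j - 1] + cost);     // substitution
                }
            }
            int longer = Math.max(a.size(), b.size());
            return longer == 0 ? 1.0 : 1.0 - (double) d[a.size()][b.size()] / longer;
        }

        public static void main(String[] args) {
            List<String> endToEnd = Arrays.asList("Cart.add", "Cart.checkout", "Mail.send");
            List<String> unitTest = Arrays.asList("Cart.add", "Cart.checkout");
            System.out.println(levenshteinSimilarity(endToEnd, unitTest)); // prints 0.666...
        }
    }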

TestSim is a Java-based framework and offers an API to steer customized test similarity measurements, varying in trace reduction, thresholds, and similarity calculations.

To instrument the test execution we use the AspectJ framework. We offer three different annotations to facilitate tracing the execution of test, set-up, and tear-down methods. Test Similarity Correlator comprises several aspects that address join points to weave in our tracing advice, including an aspect that handles code generated by the mocking library JMock.
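For illustration, here is a minimal annotation-style AspectJ aspect that traces JUnit 4 test-method executions; TestSim's own annotations, aspects, and JMock handling are more elaborate and are not reproduced here.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    // Traces every JUnit 4 test-method execution. TestSim's set-up and
    // tear-down tracing aspects would be analogous.
    @Aspect
    public class TestMethodTracer {

        // Matches the execution of any method annotated with JUnit's @Test.
        @Before("execution(@org.junit.Test * *(..))")
        public void traceTestMethod(JoinPoint jp) {
            System.out.println("TRACE test-method: " + jp.getSignature().toLongString());
        }
    }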

Publication

More details on Test Similarity Correlator can be found in the associated publication.

Measuring Test Case Similarity to Support Test Suite Understanding
Michaela Greiler, Arie van Deursen, Andy Zaidman
Proceedings of the International Conference on Objects, Models, Components, Patterns (TOOLS), pages xxx-xxx.
Springer, Prague, Czech Republic, May-June 2012.
Download: PDF

Download Test Similarity Correlator

Online testing method for SOA configuration faults

Within the ARTOSC Project on Automated Runtime Testability of SOA Composites, we have developed an online testing method for detecting SOA configuration faults.

This project focuses on developing model-based built-in testing strategies for SOA that can be executed at runtime, when a SOA-based system is assembled and deployed. In particular, the project aims at devising testability artifacts, e.g., additional query and testing interfaces in services, as well as additional testing services in their own right. A service acquiring another service can invoke that service's testing service. The testing service has built-in test software that is aware of the query and testing interfaces of the other service. Every service can invoke its testing service automatically to check the coherence and correctness of its peers when, for example, peers are exchanged or updated.
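The sketch below illustrates these testability artifacts in Java; all interface and class names are hypothetical, not taken from the ARTOSC implementation.

    // Hypothetical testability artifacts; names are illustrative, not taken
    // from the ARTOSC implementation.
    public class OnlineTestingSketch {

        interface QueryInterface {            // lets a tester query a peer's internal state
            String currentState();
        }

        interface TestingService {            // built-in test software shipped with a service
            boolean testPeer(QueryInterface peer);
        }

        // A simple built-in test: a state-based configuration fault shows up
        // as a peer reporting an unexpected state.
        static class StateCheckTest implements TestingService {
            private final String expectedState;

            StateCheckTest(String expectedState) {
                this.expectedState = expectedState;
            }

            @Override
            public boolean testPeer(QueryInterface peer) {
                return expectedState.equals(peer.currentState());
            }
        }

        // Invoked automatically when a peer service is exchanged or updated.
        static boolean revalidatePeer(TestingService tester, QueryInterface peer) {
            return tester.testPeer(peer);
        }
    }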

As a proof of concept, we have implemented the online testing method in an OSGi framework.

Our information service example application is available for download: Download Online Testing Application.

Publication

More details on Online Testing can be found in the associated publications.

Evaluation of Online Testing for Services: A Case Study
Michaela Greiler, Hans-Gerhard Gross, Arie van Deursen
International Workshop on Principles of Engineering Service Oriented Systems (PESOS), May 2-8, 2010, Cape Town, South Africa
Download: PDF

Online Testing of Service-Oriented Architectures to Detect State-Based Faults
Michaela Greiler, Hans-Gerhard Gross, Arie van Deursen
International Conference on Service Oriented Computing, 2009, Doctoral Symposium
Download: PDF

