Test Early and Often

Developing software requires a mix of analytical and creative approaches, and it involves different kinds of people with different blends of thinking. But how can one achieve software quality in the competitive IT marketplace? I argue here that the maxim “Test early and often” is one of the most useful tools for achieving it.

In a report published by the School of Computer Science at Carnegie Mellon University, Lu Luo described the evolution of different types of software testing techniques. She called special attention to the “Test Gap”: although testing is gradually becoming an engineering discipline in practice at larger enterprises, it is still viewed as time-consuming and not cost-effective. Despite numerous research projects on testing as a specialized engineering discipline, very few research results have been put into practice in industry. Testing often ends up being an end-of-project activity because the project runs out of schedule or budget.

It is unfortunate that even today many software teams do not understand the importance of “Test early and often,” or the long-term positive effects it can have on the software product being developed. I have seen teams where smart engineers push software testing to the end because they think other development tasks take precedence under a looming delivery deadline. They claim to be agile but do not follow the agile methodology’s best testing practices. I have also seen teams that take testing very seriously right from the beginning of the project. Currently, I am fortunate to work on a team where we believe that if we take testing for granted, we are in fact taking the product for granted.

In a simulation study published in the Journal of Information Technology Management, a group of authors from the University of Memphis and the University of North Alabama pointed out the significant benefits of early software testing in software development. Through proper scheduling and time allocation of testing activities, and effective collaboration among the individuals involved in the development process, bugs were identified and fixed early, which resulted in a shorter project cycle time and significant cost savings. The authors documented these benefits by analyzing several empirical studies and by reviewing the quantitative evidence of early-testing benefits reported by other authors.

These authors found the cost to rectify a bug in software increases roughly 10 times with each passing stage of development. An error that costs $100 to rectify in the business requirements stage would cost $1,000 to rectify in the system requirements stage, $10,000 in the high-level design stage, $100,000 in the detailed design stage, and $1,000,000 in the implementation stage. The authors presented these figures as part of an expected cost analysis. Although these statistics have not been validated across other software development projects, they align with many developers’ experiences.
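
The escalation is easy to reproduce. The short Python sketch below recomputes the study’s table from the 10x rule; the stage names and the $100 base cost come straight from the figures above, so the only assumption is that the growth factor holds exactly at every stage.

    # Cost to fix one bug, assuming it grows 10x per stage (figures from the
    # simulation study cited above).
    STAGES = [
        "business requirements",
        "system requirements",
        "high-level design",
        "detailed design",
        "implementation",
    ]
    BASE_COST = 100  # dollars to fix in the earliest stage

    for i, stage in enumerate(STAGES):
        print(f"{stage:>22}: ${BASE_COST * 10 ** i:,}")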

The OWASP (Open Web Application Security Project) Testing Guide likewise points out that most people today do not test software until late in the life cycle, at the deployment stage. The guide calls late testing an ineffective practice: testing should start early.

Is It the Responsibility of the Programmers or the Testers?

Programmers and testers share the responsibility to test the software early and often, and each has a part to play in ensuring the overall correctness of the developed software. Programmers do best to focus on unit testing their segments of code to verify that each piece works as intended. In the same way, software testers play their part by analyzing the functionality, usability, performance, and security-related parameters of the software being developed. They ascertain that the partially developed software meets the specifications and that the overall design and architecture do not constrain the desired functionality.
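
To illustrate the programmer’s half of that split, here is a minimal unit test sketch using Python’s standard unittest framework; the apply_discount function and its behavior are hypothetical, invented only for this example.

    import unittest

    def apply_discount(price, percent):
        """Return price reduced by percent (hypothetical example function)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            # Verify the piece of code works the way intended.
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()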

Test-driven development, advocated in “extreme programming,” is a unique approach that supports the “test early and often” concept of software engineering. Its test-code-test cycle enables programmers to ascertain right away that a particular piece of code is working and that expected and actual results are the same.

In the test-driven development cycle, programmers start by creating a test for a new function before any code is written for it. If the existing code satisfies the test, nothing more is needed. Otherwise they write just enough code to pass the test, then clean it up and integrate it with the existing code.
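
A minimal sketch of one turn of that cycle, again using Python’s unittest; the slugify function is hypothetical, and the comments mark the order in which a test-driven programmer would write each part.

    import unittest

    # Step 1: write the test first. Running it now fails, because slugify()
    # does not exist yet.
    class SlugifyTest(unittest.TestCase):
        def test_lowercases_and_joins_with_hyphens(self):
            self.assertEqual(slugify("Test Early and Often"),
                             "test-early-and-often")

    # Step 2: write just enough code to make the test pass.
    def slugify(title):
        return "-".join(title.lower().split())

    # Step 3: re-run the test, then refactor and integrate, with the test
    # as a safety net.
    if __name__ == "__main__":
        unittest.main()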

“Test early and often” also supports requirements engineering. Designing the test cases early reveals gaps in the requirements definition, and correcting those gaps early is much easier than unwinding a lot of development that took place before the gaps were discovered in late testing.
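
As a hypothetical illustration of how an early test exposes such a gap, suppose a specification says only that an invoice total is split evenly among N payers. Writing the test below before development forces the unanswered question to the surface while it is still cheap to resolve; all names here are invented for the example.

    import unittest

    # Naive reading of the spec: every payer pays the same share.
    def split_invoice(total_cents, payers):
        return [total_cents // payers] * payers

    # Written early, this test exposes a requirements gap: the spec never
    # says who absorbs the remainder when the total does not divide evenly.
    class SplitInvoiceTest(unittest.TestCase):
        def test_shares_cover_the_full_total(self):
            self.assertEqual(sum(split_invoice(100, 3)), 100)  # fails: 99 != 100

    if __name__ == "__main__":
        unittest.main()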

Early Testing through the STREW and ISO Models

In a recent research paper, Nachiappan Nagappan wrote that software developers can effectively reduce product risk by getting early warning about software reliability and performance. Early warning enables early corrective action.

Nagappan’s STREW (Software Testing and Reliability Early Warning) model helps software development teams organize testing to get an accurate assessment of software quality and reliability in object-oriented designs. The STREW metric model was validated through practical applications in 22 academic projects, 27 open source projects, and five industrial projects.

The STREW metrics comprise nine components in three groups (a sketch in the spirit of these metrics follows the list):

  • Test quantification metrics – evaluate the coding and testing styles of multiple developers
  • Complexity and object-oriented metrics – evaluate the ratio of test size to code size
  • Size adjustment metric – evaluates defect density
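
As a rough sketch of the flavor of these metrics, the snippet below computes a test-LOC-to-source-LOC ratio over a Python project tree. This is only an approximation for illustration: the test_*.py naming convention and the treatment of every other file as source code are assumptions, and the snippet does not reproduce Nagappan’s actual nine-metric formulas.

    from pathlib import Path

    def loc(path):
        """Count non-blank lines in a file."""
        return sum(1 for line in path.read_text().splitlines() if line.strip())

    def test_to_source_ratio(root="."):
        # Assumption for this sketch: test files follow the common test_*.py
        # naming convention; every other .py file under root counts as source.
        test_loc = source_loc = 0
        for f in Path(root).rglob("*.py"):
            if f.name.startswith("test_"):
                test_loc += loc(f)
            else:
                source_loc += loc(f)
        return test_loc / source_loc if source_loc else 0.0

    if __name__ == "__main__":
        print(f"test/source LOC ratio: {test_to_source_ratio():.2f}")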

The international standard ISO 9126 also defines a software quality model, with six principal quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability.

The ISO 9126 standard is not limited to assessing object-oriented designs. A paper by authors from the University Politehnica of Timisoara concluded that the STREW model measures the testing intensity ratio, while the ISO 9126 standard measures the fault intensity ratio.

In another publication, “Test Code Quality and Its Relation to Issue Handling Performance,” published in the journal IEEE Transactions on Software Engineering, the authors identified a weak point of the STREW model: it relies on users to provide tests for its metrics, whereas the ISO model relies on predetermined tests.

The STREW model is well suited to software development teams that follow the extreme programming (XP) methodology and write extensive automated test cases, and it encourages developers to define overall quality metrics early. However, the model is not meant for development environments where testing is performed through script-based automation: the STREW metric suite plays no role in, and extracts no analytics from, black-box test results.

What model are you following to test early and often?

References

Chirila, C.-B. et al. Towards a Software Quality Assessment Model Based on Open-Source Statical Code Analyzers. In Proceedings of the Sixth IEEE International Symposium on Applied Computational Intelligence and Informatics (SACI). IEEE, Washington D.C., 2011, 341–346.

Athanasiou, D., Nugroho, A., Visser, J., and Zaidman, A. Test Code Quality and Its Relation to Issue Handling Performance. IEEE Transactions on Software Engineering 40, 11 (2014), 1100–1125.

ISO 9126 Software Quality Characteristics.

Nagappan, N. Toward a Software Testing and Reliability Early Warning Metric Suite. In Proceedings of the 26th International Conference on Software Engineering (ICSE’04) (Edinburgh, May 24–28). IEEE, Washington D.C., 2004, 60–62.

Nagappan, N. A Software Testing and Reliability Early Warning (STREW) Metric Suite. Dissertation, North Carolina State University, 2005.

Olan, M. Unit Testing: Test Early, Test Often. Journal of Computing Sciences in Colleges 19, 2 (2003), 319–328.