May 7, 2009

How to Know When We Should Stop Testing?

All test engineers come across this typical question of when to stop testing. The fact is that testing can never be considered complete: we can never scientifically prove that our software system is now free of errors.

Most of the time we face one of the following two conditions:

  • Stop the testing when the committed testing deadlines expire.
  • Stop the testing when no more errors can be detected, even after executing all the planned test cases.

Neither of these statements carries much meaning on its own. The first can be satisfied by doing nothing at all, while the second is equally weak because it says nothing about the quality of our test cases.

Deciding when to stop testing is difficult. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done.

In practice, testing usually stops when one of the following conditions is met:

  • Stop the testing when deadlines, such as release or testing deadlines, have been reached.
  • Stop the testing when the test cases have been completed with a prescribed pass percentage.
  • Stop the testing when the testing budget is exhausted.
  • Stop the testing when code coverage and functionality requirements reach a desired level.
  • Stop the testing when the bug rate drops below a prescribed level.
  • Stop the testing when the alpha or beta testing period is over.

Testing metrics can help the test engineer make better and more accurate decisions, such as when to stop testing or when the application is ready for release.

The best way is to have a fixed number of test cases ready well before the test execution cycle begins, and then to measure testing progress by recording the total number of test cases executed.

The following metrics are quite helpful in measuring the quality of the software product:

  • Percentage Completion:

(Number of executed test cases) / (Total number of test cases) × 100

  • Percentage Test cases Passed:

(Number of passed test cases) / (Number of executed test cases) × 100

  • Percentage Test cases Failed:

(Number of failed test cases) / (Number of executed test cases) × 100

A test case is declared Failed if even one bug is found while executing it; otherwise it is considered Passed.
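As a minimal sketch, assuming test results are tracked as simple counts per cycle (the example numbers below are purely illustrative), these three metrics can be computed as follows:

    # Minimal sketch of the three progress metrics described above.
    # The counts are illustrative; any test management tool or
    # spreadsheet can supply the real values.

    def percentage_completion(executed, total):
        return 100.0 * executed / total

    def percentage_passed(passed, executed):
        return 100.0 * passed / executed

    def percentage_failed(failed, executed):
        return 100.0 * failed / executed

    # Example: 180 of 200 planned test cases executed; 171 passed, 9 failed.
    print(percentage_completion(180, 200))  # 90.0
    print(percentage_passed(171, 180))      # 95.0
    print(percentage_failed(9, 180))        # 5.0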

There are some scientific methods to decide when to stop testing. These are:

Decision based upon the number of passed/failed test cases:

  • Prepare a predefined number of test cases before the test execution cycle begins.
  • Execute all test cases in every testing cycle.
  • Stop the testing process when all the test cases pass.
  • Alternatively, stop the testing when the percentage of failures in the last testing cycle is extremely low (see the sketch after this list).
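A minimal sketch of this stopping rule, assuming each cycle yields a simple list of pass/fail results; the 2% threshold is an assumed project-specific value, not a standard:

    # Illustrative stopping rule based on the last cycle's results.
    # results: list of booleans, True = passed, False = failed.
    # max_failure_pct is an assumed, project-specific threshold.

    def can_stop_testing(results, max_failure_pct=2.0):
        failures = results.count(False)
        failure_pct = 100.0 * failures / len(results)
        # Stop when every test case passed, or when the failure
        # percentage in the last cycle is below the threshold.
        return failures == 0 or failure_pct < max_failure_pct

    cycle_results = [True] * 198 + [False] * 2   # 1% failures
    print(can_stop_testing(cycle_results))       # True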

Decision based upon metrics:

  • Mean Time Between Failures (MTBF): record the average operational time between system failures.

  • Coverage metrics: record the percentage of instructions executed during the tests.

  • Defect density: record defects relative to the size of the software, such as defects per 1000 lines of code, along with the number of open bugs and their severity levels (a rough calculation sketch follows).
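As a rough sketch of how two of these metrics are computed (the figures and units, hours and KLOC, are illustrative assumptions):

    # Illustrative calculations for MTBF and defect density.
    # Units (hours, defects per KLOC) and figures are assumptions.

    def mtbf(total_operational_hours, number_of_failures):
        # Mean Time Between Failures: average running time per failure.
        return total_operational_hours / number_of_failures

    def defect_density(defects_found, lines_of_code):
        # Defects per 1000 lines of code (KLOC).
        return defects_found / (lines_of_code / 1000.0)

    print(mtbf(500.0, 4))             # 125.0 hours between failures
    print(defect_density(30, 60000))  # 0.5 defects per KLOC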

Finally, how do we decide to stop the testing?

We can reasonably stop the testing when all of the following hold (a combined check is sketched after the list):

  • Code coverage is good.
  • The mean time between failures is quite large.
  • The defect density is very low.
  • The number of high-severity open bugs is very low.
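A hypothetical release-gate check combining these four criteria might look like the sketch below; every threshold value is an assumed example and would be negotiated per project:

    # Hypothetical release gate combining the four criteria above.
    # All threshold values are illustrative assumptions.

    def ready_to_stop(coverage_pct, mtbf_hours, defects_per_kloc,
                      open_high_severity_bugs):
        return (coverage_pct >= 90.0 and          # code coverage is good
                mtbf_hours >= 100.0 and           # MTBF is quite large
                defects_per_kloc <= 0.5 and       # defect density is low
                open_high_severity_bugs <= 1)     # few severe open bugs

    print(ready_to_stop(93.0, 125.0, 0.5, 0))  # True: safe to stop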

Even after doing all this, test engineers are seldom fully satisfied with their job. They feel that some testing is still left undone, that there are gaps between their test cases and the business requirements. The fact is that a test engineer's work can never truly be considered complete.