When do you stop testing?

In your practice, what measures do you use to find out when it is time to stop testing the application and move it to production?

+4
source share
8 answers

For projects in my organization, I usually use the following measures:

  • No open severity 1 issues (show stoppers).
  • No open severity 2 issues (major functional problems).
  • An acceptable number of severity 3 issues (minor functional problems).

An "acceptable number" is, of course, a very soft number, depending on the size of the application, etc.

Once these prerequisites are met, I call a meeting of all the stakeholders (QA lead, dev lead, application support lead, etc.), walk through the list of open issues, and make sure everyone agrees with the severity assigned to each one. Once I have confirmed that there are no open Sev 1 or Sev 2 issues, I ask each stakeholder for a Go/No-Go call. If everyone says "Go", I am comfortable moving to production. If even one stakeholder says "No", we examine the reasons behind the "No" and, if necessary, take steps to address the underlying problems.
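To make that gate concrete, here is a rough sketch in Python of how the exit criteria and the Go/No-Go poll could be checked; the thresholds, stakeholder names and issue data are invented purely for illustration and would differ per project:

```python
from collections import Counter

# Hypothetical open-issue list; severity 1 = show stopper, 2 = major, 3 = minor.
open_issues = [
    {"id": 101, "severity": 3, "title": "Tooltip typo"},
    {"id": 102, "severity": 3, "title": "Report is slow on very large data sets"},
]

# Exit criteria from above: no sev 1, no sev 2, and an "acceptable"
# (entirely project-specific) number of sev 3 issues.
MAX_SEV3 = 5

def exit_criteria_met(issues, max_sev3=MAX_SEV3):
    counts = Counter(issue["severity"] for issue in issues)
    return counts[1] == 0 and counts[2] == 0 and counts[3] <= max_sev3

# Hypothetical Go/No-Go votes gathered at the stakeholder meeting.
votes = {"qa_lead": "go", "dev_lead": "go", "app_support_lead": "go"}

def release_decision(issues, votes):
    if not exit_criteria_met(issues):
        return "No Go: severity thresholds not met"
    holdouts = [name for name, vote in votes.items() if vote != "go"]
    if holdouts:
        return "No Go: investigate concerns raised by " + ", ".join(holdouts)
    return "Go: comfortable moving to production"

print(release_decision(open_issues, votes))  # -> Go: comfortable moving to production
```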

For small projects the process can be more lightweight, and if it is a one-person operation the set of preconditions can be much simpler, something like: "the application provides a reasonable benefit and has (apparently) an acceptable number of bugs, so ship it!" As long as the benefit the application provides outweighs the annoyance of its bugs, you are fine, especially if you follow the "release early, release often" advice, which may well work for you.

+8
source

First, you never really stop testing. When you finish testing and release, all that means is that your users are now doing the testing for you.

Second, when your comprehensive test scripts pass with an acceptable failure rate, you are ready to move on.
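As a rough illustration of what "an acceptable failure rate" might mean in practice, here is a tiny sketch; the numbers and the 2% threshold are made up, since what counts as acceptable is entirely your call:

```python
# Hypothetical results of a full run of the test scripts.
results = {"passed": 482, "failed": 6, "skipped": 2}

MAX_FAILURE_RATE = 0.02  # example threshold only; pick what your business can live with

executed = results["passed"] + results["failed"]
failure_rate = results["failed"] / executed if executed else 1.0

print(f"Failure rate: {failure_rate:.1%}")
print("Ready to move on" if failure_rate <= MAX_FAILURE_RATE else "Keep testing")
```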

Finally, it is very specific to your business. For some projects we have a three-week beta test in which many people hammer on the system for all but the smallest changes. In other (less critical) areas, small changes can be pushed with just the OK of another developer.

+3
source

One interesting testing methodology I have always wanted to try is "bug seeding". The idea is that one person deliberately introduces bugs into the system, spread across several categories.

For instance:

  • Cosmetic issues, spelling mistakes, etc.
  • Non-critical bugs
  • Critical bugs and crashes
  • Data problems: no visible error occurs, but something deeper in the results is wrong
  • and so on

The seeder documents exactly what was modified to insert each bug so that the changes can be backed out quickly. As the test team works, they find both seeded and real bugs, but they cannot tell the difference. In theory, if the test team detects 90% of the seeded critical bugs, they have probably also found a proportional share of the real critical bugs.
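Written out, that proportional argument is just a simple capture/recapture-style estimate. A minimal sketch, assuming you track seeded and real bugs separately (the numbers are invented):

```python
def estimate_remaining_real_bugs(seeded_total, seeded_found, real_found):
    """Estimate how many real bugs are still undiscovered.

    Assumes the test team finds real bugs at roughly the same rate
    at which it finds the deliberately seeded ones.
    """
    if seeded_found == 0:
        raise ValueError("No seeded bugs found yet; the estimate is undefined.")
    detection_rate = seeded_found / seeded_total
    estimated_real_total = real_found / detection_rate
    return estimated_real_total - real_found

# Example: 20 critical bugs were seeded, testers found 18 of them (90%)
# along with 45 real critical bugs.
print(estimate_remaining_real_bugs(seeded_total=20, seeded_found=18, real_found=45))
# -> 5.0, i.e. roughly five real critical bugs are probably still lurking.
```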

From these statistics you can start deciding when a release is acceptable. Of course, this will be nowhere near exact, given the randomness of which bugs (real or seeded) actually get found, but it is probably better than having no idea at all how many bugs you might be shipping.

+2
source

One metric we sometimes use at my workplace is that we have tested well enough once we start finding old, previously unreported bugs that were already present in earlier released versions of our product. The idea is that if the bugs we are now finding in testing have been there for years and no customer has complained about them, then we are probably safe to ship.

Of course, you still have all the manual tests, the automated tests, developers using the product themselves, beta releases and the like for ongoing testing, but using the number of bugs found now that were present, yet unreported, in previous versions was a new idea to me when I first heard it.
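If you wanted to track that as a number, a minimal sketch might look like the following; the release labels and triage data are made up, and attributing a bug to the release that introduced it is the hard part in practice:

```python
# Hypothetical bug reports from the current test cycle; "introduced_in" is the
# release in which triage determined the bug first appeared.
current_release = "4.2"
bugs_found = [
    {"id": 1, "introduced_in": "4.2"},
    {"id": 2, "introduced_in": "3.7"},  # old, never reported by any customer
    {"id": 3, "introduced_in": "4.2"},
    {"id": 4, "introduced_in": "2.9"},  # old, never reported by any customer
]

old_unreported = sum(1 for b in bugs_found if b["introduced_in"] != current_release)
share = old_unreported / len(bugs_found)
print(f"{share:.0%} of newly found bugs already existed in shipped versions")
# The higher that share climbs, the more thoroughly the new code has been wrung out.
```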

+1
source

When all the major show stoppers are gone.

Seriously, you should run user acceptance testing: let your users work with the system and find out whether it actually does the job for them. If that is not practical, run a closed beta with selected users who resemble your target audience.

You will never find every bug in your system, so at some point the only real rule is to just ship it.

+1
source

mharen

I find that if you have comprehensive automated tests, it is outright irresponsible to ship software unless all of them pass. Automated tests cover areas that are either core functionality or bugs that occurred in the past, which you know about and can keep fixed by having a passing test. It would be irresponsible to ship software that does not pass 100% of its automated tests.

0
source

John

I did not mean "test cases" to imply automated testing. I had in mind the more traditional approach: a step-by-step list of what to test and how to test it.

However, I do not agree that all automated tests must pass. It depends on severity and priority. On a large project we may have failing tests that correspond to known issues reported by users; since we cannot fix every bug in every release, it follows that some tests will simply fail.
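One way to keep such known failures visible without letting them block every build is to mark them explicitly. A small sketch using pytest's xfail marker; the test names, the placeholder function and the issue ID are invented for the example:

```python
import pytest

def add_to_cart(cart, item):
    # Placeholder standing in for real application code.
    return cart + [item]

def test_core_add_to_cart():
    # Core functionality: this must always pass before shipping.
    assert add_to_cart([], "book") == ["book"]

@pytest.mark.xfail(reason="Known sev-3 issue BUG-1234: item names are not trimmed")
def test_item_name_is_trimmed():
    # Low-severity, user-reported bug we have decided not to fix this release;
    # the test stays in the suite but is reported as an expected failure.
    assert add_to_cart([], " book ") == ["book"]
```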

0
source

Measuring the amount of testing time the product gets between "showstopper" or major functionality bugs can tell you when you are getting close. During periods of heavy churn, when new functionality is landing, the test team typically finds that most of the bugs they report are serious functionality bugs. As those are dealt with, a large number of small, easily fixed, polish-type issues usually surface, aimed at improving the smoothness and clarity of the interaction; collectively they have a significant effect on product quality, but each one on its own is minor. As those get fixed and testing continues, you will probably still receive bug reports as testers dig into corner cases and unusual usage patterns. At that point it comes down to weighing the business value of releasing against the risk of an undiscovered showstopper.
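A crude way to watch that metric is simply to look at the gaps between severe bug reports while testing effort stays roughly constant. The dates below are invented for illustration:

```python
from datetime import date

# Hypothetical filing dates of severity 1/2 ("showstopper" or major) bugs.
severe_bug_dates = [date(2024, 5, 2), date(2024, 5, 9), date(2024, 5, 11),
                    date(2024, 5, 12), date(2024, 5, 27)]

# Gaps (in days) between consecutive severe bugs: the longer they get,
# the closer the product probably is to release quality.
gaps = [(b - a).days for a, b in zip(severe_bug_dates, severe_bug_dates[1:])]
print("Gaps between severe bugs (days):", gaps)   # [7, 2, 1, 15]
print("Days since the last severe bug:",
      (date(2024, 6, 20) - severe_bug_dates[-1]).days)
```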

0
source

Source: https://habr.com/ru/post/1276466/

