Create a testing strategy to verify data consistency between two systems

A quick search on Stack Overflow did not turn up anything, so here is my question.

I am trying to write a testing strategy for a setup where two applications synchronize a huge amount of data with each other every day.

Because the amount of data is so large, I really don't want to cross-check everything; I just want to do a random check each time the data is synchronized. What should the strategy for such a system be?

I can think of these two approaches: 1) get a count of all records on both sides and cross-check that the counts match; 2) select, say, 5 random data entries and make sure their contents are synchronized.
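A minimal sketch combining both approaches, assuming each system's rows can be loaded as a mapping from primary key to row content (plain dicts stand in for the two databases here; the function name and parameters are made up for illustration):

```python
import random

def verify_sync(source_rows, target_rows, sample_size=5):
    """Spot-check that `target_rows` mirrors `source_rows`.

    Both arguments map a primary key to that row's content.
    Returns a list of human-readable findings; empty means no
    discrepancy was detected by these two checks.
    """
    findings = []

    # Check 1: row counts must match on both sides.
    if len(source_rows) != len(target_rows):
        findings.append(
            f"count mismatch: source={len(source_rows)} target={len(target_rows)}"
        )

    # Check 2: compare a random sample of rows field by field.
    keys = random.sample(sorted(source_rows), min(sample_size, len(source_rows)))
    for key in keys:
        if target_rows.get(key) != source_rows[key]:
            findings.append(f"row {key!r} differs or is missing in target")

    return findings
```

Note that a clean result only means the sample and the counts agree, not that every row matches; how much confidence that buys you is exactly the risk question the answer below addresses.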

Any suggestion would be great.

1 answer

What you need is called Risk Management; in software testing it is called Software Risk Management (SRM).
It seems your question is not "how to verify" what you are going to test, but how to describe what you are doing and why you are doing it (and based on the question, I believe you need that explanation too...).
Adding SRM to your test strategy should describe:

  • The risks of not fully testing each and every piece of data in the mirrored system
  • A table that maps the SRM risk level against the amount of data checked (i.e., the probability of an error slipping through if only n% of the data is checked versus, e.g., 2n%); in other words, stating, e.g., a 5% chance of lost / invalid / corrupted data if x% of the data was tested in a given time window
  • Based on the previous point, a breakdown of the resources used for the different parameters (for example, HW load of n% for m hours, y man-hours used, cost of HW / SW / HR usage of z USD)
  • The likelihood and cost of errors / problems in the automation code itself (for example, the data comparison goes wrong and produces false positives or false negatives, which creates overhead for DBAs, developers, and/or testers)
  • What happens if the selected SRM option (e.g., 10% of the data verified, giving a 3% risk of data corruption / loss and 0.75% overhead from false positive / negative results) runs into an actual error, i.e., a reference to Business Continuity and the consequences of data loss, loss of integrity, etc.
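For the risk-versus-coverage table mentioned above, one quantity you can actually compute is the probability that a random sample catches at least one bad row. Sampling without replacement follows the hypergeometric distribution; here is a small sketch (the function name is illustrative, and the corruption count is of course an assumption you plug in, not something you know in advance):

```python
from math import comb

def detection_probability(total_rows, bad_rows, sample_size):
    """Probability that a uniform random sample of `sample_size` rows,
    drawn without replacement from `total_rows`, contains at least
    one of the `bad_rows` corrupted entries."""
    if sample_size > total_rows - bad_rows:
        return 1.0  # the sample is too large to miss every bad row
    # P(miss all bad rows) = C(total - bad, n) / C(total, n)
    p_miss = comb(total_rows - bad_rows, sample_size) / comb(total_rows, sample_size)
    return 1.0 - p_miss
```

This makes the trade-off concrete: with a million rows of which 0.1% are corrupted, a sample of 5 rows detects almost nothing (about a 0.5% chance), while sampling a few thousand rows pushes detection above 95%. That kind of number is what belongs in the SRM table.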


Anything else that comes to mind and that you feel is relevant to your current problem, your current system, and your actual preferences.


Source: https://habr.com/ru/post/1487854/
