.NET Automated Testing for Longer Processes

We would like to set up automated integration testing of a process that sends data to an external party and then checks that the data is displayed correctly on their website.

However, it may take several hours before the data appears on the website.

The problem with a traditional NUnit or MSTest approach is that the test would sit blocked for several hours, waiting for the result.
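To make the problem concrete, here is a minimal sketch of the kind of test we want to avoid, one that parks a test-runner thread for hours while it polls. The IPartnerFeed / IPartnerSite abstractions are made up for illustration, not real APIs:

    using System;
    using System.Threading;
    using NUnit.Framework;

    public interface IPartnerFeed { void Upload(object record); }
    public interface IPartnerSite { bool Displays(int recordId); }

    [TestFixture]
    public class NaiveBlockingTest
    {
        // Concrete implementations would be wired up in [SetUp]; omitted here.
        private IPartnerFeed feed;
        private IPartnerSite site;

        [Test]
        public void DataAppearsOnPartnerSite()
        {
            feed.Upload(new { Id = 42, Value = "example" });

            // Poll the partner website until the data shows up or we give up.
            // This keeps a test-runner thread busy for up to four hours.
            var deadline = DateTime.UtcNow.AddHours(4);
            while (DateTime.UtcNow < deadline)
            {
                if (site.Displays(42))
                {
                    Assert.Pass("Data is visible on the partner site.");
                }
                Thread.Sleep(TimeSpan.FromMinutes(5));
            }
            Assert.Fail("Data never appeared within the timeout.");
        }
    }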

I have seen that PNUnit could be used to run all the tests in parallel, but that does not seem like an elegant solution to me. What if there are 1000 tests? Would that not create a great many processes / threads on the server? And how would we keep track of them?

So, has anyone solved this problem? Did you build something in-house, or is there an open-source solution?

4 answers

This problem can be solved cleanly by separating the data input from the verification. Load all of the available test data into the system, wait the few hours it takes for processing to complete, and then run the checks.
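A minimal sketch of that split, using NUnit categories so the two phases can run as separate, independently scheduled CI steps. TestData, PartnerFeed, PartnerSite and SentRecords are hypothetical helpers standing in for your own code:

    using NUnit.Framework;

    // Phase 1: run first; it only pushes data and records what was sent.
    [TestFixture, Category("Load")]
    public class LoadPhase
    {
        [Test]
        public void UploadAllTestData()
        {
            foreach (var record in TestData.All())
            {
                PartnerFeed.Upload(record);   // send to the external system
                SentRecords.Save(record);     // remember it for the verify phase
            }
        }
    }

    // Phase 2: run a few hours later, e.g. from a scheduled CI job.
    [TestFixture, Category("Verify")]
    public class VerifyPhase
    {
        [Test]
        public void AllUploadedDataIsDisplayed()
        {
            foreach (var record in SentRecords.Load())
            {
                Assert.That(PartnerSite.Displays(record.Id),
                            $"Record {record.Id} is not visible on the partner site.");
            }
        }
    }

With the NUnit 3 console runner the two phases can then be selected by category, for example running with --where "cat == Load" now and --where "cat == Verify" from a job scheduled a few hours later.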


PNUnit seems like a good fit. If you are concerned about "too many processes / threads" on the server, simply start only a limited number of PNUnit tests at once (say, at most N); whenever one test completes, schedule the next. I am not saying PNUnit knows how to do this out of the box; you may need to build that scheduling yourself.
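The throttling idea itself is simple to sketch; something along these lines (a custom scheduler, not a PNUnit feature; the Func<Task> delegates stand in for whatever launches one long-running check):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    public static class ThrottledScheduler
    {
        // Runs the supplied checks with at most maxParallel in flight at a time;
        // as soon as one finishes, the next queued check is started.
        public static async Task RunAsync(IEnumerable<Func<Task>> checks, int maxParallel)
        {
            using var gate = new SemaphoreSlim(maxParallel);

            var running = checks.Select(async check =>
            {
                await gate.WaitAsync();
                try { await check(); }
                finally { gate.Release(); }
            }).ToList();

            await Task.WhenAll(running);
        }
    }

Called with, say, maxParallel = 8, even 1000 pending checks never occupy more than 8 slots at once, and the queued ones cost no thread while they wait on the semaphore.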


This problem is currently being discussed in the NUnit-Discuss Google group: http://groups.google.com/group/nunit-discuss/browse_thread/thread/645ecaefb4f978fa?hl=en

Hope this helps :)


Martin, before I get to my solution: for unit testing, my feeling is that you should only verify what you can control. What you describe is closer to what I would call regression testing, and I take it that "their website" is a third-party site. May I ask what happens if you follow the interface / integration rules correctly but nothing ever appears on their screen, because of a problem you may or may not be able to do anything about? Moreover, what happens when they change their site or their algorithms? You end up writing code that depends on whatever they do, which is painful.

That said, as mentioned above, you can separate the data loading from the result verification. I admit I know nothing about PNUnit, but simply throwing threads at the problem will not make the 3-hour latency of each round trip go away.

If you need to run synchronously, you could load all the data in ClassInitialize() and then sleep until it is time to run the actual verification tests.
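A minimal MSTest sketch of that synchronous variant (TestData, PartnerFeed and PartnerSite are again hypothetical placeholders); note that it still ties up the whole test run for the waiting period:

    using System;
    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class EndToEndDisplayTests
    {
        [ClassInitialize]
        public static void LoadAllData(TestContext context)
        {
            // Push every record up front...
            foreach (var record in TestData.All())
            {
                PartnerFeed.Upload(record);
            }

            // ...then "hibernate" until the external system has had time to process.
            Thread.Sleep(TimeSpan.FromHours(3));
        }

        [TestMethod]
        public void Record42IsDisplayed()
        {
            Assert.IsTrue(PartnerSite.Displays(42), "Record 42 is not visible on the partner site.");
        }
    }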

If it were me, I would have one test project for the data-loading (stress) tests and a second project that checks the results a few hours later. Keeping it synchronous does not buy you much beyond guaranteeing the preconditions before you check the results, and that can be handled in other ways.


Source: https://habr.com/ru/post/900743/

