Unit testing code with unpredictable external dependencies

I'm working on a project that, among other things, controls various laboratory instruments (robots, plate readers, etc.).

Most of these instruments are controlled either through DCOM drivers, over a serial port, or by running proprietary programs with various arguments. Some of these programs or drivers include a simulation mode, some do not. Obviously, my development computer cannot be connected to all of the instruments, and although I can run virtual machines for instruments whose drivers offer a simulation mode, some things cannot be tested without the real instrument.

Now, my own code is not so much about the actual operations on the instruments as about launching those operations, checking that they complete properly, and synchronizing between them. It is written in Java, using various libraries to interact with the instruments and their drivers.

I want to write unit tests for the various instrument-control modules. However, since the instruments can fail in many ways (some documented, some not), and since the code depends on these partially random outcomes, I am a little lost as to how to write unit tests for these parts of my code. I have considered the following options:

  • test only with the actual devices connected: probably the most accurate approach, but not at all practical (insert the plate into the reader, run the unit test, remove the plate, run the next unit test, etc.), not to mention potentially dangerous;
  • use mock objects to stand in for the part that actually talks to the instrument; this is certainly easier to implement (and to run), but it cannot reproduce the full range of potential failures (as mentioned above, much is undocumented, which sometimes leads to unpleasant surprises).

I am leaning towards the latter, but am I missing something? Is there a better way to do this?

+6
3 answers

Both of your bullet points are valid options, but each represents a different type of testing.

At a very high level, using mock objects (per your second bullet point) is great for unit testing: it exercises just your code (the system under test, or SUT) and nothing extraneous to it. Any other dependencies are stubbed out. You can then write test cases that throw as many different error conditions as you can think of (as well as, of course, testing the happy path). The fact that your error domain is undocumented is unfortunate, and something you will have to work around as best you can. Each time you encounter a new error condition on an actual external device, figure out how to reproduce it in code, and then write another unit test that recreates that condition through your mocking framework.
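
To make that concrete, here is a minimal sketch of such a unit test in Java with JUnit 5 and Mockito. The names (PlateReaderDriver, ReaderController) and the error text are made up for illustration; the idea is simply to stub the driver so it throws the failure you observed in the lab, and assert that your own code handles it.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical driver interface wrapping the vendor's DCOM/serial layer.
interface PlateReaderDriver {
    double readAbsorbance(int well) throws DriverException;
}

class DriverException extends Exception {
    DriverException(String message) { super(message); }
}

// Hypothetical code under test: retries once, then reports a failure status.
class ReaderController {
    private final PlateReaderDriver driver;

    ReaderController(PlateReaderDriver driver) { this.driver = driver; }

    String readWellStatus(int well) {
        for (int attempt = 0; attempt < 2; attempt++) {
            try {
                driver.readAbsorbance(well);
                return "OK";
            } catch (DriverException e) {
                // fall through and retry once
            }
        }
        return "READ_FAILED";
    }
}

class ReaderControllerTest {
    @Test
    void reportsFailureWhenDriverKeepsThrowing() throws Exception {
        PlateReaderDriver driver = mock(PlateReaderDriver.class);
        // Simulate the (possibly undocumented) failure observed on the real device.
        when(driver.readAbsorbance(3)).thenThrow(new DriverException("simulated DCOM timeout"));

        ReaderController controller = new ReaderController(driver);

        assertEquals("READ_FAILED", controller.readWellStatus(3));
    }
}
```

Each new failure you discover in the lab becomes one more stubbed behaviour and one more test like this.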

Conversely, testing with the actual instruments connected (per your first bullet point) is great for integration testing: it exercises your code together with its actual external dependencies.

In general, unit tests should be fast (ideally, no more than 10 minutes to compile your code and run the entire unit test suite). That way you get quick feedback from your unit tests whenever new code you wrote makes a test fail. Integration testing is by nature slower: if, for example, one of your external devices takes 1 minute to compute a result or complete a task, and you have 15 different sets of inputs to test with, that is 15 minutes right there for one fairly small set of tests. Your CI server (you should have one that automatically compiles the code and runs all the tests) should kick off automatically on every commit to your version control repository. It should compile and run the unit tests as one step. Once that part completes, it should give you feedback (good or bad), and then, if all the unit tests pass, it should automatically start your integration tests. This assumes that there is either an actual device connected to your CI server or an appropriate stand-in (whatever that means in your particular environment).
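
One common way to let the CI server run these as separate steps is to tag the slow, hardware-dependent tests so the build tool can include or exclude them per phase. A minimal sketch with JUnit 5 (the class name and tag value are arbitrary):

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Talks to the real instrument, so it is excluded from the fast unit-test step
// and only run in the later integration step on the CI server.
@Tag("integration")
class RobotArmIntegrationTest {

    @Test
    void movesPlateFromStackerToReader() {
        // drive the real hardware (or its vendor simulator) here
    }
}
```

Build tools can then filter on the tag, for example running untagged tests on every commit and the "integration" group only after the unit step has passed.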

Hope this helps.

+6

If you use mocks, you can substitute different mocks to exercise different behaviours. More importantly, your tests will be deterministic. That is valuable, because tests that depend on the random behaviour of the system will not give you any sense of security: each run can (and will) execute a different code path.

Since you do not know all the failure scenarios beforehand, I think there are two (non-exclusive) approaches:

  • Capture the details of failures as you encounter them, and code additional tests into your mocks to replicate them. This means your logging must be thorough enough to record the details of each failure. Over time, your test suite will grow to cover these scenarios.
  • Have your interfaces to these systems catch all errors but present them as a finite set of error categories, for example classifying every error as (say) a connection error, a timeout, etc. (a sketch follows after this list). That way you limit your failure scenarios to a small, known set. Whether this fits your application, I cannot say.
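
A minimal Java sketch of that second idea; the category names and the string matching are invented, and a real classifier would key off whatever codes your drivers actually expose:

```java
// Hypothetical classification layer: whatever the vendor driver throws or returns,
// the rest of the code only ever sees one of these categories.
enum InstrumentError {
    CONNECTION_LOST,
    TIMEOUT,
    HARDWARE_FAULT,
    UNKNOWN
}

class InstrumentException extends Exception {
    private final InstrumentError category;

    InstrumentException(InstrumentError category, String vendorDetail) {
        super(category + ": " + vendorDetail);
        this.category = category;
    }

    InstrumentError getCategory() { return category; }

    // Map raw vendor messages onto the finite set; keep the raw detail in the
    // message (and logs) so new, undocumented failures can later be reproduced
    // in the mocks.
    static InstrumentException classify(String rawVendorMessage) {
        String msg = rawVendorMessage == null ? "" : rawVendorMessage.toLowerCase();
        if (msg.contains("timeout")) {
            return new InstrumentException(InstrumentError.TIMEOUT, rawVendorMessage);
        }
        if (msg.contains("rpc") || msg.contains("port")) {
            return new InstrumentException(InstrumentError.CONNECTION_LOST, rawVendorMessage);
        }
        return new InstrumentException(InstrumentError.UNKNOWN, rawVendorMessage);
    }
}
```
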
+4

You cannot unit test something that you did not expect, by definition.

The second approach is the right one for unit tests. Needing the actual devices connected makes a test an integration test at best.

Abstract the dependencies so that you can create a fake instrument, then make that fake simulate reality as closely as you can. As your understanding of reality improves, update the fake and add tests that cover the new situations. (In some cases a mock is appropriate, in others a fake.)
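
For example, a hand-rolled fake might look like this (all names here are hypothetical; the point is that the fake keeps just enough internal state to behave plausibly and can be told to replay failures you have actually observed):

```java
// Hypothetical abstraction that the control code depends on instead of the vendor driver.
interface LiquidHandler {
    void aspirate(int microlitres) throws InstrumentFault;
    void dispense(int microlitres) throws InstrumentFault;
}

class InstrumentFault extends Exception {
    InstrumentFault(String message) { super(message); }
}

// Fake that behaves like the real robot as far as the calling code can tell.
class FakeLiquidHandler implements LiquidHandler {
    private int tipVolume = 0;
    private boolean simulateClog = false;

    // Toggle to replay a failure mode seen on the real device.
    void setSimulateClog(boolean simulateClog) {
        this.simulateClog = simulateClog;
    }

    @Override
    public void aspirate(int microlitres) throws InstrumentFault {
        if (simulateClog) {
            throw new InstrumentFault("tip clogged");
        }
        tipVolume += microlitres;
    }

    @Override
    public void dispense(int microlitres) throws InstrumentFault {
        if (microlitres > tipVolume) {
            throw new InstrumentFault("dispense volume exceeds tip contents");
        }
        tipVolume -= microlitres;
    }
}
```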

+3

Source: https://habr.com/ru/post/900613/

