Integration tests, but how much?

A recent discussion on my team got me thinking. The main topic is how much, and what, we should cover with functional / integration tests (of course they are not the same thing, but the example below is one where the difference does not matter).

Say you have a controller class:

public class SomeController {

    @Autowired
    Validator val;
    @Autowired
    DataAccess da;
    @Autowired
    SomeTransformer tr;
    @Autowired
    Calculator calc;

    public boolean doCheck(Input input) {
        if (val.validate(input)) {
            return false;
        }
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (null == businessStuff) {
            return false;
        }
        return calc.check(businessStuff);
    }
}

It needs a lot of unit tests (e.g. what happens when validation fails, or when there is no data in the database, ...); that part is not in question.

Our main problem, and the thing we cannot agree on, is how much of it the integration tests should cover :-)

I am on the side of keeping the number of integration tests small (test pyramid). What I would cover with them is just one happy path and one unhappy path that returns from the last line, simply to see that the pieces hold together and nothing blows up.
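To make it concrete, roughly what I have in mind is something like the sketch below; the test class, TestConfig, and the TestInputs helper are invented for illustration, our real setup is wired differently:

// Rough illustration only: build the real wiring (with the external
// back-end stubbed behind DataAccess) and walk one happy and one
// unhappy path end to end, asserting only the final result.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = TestConfig.class) // hypothetical test configuration
public class SomeControllerIT {

    @Autowired
    private SomeController controller;

    @Test
    public void happyPathReturnsCalculatorResult() {
        assertTrue(controller.doCheck(TestInputs.validInput()));
    }

    @Test
    public void inputWithoutStoredDataIsRejected() {
        assertFalse(controller.doCheck(TestInputs.inputWithNoStuffs()));
    }
}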

The problem is that when such a test returns false it is not easy to say why, and that makes some of the guys uncomfortable (for example, if we only check the return value, it can hide the fact that the test stays green because someone changed the validation and it now returns false). Sure, we could cover all the cases, but that would be heavy overkill IMHO.

Does anyone have a good rule of thumb for this kind of problem? Or a recommendation? Something to read? A talk? A blog post? Anything on the topic?

Thank you very much!

PS: Sorry for the ugly example, but it's pretty hard to boil a specific piece of code down to an example. Yes, you can argue about throwing exceptions / using a different return type / etc., but our hands are more or less tied because of external dependencies.

+5
2 answers

It is easy to figure out at which level a test belongs if you follow these rules:

  • We test the logic with Unit Tests, and we check that the logic is invoked with Component or System tests.
  • We do not use mocking frameworks (Mockito, JMock, etc.).

Now let's dive in, but first let's agree on terminology:

  • Unit test - checks a method, a class, or several of them in isolation.
  • Component test - initializes part of the application but does not deploy it to an application server. An example would be initializing Spring contexts in tests.
  • System test - requires a full deployment to an application server. An example would be sending HTTP REST requests to the deployed application.

If we build a balanced pyramid, we end up with most of the tests at the Unit and Component levels and only a few at the System level. This is good because lower-level tests are faster and easier to maintain. To achieve this:

  • We should push business logic as low as possible (preferably into the Domain Model), since that lets us test it easily in isolation. Every time you find yourself iterating over a collection of objects and checking conditions on them in a controller or service, that logic ideally belongs in the domain model (see the sketch right after this list).
  • But the fact that the logic works does not mean it is invoked correctly. This is where Component Tests come in: initialize your controllers together with the services and DAOs, then call the entry point once or twice just to see that the logic is actually wired in.
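As a rough illustration of the first point (Order and OrderLine are invented classes), the shipping rule lives on the domain object rather than in a service loop, so a plain unit test can reach it without any container:

import java.util.ArrayList;
import java.util.List;

// Invented example: the rule "an order ships only when every line is in
// stock" sits on the domain object, not in a service iterating over it.
public class Order {

    private final List<OrderLine> lines = new ArrayList<>();

    public void add(OrderLine line) {
        lines.add(line);
    }

    public boolean isReadyForShipping() {
        return !lines.isEmpty()
                && lines.stream().allMatch(OrderLine::isInStock);
    }
}

class OrderLine {

    private final boolean inStock;

    OrderLine(boolean inStock) {
        this.inStock = inStock;
    }

    boolean isInStock() {
        return inStock;
    }
}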

Example: a username cannot exceed 50 characters and may contain only Latin letters plus a few special characters.

  • Unit tests - create users with valid and invalid usernames, and make sure exceptions are thrown or, vice versa, that valid names pass.
  • Component tests - check that when an invalid user is passed to the controller (if you use Spring MVC, you can do this with MockMvc), it returns an error. You only need to pass a single user here - all the rules were already checked at the unit level; here you only care whether those rules are invoked (a sketch of both levels follows this list).
  • System tests - you probably do not need them for this scenario at all.
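A minimal sketch of the first two levels, assuming a hypothetical Username value object that throws IllegalArgumentException on bad names and a UserController exposing POST /users that maps that exception to an HTTP 400 (all names below are invented for illustration):

import static org.junit.Assert.assertEquals;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import java.util.Collections;

import org.junit.Test;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

// UsernameTest.java - unit level: the rules themselves, no Spring involved.
public class UsernameTest {

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNamesLongerThan50Characters() {
        new Username(String.join("", Collections.nCopies(51, "a")));
    }

    @Test
    public void acceptsPlainLatinName() {
        assertEquals("john.doe", new Username("john.doe").value());
    }
}

// UserControllerComponentTest.java (a separate file) - component level:
// one request is enough; we only care that the rules are actually invoked
// when the controller is hit (assumes the controller maps the exception to 400).
public class UserControllerComponentTest {

    private final MockMvc mockMvc =
            MockMvcBuilders.standaloneSetup(new UserController()).build();

    @Test
    public void invalidUsernameIsRejectedByTheEndpoint() throws Exception {
        mockMvc.perform(post("/users")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content("{\"username\":\"not//allowed\"}"))
                .andExpect(status().isBadRequest());
    }
}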

Here is a more detailed example of how you can implement a balanced pyramid.

+3

In general, we write an integration test for each entry point of the application (say, each controller). We verify a few happy flows and a few error flows, with only a handful of assertions, to give us some peace of mind that we did not break anything.

However, we also write tests at lower levels in response to regressions or when several classes are involved in complex behavior.

We use integration tests mainly to catch the following types of regressions:

  1. Refactoring mistakes (these do not show up in unit tests).

For refactoring problems, a few integration tests that hit a large part of your application are more than sufficient. A refactoring tends to cut across many classes, so these tests will expose things like a call ending up in the wrong class or with the wrong parameter.

  2. Early detection of injection problems (a Spring context that does not load).

Injection problems usually come from missing annotations or mistakes in the XML configuration. The first integration test that starts up and wires the entire context (with the back-end mocked out) will catch them every time.
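A single test along these lines is usually enough; the XML file name and the idea that the back-end beans are mocked inside it are assumptions about a typical setup:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

// Starts the whole context once (with the back-end mocked out in the test
// XML); a missing annotation or a broken bean definition fails here long
// before anything reaches an application server.
@RunWith(SpringRunner.class)
@ContextConfiguration(locations = "classpath:test-application-context.xml")
public class ContextSmokeIT {

    @Autowired
    private ApplicationContext context;

    @Test
    public void contextLoadsAndControllersAreWired() {
        assertNotNull(context.getBean(SomeController.class));
    }
}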

  3. Errors in very complex logic that is almost impossible to test without controlling all of the inputs.

Sometimes you have logic spread across several classes that filters, converts, and so on, and sometimes nobody really understands what is going on. To make matters worse, it is almost impossible to test against a live system, because the underlying data sources cannot easily be coaxed into the exact scenario that triggers the error.

In these cases (once the bug has been found), we add a new integration test in which we feed the system the exact input that caused the error and then check that it behaves as expected. This gives a lot of peace of mind after extensive code changes.
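A sketch of how such a pinned-down regression test might look (the Fixtures helper, the file name, and TestConfig are invented for illustration):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

// Replays the exact input that once triggered the bug through the real
// wiring (back-end stubbed in the test config) and asserts the expected
// outcome, so later refactorings cannot silently reintroduce it.
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = TestConfig.class) // hypothetical test configuration
public class ComplexFilteringRegressionIT {

    @Autowired
    private SomeController controller;

    @Test
    public void problematicInputFromThatBugReportNowPasses() {
        // Fixtures.load is a made-up helper that rebuilds the recorded input
        Input input = Fixtures.load("complex-filtering-regression.json", Input.class);

        assertTrue(controller.doCheck(input));
    }
}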

+1

Source: https://habr.com/ru/post/1264791/

