A recent discussion in my team got me thinking. The topic was how much, and what, we should cover with functional / integration tests (of course the two are not the same thing, but for the sake of this example the distinction does not matter).
Say you have a controller class:
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;

public class SomeController {

    @Autowired Validator val;
    @Autowired DataAccess da;
    @Autowired SomeTransformer tr;
    @Autowired Calculator calc;

    public boolean doCheck(Input input) {
        if (val.validate(input)) {
            return false;
        }
        List<Stuff> stuffs = da.loadStuffs(input);
        if (stuffs.isEmpty()) {
            return false;
        }
        BusinessStuff businessStuff = tr.transform(stuffs);
        if (null == businessStuff) {
            return false;
        }
        return calc.check(businessStuff);
    }
}
That it needs plenty of unit tests (for example, for the case where validation fails, or where there is no data in the database, ...) is not in question.
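Just to show what I mean by a unit test here, a minimal sketch with JUnit 4 and Mockito (the domain classes are the ones from the example above; the test class, test name and the assumption that Input has a no-arg constructor are mine):

import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.verifyZeroInteractions;
import static org.mockito.Mockito.when;

import java.util.Collections;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner; // org.mockito.runners.MockitoJUnitRunner on Mockito 1.x

@RunWith(MockitoJUnitRunner.class)
public class SomeControllerUnitTest {

    @Mock Validator val;
    @Mock DataAccess da;
    @Mock SomeTransformer tr;
    @Mock Calculator calc;

    // Mockito injects the mocks into the @Autowired fields by reflection.
    @InjectMocks SomeController controller;

    @Test
    public void doCheckReturnsFalseWhenNoStuffIsFound() {
        Input input = new Input(); // assumed no-arg constructor, for illustration
        when(val.validate(input)).thenReturn(false);                      // validation does not short-circuit
        when(da.loadStuffs(input)).thenReturn(Collections.<Stuff>emptyList()); // "no data in the database"

        assertFalse(controller.doCheck(input));

        verifyZeroInteractions(tr, calc); // the later steps are never reached
    }
}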
Our main problem, and what we cannot agree on, is how much of it the integration tests should cover :-)
I am on the side of striving for fewer integration tests (test pyramid). What I would cover here is just one happy path and one unhappy path where execution returns from the last line, simply to see that the pieces are wired together and nothing blows up.
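Something along these lines is what I have in mind; a sketch only, assuming a hypothetical TestConfig that wires up real (or in-memory) Validator, DataAccess, SomeTransformer and Calculator beans:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestConfig.class) // hypothetical test configuration
public class SomeControllerIntegrationTest {

    @Autowired SomeController controller;

    @Test
    public void happyPathGoesThroughAllLayers() {
        // Assumed: an Input that passes validation and has matching rows in the test database.
        Input input = new Input();
        assertTrue("validator -> data access -> transformer -> calculator wiring",
                   controller.doCheck(input));
    }
}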
The problem is that when the result is false it is hard to tell why, and this makes some of the guys uncomfortable (for example, if we only check the return value, the test can stay green even though someone changed the validation and it now returns false for a completely different reason). Of course we could cover every case, but that would be serious overkill IMHO.
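To make the concern concrete: at the unit level the ambiguity can be removed by also verifying which collaborator produced the short-circuit, e.g. with Mockito (a sketch reusing the mocks from the unit-test example above; Stuff and BusinessStuff are assumed to have no-arg constructors):

    // An unhappy-path assertion that also pins down *where* the false came from.
    // Without the verify call, this test would stay green even if a broken validator
    // started returning true and short-circuited the whole method.
    @Test
    public void doCheckReturnsFalseBecauseTheCalculatorRejectsIt() {
        Input input = new Input();
        when(val.validate(input)).thenReturn(false);          // validation passes
        List<Stuff> stuffs = Collections.singletonList(new Stuff());
        when(da.loadStuffs(input)).thenReturn(stuffs);
        BusinessStuff businessStuff = new BusinessStuff();
        when(tr.transform(stuffs)).thenReturn(businessStuff);
        when(calc.check(businessStuff)).thenReturn(false);    // the step we actually mean to test

        assertFalse(controller.doCheck(input));

        verify(calc).check(businessStuff); // proves the false really came from the calculator
    }

Doing the same in an integration test with real beans is exactly what is not obvious to us.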
Does anyone have a good rule of thumb for this kind of problem? Or a recommendation? Reading? A talk? A blog post? Anything on the topic?
Thank you very much!
PS: Sorry for the ugly example, it is pretty hard to distill a specific piece of production code into an example. Yes, you could argue for throwing exceptions / using a different return type / etc., but our hands are more or less tied because of external dependencies.