Avoiding TDD tests that make large refactorings more difficult

I'm still relatively new to TDD, and I often find myself in a trap where, at some point, I have designed myself into a corner while trying to add new functionality.

This basically means that the API, which grew out of, say, the first 10 requirements, does not scale when the next requirement is added, and I realize that I need to do a major reorganization of the existing functions and structure to add the new feature cleanly.

This is fine, except that the API will change as a result, so all of the original tests will have to change too. This is usually more than just renaming.

I suppose my question is twofold: how should I avoid getting into this position in the first place, and, given that I do end up there, what are safe patterns for refactoring tests and growing new functionality alongside the new API?

Edit: lots of great answers; I'm experimenting with several of the approaches. I marked as the solution the one I found most helpful.

+4
source
6 answers

How should I avoid getting into this position in the first place?

The most general rule: write tests with this kind of refactoring in mind. In particular:

  • Tests should use helper methods whenever they create anything API-specific (e.g., sample objects). That way there is only one place to change when the design changes (for example, after a required field is added to the constructed object). A sketch of the idea follows this list.

  • The same goes for checks on API output.

  • Tests should be written as a "diff from a default", with the default coming from the helpers above. For example, if your test checks a method's effect on a field x, the test should set only x and take the remaining fields from the default.
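
A minimal sketch in Python of both rules; the Order class, make_order() helper, and the discount rule are all invented for illustration. The point is that make_order() is the only test code that knows how to build a valid object:

```python
import unittest
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    quantity: int
    express: bool

    def qualifies_for_bulk_discount(self) -> bool:
        return self.quantity >= 50

def make_order(**overrides):
    """The one place that knows how to build a valid Order for tests."""
    defaults = {"customer": "alice", "quantity": 1, "express": False}
    defaults.update(overrides)
    return Order(**defaults)

class DiscountTest(unittest.TestCase):
    def test_bulk_orders_get_a_discount(self):
        # "Diff from default": only quantity matters to this test, so only
        # quantity is set. If Order gains a new required field, make_order()
        # is the only test code that has to change.
        order = make_order(quantity=100)
        self.assertTrue(order.qualifies_for_bulk_discount())

if __name__ == "__main__":
    unittest.main()
```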

In fact, these are the same rules that apply to code in general.

What are safe patterns for refactoring tests and growing new functionality with the new API?

Whenever you find that an API change forces you to modify tests in several places, try to figure out how to funnel that code through a single place. This follows from the rules above.

+6
source

Make your tests small. A good test calls maybe 1-3 methods on the subject under test and makes a few assertions about the result. Such tests should only need to change when one of those methods changes.
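
A small illustration of this rule, using an invented Stack class: the test touches only push() and pop(), so only a change to those two methods can force it to change.

```python
import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_pop_returns_last_pushed_item(self):
        # 2 method calls, 1 assertion: the whole surface this test depends on.
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual(stack.pop(), "b")

if __name__ == "__main__":
    unittest.main()
```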

Make your test code clean. If you have not read Robert C. Martin's Clean Code, do so. Apply its rules to your test code just as you do to your production code. This tends to reduce the surface area affected by any refactoring.

Refactor more often. Instead of (perhaps unconsciously) putting refactoring off until a big one is unavoidable, do many small refactorings.

If you are faced with a huge refactoring, break it into a handful (or, if necessary, a couple of hundred) of tiny refactorings.

+2
source

In this case, I suggest you scope features into short iterations.

Because iterations are short, features are grouped into smaller, isolated sets. This reduces the need to come up with some grand design that may not adapt to users' actual needs. This week's code only has to work with this week's code, which reduces the chance that new work tears up old work.

+1
source

The following may help.

  • Follow good coding standards and techniques, including TDD and the use of design patterns, to create well-structured designs. The application as a whole should then be easier to extend with new features.
  • Maintain good separation of concerns. If your API is well separated from the rest of the functionality (computation, database access, etc.), then changing the functionality without changing the API, or vice versa, should be easier.
  • Use BDD to provide automated tests at a higher level (i.e., closer to user-level tests than unit tests). These should help you trust a refactoring even when all of your unit tests break because of it.
  • Use a dependency injection container such as Windsor. If your classes receive their dependencies instead of constructing them, then when those dependencies change there is far less rework (especially if you have many tests) than if they were hard-coded into your classes. A sketch of the idea follows this list.
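
Windsor is a .NET container; the sketch below shows only the underlying constructor-injection idea, in Python with invented names. ReportService never builds its own repository, so a test can hand it a stub:

```python
class ReportService:
    def __init__(self, repository):
        self._repository = repository  # injected, not constructed here

    def total_sales(self) -> int:
        return sum(self._repository.fetch_sales())

class StubRepository:
    """Test double standing in for the real database-backed repository."""
    def fetch_sales(self):
        return [10, 20, 30]

# Rewiring the real repository happens in one place (the container or
# composition root), not in every test that uses ReportService.
assert ReportService(StubRepository()).total_sales() == 60
```
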
0
source

Yes, this is a problem that is hard to avoid with TDD, because the whole idea is to avoid the over-engineering that big up-front design causes. With TDD, your design will therefore change frequently. But the more tests you have, the more effort each refactoring requires, which effectively discourages refactoring and contradicts the whole idea of TDD.

Even though your design will change, your core requirements will be fairly stable, so at a "high level" the way your application works should not change much. I therefore advise putting all your tests at that "high level" (integration tests, roughly speaking). Low-level tests are a liability because you have to change them with every refactoring. High-level testing is a little more work, but I think it is worth it in the end; a sketch of the style follows.
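
A sketch of the high-level style with invented names: the test drives the app only through its public entry points and asserts on observable behaviour, so internal refactoring leaves it untouched.

```python
class TaskApp:
    def __init__(self):
        self._tasks = []  # internal representation is free to change

    def add_task(self, title):
        self._tasks.append({"title": title, "done": False})

    def complete(self, title):
        for task in self._tasks:
            if task["title"] == title:
                task["done"] = True

    def completed_titles(self):
        return [t["title"] for t in self._tasks if t["done"]]

def test_completing_a_task():
    app = TaskApp()
    app.add_task("write report")
    app.complete("write report")
    # Swapping the dict-based storage for a Task class would not break this.
    assert app.completed_titles() == ["write report"]

test_completing_a_task()
```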

I wrote an article about this a year ago: http://www.hardcoded.net/articles/high-level-testing.htm

0
source

You should not find yourself "painted into a corner" with TDD. The eleventh test should not force a design change so serious that the first ten tests have to change. Think about why so many tests need to change: look at them in detail, one after another, and see whether you can come up with a way to make the change without breaking the existing tests.

If, for example, you need to add a parameter to a method that they all call:

  • you can keep the existing method with the smaller parameter list and have it delegate to the new method, supplying a default for the new parameter (sketched after this list);
  • you could have all the tests call a utility method (or perhaps a setup method) that calls the old method, so the call needs changing in only one place;
  • you can let your IDE make all the changes with a single command.
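
A sketch of the first option with a made-up greet() method: the old signature survives as a thin delegate, so the existing tests keep passing unchanged.

```python
def greet_with_style(name, style="plain"):
    # New method carrying the extra parameter the new requirement needs.
    prefix = "** " if style == "loud" else ""
    return prefix + f"Hello, {name}"

def greet(name):
    # Old method kept as a thin delegate; existing tests still call this.
    return greet_with_style(name)

assert greet("Ada") == "Hello, Ada"                        # old tests unchanged
assert greet_with_style("Ada", style="loud") == "** Hello, Ada"
```

In Python a default argument on the original function achieves the same thing; the explicit delegate form matters more in languages where you add an overload instead.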

TDD and refactoring work symbiotically; each helps the other. Because TDD leaves you with comprehensive tests, refactoring is safe; because you have the organizational, intellectual, and editor tools for refactoring, you can keep your tests well synchronized with your design. You say you are new to TDD; perhaps you need to build up your refactoring skills as you learn it.

0
source

Source: https://habr.com/ru/post/1346009/

