What are some good examples of *perfectly acceptable* approaches that don't use / don't require test-driven development?

The TDD loop is: test, code, refactor, (repeat), then commit. TDD means development that is driven by tests; in particular, it means understanding the requirements and writing tests before writing the code.
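As a concrete illustration (my own minimal sketch, not from the question), one turn of that loop in Python: a test written first against code that does not exist yet, then just enough code to make it pass, then refactoring under the test's protection.

```python
import unittest

# Step 1 (red): the test is written first, before slugify() exists,
# encoding the requirement as an executable check.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): the minimal code that makes the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3: refactor freely, rerun the test, then repeat for the next requirement.
if __name__ == "__main__":
    unittest.main()
```

The function name and requirement here are hypothetical; the point is only the ordering: test, code, refactor, repeat.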

My natural inclination is a philosophical bias in favor of TDD, so I asked this question to make sure there really are other approaches that work as well as, or even better than, TDD. Other questions suggest that TDD is expensive, hard to adopt, and brings problems of its own. I agree, but what are the good alternatives?



I can think of plenty of approaches that are not TDD, but whose problems outweigh their benefits. This is not a moral judgment, simply an observation that they cost more than they deliver. Some of them may be fine in limited settings, such as training exercises, but examples of approaches that, unlike TDD, I find unacceptable for serious production work include:

  • Code first, QA later. Relying on a downstream test / QA team to catch defects can be problematic, especially if you have not worked through the requirements and design first. A symptom of this is developers producing so many bugs that the team has to fall back on triage; each development cycle gets worse, programmers work longer hours and sleep less, and the death march continues until the team burns out.
  • Superstition. Believing in code you do not understand: you borrow code that you assume has been verified somewhere, e.g. legacy code, a magic code generator, or an open source project, then hack up a storm of modifications, bolt Facebook Connect onto your user interface, invent new magic features on the fly (for example, a mashup of the Twitter API, the Google Maps API, and maybe the Zappos API), demo several new “products”, then write a thin “specification” and a list of “test cases” and submit it all to Mechanical Turk for testing. (Extra points are awarded for believing your product is the next Facebook, Twitter, or YouTube.)
+4
source
3 answers

Cleanroom Software Engineering is a methodology that, on the one hand, sounds extremely rigid, waterfall-like, un-agile, and almost the opposite of TDD, but, on the other hand, is actually very similar. It is highly iterative, like all agile methods. Like TDD, you write the specification first, but unlike TDD, that specification takes the form of a mathematical proof of correctness rather than an executable test.

Whereas the TDD cycle is

  • Specify
  • Code
  • Refactor
  • Demonstrate correctness by running the executable specification

the Cleanroom cycle is

  • Specify
  • Code
  • Refactor
  • Prove correctness
  • (test)

I put the tests in parentheses because they are usually (semi-)automatically generated from the specification. So although they are part of the development cycle, they are not part of the work the programmer has to do.

From what I have read, Cleanroom's productivity and defect metrics are very similar to TDD's, which makes me think the real value of TDD is not actually the testing part; it is in the mindset. It would be an interesting experiment to replace the “red” part of TDD with a simple stopwatch that locks your keyboard for 30 seconds before you can write a new method.

+4
source

There are cases where automated testing is simply not relevant or cannot be implemented:

  • Interactive user interfaces are usually very hard to test automatically; a lot depends on the actual user experience and is hard to verify without a real person, i.e. manual QA.
  • Some things cannot be “measured” or “checked” mechanically and have to be handed to a person for evaluation. For example, when developing an image-processing algorithm to be embedded in a device, it is very hard to define what the “right” output is, let alone measure it, and some cases are extremely hard to verify at all.
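A partial mitigation (my own sketch, not from the answer) is to automate a crude numeric proxy such as PSNR and reserve the real quality judgment for a human reviewer:

```python
import math

# PSNR (peak signal-to-noise ratio) between two 8-bit grayscale images,
# represented here as flat lists of pixel values. Higher PSNR means the
# processed image is numerically closer to the reference. It is only a
# rough proxy for perceived quality, so a human still has the final say;
# the automated check just catches gross regressions.
def psnr(reference, processed, max_value=255):
    mse = sum((r - p) ** 2 for r, p in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)

# Hypothetical reference/output pixels and threshold, for illustration only.
ref = [10, 20, 30, 40]
out = [11, 19, 31, 39]
assert psnr(ref, out) > 30  # fails loudly if the algorithm badly degrades
```

The threshold of 30 dB here is an arbitrary placeholder; in practice it would be tuned per use case, and borderline results would still go to manual review.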

I would say there are many cases that automated scripts cannot cover; most of them require a human response and a human quality assessment.

In such cases it is acceptable not to test automatically.

+2
source

Defect-driven development. We are sometimes forced into this when stakeholders want a 1-year project done in 3-4 months while keeping everything else constant. ;-)

It actually worked fairly well. Without waiting for the requirements to be finalized, QA gets early builds, and stakeholders stay involved in resolving issues from start to finish (since QA/dev do not have enough information on their own). The only downside is the lack of refactoring. But hey, if users are satisfied and the product shipped in just 3 months, saving a lot of money, refactoring and the SOLID principles do not really matter... until the original developers leave, at which point maintenance becomes a pain in the neck.
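In that mode, the closest thing to a suite is a regression test pinned to each defect report after the fact. A hypothetical example (function, bug, and report number are all invented for illustration):

```python
import unittest

def average_item_price(prices):
    # Fix for hypothetical defect report: this used to raise
    # ZeroDivisionError when the cart was empty.
    if not prices:
        return 0.0
    return sum(prices) / len(prices)

class TestEmptyCartDefect(unittest.TestCase):
    """Regression test written AFTER the defect report, not before the code:
    the inverse of TDD's ordering, but it still pins the fix in place."""

    def test_empty_cart_averages_to_zero(self):
        self.assertEqual(average_item_price([]), 0.0)

    def test_normal_cart_still_works(self):
        self.assertEqual(average_item_price([2.0, 4.0]), 3.0)

if __name__ == "__main__":
    unittest.main()
```

Over time such defect-pinned tests accumulate into a safety net, which softens (but does not remove) the maintenance pain described above.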

+1
source

Source: https://habr.com/ru/post/1333386/

