When / How Often Should I Test?

As a novice developer getting into the rhythm of my first professional project, I am trying to develop good habits as early as possible. However, I find that I often forget to test, put it off, or do a whole batch of testing at the end of a build instead of testing one thing at a time.

My question is: what rhythm do you prefer when working on large projects, and where does testing fit into it?

+4
12 answers

Well, if you want to follow the TDD guys: before you write the code ;)

I am very much in the same position as you. I want to get more into testing, but right now we are in a situation where the pressure is to “get the code out” rather than “get the code right”, which scares me. So I am slowly trying to integrate testing processes into my development cycle.

At the moment I test as I code, trying to break the code as I write it. I find it hard to get into the TDD mindset... it takes time, but that is how I would like to work.

EDIT:

I thought I should probably expand on this. Here is my basic “workflow”...

  • Plan out what I want from the code: a possible object design, whatever.
  • Create my first class, adding a large comment at the top outlining my “vision” for the class.
  • Outline the main test cases. These will basically become the unit tests.
  • Create my first method, along with a short comment explaining how it is expected to work.
  • Write an automated test to make sure it does what I expect.
  • Repeat steps 4-6 for each method (note that the automated tests all sit in one big list that runs on F5); a minimal sketch of one such method-and-test pair follows this list.
  • Then I create some higher-level tests to emulate the class in a production environment, fixing any problems that surface.
  • If new bugs come to light after that, I go back and write a new test for the bug, make sure the test fails (this also serves as a proof of concept for the bug), and then fix it.
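
For what it's worth, here is a minimal sketch of steps 4-6 in Python's built-in unittest (the same shape works in any xUnit-style framework); the function and names are invented for illustration, not taken from any particular project:

    import unittest

    def word_count(text):
        """Return the number of whitespace-separated words in text."""
        # Expected behavior: empty or whitespace-only input yields 0.
        return len(text.split())

    class WordCountTests(unittest.TestCase):
        # One small automated test per expectation, kept in one suite
        # so the whole list can be re-run with a single keystroke.
        def test_counts_simple_sentence(self):
            self.assertEqual(word_count("one two three"), 3)

        def test_empty_input_yields_zero(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()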

Hope this helps. I am open to comments on how to improve this, since, as I said, it is something I struggle with too...

+6

Before checking the code in.

+2

First and often. If I am building some new feature for the system, I define the interfaces first, then write unit tests against those interfaces. To work out which tests to write, consider the API and the functionality it provides, pull out a pen and paper, and think about the possible error conditions and the ways you can prove it is doing the right job. If this is too difficult, chances are your API is not good enough. As for the tests, see whether you can avoid writing “integration” tests that exercise more than one specific object, and keep them as true “unit” tests.

Then create a default implementation of your interface (one that does nothing and returns garbage values, but does not throw exceptions), plug it into the tests, and make sure the tests fail (this checks that your tests actually work! :)). Then write the real functionality and run the tests again. This mechanism is not perfect, but it will catch many simple coding errors and gives you the ability to exercise a new feature without having to plug it into the whole application.
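
A rough sketch of that mechanism in Python; the interface, the names, and the rate-limiting logic are all invented for illustration:

    from abc import ABC, abstractmethod
    import unittest

    class RateLimiter(ABC):
        """The interface is defined first; the tests are written against it."""
        @abstractmethod
        def allow(self, user_id: str) -> bool: ...

    class DefaultRateLimiter(RateLimiter):
        """Default implementation: returns a garbage value, never throws.
        Plug this into the tests first to prove that the tests can fail."""
        def allow(self, user_id: str) -> bool:
            return False  # garbage value

    class RealRateLimiter(RateLimiter):
        """Real implementation, written after watching the tests fail.
        Allows up to `limit` calls per user."""
        def __init__(self, limit: int = 3):
            self.limit = limit
            self.calls = {}

        def allow(self, user_id: str) -> bool:
            self.calls[user_id] = self.calls.get(user_id, 0) + 1
            return self.calls[user_id] <= self.limit

    class RateLimiterTests(unittest.TestCase):
        # Point this at DefaultRateLimiter first: the tests should fail,
        # which proves the tests themselves are doing real work.
        limiter_class = RealRateLimiter

        def test_first_call_is_allowed(self):
            self.assertTrue(self.limiter_class().allow("alice"))

        def test_call_over_the_limit_is_refused(self):
            limiter = self.limiter_class()
            for _ in range(3):
                self.assertTrue(limiter.allow("bob"))
            self.assertFalse(limiter.allow("bob"))

    if __name__ == "__main__":
        unittest.main()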

After that, you need to test the feature in the main application, where it combines with existing features. Testing here is more difficult and, if possible, should be partly handed off to a good QA tester, since they will have the knack for breaking things, although it helps if you have those skills too. Testing well is a knack you have to pick up, to be honest. My own experience comes from my naive deployments and the bugs users subsequently reported when they used the software in anger.

When this first happened to me, I found it irritating that users were deliberately trying to break my software, and I wanted to mark all the “bugs” down as “training issues”. After reflecting on it, though, I realized that our role (as developers) is to make the application as simple and reliable as possible to use, even by idiots. It is our role to empower the idiots, and that is what we are paid for. Idiot-proofing, in other words.

To test effectively, you have to get into the mindset of trying to break everything. Assume the mantle of the user who mashes buttons and generally tries to destroy your application in strange and wonderful ways. Assume that any flaws you do not find will be found in production, to your company's serious loss of face. Take full responsibility for all of these problems, and curse yourself when a bug you are responsible for (or even partly responsible for) turns up in production.

If you do most of the above, you should start producing much more robust code. It is a bit of an art form, though, and takes a lot of experience to do well.

+1

A good maxim to remember is:

"Test early, test often, and test again when you think you are done."

+1

When to test? When it is important that the code works correctly!

+1

When I hack something together for myself, I test at the end. Bad practice, but these are usually small things that I will only use a few times, and that is about it.

In a larger project, I write the tests before writing a class, and I run them after every change to that class.

0

I test constantly. As soon as I finish even a single loop inside a function, I run the program, set a breakpoint at the top of the loop, and step through it. This is all just to make sure the process is doing exactly what I want.

Then, once the function is complete, test it in its entirety. You probably want to set a breakpoint just before the function is called and watch it in the debugger to make sure it works fine.

I think I would say: "Test often."

0

I have only recently added unit testing to my regular workflow, but I write unit tests:

  • to express the requirements of each new code module (right after I write the interface, but before writing the implementation)
  • every time I think “that had better still work by the time I'm finished...”
  • when something breaks, to quantify the bug and prove that I have fixed it (see the regression-test sketch after this list)
  • when I write code that explicitly allocates or frees memory --- I do not want to go hunting for memory leaks...
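
A sketch of the third point, using Python's unittest; the bug and all names are invented for illustration. The test is written to reproduce the reported failure, shown failing before the fix, and kept in the suite afterwards:

    import unittest

    def parse_price(text):
        """Parse a price string like '1,299.50' into a float.
        Bug report: crashed on inputs with thousands separators.
        Fix: strip the commas before converting."""
        return float(text.replace(",", ""))

    class ParsePriceRegressionTests(unittest.TestCase):
        def test_thousands_separator(self):
            # Failed before the fix (float('1,299.50') raises ValueError),
            # which both quantifies the bug and proves the fix.
            self.assertEqual(parse_price("1,299.50"), 1299.50)

    if __name__ == "__main__":
        unittest.main()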

I run the tests on most builds, and always before checking the code in.

0

Start with unit testing. In particular, check out TDD, Test-Driven Development. The concept behind TDD is that you write your unit tests first and then write your code. If a test fails, you go back and rework your code. If it passes, you move on to the next piece.

I take a hybrid approach to TDD. I do not like writing tests against nothing, so I usually write some of the code first and then bring the unit tests in. It is an iterative process that you are never really done with: you change the code, you run your tests; if anything fails, you fix it and repeat. A small sketch of that loop follows.
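
As an illustration of that hybrid loop (a sketch with invented names, not a prescribed pattern): the function is written first in the normal flow of development, and the test is bolted on afterwards to lock its behavior down before the next change.

    import unittest

    # Code written first, in the normal flow of development...
    def slugify(title):
        """Turn 'Hello, World!' into 'hello-world'."""
        cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
        return "-".join(cleaned.split())

    # ...and the unit test added afterwards, then re-run on every change.
    class SlugifyTests(unittest.TestCase):
        def test_punctuation_and_case(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_multiple_spaces_collapse(self):
            self.assertEqual(slugify("a   b"), "a-b")

    if __name__ == "__main__":
        unittest.main()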

The other type of testing is integration testing, which comes later in the process and can usually be performed by a QA testing team. Integration testing addresses the need to verify the pieces as a whole; it is the working product you are concerned with testing here. This is tougher to deal with because it usually involves automated testing tools (like Robot, for example).

Also, take a look at a product such as CruiseControl.NET for doing continuous builds. CC.NET is nice because it will run your unit tests with each build, notifying you immediately of any failures.

0

We do not do TDD here (although some of us have advocated it), but our rule is that you are supposed to check your unit tests in with your changes. It does not always happen, but it is easy to go back, look at a particular changeset, and see whether tests were written.

0

I find that if I wait until the end of writing a new feature to test it, I will have forgotten many of the corner cases that I thought might break the feature. That is fine if you are building something for yourself, but in a professional environment my flow follows the classic form: Red, Green, Refactor (a minimal sketch of one full cycle follows the three steps below).

Red: write your test first and watch it fail. That way you know the test is actually asserting against the right thing.

Green: make your new test pass in the easiest way possible. If that means hard-coding the result, that is fine. This is great for those who just want something working right away.

Refactor: now that your test passes, you can go back and change your code with confidence. Did your new change break your test? Great, your change had an implication you did not realize, and now your test is telling you about it.
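
A minimal sketch of one Red-Green-Refactor cycle in Python's unittest; the fizzbuzz function is invented for illustration:

    import unittest

    # RED: the tests are written first, against a function that does not
    # exist yet (or exists only as a stub), so the first run fails.
    class FizzBuzzTests(unittest.TestCase):
        def test_three_is_fizz(self):
            self.assertEqual(fizzbuzz(3), "Fizz")

        def test_plain_number_passes_through(self):
            self.assertEqual(fizzbuzz(2), "2")

    # GREEN (first pass): the easiest thing that makes the tests pass,
    # hard-coding included, is perfectly acceptable at this stage:
    #
    #     def fizzbuzz(n):
    #         return "Fizz" if n == 3 else "2"
    #
    # REFACTOR: generalize with confidence; if the change breaks a test,
    # the suite says so immediately.
    def fizzbuzz(n):
        return "Fizz" if n % 3 == 0 else str(n)

    if __name__ == "__main__":
        unittest.main()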

This rhythm has sped up my development over time, because I essentially keep a running record of everything I thought needed checking for the feature to work! This, in turn, leads to many other benefits that I will not get into here...

0

Lots of great answers here!

I try to test at the lowest level that makes sense:

  • If a single computation or conditional is complicated, add test code as you write it and make sure each piece works. Comment out the test code when you are done, but leave it there to document how you tested the algorithm.

  • Test each function.

    • Exercise each branch at least once.
    • Exercise the boundary conditions (input values at which the code changes its behavior) to catch off-by-one errors; see the sketch after this list.
    • Test various combinations of valid and invalid inputs.
    • Look for situations that might break the code, and test them.
  • Test each module with the same strategy as above.

  • Test the body of code as a whole to ensure that the components interact correctly. If you have been diligent with the lower-level testing, this is essentially a “confidence test” to ensure nothing broke during assembly.
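
A small sketch of the boundary-condition point above, with an invented function (Python):

    import unittest

    def grade(score):
        """Map a 0-100 score to pass/fail; the behavior changes at 50."""
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 50 else "fail"

    class GradeBoundaryTests(unittest.TestCase):
        # Exercise the values on each side of every behavior change:
        # this is where off-by-one errors live.
        def test_boundaries(self):
            self.assertEqual(grade(49), "fail")
            self.assertEqual(grade(50), "pass")
            self.assertEqual(grade(0), "fail")
            self.assertEqual(grade(100), "pass")
            with self.assertRaises(ValueError):
                grade(101)
            with self.assertRaises(ValueError):
                grade(-1)

    if __name__ == "__main__":
        unittest.main()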

Since most of my code is for embedded devices, I pay particular attention to robustness, to the interactions between the various threads, tasks, and components, and to unexpected resource usage: memory, CPU, file system space, and so on.

In general, the sooner you catch a bug, the easier it is to isolate, identify, and fix it --- and the more time you spend creating rather than chasing your tail.*

* I know: -1 for the gratuitous pointer reference!

0

Source: https://habr.com/ru/post/1276411/

