How do you know your unit tests are correct?

I have done a little unit testing at various points in my career. Whenever I dive back into it, it is always difficult for me to prove that my tests themselves are correct. How can I tell that there is no error in my unit test? Usually I launch the application to prove it works, and then use the unit tests as a kind of regression test. What is the recommended approach, and/or what approach do you apply to this problem?

Edit: I also understand that you could write small, granular unit tests that would be easy to verify. However, if you assume that small, granular code is flawless and bulletproof, then you could simply write small, granular programs and would not need unit testing.

Edit 2: To the arguments “unit testing exists so that your changes do not break anything” and “that will only happen if the test has the same flaw as the code”: what if the test itself is flawed? A bad test can pass both good and bad code. My main question is: what good is unit testing if your tests can be flawed? You cannot really increase your confidence in your code, you cannot prove that your refactoring worked, and you cannot prove that you fulfilled the specification.

+19
unit-testing tdd
Feb 18 '09 at 15:25
16 answers

A unit test should express the “contract” of whatever you are testing. It is more or less a specification of the unit, written in code. So, given the specification, it should be more or less obvious whether the unit tests are “correct”.

But I would not worry too much about the “correctness” of unit tests. They are part of the software, so they can be incorrect too. The point of unit tests, from my point of view, is that they guarantee the “contract” of your software is not broken by accident. That is what makes them so valuable: you can dig into the software, refactor some parts, change the algorithms in others, and your unit tests will tell you whether you have broken something. Even incorrect unit tests will tell you this.
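
As a minimal sketch of what such an executable contract can look like (JUnit 5 assumed; the PriceCalculator example is hypothetical, not from the answer), each test states one clause of what the unit promises:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    // Hypothetical unit under test.
    static int applyDiscount(int priceCents, int percent) {
        if (percent < 0 || percent > 100)
            throw new IllegalArgumentException("percent out of range");
        return priceCents - priceCents * percent / 100;
    }

    // Contract clause 1: a 0% discount leaves the price unchanged.
    @Test
    void zeroDiscountIsIdentity() {
        assertEquals(1000, applyDiscount(1000, 0));
    }

    // Contract clause 2: a 100% discount makes the item free.
    @Test
    void fullDiscountIsFree() {
        assertEquals(0, applyDiscount(1000, 100));
    }

    // Contract clause 3: discounts outside 0..100 are rejected.
    @Test
    void outOfRangePercentIsRejected() {
        assertThrows(IllegalArgumentException.class,
                () -> applyDiscount(1000, 101));
    }
}
```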

If there is an error in one of your unit tests, you will find out, because the test will fail even though the code under test is correct. Then you fix the unit test. No harm done.

+14
Feb 18 '09 at 15:34

Well, Dijkstra famously said:

"Testing shows the presence, not the absence of errors"

IOW, how would you write a unit test for an add(int, int) function?

IOW, this is hard.
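
To make that concrete, here is a hypothetical sketch (JUnit 5 and the class names are my assumptions, not from the answer): a deliberately broken add passes a plausible-looking test, because the chosen inputs happen not to distinguish it from a correct implementation.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class AddTest {
    // Deliberately broken "add": it multiplies instead of adding.
    static int add(int a, int b) {
        return a * b;
    }

    @Test
    void addsTwoNumbers() {
        assertEquals(4, add(2, 2)); // passes: 2 * 2 is also 4
        assertEquals(0, add(0, 0)); // passes: 0 * 0 is also 0
    }
}
```

Adding a case such as assertEquals(3, add(1, 2)) would kill this particular fake, but no finite set of cases proves add correct for all inputs, which is exactly Dijkstra's point.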

+7
Feb 18 '09 at 15:30

There are two ways to ensure the correctness of your unit tests:

  • TDD: write the test first, then write the code it is supposed to test. That means you actually see the test fail. If you know it detects at least some classes of error (for example, “I have not yet implemented the functionality I want to test”), you know it is not entirely useless. It may still let some other errors slip through, but you know the test is not completely broken.
  • Have a lot of tests. If one test misses some errors, they are likely to cause failures further down the line, making other tests fail. When you notice that and fix the offending code, you get a chance to study why the first test did not catch the error as expected.

And finally, of course, keep unit tests so simple that they are unlikely to contain errors themselves.
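
A minimal sketch of the first point, assuming JUnit 5 and a made-up Counter class: the test is written first, is seen to fail, and only then is the code made to pass, which proves the test can detect at least the “not implemented yet” class of error.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CounterTest {
    @Test
    void incrementAddsOne() {
        Counter c = new Counter();
        c.increment();
        // Written before Counter worked: the first run failed (red),
        // which proved the test is capable of failing at all.
        assertEquals(1, c.value());
    }
}

// Step 1 (red): value() initially returned 0 unconditionally and the test failed.
// Step 2 (green): the minimal implementation below makes it pass.
class Counter {
    private int value;
    void increment() { value++; }
    int value() { return value; }
}
```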

+7
Feb 18 '09 at 15:58

For this to be a problem, your code would have to contain a bug that coincidentally makes your tests pass. This happened to me recently: I was checking that condition (a) caused a method to fail. The test passed (i.e., the method failed), but it passed because a different condition (b) caused the failure. Write your tests carefully, and make sure each unit test checks one thing.
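
A hypothetical JUnit 5 sketch of that trap (the withdraw example is mine, not the answerer's): asserting only that “some exception was thrown” lets condition (b) masquerade as condition (a); pinning the assertion to one failure mode avoids it.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class WithdrawalTest {
    // Hypothetical method under test: rejects negative amounts (a),
    // but also fails on an unknown account (b).
    static void withdraw(String account, int amount) {
        if (!"alice".equals(account))
            throw new IllegalArgumentException("unknown account");
        if (amount < 0)
            throw new IllegalArgumentException("negative amount");
    }

    @Test
    void rejectsNegativeAmount() {
        // Too loose: this would also pass if the account lookup (b) failed.
        // assertThrows(IllegalArgumentException.class, () -> withdraw("bob", -1));

        // Pinned to one failure mode: valid account, so only (a) can fire.
        IllegalArgumentException e = assertThrows(
                IllegalArgumentException.class,
                () -> withdraw("alice", -1));
        assertEquals("negative amount", e.getMessage());
    }
}
```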

In general, though, tests cannot be written to prove that the code is free of errors. They are a step in the right direction.

+3
Feb 18 '09 at 15:31

I had the same question, and after reading the comments, here is what I think now (with due respect to the previous answers):

I think the problem may be that both of us took the supposed goal of unit tests, namely proving the correctness of the code, and applied that goal to the tests themselves. That is fine as far as it goes, except that the purpose of unit tests is not to prove the correctness of the code.

As with all non-trivial endeavors, you can never be 100% sure. The proper goal of unit tests is to reduce errors, not eliminate them. Most notably, as others have pointed out, they catch cases where a later change accidentally breaks something. Unit tests are just one tool for reducing errors, and they certainly should not be the only one. Ideally, you combine unit testing with code review and solid QA to bring errors down to an acceptable level.

Unit tests should be much simpler than your code; it is impossible to make your code as simple as a unit test if your code does anything meaningful. If you write “small, granular” code that is easy to prove correct, then your code will consist of a huge number of small functions, and you still have to determine whether they all work correctly in combination.

Since unit tests are inevitably simpler than the code they test, they are less likely to contain errors. Even if some of your unit tests are buggy, on the whole they will still improve the quality of your main code base. (If your unit tests are so flawed that this is not true, then your main code base is probably a mess too, and you are thoroughly screwed. I think we can all assume a basic level of competence.)

If you want to apply a second level of unit testing to validate your unit tests, you can, but you hit diminishing returns. To look at it with made-up numbers:

Suppose unit testing reduces production errors by 50%. Then you write meta-unit tests (unit tests that look for errors in your unit tests). Say these catch problems in your unit tests and cut production errors further, down to 40% of the original. But it took 80% as long to write the meta-unit tests as it did to write the unit tests. For 80% of the effort, you got only 20% of the gain. Writing meta-meta-unit tests might buy you another 5 percentage points, but that took 80% of the time you spent writing the meta-unit tests, so for 64% of the original unit-testing effort (which bought you 50 points) you got 5. Even with considerably more generous numbers, this is not an efficient way to spend your time.

In this scenario, it is clear that going beyond writing the unit tests themselves is not worth the effort.

+3
May 20 '09 at 13:53

I think writing a test first (before writing code) is a pretty good way to make sure your test is valid.

Or you could write tests for your unit tests ...: P

+2
Feb 18 '09 at 15:28
  • Unit test code complexity is (or should be) several orders of magnitude lower than that of the real code.
  • The probability that you code an error in your unit test which exactly matches an error in your real code is much lower than the probability of simply coding an error in your real code (if you code an error in your unit test that does not match an error in the real code, the test should fail). Of course, if you made wrong assumptions in your real code, you are likely to repeat the same assumptions in the test, although the different mindset of unit testing should reduce even that case.
  • As already mentioned, when you write a unit test you are (or should be) in a different mindset. When writing real code, you think: “how do I solve this problem?” When writing a unit test, you think: “how can I test every way this could break?” (see the sketch after this list)
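
A hypothetical sketch of that breaking mindset (JUnit 5 and the parseAge function are my assumptions): instead of a single happy-path assertion, the test probes the boundaries where the code could plausibly go wrong.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class ParseAgeTest {
    // Hypothetical function under test: parse a human age from text.
    static int parseAge(String s) {
        int age = Integer.parseInt(s.trim());
        if (age < 0 || age > 150)
            throw new IllegalArgumentException("age out of range: " + age);
        return age;
    }

    @Test
    void happyPath() {
        assertEquals(42, parseAge("42"));
    }

    @Test
    void waysItCouldBreak() {
        assertEquals(0, parseAge("0"));              // lower boundary
        assertEquals(150, parseAge("150"));          // upper boundary
        assertEquals(7, parseAge(" 7 "));            // surrounding whitespace
        assertThrows(IllegalArgumentException.class, // just past the boundary
                () -> parseAge("151"));
        assertThrows(NumberFormatException.class,    // not a number at all
                () -> parseAge("forty-two"));
    }
}
```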

As others have said, it is not about whether you can prove that the unit tests are correct and complete (although that is almost certainly much easier with test code); it is about reducing the number of errors to a very low level, and pushing it lower and lower.

Of course, there has to come a point where your confidence in your unit tests is high enough to rely on them, for example when refactoring. Reaching that point is usually just a matter of experience and intuition (although there are code coverage tools that help).

+2
Feb 18 '09 at 15:41

First, let me begin by saying that unit testing is not just about testing. It is more about the design of the application. To see this in action, record your screen while you code test-first. You will realize that while writing unit tests you are making a lot of design decisions.

How do I know if my unit tests are good?

You cannot test the logical part. Period! If your code says 2 + 2 = 5 and your test makes sure that 2 + 2 = 5, then as far as you know, 2 + 2 is 5. To write good unit tests, you MUST have a good understanding of the domain you are working in. When you know what you are trying to accomplish, you will write good tests and good code to accomplish it. If you have a lot of unit tests and your assumptions are wrong, sooner or later you will discover your mistakes.
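
A hypothetical JUnit 5 sketch of that trap: the implementation and the test encode the same wrong domain assumption (a made-up VAT rate), so the suite stays green while the code is wrong. Only domain knowledge, a reviewer, or an acceptance test catches this; the unit test cannot.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class InvoiceTest {
    // Wrong domain assumption baked into the code: VAT assumed at 25%
    // when the actual rate in this (hypothetical) domain is 20%.
    static int totalWithVat(int netCents) {
        return netCents + netCents * 25 / 100;
    }

    @Test
    void addsVat() {
        // The test author made the same wrong assumption, so this passes:
        // 10000 + 2500 == 12500, and nothing flags that 12000 was intended.
        assertEquals(12_500, totalWithVat(10_000));
    }
}
```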

+2
Feb 18 '09 at 23:14

You don't. In general, tests will be simpler than the code they test, so the idea is that they are less likely to contain errors than the real code is.

+1
Feb 18 '09 at 15:28

This is something that bites everyone who uses unit tests. If I had to give you a short answer, I would say: always trust your unit tests. But that trust should be backed by your past experience:

  • Have you had defects reported during manual testing that a unit test covering that case failed to catch, because of an error in the test itself?
  • Have you had false negatives in the past (tests passing while the code was broken)?
  • Are your unit tests simple enough?
  • Do you write them before the new code, or at least in parallel with it?
+1
Feb 18 '09 at 15:34

You cannot prove that the tests are correct, and if you try, you are doing it wrong.

Unit tests are a first screen, a smoke test, like all automated tests. They are primarily there to tell you if you break something later when you change things. They are not there to prove quality, even at 100% coverage.

The coverage metric does make management feel better, though, and that is sometimes useful in itself!

+1
Feb 18 '09 at 15:57

This is one of the benefits of TDD: the code acts as a test for the tests.

You can make equivalent mistakes in both, but in my experience that is unusual.

But of course, I have had cases where I wrote a test that should have failed, only to have it pass, which told me that my test was wrong.

When I was first learning unit testing, before I worked test-first, I would also intentionally break the code after writing the test, to make sure the test failed as I expected. When it did not, I knew the test was broken.
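
A minimal sketch of that manual check (JUnit 5, hypothetical example): temporarily sabotage the production code and confirm the test goes red before trusting it.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountTest {
    // Hypothetical production code under test.
    static int discountedPrice(int price) {
        return price * 90 / 100;          // 10% discount
        // Manual check: temporarily change to "price * 80 / 100" and rerun.
        // If the test below still passes with that sabotage, the test is broken.
    }

    @Test
    void appliesTenPercentDiscount() {
        assertEquals(90, discountedPrice(100));
    }
}
```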

I really like Bob Martin's description of TDD as the equivalent of double-entry bookkeeping.

+1
Feb 19 '09 at 0:30

Dominic mentioned that "for this to be a problem, your code would have to contain a bug that coincidentally makes your tests pass." One technique you can use to find out whether this is a problem is mutation testing. It makes changes to your code and checks whether the unit tests fail. If they do not, it may indicate areas where the testing is not 100% thorough.
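
A hypothetical sketch of what a mutation-testing tool does conceptually (a JUnit 5 example of mine, not tied to any particular tool): it flips an operator or a boundary in the code and reruns the tests; a mutant that survives marks a blind spot.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class AdultCheckTest {
    // Original code: a mutation tool might rewrite ">= 18" into "> 18".
    static boolean isAdult(int age) {
        return age >= 18;
    }

    @Test
    void adultsAreRecognized() {
        // This test kills the "replace >= with >" mutant only because it
        // checks the boundary value 18. A test using just isAdult(30)
        // would let that mutant survive, exposing the blind spot.
        assertTrue(isAdult(18));
        assertTrue(isAdult(30));
    }
}
```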

+1
May 05 '10 at 6:52

Unit tests are your requirements, made specific. I don't know about you, but I like having the requirements specified before the code is started (TDD). By writing them, and treating them like any other piece of code, you start to feel confident introducing new features without breaking old ones. To make sure that all my code is needed, and that the tests actually verify the code, I use pitest (mutation-testing tools exist for other languages too). For me, untested code is buggy code, however clean it may look.

If a test covers complex code and is itself complex, I often write tests for my tests (example).

+1
May 31 '13 at 23:31

As said above, the best way is to write the test before the actual code. Where applicable, find real-world reference values for the code you are testing (a mathematical formula or similar) and compare the unit test's expected result against them.
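
A minimal sketch of that approach, assuming JUnit 5: the expected values come from an independent reference (here, published Fibonacci numbers), not from running the code under test.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class FibonacciTest {
    // Hypothetical code under test.
    static long fib(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    @Test
    void matchesPublishedValues() {
        // Expected values copied from an external reference table,
        // not derived by running fib() itself.
        assertEquals(0L, fib(0));
        assertEquals(1L, fib(1));
        assertEquals(55L, fib(10));
        assertEquals(6_765L, fib(20));
    }
}
```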

0
Feb 18 '09 at 15:29

Edit: I also understand that you could write small, granular unit tests that would be easy to verify. However, if you assume that small, granular code is flawless and bulletproof, then you could simply write small, granular programs and would not need unit testing.

The idea of unit testing is to test the most granular things, and then stack those tests together to prove the larger case. If you write large tests, you lose some of those advantages, although writing large tests is probably faster.

0
Nov 30 '09 at 20:26


