Should I test the internal implementation, or only test the public behaviour?

For software where ...

  • The system consists of several subsystems
  • Each subsystem consists of several components
  • Each component is implemented using many classes.

... my practice is to write automated tests of each subsystem and of each component.

I don't write a test for each internal class of a component (except insofar as each class contributes to the component's public functionality, and is therefore testable/tested from the outside through the component's public API).

When I refactor the implementation of a component (which I do often, as part of adding new functionality), I therefore don't need to alter any existing automated tests: because the tests depend only on the component's public API, and public APIs typically expand rather than change.

I think this policy contrasts with documents such as Refactoring Test Code, which say things like ...

  • "... unit testing ..."
  • "... a test class for each class in the system ..."
  • "... test code / production code ... perfect for approaching a 1: 1 ratio ..."

... none of which I agree with (or at least, none of which I practise).

My question is: if you disagree with my policy, would you explain why? In what scenarios is this degree of testing insufficient?

In short:

  • Public interfaces are tested (and re-tested), and rarely change (they are added to, but rarely altered)
  • Internal APIs are hidden behind the public APIs, and can be changed without rewriting the test cases which exercise the public APIs (a minimal sketch of such a public-API test follows below)
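
To illustrate the policy, here is a minimal JUnit 4 sketch of a test that touches only a component's public API. The PriceCalculator interface and the trivial fake behind it are invented for this example; the point is that the test names no internal classes, so the implementation behind the interface can be reorganised without touching the test.

    // Sketch only: PriceCalculator and its trivial fake are invented here to
    // illustrate testing a component purely through its public API.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorComponentTest {

        /** The component's public API (illustrative, not from the original post). */
        interface PriceCalculator {
            long priceInCents(String productCode, int quantity);
        }

        /** Stand-in for obtaining the real component; here a trivial fake. */
        private PriceCalculator newCalculator() {
            return (productCode, quantity) -> 250L * quantity;
        }

        @Test
        public void chargesPerUnitThroughThePublicApiOnly() {
            PriceCalculator calculator = newCalculator();
            // The test states externally visible behaviour; whether one class or
            // twenty classes compute this is an implementation detail.
            assertEquals(500L, calculator.priceInCents("WIDGET", 2));
        }
    }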



Footnote: some of my “test cases” are actually implemented as data. For example, the test cases for the user interface consist of data files which contain various user inputs and the corresponding expected system outputs. Testing the system means having test code which reads each data file, replays the input into the system, and asserts that it gets the corresponding expected output.

Although I rarely have to change the test code (because public APIs are usually added to rather than changed), I find that I sometimes (for example, twice a week) need to change some existing data files. This can happen when I change the system's output for the better (i.e. new functionality improves the existing output), which causes existing tests to fail (because the test code only tries to assert that the output hasn't changed). To handle these cases I do the following (a sketch of the harness appears after this list):

  • Re-run the automated test suite with a special run-time flag, which tells it not to assert on the outputs but instead to capture the new outputs into a new directory
  • Use a visual diff tool to see which output files (i.e. which test cases) have changed, and to verify that these changes are good and expected given the new functionality
  • Update the existing tests by copying the new output files from the new directory into the directory from which the test cases run (overwriting the old expected outputs)
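
The footnote above describes the data-driven harness and the regenerate-then-diff workflow in prose; the following is a hedged sketch of how such a harness might look in Java with JUnit 4. The directory names, the regenerate.outputs system property, and the SystemUnderTest entry point are all placeholders, not part of the original post.

    // Sketch of the data-driven harness; directory names, the
    // "regenerate.outputs" flag and SystemUnderTest are placeholders.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DataDrivenSystemTest {

        /** Placeholder for the real system entry point. */
        static class SystemUnderTest {
            static String process(String input) { return input; } // stand-in behaviour
        }

        @Test
        public void eachInputFileProducesItsExpectedOutput() throws IOException {
            // Special run-time flag: capture new outputs instead of asserting.
            boolean regenerate = Boolean.getBoolean("regenerate.outputs");
            Path inputs = Paths.get("testdata/inputs");
            Path expected = Paths.get("testdata/expected");
            Path regenerated = Paths.get("testdata/regenerated");

            try (DirectoryStream<Path> files = Files.newDirectoryStream(inputs)) {
                for (Path inputFile : files) {
                    String input = new String(Files.readAllBytes(inputFile), "UTF-8");
                    String actual = SystemUnderTest.process(input);

                    if (regenerate) {
                        // Write the new output so it can be inspected with a visual diff tool.
                        Files.createDirectories(regenerated);
                        Files.write(regenerated.resolve(inputFile.getFileName()),
                                    actual.getBytes("UTF-8"));
                    } else {
                        String want = new String(
                                Files.readAllBytes(expected.resolve(inputFile.getFileName())),
                                "UTF-8");
                        assertEquals("output changed for " + inputFile.getFileName(),
                                     want, actual);
                    }
                }
            }
        }
    }

Approving a deliberate change then amounts to copying files from testdata/regenerated into testdata/expected, which corresponds to the three steps listed above.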



Footnote: by “component” I mean something like “one DLL” or “one assembly” ... something big enough to be visible on an architecture or deployment diagram of the system, often implemented using dozens or even a hundred classes, with a public API that consists of only one or a few interfaces ... something that can be assigned to a single development team (with other components assigned to other teams), and which therefore, according to Conway's Law, has a relatively stable public API.




Footnote: the article Object-Oriented Testing: Myth and Reality says:

Myth: black-box testing is sufficient. If you do a careful job of test-case design using the class interface or specification, you can be assured that the class has been fully exercised. White-box testing (looking at a method's implementation to design tests) violates the very concept of encapsulation.

Reality: the structure of the implementation under test matters, part II. Many studies have shown that black-box test suites which developers considered to be excruciatingly thorough exercise only one-third to one-half of the statements (let alone the paths or states) in the implementation under test. There are three reasons for this. First, the inputs or states selected typically exercise normal paths, but don't force all possible paths/states. Second, black-box testing alone cannot reveal surprises. Suppose we have tested all of the specified behaviours of the system under test. To be confident that there are no unspecified behaviours, we need to know whether any parts of the system have not been exercised by the black-box test suite. The only way this information can be obtained is by instrumenting the code. Third, it is often difficult to exercise exception and error handling without examining the source code.

I should add that I do white-box functional testing: I look at the code (while implementing it), and write functional tests (which drive the public API) so as to exercise the various branches of the code (details of the feature's implementation).

+41
unit-testing integration-testing code-coverage refactoring automated-tests
May 13 '09 at 4:52
15 answers

My practice is to test internal components through the public API/UI. If some internal code cannot be reached externally, I refactor it away.

+14
May 13 '09 at 5:27

The answer is very simple: you are describing functional testing, which is an important part of software QA. Testing the internal implementation is unit testing, which is another part of software QA with a different goal. That is why you feel that people disagree with your approach.

Functional testing is important to validate that a system or subsystem does what it is supposed to do. Everything the customer sees should be tested this way.

Unit testing is there to check that the 10 lines of code you just wrote do what they are supposed to do. It gives you greater confidence in your code.

Both are complementary. If you are working on an existing system, functional testing is probably the first thing to work on. But as soon as you add code, unit testing it is also a good idea.
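
As an illustration of the difference in granularity, here is what a unit test of "10 lines of code" might look like, as opposed to a functional test that drives a whole component through its public API. The RomanNumeral helper is invented for the sketch; in practice it would be one small internal class inside a much larger component.

    // Invented example: a unit test exercising one small internal class directly,
    // rather than driving the component that contains it.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RomanNumeralTest {

        /** The kind of small helper ("10 lines of code") a unit test targets. */
        static final class RomanNumeral {
            static String of(int n) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < n; i++) {   // deliberately minimal: handles 1..3 only
                    sb.append('I');
                }
                return sb.toString();
            }
        }

        @Test
        public void convertsSmallNumbers() {
            assertEquals("I", RomanNumeral.of(1));
            assertEquals("III", RomanNumeral.of(3));
        }
    }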

+30
May 13 '09 at 8:15

I don't have my copy of Lakos in front of me, so rather than quote it I will simply note that it does a better job than I can of explaining why testing is important at every level.

The problem with testing only the "public behaviour" is that such a test gives you very little information. It will catch many bugs (just as the compiler catches many bugs), but cannot tell you where the bugs are. It is common for a badly implemented unit to return good values for a long time and then stop doing so when conditions change; if that unit had been tested directly, the fact that it was badly implemented would have been evident sooner.

The best level of test granularity is the unit level. Provide tests for each unit through its interface(s). This lets you validate and document your beliefs about how each component behaves, which in turn lets you test dependent code by testing only the new functionality it introduces, which in turn keeps tests short and targeted. As a bonus, it keeps the tests alongside the code they are testing.

To put this another way: it is only correct to test just the public behaviour if you note that every public class has public behaviour of its own.

+8
May 13 '09 at 5:43

There have been a lot of great answers to this question so far, but I want to add a few notes of my own. As a preface: I am a consultant for a large company that delivers technology solutions to a wide range of large clients. I say this because, in my experience, we are required to test much more thoroughly than most software shops do (with the possible exception of API developers). Here are some of the steps we go through to ensure quality:

  • Internal Unit Testing:
    Developers are expected to create unit tests for all the code they write (read: every method). The unit tests should cover positive test conditions (does my method do what it is supposed to?) and negative test conditions (does the method throw an ArgumentNullException when one of my required arguments is null?). We typically incorporate these tests into the build process using a tool such as CruiseControl.net. (A minimal sketch of the positive/negative idea appears after this list.)
  • System Test / Assembly Test:
    Sometimes this step is called something else, but this is when you start testing the public functionality. Once you know that all your individual units function as intended, you want to know that your external functions also work the way you think they should. This is a form of functional verification, since the goal is to determine whether the whole system works as designed. Note that this does not include any integration points. For system testing you should be using mocked interfaces instead of the real ones, so that you can control the output and build test cases around it.
  • System Integration Test:
    At this point in the process you want to connect your integration points to the system. For example, if you are using a credit-card processing system, at this stage you will want to bring in the live system to make sure it still works. You would perform testing similar to the system/assembly testing.
  • Functional Verification:
    Functional verification is users running through the system, or using the API, to verify that it works as expected. If you have built a billing system, this is the stage at which you run your test scripts end to end to verify that everything works as designed. This is obviously a critical stage in the process, since it tells you whether you have done your job.
  • Certification Test:
    Here you put real users in front of the system and let them have a go at it. Ideally you have already tested your user interface with your stakeholders at some earlier point, but this stage will tell you whether your target audience likes your product. You may have heard other vendors call this something like a "release candidate". If all goes well at this stage, you know you are good to move into production. Certification tests should always be performed in the same environment you will be using for production (or at least an identical one).
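
As a small illustration of the "positive and negative test conditions" mentioned in the first bullet, here is a JUnit 4 sketch. The original wording refers to .NET's ArgumentNullException; the Java analogue below uses IllegalArgumentException, and the AccountFormatter class is invented for the example.

    // Invented example of one positive and one negative unit-test condition.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountFormatterTest {

        static final class AccountFormatter {
            String format(String accountNumber) {
                if (accountNumber == null) {
                    throw new IllegalArgumentException("accountNumber must not be null");
                }
                return accountNumber.trim().toUpperCase();
            }
        }

        @Test // positive condition: does the method do what it claims?
        public void formatsAValidAccountNumber() {
            assertEquals("AB-123", new AccountFormatter().format(" ab-123 "));
        }

        @Test(expected = IllegalArgumentException.class) // negative condition
        public void rejectsANullAccountNumber() {
            new AccountFormatter().format(null);
        }
    }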

Of course, I know not everyone follows this process, but if you look at it end to end you can begin to see the benefits of the individual components. I haven't included things like build verification tests, since they happen on a different timeline (e.g. daily). I personally believe unit tests are crucial, because they give you deep insight into which specific component of your application is failing for a specific use case. Unit tests also help you isolate which methods are working correctly, so that you don't spend time looking at them for information about a failure when nothing is wrong with them.

Of course, unit tests can also be wrong, but if you develop your test cases from your functional/technical specification (you do have one, right? ;)) you shouldn't have too many problems.

+8
May 13 '09 at 18:28

If you practise pure test-driven development, then you only write any code after you have some failing test, and only write test code when you have no failing tests. Furthermore, you only write the simplest thing that will make a test fail or pass.

In the limited TDD practice I have seen, this helped me flush out unit tests for every logical condition the code is supposed to fulfil. I'm not entirely confident that 100% of the logic of my private code is exercised through my public interfaces. TDD practice seems complementary to tracking that coverage, but there can still be hidden features that are not reachable through the public APIs alone.

I suppose you could say that this practice protects me against future defects in my public interfaces. Either you find that worthwhile (and it lets you add new features faster), or you consider it a waste of time.

+2
May 13 '09 at 5:19

You can program functional tests; that's fine. But you should validate, using test coverage of the implementation, that the code under test actually serves a purpose related to the functional tests, and that it really does something relevant.

+2
Aug 29 '09 at 2:54

You should not blindly assume that unit == class. I think that can be counterproductive. When I say I write a unit test, I am testing a logical unit: "something" that provides some behaviour. A unit may be a single class, or it may be several classes working together to provide that behaviour. Sometimes it starts out as one class, but grows into three or four classes later.

If I start with one class and write tests for it, but later it becomes several classes, I will usually not write separate tests for the other classes: they are implementation details of the unit under test. This way I let my design grow, and my tests are not so fragile.

I used to think exactly the way CrisW argues in this question, that testing at higher levels would be better, but after getting some more experience my view has softened to something in between that and "every class should have a test class". Every unit should have tests, but I choose to define my units slightly differently from how I once did. They may be the "components" CrisW talks about, but very often a unit is also just a single class.

Also, functional tests may be good enough to prove that your system does what it is supposed to do, but if you want to drive your design with examples/tests (TDD/BDD), lower-level tests are a natural consequence. You could throw those low-level tests away once the implementation is finished, but that would be a waste: the tests are a positive side effect. If you decide to do radical refactorings that invalidate your low-level tests, then you throw them away and write new ones.

Separating the goal of testing/validating your software from the goal of using tests/examples to drive your design/implementation makes this discussion much clearer.

Update: Also, there are two ways of doing TDD: outside-in and inside-out. BDD promotes outside-in, which leads to higher-level tests/specifications. If you start with the details instead, you will write detailed tests for all the classes.

+1
Aug 15 '09 at 9:42

I agree with most of the posts here; however, I would add the following:

There is an order of priority: test the public interfaces first, then the protected ones, then the private ones.

Typically the public and protected interfaces are a summary of a combination of the private and protected interfaces.

Personally: you should test everything. Given a strong test suite for the smaller functions, you will have much more confidence that the hidden methods work. I also agree with another poster's comment about refactoring. Code coverage helps you determine where the extra bits of code are, and to refactor them out if necessary.

+1
Oct 08 '09 at 23:26

I personally test protected parts as well, because they are "public" to inheriting types ...

0
May 13 '09 at 5:06

I agree that code coverage should ideally be 100%. That does not necessarily mean 60 lines of code would have 60 lines of test code, but that every execution path gets tested. The only thing more annoying than a bug is a bug that hasn't been run yet.

By testing only the public API you run the risk of not exercising all the cases in the underlying internal classes. I really am stating the obvious by saying that, but I think it needs to be mentioned. The more each behaviour is tested, the easier it is to recognise not only that something is broken, but what is broken.
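
A short sketch of the point about paths: the limit-exceeded branch below is the kind of case that a black-box run through a coarse public API can easily miss, while a test aimed directly at the class forces both paths. All names are invented for the example.

    // Invented example: one test per execution path of a small class.
    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class ThrottleTest {

        static final class Throttle {
            private final int limit;
            private int used;

            Throttle(int limit) { this.limit = limit; }

            /** Allows calls until the limit is reached, then refuses them. */
            boolean tryAcquire() {
                if (used >= limit) {
                    return false;        // the path a "normal" run may never exercise
                }
                used++;
                return true;
            }
        }

        @Test
        public void allowsCallsUpToTheLimit() {
            Throttle throttle = new Throttle(2);
            assertTrue(throttle.tryAcquire());
            assertTrue(throttle.tryAcquire());
        }

        @Test
        public void refusesCallsBeyondTheLimit() {
            Throttle throttle = new Throttle(1);
            assertTrue(throttle.tryAcquire());
            assertFalse(throttle.tryAcquire()); // exercises the refusal branch explicitly
        }
    }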

0
May 13 '09 at 5:25

I test the private implementation details as well as the public interfaces. If I change an implementation detail and the new version has a bug, this gives me a better idea of where the error actually is, not just what its effect is.

0
May 13 '09 at 5:49 a.m.

[Answering my own question]

Perhaps one of the variables that matters a lot is how many different programmers there are working on the code:

  • Axiom: each programmer must test their own code

  • Therefore: if a programmer writes and delivers a single "unit", then they should also have tested that unit, quite possibly by writing a "unit test"

  • Corollary: if one programmer writes a whole package, then it is enough for that programmer to write functional tests of the whole package (there is no need to write tests of the "units" within the package, since those units are implementation details to which other programmers have no direct access/exposure)

Similarly, there is the practice of building "mock" versions of components, which you can test against:

  • If you have two teams building two components, each may need to "mock" the other team's component, so that they have something (the mock) against which to test their own component, before their component is deemed ready for subsequent "integration testing", and before the other team has delivered the component against which yours can be tested.

  • If you are developing a whole system, then you can grow the whole system ... for example, develop a new GUI field, a new database field, a new business transaction, and one new system/functional test, all as part of one iteration, with no need to develop "mocks" of any layer (since you can test against the real thing instead).

0
May 13 '09 at 18:44

Axiom: each programmer must test their own code

I don't think this is universally true.

There is a well-known saying in cryptography: it is easy to create a cipher that you yourself do not know how to break.

In your typical development process you write some code, then compile and run it to check that it does what you think it does. Repeat this a few times and you feel quite confident about your code.

Your confidence will make you a less vigilant tester. Someone who does not share your experience with the code will not have that problem.

Also, a fresh pair of eyes may have fewer preconceptions, not only about the reliability of the code but about what the code does. As a consequence they may come up with test cases the code's author did not think of. One would expect them either to uncover more bugs, or to spread knowledge of what the code does a bit more around the organisation.

Furthermore, there is an argument that to be a good programmer you need to worry about edge cases, whereas to be a good tester you need to worry obsessively ;-) Also, testers may be cheaper, so it may be worth having a separate test team for that reason.

I think the underlying question is this: which methodology is best at finding bugs in software? I recently watched a video (no link, sorry) arguing that randomised testing is cheaper than, and as effective as, tests designed by humans.

0
May 26 '09 at 3:04

It depends on your design and on where the greatest value lies. One kind of application may demand a different approach from another. Sometimes you barely catch anything interesting with unit tests, while functional/integration tests keep yielding surprises. Sometimes unit tests fail hundreds of times during development, catching many, many bugs in the making.

Sometimes it is trivial. The way some classes hang together makes the return on investment of testing every path less enticing, so you may just draw a line and move on to hammering at something more important/complex/heavily used.

Sometimes it is not enough to just test the public API, because some particularly interesting logic lurks within, and it is overly painful to set the system in motion and exercise those particular paths. That is when testing the guts of it pays off.

These days I tend to write numerous, often extremely simple, classes that do one or two things at most. I then implement the needed behaviour by delegating all the complicated functionality to those inner classes. That is, I have slightly more complex interactions, but really simple classes. (A small sketch of this style appears at the end of this answer.)

If I change my implementation and have to refactor some of those classes, I usually don't care. I do my best to keep my tests insulated from each other, so often it is a simple change to make them work again. However, if I do have to throw away some of the inner classes, I often replace a few classes and write entirely new tests instead. I often hear people complaining about having to keep tests up to date after refactoring, and while it is sometimes inevitable and tiresome, if the level of granularity is fine enough it is usually not a big deal to throw away some code plus its tests.

I feel this is one of the major differences between designing for testability and not bothering.
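
As a rough sketch of the style described above (a thin coordinator delegating to very simple classes), consider the following; all names are invented and the arithmetic is deliberately trivial.

    // Invented example: a thin coordinator delegating to very simple classes,
    // each of which is trivial to test in isolation.
    public class ReceiptPrinter {

        static final class TaxCalculator {
            long taxInCents(long netCents) { return Math.round(netCents * 0.2); }
        }

        static final class CurrencyFormatter {
            String format(long cents) {
                return String.format("%d.%02d", cents / 100, cents % 100);
            }
        }

        private final TaxCalculator tax = new TaxCalculator();
        private final CurrencyFormatter currency = new CurrencyFormatter();

        /** The coordinator only wires the simple pieces together. */
        String totalLine(long netCents) {
            long gross = netCents + tax.taxInCents(netCents);
            return "TOTAL " + currency.format(gross);
        }
    }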

0
Oct 09 '09 at 0:05

Are you still following this approach? I also believe this is the right approach. You should only test public interfaces. Now, a public interface could be a service, or some component that accepts input from some kind of UI or any other source.

But you should be able to evolve the public service or component using a test-first approach: define a public interface and test it for its basic functionality; it will fail; implement that basic functionality using the APIs of the classes behind it; write only enough API to satisfy that test; then keep asking what more the service can do, and evolve it.

The only balancing decision to make is about splitting one large service or component into several smaller services and components that can be reused. If you strongly believe a component can be reused across projects, then automated tests should be written for that component. But again, the tests written for the large service or component will duplicate functionality already tested in that component.

Some people may get into a theoretical discussion about how this is not unit testing. That's all fine. The basic idea is to have automated tests that test your software. So what if it is not at the unit level? If it covers integration with the database (which you control), then so much the better.

Let me know if you have developed a good process that works for you ... since your original post.

Regards ameet

0
Jul 27 '10 at 8:00


