First TDD project, a simple two-tier C# application - what do I unit test?

This is probably a stupid question, but my searches have not turned up a satisfactory answer. I am starting a small project in C# that has only a business layer and a data access layer - oddly, the user interface will come later, and I have very little (read: no) idea of, or control over, what it will look like.

I would like to try TDD for this project. I am using Visual Studio 2008 (soon to be 2010), and I have ReSharper 5 and NUnit.

To be clear, I want to do Test-Driven Development, not necessarily the whole XP methodology. My question is: when and where do I write the first unit test?

Do I only write tests for logic before writing it, or do I test everything? It seems counterproductive to test things that have no reason to fail (auto-properties, empty constructors)... but the "no new code without a failing test" maxim seems to require it.

Links are welcome (preferably online resources rather than books - I would like to get started as soon as possible).

Thank you in advance for any guidance!

+4
4 answers

It seems counterproductive to test things that have no reason to fail (auto-properties, empty constructors)...

Exactly. There is no logic in an empty constructor or in auto-properties to test.

When do I write the first unit test?

Before you write the first line of production code. Think about the behavior you want your method to have. Write your test based on that desired behavior. This test (and all the tests that follow it) embodies your program's specification.
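As a sketch of what such a behavior-first test might look like with NUnit (the `PriceCalculator` class and its discount rule are invented for illustration; in strict TDD you would write the test class before `PriceCalculator` exists, so it would not even compile at first):

```csharp
using NUnit.Framework;

// Production code, written only after the test below was in place.
public class PriceCalculator
{
    public decimal Total(decimal orderAmount)
    {
        // Desired behavior: orders over 100 get a 10% discount.
        return orderAmount > 100m ? orderAmount * 0.9m : orderAmount;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // This test was written first, from the desired behavior,
    // not from existing code - it is the specification.
    [Test]
    public void OrdersOver100GetTenPercentDiscount()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(135m, calculator.Total(150m));
    }

    [Test]
    public void SmallOrdersPayFullPrice()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(80m, calculator.Total(80m));
    }
}
```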

Where do I write the first unit test?

In the first test class you create.

This is probably the best online resource:

Introduction to Test-Driven Development (TDD)
http://www.agiledata.org/essays/tdd.html

+4

There is a shift in thinking that you must embrace with TDD. If you are not doing TDD, you usually write the code first and then write a unit test to make sure the code does what you expect and handles a few corner cases. With TDD, you instead start by writing a test that exercises classes and methods that do not exist yet.

Once you have written your test and are satisfied that it is a good example of how your code should be used, you start writing the actual classes and methods to make the test pass.
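A sketch of that cycle, using an invented `UsernameValidator` example (step 1 is the test, written first; step 2 is the minimal code that makes it pass):

```csharp
using NUnit.Framework;

[TestFixture]
public class UsernameValidatorTests
{
    // Step 1: this test is written first. At that moment
    // UsernameValidator does not exist, so the project does
    // not even compile - the "red" state.
    [Test]
    public void RejectsNamesShorterThanThreeCharacters()
    {
        var validator = new UsernameValidator();
        Assert.IsFalse(validator.IsValid("ab"));
        Assert.IsTrue(validator.IsValid("abc"));
    }
}

// Step 2: the simplest implementation that makes the test
// pass - the "green" state. Refactor afterwards if needed.
public class UsernameValidator
{
    public bool IsValid(string username)
    {
        return username != null && username.Length >= 3;
    }
}
```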

This is awkward at first, because you will not have the help of IntelliSense and your code will not even compile until you implement the production code. But by writing the test first, you are forced to think about how your code will be used before you write it.

+2

It seems counterproductive to test things that have no reason to fail (auto-properties, empty constructors)...

This may seem counterproductive, but if you have code that depends on the default state of your newly constructed objects, then that state is worth testing. Someone may come along and change the default values of those fields, and your code will break.
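For example, assuming a hypothetical `Order` class whose clients rely on a new order starting out empty and unshipped, a test that pins down that default state might look like this (all names here are invented for illustration):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class Order
{
    public bool IsShipped { get; set; }
    public List<string> Items { get; private set; }

    public Order()
    {
        // Clients depend on these defaults.
        IsShipped = false;
        Items = new List<string>();
    }
}

[TestFixture]
public class OrderDefaultsTests
{
    // If someone later changes the default state of a freshly
    // constructed Order, this test fails before any client code does.
    [Test]
    public void NewOrderIsEmptyAndNotShipped()
    {
        var order = new Order();
        Assert.IsFalse(order.IsShipped);
        Assert.IsEmpty(order.Items);
    }
}
```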

It may help to remember that your tests exist not only to find things that fail, but also to verify that your code meets its declared contract. You could argue that you don't need to bother - if the contract breaks, the code that depends on it will break too. That is true, but it produces tests that fail for "remote" problems. Ideally, a problem in a class should first cause a failure in that class's own unit tests, not in the unit tests of its clients.

Doing TDD without any requirements or design to work from is difficult. With traditional coding, where you sit down and hack out something that does what you want, the requirements evolve along with the code - they almost emerge from the code as you dig deeper and learn in more detail what you need. With TDD, however, the tests are the embodiment of the requirements, so you must have those up front, crystal clear in your mind. If you are starting from a blank sheet, you will need to keep iterating between requirements analysis, test design, and code design.

+2

One way to bootstrap TDD is to write an integration test first, i.e. before any unit tests. This integration test aims to prove that, in the end, your application works as expected.

Obviously, the application has not been written yet, so your initial integration test cannot check very much. For example, suppose you are writing a data-crunching program that must parse a flat file and produce a summary report. A simple end-to-end test would invoke the program and then verify that the report file was created in the expected location. This test fails, because your application does nothing yet.

Then you write a minimal version of your application that satisfies the simple integration test. For example, the application starts up and writes the header of the summary report to a file. At this stage, your application is what is sometimes called a walking skeleton - the thinnest possible slice of realistic functionality.
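A minimal sketch of such an end-to-end test and its walking-skeleton implementation (`ReportGenerator`, the file names, and the header text are all invented for illustration):

```csharp
using System;
using System.IO;
using NUnit.Framework;

// Walking skeleton: the thinnest slice of real functionality.
// For now the "application" only writes the report header.
public class ReportGenerator
{
    public void Run(string inputPath, string outputPath)
    {
        // Parsing the flat file at inputPath comes later;
        // the skeleton just produces the report file.
        File.WriteAllText(outputPath, "Summary Report" + Environment.NewLine);
    }
}

[TestFixture]
public class EndToEndTests
{
    // The first integration test: run the program and confirm
    // the report appears in the expected location.
    [Test]
    public void RunProducesReportFileInExpectedLocation()
    {
        string output = Path.Combine(Path.GetTempPath(), "summary.txt");
        File.Delete(output);

        new ReportGenerator().Run("input.txt", output);

        Assert.IsTrue(File.Exists(output));
        StringAssert.StartsWith("Summary Report", File.ReadAllText(output));
    }
}
```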

From there, you add flesh to the skeleton - writing, of course, a test before each new bit of functionality. Once you get going, the question of what to test first becomes much more tractable, and many of your new tests will be unit tests rather than integration tests.

+1

Source: https://habr.com/ru/post/1310575/

