TDD Workflow Best Practices for .NET Using NUnit

UPDATE: I made major changes to this post - check the changelog for details.

I am starting to dive into TDD with NUnit, and although I have enjoyed checking out some of the resources I found here on Stack Overflow, I often struggle to get traction.

What I'm really trying to find is some kind of checklist / workflow (and this is where I need your help) or "test plan" that will give me decent code coverage.

So, let's assume an ideal scenario in which we can start a project from scratch, say with a Mailer helper class containing the following code:

(I created the class just to have a code sample to work with, so any criticism or advice about it is encouraged and very welcome.)

Mailer.cs

```csharp
using System;
using System.Net.Mail;

namespace Dotnet.Samples.NUnit
{
    public class Mailer
    {
        readonly string from;
        public string From { get { return from; } }

        readonly string to;
        public string To { get { return to; } }

        readonly string subject;
        public string Subject { get { return subject; } }

        readonly string cc;
        public string Cc { get { return cc; } }

        readonly string bcc;
        public string Bcc { get { return bcc; } }

        readonly string body;
        public string Body { get { return body; } }

        readonly string smtpHost;
        public string SmtpHost { get { return smtpHost; } }

        readonly string attachment;
        // The getter must return the backing field; returning the
        // property itself would recurse until the stack overflows.
        public string Attachment { get { return attachment; } }

        public Mailer(string from = null, string to = null, string body = null,
            string subject = null, string cc = null, string bcc = null,
            string smtpHost = "localhost", string attachment = null)
        {
            this.from = from;
            this.to = to;
            this.subject = subject;
            this.body = body;
            this.cc = cc;
            this.bcc = bcc;
            this.smtpHost = smtpHost;
            this.attachment = attachment;
        }

        public void SendMail()
        {
            // ArgumentNullException(string, string) takes the parameter
            // name first, then the message.
            if (string.IsNullOrEmpty(From))
                throw new ArgumentNullException("from",
                    "Sender e-mail address cannot be null or empty.");

            using (SmtpClient smtp = new SmtpClient(SmtpHost))
            using (MailMessage mail = new MailMessage())
            {
                smtp.Send(mail);
            }
        }
    }
}
```

MailerTests.cs

```csharp
using System;
using NUnit.Framework;
using FluentAssertions;

namespace Dotnet.Samples.NUnit
{
    [TestFixture]
    public class MailerTests
    {
        [Test, Ignore("No longer needed as the required code to pass has already been implemented.")]
        public void SendMail_FromArgumentIsNotNullOrEmpty_ReturnsTrue()
        {
            // Arrange
            string argument = null;

            // Act
            Mailer mailer = new Mailer(from: argument);

            // Assert
            Assert.IsNotNullOrEmpty(mailer.From, "Parameter cannot be null or empty.");
        }

        [Test]
        public void SendMail_FromArgumentIsNullOrEmpty_ThrowsException()
        {
            // Arrange
            Mailer mailer = new Mailer(from: null);

            // Act
            Action act = () => mailer.SendMail();

            // Assert
            // ShouldThrow invokes the delegate and verifies the exception,
            // so a separate Assert.Throws on the same delegate is redundant.
            act.ShouldThrow<ArgumentNullException>();
        }

        [Test]
        public void SendMail_FromArgumentIsOfTypeString_ReturnsTrue()
        {
            // Arrange
            string argument = String.Empty;

            // Act
            Mailer mailer = new Mailer(from: argument);

            // Assert
            mailer.From.Should().Be(argument, "Parameter should be of type string.");
        }

        // INFO: At this first 'iteration' I've almost covered the first argument of the method, so logically this sample is nowhere near complete.
        // TODO: Create a test that will eventually require the implementation of a method to validate a well-formed e-mail address.
        // TODO: Create as many tests as needed to give the remaining parameters good code coverage.
    }
}
```
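As a sketch of the first TODO above, the next "red" test could demand an address-validation helper. Everything here is an assumption, not part of the original sample: the method name `IsWellFormedAddress` is hypothetical, and the implementation simply leans on `System.Net.Mail.MailAddress` to reject malformed input.

```csharp
using System;
using System.Net.Mail;
using NUnit.Framework;

namespace Dotnet.Samples.NUnit
{
    [TestFixture]
    public class MailerAddressValidationTests
    {
        // Hypothetical next failing test: it only compiles once a
        // helper like the one below is added to Mailer.
        [TestCase("john.doe@example.com", true)]
        [TestCase("not-an-address", false)]
        [TestCase("", false)]
        public void IsWellFormedAddress_Input_ReturnsExpected(string address, bool expected)
        {
            Assert.AreEqual(expected, Mailer.IsWellFormedAddress(address));
        }
    }

    // One possible minimal implementation to make the test pass
    // (added to Mailer as a static helper; the name is an assumption):
    //
    // public static bool IsWellFormedAddress(string address)
    // {
    //     if (string.IsNullOrEmpty(address))
    //         return false;
    //     try
    //     {
    //         // MailAddress throws FormatException on malformed input.
    //         return new MailAddress(address).Address == address;
    //     }
    //     catch (FormatException)
    //     {
    //         return false;
    //     }
    // }
}
```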

So, after the first two failing tests, the next obvious step is to implement the functionality that makes them pass. But should I leave those tests as they are and create new ones after implementing the code that makes them pass, or should I modify the existing tests once they pass?

Any advice on this topic would really be greatly appreciated.

4 answers

I would advise you to pick up a tool such as NCover, which can hook into your test runs to produce code coverage statistics. There is also an NCover Community Edition if you do not want the licensed version.


If you install TestDriven.Net, one of its bundled components (NCover) will actually help you see how well your code is covered by unit tests.

Failing that, the best approach is to inspect each line and run each test to make sure you hit every line at least once.


If you use a framework such as NUnit, there are methods such as Assert.Throws with which you can assert that a method throws the expected exception for a given input: http://www.nunit.org/index.php?p=assertThrows&r=2.5
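For instance, applied to the Mailer class from the question (NUnit 2.5 syntax; the fixture and test names here are just illustrative):

```csharp
using System;
using NUnit.Framework;

namespace Dotnet.Samples.NUnit
{
    [TestFixture]
    public class MailerThrowsTests
    {
        [Test]
        public void SendMail_WithoutFrom_ThrowsArgumentNullException()
        {
            Mailer mailer = new Mailer(from: null);

            // Assert.Throws fails the test unless exactly this
            // exception type is thrown by the delegate.
            Assert.Throws<ArgumentNullException>(() => mailer.SendMail());
        }
    }
}
```

Assert.Throws also returns the caught exception, so you can additionally inspect its message if you need to.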

Basically, the best place to start is to test your expected behavior with both good and bad inputs.
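That idea maps naturally onto NUnit's parameterized TestCase attribute. A sketch, assuming the Mailer sample from the question (the expected outcomes follow its SendMail guard clause):

```csharp
using System;
using NUnit.Framework;

namespace Dotnet.Samples.NUnit
{
    [TestFixture]
    public class MailerInputTests
    {
        // Bad inputs: a missing sender must make SendMail throw.
        [TestCase(null)]
        [TestCase("")]
        public void SendMail_BadFromInput_Throws(string from)
        {
            Mailer mailer = new Mailer(from: from);
            Assert.Throws<ArgumentNullException>(() => mailer.SendMail());
        }

        // Good input: the constructor should simply store the value.
        [TestCase("john.doe@example.com")]
        public void Constructor_GoodFromInput_IsStored(string from)
        {
            Mailer mailer = new Mailer(from: from);
            Assert.AreEqual(from, mailer.From);
        }
    }
}
```

Each TestCase value produces its own test result in the runner, so one method covers a whole family of good or bad inputs.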


When people (finally!) decide to apply test coverage to an existing code base, it is impractical to test everything; you don't have the resources, and there is often not much real value in it.

What you ideally want to do is make sure your tests exercise the newly written / modified code and everything those changes can affect.

For this you need to know:

  • what code you changed. Your version control system will help you here, at the granularity of the files you edited.

  • what code is executed as a consequence of running the new code. For this you need either a static analyzer that can trace the downstream impact of the changed code (I don't know of many), or a test coverage tool that can show you what was executed when you ran your specific tests. Any such affected code probably needs to be re-tested, too.

Since you want to minimize the amount of test code you write, you clearly need better precision than file-level "modified". You can use a diff tool (often built into your version control system) to help focus on the specific lines. Diff tools don't really understand the structure of the code, so what they report tends to be line-oriented rather than structure-oriented, producing more differences than necessary; and they don't point you to a convenient entry point for testing, which is likely to be a method, because the whole unit-test style is focused on testing methods.

You can get better differencing tools. Our Smart Differencer tools report differences in terms of program structure (expressions, statements, methods) and abstract editing operations (insert, delete, copy, move, replace, rename), which makes interpreting the code changes easier. This does not directly answer the "which methods have changed?" question, but it often means examining much less material to make that determination.

You can also get test tools that answer this question. Our Test Coverage tools can compare previous test runs against the current code under test to tell you which tests need to be re-run. They do this by examining the code differences (somewhat like the Smart Differencer), but map the changes back to the method level.


Source: https://habr.com/ru/post/1347284/
