
Introduction to Testing with xUnit and NSubstitute

Assumptions

  • Moderate experience with modern C# and .NET
  • No experience writing unit tests
  • Visual Studio 2022

Introduction

I don't like writing tests. There I said it.

I tend to leave them until the end of a task - a habit I really need to get out of - because once the work is done I just want to ship my code: the actual functionality, which I know works just fine because I've (hopefully) done extensive manual testing. At that point I just want to raise a PR and get it merged, and the last thing I want to do is spend hours writing a bunch more code to prove it works.

In more extreme cases - where the solution you're working on doesn't have any tests at all, where coverage is poor because tests have only been added sporadically, or, even worse, where the codebase hasn't been written in a way that allows proper testing - tests can be awkward to set up and time-consuming to bring up to a reasonable standard (e.g. >80% coverage).

However, I appreciate their value and understand their importance, especially in logically-complex solutions, or applications with limited margins for error, such as in the medical, financial and security sectors. You'll also be glad for the presence of tests when it comes to extending or refactoring, as this ensures the existing code still works as intended.

It's good to get into the habit of writing tests for every piece of functional code. If you work as part of a team using a source control platform such as GitHub, Bitbucket etc., it's common practice to include them as part of every pull request. Coverage doesn't need to be 100% - in many cases that's not even possible - but a minimum standard should be set and adhered to where possible.

There are various tools which can be attached to your build process to summarise test coverage alongside the build details when a PR is raised. There are also extensions which can be added to your IDE (e.g. Fine Code Coverage) which highlight sections of code so you can quickly see which parts still need testing.

This article will guide you through the basic steps to getting started with writing unit and integration tests, using the popular xUnit and NSubstitute packages. We will be writing tests for a web API controller, using the example WeatherForecastController which comes bundled with most Visual Studio templates.

Set up the test project

Since tests form no part of an app's shipped functionality, they should be created in a separate project so they remain completely isolated from the application code. If your solution is formed of multiple projects, there should be a test project for each, and the only references between the projects should be from each test project to the project it tests.

A common naming convention in .NET projects is the target project's name suffixed with ".Tests". For example, if your solution has two projects which need testing, called HelloWorld.Web and HelloWorld.Api, you would have two test projects called HelloWorld.Web.Tests and HelloWorld.Api.Tests respectively.

Visual Studio has templates which already include xUnit, so add a new project (right click your solution, Add > New Project...), choose the "xUnit Test Project" template, and select the target framework which matches the project you'll be testing.

xUnit test project template in Visual Studio

Alternatively, you could also create a project from the command line:

cd <your solution directory>
mkdir HelloWorld.Web.Tests
cd HelloWorld.Web.Tests
dotnet new xunit
cd ..
dotnet sln add .\HelloWorld.Web.Tests\

If you don't have this template option, or if you're not using Visual Studio, simply create a new project as you would normally, and add the "xunit" NuGet package. If you are using Visual Studio, you'll also need to add "xunit.runner.visualstudio". In either situation, you will also need to add the "NSubstitute" package to your test project.
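
If you're adding the packages from the command line instead, it might look something like this (run from the solution directory; the paths assume the HelloWorld example above, so adjust them to match your own projects):

cd .\HelloWorld.Web.Tests\
dotnet add package xunit
dotnet add package xunit.runner.visualstudio
dotnet add package NSubstitute
dotnet add reference ..\HelloWorld.Web\HelloWorld.Web.csproj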

The controller we will be testing has a list of strings to describe the weather, an ILogger dependency, and a single Get endpoint which returns randomised data.

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<WeatherForecast> Get()
    {
        return Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        })
        .ToArray();
    }
}

Add your first test class

The template will create a default class file for you.

public class UnitTest1
{
    [Fact]
    public void Test1()
    {

    }
}

Rename the class and filename to something suitable for your test cases. A common convention is the name of the class you're testing, suffixed with "Tests" - in this case, WeatherForecastControllerTests.

Open the Test Explorer window (View > Test Explorer), and run the test to ensure it's working correctly. An empty test case will always pass.

Test Explorer window in Visual Studio

Set up the class

Since we are testing the WeatherForecastController, we will need to reference the main project and create an instance of this class to allow us to call the endpoints programmatically. Add a private field and a constructor to instantiate our "subject under test" (_sut).

private readonly WeatherForecastController _sut;

public WeatherForecastControllerTests()
{
    _sut = new WeatherForecastController();
}

This will generate a compilation error as the controller is missing the ILogger dependency.

As our unit tests will be focusing on the controller, we don't want to have to deal with any additional logic or dependencies outside of the controller. As such, we will replace the dependency with a "substitute" using NSubstitute.

private readonly ILogger<WeatherForecastController> _logger; // 👈
private readonly WeatherForecastController _sut;

public WeatherForecastControllerTests()
{
    _logger = Substitute.For<ILogger<WeatherForecastController>>(); // 👈
    _sut = new WeatherForecastController(_logger);
}

Write the test cases

The example controller uses random values, so a few properties will produce variable results. Since these random values fall within pre-defined ranges, we can use that information to create test cases which ensure those ranges aren't exceeded.

Assuming a typical implementation of this application, where data is being fetched from an external API, the Get method should return five days' worth of data - no more, no less - and all of the dates should be in the future. If the response diverges from these rules, something has gone wrong, either with the request in our code or with the external provider.

A common structure when writing unit tests is the "Arrange, Act, Assert" pattern:

  • Arrange: set up any variables or dependencies needed for the subject under test
  • Act: call the method of the subject under test
  • Assert: confirm that the actual results of the call match the expected results

It's fairly standard practice to write the steps out as comments in each test. This isn't strictly necessary, but it helps you quickly see which parts of the code relate to which step.

[Fact]
public void Test1()
{
    // Arrange

    // Act

    // Assert
}

All test cases should be annotated with the [Fact] or [Theory] attributes in order for them to be picked up by the xUnit test runner. We'll start with facts, then look at how to use theories for more dynamic test scenarios.

Test 1: Correct number of days returned

Let's start off simple. We know that the method should return a list of five items, so let's write a test which will flag any deviations.

Rename the initial test case to something more descriptive. I typically opt for a popular convention which is structured like "MethodName_Circumstance_ExpectedResult", for example Get_FiveDayForecast_ReturnsFiveItems.

There's nothing to arrange in this test, so we just need to call the Get method under "Act", and confirm that five items are returned under "Assert".

[Fact]
public void Get_FiveDayForecast_ReturnsFiveItems()
{
    // Act
    var result = _sut.Get();

    // Assert
    Assert.Equal(5, result.Count());
}

Run the test and you should see that the condition stated in the assertion resolves to true, and the test passes.

To check that this test will fail, change the Enumerable.Range(1, 5) in the controller to Enumerable.Range(1, 10). Run the test again, and it will fail because the result has fallen outside the bounds of the expected behaviour.

Test failure message showing expected result and actual result

Test 2: Dates are within expected range

Looking at the controller code, we can see that the returned dates should start from tomorrow and run for five consecutive days into the future.

For this test case, we will make use of the "Arrange" step to store a fixed DateTime value, and define the first and last dates we expect to see in the results.

Although unlikely to cause problems with this simple test, testing with DateTime can be awkward since the value literally changes every nanosecond, so if your tests make use of the time portion of the value, you're more likely to encounter mismatches between the start and end of a test run.

One way to avoid some of these problems is to store Now in a variable once and test against that value, rather than calling Now again elsewhere in the test code.

Another issue you may encounter is comparing hard-coded DateTime strings (for example in test data) against Now, which is localised to your machine's time-zone. These tests may periodically fail when run by colleagues or build tools based in different time-zones, so it's typically better to use the time-zone-agnostic UtcNow.
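
If date and time logic becomes more central to the code you're testing, one common approach - not used in this article's controller, and shown here only as an illustrative sketch - is to hide the clock behind a small interface so it can be substituted like any other dependency:

// A hypothetical clock abstraction; production code depends on IClock
// instead of calling DateTime.UtcNow directly.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// In a test, the clock can then be pinned to a fixed value:
// var clock = Substitute.For<IClock>();
// clock.UtcNow.Returns(new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc));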

[Fact]
public void Get_FiveDayForecast_ReturnsNextFiveDaysData()
{
    // Arrange
    var dateNow = DateTime.Now;
    var firstDay = dateNow.AddDays(1).Day;
    var lastDay = dateNow.AddDays(5).Day;

    // Act
    var result = _sut.Get();

    // Assert
    Assert.Equal(firstDay, result.First().Date.Day);
    Assert.Equal(lastDay, result.Last().Date.Day);
}

This one is also relatively simple. We define the start and end dates of the expected data in "Arrange", call the method in "Act", then in "Assert" confirm the first item in the list has tomorrow's date and the last has the date five days from now.

Test 3: Dependency is used as expected

Up to this point, we haven't used the ILogger which is injected into the controller and substituted in our test class, so let's add some logging to the controller method.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    _logger.LogInformation("Get method was called"); // 👈

    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = Summaries[Random.Shared.Next(Summaries.Length)]
    })
    .ToArray();
}

Now let's ensure that LogInformation has been called properly - it should receive a single call with the string value of "Get method was called".

[Fact]
public void Get_LogInformation_MessageLoggedCorrectly()
{
    // Act
    _sut.Get();

    // Assert
    _logger
        .Received(1)
        .LogInformation("Get method was called");
}

Test 4: Mock dependency behaviour with Returns

You may encounter scenarios where the class function you're testing calls off to a dependency, and waits for data to be returned. This data is then used by your function during the remainder of the process.

If a dependency's return value isn't mocked as part of a test, the call will still succeed, but it will return a default value (null for reference types), which may then cause the rest of the process to break and the test to fail.

Since the ILogger dependency doesn't return anything, let's move the controller functionality into a separate class to mimic a more realistic service which we can return values from.

public class WeatherForecastService : IWeatherForecastService
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };

    public Task<WeatherForecast[]> GetData(string districtCode)
    {
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        })
        .ToArray();

        return Task.FromResult(forecasts);
    }
}

public interface IWeatherForecastService
{
    Task<WeatherForecast[]> GetData(string districtCode);
}

This service will also need to be registered in the Startup/Program class

builder.Services.AddSingleton<IWeatherForecastService, WeatherForecastService>();

and injected into our WeatherForecastController

private readonly ILogger<WeatherForecastController> _logger;
private readonly IWeatherForecastService _weatherForecastService; // 👈

public WeatherForecastController(
    ILogger<WeatherForecastController> logger,
    IWeatherForecastService weatherForecastService) // 👈
{
    _logger = logger;
    _weatherForecastService = weatherForecastService; // 👈
}

and WeatherForecastControllerTests classes

private readonly ILogger<WeatherForecastController> _logger;
private readonly IWeatherForecastService _weatherForecastService; // 👈
private readonly WeatherForecastController _sut;

public WeatherForecastControllerTests()
{
    _logger = Substitute.For<ILogger<WeatherForecastController>>();
    _weatherForecastService = Substitute.For<IWeatherForecastService>(); // 👈
    _sut = new WeatherForecastController(_logger, _weatherForecastService);
}

and change the controller method to

[HttpGet]
public async Task<IEnumerable<WeatherForecast>> Get(string districtCode)
{
    if (string.IsNullOrWhiteSpace(districtCode))
    {
        throw new ArgumentNullException(nameof(districtCode));
    }

    _logger.LogInformation("Get method was called");
    return await _weatherForecastService.GetData(districtCode);
}

These changes will cause your tests to fail for a few reasons:

  1. The controller signature now expects a string argument
  2. The service returns an asynchronous Task rather than an array
  3. The service dependency in our tests isn't real, and therefore won't return any data.

The tests caught changes in the code which would be breaking changes in a real API, so they served their purpose well.

Let's adjust our tests.

  1. Add a string to each test method call for the districtCode parameter, e.g. _sut.Get("W1")
  2. Convert the test methods to use async-await

For the third adjustment, we will make use of some test data, along with NSubstitute's Returns method, to provide a mock response during test execution.

For the test data, just copy the forecast-generation code from the service into the test class as static fields.

private static readonly string[] Summaries = new[]
{
    "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};

private static readonly WeatherForecast[] WeatherForecasts
    = Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = Random.Shared.Next(-20, 55),
        Summary = Summaries[Random.Shared.Next(Summaries.Length)]
    })
    .ToArray();

For the Returns part, we'll use NSubstitute's fluent syntax to define what the dependency should return when it's called with specific arguments. Add the following to the "Arrange" step of each test.

// Arrange
_weatherForecastService
    .GetData("W1")
    .Returns(WeatherForecasts);

This instructs the substitute to return our WeatherForecasts object when called with the argument of "W1".
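
Putting these adjustments together, the first test from earlier might now look something like this (a sketch - the remaining tests follow the same pattern):

[Fact]
public async Task Get_FiveDayForecast_ReturnsFiveItems()
{
    // Arrange
    _weatherForecastService
        .GetData("W1")
        .Returns(WeatherForecasts);

    // Act
    var result = await _sut.Get("W1");

    // Assert
    Assert.Equal(5, result.Count());
}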

Run the tests again and they should all pass.

Test 5: Testing for exceptions

Our controller expects a string argument, and handles empty strings by throwing an exception. Let's add a test to check for this possibility.

[Fact]
public async Task Get_NoDistrictCode_ThrowsException()
{
    // Act & Assert
    await Assert.ThrowsAsync<ArgumentNullException>(
        async () => await _sut.Get(""));
}

If your exception also includes a specific message, this can be tested by saving the result to a variable and asserting equality.

// Act & Assert
var exception = await Assert.ThrowsAsync<ArgumentNullException>(
    async () => await _sut.Get(""));

Assert.Equal("Your exception message", exception.Message);

Similarly, you may also wish to test the parameter names referenced in the method under test, since changes to parameter names can in certain circumstances impact consumers. For example, if you're using named arguments, or writing an API endpoint where consumers submit JSON objects which need to match a defined schema.

// Act & Assert
var exception = await Assert.ThrowsAsync<ArgumentNullException>(
    async () => await _sut.Get(""));

Assert.Equal("Your exception message (Parameter 'yourParameterName')", exception.Message);
Assert.Equal("yourParameterName", exception.ParamName);

Test 6: Handle test case variations using Theory and MemberData

Let's now assume that the service we're interacting with only provides forecasts for certain regions. For this example we'll just add a simple field defining the supported regions, and provide feedback to the consumer when an unsupported district code is supplied.

// 👇
private static readonly string[] SupportedRegions = new[]
{
    "W1", "W3", "SW15", "NW10"
};
// 👆

[HttpGet]
public async Task<IEnumerable<WeatherForecast>> Get(string districtCode)
{
    if (string.IsNullOrWhiteSpace(districtCode))
    {
        throw new ArgumentNullException(nameof(districtCode));
    }

    _logger.LogInformation("Get method was called");

    // 👇
    if (!SupportedRegions.Contains(districtCode))
    {
        throw new NotSupportedException("No data available for " + districtCode);
    }
    // 👆

    return await _weatherForecastService.GetData(districtCode);
}

The Fact test case "Get_FiveDayForecast_ReturnsFiveItems" already ensures we're receiving five results when "W1" is provided as the districtCode, and with such a limited range of options it would be easy enough to write three more of these to match the other district codes. However, there's a simpler, DRYer way.

Replacing the [Fact] attribute with [Theory] lets you repeat a test case with any number of data variations supplied via [InlineData].

[Theory] // 👈
[InlineData("W1")] // 👈
[InlineData("W3")] // 👈
[InlineData("SW15")] // 👈
[InlineData("NW10")] // 👈
public async Task Get_FiveDayForecast_ReturnsFiveItems(string districtCode) // 👈
{
    // Arrange
    _weatherForecastService
        .GetData(districtCode) // 👈
        .Returns(WeatherForecasts);

    // Act
    var result = await _sut.Get(districtCode); // 👈

    // Assert
    Assert.Equal(5, result.Count());
}

The [MemberData] attribute functions in a similar manner, but allows you to abstract the data from your test methods and classes. The below shows the same test adjusted to use data from a TestData object which can be defined anywhere in the project.

// 👇
public static readonly IEnumerable<object[]> TestData = new[]
{
    new object[] { "W1" },
    new object[] { "W3" },
    new object[] { "SW15" },
    new object[] { "NW10" },
};
// 👆

[Theory]
[MemberData(nameof(TestData))] // 👈
public async Task Get_FiveDayForecast_ReturnsFiveItems(string districtCode)
{
    // Arrange
    _weatherForecastService
        .GetData(districtCode)
        .Returns(WeatherForecasts);

    // Act
    var result = await _sut.Get(districtCode);

    // Assert
    Assert.Equal(5, result.Count());
}

We've added four test cases to cover each of the values which should return an array of five items, declaring a parameter in the test method's signature and using it in each iteration of the test. This approach lets you run multiple tests from a single block of code.

Test Explorer showing four passing tests from a single test method - one run per row of data
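
If you'd rather keep the data out of the test class entirely, the [MemberData] attribute's MemberType property tells xUnit which class to read it from. Here's a minimal sketch, assuming a hypothetical WeatherTestData class:

public static class WeatherTestData
{
    public static readonly IEnumerable<object[]> DistrictCodes = new[]
    {
        new object[] { "W1" },
        new object[] { "W3" },
        new object[] { "SW15" },
        new object[] { "NW10" },
    };
}

// On the test method, reference both the member name and its declaring type:
// [Theory]
// [MemberData(nameof(WeatherTestData.DistrictCodes), MemberType = typeof(WeatherTestData))]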

We've tested the positive result of this call, but we should also ensure the correct exception is thrown when unsupported data is provided, and confirm that the message provided matches what's expected.

[Fact]
public async Task Get_UnrecognisedDistrictCode_ThrowsException()
{
    // Act & Assert
    var exception = await Assert.ThrowsAsync<NotSupportedException>(
        async () => await _sut.Get("test"));

    Assert.Equal("No data available for test", exception.Message);
}

This is similar to our previous exception assertion; however, in this case we're saving the result to a variable so that we can use the Message property in our comparison.

F.I.R.S.T. - The principles of writing clean tests

The originators of this principle, Tim Ottinger and Jeff Langr (The Pragmatic Programmers), define clean tests as those which follow five rules:

  • Fast: Tests should run quickly, and ideally in parallel. The issue here is a very human one; if tests run slowly, developers are less likely to run them.

  • Independent: Individual tests should not rely upon one another. Assuming they're running in parallel, the order in which tests execute is likely to change each time the suite is run, which means that if one test creates an object required by another test, you will find yourself with flaky tests which sometimes pass and sometimes fail.

  • Repeatable: Test suites should be environment-agnostic, meaning that they will run and behave in exactly the same manner wherever they are executed, whether this be in your local environment, your colleagues', or a test or production environment.

  • Self-validating: A test's result should always be either "pass" or "fail" (a green tick or a red cross). If a test requires external validation, for example a developer manually checking an output file, then the test will take longer to resolve, and is prone to human error.

  • Timely: The authors' intent for this rule implies a Test Driven Development (TDD) approach, whereby the test should be written in a timely fashion, just before the code which makes it pass, as this ensures that the production code is written in a way which can be tested effectively.

    An alternative rule for non-TDD fans, and one I'm personally more likely to adhere to, is that the tests should be written as soon as reasonably possible after the function being tested. This ensures that the function's behaviour (i.e. input, process, expected output) is still fresh in your mind when writing the test cases.

A sixth rule which has also been widely documented, and unfortunately ruins the acronym, is "Readable" or "Understandable" (which would make it FIRSTR/FIRSTU...🤮).

Much like any other well-written function in production code (see my other article: Developer Compendium: Functions), a test function should be small, and easy to read and comprehend. If you name your tests using the "MethodName_Circumstance_ExpectedResult" pattern mentioned earlier, and maintain a logical separation between Arrange/Act/Assert, then you're golden.

Organising your test project files

Separate

It's common practice, especially in .NET solutions, to organise tests by storing test classes in a separate project, then matching the structure of the project being tested so that source and test classes can be found easily.

For example, if you have a project called MyApp.Api, create a test project called MyApp.Api.Tests. Your API project may contain directories for Controllers, Services and Utilities; mirror these in the Tests project so the test classes for each of those namespaces are easy to find.

Basic test project structure in .NET API solution

Combined

An alternative, which is more common in JavaScript framework projects, is to keep the tests alongside the source code. For example, in a React project you may have a directory containing form components such as Button.tsx and TextInput.tsx, with the UI and logic tests right next to them in the same directory, typically named Button.tests.tsx and TextInput.tests.tsx.

Mocking database contexts [INCOMPLETE]

When you find yourself writing unit tests for a class which interacts with a database, it's a good time to start looking at ways to simulate database behaviour so that you can be confident that your code is handling data correctly.

In the following section, we'll add Entity Framework to the project with some basic database queries, along with some simple tests against an in-memory version of our application's DbContext class.
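
To give a flavour of what that will look like, here's a minimal sketch using the Microsoft.EntityFrameworkCore.InMemory provider and a hypothetical AppDbContext exposing a Forecasts set - neither of which exists in the solution we've built so far:

[Fact]
public async Task SavedForecasts_CanBeReadBack()
{
    // Arrange - a fresh, isolated in-memory database per test
    var options = new DbContextOptionsBuilder<AppDbContext>()
        .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString())
        .Options;

    await using var context = new AppDbContext(options);
    context.Forecasts.Add(new WeatherForecast
    {
        Date = DateTime.UtcNow.AddDays(1),
        TemperatureC = 20,
        Summary = "Mild"
    });
    await context.SaveChangesAsync();

    // Act
    var saved = await context.Forecasts.ToListAsync();

    // Assert
    Assert.Single(saved);
}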

...

Tips

  • Variables used for test data can be abstracted into static properties in the same file, or into an external test data class, to avoid duplication and keep test classes free of clutter
  • Use Arg.Any<T>() in your Returns statements if the arguments supplied aren't being tested or aren't important to the test
  • Test for nulls and default values as well as expected and unexpected values to ensure as many bases as possible are covered
  • Use Assert.True/Assert.False rather than Assert.Equal when testing simple boolean conditions
  • When comparing object equality, you often only want to ensure the values are equal rather than the object references themselves; in these cases you can use Assert.Equivalent (both this and Arg.Any<T>() are sketched below)
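
For example, the Arg.Any<T>() and Assert.Equivalent tips might look like this in practice (a rough sketch reusing the substitutes and test data from earlier):

// Return the test data regardless of which district code is requested
_weatherForecastService
    .GetData(Arg.Any<string>())
    .Returns(WeatherForecasts);

// Compare by value rather than by reference
var expected = new WeatherForecast { TemperatureC = 20, Summary = "Mild" };
var actual = new WeatherForecast { TemperatureC = 20, Summary = "Mild" };
Assert.Equivalent(expected, actual);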

Tools: UUID/GUID generator

If you want to test with known GUIDs (i.e. rather than generating them on the fly in the test with Guid.NewGuid()), you may want to hard-code them into variables for comparison. You can use the below tool to generate UUIDs/GUIDs:


If you find yourself using it regularly, the generator tool can be bookmarked here.
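
A generated value can then be stored in the test class and reused across assertions - for example (the field name and GUID below are arbitrary):

// A known, pre-generated GUID to compare against in assertions
private static readonly Guid KnownCustomerId = Guid.Parse("3f2504e0-4f89-41d3-9a0c-0305e82c3301");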

Tools: JSON escape/unescape

When writing test cases, you may find yourself working with raw JSON strings which are littered with escape characters. The below tool can be used to escape or unescape these strings:


If you find yourself using it regularly, the tool can be bookmarked here.