An OO Application Technical Test Strategy

As the Test Pyramid tells us, unit and integration tests should make up the bulk of our test strategy. But where we draw the line between the two, how they complement each other and their relationship with our acceptance tests can be a source of confusion.

Building on my Hexagonal Architecture series of posts, I’ll walk through an example that outlines my usual strategy for technical tests by building the test configuration of a Ports and Adapters application.

Unit tests are purely technical

Unfortunately, there is no single universally accepted definition to describe a unit test. Personally, I use the phrase in relation to an OO codebase as follows:

A Unit Test tests a single class or small group of interrelated classes in isolation.

I appreciate that this definition doesn’t work for everyone and the appearance of the word isolation might not fit with your preferred approach.

Unit tests are always purely technical: they know nothing about how the classes under test contribute to application use cases. They focus on details of the codebase and, as such, they are written by developers, for developers.

Integration tests can have differing objectives

Again, there is no widely accepted industry standard when we try to define integration tests. Here’s what the term means to me:

An Integration Test exercises as much of the system as is possible and necessary in order to accurately simulate the application without compromising test speed and reliability.

I like to think of them as coming in two forms. The first form is purely technical – again, written by developers, for developers:

A Technical Integration Test is an integration test that verifies a technical detail.

However, here I will focus on those that are used to simulate an application use case and contribute to your acceptance tests:

An Integration Acceptance Test is an integration test that validates a customer requirement.

An example using hexagonal architecture

Org Chart is a small example solution I have created to demonstrate this approach and how it might apply to a hexagonal architecture. It consists of four projects:

  • OrgChart – The domain model.
  • ViewModels – A presentation view of the domain as per the MVVM pattern using MVVMLight libraries.
  • IntegrationTests – Acceptance tests for our application.
  • UnitTests – Unit tests for the view models and domain model.

The project contains no UI. Hopefully you can see that one could easily be added to turn this into a full-blown MVVM application such as a WPF or Xamarin app. This further highlights that in a hexagonal architecture, our core application has no concept of how it is being used. In this example, only the Integration Test configuration exists.


The system consists of two ports: the primary port that drives the application through the view models, and IOrgRepository – a persistence port. We create the integration test configuration by writing tests that execute through the view models and by registering an OrgRepositoryStub with our IoC container.
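As an illustration, the persistence port and its stub might look something like the sketch below. The members of IOrgRepository, the stub's behaviour and the SimpleIoc registration are my assumptions for the sake of the example, not the actual Org Chart source:

```csharp
using System.Collections.Generic;

// Hypothetical shape of the persistence port; the real interface may differ.
public interface IOrgRepository
{
    IReadOnlyList<string> LoadEmployeeNames();
    void SaveEmployeeName(string name);
}

// In-memory stand-in registered by the integration test configuration.
public class OrgRepositoryStub : IOrgRepository
{
    private readonly List<string> _names = new List<string> { "Mr Ceo" };

    public IReadOnlyList<string> LoadEmployeeNames() => _names;

    public void SaveEmployeeName(string name) => _names.Add(name);
}

// Test setup then swaps the stub in for the real adapter, e.g. with
// MVVM Light's SimpleIoc container:
//   SimpleIoc.Default.Register<IOrgRepository, OrgRepositoryStub>();
```

Because the core application only ever sees the IOrgRepository abstraction, neither it nor the view models need to know whether a stub or a real database sits behind the port.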

The model

The OrgChart project models different roles in an organisation:


The organisation methods use a visitor pattern to separate the organisation data structure from the operations we want to perform on that data:
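A rough sketch of that separation is below. This is a hypothetical reconstruction rather than the actual Org Chart source; only HeadOfOrg and AllEmployeesVisitor are names taken from the project, and the other types and members are my assumptions:

```csharp
using System.Collections.Generic;

// The operation side of the visitor pattern.
public interface IOrgVisitor
{
    void Visit(Employee employee);
}

// The data-structure side: organisation nodes know how to accept a
// visitor but know nothing about what any particular visitor does.
public abstract class Employee
{
    protected Employee(string name) => Name = name;

    public string Name { get; }

    // Double dispatch: each node hands itself to the visitor, so new
    // operations can be added without touching the data structure.
    public abstract void Accept(IOrgVisitor visitor);
}

public class HeadOfOrg : Employee
{
    public HeadOfOrg(string name) : base(name) { }

    public override void Accept(IOrgVisitor visitor) => visitor.Visit(this);
}

// One concrete operation: collect every employee encountered.
public class AllEmployeesVisitor : IOrgVisitor
{
    private readonly List<Employee> _employees = new List<Employee>();

    public void Visit(Employee employee) => _employees.Add(employee);

    public IReadOnlyList<Employee> GetEmployees() => _employees;
}
```

The pay-off is that an operation like "list all employees" lives entirely in its visitor, leaving the organisation classes as pure structure.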


An example unit test

The UnitTests project contains tests that thoroughly exercise the logic of a particular class or group of classes. Here, as an example, we verify that the AllEmployeesVisitor correctly includes the CEO in its employee list.

public void VisitorAddsCeosToTheEmployeeList()
{
    // Arrange
    var ceo = new HeadOfOrg("Mr Ceo");
    var visitor = new AllEmployeesVisitor();

    // Act
    ceo.Accept(visitor);

    // Assert
    var employees = visitor.GetEmployees();
    Assert.AreEqual(1, employees.Count);
    Assert.AreEqual(ceo, employees[0]);
}

These tests are coupled to the public interface of the class and the visitor pattern design. This means that while they will highlight regressions when we change the internal implementation, we would have to delete them if we decided to move away from this pattern.

Unit tests care about the fine detail: they verify the correctness of the code under test. They may use boundary checking and equivalence partitioning, and they might be written to target a particular level of code coverage. They could cover edge cases, check the default state of the class, or ensure that invalid input is handled and that exceptions are raised correctly. All of these things are in scope for unit tests.
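For instance, a default-state check on the same visitor might read as follows. This is a sketch in the style of the test above, assuming an NUnit-style Assert; the test name is my own:

```csharp
public void NewVisitorHasAnEmptyEmployeeList()
{
    // Arrange: a freshly constructed visitor that has visited nothing.
    var visitor = new AllEmployeesVisitor();

    // Assert: the default state is an empty list, not null.
    Assert.IsNotNull(visitor.GetEmployees());
    Assert.AreEqual(0, visitor.GetEmployees().Count);
}
```

Fine-grained checks like this would be noise at the acceptance level, but at the unit level they pin down the contract of a single class cheaply.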

A BDD use case

The ViewModel project contains an AddEmployeeViewModel class which exposes commands and data that would allow you to add new employees to the organisation.

Let’s imagine that I have a use case declared using gherkin syntax as follows:

    GIVEN I am on the Add Employee view
    WHEN I enter a new employee name
    AND I choose the employee's line manager
    AND I add the employee to the organisation
    THEN The new employee is added to the list of possible superiors

We can easily create an integration acceptance test to validate this behaviour:

public void WhenIAddAnEmployeeItIsAddedToTheListOfPossibleSuperiors()
{
    // GIVEN I am on the Add Employee view
    // WHEN I enter a new employee name
    var employeeName = "Dave Biggs";
    ViewModelLocator.AddEmployeeVm.NewEmployeeName = employeeName;

    // AND I choose the employee's line manager
    var boss = ViewModelLocator.AddEmployeeVm.PossibleSuperiors[0];
    ViewModelLocator.AddEmployeeVm.SelectedSuperior = boss; // property name assumed

    // AND I add the employee to the organisation
    ViewModelLocator.AddEmployeeVm.AddEmployeeCommand.Execute(null); // command name assumed

    // THEN The new employee is added to the list of possible superiors
    Assert.IsTrue(
        ViewModelLocator.AddEmployeeVm.PossibleSuperiors.Any(x => x.Name.Equals(employeeName)),
        $"Expected {employeeName} in possible superiors");
}

The important things to note here are:

  • The test interacts only with the view models, in the same way that a UI would.
  • Each step of the requirement is ideally implemented by a single line that closely mirrors the domain language.

Integration tests as a use case safety net

My initial definition says that an integration test exercises as much of the system as is possible and necessary, so unless you have a good reason to isolate the code under test at a particular boundary, don’t. Instead, hit a real database. Read from the file system. Call real production APIs over the network. Have high standards for the execution speed and reliability of your test cases, and then push them to work as hard as they can within those boundaries. I stubbed at the persistence level here purely to keep the example as simple as possible.

The tests have no knowledge of any of the internal details which makes them excellent safety nets for larger refactorings. For example, if we were to replace the visitor pattern with an alternative implementation, tests at this level would give us immediate feedback on whether this use case still works.

Or would they? We are not actually testing through our application UI or our real persistence mechanism. So how good a test is this?

I’ll answer that question with a look at risk-based testing in a later post.

