Effective Tests: Test Doubles

On May 16, 2011, in Uncategorized, by derekgreer
This entry is part 11 of 17 in the series Effective Tests

In our last installment, we concluded our Test-First example which demonstrated the Test-Driven Development process through the creation of a Tic-tac-toe component. When writing automated tests using either a Test-First or classic unit testing approach, it often becomes necessary to verify and/or exercise control over the interactions of a component with its collaborators. In this article, I’ll introduce a family of strategies for addressing these needs, known collectively as Test Doubles. The examples within this article will be presented using the Java programming language.

Doubles

The term “Test Double” was popularized by Gerard Meszaros in his book xUnit Test Patterns. Similar to the role of the “stunt double” in the movie industry in which a leading actor or actress is replaced in scenes requiring a more specialized level of training and/or physical ability, Test Doubles likewise play a substitute role in the orchestration of a System Under Test.

Test Doubles serve two primary roles within automated tests. First, they facilitate the ability to isolate portions of behavior being designed and/or tested from undesired influences of collaborating components. Second, they facilitate the ability to verify the collaboration of one component with another.

Isolating Behavior

There are two primary motivations for isolating the behavior being designed from influences of dependencies: Control and Feedback.

It is often necessary to exercise control over the behavior provided by dependencies of a System Under Test in order to effect a deterministic outcome or eliminate unwanted side-effects. When a real dependency can’t be adequately manipulated for these purposes, test doubles can provide control over how a dependency responds to consuming components.

A second motivation for isolating behavior is to aid in identifying the source of regressions within a system. By isolating a component completely from the behavior of its dependencies, the source of a failing test can more readily be identified when a regression of behavior is introduced.

Identifying Regression Sources

While test isolation aids in identifying the source of regressions, Extreme Programming (XP) offers an alternative process.

As discussed briefly in the series introduction, XP categorizes tests as Programmer Tests and Customer Tests rather than the categories of Unit, Integration or Acceptance Tests. One characteristic of Programmer Tests, which differs from classic unit testing, is the lack of emphasis on test isolation. Programmer tests are often written in the form of Component Tests which test subsystems within an application rather than designing/testing the individual units comprising the overall system. One issue this presents is a decreased ability to identify the source of a newly introduced regression based on a failing test due to the fact that the regression may have occurred in any one of the components exercised during the test. Another consequence of this approach is a potential increase in the number of tests which may fail due to a single regression being introduced. Since a single class may be used by multiple subsystems, a regression in behavior of a single class can potentially break the tests for every component which consumes that class.

The strategy used for identifying sources of regressions within a system when writing Programmer Tests is to rely upon knowledge of the last change made within the system. This becomes a non-issue when using emergent design strategies like Test-Driven Development since the addition or modification of behavior within a system tends to happen in very small steps. The XP practice of Pair-Programming also helps to mitigate such issues due to an increase in the number of participants during the design process. Practices such as Continuous Integration and associated check-in guidelines (e.g. The Check-in Dance) also help to mitigate issues with identifying sources of regression. Programmer Tests will be discussed in more depth later in the series.

 

Verifying Collaboration

To maximize maintainability, we should strive to keep our tests as decoupled from implementation details as possible. Unfortunately, the behavior of a component being designed can’t always be verified through the component’s public interface alone. In such cases, test doubles aid in verifying the indirect outputs of a System Under Test. By replacing a real dependency with one of several test double strategies, the interactions of a component with the double can be verified by the test.

 

Test Double Types

While a number of variations on test double patterns exist, the following presents the five primary types of test doubles: Stubs, Fakes, Dummies, Spies and Mocks.

Stubs

When writing specifications, the System Under Test often collaborates with dependencies which need to be supplied as part of the setup or interaction stages of a specification. In some cases, the verification of a component’s behavior depends upon providing specific indirect inputs which can’t be controlled by using real dependencies. Test doubles which serve as substitutes for controlling the indirect input to a System Under Test are known as Test Stubs.

The following example illustrates the use of a Test Stub within the context of a package shipment rate calculator specification. In this example, a feature is specified for a shipment application to allow customers to inquire about rates based upon a set of shipment details (e.g. weight, contents, desired delivery time, etc.) and a base rate structure (flat rate, delivery time-based rate, etc.).

In the following listing, a RateCalculator has a dependency upon an abstract BaseRateStructure implementation which is used to calculate the actual rate:

public class RateCalculator {

	private BaseRateStructure baseRateStructure;
	private ShipmentDetails shipmentDetails;

	public RateCalculator(BaseRateStructure baseRateStructure, ShipmentDetails shipmentDetails) {
		this.baseRateStructure = baseRateStructure;
		this.shipmentDetails = shipmentDetails;
	}

	public BigDecimal calculateRateFor(ShipmentDetails shipmentDetails) {
		BigDecimal rate =  baseRateStructure.calculateRateFor(shipmentDetails);

		// other processing ...
		
		return rate;
	}
}

The following shows the BaseRateStructure contract which defines a method that accepts shipment details and returns a rate:

public abstract class BaseRateStructure {
	public abstract BigDecimal calculateRateFor(ShipmentDetails shipmentDetails);
}

To ensure a deterministic outcome, the specification used to drive the feature’s development can substitute a BaseRateStructureStub which will always return the configured value:

public class RateCalculatorSpecifications {

	public static class when_calculating_a_shipment_rate extends ContextSpecification {

		static Reference<BigDecimal> rate = new Reference<BigDecimal>(BigDecimal.ZERO);
		static ShipmentDetails shipmentDetails;
		static RateCalculator calculator;

		Establish context = new Establish() {
			protected void execute() {
				shipmentDetails = new ShipmentDetails();
				BaseRateStructure baseRateStructureStub = new BaseRateStructureStub(10.0);
				calculator = new RateCalculator(baseRateStructureStub, shipmentDetails);
			}
		};

		Because of = new Because() {
			protected void execute() {
				rate.setValue(calculator.calculateRateFor(shipmentDetails));
			}
		};

		It should_return_the_expected_rate = assertThat(rate).isEqualTo(new BigDecimal(10.0));
	}
}

For this specification, the BaseRateStructureStub merely accepts a value as a constructor parameter and returns the value when the calculateRateFor() method is called:

public class BaseRateStructureStub extends BaseRateStructure {

	BigDecimal value;

	public BaseRateStructureStub(double value) {
		this.value = new BigDecimal(value);
	}

	public BigDecimal calculateRateFor(ShipmentDetails shipmentDetails) {
		return value;
	}
}

 

Fakes

While it isn’t always necessary to control the indirect inputs of collaborating dependencies to ensure a deterministic outcome, some real components may have other undesired side-effects which make their use prohibitive. For example, components which rely upon an external data store for persistence concerns can significantly impact the speed of a test suite, which tends to discourage frequent regression testing during development. In cases such as these, a lighter-weight version of the real dependency can be substituted which provides the behavior needed by the specification without the undesired side-effects. Test doubles which provide a simplified implementation of a real dependency for these purposes are referred to as Fakes.
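
A classic illustration of the data-store case is an in-memory repository. The following is a minimal sketch of such a Fake; the CustomerRepository contract and the Customer type with its getId() method are hypothetical and not part of the examples which follow:

import java.util.HashMap;
import java.util.Map;

public interface CustomerRepository {
	void save(Customer customer);
	Customer findById(String customerId);
}

public class InMemoryCustomerRepositoryFake implements CustomerRepository {

	// Substitutes a map for the external data store, preserving the
	// save/find behavior the specification relies upon without the
	// cost of a database round-trip.
	private final Map<String, Customer> customers = new HashMap<String, Customer>();

	public void save(Customer customer) {
		customers.put(customer.getId(), customer);
	}

	public Customer findById(String customerId) {
		return customers.get(customerId);
	}
}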

In the following example, a feature is specified for an application serving as a third-party distributor for the sale of tickets to a local community arts theatre to display the itemized commission amount on the receipt. The theatre provides a Web service which handles the payment processing and ticket distribution process, but does not provide a test environment for vendors to use for integration testing purposes. To test the third-party application’s behavior without incurring the side-effects of using the real Web service, a Fake service can be substituted in its place.

Consider that the theatre’s service interface is as follows:

public abstract class TheatreService {

	public abstract TheatreReceipt processOrder(TicketOrder ticketOrder);

	public abstract CancellationReceipt cancelOrder(int orderId);

	// other methods ...
}

To provide the expected behavior without the undesired side-effects, a fake version of the service can be implemented:

public class TheatreServiceFake extends TheatreService {

	// private field declarations used in light implementation ...
	
	public TheatreReceipt processOrder(TicketOrder ticketOrder) {

		// light implementation details ...

		TheatreReceipt receipt = createReceipt();
		return receipt;
	}

	public CancellationReceipt cancelOrder(int orderId) {

		// light implementation details ...

		CancellationReceipt receipt = createCancellationReceipt();
		return receipt;
	}

	// private methods …

}

The fake service may then be supplied to a PaymentProcessor class within the setup phase of the specification:

public class PaymentProcessorSpecifications {
	public static class when_processing_a_ticket_sale extends ContextSpecification {

		static Reference<BigDecimal> commission = new Reference<BigDecimal>(BigDecimal.ZERO);
		static PaymentProcessor processor;

		Establish context = new Establish() {
			protected void execute() {
				processor = new PaymentProcessor(new TheatreServiceFake());			
			}
		};

		Because of = new Because() {
			protected void execute() {
				commission.setValue(processor.processOrder(new Order(1)).getCommission());
			}
		};

		It should_return_a_receipt_with_itemized_commission =
				assertThat(commission).isEqualTo(new BigDecimal(1.00));
	}
}

 

Dummies

There are times when a dependency is required in order to instantiate the System Under Test, but isn’t required for the behavior being designed. If use of the real dependency is prohibitive in such a case, a Test Double with no behavior can be used. Test Doubles which serve only to provide mandatory instances of dependencies are referred to as Test Dummies.

The following example illustrates the use of a Test Dummy within the context of a specification for a ShipmentManifest class. The specification concerns verification of the class’ behavior when adding new packages, but no message exchange is conducted between the manifest and the package during execution of the addPackage() method.

public class ShipmentManifestSpecifications {
	public static class when_adding_packages_to_the_shipment_manifest extends ContextSpecification {

		static private ShipmentManifest manifest;

		Establish context = new Establish() {
			protected void execute() {
				manifest = new ShipmentManifest();
			}
		};

		Because of = new Because() {
			protected void execute() {
				manifest.addPackage(new DummyPackage());
			}
		};

		It should_update_the_total_package_count = new It() {
			protected void execute() {
				assert manifest.getPackageCount() == 1;
			}
		};	   
	}
}
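
The DummyPackage itself needs no real implementation. The following is a minimal sketch, assuming Package is an abstract type declaring a hypothetical getWeight() method (the actual shape of Package isn’t shown in this example):

import java.math.BigDecimal;

public class DummyPackage extends Package {

	// A dummy contributes no behavior of its own. If the System Under
	// Test ever invokes a method on it, failing loudly exposes the
	// unexpected interaction.
	public BigDecimal getWeight() {
		throw new UnsupportedOperationException("DummyPackage should never be exercised");
	}
}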

 

Test Spies

In some cases, a feature requires collaborative behavior between the System Under Test and its dependencies which can’t be verified through its public interface. One approach to verifying such behavior is to substitute the associated dependency with a test double which stores information about the messages received from the System Under Test. Test doubles which record information about the indirect outputs from the System Under Test for later verification by the specification are referred to as Test Spies.

In the following example, a feature is specified for an online car sales application to keep an audit trail of all car searches. This information will be used later to help inform purchases made at auction sales based upon which makes, models and price ranges are the most highly sought in the area.

The following listing contains the specification which installs the Test Spy during the context setup phase and examines the state of the Test Spy in the observation stage:

public class SearchServiceSpecifications {
	public static class when_a_customer_searches_for_an_automobile extends ContextSpecification {

		static AuditServiceSpy auditServiceSpy;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceSpy = new AuditServiceSpy();
				searchService = new SearchService(auditServiceSpy);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_report_the_search_to_the_audit_service = new It() {
			protected void execute() {
				assert auditServiceSpy.wasSearchCalledOnce() : "Expected the search to be recorded exactly once.";
			}
		};
	}
}

For this specification, the Test Spy is implemented to simply increment a private field each time the recordSearch() method is called, allowing the specification to then call the wasSearchCalledOnce() method in an observation to verify the expected behavior:

public class AuditServiceSpy extends AuditService {
	private int calls;

	public boolean wasSearchCalledOnce() {
		return calls == 1;
	}

	public void recordSearch(Search criteria) {
		calls++;
	}
}

 

Mocks

Another technique for verifying the interaction of a System Under Test with its dependencies is to create a test double which encapsulates the desired verification within the test double itself. Test Doubles which validate the interaction between a System Under Test and the test double are referred to as Mocks.

Mock validation falls into two categories: Mock Stories and Mock Observations.

Mock Stories

Mock Stories are a scripted set of expected interactions between the Mock and the System Under Test. Using this strategy, the exact set of interactions are accounted for within the Mock object. Upon executing the specification, any deviation from the script results in an exception.
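
Frameworks such as EasyMock and jMock follow this record/replay style. The following sketch scripts a story with EasyMock’s strict mock support; the OrderProcessor collaborator and its methods are hypothetical, and the exercise phase stands in for a real System Under Test which would receive the mock as a dependency:

import static org.easymock.EasyMock.createStrictMock;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;

public class MockStoryExample {

	// Hypothetical collaborator the story is scripted against.
	interface OrderProcessor {
		void reserveInventory(String orderId);
		void chargeCard(String orderId);
	}

	public static void main(String[] args) {
		// Record phase: script the exact interactions, in order.
		OrderProcessor processorMock = createStrictMock(OrderProcessor.class);
		processorMock.reserveInventory("1");
		processorMock.chargeCard("1");
		replay(processorMock);

		// Exercise phase: the scripted calls are replayed against the mock.
		processorMock.reserveInventory("1");
		processorMock.chargeCard("1");

		// Any deviation from the script (a wrong call, a wrong order, or
		// a missing call) results in an exception.
		verify(processorMock);
	}
}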

Mock Observations

Mock Observations are discrete verifications of individual interactions between the Mock and the System Under Test. Using this strategy, the interactions pertinent to the specification context are verified during the observation stage of the specification.

Mock Observations and Test Spies

The use of Mock Observations in practice looks very similar to the use of Test Spies. The distinction between the two is whether a method is called on the Mock to assert that a particular interaction occurred or whether state is retrieved from the Test Spy to assert that a particular interaction occurred.

To illustrate the concept of Mock objects, the following shows the previous example implemented using a Mock Observation instead of a Test Spy.

In the following listing, a second specification is added to the previous SearchServiceSpecifications class which replaces the use of the Test Spy with a Mock:

public class SearchServiceSpecifications {
	...   

	public static class when_a_customer_searches_for_an_automobile_2 extends ContextSpecification {
		static AuditServiceMock auditServiceMock;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceMock = new AuditServiceMock();
				searchService = new SearchService(auditServiceMock);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_report_the_search_to_the_audit_service = new It() {
			protected void execute() {
				auditServiceMock.verifySearchWasCalledOnce();
			}
		};
	}
}

The Mock implementation is similar to the Test Spy, but encapsulates the assert call within the verifySearchWasCalledOnce() method rather than returning the recorded state for the specification to assert:

public class AuditServiceMock extends AuditService {
	private int calls;

	public void verifySearchWasCalledOnce() {
		assert calls == 1;
	}

	public void recordSearch(Search criteria) {
		calls++;
	}
}

While both the Mock Observation and Mock Story approaches can be implemented using custom Mock classes, it is generally easier to leverage a Mocking Framework.

Mocking Frameworks

A Mocking Framework is a testing library written to facilitate the creation of Test Doubles with programmable expectations. Rather than writing a custom Mock object for each unique testing scenario, mocking frameworks allow the developer to specify the expected interactions within the context setup phase of the specification.

To illustrate the use of a Mocking Framework, the following listing presents the previous example implemented using the Java Mockito framework rather than a custom Mock object:

public class SearchServiceSpecifications {
	... 

	public static class when_a_customer_searches_for_an_automobile_3 extends ContextSpecification {
		static AuditService auditServiceMock;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceMock = mock(AuditService.class);
				searchService = new SearchService(auditServiceMock);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_report_the_search_to_the_audit_service = new It() {
			protected void execute() {
				verify(auditServiceMock).recordSearch(any(Search.class));
			}
		};
	}
}

In this example, the observation stage of the specification uses Mockito’s static verify() method to assert that the recordSearch() method was called with any instance of the Search class.

In many circumstances, messages are exchanged between a System Under Test and its dependencies. For this reason, Mock objects often need to return stub values when called by the System Under Test. As a consequence, most mocking frameworks can also be used to create Test Doubles whose only role is to serve as a Test Stub. Mocking frameworks which facilitate Mock Observations can also be used to easily create Test Dummies.
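
Using Mockito again as an example, the hand-rolled BaseRateStructureStub and DummyPackage from the earlier examples could be replaced with framework-generated doubles. The following is a sketch which assumes the types from those examples are on hand:

import java.math.BigDecimal;

import static org.mockito.Mockito.*;

public class MockitoDoubleExamples {

	public void createDoubles() {
		// Test Stub: configured only to supply indirect input to the
		// System Under Test.
		BaseRateStructure baseRateStructureStub = mock(BaseRateStructure.class);
		when(baseRateStructureStub.calculateRateFor(any(ShipmentDetails.class)))
				.thenReturn(new BigDecimal(10.0));
		RateCalculator calculator = new RateCalculator(baseRateStructureStub, new ShipmentDetails());

		// Test Dummy: an unconfigured mock which merely satisfies a
		// required parameter and is never expected to be exercised.
		ShipmentManifest manifest = new ShipmentManifest();
		manifest.addPackage(mock(Package.class));
	}
}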

Conclusion

In this article, the five primary types of Test Doubles were presented: Stubs, Fakes, Dummies, Spies, and Mocks. Next time, we’ll discuss strategies for using Test Doubles effectively.

Effective Tests: Double Strategies

On May 26, 2011, in Uncategorized, by derekgreer
This entry is part 12 of 17 in the series Effective Tests

In our last installment, the topic of Test Doubles was introduced as a mechanism for verifying and/or controlling the interactions of a component with its collaborators. In this article, we’ll consider a few recommendations for using Test Doubles effectively.

 

Recommendation 1: Clarify Intent

Apart from guiding the software implementation process and guarding the application’s current behavior against regression, executable specifications (i.e. Automated Tests) serve as the system’s documentation. While well-named specifications can serve to describe what the system should do, we should take equal care in clarifying the intent of how the system’s behavior is verified.

When using test doubles, one simple practice that helps to clarify the verification strategies employed by the specification is to use intention-revealing names for test double instances. Consider the following example which uses the Rhino Mocks framework for creating a Test Stub:

	public class when_a_user_views_the_product_detail
	{
		public const string ProductId = "1";
		static ProductDetail _results;
		static DisplayOrderDetailCommand _subject;

		Establish context = () =>
			{
				var productDetailRepositoryStub = MockRepository.GenerateStub<IProductDetailRepository>();
				productDetailRepositoryStub.Stub(x => x.GetProduct(Arg<string>.Is.Anything))
					.Return(new ProductDetail {NumberInStock = 42});

				_subject = new DisplayOrderDetailCommand(productDetailRepositoryStub);
			};

		Because of = () => _results = _subject.QueryProductDetails(ProductId);

		It should_display_the_number_of_items_currently_in_stock = () => _results.NumberInStock.ShouldEqual(42);
	}

 

In this example, a Test Stub is created for an IProductDetailRepository type which serves as a dependency for the System Under Test (i.e. the DisplayOrderDetailCommand type). By choosing to explicitly name the Test Double instance with a suffix of “Stub”, this specification communicates that the double serves only to provide indirect input to the System Under Test.

 

Note to Rhino Mocks and Machine.Specifications Users

For Rhino Mocks users, there are some additional hints in this example which help to indicate that the test double used by this specification is intended to serve as a Test Stub. This includes use of Rhino Mocks’ GenerateStub() method, the lack of “Record/Replay” artifacts from either the old or new mocking APIs and the absence of assertions on the generated test double. Additionally, those familiar with the Machine.Specifications framework (a.k.a. MSpec) would have an expectation of explicit and discrete observations if this were being used as a Mock or Test Spy. Nevertheless, we should strive to make the chosen verification strategy as discoverable as possible and not rely upon framework familiarity alone.

 

While this test also indicates that the test double is being used as a Stub by its use of the Rhino Mocks framework’s GenerateStub() method, Rhino Mocks doesn’t provide intention-revealing method names for each type of test double and some mocking frameworks don’t distinguish between the creation of mocks and stubs at all. Using intention-revealing names is a consistent practice that can be adopted regardless of the framework being used.

 

Recommendation 2: Only Substitute Your Types

Applications often make use of third-party libraries and frameworks. When designing an application which leverages such libraries, there’s often a temptation to substitute types within the framework. Rather than providing a test double for these dependencies, create an abstraction representing the required behavior and provide a test double for the abstraction instead.

There are several issues with providing test doubles for third party components:

First, it precludes any ability to adapt to feedback received from the specification. Since we don’t control the components contained within third-party libraries, coupling our design to these components limits our ability to guide our designs based upon our interaction with the system through our specifications.

Second, we don’t control when or how the API of third-party libraries may change in future releases. We can exercise some control over when we choose to upgrade to a newer release of a library, but aside from the benefits of keeping external dependencies up to date, there are often external motivating factors outside of our control. By remaining loosely-coupled from such dependencies, we minimize the amount of work it takes to migrate to new versions.

Third, we don’t always have a full understanding of the behavior of third-party libraries. Using test doubles for dependencies presumes that the doubles are going to mimic the behavior of the type they are substituting correctly (at least within the context of the specification). Substituting behavior which you don’t fully understand or control may lead to unreliable specifications. Once a specification passes, there should be no reason for it to ever fail unless you change the behavior of your own code. This can’t be guaranteed when substituting third-party libraries.

While we shouldn’t substitute types from third-party libraries, we should verify that our systems work properly when using third-party libraries. This is achieved through integration and/or acceptance tests. With integration tests, we verify that our systems display the expected behavior when integrated with external systems. If our systems have been properly decoupled from the use of third-party libraries, only the Adaptors need to be tested. A system which has taken measures to remain decoupled from third-party libraries should have far fewer integration tests than those that test the native behavior of the application. With acceptance tests, we verify the behavior of the entire application from end to end which would exercise the system along with its external dependencies.
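
As a sketch of this recommendation in Java (the mailer and notifier types here are hypothetical), a third-party type is hidden behind an application-owned abstraction; specifications substitute a double for the abstraction, while only the thin Adaptor needs integration testing against the real library:

// Stand-in for a third-party type we don't control and shouldn't double.
class VendorSmtpMailer {
	void send(String to, String subject, String body) { /* vendor logic */ }
}

// Application-owned abstraction expressing only the behavior we need.
interface Notifier {
	void notify(String recipient, String message);
}

// Adaptor confining knowledge of the vendor API to a single place.
class SmtpNotifier implements Notifier {

	private final VendorSmtpMailer mailer;

	SmtpNotifier(VendorSmtpMailer mailer) {
		this.mailer = mailer;
	}

	public void notify(String recipient, String message) {
		mailer.send(recipient, "Notification", message);
	}
}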

 

Recommendation 3: Don’t Substitute Concrete Types

The following principle is set forth in the book Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, et al:

Program to an interface, not an implementation.

All objects possess a public interface and in this sense all object-oriented systems are collaborations of objects interacting through interfaces. What is meant by this principle, however, is that objects should only depend upon the interface of another object, not the implementation of that object. By taking dependencies upon concrete types, objects are implicitly bound by the implementation details of that object. Subtypes can be substituted, but subtypes are inextricably coupled to their base types.

Set forth in the book Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin, a related principle referred to as the Interface Segregation Principle states:

Clients should not be forced to depend on methods that they do not use.

The Interface Segregation Principle is set forth to address several issues that arise from non-cohesive or “fat” interfaces, but the issue most pertinent to our discussion is the problem of associative coupling. When a component takes a dependency upon a concrete type, it forms an associative coupling with all other clients of that dependency. As new requirements drive changes to the internals of the dependency for one client, all other clients coupled directly to the same dependency may be affected regardless of whether they depend upon the same sets of behavior or not. This problem can be mitigated by defining dependencies upon Role-based Interfaces. In this way, objects declare their dependencies in terms of behavior, not specific implementations of behavior.

So, one might ask, “What does this have to do with Test Doubles?” There is nothing particularly problematic about replacing concrete types from an implementation perspective. In some languages, measures must be taken to ensure virtual dispatch can occur so that the behavior of a concrete type can be overridden, but where this actually becomes relevant to our discussion is in what our specifications are trying to tell us about our design. When you find yourself creating test doubles for concrete types, it’s as if your specifications are crying out: “Hey dummy, you have some coupling here!” By listening to the feedback provided by our specifications, we can begin to spot code smells which may point to problems in our implementation.
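
To make the remedy concrete, the following Java sketch (with hypothetical types) declares the dependency as a Role-based Interface rather than a concrete class, allowing a test double to implement the role directly:

import java.util.List;

class Customer { /* elided domain type */ }

// Role-based interface describing only the behavior ReportGenerator
// needs, rather than a concrete CustomerDatabase implementation.
interface ActiveCustomerSource {
	List<Customer> findActiveCustomers();
}

class ReportGenerator {

	private final ActiveCustomerSource customerSource;

	// Depending upon the role keeps the class decoupled from other
	// clients of the real data source.
	ReportGenerator(ActiveCustomerSource customerSource) {
		this.customerSource = customerSource;
	}

	int countActiveCustomers() {
		return customerSource.findActiveCustomers().size();
	}
}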

 

Recommendation 4: Focus on Behavior

When writing specifications, it can be easy to fall into the trap of over-specifying the components of the system. This occurs when we write specifications that not only verify the expected behavior of a system, but which also verify that the behavior is achieved using a specific implementation.

Writing component-level specifications will always require some level of coupling to the component’s implementation details. Nevertheless, we should strive to minimize the coupling to those interactions which are required to verify the system requirements. If the System Under Test takes 10 steps to achieve the desired outcome, but the outcome of step 10 by itself is sufficient to verify that the desired behavior occurred, our specifications shouldn’t care about steps 1 through 9. If someone figures out a way to achieve the same outcome with only 3 steps, the specifications of the system shouldn’t need to change.

This leads us to a recommendation from the book xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros:

Use the Front Door First

By “front door”, the author means that we should strive to verify behavior using the public interface of our components when possible, and use interaction-based verification when necessary. For example, when the behavior of a component can be verified by checking return values from operations performed by the object or by checking the interactions which occurred with its dependencies, we should prefer checking the return values over checking its interactions. At times, verifying the behavior of an object requires that we examine how the object interacted with its collaborators. When this is necessary, we should strive to remain as loosely coupled as possible by only specifying the minimal interactions required to verify the expected behavior.
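
In terms of the earlier Java rate calculator example, the preference might look like the following sketch (Mockito is assumed for the back-door case):

import java.math.BigDecimal;

import static org.mockito.Mockito.verify;

public class FrontDoorFirstExample {

	public void verifyRate(RateCalculator calculator, ShipmentDetails shipmentDetails,
			BaseRateStructure baseRateStructureMock) {
		// Front door: verify the behavior through its observable outcome.
		BigDecimal rate = calculator.calculateRateFor(shipmentDetails);
		assert rate.equals(new BigDecimal(10.0));

		// Back door, only when the outcome isn't otherwise observable:
		// verify the minimal interaction required by the requirement.
		verify(baseRateStructureMock).calculateRateFor(shipmentDetails);
	}
}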

 

Conclusion

In this article, we discussed a few strategies for using Test Doubles effectively. Next time we’ll take a look at a technique for creating Test Doubles which aids in both reducing coupling and obscurity … but at a cost.

Effective Tests: Auto-mocking Containers

On May 31, 2011, in Uncategorized, by derekgreer
This entry is part 13 of 17 in the series Effective Tests

In the last installment, I set forth some recommendations for using Test Doubles effectively. In this article, I’ll discuss a class of tools which can aid in reducing some of the coupling and obscurity that comes with the use of Test Doubles: Auto-mocking Containers.

Auto-mocking Containers

Executable specifications can provide valuable documentation of a system’s behavior. When written well, they can not only clearly describe what the system does, but also serve as an example for how the system is intended to be used. Unfortunately, it is this aspect of our specifications which can often end up working against our goal of writing maintainable software.

Ideally, an executable specification would describe the expected behavior of a system in such a way as to also clearly demonstrate its intended use without obscuring its purpose with extraneous implementation details. One class of tools which aids in achieving this goal is the Auto-mocking Container.

An Auto-mocking Container is a specialized inversion of control container for constructing a System Under Test with Test Doubles automatically supplied for any dependencies. By using an auto-mocking container, details such as the declaration of test double fields and test double instantiation can be removed from the specification, rendering a cleaner implementation devoid of such extraneous details.

Consider the following class which displays part details to a user and is responsible for retrieving the requested details from a cached copy if present:

    public class DisplayPartDetailsAction
    {
        readonly ICachingService _cachingService;
        readonly IPartDisplayAdaptor _partDisplayAdaptor;
        readonly IPartRepository _partRepository;

        public DisplayPartDetailsAction(
            ICachingService cachingService,
            IPartRepository partRepository,
            IPartDisplayAdaptor partDisplayAdaptor)
        {
            _cachingService = cachingService;
            _partRepository = partRepository;
            _partDisplayAdaptor = partDisplayAdaptor;
        }

        public void Display(string partId)
        {
            PartDetail details = _cachingService.RetrievePartDetails(partId) ??
                                 _partRepository.GetPartDetailByPartId(partId);

            _partDisplayAdaptor.Display(details);
        }
    }

The specification for this behavior would need to verify that the System Under Test attempts to retrieve the PartDetail from the ICachingService, but would also need to supply implementations for the IPartRepository and IPartDisplayAdaptor as shown in the following listing:

    public class when_displaying_part_details
    {
        const string PartId = "12345";
        static Mock<ICachingService> _cachingServiceMock;
        static DisplayPartDetailsAction _subject;

        Establish context = () =>
            {
                _cachingServiceMock = new Mock<ICachingService>();
                var partRepositoryDummy = new Mock<IPartRepository>();
                var partDisplayAdaptorDummy = new Mock<IPartDisplayAdaptor>();
                _subject = new DisplayPartDetailsAction(_cachingServiceMock.Object, partRepositoryDummy.Object,
                                                        partDisplayAdaptorDummy.Object);
            };

        Because of = () => _subject.Display(PartId);

        It should_retrieve_the_part_information_from_the_cache =
            () => _cachingServiceMock.Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
    }

By using an auto-mocking container, the specification can be written without the need for an explicit Mock field or for instantiating Dummy instances for the IPartRepository and IPartDisplayAdaptor dependencies. The following demonstrates such an example using AutoMock, an auto-mocking container which leverages the Moq framework:

    public class when_displaying_part_details
    {
        const string PartId = "12345";
        static AutoMockContainer _container;
        static DisplayPartDetailsAction _subject;

        Establish context = () =>
            {
                _container = new AutoMockContainer(new MockFactory(MockBehavior.Loose));
                _subject = _container.Create<DisplayPartDetailsAction>();
            };

        Because of = () => _subject.Display(PartId);

        It should_retrieve_the_part_information_from_the_cache =
            () => _container.GetMock<ICachingService>().Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
    }

While this implementation eliminates references to the extraneous dependencies, it does impose a few extraneous implementation details of its own. To further relieve this specification of implementation details associated with the auto-mocking container, a reusable base context can be extracted:

    public abstract class WithSubject<T> where T : class
    {
        protected static AutoMockContainer Container;
        protected static T Subject;

         Establish context = () =>
            {
                Container = new AutoMockContainer(new MockFactory(MockBehavior.Loose));
                Subject = Container.Create<T>();
            };

        protected static Mock<TDouble> For<TDouble>() where TDouble : class
        {
            return Container.GetMock<TDouble>();
        }
    }

By extending the auto-mocking base context, the specification can be written more concisely:

    public class when_displaying_part_details : WithSubject<DisplayPartDetailsAction>
    {
        const string PartId = "12345";

        Because of = () => Subject.Display(PartId);

        It should_retrieve_the_part_information_from_the_cache =
            () => For<ICachingService>().Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
    }

Another advantage gained by the use of auto-mocking containers is decoupling. By inverting the concern of how the System Under Test is constructed, dependencies can be added, modified, or deleted without affecting specifications for which the dependency has no bearing.

Trade-offs

While auto-mocking containers can make specifications cleaner, easier to write, and more adaptable to change, their use can come at a slight cost. When using mocking frameworks and hand-rolled doubles directly, there is always at least one point of reference where the requirements of instantiating the System Under Test provide feedback about its design as a whole.

Use of auto-mocking containers allows us to produce contextual slices of how the system works, limiting the information about the system’s dependencies to that knowledge required by the context in question. From a documentation perspective, this can aid in understanding how the system is used to facilitate a particular feature. From a design perspective, however, their use can eliminate one source of feedback about the evolving design of the system. Without such inversion of control, hints of violating the Single Responsibility Principle can be seen within the specifications, evidenced by overly complex constructor initialization. By removing the explicit declaration of the system’s dependencies from the specifications, we also remove this point of feedback.

That said, the benefits of leveraging auto-mocking containers tend to outweigh the cost of removing this point of feedback. Cases of mutually-exclusive dependencies are usually in the minority and each addition and/or modification to a constructor provides an equal level of feedback about a class’s potential complexity.

Conclusion

In this article, we looked at the use of auto-mocking containers as a tool for reducing obscurity and coupling within our specifications. Next time, we’ll look at a technique for reducing the obscurity that comes from overly complex assertions.

Effective Tests: Custom Assertions

On June 11, 2011, in Uncategorized, by derekgreer
This entry is part 14 of 17 in the series Effective Tests

In our last installment, we took a look at using Auto-mocking Containers as a way of reducing coupling and obscurity within our tests. This time, we’ll take a look at another technique which aids in preventing obscurity caused by complex assertions: Custom Assertions.

Custom Assertions

As discussed earlier in the series, executable specifications are a model for our requirements. Whether serving as a guide for our implementation or as documentation for existing behavior, specifications should be easy to understand. Assertion implementation is one of the areas that can often begin to obscure the intent of our specifications. When standard assertions fall short of expressing what is being validated clearly and concisely, they can be replaced with Custom Assertions. Custom assertions are domain-specific assertions which encapsulate complex or obscure testing logic within intention-revealing methods.

Consider the following example which validates that the items returned from a ReviewService class are sorted in descending order:

    public class when_a_customer_retrieves_comment_history : WithSubject<ReviewService>
    {
        const string ItemId = "123";
        static IEnumerable<Comment> _comments;

        Establish context = () => For<IItemRepository>()
                                      .Setup(x => x.Get(ItemId))
                                      .Returns(new Item(new[]
                                                            {
                                                                new Comment("comment 1", DateTime.MinValue.AddDays(1)),
                                                                new Comment("comment 2", DateTime.MinValue.AddDays(2)),
                                                                new Comment("comment 3", DateTime.MinValue.AddDays(3))
                                                            }));

        Because of = () => _comments = Subject.GetCommentsForItem(ItemId);
      
        It should_return_comments_sorted_by_date_in_descending_order = () =>
            {
                Comment[] commentsArray = _comments.ToArray();
                for (int i = commentsArray.Length - 1; i > 0; i--)
                {
                    if (commentsArray[i].TimeStamp > commentsArray[i - 1].TimeStamp)
                    {
                        throw new Exception(
                            string.Format(
                                "Expected comments sorted in descending order, but found comment \'{0}\' on {1} after \'{2}\' on {3}",
                                commentsArray[i].Text, commentsArray[i].TimeStamp, commentsArray[i - 1].Text,
                                commentsArray[i - 1].TimeStamp));
                    }
                }
            };
    }

 

While the identifiers used to describe the specification are clear, the observation’s implementation contains a significant amount of verification logic which makes the specification more difficult to read. By moving the verification logic into a custom assertion which describes what is expected, we can clarify the specification’s intent.

When developing on the .Net platform, Extension Methods provide a nice way of encapsulating assertion logic while achieving an expressive API. The following listing shows the same assertion logic contained within an extension method:

    public static class CustomAssertions
    {
        public static void ShouldBeSortedByDateInDescendingOrder(this IEnumerable<Comment> comments)
        {
            Comment[] commentsArray = comments.ToArray();
            for (int i = commentsArray.Length - 1; i > 0; i--)
            {
                if (commentsArray[i].TimeStamp > commentsArray[i - 1].TimeStamp)
                {
                    throw new Exception(
                        string.Format(
                            "Expected comments sorted in descending order, but found comment \'{0}\' on {1} after \'{2}\' on {3}",
                            commentsArray[i].Text, commentsArray[i].TimeStamp, commentsArray[i - 1].Text,
                            commentsArray[i - 1].TimeStamp));
                }
            }
        }
    }

 

Using this new custom assertion, the specification can be rewritten to be more intention-revealing:

    public class when_a_customer_retrieves_comment_history : WithSubject<ReviewService>
    {
        const string ItemId = "123";
        static IEnumerable<Comment> _comments;

        Establish context = () => For<IItemRepository>()
                                      .Setup(x => x.Get(ItemId))
                                      .Returns(new Item(new[]
                                                            {
                                                                new Comment("comment 1", DateTime.MinValue.AddDays(1)),
                                                                new Comment("comment 2", DateTime.MinValue.AddDays(2)),
                                                                new Comment("comment 3", DateTime.MinValue.AddDays(3))
                                                            }));

        Because of = () => _comments = Subject.GetCommentsForItem(ItemId);

        It should_return_comments_sorted_by_date_in_descending_order = () => _comments.ShouldBeSortedByDateInDescendingOrder();
    }

Verifying Assertion Logic

Another advantage of factoring out complex assertion logic into custom assertion methods is the ability to verify that the logic actually works as expected. This can be particularly valuable if the assertion logic is reused by many specifications.

The following listing shows positive and negative tests for our custom assertion:

    public class when_asserting_unsorted_comments_are_sorted_in_descending_order
    {
        static Exception _exception;
        static List<Comment> _unsortedComments;

        Establish context = () =>
            {
                _unsortedComments = new List<Comment>
                                        {
                                            new Comment("comment 1", DateTime.MinValue.AddDays(1)),
                                            new Comment("comment 4", DateTime.MinValue.AddDays(4)),
                                            new Comment("comment 3", DateTime.MinValue.AddDays(3)),
                                            new Comment("comment 2", DateTime.MinValue.AddDays(2))
                                        };
            };

        Because of = () => _exception = Catch.Exception(() => _unsortedComments.ShouldBeSortedByDateInDescendingOrder());

        It should_throw_an_exception = () => _exception.ShouldBeOfType(typeof(Exception));
    }

    public class when_asserting_sorted_comments_are_sorted_in_descending_order
    {
        static Exception _exception;
        static List<Comment> _sortedComments;

        Establish context = () =>
        {
            _sortedComments = new List<Comment>
                                        {
                                            new Comment("comment 4", DateTime.MinValue.AddDays(4)),
                                            new Comment("comment 3", DateTime.MinValue.AddDays(3)),
                                            new Comment("comment 2", DateTime.MinValue.AddDays(2)),
                                            new Comment("comment 1", DateTime.MinValue.AddDays(1))
                                        };
        };

        Because of = () => _exception = Catch.Exception(() => _sortedComments.ShouldBeSortedByDateInDescendingOrder());

        It should_not_throw_an_exception = () => _exception.ShouldBeNull();
    }

Conclusion

This time, we examined a simple strategy for clarifying the intent of our specifications: moving complex verification logic into custom assertion methods. Next time, we’ll take a look at another strategy for clarifying the intent of our specifications which also serves to reduce test code duplication and test-specific code within our production types.

Effective Tests: Expected Objects

On June 24, 2011, in Uncategorized, by derekgreer
This entry is part 15 of 17 in the series Effective Tests

In the last installment of the Effective Tests series, the topic of Custom Assertions was presented as a strategy for helping to clarify the intent of our tests. This time we’ll take a look at another test pattern for improving the communication of our tests in addition to reducing test code duplication and the need to add test-specific code to our production types.

Expected Objects

Writing tests often involves inspecting the state of collaborating objects and the messages they exchange within a system. This often leads to declaring multiple assertions on fields of the same object which can lead to several maintenance issues. First, if multiple specifications need to verify the same values then this can result in test code duplication. For example, two searches for a customer record with different criteria may be expected to return the same result. Second, when many fine-grained assertions are performed within a specification, the overall purpose can become obscured. For example, a specification may indicate that a value returned from an order process “should contain a first name and a last name and a home phone number and an address line 1 and …” while the intended perspective may be that the operation “should return a shipment confirmation”.

One solution to this problem is to override an object’s equality operators and/or methods to suit the needs of the test. Unfortunately, this is not without its own set of issues. Aside from introducing behavior into the system which is only exercised by the tests, this strategy may conflict with the existing or future needs of the system due to a difference in how each defines equality for the objects being compared. While a test may need to compare all the properties of two objects, the system may require equality to be based upon the object’s identity (e.g. two customers are the same if they have the same customer Id). It may happen that the system already defines equality suitable to the needs of the test, but this is subject to change. A system may compare two objects by value for the purposes of indexing, ensuring cardinality, or an assortment of domain-specific reasons whose needs may change as the system evolves. While the initial state of an object’s definition of equality may coincide with the needs of the test, the needs of both represent two axes of change which could lead to higher maintenance costs if not dealt with separately.

When using state-based verification, one way of avoiding test code duplication, obscurity and the need to equip the system with test-specific equality code is to implement the Expected Object pattern. The Expected Object pattern defines objects which encapsulate test-specific equality separate from the objects they are compared against. An expected object may be implemented as a sub-type whose equality members have been overloaded to perform the desired comparisons or as a test-specific type designed to compare itself against another object type.

Consider the following specification which validates that placing an order returns an order receipt populated with the expected values:

[Subject(typeof (OrderService))]
public class when_an_order_is_placed : WithSubject<OrderService>
{
	static readonly Guid CustomerId = new Guid("061F3CED-405F-4261-AF8C-AA2B0694DAD8");
	const long OrderNumber = 1L;
	static Customer _customer;
	static Order _order;
	static OrderReceipt _orderReceipt;


	Establish context = () =>
		{
			_customer = new TestCustomer(CustomerId)
				            {
				            	FirstName = "First",
				            	LastName = "Last",
				            	PhoneNumber = "5129130000",
				            	Address = new Address
				            		        {
				            		          	LineOne = "123 Street",
				            		          	LineTwo = string.Empty,
				            		          	City = "Austin",
				            		          	State = "TX",
				            		          	ZipCode = "78717"
				            		        }
				            };
			For<IOrderNumberProvider<long>>().Setup(x => x.GetNext()).Returns(OrderNumber);
			For<ICustomerRepository>().Setup(x => x.Get(Parameter.IsAny<Guid>())).Returns(_customer);
			_order = new Order(1, "Product A");
		};

	Because of = () => _orderReceipt = Subject.PlaceOrder(_order, _customer.Id);

	It should_return_a_receipt_with_order_number = () => _orderReceipt.OrderNumber.ShouldEqual(OrderNumber.ToString());

	It should_return_a_receipt_with_order_description = () => _orderReceipt.Orders.ShouldContain(_order);

	It should_return_a_receipt_with_customer_id = () => _orderReceipt.CustomerId.ShouldEqual(_customer.Id.ToString());
		
	It should_return_an_order_receipt_with_customer_name = () => _orderReceipt.CustomerName.ShouldEqual(_customer.FirstName + " " + _customer.LastName);

	It should_return_a_receipt_with_customer_phone = () => _orderReceipt.CustomerPhone.ShouldEqual(_customer.PhoneNumber);

	It should_return_a_receipt_with_address_line_1 = () => _orderReceipt.AddressLineOne.ShouldEqual(_customer.Address.LineOne);

	It should_return_a_receipt_with_address_line_2 = () => _orderReceipt.AddressLineTwo.ShouldEqual(_customer.Address.LineTwo);
		
	It should_return_a_receipt_with_city = () => _orderReceipt.City.ShouldEqual(_customer.Address.City);

	It should_return_a_receipt_with_state = () => _orderReceipt.State.ShouldEqual(_customer.Address.State);

	It should_return_a_receipt_with_zip = () => _orderReceipt.ZipCode.ShouldEqual(_customer.Address.ZipCode);
}
Listing 1

While the specification in listing 1 provides ample detail about the values that should be present on the returned receipt, such an implementation precludes reuse and tends to overwhelm the purpose of the specification. This problem is further compounded as the composition complexity increases.

As an alternative to declaring what each field of a particular object should contain, the Expected Object pattern allows you to declare what a particular object should look like. By replacing the specification’s discrete assertions with a single assertion comparing an Expected Object against a resulting state, the essence of the specification can be preserved while maintaining an equivalent level of verification.

Consider the following simple implementation for an Expected Object:

class ExpectedOrderReceipt : OrderReceipt
{
	public override bool Equals(object obj)
	{
		var otherReceipt = obj as OrderReceipt;
		if (otherReceipt == null) return false;

		return OrderNumber.Equals(otherReceipt.OrderNumber) &&
			    CustomerId.Equals(otherReceipt.CustomerId) &&
			    CustomerName.Equals(otherReceipt.CustomerName) &&
			    CustomerPhone.Equals(otherReceipt.CustomerPhone) &&
			    AddressLineOne.Equals(otherReceipt.AddressLineOne) &&
			    AddressLineTwo.Equals(otherReceipt.AddressLineTwo) &&
			    City.Equals(otherReceipt.City) &&
			    State.Equals(otherReceipt.State) &&
			    ZipCode.Equals(otherReceipt.ZipCode) &&
			    Orders.ToList().SequenceEqual(otherReceipt.Orders);
	}
}
Listing 2

Establishing an instance of the expected object in listing 2 allows the previous discrete assertions to be replaced with a single assertion declaring what the returned receipt should look like:

[Subject(typeof (OrderService))]
public class when_an_order_is_placed : WithSubject<OrderService>
{
	const long OrderNumber = 1L;
	static readonly Guid CustomerId = new Guid("061F3CED-405F-4261-AF8C-AA2B0694DAD8");
	static Customer _customer;
	static ExpectedOrderReceipt _expectedOrderReceipt;
	static Order _order;
	static OrderReceipt _orderReceipt;


	Establish context = () =>
		{
			_customer = new TestCustomer(CustomerId)
				            {
				            	FirstName = "First",
				            	LastName = "Last",
				            	PhoneNumber = "5129130000",
				            	Address = new Address
				            		        {
				            		          	LineOne = "123 Street",
				            		          	LineTwo = string.Empty,
				            		          	City = "Austin",
				            		          	State = "TX",
				            		          	ZipCode = "78717"
				            		        }
				            };
			For<IOrderNumberProvider<long>>().Setup(x => x.GetNext()).Returns(OrderNumber);
			For<ICustomerRepository>().Setup(x => x.Get(Parameter.IsAny<Guid>())).Returns(_customer);
			_order = new Order(1, "Product A");

			_expectedOrderReceipt = new ExpectedOrderReceipt
				                        {
				                        	OrderNumber = OrderNumber.ToString(),
				                        	CustomerName = "First Last",
				                        	CustomerPhone = "5129130000",
				                        	AddressLineOne = "123 Street",
				                        	AddressLineTwo = string.Empty,
				                        	City = "Austin",
				                        	State = "TX",
				                        	ZipCode = "78717",
				                        	CustomerId = CustomerId.ToString(),
				                        	Orders = new List<Order> {_order}
				                        };
		};

	Because of = () => _orderReceipt = Subject.PlaceOrder(_order, _customer.Id);

	It should_return_a_receipt_with_shipping_information_and_order_number =
		() => _expectedOrderReceipt.Equals(_orderReceipt).ShouldBeTrue();
}
Listing 3

The implementation strategy in listing 3 offers a subtle shift in perspective, but one which may more closely model the language of the business.

This is not to say that discrete assertions are always wrong. The level of detail modeled by an application’s specifications should be based upon the needs of the business. Consider the test runner output for both implementations:

[Figure 1: Test runner output for both specifications]

Examining the results of executing both specifications in figure 1, we see that the first describes each field being validated, while the second describes what the validations of these fields collectively mean. Which is best will depend upon your particular business needs. While the first implementation provides a more detailed specification of the receipt, this may or may not be as important to the business as knowing that the receipt as a whole is correct. For example, consider if the order number were missing. Is the correct perspective that the receipt is 90% correct or 100% wrong? The correct answer is … it depends.

Explicit Feedback

While the Expected Object implementation shown in listing 2 may be an adequate approach in some cases, it does have the shortcoming of not providing explicit feedback about how the two objects differ. To address this, we can implement our Expected Object as a Custom Assertion. Instead of asserting on the return value of comparing the expected object to an object returned from our system, we can design the Expected Object to throw an exception detailing what state differed between the two objects. The following listing demonstrates this approach:

class ExpectedOrderReceipt : OrderReceipt
{
	public void ShouldEqual(object obj)
	{
		var otherReceipt = obj as OrderReceipt;
		if (otherReceipt == null)
			throw new Exception("Expected an OrderReceipt but found " + (obj == null ? "null" : obj.GetType().Name));

		var messages = new List<string>();

		if (!OrderNumber.Equals(otherReceipt.OrderNumber))
			messages.Add(string.Format("For OrderNumber, expected '{0}' but found '{1}'", OrderNumber, otherReceipt.OrderNumber));

		if (!CustomerId.Equals(otherReceipt.CustomerId))
			messages.Add(string.Format("For CustomerId, expected '{0}' but found '{1}'", CustomerId, otherReceipt.CustomerId));

		if (!CustomerName.Equals(otherReceipt.CustomerName))
			messages.Add(string.Format("For CustomerName, expected '{0}' but found '{1}'", CustomerName, otherReceipt.CustomerName));

		if (!CustomerPhone.Equals(otherReceipt.CustomerPhone))
			messages.Add(string.Format("For CustomerPhone, expected '{0}' but found '{1}'", CustomerPhone, otherReceipt.CustomerPhone));

		if (!AddressLineOne.Equals(otherReceipt.AddressLineOne))
			messages.Add(string.Format("For AddressLineOne, expected '{0}' but found '{1}'", AddressLineOne, otherReceipt.AddressLineOne));

		if (!AddressLineTwo.Equals(otherReceipt.AddressLineTwo))
			messages.Add(string.Format("For AddressLineTwo, expected '{0}' but found '{1}'", AddressLineTwo, otherReceipt.AddressLineOne));

		if (!City.Equals(otherReceipt.City))
			messages.Add(string.Format("For City, expected '{0}' but found '{1}'", City, otherReceipt.City));

		if (!State.Equals(otherReceipt.State))
			messages.Add(string.Format("For State, expected '{0}' but found '{1}'", State, otherReceipt.State));

		if (!ZipCode.Equals(otherReceipt.ZipCode))
			messages.Add(string.Format("For ZipCode, expected '{0}' but found '{1}'", ZipCode, otherReceipt.ZipCode));

		if (!Orders.ToList().SequenceEqual(otherReceipt.Orders))
			messages.Add("For Orders, expected the same sequence but was different.");

		if(messages.Count > 0)
			throw new Exception(string.Join(Environment.NewLine, messages));
	}
}
Listing 4
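
As an aside, the field-by-field comparisons in Listing 4 grow linearly with the number of properties being verified. The following is a minimal sketch, assuming all of the relevant state is exposed as public readable properties of OrderReceipt and that System.Linq is in scope, of how the same explicit feedback could be produced with reflection:

class ExpectedOrderReceipt : OrderReceipt
{
	public void ShouldEqual(object obj)
	{
		var otherReceipt = obj as OrderReceipt;
		if (otherReceipt == null)
			throw new Exception("Expected an OrderReceipt but found " + (obj == null ? "null" : obj.GetType().Name));

		// Compare each public property by name, collecting a message for every difference.
		var messages = typeof (OrderReceipt).GetProperties()
			.Where(property => property.Name != "Orders")
			.Select(property => new
				{
					property.Name,
					Expected = property.GetValue(this, null),
					Actual = property.GetValue(otherReceipt, null)
				})
			.Where(comparison => !Equals(comparison.Expected, comparison.Actual))
			.Select(comparison => string.Format("For {0}, expected '{1}' but found '{2}'",
				comparison.Name, comparison.Expected, comparison.Actual))
			.ToList();

		// The Orders collection is compared by sequence, as before.
		if (!Orders.ToList().SequenceEqual(otherReceipt.Orders))
			messages.Add("For Orders, expected the same sequence but was different.");

		if (messages.Count > 0)
			throw new Exception(string.Join(Environment.NewLine, messages));
	}
}

Whether this generalization clarifies or obscures the specification’s intent is a judgment call; the explicit version in Listing 4 keeps every verified property visible to the reader.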

The following listing shows the specification modified to use the Custom Assertion implementation from listing 4, with several values on the TestCustomer changed to differ from the expected values:

[Subject(typeof (OrderService))]
public class when_an_order_is_placed : WithSubject<OrderService>
{
	const long OrderNumber = 1L;
	static readonly Guid CustomerId = new Guid("061F3CED-405F-4261-AF8C-AA2B0694DAD8");
	static Customer _customer;
	static ExpectedOrderReceipt _expectedOrderReceipt;
	static Order _order;
	static OrderReceipt _orderReceipt;


	Establish context = () =>
		{
			_customer = new TestCustomer(CustomerId)
				            {
				            	FirstName = "Wrong",
				            	LastName = "Wrong",
				            	PhoneNumber = "Wrong",
				            	Address = new Address
				            		        {
				            		          	LineOne = "Wrong",
				            		          	LineTwo = "Wrong",
				            		          	City = "Austin",
				            		          	State = "TX",
				            		          	ZipCode = "78717"
				            		        }
				            };
			For<IOrderNumberProvider<long>>().Setup(x => x.GetNext()).Returns(OrderNumber);
			For<ICustomerRepository>().Setup(x => x.Get(Parameter.IsAny<Guid>())).Returns(_customer);
			_order = new Order(1, "Product A");

			_expectedOrderReceipt = new ExpectedOrderReceipt
				                        {
				                        	OrderNumber = OrderNumber.ToString(),
				                        	CustomerName = "First Last",
				                        	CustomerPhone = "5129130000",
				                        	AddressLineOne = "123 Street",
				                        	AddressLineTwo = string.Empty,
				                        	City = "Austin",
				                        	State = "TX",
				                        	ZipCode = "78717",
				                        	CustomerId = CustomerId.ToString(),
				                        	Orders = new List<Order> {_order}
				                        };
		};

	Because of = () => _orderReceipt = Subject.PlaceOrder(_order, _customer.Id);

	It should_return_a_receipt_with_shipping_and_order_information = () => _expectedOrderReceipt.ShouldEqual(_orderReceipt);
}
Listing 5

Running the specification produces the following output:

[Image: test runner output showing the property-difference messages]
Figure 2

Conclusion

This time, we took a look at the Expected Object pattern, which aids in reducing code duplication, eliminates the need to put test-specific equality behavior in our production code, and serves as a strategy for further clarifying the intent of our specifications. Next time, we’ll look at some strategies for combating obscurity and test-code duplication caused by test data.


Effective Tests: Avoiding Context Obscurity

On July 19, 2011, in Uncategorized, by derekgreer
This entry is part 16 of 17 in the series Effective Tests

In the last installment of our series, we looked at the Expected Object pattern as a way to reduce code duplication, eliminate the need to add test-specific equality concerns to production code, and aid in clarifying the intent of our tests. This time, we’ll take a look at some practices and techniques for avoiding context obscurity.

Context Obscurity

Validating the behavior of a system generally requires instantiating the System Under Test along with the setup of various dependencies and/or parameter objects which serve to define the context for the system’s execution. When the essential context is not discernible from the test implementation, this results in Context Obscurity.

Causes of Context Obscurity

The following sections list some of the main causes of context obscurity.

Incidental Context

The setup needs for a test often include information necessary for the behavior’s execution, yet irrelevant to the behavior being validated. For example, a given component may require a logging dependency to be supplied for instantiation, but the behavior being tested may have nothing to do with whether the component logs information or not. These types of setup concerns lead to Incidental Context, which can affect the clarity of the test.

Consider the following specification for a payment gateway component which validates that an exception is thrown when the system is asked to process a payment containing an expired credit card:

public class when_processing_a_payment_with_an_expired_credit_card
{
	static Exception _exception;
	static PaymentGateway _paymentGateway;
	static PaymentInformation _paymentInformation;
	static Mock<ILogger> _nullLogger;
	static Mock<IPaymentProvider> _stubPaymentProvider;

	Establish context = () =>
		{
			_nullLogger = new Mock<ILogger>();
			_stubPaymentProvider = new Mock<IPaymentProvider>();
			_stubPaymentProvider.Setup(x => x.ProcessPayment(Parameter.IsAny<Payment>()))
				.Returns(new PaymentReceipt
					        {
					        	ReceiptId = "12345",
					        	ChargeAmount = 300.00m,
					        	CardType = "Visa",
					        	CardLastFour = "1111",
					        	VendorId = "FF26AA123"
					        });
			_paymentGateway = new PaymentGateway(_stubPaymentProvider.Object, _nullLogger.Object);

			_paymentInformation = new PaymentInformation
				                      {
				                      	Amount = 300.00m,
				                      	CreditCardNumber = "41111111111111",
				                      	CreditCardType = CardType.Visa,
				                      	ExpirationDate = DateTime.Today.Subtract(TimeSpan.FromDays(1)),
				                      	CardHolderName = "John P. Doe",
				                      	BillingAddress = new Address
				                      		                {
				                      		                 	Name = "John Doe",
				                      		                 	AddressLineOne = "123 Street",
				                      		                 	City = "Springfield",
				                      		                 	StateCode = "MO",
				                      		                 	Zipcode = "65807"
				                      		                }
				                      };
		};

	Because of = () => _exception = Catch.Exception(() => _paymentGateway.Process(_paymentInformation));

	It should_throw_an_expired_card_exception = () => _exception.ShouldBeOfType<ExpiredCardException>();
}
Listing 1

While the specification in listing 1 only validates that an error occurs when an expired credit card is provided, there is quite a bit of setup necessary for the specification’s execution. Since the only information relevant to understanding how the system fulfills this behavior is how the PaymentGateway type is called, the value of the expiration date and the expected result, the majority of the setup is incidental to the modeling of the specification, leading to a low signal-to-noise ratio.

Lack of Cohesion

Another source of incidental context can occur when a System Under Test lacks Cohesion. Cohesion can be defined as the functional relatedness of a module. When a module serving as the System Under Test lacks cohesion (i.e. has multiple unrelated responsibilities), this can result in the need to set up dependencies which are never used by the area of concern being exercised by the test.

Consider the following specification which verifies that related product information is returned when a customer requests the details for a product:

public class when_the_customer_requests_product_information
{
	static readonly Guid ProductId = new Guid("BD1F1F9A-85BC-48B9-95B5-0CA8219A97A1");
	static readonly Guid RelatedProductId = new Guid("C363577B-1720-43C1-93D9-2C9F239B3D52");
	static Mock<IAuditService> _auditServiceStub;
	static Mock<IOrderHistoryRepository> _orderHistoryRepositoryStub;
	static Mock<IOrderReturnService> _orderReturnServiceStub;
	static Mock<IProductHistoryRepository> _productHistoryRepositoryStub;
	static ProductInformation _productInformation;
	static Mock<IProductRepository> _productRepositoryStub;
	static ProductService _productService;

	Establish context = () =>
		{
			_auditServiceStub = new Mock<IAuditService>();
			_productHistoryRepositoryStub = new Mock<IProductHistoryRepository>();
			_orderHistoryRepositoryStub = new Mock<IOrderHistoryRepository>();
			_orderReturnServiceStub = new Mock<IOrderReturnService>();
			_productRepositoryStub = new Mock<IProductRepository>();

			_productRepositoryStub.Setup(x => x.Get(ProductId)).Returns(new Product(ProductId, "Product description", 20.00m));
			_productRepositoryStub.Setup(x => x.GetRelatedProducts(ProductId))
				.Returns(new List<Product>
					        {
					         	new Product(RelatedProductId,
					         		          "Related product description",
					         		          30.10m)
					        });

			_productService = new ProductService(_auditServiceStub.Object, _productHistoryRepositoryStub.Object,
				                                    _orderHistoryRepositoryStub.Object, _orderReturnServiceStub.Object,
				                                    _productRepositoryStub.Object);
		};

	Because of = () => _productInformation = _productService.GetProductInformation(ProductId);

	It should_return_related_products = () => _productInformation.RelatedProducts.ShouldNotBeEmpty();
}
Listing 2

The ProductService class in listing 2 fulfills a number of responsibilities, including those dealing with related order history, product returns and various auditing needs. Though the specification is only concerned with verifying that related product information is retrieved, a number of unused test doubles are required to instantiate a ProductService instance, which may lead to a false assumption about the role these dependencies play in the behavior being validated.
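
One remedy, reflected in the guidelines later in this article, is to refactor the non-cohesive component. The following is a sketch, with hypothetical names, of extracting the product-catalog concerns into a component whose constructor asks only for the single dependency this area of concern actually uses, allowing the specification in listing 2 to drop its four unused test doubles:

// Hypothetical refactoring: the product-catalog concerns are extracted from
// ProductService into a cohesive component with a single dependency.
public class ProductCatalogService
{
	readonly IProductRepository _productRepository;

	public ProductCatalogService(IProductRepository productRepository)
	{
		_productRepository = productRepository;
	}

	public ProductInformation GetProductInformation(Guid productId)
	{
		var product = _productRepository.Get(productId);
		var relatedProducts = _productRepository.GetRelatedProducts(productId);

		return new ProductInformation(product, relatedProducts); // constructor assumed for the sketch
	}
}

A specification against ProductCatalogService would then need only the single _productRepositoryStub shown in listing 2.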

Missing Context

In some cases, setup code may be factored out of a concrete test implementation to enable reuse by other tests, or perhaps merely to reduce the visible amount of setup code. This practice obscures the context of the test when the information necessary for understanding how the system is used to facilitate the expected behavior is no longer discernible from the test itself.

Consider the following variation on the expired credit card specification:

public class when_processing_a_payment_with_an_expired_credit_card : PaymentContext
{
	static Exception _exception;

	Because of = () => _exception = Catch.Exception(() => PaymentGateway.Process(PaymentInformation));

	It should_throw_an_expired_card_exception = () => _exception.ShouldBeOfType<ExpiredCardException>();
}
Listing 3

While this implementation doesn’t overwhelm its reader with incidental or non-cohesive setup needs, key pieces of information for understanding how the system is used to achieve the expected behavior are missing, namely, the type information for the key objects used and the input values necessary to trigger the expected behavior.
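
One way to restore the essential context without reintroducing the incidental setup is to declare the consequential input within the test case itself. The following sketch assumes the PaymentContext base fixture exposes the PaymentGateway and PaymentInformation objects it constructs as protected static members:

public class when_processing_a_payment_with_an_expired_credit_card : PaymentContext
{
	static Exception _exception;

	// The one input value which actually triggers the expected behavior
	// is declared here rather than hidden within the base fixture.
	Establish context = () => PaymentInformation.ExpirationDate = DateTime.Today.Subtract(TimeSpan.FromDays(1));

	Because of = () => _exception = Catch.Exception(() => PaymentGateway.Process(PaymentInformation));

	It should_throw_an_expired_card_exception = () => _exception.ShouldBeOfType<ExpiredCardException>();
}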

Guidelines for Avoiding Obscurity

To avoid writing obscure tests, information pertinent to understanding how the system is used to achieve the expected behavior should be discernible within the test’s concrete implementation. While what’s considered pertinent is somewhat subjective, the following are some guidelines for helping to avoid obscure tests:

  • Ensure the type of the System Under Test is discernible.
  • Ensure the input and return parameter types used by the methods being exercised are discernible.
  • Ensure all collaborating test double types and any setup which is consequential to the behavior being validated are discernible.
  • Ensure all direct and indirect input values which are consequential to the behavior being validated are discernible.
  • Minimize any setup which is incidental to understanding the behavior being validated.
  • Refactor system components whose behavior results in setup needs unrelated to the area of concern being tested.

Strategies for Avoiding Obscurity

Base Fixtures

As previously noted, context obscurity can result from including either too much or too little context information. One strategy which can either cause or help to alleviate context obscurity is the use of Base Fixtures. Base fixtures are types which define setup and/or behavior which may be inherited by one or more concrete test cases. Base fixtures are commonly used to eliminate duplication across multiple test cases sharing the same context, but this often leads to obscurity due to the absence of setup information essential to understanding the derived test cases. Unlike the role stereotypes of other objects within a system, the role of executable specifications is to model the requirements of the system. The use of design principles and programming language capabilities should therefore be subordinated to this purpose.

One use of base fixtures which can reduce incidental context while preserving relevant context is the use of generic base fixtures. Establishing the System Under Test often requires the setup of test doubles to be used in the object’s instantiation. While any test double configuration needed for understanding a particular behavior of the system should be declared by the verifying test case, the instantiation of the System Under Test and any parameters needed are generally not pertinent to the test case. By allowing derived test cases to specify the type of the System Under Test as a generic parameter to the base fixture and providing methods for accessing any test doubles for unique setup needs, the non-essential portions of the context setup can be removed from the deriving test cases. This technique was demonstrated earlier in this series in the article Effective Tests: Auto-mocking Containers.

As a review, the following listing shows a base fixture which defines common code for setting up an auto-mocking container along with methods for configuring any test doubles used:

public abstract class WithSubject<T> where T : class
{
    protected static AutoMockContainer Container;
    protected static T Subject;

    Establish context = () =>
        {
            Container = new AutoMockContainer(new MockFactory(MockBehavior.Loose));
            Subject = Container.Create<T>();
        };

    protected static Mock<TDouble> For<TDouble>() where TDouble : class
    {
        return Container.GetMock<TDouble>();
    }
}
Listing 4

Given this base fixture, the following specification can be derived which specifies the type of the System Under Test without including the ancillary concerns of setting up the Auto-mocking container or any Test Dummies required:

public class when_displaying_part_details : WithSubject<DisplayPartDetailsAction>
{
    const string PartId = "12345";

    Because of = () => Subject.Display(PartId);

    It should_retrieve_the_part_information_from_the_cache =
        () => For<ICachingService>().Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
}
Listing 5

Base fixtures can be a benefit or a detriment to the clarity of our tests. When used, care should be taken to ensure the context of each deriving test case can be understood without needing to consult its base fixture.

Object Mothers

One of the specific types of setup needs which can lead to obscurity is the setup of test data. It is often necessary to construct test data objects to be used as either parameters, test stub values, or expected objects which can begin to dilute the clarity of a test as such needs increase. In some cases, the actual test data values aren’t pertinent to the subject of the test (e.g. an application submitted after the deadline), or a meaningful configuration of test data values can be abstracted into logical, well-known entities (e.g. a valid application). In such cases, the setup of test data can be delegated to an Object Mother. An Object Mother is a specialized factory whose role is to create canned test data objects.

The following demonstrates an Object Mother which provides canned Order objects:

public static class OrderObjectMother
{
	public static Order CreateOrder()
	{
		var cart = new Cart();
		cart.AddItem(Guid.NewGuid(), 2);
		var billingInformation = new BillingInformation
			                        {
			                         	BillingAddress = new Address
			                         		                {
			                         		                 	Name = "John Doe",
			                         		                 	AddressLineOne = "123 Street",
			                         		                 	City = "Springfield",
			                         		                 	StateCode = "MO",
			                         		                 	Zipcode = "65807"
			                         		                },
			                         	CreditCardNumber = "41111111111111",
			                         	CreditCardType = CardType.Visa,
			                         	ExpirationDate = DateTime.Today.AddYears(1), // an unexpired card for the canned valid order
			                         	CardHolderName = "John P. Doe",
			                        };

		var shippingInformation = new ShippingInformation(billingInformation);

		return new Order(cart, billingInformation, shippingInformation);
	}
}
Listing 6

Note that the CreateOrder() method is required to create several intermediate objects to build up an instance of the Order object. If these intermediate objects are required by other specifications, they might be factored out into their own Object Mother factories.
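
For example, the Address setup might be moved into an Object Mother of its own and reused wherever a well-known address is needed (a sketch; the factory and method names are illustrative):

public static class AddressObjectMother
{
	public static Address CreateDomesticAddress()
	{
		return new Address
			{
				Name = "John Doe",
				AddressLineOne = "123 Street",
				City = "Springfield",
				StateCode = "MO",
				Zipcode = "65807"
			};
	}
}

The CreateOrder() method could then declare BillingAddress = AddressObjectMother.CreateDomesticAddress() rather than building the address inline.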

By delegating the creation of an Order object to the Object Mother in listing 6, the following specification can be implemented with minimal visible context setup while preserving the essence of the declared context information:

public class when_placing_a_valid_order : WithSubject<OrderService>
{
	static Order _order;
	static OrderReceipt _receipt;

	Establish context = () => _order = OrderObjectMother.CreateOrder();

	Because of = () => _receipt = Subject.PlaceOrder(_order);

	It should_return_the_order_number = () => _receipt.OrderNumber.ShouldNotBeNull();
}
Listing 7

Since the specification in listing 7 concerns what happens when a valid order is placed rather than what constitutes a valid order, there is no need to show the values contained by the Order object created.

If a variation of the object is needed, new methods can be added to the Object Mother to denote the variation. The following specification assumes the existence of an additional factory method used to validate behavior associated with invalid orders:

public class when_placing_an_invalid_order : WithSubject<OrderService>
{
	static Exception _exception;
	static Order _invalidOrder;

	Establish context = () => _invalidOrder = OrderObjectMother.CreateInvalidOrder();

	Because of = () => _exception = Catch.Exception(() => Subject.PlaceOrder(_invalidOrder));

	It should_throw_an_invalid_order_exception = () => _exception.ShouldBeOfType<InvalidOrderException>();
}
Listing 8
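
The CreateInvalidOrder() method is assumed by listing 8. A minimal sketch might look like the following, assuming the domain treats an order with an empty cart as invalid and that the billing setup from listing 6 has been factored into a private CreateBillingInformation() helper:

	// Added to OrderObjectMother; assumes an order with an empty cart is invalid.
	public static Order CreateInvalidOrder()
	{
		var emptyCart = new Cart();
		var billingInformation = CreateBillingInformation(); // billing setup factored out of CreateOrder()
		var shippingInformation = new ShippingInformation(billingInformation);

		return new Order(emptyCart, billingInformation, shippingInformation);
	}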

Test Builders

While Object Mothers provide a nice way to retrieve canned test data, they don’t present an elegant way to deal with variability. For cases where a number of variations of the test data are needed, or when a subset of the values required for setting up test data objects is relevant to the declaring test case, the test data objects can be created using Test Builders. Test Builders are based upon the Builder pattern, which constructs objects by accumulating information from successive method calls terminated by a final construction method.

The following demonstrates a Test Builder for creating variations of an Order object:

public class OrderBuilder
{
	readonly BillingInformation _billingInformation;
	readonly ShippingInformation _shippingInformation;
	Cart _cart;

	public OrderBuilder()
	{
		_cart = new Cart();
		_cart.AddItem(Guid.NewGuid(), 2);

		_billingInformation = new BillingInformation
		{
			BillingAddress = new Address
			{
				Name = "John Doe",
				AddressLineOne = "123 Street",
				City = "Springfield",
				StateCode = "MO",
				Zipcode = "65807"
			},
			CreditCardNumber = "41111111111111",
			CreditCardType = CardType.Visa,
			ExpirationDate = DateTime.Today.AddYears(1), // a valid card by default; tests override as needed
			CardHolderName = "John P. Doe",
		};

		_shippingInformation = new ShippingInformation(_billingInformation);
	}

	public OrderBuilder WithCreditCardNumber(string creditCardNumber)
	{
		_billingInformation.CreditCardNumber = creditCardNumber;
		return this;
	}

	public OrderBuilder WithExpirationDate(DateTime expirationDate)
	{
		_billingInformation.ExpirationDate = expirationDate;
		return this;
	}

	public OrderBuilder WithCreditCardType(CardType cardType)
	{
		_billingInformation.CreditCardType = cardType;
		return this;
	}

	public OrderBuilder WithCardHolderName(string cardHolderName)
	{
		_billingInformation.CardHolderName = cardHolderName;
		return this;
	}

	public OrderBuilder WithCart(Cart cart)
	{
		_cart = cart;
		return this;
	}

	public Order Build()
	{
		return new Order(_cart, _billingInformation, _shippingInformation);
	}
}
Listing 9

The following specification demonstrates how the Test Builder in listing 9 might be used to validate the results of placing an order with an invalid credit card:

public class when_placing_an_order_with_an_invalid_credit_card : WithSubject<OrderService>
{
	static Exception _exception;
	static Order _invalidOrder;

	Establish context = () => _invalidOrder = new OrderBuilder()
		                                          .WithCreditCardNumber("12345")
		                                          .Build();

	Because of = () => _exception = Catch.Exception(() => Subject.PlaceOrder(_invalidOrder));


	It should_throw_an_invalid_credit_card_exception = () => _exception.ShouldBeOfType<InvalidCreditCardException>();
}
Listing 10
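
Because each With method returns the builder itself, variations compose naturally. As a sketch, assuming the OrderService surfaces an ExpiredCardException for expired cards, a specification combining two variations might read:

public class when_placing_an_order_with_an_expired_credit_card : WithSubject<OrderService>
{
	static Exception _exception;
	static Order _invalidOrder;

	Establish context = () => _invalidOrder = new OrderBuilder()
		                                          .WithExpirationDate(DateTime.Today.Subtract(TimeSpan.FromDays(1)))
		                                          .WithCardHolderName("Jane Q. Public")
		                                          .Build();

	Because of = () => _exception = Catch.Exception(() => Subject.PlaceOrder(_invalidOrder));

	It should_throw_an_expired_card_exception = () => _exception.ShouldBeOfType<ExpiredCardException>();
}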

Conclusion

In this article, we considered several causes of context obscurity and discussed a few ways of avoiding it. Next time, we’ll move on to the topic of writing automated acceptance tests.


Effective Tests: Acceptance Tests

On September 5, 2011, in Uncategorized, by derekgreer
This entry is part 17 of 17 in the series Effective Tests

In the last installment of our series, we discussed the topic of Context Obscurity along with strategies for avoiding the creation of obscure tests. As the final topic of this series, we’ll take an introductory look at the practice of writing Automated Acceptance Tests.

Acceptance Tests

Acceptance testing is the process of validating the behavior of a system from end-to-end. That is to say, acceptance tests ask the question: “When all the pieces are put together, does it work?” Often, when components are designed in isolation at the component or unit level, issues are discovered at the point those components are integrated. Regression in system behavior can also occur after an initially successful integration due to ongoing development without the protection of an end-to-end regression test suite.

Automated acceptance testing is accomplished by scripting interactions with a system along with verification of an observable outcome. For Graphical User Interface applications, acceptance tests typically employ UI automation testing tools. Examples include Selenium, Watir and WatiN for testing Web applications; ThoughtWorks’ Project White for testing Windows desktop applications; and Window Licker for Java Swing-based applications.

The authorship of acceptance tests falls into two general categories: collaborative and non-collaborative. Collaborative acceptance tests use tools which separate a testing grammar from the test implementation, allowing non-technical team members to collaborate on the writing of acceptance tests. Tools used for authoring collaborative acceptance tests include FitNesse, Cucumber, and StoryTeller. Non-collaborative acceptance tests combine the grammar and implementation of the tests, are typically written in the same language as the application being tested and can be authored using traditional xUnit testing frameworks.

Acceptance Test Driven Development

Acceptance Test Driven Development (A-TDD) is a software development process in which the features of an application are developed incrementally to satisfy specifications expressed in the form of automated acceptance tests, where the feature implementation phase is carried out using the Test-Driven Development methodology (i.e. Red/Green/Refactor).

The relationship of A-TDD to TDD can be expressed as two concentric processes, where the outer process represents the authoring of automated acceptance tests which serve as the catalyst for an inner process representing the implementation of features developed following the TDD methodology. The following diagram depicts this relationship:

 

[Image: ATDD — concentric acceptance-test and TDD process cycles]

 

When following the A-TDD methodology, one or more acceptance tests are written to reflect the acceptance criteria of a given user story. Based upon the expectations of the acceptance tests, one or more components are implemented using the TDD methodology until the acceptance tests pass. This process is depicted in the following diagram:

 

[Image: TDD-Process — acceptance tests drive TDD-implemented components until the acceptance tests pass]

 

An Example

The following is a simple example of an automated acceptance test for testing a feature to be developed for a Web commerce application. This test will validate that the first five products in the system are displayed to the user upon visiting the landing page for the site. The tools we’ll be using to implement our acceptance test are the Machine.Specifications library (a.k.a. MSpec) and the Selenium Web UI testing library.

A Few Recommendations

When testing .Net Web applications on a Windows platform, the following are a few recommendations you may want to consider for establishing your acceptance testing infrastructure:

  • Use IIS Express or a portable Web server such as CassiniDev.
  • Use a portable instance of Firefox or other targeted browser supported by your selected UI automated testing library.
  • Establish testing infrastructure which allows single instances of time-consuming processes to be started once for all acceptance tests. Examples of such processes include starting up Web server and/or browser processes and configuring any Object-Relational Mapping components (e.g. NHibernate SessionFactory initialization). A sketch of this approach follows the list.
  • Establish testing infrastructure which performs a complete setup and tear down of the application database. Consider setting up multiple strategies for running full test suites vs. running the current use case under test.
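
As a sketch of the start-once recommendation above, MSpec provides the IAssemblyContext interface, whose implementations are run when the test assembly starts and completes. The WebServer type below is a hypothetical wrapper around whichever Web server you select; only the Selenium FirefoxDriver is real:

public class AcceptanceTestEnvironment : IAssemblyContext
{
	public static WebServer Server;       // hypothetical wrapper around IIS Express/CassiniDev
	public static FirefoxDriver Browser;  // shared Selenium browser instance

	public void OnAssemblyStart()
	{
		// Expensive resources are created once for the entire test run.
		Server = new WebServer().Start();
		Browser = new FirefoxDriver();
	}

	public void OnAssemblyComplete()
	{
		Browser.Quit();
		Server.Stop();
	}
}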

 

To keep our example simple, we’ll assume our web site is already deployed as the default site on localhost port 80, that the database utilized by the application is already installed and configured, that our current machine has Firefox installed and that each test will be responsible for launching an instance of Selenium’s FirefoxDriver.

Here’s our acceptance test:

    [Subject("List Products")]
    public class when_a_user_requests_the_default_view
    {
        static Database _database;
        static FirefoxDriver _driver;

        Establish context = () =>
            {
                // Establish known database state
                _database = new Database().Open();
                _database.RemoveAllProducts();
                Enumerable.Range(0, 10).ToList().ForEach(i => _database.AddProduct(i));
                _database.Close();

                // Start the browser
                _driver = new FirefoxDriver();
            };

        Cleanup after = () => _driver.Close();

        Because of = () => _driver.Navigate().GoToUrl("http://localhost/");

        It should_display_the_first_five_products = () => _driver.FindElements(By.ClassName("product")).Count().ShouldEqual(5);
    }

In this test, the Establish delegate is used to set up the expectations for this scenario (i.e. the context). Those expectations include the presence of at least five product records in the database and that a Firefox browser is ready to use. The Because delegate is used to initiate our single action which should trigger the expected outcome. The It delegate is then used to verify that a call to the Selenium WebDriver’s FindElements() method returns exactly five elements with a class of ‘product’. Upon completion of the test, the Cleanup delegate is used to close the browser.

Conclusion

The intent of this article was merely to introduce the topic of automated acceptance testing since a thorough treatment of the subject is beyond the scope of this series. For further reading on this topic, I highly recommend the book Growing Object-Oriented Software Guided By Tests by Steve Freeman and Nat Pryce. While written from a Java perspective, this book is an excellent guide to Acceptance Test Driven Development and test writing practices in general.

This article concludes the Effective Tests series. I hope it will serve as a useful resource for the software development community.
