Effective Tests: Auto-mocking Containers

On May 31, 2011, by derekgreer

In the last installment, I set forth some recommendations for using Test Doubles effectively. In this article, I’ll discuss a class of tools which can aid in reducing some of the coupling and obscurity that comes with the use of Test Doubles: Auto-mocking Containers.

Auto-mocking Containers

Executable specifications can provide valuable documentation of a system’s behavior. When written well, they can not only clearly describe what the system does, but also serve as an example for how the system is intended to be used. Unfortunately, it is this aspect of our specifications which can often end up working against our goal of writing maintainable software.

Ideally, an executable specification would describe the expected behavior of a system in such a way as to also clearly demonstrate its intended use, without obscuring its purpose with extraneous implementation details. One class of tools which aids in achieving this goal is the Auto-mocking Container.

An Auto-mocking Container is a specialized inversion of control container which constructs a System Under Test with Test Doubles automatically supplied for any of its dependencies. By using an auto-mocking container, details such as the declaration of test double fields and the instantiation of test doubles can be removed from the specification, rendering a cleaner implementation devoid of such extraneous details.

Consider the following class, which displays part details to a user and is responsible for retrieving the requested details from a cached copy when present:

    public class DisplayPartDetailsAction
    {
        readonly ICachingService _cachingService;
        readonly IPartDisplayAdaptor _partDisplayAdaptor;
        readonly IPartRepository _partRepository;

        public DisplayPartDetailsAction(
            ICachingService cachingService,
            IPartRepository partRepository,
            IPartDisplayAdaptor partDisplayAdaptor)
        {
            _cachingService = cachingService;
            _partRepository = partRepository;
            _partDisplayAdaptor = partDisplayAdaptor;
        }

        public void Display(string partId)
        {
            PartDetail details = _cachingService.RetrievePartDetails(partId) ??
                                 _partRepository.GetPartDetailByPartId(partId);

            _partDisplayAdaptor.Display(details);
        }
    }

The specification for this behavior would need to verify that the System Under Test attempts to retrieve the PartDetail from the ICachingService, but would also need to supply implementations for the IPartRepository and IPartDisplayAdaptor as shown in the following listing:

    public class when_displaying_part_details
    {
        const string PartId = "12345";
        static Mock<ICachingService> _cachingServiceMock;
        static DisplayPartDetailsAction _subject;

        Establish context = () =>
            {
                _cachingServiceMock = new Mock<ICachingService>();
                var partRepositoryDummy = new Mock<IPartRepository>();
                var partDisplayAdaptorDummy = new Mock<IPartDisplayAdaptor>();
                _subject = new DisplayPartDetailsAction(_cachingServiceMock.Object, partRepositoryDummy.Object,
                                                        partDisplayAdaptorDummy.Object);
            };

        Because of = () => _subject.Display(PartId);

        It should_retrieve_the_part_information_from_the_cache =
            () => _cachingServiceMock.Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
    }

By using an auto-mocking container, the specification can be written without the need for an explicit Mock field or for instantiating Dummy instances for the IPartRepository and IPartDisplayAdaptor dependencies. The following demonstrates such an example using AutoMock, an auto-mocking container which leverages the Moq framework:

    public class when_displaying_part_details
    {
        const string PartId = "12345";
        static AutoMockContainer _container;
        static DisplayPartDetailsAction _subject;

        Establish context = () =>
            {
                _container = new AutoMockContainer(new MockFactory(MockBehavior.Loose));
                _subject = _container.Create<DisplayPartDetailsAction>();
            };

        Because of = () => _subject.Display(PartId);

        It should_retrieve_the_part_information_from_the_cache =
            () => _container.GetMock<ICachingService>().Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
    }

While this implementation eliminates references to the extraneous dependencies, it imposes a bit of extraneous implementation detail of its own. To further relieve the specification of implementation details associated with the auto-mocking container, a reusable base context can be extracted:

    public abstract class WithSubject<T> where T : class
    {
        protected static AutoMockContainer Container;
        protected static T Subject;

        Establish context = () =>
            {
                Container = new AutoMockContainer(new MockFactory(MockBehavior.Loose));
                Subject = Container.Create<T>();
            };

        protected static Mock<TDouble> For<TDouble>() where TDouble : class
        {
            return Container.GetMock<TDouble>();
        }
    }

By extending the auto-mocking base context, the specification can be written more concisely:

    public class when_displaying_part_details : WithSubject<DisplayPartDetailsAction>
    {
        const string PartId = "12345";

        Because of = () => Subject.Display(PartId);

        It should_retrieve_the_part_information_from_the_cache =
            () => For<ICachingService>().Verify(x => x.RetrievePartDetails(PartId), Times.Exactly(1));
    }

Another advantage gained by the use of auto-mocking containers is decoupling. By inverting the concern of how the System Under Test is constructed, dependencies can be added, modified, or deleted without affecting specifications upon which the dependency has no bearing.

Trade-offs

While auto-mocking containers can make specifications cleaner, easier to write, and more adaptable to change, their use can come at a slight cost. When mocking frameworks and hand-rolled doubles are used directly, there is always at least one point of reference where the requirements for instantiating the System Under Test provide feedback about its design as a whole.

Use of auto-mocking containers allows us to produce contextual slices of how the system works, limiting knowledge of the system’s dependencies to what the context in question requires. From a documentation perspective, this can aid in understanding how the system is used to facilitate a particular feature. From a design perspective, however, their use can eliminate one source of feedback about the evolving design of the system. Without such inversion of control, hints of Single Responsibility Principle violations can be seen within the specifications, evidenced by overly complex constructor initialization. By removing the explicit declaration of the system’s dependencies from the specifications, we also remove this point of feedback.

That said, the benefits of leveraging auto-mocking containers tend to outweigh the cost of removing this point of feedback. Cases of mutually exclusive dependencies are usually in the minority, and each addition or modification to a constructor still provides an equal level of feedback about a class’s potential complexity.

Conclusion

In this article, we looked at the use of auto-mocking containers as a tool for reducing obscurity and coupling within our specifications. Next time, we’ll look at a technique for reducing the obscurity that comes from overly complex assertions.

Effective Tests: Double Strategies

On May 26, 2011, by derekgreer

In our last installment, the topic of Test Doubles was introduced as a mechanism for verifying and/or controlling the interactions of a component with its collaborators. In this article, we’ll consider a few recommendations for using Test Doubles effectively.


Recommendation 1: Clarify Intent

Apart from guiding the software implementation process and guarding the application’s current behavior against regression, executable specifications (i.e. Automated Tests) serve as the system’s documentation. While well-named specifications can serve to describe what the system should do, we should take equal care in clarifying the intent of how the system’s behavior is verified.

When using test doubles, one simple practice that helps to clarify the verification strategies employed by the specification is to use intention-revealing names for test double instances. Consider the following example which uses the Rhino Mocks framework for creating a Test Stub:

	public class when_a_user_views_the_product_detail
	{
		public const string ProductId = "1";
		static ProductDetail _results;
		static DisplayOrderDetailCommand _subject;

		Establish context = () =>
			{
				var productDetailRepositoryStub = MockRepository.GenerateStub<IProductDetailRepository>();
				productDetailRepositoryStub.Stub(x => x.GetProduct(Arg<string>.Is.Anything))
					.Return(new ProductDetail {NumberInStock = 42});

				_subject = new DisplayOrderDetailCommand(productDetailRepositoryStub);
			};

		Because of = () => _results = _subject.QueryProductDetails(ProductId);

		It should_display_the_number_of_items_currently_in_stock = () => _results.NumberInStock.ShouldEqual(42);
	}


In this example, a Test Stub is created for an IProductDetailRepository type which serves as a dependency for the System Under Test (i.e. the DisplayOrderDetailCommand type). By choosing to explicitly name the Test Double instance with a suffix of “Stub”, this specification communicates that the double serves only to provide indirect input to the System Under Test.


Note to Rhino Mocks and Machine.Specifications Users

For Rhino Mocks users, there are some additional hints in this example which help to indicate that the test double is intended to serve as a Test Stub: the use of Rhino Mocks’ GenerateStub() method, the lack of “Record/Replay” artifacts from either the old or new mocking APIs, and the absence of assertions on the generated test double. Additionally, those familiar with the Machine.Specifications framework (a.k.a. MSpec) would have an expectation of explicit and discrete observations if this double were being used as a Mock or Test Spy. Nevertheless, we should strive to make the chosen verification strategy as discoverable as possible and not rely upon framework familiarity alone.


While this test also indicates that the test double is being used as a Stub through its use of the Rhino Mocks framework’s GenerateStub() method, Rhino Mocks doesn’t provide intention-revealing method names for each type of test double, and some mocking frameworks don’t distinguish between the creation of mocks and stubs at all. Using intention-revealing names is a consistent practice that can be adopted regardless of the framework being used.


Recommendation 2: Only Substitute Your Types

Applications often make use of third-party libraries and frameworks. When designing an application which leverages such libraries, there’s often a temptation to substitute types within the framework. Rather than providing a test double for these dependencies, create an abstraction representing the required behavior and provide a test double for the abstraction instead.

There are several issues with providing test doubles for third-party components:

First, it precludes any ability to adapt to feedback received from the specification. Since we don’t control the components contained within a third-party library, coupling our design to these components limits our ability to guide our designs based upon our interaction with the system through our specifications.

Second, we don’t control when or how the API of third-party libraries may change in future releases. We can exercise some control over when we choose to upgrade to a newer release of a library, but aside from the benefits of keeping external dependencies up to date, there are often external motivating factors outside of our control. By remaining loosely-coupled from such dependencies, we minimize the amount of work it takes to migrate to new versions.

Third, we don’t always have a full understanding of the behavior of third-party libraries. Using test doubles for dependencies presumes that the doubles are going to mimic the behavior of the type they are substituting correctly (at least within the context of the specification). Substituting behavior which you don’t fully understand or control may lead to unreliable specifications. Once a specification passes, there should be no reason for it to ever fail unless you change the behavior of your own code. This can’t be guaranteed when substituting third-party libraries.

While we shouldn’t substitute types from third-party libraries, we should verify that our systems work properly when using third-party libraries. This is achieved through integration and/or acceptance tests. With integration tests, we verify that our systems display the expected behavior when integrated with external systems. If our systems have been properly decoupled from the use of third-party libraries, only the Adaptors need to be tested. A system which has taken measures to remain decoupled from third-party libraries should have far fewer integration tests than those that test the native behavior of the application. With acceptance tests, we verify the behavior of the entire application from end to end which would exercise the system along with its external dependencies.
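
As a sketch of this approach (in Java, matching the examples elsewhere in this series, with hypothetical names throughout; none of these types come from a real library), the application owns a small role-based abstraction along with a thin Adaptor over the vendor type. Specifications substitute the abstraction, while the Adaptor itself is exercised by integration tests:

// A role-based abstraction owned by the application; specifications
// substitute this interface, never the vendor type.
public interface GeocodingService {
	Coordinates coordinatesFor(String address);
}

// A value type owned by the application.
public class Coordinates {
	public final double latitude;
	public final double longitude;

	public Coordinates(double latitude, double longitude) {
		this.latitude = latitude;
		this.longitude = longitude;
	}
}

// Stand-in for the hypothetical third-party type (not a real library).
public class VendorGeocoder {
	public double[] geocode(String address) {
		// vendor implementation details ...
		return new double[] { 0.0, 0.0 };
	}
}

// A thin Adaptor wrapping the vendor class; its behavior is verified
// by integration tests rather than substituted with test doubles.
public class VendorGeocodingAdaptor implements GeocodingService {

	private VendorGeocoder vendorGeocoder;

	public VendorGeocodingAdaptor(VendorGeocoder vendorGeocoder) {
		this.vendorGeocoder = vendorGeocoder;
	}

	public Coordinates coordinatesFor(String address) {
		double[] result = vendorGeocoder.geocode(address); // assumed vendor API
		return new Coordinates(result[0], result[1]);
	}
}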


Recommendation 3: Don’t Substitute Concrete Types

The following principle is set forth in the book Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, et al:

Program to an interface, not an implementation.

All objects possess a public interface and in this sense all object-oriented systems are collaborations of objects interacting through interfaces. What is meant by this principle, however, is that objects should only depend upon the interface of another object, not the implementation of that object. By taking dependencies upon concrete types, objects are implicitly bound by the implementation details of that object. Subtypes can be substituted, but subtypes are inextricably coupled to their base types.

Set forth in the book Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin, a related principle referred to as the Interface Segregation Principle states:

Clients should not be forced to depend on methods that they do not use.

The Interface Segregation Principle is set forth to address several issues that arise from non-cohesive or “fat” interfaces, but the issue most pertinent to our discussion is the problem of associative coupling. When a component takes a dependency upon a concrete type, it forms an associative coupling with all other clients of that dependency. As new requirements drive changes to the internals of the dependency for one client, all other clients coupled directly to the same dependency may be affected regardless of whether they depend upon the same sets of behavior or not. This problem can be mitigated by defining dependencies upon Role-based Interfaces. In this way, objects declare their dependencies in terms of behavior, not specific implementations of behavior.
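
As a brief sketch of this idea (again in Java, with hypothetical names), a single concrete repository might serve two kinds of clients. Role-based interfaces let each client, and each of its specifications, declare only the behavior it depends upon:

// Each role-based interface names one behavior a client may depend upon.
public interface OrderReader {
	Order findById(int orderId);
}

public interface OrderWriter {
	void save(Order order);
}

public class Order {
	private int orderId;

	public Order(int orderId) {
		this.orderId = orderId;
	}
}

// One concrete type may play both roles, but clients take dependencies
// upon the role they use rather than upon the concrete repository.
public class OrderRepository implements OrderReader, OrderWriter {

	public Order findById(int orderId) {
		// data access details ...
		return new Order(orderId);
	}

	public void save(Order order) {
		// data access details ...
	}
}

A component which only displays orders then depends solely upon OrderReader; changes driven by clients of OrderWriter have no bearing upon it or its specifications.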

So, one might ask, “What does this have to do with Test Doubles?” There is nothing particularly problematic about replacing concrete types from an implementation perspective. There’s certainly the issue in some languages of needing to take measures to ensure virtual dispatching can take place, thereby allowing the behavior of a concrete type to be overridden, but where this actually becomes relevant to our discussion is in what our specifications are trying to tell us about our design. When you find yourself creating test doubles for concrete types, it’s as if your specifications are crying out: “Hey dummy, you have some coupling here!” By listening to the feedback provided by our specifications, we can begin to spot code smells which may point to problems in our implementation.


Recommendation 4: Focus on Behavior

When writing specifications, it can be easy to fall into the trap of over-specifying the components of the system. This occurs when we write specifications that not only verify the expected behavior of a system, but which also verify that the behavior is achieved using a specific implementation.

Writing component-level specifications will always require some level of coupling to the component’s implementation details. Nevertheless, we should strive to minimize the coupling to those interactions which are required to verify the system requirements. If the System Under Test takes 10 steps to achieve the desired outcome, but the outcome of step 10 by itself is sufficient to verify that the desired behavior occurred, our specifications shouldn’t care about steps 1 through 9. If someone figures out a way to achieve the same outcome with only 3 steps, the specifications of the system shouldn’t need to change.

This leads us to a recommendation from the book xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros:

Use the Front Door First

By “front door”, the author means that we should strive to verify behavior using the public interface of our components when possible, and use interaction-based verification when necessary. For example, when the behavior of a component can be verified by checking return values from operations performed by the object or by checking the interactions which occurred with its dependencies, we should prefer checking the return values over checking its interactions. At times, verifying the behavior of an object requires that we examine how the object interacted with its collaborators. When this is necessary, we should strive to remain as loosely coupled as possible by only specifying the minimal interactions required to verify the expected behavior.
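
The following sketch illustrates the front door preference (hypothetical types, written in the same ContextSpecification style as the Java examples elsewhere in this series). Because the outcome is observable through the return value, the specification needs neither a test double nor interaction-based verification:

// Hypothetical System Under Test: applies a volume discount to a subtotal.
public class DiscountCalculator {

	public BigDecimal totalFor(BigDecimal subtotal, int quantity) {
		BigDecimal volumeDiscount = quantity >= 10
				? new BigDecimal("10.00")
				: new BigDecimal("0.00");
		return subtotal.subtract(volumeDiscount);
	}
}

public class DiscountCalculatorSpecifications {
	public static class when_ordering_ten_or_more_items extends ContextSpecification {

		static Reference<BigDecimal> total = new Reference<BigDecimal>(BigDecimal.ZERO);
		static DiscountCalculator calculator;

		Establish context = new Establish() {
			protected void execute() {
				calculator = new DiscountCalculator();
			}
		};

		Because of = new Because() {
			protected void execute() {
				total.setValue(calculator.totalFor(new BigDecimal("100.00"), 10));
			}
		};

		// Front door: the returned total is sufficient to verify the behavior,
		// so no interactions with collaborators need to be specified.
		It should_apply_the_volume_discount =
				assertThat(total).isEqualTo(new BigDecimal("90.00"));
	}
}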


Conclusion

In this article, we discussed a few strategies for using Test Doubles effectively. Next time we’ll take a look at a technique for creating Test Doubles which aids in both reducing coupling and obscurity … but at a cost.


Effective Tests: Test Doubles

On May 16, 2011, by derekgreer

In our last installment, we concluded our Test-First example which demonstrated the Test Driven Development process through the creation of a Tic-tac-toe component. When writing automated tests using either a Test-First or classic unit testing approach, it often becomes necessary to verify and/or exercise control over the interactions of a component with its collaborators. In this article, I’ll introduce a family of strategies for addressing these needs, known collectively as Test Doubles. The examples within this article will be presented using the Java programming language.

Doubles

The term “Test Double” was popularized by Gerard Meszaros in his book xUnit Test Patterns. Similar to the role of the “stunt double” in the movie industry in which a leading actor or actress is replaced in scenes requiring a more specialized level of training and/or physical ability, Test Doubles likewise play a substitute role in the orchestration of a System Under Test.

Test Doubles serve two primary roles within automated tests. First, they facilitate the ability to isolate portions of behavior being designed and/or tested from undesired influences of collaborating components. Second, they facilitate the ability to verify the collaboration of one component with another.

Isolating Behavior

There are two primary motivations for isolating the behavior being designed from influences of dependencies: Control and Feedback.

It is often necessary to exercise control over the behavior provided by dependencies of a System Under Test in order to effect a deterministic outcome or eliminate unwanted side-effects. When a real dependency can’t be adequately manipulated for these purposes, test doubles can provide control over how a dependency responds to consuming components.

A second motivation for isolating behavior is to aid in identifying the source of regressions within a system. By isolating a component completely from the behavior of its dependencies, the source of a failing test can more readily be identified when a regression of behavior is introduced.

Identifying Regression Sources

While test isolation aids in identifying the source of regressions, Extreme Programming (XP) offers an alternative process.

As discussed briefly in the series introduction, XP categorizes tests as Programmer Tests and Customer Tests rather than as Unit, Integration, or Acceptance Tests. One characteristic of Programmer Tests which differs from classic unit testing is the lack of emphasis on test isolation. Programmer Tests are often written in the form of Component Tests which exercise subsystems within an application rather than designing/testing the individual units comprising the overall system. One issue this presents is a decreased ability to identify the source of a newly introduced regression from a failing test, since the regression may have occurred in any one of the components exercised during the test. Another consequence of this approach is a potential increase in the number of tests which may fail due to a single regression. Since a single class may be used by multiple subsystems, a regression in the behavior of a single class can potentially break the tests for every component which consumes that class.

The strategy used for identifying sources of regressions within a system when writing Programmer Tests is to rely upon knowledge of the last change made within the system. This becomes a non-issue when using emergent design strategies like Test-Driven Development since the addition or modification of behavior within a system tends to happen in very small steps. The XP practice of Pair-Programming also helps to mitigate such issues due to an increase in the number of participants during the design process. Practices such as Continuous Integration and associated check-in guidelines (e.g. The Check-in Dance) also help to mitigate issues with identifying sources of regression. The topic of Programmer Tests will be discussed as a separate topic later in the series.


Verifying Collaboration

To maximize maintainability, we should strive to keep our tests as decoupled from implementation details as possible. Unfortunately, the behavior of a component being designed can’t always be verified through the component’s public interface alone. In such cases, test doubles aid in verifying the indirect outputs of a System Under Test. By replacing a real dependency with one of several test double strategies, the interactions of a component with the double can be verified by the test.


Test Double Types

While a number of variations on test double patterns exist, the following presents the five primary types of test doubles: Stubs, Fakes, Dummies, Spies and Mocks.

Stubs

When writing specifications, the System Under Test often collaborates with dependencies which need to be supplied as part of the setup or interaction stages of a specification. In some cases, the verification of a component’s behavior depends upon providing specific indirect inputs which can’t be controlled by using real dependencies. Test doubles which serve as substitutes for controlling the indirect input to a System Under Test are known as Test Stubs.

The following example illustrates the use of a Test Stub within the context of a package shipment rate calculator specification. In this example, a feature is specified for a shipment application to allow customers to inquire about rates based upon a set of shipment details (e.g. weight, contents, desired delivery time, etc.) and a base rate structure (flat rate, delivery time-based rate, etc.).

In the following listing, a RateCalculator has a dependency upon an abstract BaseRateStructure implementation which is used to calculate the actual rate:

public class RateCalculator {

	private BaseRateStructure baseRateStructure;
	private ShipmentDetails shipmentDetails;

	public RateCalculator(BaseRateStructure baseRateStructure, ShipmentDetails shipmentDetails) {
		this.baseRateStructure = baseRateStructure;
		this.shipmentDetails = shipmentDetails;
	}

	public BigDecimal calculateRateFor(ShipmentDetails shipmentDetails) {
		BigDecimal rate = baseRateStructure.calculateRateFor(shipmentDetails);

		// other processing ...
		
		return rate;
	}
}

The following shows the BaseRateStructure contract which defines a method that accepts shipment details and returns a rate:

public abstract class BaseRateStructure {
	public abstract BigDecimal calculateRateFor(ShipmentDetails shipmentDetails);
}

To ensure a deterministic outcome, the specification used to drive the feature’s development can substitute a BaseRateStructureStub which will always return the configured value:

public class RateCalculatorSpecifications {

	public static class when_calculating_a_shipment_rate extends ContextSpecification {

		static Reference<BigDecimal> rate = new Reference<BigDecimal>(BigDecimal.ZERO);
		static ShipmentDetails shipmentDetails;
		static RateCalculator calculator;

		Establish context = new Establish() {
			protected void execute() {
				shipmentDetails = new ShipmentDetails();
				BaseRateStructure baseRateStructureStub = new BaseRateStructureStub(10.0);
				calculator = new RateCalculator(baseRateStructureStub, shipmentDetails);
			}
		};

		Because of = new Because() {
			protected void execute() {
				rate.setValue(calculator.calculateRateFor(shipmentDetails));
			}
		};

		It should_return_the_expected_rate = assertThat(rate).isEqualTo(new BigDecimal(10.0));
	}
}

For this specification, the BaseRateStructureStub merely accepts a value as a constructor parameter and returns that value when the calculateRateFor() method is called:

public class BaseRateStructureStub extends BaseRateStructure {

	BigDecimal value;

	public BaseRateStructureStub(double value) {
		this.value = new BigDecimal(value);
	}

	public BigDecimal calculateRateFor(ShipmentDetails shipmentDetails) {
		return value;
	}
}


Fakes

While it isn’t always necessary to control the indirect inputs of collaborating dependencies to ensure a deterministic outcome, some real components may have other undesired side-effects which make their use prohibitive. For example, components which rely upon an external data store for persistence concerns can significantly impact the speed of a test suite, which tends to discourage frequent regression testing during development. In cases such as these, a lighter-weight version of the real dependency can be substituted which provides the behavior needed by the specification without the undesired side-effects. Test doubles which provide a simplified implementation of a real dependency for these purposes are referred to as Fakes.

In the following example, a feature is specified for an application serving as a third-party distributor for the sale of tickets to a local community arts theatre to display the itemized commission amount on the receipt. The theatre provides a Web service which handles the payment processing and ticket distribution process, but does not provide a test environment for vendors to use for integration testing purposes. To test the third-party application’s behavior without incurring the side-effects of using the real Web service, a Fake service can be substituted in its place.

Consider that the theatre’s service interface is as follows:

public abstract class TheatreService {

	public abstract TheatreReceipt processOrder(TicketOrder ticketOrder);

	public abstract CancellationReceipt cancelOrder(int orderId);

	// other methods ...
}

To provide the expected behavior without the undesired side-effects, a fake version of the service can be implemented:

public class TheatreServiceFake extends TheatreService {

	// private field declarations used in light implementation ...
	
	public TheatreReceipt processOrder(TicketOrder ticketOrder) {

		// light implementation details ...

		TheatreReceipt receipt = createReceipt();
		return receipt;
	}

	public CancellationReceipt cancelOrder(int orderId) {

		// light implementation details ...

		CancellationReceipt receipt = createCancellationReceipt();
		return receipt;
	}

	// private methods …

}

The fake service may then be supplied to a PaymentProcessor class within the setup phase of the specification:

public class PaymentProcessorSpecifications {
	public static class when_processing_a_ticket_sale extends ContextSpecification {

		static Reference<BigDecimal> commission = new Reference<BigDecimal>(BigDecimal.ZERO);
		static PaymentProcessor processor;

		Establish context = new Establish() {
			protected void execute() {
				processor = new PaymentProcessor(new TheatreServiceFake());			
			}
		};

		Because of = new Because() {
			protected void execute() {
				commission.setValue(processor.processOrder(new Order(1)).getCommission());
			}
		};

		It should_return_a_receipt_with_itemized_commission =
				assertThat(commission).isEqualTo(new BigDecimal(1.00));
	}
}


Dummies

There are times when a dependency is required in order to instantiate the System Under Test but isn’t required for the behavior being designed. If use of the real dependency is prohibitive in such a case, a Test Double with no behavior can be used. Test Doubles which serve only to provide mandatory instances of dependencies are referred to as Test Dummies.

The following example illustrates the use of a Test Dummy within the context of a specification for a ShipmentManifest class. The specification concerns verification of the class’ behavior when adding new packages, but no message exchange is conducted between the manifest and the package during execution of the addPackage() method.

public class ShipmentManifestSpecifications {
	public static class when_adding_packages_to_the_shipment_manifest extends ContextSpecification {

		private static ShipmentManifest manifest;

		Establish context = new Establish() {
			protected void execute() {
				manifest = new ShipmentManifest();
			}
		};

		Because of = new Because() {
			protected void execute() {
				manifest.addPackage(new DummyPackage());
			}
		};

		It should_update_the_total_package_count = new It() {
			protected void execute() {
				assert manifest.getPackageCount() == 1;
			}
		};	   
	}
}
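
The DummyPackage supplied above needs no behavior of its own. Assuming a Package contract in the abstract-class style used elsewhere in this article (the getWeight() operation is hypothetical), a minimal implementation might look like the following:

// Hypothetical contract for packages, following the abstract-class style
// of the other contracts in this article.
public abstract class Package {
	public abstract BigDecimal getWeight();
}

// A Test Dummy: satisfies the Package contract so the System Under Test
// can be instantiated and exercised, but is never expected to receive
// any messages.
public class DummyPackage extends Package {

	public BigDecimal getWeight() {
		// Failing loudly guards against the dummy being used unexpectedly.
		throw new UnsupportedOperationException("DummyPackage should never be invoked");
	}
}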


Test Spies

In some cases, a feature requires collaborative behavior between the System Under Test and its dependencies which can’t be verified through its public interface. One approach to verifying such behavior is to substitute the associated dependency with a test double which stores information about the messages received from the System Under Test. Test doubles which record information about the indirect outputs from the System Under Test for later verification by the specification are referred to as Test Spies.

In the following example, a feature is specified for an online car sales application to keep an audit trail of all car searches. This information will be used later to help inform purchases made at auction sales based upon which makes, models and price ranges are the most highly sought in the area.

The following listing contains the specification which installs the Test Spy during the context setup phase and examines the state of the Test Spy in the observation stage:

public class SearchServiceSpecifications {
	public static class when_a_customer_searches_for_an_automobile extends ContextSpecification {

		static AuditServiceSpy auditServiceSpy;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceSpy = new AuditServiceSpy();
				searchService = new SearchService(auditServiceSpy);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_report_the_search_to_the_audit_service = new It() {
			protected void execute() {
				assert auditServiceSpy.wasSearchCalledOnce() : "Expected true, but was false.";
			}
		};
	}
}

For this specification, the Test Spy is implemented to simply increment a private field each time the recordSearch() method is called, allowing the specification to then call the wasSearchCalledOnce() method in an observation to verify the expected behavior:

public class AuditServiceSpy extends AuditService {
	private int calls;

	public boolean wasSearchCalledOnce() {
		return calls == 1;
	}

	public void recordSearch(Search criteria) {
		calls++;
	}
}


Mocks

Another technique for verifying the interaction of a System Under Test with its dependencies is to create a test double which encapsulates the desired verification within the test double itself. Test Doubles which validate the interaction between a System Under Test and the test double are referred to as Mocks.

Mock validation falls into two categories: Mock Stories and Mock Observations.

Mock Stories

Mock Stories are a scripted set of expected interactions between the Mock and the System Under Test. Using this strategy, the exact set of interactions are accounted for within the Mock object. Upon executing the specification, any deviation from the script results in an exception.

Mock Observations

Mock Observations are discrete verifications of individual interactions between the Mock and the System Under Test. Using this strategy, the interactions pertinent to the specification context are verified during the observation stage of the specification.

Mock Observations and Test Spies

The use of Mock Observations in practice looks very similar to the use of Test Spies. The distinction between the two is whether a method is called on the Mock to assert that a particular interaction occurred or whether state is retrieved from the Test Spy to assert that a particular interaction occurred.

To illustrate the concept of Mock objects, the following shows the previous example implemented using a Mock Observation instead of a Test Spy.

In the following listing, a second specification is added to the previous SearchServiceSpecifications class which replaces the use of the Test Spy with a Mock:

public class SearchServiceSpecifications {
	...   

	public static class when_a_customer_searches_for_an_automobile_2 extends ContextSpecification {
		static AuditServiceMock auditServiceMock;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceMock = new AuditServiceMock();
				searchService = new SearchService(auditServiceMock);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_report_the_search_to_the_audit_service = new It() {
			protected void execute() {
				auditServiceMock.verifySearchWasCalledOnce();
			}
		};
	}
}

The Mock implementation is similar to the Test Spy, but encapsulates the assert call within the verifySearchWasCalledOnce() method rather than returning the recorded state for the specification to assert:

public class AuditServiceMock extends AuditService {
	private int calls;

	public void verifySearchWasCalledOnce() {
		assert calls == 1;
	}

	public void recordSearch(Search criteria) {
		calls++;
	}
}

While both the Mock Observation and Mock Story approaches can be implemented using custom Mock classes, it is generally easier to leverage a Mocking Framework.

Mocking Frameworks

A Mocking Framework is a testing library written to facilitate the creation of Test Doubles with programmable expectations. Rather than writing a custom Mock object for each unique testing scenario, mocking frameworks allow the developer to specify the expected interactions within the context setup phase of the specification.

To illustrate the use of a Mocking Framework, the following listing presents the previous example implemented using Mockito, a mocking framework for Java, rather than a custom Mock object:

public class SearchServiceSpecifications {
	... 

	public static class when_a_customer_searches_for_an_automobile_3 extends ContextSpecification {
		static AuditService auditServiceMock;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceMock = mock(AuditService.class);
				searchService = new SearchService(auditServiceMock);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_report_the_search_to_the_audit_service = new It() {
			protected void execute() {
				verify(auditServiceMock).recordSearch(any(Search.class));
			}
		};
	}
}

In this example, the observation stage of the specification uses Mockito’s static verify() method to assert that the recordSearch() method was called with any instance of the Search class.
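
The verification above is a Mock Observation: a single interaction is verified discretely. A Mock Story, by contrast, scripts the complete set of expected interactions. Mockito is observation-oriented, but a story-like verification can be approximated by combining its ordered verification with a check that no unscripted interactions occurred, as sketched below (reusing the types from this example and assuming a static import of Mockito’s methods along with org.mockito.InOrder):

public class SearchServiceSpecifications {
	...

	public static class when_a_customer_searches_for_an_automobile_4 extends ContextSpecification {
		static AuditService auditServiceMock;
		static SearchService searchService;

		Establish context = new Establish() {
			protected void execute() {
				auditServiceMock = mock(AuditService.class);
				searchService = new SearchService(auditServiceMock);
			}
		};

		Because of = new Because() {
			protected void execute() {
				searchService.search(new MakeSearch("Ford"));
			}
		};

		It should_perform_only_the_scripted_interactions = new It() {
			protected void execute() {
				// The interactions must occur in the scripted order ...
				InOrder story = inOrder(auditServiceMock);
				story.verify(auditServiceMock).recordSearch(any(Search.class));
				// ... and any interaction outside the script fails the specification.
				verifyNoMoreInteractions(auditServiceMock);
			}
		};
	}
}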

In many circumstances, messages are exchanged between a System Under Test and its dependencies. For this reason, Mock objects often need to return stub values when called by the System Under Test. As a consequence, most mocking frameworks can also be used to create Test Doubles whose role is only to serve as a Test Stub. Mocking frameworks which facilitate Mock Observations can also be used to easily create Test Dummies.
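
For example, the hand-rolled BaseRateStructureStub shown earlier in this article could instead be generated by Mockito within the context setup phase (this sketch assumes static imports of Mockito’s mock(), when(), and any() methods):

		Establish context = new Establish() {
			protected void execute() {
				shipmentDetails = new ShipmentDetails();
				// A framework-generated Test Stub: it supplies an indirect
				// input, and no interactions are ever verified against it.
				BaseRateStructure baseRateStructureStub = mock(BaseRateStructure.class);
				when(baseRateStructureStub.calculateRateFor(any(ShipmentDetails.class)))
						.thenReturn(new BigDecimal(10.0));
				calculator = new RateCalculator(baseRateStructureStub, shipmentDetails);
			}
		};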

Conclusion

In this article, the five primary types of Test Doubles were presented: Stubs, Fakes, Dummies, Spies, and Mocks. Next time, we’ll discuss strategies for using Test Doubles effectively.
