I’ve always had an interest in application build processes. From the start of my career, I’ve generally been in the position of establishing the solution architecture for the projects I’ve participated in and this has usually involved establishing a baseline build process.

My career began as a Unix C developer while still in college, where much of my responsibilities required writing tools in both C and various Unix shell scripting languages which were deployed to other workstations throughout the country. From there, I moved on to Unix C-CGI Web development and worked a number of years with Makefiles. With the advent of Java, I began using tools like Ant and Maven for several more years before switching to the .Net platform, where I used open source build tools like NAnt until Microsoft introduced MSBuild with its 2.0 release. Upon moving to the Austin, TX area, I was greatly influenced by what was then the early seat of the Alt.Net movement. It was there that I abandoned what in hindsight has always been a ridiculous idea … trying to script a build using XML. For the next 4-5 years, I used Rake to define all of my builds. Starting last year, I began using Gulp and associated tooling on the Node platform for authoring .Net builds.

Throughout this journey of working with various build technologies, I’ve formed a few opinions along the way. One of these opinions is that the Build process shouldn’t be coupled to the Continuous Integration process.

A project should have a build process which exists and can be executed independent of the particular continuous integration tool one chooses. This allows builds to be created and maintained on the developer’s local machine. The particular build steps involved in building a given application are inherently part of its ontology. What compilers and preprocessors need to be used, how dependencies are obtained and published, when and how configuration values are supplied for different environments, how and where automated test suites are run, how the application distribution is created … all of these are concerns whose definition and orchestration are particular to a given project. Such concerns should be encapsulated in a build script which lives with the rest of the application source, not as discrete build steps defined within your CI tool.

Ideally, builds should never break, but when they do it’s important to resolve the issue as quickly as possible. Not being able to run a build locally means potentially having to repeatedly introduce changes until the build is fixed. This tends to pollute the source code commit history with comments like: “Fixing the build”, “Fixing the build for realz this time”, and “Please let this be it … I’m ready to go home”. Of course, there are times when a build can break because of environmental issues that may not be mirrored locally (e.g. lack of disk space, network related issues, 3rd-party software dependencies, etc.), but encapsulating as much of your build as possible goes a long way to keeping builds running along smoothly. Anyone on your team should be able to clone/check-out the project, issue a single command from the command line (e.g. gulp, rake, psake, etc.) and watch the full build process execute including any pre-processing steps, compilation, distribution packaging and even deployment to a target environment.

Aside from being able to run a build locally, decoupling the build from the CI process allows the technologies used by each to vary independently. Switching from one CI tool to another should ideally just require installing the software, pointing it to your source control, defining the single step to issue the build, and defining the triggers that initiate the process.

The creation of a project distribution and the scheduling mechanism for how often this happens are separate concerns. Just because a CI tool allows you to script out your build steps doesn’t mean you should.

 

Survey of Entity Framework Unit of Work Patterns

On November 1, 2015, in Uncategorized, by derekgreer

Earlier this year I joined a development team which chose Entity Framework for the persistence needs of a new greenfield project. While I’ve worked on a few projects which used Entity Framework here and there over the years, the bulk of my experience has been with NHibernate and, more recently, Dapper.Net. As a result, there hasn’t been all that much occasion for me to explore it in any level of depth until this year.

One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of various approaches I’ve encountered and include some thoughts about each approach.

Unit of Work

To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.

A Unit of Work can consist of different types of operations such as Web Service calls, database operations, or even in-memory operations; however, the focus of this article will be on Entity Framework.

With that out of the way, let’s take a look at various approaches to facilitating the Unit of Work pattern with Entity Framework.

Implicit Transactions

The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.

Here’s an example:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  using (var context = new MyStoreContext())
  {
    customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    context.Customers.Add(customer);
    context.SaveChanges();
    return customer;
  }
}

The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks to this approach, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.

If you prefer to work directly with Entity Framework then this approach may be fine for simple needs.

TransactionScope

Another approach is to use the System.Transactions.TransactionScope class provided by the .Net framework. When any Entity Framework operation which causes a connection to be opened is used (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class, and the transaction is committed once the TransactionScope is successfully completed. Here’s an example of this approach:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  using (var transaction = new TransactionScope())
  {
    using (var context = new MyStoreContext())
    {
      customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
      context.Customers.Add(customer);
      context.SaveChanges();
      transaction.Complete();
    }

    return customer;
  }
}

In general, I find using TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.Net, all versions of Entity Framework, and other ORMs, which provides the ability to use multiple libraries within the same transaction if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.

Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.
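
For illustration, here’s a minimal sketch (reusing the hypothetical MyStoreContext and Customer types from the earlier examples) of a pattern that commonly triggers escalation: enlisting more than one database connection within the same scope. Whether escalation actually occurs depends on your SQL Server version and connection settings, so treat this as an illustration rather than a rule:

// Two DbContext instances, and therefore two database connections, participate in the
// same ambient transaction. On older versions of SQL Server (and in some configurations
// on newer ones), enlisting the second connection promotes the lightweight transaction
// to a distributed (MSDTC) transaction.
using (var scope = new TransactionScope())
{
  using (var storeContext = new MyStoreContext())
  using (var auditContext = new MyStoreContext())
  {
    storeContext.Customers.Add(new Customer { FirstName = "Jane", LastName = "Doe" });
    storeContext.SaveChanges(); // first connection enlists in the ambient transaction

    auditContext.Customers.Add(new Customer { FirstName = "John", LastName = "Doe" });
    auditContext.SaveChanges(); // second connection enlists; escalation may occur here
  }

  scope.Complete();
}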

While I find using the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code. While it’s a viable choice, I would recommend inverting the concerns of managing the Unit of Work boundary as shown in approaches we’ll look at later.

ADO.Net Transactions

This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
  using (var connection = new SqlConnection(connectionString))
  {
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
      using (var context = new MyStoreContext(connection))
      {
        context.Database.UseTransaction(transaction);
        try
        {
          customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
          context.Customers.Add(customer);
          context.SaveChanges();
        }
        catch (Exception e)
        {
          transaction.Rollback();
          throw;
        }
      }

      transaction.Commit();
      return customer;
    }
  }
}

As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. While not something I’d recommend standardizing upon, this approach provides another avenue for sharing transactions between Entity Framework and straight ADO.Net code which might prove useful in certain situations. In general, I wouldn’t recommend such an approach.

Entity Framework Transactions

The relative newcomer to the mix is the new transaction API introduced with Entity Framework 6. Here’s a basic example of its use:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  using (var context = new MyStoreContext())
  {
    using (var transaction = context.Database.BeginTransaction())
    {
      try
      {
        customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        context.Customers.Add(customer);
        context.SaveChanges();
        transaction.Commit();
      }
      catch (Exception e)
      {
        transaction.Rollback();
        throw;
      }
    }
  }

  return customer;
}

This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operation participation. Similar to a couple of the previous approaches we’ve already considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling this with other approaches we’ll look at later to improve the readability and maintainability of your application services.

Unit of Work Repository Manager

The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth by Microsoft’s guidance on the topic here. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:

public interface IUnitOfWork
{
  ICustomerRepository CustomerRepository { get; }
  IOrderRepository OrderRepository { get; }
  void Save();
}

public class UnitOfWork : IDisposable, IUnitOfWork
{
  readonly MyContext _context = new MyContext();
  ICustomerRepository _customerRepository;
  IOrderRepository _orderRepository;

  public ICustomerRepository CustomerRepository
  {
    get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
  }

  public IOrderRepository OrderRepository
  {
    get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
  }

  public void Dispose()
  {
    if (_context != null)
    {
      _context.Dispose();
    }
  }

  public void Save()
  {
    _context.SaveChanges();
  }
}

public class CustomerService : ICustomerService
{
  readonly IUnitOfWork _unitOfWork;

  public CustomerService(IUnitOfWork unitOfWork)
  {
    _unitOfWork = unitOfWork;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _unitOfWork.CustomerRepository.Add(customer);
    _unitOfWork.Save();
  }
}
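
For reference, a repository used by the UnitOfWork above might look something like the following (a minimal sketch; the interface members and the Customers DbSet on MyContext are assumptions rather than part of Microsoft’s example):

public interface ICustomerRepository
{
  void Add(Customer customer);
  Customer Get(int id);
}

public class CustomerRepository : ICustomerRepository
{
  readonly MyContext _context;

  public CustomerRepository(MyContext context)
  {
    _context = context;
  }

  public void Add(Customer customer)
  {
    // Changes are only persisted when UnitOfWork.Save() is called.
    _context.Customers.Add(customer);
  }

  public Customer Get(int id)
  {
    return _context.Customers.Find(id);
  }
}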

It isn’t hard to imagine how this approach was conceived given it closely mirrors the typical implementation of the DbContext instance you find in Entity Framework guidance where public instances of DbSet are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.

First, this approach leads to opaque dependencies. Due to the fact that classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).

Second, this violates the Open/Closed Principle. To add new aggregate roots to the system requires modifying the UnitOfWork each time.

Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or rollback a set of operations atomically. The instantiation and management of repositories or any other component which may wish to enlist in a unit of work is a separate concern.

Lastly, this results in a nominal abstraction which is semantically coupled to Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), this approach is more of a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction were you to have started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following The Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.

Injected Unit of Work and Repositories

For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.

Here is an example:

public class CustomerService : ICustomerService
{
  readonly IUnitOfWork _unitOfWork;
  readonly ICustomerRepository _customerRepository;

  public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
  {
    _unitOfWork = unitOfWork;
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _customerRepository.Add(customer);
    _unitOfWork.Save();
  }
}
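
For clarity, the IUnitOfWork abstraction being injected here can be as thin as the following (a sketch; the type names are assumptions). The important detail is that the UnitOfWork and the repositories must be handed the same DbContext instance, typically by registering the context with a per-request or per-lifetime-scope lifetime in your DI container:

public interface IUnitOfWork
{
  void Save();
}

public class EntityFrameworkUnitOfWork : IUnitOfWork
{
  readonly MyStoreContext _context;

  // The same MyStoreContext instance must also be injected into the repositories
  // for their changes to be committed by this Save() call.
  public EntityFrameworkUnitOfWork(MyStoreContext context)
  {
    _context = context;
  }

  public void Save()
  {
    _context.SaveChanges();
  }
}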

While this approach improves upon the opaque design of the Repository Manager, there are several issues I find with this approach as well.

Similar to the first example, this UnitOfWork implementation is still semantically coupled to how Entity Framework is urging you to think about things. Entity Framework wants you to call SaveChanges() whenever you’re ready to flush any INSERT, UPDATE, or DELETE operations you’ve issued against the database and this abstraction basically surfaces this behavior. If you were to use an alternate framework that supported a different flushing model (e.g. NHibernate), you likely wouldn’t end up with the same abstraction.

Moreover, this approach has no definitive Unit of Work boundary. With this approach, you aren’t defining a logical Unit of Work, but are merely injecting a UnitOfWork you can participate within. When you invoke the underlying DbContext.SaveChanges() method, it isn’t explicit what work will be committed.

While this approach corrects a few design issues I find with the Repository Manager, overall I like this approach even less. At least with the Repository Manager approach you have a defined Unit of Work boundary which is kind of the whole point. My recommendation would be to avoid this approach as well.

Repository SaveChanges Method

The next strategy is basically a variation on the previous one. Rather than injecting a separate type whose sole purpose is to provide an indirect way to call the SaveChanges() method, some merely expose this through the Repository:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;

  public CustomerService(ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _customerRepository.Add(customer);
    _customerRepository.SaveChanges();
  }
}

This approach shares many of the same issues with the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to be a virtual collection of all the entities within your system of a given type, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.

Unit of Work Per Request

A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever type is used to facilitate the Unit of Work is configured with a DI container using a per-HttpRequest lifetime scope; the Unit of Work boundary is opened when the first component takes a dependency on the UnitOfWork and is committed or rolled back when the HttpRequest lifetime scope is disposed by the container.

There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:

builder.RegisterType<MyStoreContext>() // placeholder for the type facilitating the Unit of Work
        .As<DbContext>()
        .InstancePerRequest()
        .OnActivating(x =>
        {
          // start a transaction
        })
        .OnRelease(context =>
        {
          try
          {
            // commit or rollback the transaction
          }
          catch (Exception e)
          {
            // log the exception
            throw;
          }
        });

public class SomeService : ISomeService
{
  public void DoSomething()
  {
    // do some work
  }
}

While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, the biggest issue arises when an error occurs. When the application can’t successfully commit a transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this occurs, you may end up telling your customer that an operation succeeded when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.

While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.

Instantiated Unit of Work

The next strategy involves instantiating a UnitOfWork implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6 to define a transaction boundary within the application service. Here’s an example:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;

  public CustomerService(ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = new UnitOfWork())
    {
      try
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);        
        unitOfWork.Commit();
      }
      catch (Exception ex)
      {
        unitOfWork.Rollback();
      }
    }
  }
}
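
The UnitOfWork type being instantiated above isn’t shown in the original example; one possible implementation built on TransactionScope might look like this (a sketch; the class and member names are assumptions):

public class UnitOfWork : IDisposable
{
  readonly TransactionScope _scope = new TransactionScope();

  public void Commit()
  {
    // Marks the scope complete; the ambient transaction commits when the scope is disposed.
    _scope.Complete();
  }

  public void Rollback()
  {
    // Intentionally empty: disposing the scope without calling Complete() rolls back.
  }

  public void Dispose()
  {
    _scope.Dispose();
  }
}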

Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit Of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.

Injected Unit of Work Factory

This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows for an alternate implementation to be provided by the factory:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;
  readonly IUnitOfWorkFactory _unitOfWorkFactory;

  public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
    _unitOfWorkFactory = unitOfWorkFactory;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = _unitOfWorkFactory.Create())
    {
      try
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        unitOfWork.Commit();
      }
      catch (Exception ex)
      {
        unitOfWork.Rollback();
      }
    }
  }
}

While I personally prefer to invert such concerns, I consider this to be a sound approach.

As a side note, if you decide to use this approach, you might also consider utilizing your DI container to just inject a Func<IUnitOfWork> to avoid the overhead of maintaining an IUnitOfWorkFactory abstraction and implementation.
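
As a sketch of that idea with Autofac (the registration shown is an assumption, not something prescribed here), Autofac’s implicit relationship types let a service take a Func<IUnitOfWork> dependency without a hand-rolled factory:

// Register the concrete UnitOfWork; Autofac satisfies Func<IUnitOfWork> dependencies
// automatically, creating a new instance each time the delegate is invoked.
builder.RegisterType<UnitOfWork>()
       .As<IUnitOfWork>()
       .InstancePerDependency();

// A consuming service could then be declared as:
// public CustomerService(Func<IUnitOfWork> unitOfWorkFactory, ICustomerRepository customerRepository) { ... }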

Unit of Work ActionFilterAttribute

For those who prefer to invert the Unit of Work concerns as I do, the following approach provides an easy to implement solution for those using ASP.Net MVC and/or Web API. This technique involves creating a custom Action filter which can be used to control the boundary of a Unit of Work at the Controller action level. The particular implementation may vary, but here’s a general template:

public class UnitOfWorkFilter : ActionFilterAttribute
{
  public override void OnActionExecuting(ActionExecutingContext filterContext)
  {
    // begin transaction
  }

  public override void OnActionExecuted(ActionExecutedContext filterContext)
  {
    // commit/rollback transaction
  }
}

The benefits of this approach are that it’s easy to implement and that it eliminates the need for introducing repetitive infrastructure code into your application services. This attribute can be registered with the global action filters, or for the more discriminant, only placed on actions resulting in state changes to the database. Overall, this would be my recommended approach for Web applications. It’s easy to implement, simple, and keeps your code clean.
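
To make the template above more concrete, here’s one way it might be filled in using TransactionScope (a sketch with assumptions: synchronous controller actions, and the scope stored in HttpContext.Items because a single filter instance can serve concurrent requests):

public class UnitOfWorkFilter : ActionFilterAttribute
{
  const string ScopeKey = "UnitOfWorkFilter.TransactionScope";

  public override void OnActionExecuting(ActionExecutingContext filterContext)
  {
    // Begin the Unit of Work before the action executes.
    filterContext.HttpContext.Items[ScopeKey] = new TransactionScope();
  }

  public override void OnActionExecuted(ActionExecutedContext filterContext)
  {
    var scope = (TransactionScope)filterContext.HttpContext.Items[ScopeKey];

    using (scope)
    {
      // Commit only if the action completed without throwing; otherwise the
      // scope is disposed without Complete(), rolling the transaction back.
      if (filterContext.Exception == null)
      {
        scope.Complete();
      }
    }
  }
}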

Unit of Work Decorator

A similar approach to the use of a custom ActionFilterAttribute is the creation of a custom decorator. This approach can be accomplished by utilizing a DI container to automatically decorate specific application service interfaces with a class which implements a Unit of Work boundary.

Here is a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container which presumes that some form of command/command-handler pattern is being utilized (e.g. frameworks like MediatR, ShortBus, etc.):

// DI Registration
builder.RegisterGenericDecorator(
     typeof(TransactionRequestHandler<,>), // the decorator instance
     typeof(IRequestHandler<,>),           // the types to decorate
     "requestHandler",                     // the name of the key to decorate
     null);                                // the name of the key to this decorator



public class TransactionRequestHandler<TRequest, TResponse> : IRequestHandler<TRequest, TResponse> where TResponse : ApplicationResponse
{
  readonly DbContext _context;
  readonly IRequestHandler<TRequest, TResponse> _decorated;

  public TransactionRequestHandler(IRequestHandler<TRequest, TResponse> decorated, DbContext context)
  {
    _decorated = decorated;
    _context = context;
  }

  public TResponse Handle(TRequest request)
  {
    TResponse response;

    // Open transaction here

    try
    {
      response = _decorated.Handle(request);

      // commit transaction

    }
    catch (Exception e)
    {
      //rollback transaction
      throw;
    }

    return response;
  }
}


public class SomeRequestHandler : IRequestHandler<SomeRequest, ApplicationResponse> // SomeRequest is a placeholder request type
{
  public ApplicationResponse Handle(SomeRequest request)
  {
    // do some work
    return new SuccessResponse();
  }
}

While this approach requires a bit of setup, it provides an alternate means of facilitating the Unit of Work pattern through a decorator which can be used by other consumers of the application layer aside from just ASP.Net (i.e. Windows services, CLI, etc.). It also provides the ability to move the Unit of Work boundary closer to the point of need for those who would rather provide any error handling prior to returning control to the application service client (e.g. the Controller actions) as well as giving more control over the types of operations decorated (e.g. IQueryHandler vs. ICommandHandler). For Web applications, I’d recommend trying the custom Action Filter approach first, as it’s easier to implement and doesn’t presume upon the design of your application layer, but this is certainly a good approach if it fits your needs.

Conclusion

Out of the approaches I’ve evaluated, there are several that I see as sound approaches which maintain some minimum adherence to good design practices. Of course, which approach is best for your application will be dependent upon the context of what you’re doing and to some extent the design values of your team.

 

Introducing NUnit.Specifications

On March 8, 2015, in Uncategorized, by derekgreer

 

I recently started working with a new team that uses NUnit as their testing framework.  While I think NUnit is a solid framework, I don’t think the default API and style lead to effective tests.

As an advocate of Test-Driven Development, I’ve always appreciated how context/specification-style frameworks such as Machine.Specifications (MSpec) allow for the expression of executable specifications which model how a system is expected to be used rather than the typical unit-test style of testing which tends to obscure the overall purpose of the system.

To facilitate a context/specification-style API, I created a base class which makes use of the hooks provided by the NUnit testing framework to emulate MSpec.  I’ve published this code under the project name NUnit.Specifications.

The following is an example NUnit test written using the ContextSpecification based class from NUnit.Specifications using the Should assertion library:

[screenshot: example specification]
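
Since the example above is shown as a screenshot, here’s a rough sketch of the style it demonstrates (the CustomerService and Customer types are hypothetical, and the Establish/Because/It members mirror MSpec and may differ slightly from the actual NUnit.Specifications API):

public class when_creating_a_customer : ContextSpecification
{
  static CustomerService _service;
  static Customer _result;

  Establish context = () => _service = new CustomerService();

  Because of = () => _result = _service.Create("John", "Doe");

  It should_return_the_new_customer = () => _result.ShouldNotBeNull();
}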

One nice benefit of building on top of NUnit is the wide-spread tool support available.  Here is the test as seen through various test runners:

Resharper Test Runner: [screenshot]

TestDriven.Net (see notes below): [screenshot]

NUnit Test Runner: [screenshot]

NUnit Test Adaptor for Visual Studio: [screenshot]

 

One caveat I discovered with the TestDriven.Net runner is its failure to recognize tests without the specification referencing types from the NUnit.Framework namespace (e.g. TestFixtureAttribute, CategoryAttribute, use of Assert, etc.).  That is to say, it didn’t seem to be enough that the spec inherited from a base type with NUnit attributes, but something in the derived class had to reference a type from the NUnit.Framework namespace for the test to be recognized.  Therefore, the TestDriven.Net results shown above were actually achieved by annotating the class with [Category("component")] explicitly.

 

Other Stuff

As a convenience, NUnit.Specifications also provides attributes for denoting categories of Unit, Component, Integration, Acceptance, and Subcutaneous as well as a Catch class (as provided by the MSpec library) for working with exceptions.

You can obtain the NUnit.Specifications from NuGet or grab the source from github.

 

Being Agile

On March 5, 2014, in Uncategorized, by derekgreer

When the term “agile” is used in reference to one’s development processes, it more often than not seems to be used in a monolithic way.  It isn’t that many aren’t cognizant of the fact that people tend to use a subset, combination, or modified form of the main agile processes marketed today, but even in this recognition there seems to be a tendency to think about each variation in monolithic terms.

Which Process Do You Use?

If you’ve ever attended a local agile user group to hear various processes compared, you may have encountered the speaker asking for a show of hands to find out what various processes people are using.  If you’ve been to one of these meetings, it might have gone a little like this:

Let’s see a show of hands of those in the room who are using Scrum.  That’s quite a few of you.  How about Scrum-ban?  Nice.  Now let’s see a show of hands for people using Extreme Programming or XP.  Not as many of you guys here.  Who’s using the Rational Unified Process or RUP?  Anyone?  Who isn’t currently using an agile process, but came here today to learn more about agile?  Glad to have you guys here!  I hope this session will be informative.  So, is anyone using any other process we haven’t mentioned?  Yes, you sir.  What is your group using?

Um, well, we use a process we call ‘Scrum-but’.  We use Scrum, but we leave out some things. [a chuckle is heard from a few individuals].

While such a meeting generally turns to discussing the various attributes of different processes, usually hitting major highlights of Scrum such as iterations, stand-ups, planning meetings, user stories, retrospectives, etc., it still does so in terms of the attributes of different monolithic processes.  The deficiency in this line of thinking is that it trains people to think in terms of what percentage of a name-brand agile process their team is adhering to, or should seek to adopt, rather than thinking about what problems these processes individually have evolved to solve.  “Which Process Do You Use?” is the wrong question.

Are you Agile?

Many teams like to say “We’re an agile development shop”.  Now to be fair, when we want to convey to someone which common group of practices we follow, it can be useful to use labels such as Scrum, XP, Scrum-ban, etc.  That said, there seems to be an awful lot of shops that say they are agile when what they mean is that they do “stand-ups” (i.e. daily status meetings for keeping their managers informed about what they’re up to) and “iterate” (i.e. chunky waterfall) their way to a deadline that’s been handed down by upper management or a sales department.

What is Agile?

If you ask what agile is in a typical development shop today, you’ll more than likely find yourself in a conversation about Scrum or some other process rather than talking about the actual meaning of the word.  Let’s actually go back and look at the definition:

ag·ile -  adjective \ˈa-jəl, -ˌjī(-ə)l\

1:  marked by ready ability to move with quick easy grace <an agile dancer>
2:  having a quick resourceful and adaptable character <an agile mind>

Based on this definition from Merriam-Webster’s dictionary, being agile is “an ability to change, or adapt to change, quickly”.  While communicating that you adhere to a given agile process may have its usefulness at times, thinking of your process in this monolithic way doesn’t promote the kind of thinking that leads to continuous improvement.  Rather than thinking in terms of which process we use, we should think in terms of what aspects of change our processes help us adapt to.

Toward An Agile View of Process

It seems that many teams’ first foray into agile processes is the selection of Scrum by their management.  They’ve heard about this Scrum and how it can save them money, so they’ve sent the managers and Business Analysts off to Scrum Master training to outfit them with their Scrum-capes and Scrum-tights.


Introducing a process like Scrum (or whatever portions of Scrum a company’s existing process will tolerate) will sometimes improve upon matters, but only insofar as one’s cargo-cult emulation of the prescribed practices happens to match up with the problems for which they were conceived.  Unfortunately this approach to adopting agile processes often seems to lead to a bunch of people going through the motions without really understanding what the purpose is.  Worse, when the local Scrum training consultants sell them on the fact that they don’t really have to give up things like deadlines, using business analysts to gather all the requirements, or otherwise restructure their organization, they generally end up with some empty shell of a process which is really nothing more than their old waterfall process with more micro-management.

A better approach is to first learn about the types of issues different agile practices seek to address and then consider how your team’s existing process can improve if each practice were applied individually.  Rather than thinking of your team as “agile” or “not agile”, consider asking the following types of questions:

Is my team agile WITH RESPECT TO …

  • changes to the product’s desired features?
  • changes to the product’s code base?
  • changes to the team’s understanding of the domain?
  • changes to the team’s understanding of the technologies used?
  • changes to team members’ hours of availability?
  • changes to individuals on the team?
  • changes to skillsets within the team?
  • changes to the cost of materials and resources?
  • changes to the compatibility or availability of 3rd-party software?
  • etc.

Different agile practices address different kinds of problems, but to really become an agile team you need to learn how to identify problems and solutions on an ongoing basis, not just implement processes.  Let’s stop thinking of ourselves as agile or not agile and start asking the question “What are we agile at?”


Expected Objects Custom Comparisons

On November 17, 2013, in Uncategorized, by derekgreer

ExpectedObjects is a testing library I developed a few years ago to facilitate using the Expected Objects pattern within my specifications to avoid obscure tests.  You can find the original introduction to the library here.

As of version 1.1.0, the ExpectedObjects library has been updated to include a feature called Custom Comparisons.  The standard behavior of the library is to traverse a strategy chain (which is itself configurable) to determine which comparison strategy is to be used for each type of object encountered within the object graph.  The Custom Comparisons feature allows you to override this behavior for specific properties.

For example, let’s say we’re writing an end-to-end test which validates a Receipt class as follows:

public class Receipt
{
    public string Name { get; set; }
    public DateTime TransactionDate { get; set; }
    public string VerificationCode { get; set; }
}

 

Given the preceding class, the VerificationCode property would probably not be a value you could anticipate.  In such a case, while you can’t verify that the property has a specific value, you may care that it at least has some value.  This is where the Custom Comparisons feature can help.  We can verify that the actual Receipt received matches the expected receipt structure using the following expected object configuration:

var expected = new
{
	Name = "John Doe",
	TransactionDate = DateTime.Today,
	VerificationCode = Expect.NotNull()
}.ToExpectedObject();

var actual = new Receipt
{
	Name = "John Doe",
	TransactionDate = DateTime.Today,
	VerificationCode = "ABC123"
};

expected.ShouldMatch(actual);

In the event that the VerificationCode property is null, the library will raise an exception with the following message:

For Receipt.VerificationCode, expected a non-null value but found [null].

The ExpectedObjects library currently provides a static Expect class which includes convenience methods to check for null, not null, and an Any<T> comparison for checking that an object is of a specific type (e.g. Expect.Any<Receipt>()).  To supply your own comparisons, simply implement the IComparison interface which defines the custom comparison and the text to include within any exception messages raised (e.g. “For SomeType.SomeProperty, expected [text you supply here] but found “42”).
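
As an illustration of supplying your own comparison, a comparison that expects a string matching a regular expression might look something like the following sketch (the IComparison member names shown are assumptions; consult the library source for the actual signature, and note the example requires System.Text.RegularExpressions):

public class MatchesRegexComparison : IComparison
{
	readonly Regex _regex;

	public MatchesRegexComparison(string pattern)
	{
		_regex = new Regex(pattern);
	}

	// Returns true when the actual value is a string matching the supplied pattern.
	public bool AreEqual(object actual)
	{
		var value = actual as string;
		return value != null && _regex.IsMatch(value);
	}

	// Text used in the failure message: "expected [a value matching ...] but found ...".
	public string GetExceptionMessage()
	{
		return string.Format("a value matching \"{0}\"", _regex);
	}
}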

 
 

This is the first article in a new sporadic series I’ll contribute to from time to time wherein I’ll discuss some noteworthy issues I’ve wrestled with. In this installment, I’ll be discussing an NHibernate issue which took me some time to work through. So, let’s dive into the story …

The Context

In an application I was recently working on, a need arose to modify a section of code involving two entities which should have been modeled using a parent/child relationship but which only had a loose association in the database. The primary table in the database schema for what needed to be the parent object in the domain only contained an unenforced foreign key column which matched up with a candidate key on the table used for what needed to be the child object. In the section of code I needed to modify, a View Model was being created by first retrieving data for the parent object and subsequently for the child object. I’m not exactly sure what led to this path, but I think it had something to do with the original developer’s attempt at using a surrogate key strategy for all the tables and later attempts by others to pull the data into a domain model with NHibernate.

At any rate, while I wasn’t in a position to revamp the whole design, I knew there was a way to express many-to-one mappings in NHibernate using non-primary keys, so after a little searching and some trial-and-error I got the parent entity referencing the child entity with a Fluent NHibernate Auto-Mapping configuration similar to the following:

return AutoMap.AssemblyOf<ParentEntity>(new AutomappingConfiguration()) // ParentEntity is a placeholder type
  .Override<ParentEntity>(map => map.References(p => p.Child,
    "ParentColumnChildKeyName").PropertyRef("ChildCandidateKeyColumnName")
    .Fetch.Join());

 

Part of the changes required to make this work was some refactoring of an import job used to populate the database which relied upon the domain model and mappings to populate the parent and child data. After changing the parent entity to reference the child entity instead of just a candidate key to the child entity, I needed to modify the import job to persist the relationship between the parent and the child. To do this, I injected a pre-existing ChildRepository to query for existing instances of the child entities (which had its own separate import process) so I could associate it with the parent entity upon saving. All of the changes worked as expected for the client portion of the application, but the changes broke some acceptance tests for the import job. The error I started receiving in the tests was as follows:

null id in “MyEntityType” entry (don't flush the Session after an exception occurs)

In this case the “MyEntityType” was another entity which had a many-to-one mapping with the aforementioned parent entity. After looking over the code and scratching my head for a bit, I decided to do a search on this particular error and read a few articles which at first didn’t seem to speak to my particular scenario. The advice I read basically boiled down to “Don’t try to do stuff with the session after you receive an error”. That certainly made sense, but upon stepping through the code I couldn’t see anywhere I was catching an error and proceeding to do something further with the session. I then decided to add a try/catch around the offending code and suddenly I saw the issue: the code was saving an entity associated with one open session along with an entity belonging to another open session.

The Solution

Ultimately, the reason I couldn’t see the error was due to an issue with this particular manifestation of some common infrastructure code my team uses when working with NHibernate. We use Autofac for dependency injection, and to facilitate transactions we use Autofac’s OnActivating() and OnRelease() methods to begin an NHibernate transaction and to handle the rollback or commit of the transaction when complete. Here was the offending code:

builder.Register(c => c.ResolveNamed<ISessionFactory>(RegistrationKey).OpenSession())
	.As<ISession>()
	.OnActivating(x => x.Instance.BeginTransaction())
	.OnRelease(session =>
		{
			try
			{
				if (!session.Transaction.WasRolledBack && session.Transaction.IsActive)
				{				
					session.Transaction.Commit();
				}
			}
			finally
			{
				session.Close();
				session.Dispose();
			}
		});

When used within the context of our Web applications, this code would contain a call to register the ISession with an HTTP Request lifetime scope, but this import job didn’t require a shared ISession prior to my changes. To fix the problem, I added a call to register the ISession as InstancePerLifetimeScope(), which causes the same lifetime scope used to resolve the job to be used for resolving any instances of ISession. Additionally, I added a try/catch/throw around the session to at least provide some logging of similar issues should this ever come up again.

Introducing RabbitBus

On June 1, 2012, in Uncategorized, by derekgreer

What Is It?

RabbitBus is a .Net client API for use with RabbitMQ.  RabbitBus was designed to make working with RabbitMQ easy by providing a fluent-interface which places a focus on discoverability and by providing commonly needed constructs not provided through the official RabbitMQ .Net client API.

 

How Do I Use It?

The RabbitBus library was designed to allow for the centralization of all RabbitMQ configuration at application startup, separating the concerns of routing, serialization, and error handling from the central concerns of publishing and consuming messages.

RabbitBus works with object-based messages.  For example, if you have an application from which you would like to publish status update messages, you might model your message using the following class:

[Serializable]
public class StatusUpdate
{
  public StatusUpdate(string status)
  {
    Status = status;
  }

  public string Status { get; set; }
}

After configuring how messages are to be handled, you’ll then use an instance of a Bus type to publish or subscribe to each message.

Configuration of the Bus is handled through a BusBuilder.  The BusBuilder type provides an API for specifying how serialization, publication, consumption, and other concerns will be handled by the Bus.

If you’re already familiar with RabbitMQ concepts then you should find working with RabbitBus to be fairly easy.  The following demonstrates some of the basic usage scenarios:

Message Publication

To configure a producer application to publish messages of type StatusUpdate to a direct exchange named “status-update-exchange” on localhost, you would then use the following configuration:

 
Bus bus = new BusBuilder()
  .Configure(ctx => ctx.Publish<StatusUpdate>()
                         .WithExchange("status-update-exchange"))
  .Build();
bus.Connect();

To publish a StatusUpdate message, you would then make the following invocation:

bus.Publish(new StatusUpdate("OK"));

Message Subscription

To configure a consumer application to subscribe to StatusUpdate messages on localhost, you would use the following configuration:

Bus bus = new BusBuilder()
  .Configure(ctx => ctx.Consume<StatusUpdate>()
                         .WithExchange("status-update-exchange")
                         .WithQueue("status-update-queue"))
  .Build();

To subscribe to StatusUpdate messages, you would then make the following invocation:

bus.Subscribe<StatusUpdate>(messageContext => { /* handle message */ });

 

What Other Features Are Provided?

RabbitBus provides the following features:

  • support of all AMQP 0.9.1 exchange types (i.e. direct, fanout, topic, and headers)
  • remote procedure calls (RPC)
  • deadletter queue support
  • convention based auto-subscription
  • RabbitMQ push and pull API support
  • extensible serialization (Binary serialization by default, Json serialization provided by RabbitBus.Serialization.Json)
  • customizable error handling
  • RabbitMQ server restart recovery
  • configurable offline queuing support
  • logging

 

Where Can I Learn More?

You can find more information about how to use RabbitBus on the RabbitBus Wiki.  Additionally, RabbitBus was developed using Test-Driven Development and care was taken in the implementation of its executable specification suite to maximize demonstration of the API’s intended use.

 

Where Do I Get It?

RabbitBus is available as a NuGet package and the source is available on Github.


RabbitMQ for Windows: Headers Exchanges

On May 29, 2012, in Uncategorized, by derekgreer

This is the eighth and final installment to the series: RabbitMQ for Windows.  In the last installment, we walked through creating a topic exchange example.  To wrap up the series, we’ll walk through a headers exchange example.

Headers exchanges examine the message headers to determine which queues a message should be routed to.  As discussed earlier in this series, headers exchanges are similar to topic exchanges in that they allow you to specify multiple criteria, but offer a bit more flexibility in that the headers can be constructed using a wider range of data types (1).

To subscribe to receive messages from a headers exchange, a dictionary of headers is specified as part of the binding arguments.  In addition to the headers, a key of “x-match” is also included in the dictionary with a value of “all”, specifying that a message must be published with all of the specified headers in order to match, or “any”, specifying that a message need only match one of the specified headers.

As our final example, we’ll create a Producer application which publishes the message “Hello, World!” using a headers exchange.  Here’s our Producer code:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Framing.v0_9_1;

namespace Producer
{
  class Program
  {
    const string ExchangeName = "header-exchange-example";

    static void Main(string[] args)
    {
      var connectionFactory = new ConnectionFactory();
      connectionFactory.HostName = "localhost";

      IConnection connection = connectionFactory.CreateConnection();
      IModel channel = connection.CreateModel();
      channel.ExchangeDeclare(ExchangeName, ExchangeType.Headers, false, true, null);
      byte[] message = Encoding.UTF8.GetBytes("Hello, World!");

      var properties = new BasicProperties();
      properties.Headers = new Dictionary<string, object>();
      properties.Headers.Add("key1", "12345");
      
      TimeSpan time = TimeSpan.FromSeconds(10);
      var stopwatch = new Stopwatch();
      Console.WriteLine("Running for {0} seconds", time.ToString("ss"));
      stopwatch.Start();
      var messageCount = 0;

      while (stopwatch.Elapsed < time)
      {
        channel.BasicPublish(ExchangeName, "", properties, message);
        messageCount++;
        Console.Write("Time to complete: {0} seconds - Messages published: {1}\r", (time - stopwatch.Elapsed).ToString("ss"), messageCount);
        Thread.Sleep(1000);
      }

      Console.Write(new string(' ', 70) + "\r");
      Console.WriteLine("Press any key to exit");
      Console.ReadKey();
      message = Encoding.UTF8.GetBytes("quit");
      channel.BasicPublish(ExchangeName, "", properties, message);
      connection.Close();
    }
  }
}

In the Producer, we’ve used a generic dictionary of type Dictionary<string, object> and added a single key “key1” with a value of “12345”.  As with our previous example, we’re using a stopwatch as a way to publish messages continually for 10 seconds.

For our Consumer application, we can use an “x-match” argument of “all” with the single key/value pair specified by the Producer, or we can use an “x-match” argument of “any” which includes the key/value pair specified by the Producer along with other potential matches.  We’ll use the latter for our example.   Here’s our Consumer code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

namespace Consumer
{
  class Program
  {
    const string QueueName = "header-exchange-example";
    const string ExchangeName = "header-exchange-example";

    static void Main(string[] args)
    {
      var connectionFactory = new ConnectionFactory();
      connectionFactory.HostName = "localhost";

      IConnection connection = connectionFactory.CreateConnection();
      IModel channel = connection.CreateModel();
      channel.ExchangeDeclare(ExchangeName, ExchangeType.Headers, false, true, null);
      channel.QueueDeclare(QueueName, false, false, true, null);

      var specs = new Dictionary<string, object>();
      specs.Add("x-match", "any");
      specs.Add("key1", "12345");
      specs.Add("key2", "123455");
      channel.QueueBind(QueueName, ExchangeName, string.Empty, specs);

      channel.StartConsume(QueueName, MessageHandler);
      connection.Close();
    }

    public static void MessageHandler(IModel channel, DefaultBasicConsumer consumer, BasicDeliverEventArgs eventArgs)
    {
      string message = Encoding.UTF8.GetString(eventArgs.Body);
      Console.WriteLine("Message received: " + message);
      foreach (object headerKey in eventArgs.BasicProperties.Headers.Keys)
      {
        Console.WriteLine(headerKey + ": " + eventArgs.BasicProperties.Headers[headerKey]);
      }

      if (message == "quit")
        channel.BasicCancel(consumer.ConsumerTag);
    }
  }
}

Rather than handling our messages inline as we’ve done in previous examples, this example uses an extension method named StartConsume() which accepts a callback to be invoked each time a message is received.  Here’s the extension method used by our example:

using System;
using System.IO;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

namespace Consumer
{
  public static class ChannelExtensions
  {
    public static void StartConsume(this IModel channel, string queueName,  Action<IModel, DefaultBasicConsumer, BasicDeliverEventArgs> callback)
    {
      var consumer = new QueueingBasicConsumer(channel);
      channel.BasicConsume(queueName, true, consumer);

      while (true)
      {
        try
        {
          var eventArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
          new Thread(() => callback(channel, consumer, eventArgs)).Start();
        }
        catch (EndOfStreamException)
        {
          // The consumer was cancelled, the model closed, or the connection went away.
          break;
        }
      }
    }
  }
}

Setting our solution to run both the Producer and Consumer applications upon startup, running our example produces output similar to the following:

Producer

Running for 10 seconds
Time to complete: 08 seconds - Messages published: 2

Consumer

Message received: Hello, World!
key1: 12345
Message received: Hello, World!
key1: 12345

That concludes our headers exchange example as well as the RabbitMQ for Windows series.  For more information on working with RabbitMQ, see the documentation at http://www.rabbitmq.com or purchase the book RabbitMQ in Action by Alvaro Videla and Jason Williams.  I hope you enjoyed the series.

 

Footnotes:

1 – See http://www.rabbitmq.com/amqp-0-9-1-errata.html#section_3 and http://hg.rabbitmq.com/rabbitmq-dotnet-client/diff/4def852523e2/projects/client/RabbitMQ.Client/src/client/impl/WireFormatting.cs for supported field types.


RabbitMQ for Windows: Topic Exchanges

On May 18, 2012, in Uncategorized, by derekgreer

This is the seventh installment to the series: RabbitMQ for Windows.  In the last installment, we walked through creating a fanout exchange example.  In this installment, we’ll be walking through a topic exchange example.

Topic exchanges are similar to direct exchanges in that they use a routing key to determine which queue a message should be delivered to, but they differ in that they provide the ability to match on portions of a routing key.  When publishing to a topic exchange, a routing key consisting of multiple words separated by periods (e.g. “word1.word2.word3”) will be matched against a pattern supplied by the binding queue.  Patterns may contain an asterisk (“*”) to match a word in a specific segment or a hash (“#”) to match zero or more words.  As discussed earlier in the series, the topic exchange type can be useful for directing messages based on multiple categories or for routing messages originating from multiple sources.

To demonstrate topic exchanges, we’ll return to our logging example, but this time we’ll subscribe to a subset of the messages being published to demonstrate the flexibility of how routing keys are used by topic exchanges.  For this example, we’ll be modeling a scenario where a company may have multiple client installations, each of which may be used to service different sectors of a company’s business model (e.g. Business or Personal sectors).  We’ll use a routing key that specifies the sector and subscribe to messages published for the Personal sector only.

As with our previous examples, we’ll keep things simple by creating console applications for a Producer and a Consumer.  Let’s start by creating the Producer app and establishing a connection using the default settings:

using RabbitMQ.Client;

namespace Producer
{
  class Program
  {
    const long ClientId = 10843;

    static void Main(string[] args)
    {
      var connectionFactory = new ConnectionFactory();
      IConnection connection = connectionFactory.CreateConnection();
    }
  }
}

 

Rather than just publishing messages directly from the Main() method as with our first logging example, let's create a separate logger object this time.  Here's the logger interface and implementation we'll be using:

  interface ILogger
  {
    void Write(Sector sector, string entry, TraceEventType traceEventType);
  }

  class RabbitLogger : ILogger, IDisposable
  {
    readonly long _clientId;
    readonly IModel _channel;
    bool _disposed;

    public RabbitLogger(IConnection connection, long clientId)
    {
      _clientId = clientId;
      _channel = connection.CreateModel();
      _channel.ExchangeDeclare("direct-exchange-example", ExchangeType.Topic, false, true, null);
    }

    public void Dispose()
    {
      if (!_disposed)
      {
        if (_channel != null && _channel.IsOpen)
        {
          _channel.Close();
        }
        _disposed = true;
      }
      GC.SuppressFinalize(this);
    }

    public void Write(Sector sector, string entry, TraceEventType traceEventType)
    {
      byte[] message = Encoding.UTF8.GetBytes(entry);
      string routingKey = string.Format("{0}.{1}.{2}", _clientId, sector.ToString(), traceEventType.ToString());
      _channel.BasicPublish("topic-exchange-example", routingKey, null, message);
    }

    ~RabbitLogger()
    {
      Dispose();
    }
  }

In addition to an open IConnection, our RabbitLogger class is instantiated with a client Id, which we use as the first segment of the routing key.  Since each log entry can pertain to a different sector, we also pass a Sector enum to the Write() method.  Here's our Sector enum:

  public enum Sector
  {
    Personal,
    Business
  }
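
Given the Write() implementation above, a call such as the following produces a routing key of "10843.Personal.Information" when made through a logger created with the example's client Id of 10843:

      // Routing key composition: "{clientId}.{sector}.{severity}"
      // e.g. 10843, Sector.Personal, TraceEventType.Information => "10843.Personal.Information"
      logger.Write(Sector.Personal, "This is an information message", TraceEventType.Information);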

Returning to our Main() method, we now need to instantiate our RabbitLogger and log messages with differing sectors.  As a way to ensure our client has an opportunity to subscribe to our messages and to help emulate a continual stream of log messages being published, let's use the logger to publish a series of log messages every second for 10 seconds:

      TimeSpan time = TimeSpan.FromSeconds(10);
      var stopwatch = new Stopwatch();
      Console.WriteLine("Running for {0} seconds", time.ToString("ss"));
      stopwatch.Start();

      while (stopwatch.Elapsed < time)
      {
        using (var logger = new RabbitLogger(connection, ClientId))
        {
          Console.Write("Time to complete: {0} seconds\r", (time - stopwatch.Elapsed).ToString("ss"));
          logger.Write(Sector.Personal, "This is an information message", TraceEventType.Information);
          logger.Write(Sector.Business, "This is an warning message", TraceEventType.Warning);
          logger.Write(Sector.Business, "This is an error message", TraceEventType.Error);
          Thread.Sleep(1000);
        }
      }

This code prints out the time remaining just to give us a little feedback on the publishing progress.  Finally, we'll close our connection and prompt the user to exit the console application:

      connection.Close();
      Console.Write("                             \r");
      Console.WriteLine("Press any key to exit");
      Console.ReadKey();

 

Here’s the full Producer listing:

using System;
using System.Diagnostics;
using System.Text;
using System.Threading;
using RabbitMQ.Client;

namespace Producer
{
  public enum Sector
  {
    Personal,
    Business
  }

  interface ILogger
  {
    void Write(Sector sector, string entry, TraceEventType traceEventType);
  }

  class RabbitLogger : ILogger, IDisposable
  {
    readonly long _clientId;
    readonly IModel _channel;
    bool _disposed;

    public RabbitLogger(IConnection connection, long clientId)
    {
      _clientId = clientId;
      _channel = connection.CreateModel();
      _channel.ExchangeDeclare("direct-exchange-example", ExchangeType.Topic, false, true, null);
    }

    public void Dispose()
    {
      if (!_disposed)
      {
        if (_channel != null && _channel.IsOpen)
        {
          _channel.Close();
        }
        _disposed = true;
      }
      GC.SuppressFinalize(this);
    }

    public void Write(Sector sector, string entry, TraceEventType traceEventType)
    {
      byte[] message = Encoding.UTF8.GetBytes(entry);
      string routingKey = string.Format("{0}.{1}.{2}", _clientId, sector.ToString(), traceEventType.ToString());
      _channel.BasicPublish("topic-exchange-example", routingKey, null, message);
    }

    ~RabbitLogger()
    {
      Dispose();
    }
  }

  class Program
  {
    const long ClientId = 10843;

    static void Main(string[] args)
    {
      var connectionFactory = new ConnectionFactory();
      IConnection connection = connectionFactory.CreateConnection();

      TimeSpan time = TimeSpan.FromSeconds(10);
      var stopwatch = new Stopwatch();
      Console.WriteLine("Running for {0} seconds", time.ToString("ss"));
      stopwatch.Start();

      while (stopwatch.Elapsed < time)
      {
        using (var logger = new RabbitLogger(connection, ClientId))
        {
          Console.Write("Time to complete: {0} seconds\r", (time - stopwatch.Elapsed).ToString("ss"));
          logger.Write(Sector.Personal, "This is an information message", TraceEventType.Information);
          logger.Write(Sector.Business, "This is an warning message", TraceEventType.Warning);
          logger.Write(Sector.Business, "This is an error message", TraceEventType.Error);
          Thread.Sleep(1000);
        }
      }

      connection.Close();
      Console.Write("                             \r");
      Console.WriteLine("Press any key to exit");
      Console.ReadKey();
    }
  }
}

 

For our Consumer app, we’ll pretty much be using the same code as with our fanout exchange example, but we’ll need to change the exchange type along with the exchange and queue names.  We also need to provide a routing key pattern that registers for logs in the Personal sector only.  The messages published by the Producer will be in the form: [client Id].[sector].[log severity], so we can use a pattern of “*.Personal.*” (or alternatively “*.Personal.#”).  Here’s the full Consumer listing:

using System;
using System.IO;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

namespace Consumer
{
  class Program
  {
    static void Main(string[] args)
    {
      var connectionFactory = new ConnectionFactory();
      IConnection connection = connectionFactory.CreateConnection();
      IModel channel = connection.CreateModel();

      channel.ExchangeDeclare("topic-exchange-example", ExchangeType.Topic, false, true, null);
      channel.QueueDeclare("log", false, false, true, null);
      channel.QueueBind("log", "topic-exchange-example", "*.Personal.*");

      var consumer = new QueueingBasicConsumer(channel);
      channel.BasicConsume("log", true, consumer);

      while (true)
      {
        try
        {
          var eventArgs = (BasicDeliverEventArgs) consumer.Queue.Dequeue();
          string message = Encoding.UTF8.GetString(eventArgs.Body);
          Console.WriteLine(string.Format("{0} - {1}", eventArgs.RoutingKey, message));
        }
        catch (EndOfStreamException)
        {
          // The consumer was cancelled, the model closed, or the connection went away.
          break;
        }
      }

      channel.Close();
      connection.Close();
    }
  }
}

 

Setting the solution to run both the Producer and Consumer on startup, we should see output similar to the following listings:

 

Producer

Running for 10 seconds
Time to complete: 06 seconds

 

Consumer

10843.Personal.Information - This is an information message
10843.Personal.Information - This is an information message
10843.Personal.Information - This is an information message
10843.Personal.Information - This is an information message
10843.Personal.Information - This is an information message
10843.Personal.Information - This is an information message
10843.Personal.Information - This is an information message

 

This concludes our topic exchange example.  Next time, we’ll walk through an example using the final exchange type: Header Exchanges.
