Exploring TypeScript

On August 30, 2016, in Uncategorized, by derekgreer

A proposal to use TypeScript was recently made within my development team, so I’ve taken a bit of time to investigate the platform.  This article reflects my thoughts and conclusions on where the platform stands at this point.

 

TypeScript: What is It?

TypeScript is a language created by Microsoft which adds optional static typing and a class-based object-oriented programming paradigm to JavaScript and transpiles to plain JavaScript.  In contrast to other compile-to-JavaScript languages such as CoffeeScript and Dart, TypeScript is a superset of JavaScript, which means that TypeScript introduces syntax enhancements on top of the existing JavaScript language rather than replacing it.

 

Recent Rise In Popularity

TypeScript made its debut in late 2012 and reached its 1.0 release in April 2014.  Community interest has been fairly marginal since its debut, but has shown an increase since the announcement that the next version of Google’s popular Angular framework would be written in TypeScript.

The following Google Trends chart shows the parallel interest in Angular 2 and TypeScript from 2014 to present:

 

The Good

Type System

TypeScript provides an optional type system which can aid in catching certain types of programming errors at compile time.  The information derived from the type system also serves as the foundation for most of the tooling surrounding TypeScript.

The following is a simple example showing a basic usage of the type system:

interface Person {
    firstName: string;
    lastName: string;
}

class Greeter {
    greeting: string;
    constructor(message: string) {
        this.greeting = message;
    }
    greet(person: Person) {
        return this.greeting + " " + person.firstName + " " + person.lastName;
    }
}

let greeter = new Greeter("Hello,");
let person = { firstName: "John", lastName: "Doe" };

document.body.innerHTML = greeter.greet(person);

In this example, a Person interface is declared with two string properties: firstName and lastName.  A Greeter class is then created with a greet() function which is declared to take a parameter of type Person.  Finally, a Greeter instance and an object matching the Person interface are created, and the Greeter instance’s greet() function is invoked with that object.  At compile time, TypeScript is able to detect whether the object passed to the greet() function conforms to the Person interface and whether the values assigned to the expected properties are of the expected type.
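Continuing the listing above, here’s a contrived sketch of the kinds of mistakes the compiler will reject:

let incompletePerson = { firstName: "Jane" };                 // lastName is missing
let mistypedPerson = { firstName: "Jane", lastName: 42 };     // lastName is not a string

greeter.greet(incompletePerson);   // compile error: property 'lastName' is missing
greeter.greet(mistypedPerson);     // compile error: 'number' is not assignable to 'string'

Plain JavaScript would happily accept both calls and fail, if at all, only at runtime.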

Tooling

While the type system and programming paradigm introduced by TypeScript are its key features, it’s really the tooling facilitated by the type system that makes the platform shine.  Being notified of syntax errors at compile time is helpful, but it’s really the productivity that stems from features such as design-time type checking, intellisense/code-completion, and refactoring that make TypeScript compelling.

TypeScript is currently supported by many popular IDEs including Visual Studio, WebStorm, Sublime Text, Brackets, and Eclipse.

EcmaScript Foundation

One of the differentiators of TypeScript from other languages which transpile to JavaScript (CoffeeScript, Dart, etc.) is that TypeScript builds upon the JavaScript language.  This means that all valid JavaScript code is valid TypeScript code.

Idiomatic JavaScript Generation

One of the goals of the TypeScript team was to ensure the TypeScript compiler emitted idiomatic JavaScript.  This means the code produced by the TypeScript compiler is readable and generally follows normal JavaScript conventions.

 

The Not So Good

Type Definitions and 3rd-Party Libraries

TypeScript requires type definitions to be created for 3rd-party code in order to realize many of the benefits of the tooling.  While the DefinitelyTyped project provides type definitions for the most popular JavaScript libraries in use today, there will occasionally be a library you want to use which has no type definition file.
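For libraries that have no published definitions, you can author a declaration file yourself.  As a rough sketch (the module name and function below are made up for illustration):

// acme-geo.d.ts - hand-written typings for a hypothetical "acme-geo" library
declare module "acme-geo" {
    export interface Point {
        x: number;
        y: number;
    }
    export function distance(a: Point, b: Point): number;
}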

Moreover, interfaces maintained by 3rd-party sources are somewhat antithetical to their primary purpose.  Interfaces should serve as contracts for the behavior of a library.  If the interfaces are maintained by a 3rd party, however, they can’t be accurately described as “contracts” since no implicit promise is being made by the library author that the interface provided accurately matches the library’s behavior.  In practice this may not prove to be much of an issue, but at minimum, relying upon type definitions created by 3rd parties will eventually lead to the available definitions lagging behind new releases of the libraries they describe.

Type System Overhead

Introducing a type system is a bit of a double-edged sword.  While a type system can provide a lot of benefits, it also adds syntactical overhead to a codebase.  In some cases this can result in the code you maintain actually being harder to read and understand than the code being generated.  This can be illustrated using Anders Hejlsberg’s example presented at Build 2014.

The TypeScript source in the first listing shows a generic sortBy method which takes a callback for retrieving the value by which to sort while the second listing shows the generated JavaScript source:

interface Entity {
    name: string;
}

function sortBy<T>(a: T[], keyOf: (item: T) => any): T[] {
    var result = a.slice(0);
    result.sort(function(x, y) {
        var kx = keyOf(x);
        var ky = keyOf(y);
        return kx > ky ? 1 : kx < ky ? -1 : 0;
    });
    return result;
}

var products = [
    { name: "Lawnmower", price: 395.00, id: 345801 },
    { name: "Hammer", price: 5.75, id: 266701 },
    { name: "Toaster", price: 19.95, id: 400670 },
    { name: "Padlock", price: 4.50, id: 560004 }
];
var sorted = sortBy(products, x => x.price);
document.body.innerText = JSON.stringify(sorted, null, 4);

And here is the JavaScript the compiler generates:
function sortBy(a, keyOf) {
    var result = a.slice(0);
    result.sort(function (x, y) {
        var kx = keyOf(x);
        var ky = keyOf(y);
        return kx > ky ? 1 : kx < ky ? -1 : 0;
    });
    return result;
}
var products = [
    { name: "Lawnmower", price: 395.00, id: 345801 },
    { name: "Hammer", price: 5.75, id: 266701 },
    { name: "Toaster", price: 19.95, id: 400670 },
    { name: "Padlock", price: 4.50, id: 560004 }
];
var sorted = sortBy(products, function (x) { return x.price; });
document.body.innerText = JSON.stringify(sorted, null, 4);

Comparing the two signatures, which is easier to understand?

TypeScript

function sortBy<T>(a: T[], keyOf: (item: T) => any): T[]

JavaScript

function sortBy(a, keyOf)

It might be reasoned that the TypeScript version should be easier to understand given that it provides more information, but many would disagree that this is in fact the case.  The reason for this is that the TypeScript version adds quite a bit of syntax to explicitly describe information that can otherwise be deduced fairly easily.  In many ways this is similar to how we process natural language.  When we communicate, we don’t encode each word with its grammatical function (e.g. “I [subject] bought [past tense verb] you [indirect object] a [indefinite article] gift [direct object].”)  Rather, we rapidly and subconsciously make guesses based on familiarity with the vocabulary, context, convention and other such signals.

 In the case of the sortBy example, we can guess at the parameters and return type for the function faster than we can parse the type syntax.  This becomes even easier if descriptive names are used (e.g. sortByKey(array, keySelector)).  Sometimes implicit expression is simply easier to understand.

Now to be fair, there are cases where TypeScript is arguably going to be clearer than the generated JavaScript (and for similar reasons).  Consider the following TypeScript class and the JavaScript it generates:

class Auto {
    constructor(public wheels = 4, public doors?) {
    }
}
var car = new Auto();
car.doors = 2;

And the generated JavaScript:

var Auto = (function () {
    function Auto(wheels, doors) {
        if (wheels === void 0) { wheels = 4; }
        this.wheels = wheels;
        this.doors = doors;
    }
    return Auto;
}());
var car = new Auto();
car.doors = 2;

In this example, the TypeScript version results in less syntax noise than the generated JavaScript version.  Of course, this is a comparison between TypeScript and its generated output rather than with the following syntax many would have written by hand:

wheels = wheels || 4;

Community Alignment

While TypeScript is a superset of JavaScript, this deserves some qualification.  Unlike languages such as CoffeeScript and Dart which also compile to JavaScript, TypeScript starts with the ECMAScript specification as the base of its language.  Nevertheless, TypeScript is still a separate language.

A team’s choice to maintain an application in TypeScript over JavaScript isn’t quite the same thing as choosing to implement an application in C# version 6 instead of C# version 5.  TypeScript isn’t simply the promise of “programming with the ECMAScript of tomorrow … today”; rather, it’s a language that layers a different programming paradigm on top of JavaScript.  While you can choose how much of the feature superset and programming paradigm you wish to use, the more features and approaches peculiar to TypeScript are adopted, the further the codebase will diverge from standard JavaScript syntax and conventions.

A codebase that fully leverages TypeScript can tend to look far more like C# than standard JavaScript.  In many ways, TypeScript is the perfect front-end development environment for C# developers as it provides a familiar syntax and programming paradigm to which they are already accustomed.  Unfortunately, developers who spend most of their time in C# often struggle with JavaScript syntax, conventions, and patterns.  The same might be expected to be true for TypeScript developers who utilize the language to emulate object-oriented development in C#.

Ultimately, the real negative I see with this is that (at least right now) TypeScript doesn’t represent how the majority of Web development is being done in the community.  This has implications for the availability of documentation, the availability of online help, candidate pool size, marketability, and skill portability.

Consider the following chart which compares the current job openings available for JavaScript and TypeScript:

Source: simplyhired.com – August 2016

Now, the fact that there may be far fewer TypeScript jobs out there than JavaScript jobs doesn’t mean that TypeScript isn’t going to be the next big thing.  What it does mean, however, is that you are going to experience less friction in the aforementioned areas if you stick with standard ECMAScript.

Alternatives

For those considering TypeScript, the following are a few options you might consider before converting just yet.

ECMAScript 2015

If you’re  interested in TypeScript and currently still writing ES5 code, one step you might consider is to begin using ES2015.  In John Papa’s article: “Understanding ES5, ES2015 and TypeScript”, he writes:

Why Not Just use ES2015?  That’s a great option! Learning ES2015 is a huge leap from ES5. Once you master ES2015, I argue that going from there to TypeScript is a very small step.

In many ways, taking the time to learn ECMAScript 2015 is the best option even if you think you’re ready to start using TypeScript.  Making the journey from ES5 to ES2015 and then later on to TypeScript will help you to clearly understand which new features are standard ECMAScript and which are TypeScript … knowledge you’re likely to be fuzzy on if you move straight from ES5 to TypeScript.
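For example, the Greeter from the earlier listing can be written in plain ES2015 with no TypeScript-specific syntax at all; only the Person interface and the type annotations were TypeScript additions:

class Greeter {
    constructor(message) {
        this.greeting = message;
    }
    greet(person) {
        return `${this.greeting} ${person.firstName} ${person.lastName}`;
    }
}

let greeter = new Greeter("Hello,");
let person = { firstName: "John", lastName: "Doe" };

document.body.innerHTML = greeter.greet(person);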

Flow

If you’ve already become convinced that you need a type system for JavaScript development, or you’re just looking to test the waters, you might consider a lighter-weight alternative to the TypeScript platform: Facebook’s Flow project.  Flow is a static type checker for JavaScript designed to provide the benefits of static type checking without losing the “feel” of coding in JavaScript, and in some cases it does a better job of catching type-related errors than TypeScript.
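A file opts in to checking with a // @flow comment, and the annotation syntax looks much like TypeScript’s.  Here’s a minimal sketch:

// @flow
function greet(person: { firstName: string, lastName: string }): string {
    return `Hello, ${person.firstName} ${person.lastName}`;
}

greet({ firstName: "John" });   // Flow flags the missing lastName property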

For the most part, Flow’s type system is identical to that of TypeScript, so it shouldn’t be too hard to convert to TypeScript down the road if desired.  Several IDEs have Flow support including WebStorm, Sublime Text, Atom, and of course Facebook’s own Nuclide.

As of August 2016, Flow also supports Windows.  Unfortunately this support has only recently become available, so Flow doesn’t yet enjoy the same IDE support on Windows as it does on OSX and Linux platforms.  IDE support can likely be expected to improve going forward.

Test-Driven Development

If you’ve found the primary appeal of TypeScript to be the immediate feedback you receive from the tooling, another methodology for achieving this (which has far greater benefits) is the practice of Test-Driven Development (TDD).  The TDD methodology not only provides a rapid feedback cycle, but (if done properly) results in duplication-free code that is more maintainable, constrains the team to developing only the behavior needed by the application, and produces a regression-test suite which serves both as a safety net for future modifications and as documentation for how the system is intended to be used.  Of course, these same benefits can be realized with TypeScript development as well, but teams practicing TDD may find less need for TypeScript’s compiler-generated error checking.

 

Conclusion

After taking some time to explore TypeScript, I’ve found that aspects of its ecosystem are very compelling, particularly the tooling that’s available for the platform.  Nevertheless, it still seems a bit early to know what role the platform will play in the future of Web development.

Personally, I like the JavaScript language and, while I see some advantages of introducing type checking, I think a wiser course for now would be to invest in learning ECMAScript 2015 and keep a watchful eye on TypeScript adoption going forward.

 

Git on Windows: Whence Cometh Configuration

On August 22, 2016, in Uncategorized, by derekgreer

I recently went through the process of setting up a new development environment on Windows which included installing Git for Windows. At one point in the course of tweaking my environment, I found myself trying to determine from which config file a particular setting originated. The command ‘git config --list’ showed the setting, but ‘git config --list --system’, ‘git config --list --global’, and ‘git config --list --local’ all failed to reflect the setting. Looking at the options for config, I discovered you can add a ‘--show-origin’ switch, which led to a discovery: Git for Windows has an additional location from which it derives your configuration.

It turns out, since the last time I installed git on Windows, a change was made for the purpose of sharing git configuration across different git projects (namely, libgit2 and Git for Windows) where a Windows-specific location is now used as the lowest setting precedence (i.e. the default settings). This is the file: C:\ProgramData\Git\config. It doesn’t appear git added a way to list or edit this file as a well-known location (e.g. ‘git config --list windows’), so it’s not particularly discoverable aside from knowing about the ‘--show-origin’ switch.
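For reference, here are the two commands I found useful for tracking a setting down (the setting name in the second command is just an example):

# list every setting along with the file it came from
git config --list --show-origin

# show where a single setting comes from
git config --show-origin --get core.autocrlf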

So the order in which Git for Windows sources configuration information is as follows:

  1. C:\ProgramData\Git\config
  2. system config (e.g. C:\Program Files\Git\mingw64\etc\gitconfig)
  3. global config (%HOMEPATH%\.gitconfig)
  4. local config (repository-specific .git/config)

Perhaps this article might help the next soul who finds themselves trying to figure out from where some seemingly magical git setting is originating.


I’ve always had an interest in application build processes. From the start of my career, I’ve generally been in the position of establishing the solution architecture for the projects I’ve participated in and this has usually involved establishing a baseline build process.

My career began as a Unix C developer while still in college, where much of my responsibility involved writing tools in both C and various Unix shell scripting languages which were deployed to other workstations throughout the country. From there, I moved on to Unix C-CGI Web development and worked a number of years with Makefiles. With the advent of Java, I began using tools like Ant and Maven for several more years before switching to the .Net platform, where I used open source build tools like NAnt until Microsoft introduced MSBuild with its 2.0 release. Upon moving to the Austin, TX area, I was greatly influenced by what was then the early seat of the Alt.Net movement. It was there that I abandoned what in hindsight has always been a ridiculous idea … trying to script a build using XML. For the next 4-5 years, I used Rake to define all of my builds. Starting last year, I began using Gulp and associated tooling on the Node platform for authoring .Net builds.

Throughout this journey of working with various build technologies, I’ve formed a few opinions along the way. One of these opinions is that the Build process shouldn’t be coupled to the Continuous Integration process.

A project should have a build process which exists and can be executed independent of the particular continuous integration tool one chooses. This allows builds to be created and maintained on the developer’s local machine. The particular build steps involved in building a given application are inherently part of its ontology. What compilers and preprocessors need to be used, how dependencies are obtained and published, when and how configuration values are supplied for different environments, how and where automated test suites are run, how the application distribution is created … all of these are concerns whose definition and orchestration are particular to a given project. Such concerns should be encapsulated in a build script which lives with the rest of the application source, not as discrete build steps defined within your CI tool.

Ideally, builds should never break, but when they do it’s important to resolve the issue as quickly as possible. Not being able to run a build locally means potentially having to repeatedly introduce changes until the build is fixed. This tends to pollute the source code commit history with comments like: “Fixing the build”, “Fixing the build for realz this time”, and “Please let this be it … I’m ready to go home”. Of course, there are times when a build can break because of environmental issues that may not be mirrored locally (e.g. lack of disk space, network related issues, 3rd-party software dependencies, etc.), but encapsulating as much of your build as possible goes a long way to keeping builds running along smoothly. Anyone on your team should be able to clone/check-out the project, issue a single command from the command line (e.g. gulp, rake, psake, etc.) and watch the full build process execute including any pre-processing steps, compilation, distribution packaging and even deployment to a target environment.
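As a rough sketch of what that single entry point might look like with Gulp (gulp 3.x style; the task bodies here are just placeholders):

// gulpfile.js
var gulp = require('gulp');

gulp.task('compile', function () {
  // invoke compilers and any pre-processing steps here
});

gulp.task('test', ['compile'], function () {
  // run the automated test suites here
});

gulp.task('package', ['test'], function () {
  // create the application distribution here
});

gulp.task('default', ['package']);

With something like this in place, the CI server’s build step collapses to running gulp from the project root.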

Aside from being able to run a build locally, decoupling the build from the CI process allows the technologies used by each to vary independently. Switching from one CI tool to another should ideally just require installing the software, pointing it to your source control, defining the single step to issue the build, and defining the triggers that initiate the process.

The creation of a project distribution and the scheduling mechanism for how often this happens are separate concerns. Just because a CI tool allows you to script out your build steps doesn’t mean you should.


Survey of Entity Framework Unit of Work Patterns

On November 1, 2015, in Uncategorized, by derekgreer

Earlier this year I joined a development team which chose Entity Framework for the persistence needs of a new greenfield project. While I’ve worked on a few projects which used Entity Framework here and there over the years, the bulk of my experience has been with NHibernate and, more recently, Dapper.Net. As a result, there hasn’t been all that much occasion for me to explore it in any level of depth until this year.

One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of various approaches I’ve encountered and include some thoughts about each approach.

Unit of Work

To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.

A Unit of Work can consist of different types of operations such as Web Service calls, database operations, or even in-memory operations; however, the focus of this article will be on approaches to facilitating the Unit of Work pattern with Entity Framework.

With that out of the way, let’s take a look at various approaches to facilitating the Unit of Work pattern with Entity Framework.

Implicit Transactions

The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.

Here’s an example:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  using (var context = new MyStoreContext())
  {
    customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    context.Customers.Add(customer);
    context.SaveChanges();
    return customer;
  }
}

The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks to this approach, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.

If you prefer to work directly with Entity Framework then this approach may be fine for simple needs.

TransactionScope

Another approach is to use the System.Transactions.TransactionScope class provided by the .Net framework. When any Entity Framework operation which causes a connection to be opened is used (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class, and the transaction will be committed once the TransactionScope is successfully completed. Here’s an example of this approach:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  using (var transaction = new TransactionScope())
  {
    using (var context = new MyStoreContext())
    {
      customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
      context.Customers.Add(customer);
      context.SaveChanges();
      transaction.Complete();
    }

    return customer;
  }
}

In general, I find using TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.Net, all versions of Entity Framework, and other ORMs, which makes it possible to combine multiple libraries within the same Unit of Work if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.

Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.
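As a contrived sketch, supplying TransactionOptions explicitly at least makes the isolation level and timeout intentional, and the comment below marks the sort of thing that can trigger escalation:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  var options = new TransactionOptions
  {
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(30)
  };

  using (var transaction = new TransactionScope(TransactionScopeOption.Required, options))
  using (var context = new MyStoreContext())
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    context.Customers.Add(customer);
    context.SaveChanges();

    // Opening a second connection here (another DbContext, a raw SqlConnection, etc.)
    // would enlist it in the same ambient transaction and may escalate it to a
    // distributed transaction depending on the database version.

    transaction.Complete();
    return customer;
  }
}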

While I find using the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code. While it’s a viable choice, I would recommend inverting the concerns of managing the Unit of Work boundary as shown in approaches we’ll look at later.

ADO.Net Transactions

This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
  using (var connection = new SqlConnection(connectionString))
  {
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
      using (var context = new MyStoreContext(connection))
      {
        context.Database.UseTransaction(transaction);
        try
        {
          customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
          context.Customers.Add(customer);
          context.SaveChanges();
        }
        catch (Exception e)
        {
          transaction.Rollback();
          throw;
        }
      }

      transaction.Commit();
      return customer;
    }
  }
}

As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. While not something I’d recommend standardizing upon, this approach provides another avenue for sharing transactions between Entity Framework and straight ADO.Net code which might prove useful in certain situations. In general, I wouldn’t recommend such an approach.

Entity Framework Transactions

The relative newcomer to the mix is the new transaction API introduced with Entity Framework 6. Here’s a basic example of its use:

public Customer CreateCustomer(CreateCustomerRequest request)
{
  Customer customer = null;

  using (var context = new MyStoreContext())
  {
    using (var transaction = context.Database.BeginTransaction())
    {
      try
      {
        customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        context.Customers.Add(customer);
        context.SaveChanges();
        transaction.Commit();
      }
      catch (Exception e)
      {
        transaction.Rollback();
        throw;
      }
    }
  }

  return customer;
}

This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operation participation. Similar to a couple of the previous approaches we’ve already considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling this with other approaches we’ll look at later to improve the readability and maintainability of your application services.

Unit of Work Repository Manager

The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth in Microsoft’s guidance on the topic. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:

public interface IUnitOfWork
{
  ICustomerRepository CustomerRepository { get; }
  IOrderRepository OrderRepository { get; }
  void Save();
}

public class UnitOfWork : IDisposable, IUnitOfWork
{
  readonly MyContext _context = new MyContext();
  ICustomerRepository _customerRepository;
  IOrderRepository _orderRepository;

  public ICustomerRepository CustomerRepository
  {
    get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
  }

  public IOrderRepository OrderRepository
  {
    get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
  }

  public void Dispose()
  {
    if (_context != null)
    {
      _context.Dispose();
    }
  }

  public void Save()
  {
    _context.SaveChanges();
  }
}

public class CustomerService : ICustomerService
{
  readonly IUnitOfWork _unitOfWork;

  public CustomerService(IUnitOfWork unitOfWork)
  {
    _unitOfWork = unitOfWork;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _unitOfWork.CustomerRepository.Add(customer);
    _unitOfWork.Save();
  }
}

It isn’t hard to imagine how this approach was conceived given it closely mirrors the typical implementation of the DbContext instance you find in Entity Framework guidance where public instances of DbSet are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.

First, this approach leads to opaque dependencies. Due to the fact that classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).

Second, this violates the Open/Closed Principle. To add new aggregate roots to the system requires modifying the UnitOfWork each time.

Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or rollback a set of operations atomically. The instantiation and management of repositories or any other component which may wish to enlist in a unit of work is a separate concern.

Lastly, this results in a nominal abstraction which is semantically coupled with Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), this approach is more of a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction were you to have started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following the Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.

Injected Unit of Work and Repositories

For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.

Here is an example:

public class CustomerService : ICustomerService
{
  readonly IUnitOfWork _unitOfWork;
  readonly ICustomerRepository _customerRepository;

  public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
  {
    _unitOfWork = unitOfWork;
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _customerRepository.Add(customer);
    _unitOfWork.Save();
  }
}

While this approach improves upon the opaque design of the Repository Manager, there are several issues I find with this approach as well.

Similar to the first example, this UnitOfWork implementation is still semantically coupled to how Entity Framework is urging you to think about things. Entity Framework wants you to call SaveChanges() whenever you’re ready to flush any INSERT, UPDATE, or DELETE operations you’ve issued against the database and this abstraction basically surfaces this behavior. If you were to use an alternate framework that supported a different flushing model (e.g. NHibernate), you likely wouldn’t end up with the same abstraction.

Moreover, this approach has no definitive Unit of Work boundary. With this approach, you aren’t defining a logical Unit of Work, but are merely injecting a UnitOfWork you can participate within. When you invoke the underlying DbContext.SaveChanges() method, it isn’t explicit what work will be committed.

While this approach corrects a few design issues I find with the Repository Manager, overall I like this approach even less. At least with the Repository Manager approach you have a defined Unit of Work boundary which is kind of the whole point. My recommendation would be to avoid this approach as well.

Repository SaveChanges Method

The next strategy is basically a variation on the previous one. Rather than injecting a separate type whose sole purpose is to provide an indirect way to call the SaveChanges() method, some merely expose this through the Repository:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;

  public CustomerService(ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _customerRepository.Add(customer);
    _customerRepository.SaveChanges();
  }
}

This approach shares many of the same issues with the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to be a virtual collection of all the entities within your system of a given type, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.

Unit of Work Per Request

A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever mechanism is used to facilitate a Unit of Work is configured with a DI container using a per-HttpRequest lifetime scope; the Unit of Work boundary is opened when the first component is injected with the UnitOfWork, and committed or rolled back when the HttpRequest lifetime scope is disposed by the container.

There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:

builder.RegisterType<MyStoreContext>()
        .As<DbContext>()
        .InstancePerRequest()
        .OnActivating(x =>
        {
          // start a transaction
        })
        .OnRelease(context =>
        {
          try
          {
            // commit or rollback the transaction
          }
          catch (Exception e)
          {
            // log the exception
            throw;
          }
        });

public class SomeService : ISomeService
{
  public void DoSomething()
  {
    // do some work
  }
}

While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, the biggest issue with this is when an error happens to occur. When the application can’t successfully commit a transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this occurs, you may end up telling your customer that an operation succeeded when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.

While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.

Instantiated Unit of Work

The next strategy involves instantiating a UnitOfWork implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6 to define a transaction boundary within the application service. Here’s an example:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;

  public CustomerService(ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = new UnitOfWork())
    {
      try
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        unitOfWork.Commit();
      }
      catch (Exception ex)
      {
        unitOfWork.Rollback();
      }
    }
  }
}

Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit Of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.

Injected Unit of Work Factory

This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows for an alternate implementation to be provided by the factory:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;
  readonly IUnitOfWorkFactory _unitOfWorkFactory;

  public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
    _unitOfWorkFactory = unitOfWorkFactory;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = _unitOfWorkFactory.Create())
    {
      try
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        unitOfWork.Commit();
      }
      catch (Exception ex)
      {
        unitOfWork.Rollback();
      }
    }
  }
}

While I personally prefer to invert such concerns, I consider this to be a sound approach.

As a side note, if you decide to use this approach, you might also consider utilizing your DI container to just inject a Func<IUnitOfWork> to avoid the overhead of maintaining an IUnitOfWorkFactory abstraction and implementation.
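Autofac, for instance, will automatically supply a Func<IUnitOfWork> for any registered IUnitOfWork, so the factory abstraction can be replaced with the delegate itself. A sketch:

// registration
builder.RegisterType<UnitOfWork>().As<IUnitOfWork>();

// usage
public class CustomerService : ICustomerService
{
  readonly Func<IUnitOfWork> _createUnitOfWork;
  readonly ICustomerRepository _customerRepository;

  public CustomerService(Func<IUnitOfWork> createUnitOfWork, ICustomerRepository customerRepository)
  {
    _createUnitOfWork = createUnitOfWork;
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = _createUnitOfWork())
    {
      var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
      _customerRepository.Add(customer);
      unitOfWork.Commit();
    }
  }
}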

Unit of Work ActionFilterAttribute

For those who prefer to invert the Unit of Work concerns as I do, the following approach provides an easy to implement solution for those using ASP.Net MVC and/or Web API. This technique involves creating a custom Action filter which can be used to control the boundary of a Unit of Work at the Controller action level. The particular implementation may vary, but here’s a general template:

public class UnitOfWorkFilter : ActionFilterAttribute
{
  public override void OnActionExecuting(ActionExecutingContext filterContext)
  {
    // begin transaction
  }

  public override void OnActionExecuted(ActionExecutedContext filterContext)
  {
    // commit/rollback transaction
  }
}

The benefits of this approach are that it’s easy to implement and that it eliminates the need for introducing repetitive infrastructure code into your application services. This attribute can be registered with the global action filters, or for the more discriminant, only placed on actions resulting in state changes to the database. Overall, this would be my recommended approach for Web applications. It’s easy to implement, simple, and keeps your code clean.
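For what it’s worth, here’s a sketch of one possible implementation built on TransactionScope (the HttpContext.Items key is arbitrary):

public class UnitOfWorkFilter : ActionFilterAttribute
{
  const string TransactionKey = "UnitOfWorkFilter.Transaction";

  public override void OnActionExecuting(ActionExecutingContext filterContext)
  {
    filterContext.HttpContext.Items[TransactionKey] = new TransactionScope();
  }

  public override void OnActionExecuted(ActionExecutedContext filterContext)
  {
    var transaction = (TransactionScope)filterContext.HttpContext.Items[TransactionKey];

    using (transaction)
    {
      // only complete the scope when the action finished without an unhandled exception
      if (filterContext.Exception == null)
      {
        transaction.Complete();
      }
    }
  }
}

// global registration
GlobalFilters.Filters.Add(new UnitOfWorkFilter());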

Unit of Work Decorator

A similar approach to the use of a custom ActionFilterAttribute is the creation of a custom decorator. This approach can be accomplished by utilizing a DI container to automatically decorate specific application service interfaces with a class which implements a Unit of Work boundary.

Here is a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container, which presumes that some form of command/command-handler pattern is being utilized (e.g. frameworks like MediatR, ShortBus, etc.):

// DI Registration
builder.RegisterGenericDecorator(
     typeof(TransactionRequestHandler<,>), // the decorator instance
     typeof(IRequestHandler<,>), // the types to decorate
    "requestHandler", // the name of the key to decorate
     null); // the name of the key to this decorator



public class TransactionRequestHandler<TRequest, TResponse> : IRequestHandler<TRequest, TResponse> where TResponse : ApplicationResponse
{
  readonly DbContext _context;
  readonly IRequestHandler<TRequest, TResponse> _decorated;

  public TransactionRequestHandler(IRequestHandler<TRequest, TResponse> decorated, DbContext context)
  {
    _decorated = decorated;
    _context = context;
  }

  public TResponse Handle(TRequest request)
  {
    TResponse response;

    // Open transaction here

    try
    {
      response = _decorated.Handle(request);

      // commit transaction

    }
    catch (Exception e)
    {
      //rollback transaction
      throw;
    }

    return response;
  }
}


public class SomeRequestHandler : IRequestHandler<SomeRequest, ApplicationResponse>
{
  public ApplicationResponse Handle(SomeRequest request)
  {
    // do some work
    return new SuccessResponse();
  }
}

While this approach requires a bit of setup, it provides an alternate means of facilitating the Unit of Work pattern through a decorator which can be used by other consumers of the application layer aside from just ASP.Net (i.e. Windows services, CLI, etc.). It also provides the ability to move the Unit of Work boundary closer to the point of need for those who would rather provide any error handling prior to returning control to the application service client (e.g. the Controller actions), as well as giving more control over the types of operations decorated (e.g. IQueryHandler vs. ICommandHandler). For Web applications, I’d recommend trying the custom Action Filter approach first, as it’s easier to implement and doesn’t presume upon the design of your application layer, but this is certainly a good approach if it fits your needs.

Conclusion

Out of the approaches I’ve evaluated, there are several that I see as sound approaches which maintain some minimum adherence to good design practices. Of course, which approach is best for your application will be dependent upon the context of what you’re doing and to some extent the design values of your team.


Introducing NUnit.Specifications

On March 8, 2015, in Uncategorized, by derekgreer

 

I recently started working with a new team that uses NUnit as their testing framework.  While I think NUnit is a solid framework, I don’t think the default API and style lead to effective tests.

As an advocate of Test-Driven Development, I’ve always appreciated how context/specification-style frameworks such as Machine.Specifications (MSpec) allow for the expression of executable specifications which model how a system is expected to be used rather than the typical unit-test style of testing which tends to obscure the overall purpose of the system.

To facilitate a context/specification-style API, I created a base class which makes use of the hooks provided by the NUnit testing framework to emulate MSpec.  I’ve published this code under the project name NUnit.Specifications.

The following is an example NUnit test written using the ContextSpecification base class from NUnit.Specifications along with the Should assertion library:

[screenshot of the example specification]
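Since the example appears as an image in the original post, here is a rough sketch of what such a specification looks like. The Greeter subject is made up, and the Establish/Because/It members mirror the MSpec conventions the library emulates, so the exact API may differ slightly:

public class when_greeting_a_person : ContextSpecification
{
  static Greeter _greeter;
  static string _result;

  Establish context = () => _greeter = new Greeter("Hello,");

  Because of = () => _result = _greeter.Greet("John", "Doe");

  It should_include_the_person_name = () => _result.ShouldEqual("Hello, John Doe");
}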

One nice benefit of building on top of NUnit is the wide-spread tool support available.  Here is the test as seen through various test runners:

ReSharper Test Runner:

[screenshot]

TestDriven.Net (see notes below):

[screenshot]

NUnit Test Runner:

[screenshot]

NUnit Test Adapter for Visual Studio:

[screenshot]

 

One caveat I discovered with the TestDriven.Net runner is its failure to recognize tests unless the specification references types from the NUnit.Framework namespace (e.g. TestFixtureAttribute, CategoryAttribute, use of Assert, etc.).  That is to say, it didn’t seem to be enough that the spec inherited from a base type with NUnit attributes; something in the derived class had to reference a type from the NUnit.Framework namespace for the test to be recognized.  Therefore, the TestDriven.Net results shown above were actually achieved by annotating the class with [Category("component")] explicitly.

 

Other Stuff

As a convenience, NUnit.Specifications also provides attributes for denoting categories of Unit, Component, Integration, Acceptance, and Subcutaneous as well as a Catch class (as provided by the MSpec library) for working with exceptions.

You can obtain NUnit.Specifications from NuGet or grab the source from GitHub.


Being Agile

On March 5, 2014, in Uncategorized, by derekgreer

When the term “agile” is used in reference to one’s development processes, it more often than not seems to be used in a monolithic way.  It isn’t that many aren’t cognizant of the fact that people tend to use a subset, combination, or modified form of the main agile processes marketed today, but even in this recognition there seems to be a tendency to  think about each variation in monolithic terms.

Which Process Do You Use?

If you’ve ever attended a local agile user group to hear various processes compared, you may have encountered the speaker asking for a show of hands to find out what various processes people are using.  If you’ve been to one of these meetings, it might have gone a little like this:

Let’s see a show of hands of those in the room who are using Scrum.  That’s quite a few of you.  How about Scrum-ban?  Nice.  Now let’s see a show of hands for people using Extreme Programming or XP.  Not as many of you guys here.  Who’s using the Rational Unified Process or RUP?  Anyone?  Who isn’t currently using an agile process, but came here today to learn more about agile?  Glad to have you guys here!  I hope this session will be informative.  So, is anyone using any other process we haven’t mentioned?  Yes, you sir.  What is your group using?

Um, well, we use a process we call ‘Scrum-but’.  We use Scrum, but we leave out some things. [a chuckle is heard from a few individuals].

While such a meeting generally turns to discussing the various attributes of different processes, usually hitting major highlights of Scrum such as iterations, stand-ups, planning meetings, user stories, retrospectives, etc., it still does so in terms of the attributes of different monolithic processes.  The deficiency in this line of thinking is that it trains people to think in terms of what percentage of a name-brand agile process their team is adhering to, or should seek to adopt, rather than thinking about what problems these processes individually have evolved to solve.  “Which Process Do You Use” is the wrong question.

Are you Agile?

Many teams like to say “We’re an agile development shop”.  Now to be fair, when we want to convey to someone which common group of practices we follow, it can be useful to use labels such as Scrum, XP, Scrum-ban, etc.  That said, there seem to be an awful lot of shops that say they are agile when what they mean is that they do “stand-ups” (i.e. daily status meetings for keeping their managers informed about what they’re up to) and “iterate” (i.e. chunky waterfall) their way to a deadline that’s been handed down by upper management or a sales department.

What is Agile?

If you ask what agile is in a typical development shop today, you’re more likely to find yourself in a conversation about Scrum or some other process than about the actual meaning of the word.  Let’s actually go back and look at the definition:

ag·ile -  adjective \ˈa-jəl, -ˌjī(-ə)l\

1:  marked by ready ability to move with quick easy grace <an agile dancer>
2:  having a quick resourceful and adaptable character <an agile mind>

Based on this definition from Merriam-Webster’s dictionary, being agile is “an ability to change, or adapt to change, quickly”.  While communicating that you adhere to a given agile process may have its usefulness at times, thinking of your process in this monolithic way doesn’t promote the kind of thinking that leads to continuous improvement.  Rather than thinking in terms of which process we use, we should think in terms of what aspects of change our processes help us adapt to.

Toward An Agile View of Process

It seems that many team’s first foray into agile processes is the selection of Scrum by their management.  They’ve heard about this Scrum and how it can save them money, so they’ve sent the managers and Business Analysts off to Scrum Master training to outfit them with their Scrum-capes and Scrum-tights.


Introducing a process like Scrum (or whatever portions of Scrum a company’s existing  process will tolerate) will sometimes improve upon matters, but only insofar as one’s cargo-cult emulation of the prescribed practices happen to match up with the problems for which they were conceived.  Unfortunately this approach to adopting agile processes often seems to lead to a bunch of people going through the motions without really understanding what the purpose is.  Worse, when the local Scrum training consultants sell them on the fact that they don’t really have to give up things like deadlines, using business analysts to gather all the requirements, or otherwise restructuring their organization, they generally end up with some empty shell of a process which is really nothing more than their old waterfall process with more micro-management.

A better approach is to first learn about the types of issues different agile practices seek to address and then consider how your team’s existing process can improve if each practice were applied individually.  Rather than thinking of your team as “agile” or “not agile”, consider asking the following types of questions:

Is my team agile WITH RESPECT TO …

  - changes to the product’s desired features?
  - changes to the product’s code base?
  - changes to the team’s understanding of the domain?
  - changes to the team’s understanding of the technologies used?
  - changes to team members’ hours of availability?
  - changes to individuals on the team?
  - changes to skillsets within the team?
  - changes to the cost of materials and resources?
  - changes to the compatibility or availability of 3rd-party software?
  - etc.

Different agile practices address different kinds of problems, but to really become an agile team you need to learn how to identify problems and solutions on an ongoing basis, not just implement processes.  Let’s stop thinking of ourselves as agile or not agile and start asking the question “What are we agile at?”


Expected Objects Custom Comparisons

On November 17, 2013, in Uncategorized, by derekgreer

ExpectedObjects is a testing library I developed a few years ago to facilitate using the Expected Objects pattern within my specifications to avoid obscure tests.  You can find the original introduction to the library here.

As of version 1.1.0, the ExpectedObjects library has been updated to include a feature called Custom Comparisons.  The standard behavior of the library is to traverse a strategy chain (which is itself configurable) to determine which comparison strategy is to be used for each type of object encountered within the object graph.  The Custom Comparisons feature allows you to override this behavior for specific properties.

For example, let’s say we’re writing an end-to-end test which validates a Receipt class as follows:

public class Receipt
{
    public string Name { get; set; }
    public DateTime TransactionDate { get; set; }
    public string VerificationCode { get; set; }
}

 

Given such a class, the VerificationCode property would probably not be a value you could anticipate.  In such a case, while you can’t verify that the property has a specific value, you may care that it at least has some value.  This is where the Custom Comparisons feature can help.  We can verify that the actual Receipt received matches the expected receipt structure using the following expected object configuration:

var expected = new
{
	Name = "John Doe",
	TransactionDate = DateTime.Today,
	VerificationCode = Expect.NotNull()
}.ToExpectedObject();

var actual = new Receipt
{
	Name = "John Doe",
	TransactionDate = DateTime.Today,
	VerificationCode = "ABC123"
};

expected.ShouldMatch(actual);

In the event that the VerificationCode property is null, the library will raise an exception with the following message:

For Receipt.VerificationCode, expected a non-null value but found [null].

The ExpectedObjects library currently provides a static Expect class which includes convenience methods to check for null, not null, and an Any<T> comparison for checking that an object is of a specific type (e.g. Expect.Any<Receipt>()).  To supply your own comparisons, simply implement the IComparison interface which defines the custom comparison and the text to include within any exception messages raised (e.g. “For SomeType.SomeProperty, expected [text you supply here] but found ‘42’”).


This is the first article in a new sporadic series I’ll contribute to from time to time wherein I’ll discuss some noteworthy issues I’ve wrestled with. In this installment, I’ll be discussing an NHibernate issue which took me some time to work through. So, let’s dive into the story …

The Context

In an application I was recently working on, a need arose to modify a section of code involving two entities which should have been modeled using a parent/child relationship but which only had a loose association in the database. The primary table in the database schema for what needed to be the parent object in the domain only contained an unenforced foreign key column which matched up with a candidate key on the table used for what needed to be the child object. In the section of code I needed to modify, a View Model was being created by first retrieving data for the parent object and subsequently for the child object. I’m not exactly sure what led to this path, but I think it had something to do with the original developer’s attempt at using a surrogate key strategy for all the tables and later attempts by others to pull the data into a domain model with NHibernate.

At any rate, while I wasn’t in a position to revamp the whole design, I knew there was a way to express many-to-one mappings in NHibernate using non-primary keys, so after a little searching and some trial-and-error I got the parent entity referencing the child entity with a Fluent NHibernate Auto-Mapping configuration similar to the following:

return AutoMap.AssemblyOf<Parent>(new AutomappingConfiguration()) // <Parent> is a placeholder for the actual entity type
  .Override<Parent>(map => map.References(p => p.Child, "ParentColumnChildKeyName")
    .PropertyRef("ChildCandidateKeyColumnName")
    .Fetch.Join());

 

Part of the changes required to make this work was some refactoring of an import job which relied upon the domain model and mappings to populate the parent and child data in the database. After changing the parent entity to reference the child entity instead of just holding a candidate key to it, I needed to modify the import job to persist the relationship between the parent and the child. To do this, I injected a pre-existing ChildRepository to query for existing instances of the child entities (which had their own separate import process) so I could associate them with the parent entities upon saving. All of the changes worked as expected for the client portion of the application, but they broke some acceptance tests for the import job. The error I started receiving in the tests was as follows:

null id in “MyEntityType” entry (don't flush the Session after an exception occurs)

In this case, “MyEntityType” was another entity which had a many-to-one mapping to the aforementioned parent entity. After looking over the code and scratching my head for a bit, I decided to do a search on this particular error and read a few articles which at first didn't seem to speak to my scenario. The advice basically boiled down to “Don't try to do stuff with the session after you receive an error.” That certainly made sense, but upon stepping through the code I couldn't see anywhere I was catching an error and proceeding to do something further with the session. I then decided to add a try/catch around the offending code and suddenly I saw the issue: I was saving an entity associated with one open session together with an entity loaded through a different open session.
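Boiled down, the problem looked something like the following sketch (the types, variables, and queries here are hypothetical placeholders rather than the actual application code):

// The child is loaded through one ISession while the parent referencing it
// is saved through a different ISession, which can surface as a confusing
// error when the parent's session is flushed.
using (var childSession = sessionFactory.OpenSession())
using (var parentSession = sessionFactory.OpenSession())
{
	var child = childSession.Get<Child>(childId);

	var parent = new Parent { Child = child };
	parentSession.Save(parent);
	parentSession.Flush();
}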

The Solution

Ultimately, the reason I couldn't see the error was an issue with one incarnation of some common infrastructure code my team uses when working with NHibernate. We use Autofac for dependency injection, and to facilitate transactions we use Autofac's OnActivating() and OnRelease() methods to begin an NHibernate transaction and to handle the rollback or commit of the transaction when complete. Here was the offending code:

builder.Register(c => c.ResolveNamed<ISessionFactory>(RegistrationKey).OpenSession())
	.As<ISession>()
	.OnActivating(x => x.Instance.BeginTransaction())
	.OnRelease(session =>
		{
			try
			{
				if (!session.Transaction.WasRolledBack && session.Transaction.IsActive)
				{
					session.Transaction.Commit();
				}
			}
			finally
			{
				session.Close();
				session.Dispose();
			}
		});

When used within the context of our Web applications, this code would include a call to register the ISession with an HTTP request lifetime scope, but this import job didn't require a shared ISession prior to my changes. To fix the problem, I added a call to register the ISession as InstancePerLifetimeScope(), which causes the same lifetime scope used to resolve the job to also be used for resolving any instances of ISession. Additionally, I added a try/catch/throw around the session handling to at least provide some logging of similar issues should this ever come up again.
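As a rough sketch of the fix, assuming the registration shown above, the change amounted to adding a lifetime scope directive to the ISession registration:

builder.Register(c => c.ResolveNamed<ISessionFactory>(RegistrationKey).OpenSession())
	.As<ISession>()
	.InstancePerLifetimeScope() // share one ISession per lifetime scope (e.g. per resolved job)
	.OnActivating(x => x.Instance.BeginTransaction())
	.OnRelease(session => { /* commit or roll back and dispose as shown above */ });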

Tagged with:  

Introducing RabbitBus

On June 1, 2012, in Uncategorized, by derekgreer

What Is It?

RabbitBus is a .Net client API for use with RabbitMQ.  RabbitBus was designed to make working with RabbitMQ easy by providing a fluent interface that emphasizes discoverability and by supplying commonly needed constructs not offered through the official RabbitMQ .Net client API.

 

How Do I Use It?

The RabbitBus library was designed to allow for the centralization of all RabbitMQ configuration at application startup, separating the concerns of routing, serialization, and error handling from the central concerns of publishing and consuming messages.

RabbitBus works with object-based messages.  For example, if you have an application from which you would like to publish status update messages, you might model your message using the following class:

[Serializable]
public class StatusUpdate
{
  public StatusUpdate(string status)
  {
    Status = status;
  }

  public string Status { get; set; }
}

After configuring how messages are to be handled, you’ll then use an instance of a Bus type to publish or subscribe to each message.

Configuration of the Bus is handled through a BusBuilder.  The BusBuilder type provides an API for specifying how serialization, publication, consumption, and other concerns will be handled by the Bus.

If you’re already familiar with RabbitMQ concepts then you should find working with RabbitBus to be fairly easy.  The following demonstrates some of the basic usage scenarios:

Message Publication

To configure a producer application to publish messages of type StatusUpdate to a direct exchange named “status-update-exchange” on localhost, you would use the following configuration:

 
Bus bus = new BusBuilder()
  .Configure(ctx => ctx.Publish<StatusUpdate>()
                         .WithExchange("status-update-exchange"))
  .Build();
bus.Connect();

To publish a StatusUpdate message, you would then make the following invocation:

bus.Publish(new StatusUpdate("OK"));

Message Subscription

To configure a consumer application to subscribe to StatusUpdate messages on localhost, you would use the following configuration:

Bus bus = new BusBuilder()
  .Configure(ctx => ctx.Consume<StatusUpdate>()
                         .WithExchange("status-update-exchange")
                         .WithQueue("status-update-queue"))
  .Build();

To subscribe to StatusUpdate messages, you would then make the following invocation:

bus.Subscribe<StatusUpdate>(messageContext => { /* handle message */ });

 

What Other Features Are Provided?

RabbitBus provides the following features:

  • support for all AMQP 0.9.1 exchange types (i.e. direct, fanout, topic, and headers)
  • remote procedure calls (RPC)
  • dead-letter queue support
  • convention based auto-subscription
  • RabbitMQ push and pull API support
  • extensible serialization (Binary serialization by default, Json serialization provided by RabbitBus.Serialization.Json)
  • customizable error handling
  • RabbitMQ server restart recovery
  • configurable offline queuing support
  • logging

 

Where Can I Learn More?

You can find more information about how to use RabbitBus on the RabbitBus Wiki.  Additionally, RabbitBus was developed using Test-Driven Development, and care was taken to write its executable specification suite so that it also demonstrates the API's intended use.

 

Where Do I Get It?

RabbitBus is available as a NuGet package, and the source is available on GitHub.

Tagged with: