Aspiring Craftsman

pursuing well-crafted software

    .Net Project Builds with Node Package Manager


    A few years ago, I wrote an article entitled Separation of Concerns: Application Builds & Continuous Integration wherein I discussed the benefits of separating project builds from CI/CD concerns by creating a local build script which lives with your project. Not long after writing that article, I was turned on to what I’ve come to believe is one of the easiest tools I’ve encountered for managing .Net project builds thus far: npm.

    Most development platforms provide a native task-based build technology. Microsoft’s tooling for these needs is MSBuild: a command-line tool whose build files double as Visual Studio’s project definition files. I used MSBuild for scripting custom build concerns for a couple of years, but found it to be awkward and cumbersome. Around 2007, I abandoned MSBuild for creating builds and began using Rake. While it had the downside of requiring a bit of knowledge of Ruby, it was a popular choice among those willing to look outside of the Microsoft camp for tooling, and it had community support for working with .Net builds through the Albacore library. I’ve used a few different technologies since, but about 5 years ago I saw a demonstration of the use of npm for building .Net projects at a conference and was immediately sold. When used well, it really is the easiest and most terse way to script a custom build for the .Net platform that I’ve encountered.

    “So what’s special about npm?” you might ask. The primary appeal of using npm for building applications is that it’s easy to use. Essentially, it’s just an orchestration of shell commands.

    Tasks

    With other build tools, you’re often required to know a specific language in addition to learning special constructs peculiar to the build tool in order to create build tasks. In contrast, npm’s expected package.json file simply defines a set of named shell commands:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "echo Clean the project.",
        "restore": "echo Restore dependencies.",
        "compile": "echo Compile the project.",
        "test": "echo Run the tests.",
        "dist": "echo Create a distribution."
      },
      "author": "Some author",
      "license": "ISC"
    }
    

    As with other build tools, npm provides the ability to define dependencies between build tasks. This is done using pre- and post- lifecycle scripts. Simply put, any script run by npm will first execute a script of the same name prefixed with “pre” when present, and will subsequently execute a script of the same name prefixed with “post” when present. For example:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "echo Clean the project.",
        "prerestore": "npm run clean",
        "restore": "echo Restore dependencies.",
        "precompile": "npm run restore",
        "compile": "echo Compile the project.",
        "pretest": "npm run compile",
        "test": "echo Run the tests.",
        "prebuild": "npm run test",
        "build": "echo Publish a distribution."
      },
      "author": "Some author",
      "license": "ISC"
    }
    

    Based on the above package.json file, issuing “npm run build” will result in running the tasks of clean, restore, compile, test, and build in that order by virtue of each declaring an appropriate dependency.

    If you’re okay with giving up a fully-specified dependency chain in which a subset of the build can be initiated at any stage (e.g. running “npm run test” and triggering clean, restore, and compile first), the above orchestration can be simplified by installing the npm-run-all node dependency and defining a single pre- lifecycle script for the main build target:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "echo Clean the project.",
        "restore": "echo Restore dependencies.",
        "compile": "echo Compile the project.",
        "test": "echo Run the tests.",
        "prebuild": "npm-run-all clean restore compile test",
        "build": "echo Publish a distribution."
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    In this example, issuing “npm run build” will result in the prebuild script executing npm-run-all with the parameters clean, restore, compile, and test, which it will execute in the order listed.

    Variables

    Aside from understanding how to utilize the pre- and post- lifecycle scripts to denote task dependencies, the only other thing you really need to know is how to work with variables.

    Node’s npm command facilitates defining variables both as command-line parameters and as package properties. When npm executes, each of the properties declared within package.json is flattened and prefixed with “npm_package_”. For example, the standard “version” property can be used as part of a dotnet build to set the project version by referencing ${npm_package_version}:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "configuration": "Release",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=${npm_package_version}"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    Command-line parameters can also be passed to npm and are similarly prefixed with “npm_config_” with any dashes (“-”) replaced with underscores (“_”). For example, the previous version setting could be passed to dotnet.exe in the following version of package.json by issuing the below command:

    npm run build --product-version=2.0.0
    
    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "configuration": "Release",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=${npm_config_product_version}"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    (Note: the parameter --version is an npm parameter for printing the version of npm being executed and therefore can’t be used as a script parameter.)

    The only other important thing to understand about the use of variables with npm is that the method of dereferencing them depends upon the shell used. When using npm on Windows, the default shell is cmd.exe. If using the default shell on Windows, the version parameter would need to be dereferenced as %npm_config_product_version%:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "configuration": "Release",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=%npm_config_product_version%"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    Until recently, I used a node package named “cross-env” which allows you to normalize how you dereference variables regardless of platform. However, for several reasons (cross-env being placed in maintenance mode, the added dependency overhead, the syntax noise, and the desire to support advanced variable expansion cases such as default values), I’d now recommend supporting cross-platform execution by simply standardizing on a single shell (e.g. bash). With the introduction of the Windows Subsystem for Linux and the virtual ubiquity of git for version control, most Windows development machines already have a bash shell available. To configure npm to use bash at the project level, just create a file named .npmrc at the package root containing the following line:

    script-shell=bash
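
    With bash as the script shell, scripts can also take advantage of bash features such as default-value variable expansion. For example, the following sketch (building on the earlier examples) falls back to the package version when no product-version parameter is supplied:

    {
      "name": "example",
      "version": "1.0.0",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=${npm_config_product_version:-$npm_package_version}"
      }
    }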
    

    Using Node Packages

    While not necessary, there are many CLI node packages that can easily be leveraged to aid in authoring your builds. For example, a package named “rimraf”, which functions like Linux’s “rm -rf” command, is a utility you can use to implement a clean script for recursively deleting any temporary build folders created by previous builds. In the following package.json, the build script packs a NuGet package and outputs it to a dist folder in the package root, and rimraf is used to delete this folder as part of the build script’s dependencies:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "rimraf dist",
        "prebuild": "npm run clean",
        "build": "dotnet pack ./src/ExampleLibrary/ExampleLibrary.csproj -o dist /p:Version=${npm_package_version}"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5",
        "rimraf": "^3.0.2"
      }
    }
    

    If you’d like to see a more complete example of npm at work, you can check out the build for ConventionalOptions, which supports tasks for building, testing, packaging, and publishing NuGet packages for both release and prerelease versions of the library.

    Conventional Options


    I’ve really enjoyed working with the Microsoft configuration libraries introduced with .Net Core approximately 5 years ago. The older XML-based API was quite a pain to work with, so the ConfigurationBuilder and associated types filled a long-overdue need for the platform.

    I had long since adopted a practice of creating discrete configuration classes populated and registered with a DI container over direct use of the ConfigurationManager class within components, so I was pleased to see the platform nudge developers in this direction through the introduction of the IOptions type.

    A few aspects of the prescribed use of the IOptions type I wasn’t particularly fond of were needing to inject IOptions<T> rather than the actual options type, taking a dependency upon the Microsoft.Extensions.Options package from my library packages, and the ceremony of binding the options to the IConfiguration instance. To address these concerns, I wrote some extension methods which took care of binding the type to my configuration by convention (i.e. binding a type with a suffix of Options to a section corresponding to the option type's prefix) and registering it with the container.
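
    To make the convention concrete, the following is a minimal sketch of what such an extension method might look like (a simplification for illustration, not the library’s actual implementation), shown here against Microsoft’s container:

    using System;
    using System.Linq;
    using System.Reflection;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    public static class OptionsRegistrationExtensions
    {
        // For each concrete type whose name ends with "Options", bind it to the configuration
        // section matching its prefix (e.g. OrderServiceOptions -> "OrderService") and register
        // the bound instance so the options type itself can be injected.
        public static IServiceCollection RegisterOptionsFromAssemblies(
            this IServiceCollection services, IConfiguration configuration, params Assembly[] assemblies)
        {
            var optionTypes = assemblies
                .SelectMany(assembly => assembly.GetTypes())
                .Where(type => type.IsClass && !type.IsAbstract && type.Name.EndsWith("Options"));

            foreach (var optionType in optionTypes)
            {
                var sectionName = optionType.Name.Substring(0, optionType.Name.Length - "Options".Length);
                var options = Activator.CreateInstance(optionType);
                configuration.GetSection(sectionName).Bind(options);
                services.AddSingleton(optionType, options);
            }

            return services;
        }
    }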

    I’ve recently released a new version of these extensions supporting several of the most popular containers as an open source library. You can find the project here.

    The following are the steps for using these extensions:

    Step 1

    Install ConventionalOptions for the target DI container:

    $> nuget install ConventionalOptions.DependencyInjection
    

    Step 2

    Add Microsoft’s Options feature and register option types:

      services.AddOptions();
      services.RegisterOptionsFromAssemblies(Configuration, Assembly.GetExecutingAssembly());
    

    Step 3

    Create an Options class with the desired properties:

        public class OrderServiceOptions
        {
            public string StringProperty { get; set; }
            public int IntProperty { get; set; }
        }
    

    Step 4

    Provide a corresponding configuration section matching the prefix of the Options class (e.g. in appsettings.json):

    {
      "OrderService": {
        "StringProperty": "Some value",
        "IntProperty": 42
      }
    }
    

    Step 5

    Inject the options into types resolved from the container:

        public class OrderService
        {
            public OrderService(OrderServiceOptions options)
            {
                // ... use options
            }
        }
    

    Currently ConventionalOptions works with Microsoft’s DI Container, Autofac, Lamar, Ninject, and StructureMap.

    Enjoy!

    Collaboration vs. Critique


    While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of the problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique.

    To help illustrate what I’ve seen happen countless times both in catch-up design sessions and code reviews, consider the following two scenarios:

    Scenario 1

    Tom and Sally are both developers on a team maintaining a large-scale application. Tom takes the next task in the development queue which happens to have some complex processes that will need to be addressed. Being the good development team that they are, both Tom and Sally are aware of the requirements of the application (i.e. how the app needs to work from the user’s perspective), but they have deferred design-level discussions until the time of implementation. After Tom gets into the process a little, seeing that the problem is non-trivial, he pings Sally to help him brainstorm different approaches to solving the problem. Tom and Sally have been working together for over a year and have become accustomed to these sort of ad-hoc design sessions. As they begin discussing the problem, they each start tossing ideas out on the proverbial table resulting in multiple approaches to compare and contrast. The nature of the discussion is such that neither Tom nor Sally are embarrassed or offended when the other points out flaws in a given design idea because there’s a sense of safety in their mutual understanding that this is a brainstorming session and that neither have thought in depth about the solutions being set forth yet. Tom throws out a couple of ideas, but ends up shooting them down himself as he uses Sally as a sounding board for the ideas. Sally does the same, but toward the end of the conversation suggests a slight alteration to one of Tom’s initial suggestions that they think may make it work after all. They end the session with a sense that they’ve worked together to arrive at the best solution.

    Scenario 2

    Bill and Jake are developers on another team. They tend to work in a more siloed fashion, but they do rely upon one another for help from time to time and they are required to do code reviews prior to their code being merged into the main branch of development. Bill takes the next task in the development queue and spends the better part of an afternoon working out a solution with a basic working skeleton of the direction he’s going. The next day he decides that it might be good to have Jake take a look at the design to make him aware of the direction. Seeing where Bill’s design misses a few opportunities to make the implementation more adaptable to changes in the future, Jake points out where he would have done things differently. Bill acknowledges that Jake’s suggestions would be better and would have probably been just as easy to implement from the beginning, but inwardly he’s a bit disappointed that Jake didn’t like his design as-is and that he has to do some rework. In the end, Bill is left with a feeling of critique rather than collaboration.

    Whether it’s a high-level UML diagram or working code, how one person tends to perceive feedback on the ideas comprising a potential solution has everything to do with timing. It can be the exact same feedback they would have received either way, but when the feedback occurs often makes a difference between whether it’s perceived as collaboration or critique. It’s all about when the conversation happens.

    Ditch the Repository Pattern Already


    One pattern that still seems particularly common among .Net developers is the Repository pattern. I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago.

    I had read several articles over the years advocating abandoning the Repository pattern in favor of other suggested approaches which served as a pebble in my shoe for a few years, but there were a few design principles whose application seemed to keep motivating me to use the pattern.  It wasn’t until a change of tooling and a shift in thinking about how these principles should be applied that I finally felt comfortable ditching the use of repositories, so I thought I’d recount my journey to provide some food for thought for those who still feel compelled to use the pattern.

    Mental Obstacle 1: Testing Isolation

    What I remember being the biggest barrier to moving away from the use of repositories was writing tests for components which interacted with the database.  About a year or so before I actually abandoned use of the pattern, I remember trying to stub out a class derived from Entity Framework’s DbContext after reading an anti-repository blog post.  I don’t remember the details now, but I remember it being painful and even exploring use of a 3rd-party library designed to help write tests for components dependent upon Entity Framework.  I gave up after a while, concluding it just wasn’t worth the effort.  It wasn’t as if my previous approach was pain-free, as at that point I was accustomed to stubbing out particularly complex repository method calls, but as with many things we often don’t notice friction to which we’ve become accustomed for one reason or another.  I had assumed that doing all that work to stub out my repositories was what I should be doing.

    Another principle that I picked up from somewhere (maybe the big xUnit Test Patterns book? … I don’t remember) that seemed to keep me bound to my repositories was that you shouldn’t write tests that depend upon dependencies you don’t own.  I believed at the time that I should be writing tests for Application Layer services (which later morphed into discrete dispatched command handlers) and the idea of stubbing out either NHibernate or Entity Framework violated my sensibilities.

    Mental Obstacle 2: The Dependency Inversion Principle Adherence

    The Dependency Inversion Principle seems to be a source of confusion for many, which stems in part from the similarity of wording with the practice of Dependency Injection as well as from the fact that the principle’s formal definition reflects the platform whence it was conceived (i.e. C++).  One might say that the abstract definition of the Dependency Inversion Principle was too dependent upon the details of its origin (ba dum tss).  I’ve written about the principle a few times (perhaps my most succinct being this Stack Overflow answer), but put simply, the Dependency Inversion Principle has as its primary goal the decoupling of the portions of your application which define policy from the portions which define implementation.  That is to say, this principle seeks to keep the portions of your application which govern what your application does (e.g. workflow, business logic, etc.) from being tightly coupled to the portions of your application which govern the low-level details of how it gets done (e.g. persistence to a SQL Server database, use of Redis for caching, etc.).
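
    As a brief illustration of the principle (the types here are purely illustrative), the policy-level code owns the abstraction it needs and the low-level detail depends upon it, rather than the other way around:

    // Policy level: the business logic owns the abstraction it depends upon.
    public interface IOrderNotifier
    {
        void OrderShipped(int orderId);
    }

    public class ShippingWorkflow
    {
        private readonly IOrderNotifier _notifier;

        public ShippingWorkflow(IOrderNotifier notifier)
        {
            _notifier = notifier;
        }

        public void Ship(int orderId)
        {
            // ... business logic governing what the application does ...
            _notifier.OrderShipped(orderId);
        }
    }

    // Implementation level: the low-level detail depends upon the policy-level abstraction.
    public class SmtpOrderNotifier : IOrderNotifier
    {
        public void OrderShipped(int orderId)
        {
            // ... low-level details of how it gets done (e.g. send an email) ...
        }
    }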

    A good example of a violation of this principle, which I recall from my NHibernate days, was that once upon a time NHibernate was tightly coupled to log4net.  This was later corrected, but at one time the NHibernate assembly had a hard dependency on log4net.  You could use a different logging library for your own code if you wanted, and you could use binding redirects to use a different version of log4net if you wanted, but at one time if you had a dependency on NHibernate then you had to deploy the log4net library.  I think this went unnoticed by many due to the fact that most developers who used NHibernate also used log4net.

    When I first learned about the principle, I immediately recognized that it seemed to have limited advertised value for most business applications in light of what Udi Dahan labeled The Fallacy Of ReUse.  That is to say, properly understood, the Dependency Inversion Principle has as its primary goal the reuse of components, keeping those components decoupled from dependencies which would prevent them from being easily reused with other implementation components, but your application and business logic isn’t something that is likely to ever be reused in a different context.  The takeaway from that is basically that the advertised value of adhering to the Dependency Inversion Principle is really more applicable to libraries like NHibernate, Automapper, etc. and not so much to that workflow your team built for Acme Inc.’s distribution system.  Nevertheless, the Dependency Inversion Principle had practical value in implementing an architecture style Jeffrey Palermo labeled the Onion Architecture.  Specifically, in contrast to traditional 3-layered architecture models where the UI, Business, and Data Access layers precluded using something like Data Access Logic Components to encapsulate an ORM mapping data directly to entities within the Business Layer, inverting the dependency between the Business Layer and the Data Access Layer provided the ability for the application to interact with the database while also seemingly abstracting away the details of the data access technology used.

    While I always saw the fallacy in strictly trying to apply the Dependency Inversion Principle to invert the implementation details of how I got my data from my application layer so that I’d someday be able to use the application in a completely different context, it seemed like the academically astute and in-vogue way of doing Domain-driven Design at the time, it was consistent with the GoF’s advice to program to an interface rather than an implementation, and it provided an easier way to write isolation tests than trying to partially stub out ORM types.

    The Catalyst

    For the longest time, I resisted using Entity Framework.  I had become fairly proficient at using NHibernate, the early versions of Entity Framework were years behind in features and maturity, it didn’t support Domain-driven Design well, and there was a fairly steep learning curve with little payoff. A combination of things happened, however, that began to make it harder to ignore. First, a lot of the NHibernate supporters (like many within the Alt.Net crowd) moved on to other platforms like Ruby and Node. Second, despite it lacking many features, .Net developers began flocking to the framework in droves due to its backing and promotion by Microsoft. So, eventually I found it impossible to avoid, which led to me trying to apply the same patterns I’d used before with this newer-to-me framework.

    To be honest, once I adapted my repository implementation to Entity Framework everything mostly just worked, especially for the really simple stuff. Eventually, though, I began to see little ways I had to modify my abstraction to accommodate differences in how Entity Framework did things from how NHibernate did things.  What I discovered was that, while my repositories allowed my application code to be physically decoupled from the ORM, the way I was using the repositories was in small ways semantically coupled to the framework.  I wish I had kept some sort of record every time I ran into something, as the only real example I can recall now was the motivation with certain design approaches to expose the SaveChanges method for Unit of Work implementations. I don’t want to make more of the semantic coupling argument against repositories than it’s worth, but observing little places where my abstractions were leaking, combined with the pebble in my shoe from developers who I felt were far better than me who were saying I shouldn’t use them, led me to begin rethinking things.

    More Effective Testing Strategies

    It was actually a few years before I stopped using repositories that I stopped stubbing them out in tests.  Around 2010, I learned that you can use Test-Driven Development to achieve 100% test coverage for the code for which you’re responsible, but that when you plug your code in for the first time alongside a team that wasn’t designing to the same specification and wasn’t writing any tests at all, things may not work.  It was then that I got turned on to Acceptance Test Driven Development.  What I found was that writing high-level subcutaneous tests (i.e. skipping the UI layer, but otherwise end-to-end) was overall easier, was possible to align with acceptance criteria contained within a user story, provided more assurance that everything worked as a whole, and was easier to get teams on board with.  Later on, I surmised that I really shouldn’t have been writing isolation tests for components which, for the most part, are just specialized facades anyway.  All an isolation test for a facade really says is “did I delegate this operation correctly”, and if you’re not careful you can end up just writing a whole bunch of tests that basically validate whether you correctly configured your mocking library.
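
    To make that concrete, here’s a minimal sketch of the shape such a subcutaneous test might take (xUnit is assumed, and the service, request, and test-host types are illustrative rather than taken from any particular codebase):

    public class CustomerRegistrationAcceptanceTests
    {
        [Fact]
        public void Registering_a_customer_makes_the_customer_retrievable()
        {
            // Illustrative helper: resolve the application service from the real
            // composition root, configured to point at a dedicated test database.
            var service = TestApplication.Resolve<CustomerService>();

            var customerId = service.Register(new RegisterCustomerRequest { Name = "Acme Inc." });

            // Assert on observable behavior rather than on how the service delegated its work.
            var customer = service.GetCustomer(customerId);
            Assert.Equal("Acme Inc.", customer.Name);
        }
    }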

    So, by the time I started rethinking my use of repositories, I had long since stopped using them for test isolation.

    Taking the Plunge

    It was actually about a year after I had become convinced that repositories were unnecessary, useless abstractions that I started working with a new codebase I had the opportunity to steer.  Once I eliminated them from the equation, everything got so much simpler.   Having been repository-free for about two years now, I think I’d have a hard time joining a team that had an affinity for them.

    Conclusion

    If you’re still using repositories, and you don’t have some other hangup you still need to get over (like writing unit tests for your controllers or application services), then give the repository-free lifestyle a try.  I bet you’ll love it.

    Hello, React! - A Beginner's Setup Tutorial


    React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over the configuration needed to get started. Tutorials which side-step configuration by using jsfiddle or code-generator options are great when you just want to focus on the framework itself, but many leave beginners struggling to piece things together when they’re ready to create a simple React application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process.

    A Simple Tutorial

    This tutorial is merely intended to help walk you through the steps to getting a simple React example up and running. When you’re ready to dive into actually learning the React framework, a great list of tutorials can be found here.

    There are several build, transpiler, and bundling tools from which to select when working with React. For this tutorial, we’ll be using Node, npm, Webpack, and Babel.

    Step 1: Install Node

    Download and install Node for your target platform. Node distributions can be obtained here.

    Step 2: Create a Project Folder

    From a command line prompt, create a folder where you plan to develop your example.

    $> mkdir hello-react
    

    Step 3: Initialize Project

    Change directory into the example folder and use the Node Package Manager (npm) to initialize the project:

    $> cd hello-react
    $> npm init --yes
    

    This results in the creation of a package.json file. While not technically necessary for this example, creating this file will allow us to persist our packaging and runtime dependencies.
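
    The generated file will look roughly like the following (exact contents vary slightly between npm versions):

    {
      "name": "hello-react",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "keywords": [],
      "author": "",
      "license": "ISC"
    }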

    Step 4: Install React

    React is broken up into a core framework package and a package related to rendering to the Document Object Model (DOM).

    From the hello-react folder, run the following command to install these packages and add them to your package.json file:

    $> npm install --save-dev react react-dom
    

    Step 5: Install Babel

    Babel is a transpiler, which is to say it’s a tool for converting one language or language version to another. In our case, we’ll be converting EcmaScript 2015 to EcmaScript 5.

    From the hello-react folder, run the following command to install babel:

    $> npm install --save-dev babel-core
    

    Step 6: Install Webpack

    Webpack is a module bundler. We’ll be using it to package all of our scripts into a single script we’ll include in our example Web page.

    From the hello-react folder, run the following command to install webpack globally:

    $> npm install webpack --global
    

    Step 7: Install Babel Loader

    Babel loader is a Webpack plugin for using Babel to transpile scripts during the bundling process.

    From the hello-react folder, run the following command to install babel loader:

    $> npm install --save-dev babel-loader
    

    Step 8: Install Babel Presets

    Babel presets are collections of plugins needed to support a given feature. For example, the latest version of babel-preset-es2015 at the time of this writing will install 24 plugins which enable Babel to transpile ECMAScript 2015 to ECMAScript 5. We’ll be using presets for ES2015 as well as presets for React. The React presets are primarily needed for processing JSX.

    From the hello-react folder, run the following command to install the babel presets for both ES2015 and React:

    $> npm install --save-dev babel-preset-es2015 babel-preset-react
    

    Step 9: Configure Babel

    In order to tell Babel which presets we want to use when transpiling our scripts, we need to provide a babel config file.

    Within the hello-react folder, create a file named .babelrc with the following contents:

    {                                    
      "presets" : ["es2015", "react"]    
    }                                 
    

    Step 10: Configure Webpack

    In order to tell Webpack we want to use Babel, where our entry point module is, and where we want the output bundle to be created, we need to create a Webpack config file.

    Within the hello-react folder, create a file named webpack.config.js with the following contents:

    const path = require('path');
     
    module.exports = {
      entry: './app/index.js',
      output: {
        path: path.resolve('dist'),
        filename: 'index_bundle.js'
      },
      module: {
        rules: [
          { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ }
        ]
      }
    }

    Step 11: Create a React Component

    For our example, we’ll just be creating a simple component which renders the text “Hello, React!”.

    First, create an app sub-folder:

    $> mkdir app
    

    Next, create a file named app/index.js with the following content:

    import React from 'react';
    import ReactDOM from 'react-dom';
     
    class HelloWorld extends React.Component {
        render() {
            return (
                <div>
                    Hello, React!
                </div>
            );
        }
    }
     
    ReactDOM.render(<HelloWorld />, document.getElementById('root'));

    Briefly, this code includes the react and react-dom modules, defines a HelloWorld class which returns an element containing the text “Hello, React!” expressed using JSX syntax, and finally renders an instance of the HelloWorld element (also using JSX syntax) to the DOM.

    If you’re completely new to React, don’t worry too much about trying to fully understand the code. Once you’ve completed this tutorial and have an example up and running, you can move on to one of the aforementioned tutorials, or work through React’s Hello World example to learn more about the syntax used in this example.

    Note: In many examples, you will see the following syntax:

    var HelloWorld = React.createClass({
        render() {
            return (
                <div>
                    Hello, React!
                </div>
            );
        }
    });

    This syntax is how classes were defined in older versions of React and will therefore be what you see in older tutorials. As of React version 15.5.0 use of this syntax will produce the following warning:

    Warning: HelloWorld: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you’re not yet ready to migrate, create-react-class is available on npm as a drop-in replacement.

    Step 12: Create a Webpage

    Next, we’ll create a simple html file which includes the bundled output defined in step 10 and declare a <div> element with the id “root” which is used by our react source in step 11 to render our HelloWorld component.

    Within the hello-react folder, create a file named index.html with the following contents:

    <html>
      <div id="root"></div>
      <script src="./dist/index_bundle.js"></script>
    </html>
    

    Step 13: Bundle the Application

    To convert our app/index.js source to ECMAScript 5 and bundle it with the react and react-dom modules we’ve included, we simply need to execute webpack.

    Within the hello-react folder, run the following command to create the dist/index_bundle.js file referenced by our index.html file:

    $> webpack
    

    Step 14: Run the Example

    Using a browser, open up the index.html file. If you’ve followed all the steps correctly, you should see the following text displayed:

    Hello, React!
    

    Conclusion

    Congratulations! After completing this tutorial, you should have a pretty good idea about the steps involved in getting a basic React app up and going. Hopefully this will save some absolute beginners from spending too much time trying to piece these steps together.

    Exploring TypeScript


    A proposal to use TypeScript was recently made within my development team, so I’ve taken a bit of time to investigate the platform.  This article reflects my thoughts and conclusions on where the platform is at this point.

     

    TypeScript: What is It?

    TypeScript is a language created by Microsoft which provides static typing and a class-based object-oriented programming paradigm, and which transpiles to JavaScript.  In contrast to other compile-to-JavaScript languages such as CoffeeScript and Dart, TypeScript is a superset of JavaScript, which means that TypeScript introduces syntax enhancements to the JavaScript language itself.

     

    Recent Rise In Popularity

    TypeScript made its debut in late 2012, with its 1.0 release arriving in April 2014.  Community interest has been fairly marginal since its debut, but has shown an increase since the announcement that the next version of Google’s popular Angular framework would be written in TypeScript.

    The following Google Trends chart shows the interest parallel between Angular 2 and TypeScript from 2014 to present:

     

    The Good

    Type System

    TypeScript provides an optional type system which can aid in catching certain types of programming errors at compile time.  The information derived from the type system also serves as the foundation for most of the tooling surrounding TypeScript.

    The following is a simple example showing a basic usage of the type system:

    interface Person {
        firstName: string;
        lastName: string;
    }
    
    class Greeter {
        greeting: string;
        constructor(message: string) {
            this.greeting = message;
        }
        greet(person: Person) {
            return this.greeting + " " + person.firstName + " " + person.lastName;
        }
    }
    
    let greeter = new Greeter("Hello,");
    let person = { firstName: "John", lastName: "Doe" };
    
    document.body.innerHTML = greeter.greet(person);
    

    In this example, a Person interface is declared with two string properties: firstName and lastName.  Next, a Greeter class is created with a greet() function which is declared to take a parameter of type Person.  Finally, a Greeter instance is created along with an object conforming to the Person interface, and the Greeter instance’s greet() function is invoked with that object.  At compile time, TypeScript is able to detect whether the object passed to the greet() function conforms to the Person interface and whether the values assigned to the expected properties are of the expected type.

    Tooling

    While the type system and programming paradigm introduced by TypeScript are its key features, it’s really the tooling facilitated by the type system that makes the platform shine.  Being notified of syntax errors at compile time is helpful, but it’s really the productivity that stems from features such as design-time type checking, intellisense/code-completion, and refactoring that make TypeScript compelling.

    TypeScript is currently supported by many popular IDEs including Visual Studio, WebStorm, Sublime Text, Brackets, and Eclipse.

    EcmaScript Foundation

    One of the differentiators of TypeScript from other languages which transpile to JavaScript (CoffeeScript, Dart, etc.) is that TypeScript builds upon the JavaScript language.  This means that all valid JavaScript code is valid TypeScript code.

    Idiomatic JavaScript Generation

    One of the goals of the TypeScript team was to ensure the TypeScript compiler emitted idiomatic JavaScript.  This means the code produced by the TypeScript compiler is readable and generally follows normal JavaScript conventions.

     

    The Not So Good

    Type Definitions and 3rd-Party Libraries

    TypeScript requires type definitions to be created for 3rd-party code in order to realize many of the benefits of the tooling.  While the DefinitelyTyped project provides type definitions for the most popular JavaScript libraries in use today, there will probably be the occasion where the library you want to use has no type definition file.

    Moreover, interfaces maintained by 3rd-party sources are somewhat antithetical to their primary purpose.  Interfaces should serve as contracts for the behavior of a library.  If the interfaces are maintained by a 3rd-party, however, they can’t be accurately described as “contracts” since no implicit promise is being made by the library author that the interface being provided accurately matches the library’s behavior.  It’s probably the case that this doesn’t prove to be much of an issue in practice, but at minimum I would think relying upon type definitions created by 3rd parties would eventually lead to the available type definitions lagging behind new releases of the libraries being used.

    Type System Overhead

    Introducing a type system is a bit of a double-edged sword.  While a type system can provide a lot of benefits, it also adds syntactical overhead to a codebase.  In some cases this can result in the code you maintain actually being harder to read and understand than the code being generated.  This can be illustrated using Anders Hejlsberg’s example presented at Build 2014.

    The TypeScript source in the first listing shows a generic sortBy method which takes a callback for retrieving the value by which to sort while the second listing shows the generated JavaScript source:

    interface Entity {
        name: string;
    }
    
    function sortBy<T>(a: T[], keyOf: (item: T) => any): T[] {
        var result = a.slice(0);
        result.sort(function (x, y) {
            var kx = keyOf(x);
            var ky = keyOf(y);
            return kx > ky ? 1 : kx < ky ? -1 : 0;
        });
        return result;
    }
    
    var products = [
        { name: "Lawnmower", price: 395.00, id: 345801 },
        { name: "Hammer", price: 5.75, id: 266701 },
        { name: "Toaster", price: 19.95, id: 400670 },
        { name: "Padlock", price: 4.50, id: 560004 }
    ];
    var sorted = sortBy(products, x => x.price);
    document.body.innerText = JSON.stringify(sorted, null, 4);
    
    function sortBy(a, keyOf) {
        var result = a.slice(0);
        result.sort(function (x, y) {
            var kx = keyOf(x);
            var ky = keyOf(y);
            return kx > ky ? 1 : kx < ky ? -1 : 0;
        });
        return result;
    }
    var products = [
        { name: "Lawnmower", price: 395.00, id: 345801 },
        { name: "Hammer", price: 5.75, id: 266701 },
        { name: "Toaster", price: 19.95, id: 400670 },
        { name: "Padlock", price: 4.50, id: 560004 }
    ];
    var sorted = sortBy(products, function (x) { return x.price; });
    document.body.innerText = JSON.stringify(sorted, null, 4);
    

    Comparing the two signatures, which is easier to understand?

    TypeScript

    function sortBy<T>(a: T[], keyOf: (item: T) => any): T[]

    JavaScript

    function sortBy(a, keyOf)

    It might be reasoned that the TypeScript version should be easier to understand given that it provides more information, but many would disagree that this is in fact the case.  The reason for this is that the TypeScript version adds quite a bit of syntax to explicitly describe information that can otherwise be deduced fairly easily.  In many ways this is similar to how we process natural language.  When we communicate, we don’t encode each word with its grammatical function (e.g. “I [subject] bought [past tense verb] you [indirect object] a [indefinite article] gift [direct object].”)  Rather, we rapidly and subconsciously make guesses based on familiarity with the vocabulary, context, convention and other such signals.

     In the case of the sortBy example, we can guess at the parameters and return type for the function faster than we can parse the type syntax.  This becomes even easier if descriptive names are used (e.g. sortByKey(array, keySelector)).  Sometimes implicit expression is simply easier to understand.

    Now to be fair, there are cases where TypeScript is arguably going to be more clear than the generated JavaScript (and for similar reasons).  Consider the following listing:

    class Auto{
      constructor(public wheels = 4, public doors?){
      }
    }
    var car = new Auto();
    car.doors = 2;
    
    var Auto = (function () {
        function Auto(wheels, doors) {
            if (wheels === void 0) { wheels = 4; }
            this.wheels = wheels;
            this.doors = doors;
        }
        return Auto;
    }());
    var car = new Auto();
    car.doors = 2;
    

    In this example, the TypeScript version results in less syntax noise than the generated JavaScript version.  Of course, this is a comparison between TypeScript and its generated syntax rather than the following syntax many may have used:

    wheels = wheels || 4;

    Community Alignment

    While TypeScript is a superset of JavaScript, this deserves some qualification.  Unlike languages such as CoffeeScript and Dart which also compile to JavaScript, TypeScript starts with the EcmaScript specification as the base of its language.  Nevertheless, TypeScript is still a separate language.

    A team’s choice to maintain an application in TypeScript over JavaScript isn’t quite the same thing as choosing to implement an application in C# version 6 instead of C# version 5.  TypeScript isn’t the promise: “Programming with the ECMAScript of tomorrow … today!”.  Rather, it’s a language that layers a different programming paradigm on top of JavaScript.  While you can choose how much of the feature superset and programming paradigm you wish to use, the more features and approaches peculiar to TypeScript are adopted, the further the codebase will diverge from standard JavaScript syntax and conventions.

    A codebase that fully leverages TypeScript can tend to look far more like C# than standard JavaScript.  In many ways, TypeScript is the perfect front-end development environment for C# developers as it provides a familiar syntax and programming paradigm to which they are already accustomed.  Unfortunately, developers who spend most of their time in C# often struggle with JavaScript syntax, conventions, and patterns.  The same might be expected to be true for TypeScript developers who utilize the language to emulate object-oriented development in C#.

    Ultimately, the real negative I see with this is that (at least right now) TypeScript doesn’t represent how the majority of Web development is being done in the community.  This has implications on the availability of documentation, availability of online help, candidate pool size, marketability, and skill portability.

    Consider the following chart which compares the current job openings available for JavaScript and TypeScript:

    Source: simplyhired.com – August 2016

    Now, the fact that there may be far fewer TypeScript jobs out there than JavaScript jobs doesn’t mean that TypeScript isn’t going to be the next big thing.  What it does mean, however, is that you are going to experience less friction in the aforementioned areas if you stick with standard EcmaScript.

    Alternatives

    For those considering TypeScript, the following are a couple of options you might consider before converting just yet.

    ECMAScript 2015

    If you’re interested in TypeScript and currently still writing ES5 code, one step you might consider is to begin using ES2015.  In John Papa’s article “Understanding ES5, ES2015 and TypeScript”, he writes:

    Why Not Just use ES2015?  That’s a great option! Learning ES2015 is a huge leap from ES5. Once you master ES2015, I argue that going from there to TypeScript is a very small step.

    In many ways, taking the time to learn ECMAScript 2015 is the best option even if you think you’re ready to start using TypeScript.  Making the journey from ES5 to ES2015 and then later on to TypeScript will help you to clearly understand which new features are standard ECMAScript and which are TypeScript … knowledge you’re likely to be fuzzy on if you move straight from ES5 to TypeScript.
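
    As a small illustration, everything in the following snippet (classes, const/let, default parameters, arrow functions, and template literals) is standard ES2015 and carries over to TypeScript unchanged:

    class Greeter {
      constructor(greeting = 'Hello') {
        this.greeting = greeting;
      }

      greet(names) {
        return names.map(name => `${this.greeting}, ${name}!`);
      }
    }

    const greeter = new Greeter();
    let messages = greeter.greet(['John', 'Jane']);
    messages.forEach(message => console.log(message));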

    Flow

    If you’ve already become convinced that you need a type system for JavaScript development or you’re just looking to test the waters, you might consider a lighter-weight alternative to the TypeScript platform: Facebook’s Flow project.  Flow is a static type checker for JavaScript designed to gain static type checking benefits  without losing the “feel” of coding in JavaScript and in some cases it does a better job at catching type-related errors than TypeScript.
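
    For instance, a Flow-annotated module is just JavaScript with a // @flow pragma and optional type annotations, which the build tooling strips out before the code runs (a minimal sketch):

    // @flow
    function total(prices: Array<number>, taxRate: number = 0): number {
      const subtotal = prices.reduce((sum, price) => sum + price, 0);
      return subtotal * (1 + taxRate);
    }

    total([19.95, 5.75], 0.08);   // checks out
    // total(['19.95'], 0.08);    // Flow reports a type error at check time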

    For the most part, Flow’s type system is identical to that of TypeScript, so it shouldn’t be too hard to convert to TypeScript down the road if desired.  Several IDEs have Flow support including Web Storm, Sublime Text, Atom, and of course Facebook’s own Nuclide.

    As of August 2016, Flow also supports Windows.  Unfortunately this support has only recently become available, so Flow doesn’t yet enjoy the same IDE support on Windows as it does on OSX and Linux platforms.  IDE support can likely be expected to improve going forward.

    Test-Driven Development

    If you’ve found the primary appeal of TypeScript to be the immediate feedback you receive from the tooling, another methodology for achieving this (which has far greater benefits) is the practice of Test-Driven Development (TDD). The TDD methodology not only provides a rapid feedback cycle, but (if done properly) results in duplication-free code that is more maintainable by constraining the team to only developing the behavior needed by the application, and results in a regression-test suite which provides a safety net for future modifications as well as documentation for how the system is intended to be used. Of course, these same benefits can be realized with TypeScript development as well, but teams practicing TDD may find less need for TypeScript’s compiler-generated error checking.

     

    Conclusion

    After taking some time to explore TypeScript, I’ve found that aspects of its ecosystem are very compelling, particularly the tooling that’s available for the platform.  Nevertheless, it still seems a bit early to know what role the platform will play in the future of Web development.

    Personally, I like the JavaScript language and, while I see some advantages of introducing type checking, I think a wiser course for now would be to invest in learning EcmaScript 2015 and keep a watchful eye on TypeScript adoption going forward.

    Git on Windows: Whence Cometh Configuration


    I recently went through the process of setting up a new development environment on Windows which included installing Git for Windows. At one point in the course of tweaking my environment, I found myself trying to determine from which config file a particular setting originated. The command ‘git config --list’ showed the setting, but ‘git config --list --system’, ‘git config --list --global’, and ‘git config --list --local’ all failed to reflect the setting. Looking at the options for config, I discovered you can add a ‘--show-origin’ switch, which led to a discovery: Git for Windows has an additional location from which it derives your configuration.

    It turns out that, since the last time I installed git on Windows, a change was made for the purposes of sharing git configuration across different git projects (namely, libgit2 and Git for Windows) whereby a Windows-specific location is now used as the lowest setting precedence (i.e. the default settings). This is the file C:\ProgramData\Git\config. It doesn’t appear git added a way to list or edit this file as a well-known location (e.g. ‘git config --list windows’), so it’s not particularly discoverable aside from knowing about the ‘--show-origin’ switch.
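
    For example, the following lists every effective setting along with the file from which each originates:

    $> git config --list --show-origin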

    So the order in which Git for Windows sources configuration information is as follows:

    1. C:\ProgramData\Git\config
    2. system config (e.g. C:\Program Files\Git\mingw64\etc\gitconfig)
    3. global config (e.g. %HOMEPATH%\.gitconfig)
    4. local config (repository-specific .git/config)

    Perhaps this article might help the next soul who finds themselves trying to figure out from where some seemingly magical git setting is originating.

    Separation of Concerns: Application Builds & Continuous Integration


    I’ve always had an interest in application build processes. From the start of my career, I’ve generally been in the position of establishing the solution architecture for the projects I’ve participated in and this has usually involved establishing a baseline build process.

    My career began as a Unix C developer while still in college, where much of my work involved writing tools in both C and various Unix shell scripting languages which were deployed to other workstations throughout the country. From there, I moved on to Unix C-CGI Web development and worked a number of years with Make files. With the advent of Java, I began using tools like Ant and Maven for several more years before switching to the .Net platform, where I used open source build tools like NAnt until Microsoft introduced MSBuild with its 2.0 release. Upon moving to the Austin, TX area, I was greatly influenced by what was the early seat of the Alt.Net movement. It was there that I abandoned what in hindsight has always been a ridiculous idea … trying to script a build using XML. For the next 4-5 years, I used Rake to define all of my builds. Starting last year, I began using Gulp and associated tooling on the Node platform for authoring .Net builds.

    Throughout this journey of working with various build technologies, I’ve formed a few opinions along the way. One of these opinions is that the Build process shouldn’t be coupled to the Continuous Integration process.

    A project should have a build process which exists and can be executed independent of the particular continuous integration tool one chooses. This allows builds to be created and maintained on the developer’s local machine. The particular build steps involved in building a given application are inherently part of its ontology. What compilers and preprocessors need to be used, how dependencies are obtained and published, when and how configuration values are supplied for different environments, how and where automated test suites are run, how the application distribution is created … all of these are concerns whose definition and orchestration are particular to a given project. Such concerns should be encapsulated in a build script which lives with the rest of the application source, not as discrete build steps defined within your CI tool.

    Ideally, builds should never break, but when they do it’s important to resolve the issue as quickly as possible. Not being able to run a build locally means potentially having to repeatedly introduce changes until the build is fixed. This tends to pollute the source code commit history with comments like: “Fixing the build”, “Fixing the build for realz this time”, and “Please let this be it … I’m ready to go home”. Of course, there are times when a build can break because of environmental issues that may not be mirrored locally (e.g. lack of disk space, network related issues, 3rd-party software dependencies, etc.), but encapsulating as much of your build as possible goes a long way to keeping builds running along smoothly. Anyone on your team should be able to clone/check-out the project, issue a single command from the command line (e.g. gulp, rake, psake, etc.) and watch the full build process execute including any pre-processing steps, compilation, distribution packaging and even deployment to a target environment.

    Aside from being able to run a build locally, decoupling the build from the CI process allows the technologies used by each to vary independently. Switching from one CI tool to another should ideally just require installing the software, pointing it to your source control, defining the single step to issue the build, and defining the triggers that initiate the process.

    The creation of a project distribution and the scheduling mechanism for how often this happens are separate concerns. Just because a CI tool allows you to script out your build steps doesn’t mean you should.

    Survey of Entity Framework Unit of Work Patterns


    Earlier this year I joined a development team which chose Entity Framework for the persistence needs of a new greenfield project. While I’ve worked on a few projects which used Entity Framework here and there over the years, the bulk of my experience has been with NHibernate and, more recently, Dapper.Net. As a result, there hasn’t been all that much occasion for me to explore it in any level of depth until this year.

    One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of various approaches I’ve encountered and include some thoughts about each approach.

    Unit of Work

    To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.

    A Unit of Work can consist of different types of operations such as Web Service calls, database operations, or even in-memory operations.

    With that out of the way, let’s take a look at various approaches to facilitating the Unit of Work pattern with Entity Framework.

    Implicit Transactions

    The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.

    Here’s an example:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      using (var context = new MyStoreContext())
      {
        customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        context.Customers.Add(customer);
        context.SaveChanges();
        return customer;
      }
    }
    

    The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks to this approach, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.

    If you prefer to work directly with Entity Framework then this approach may be fine for simple needs.

    TransactionScope

    Another approach is to use the System.Transactions.TransactionScope class provided by the .Net framework. When any Entity Framework operation which causes a connection to be opened is used (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class, and the transaction is committed once the TransactionScope is successfully completed and disposed. Here’s an example of this approach:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      using (var transaction = new TransactionScope())
      {
        using (var context = new MyStoreContext())
        {
          customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
          context.Customers.Add(customer);
          context.SaveChanges();
          transaction.Complete();
        }
    
        return customer;
      }
    }
    

    In general, I find using TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.Net, all versions of Entity Framework, and other ORMs, which makes it possible to coordinate multiple libraries within the same transaction if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.

    Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.

    While I find using the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code. While it’s a viable choice, I would recommend inverting the concerns of managing the Unit of Work boundary as shown in approaches we’ll look at later.
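
    For those wanting to see roughly what such an inverted boundary might be built upon, here’s a minimal sketch of a UnitOfWork class wrapping TransactionScope. The class name and members are placeholders of my own choosing; a class of this general shape is what the later “Instantiated Unit of Work” and “Injected Unit of Work Factory” examples presume:

    public class UnitOfWork : IUnitOfWork
    {
      readonly TransactionScope _scope = new TransactionScope();

      public void Commit()
      {
        // Mark the ambient transaction as complete; it commits when the scope is disposed.
        _scope.Complete();
      }

      public void Rollback()
      {
        // Intentionally does nothing; disposing the scope without calling Complete() rolls back.
      }

      public void Dispose()
      {
        _scope.Dispose();
      }
    }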

    ADO.Net Transactions

    This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
      using (var connection = new SqlConnection(connectionString))
      {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
          using (var context = new MyStoreContext(connection))
          {
            context.Database.UseTransaction(transaction);
            try
            {
              customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
              context.Customers.Add(customer);
              context.SaveChanges();
            }
            catch (Exception e)
            {
              transaction.Rollback();
              throw;
            }
          }
    
          transaction.Commit();
          return customer;
        }
      }
    }

    As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. It does provide another avenue for sharing transactions between Entity Framework and straight ADO.Net code, which might prove useful in certain situations, but it isn’t something I’d recommend standardizing upon.

    Entity Framework Transactions

    The relative newcomer to the mix is the transaction API introduced with Entity Framework 6. Here’s a basic example of its use:

    public Customer CreateCustomer(CreateCustomerRequest request)
    {
      Customer customer = null;
    
      using (var context = new MyStoreContext())
      {
        using (var transaction = context.Database.BeginTransaction())
        {
          try
          {
            customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            context.Customers.Add(customer);
            context.SaveChanges();
            transaction.Commit();
          }
          catch (Exception e)
          {
            transaction.Rollback();
            throw;
          }
        }
      }
    
      return customer;
    }
    

    This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operation participation. Similar to a couple of the previous approaches we’ve already considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling this with other approaches we’ll look at later to improve the readability and maintainability of your application services.

    Unit of Work Repository Manager

    The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth by Microsoft’s guidance on the topic here. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:

    public interface IUnitOfWork
    {
      ICustomerRepository CustomerRepository { get; }
      IOrderRepository OrderRepository { get; }
      void Save();
    }
    
    public class UnitOfWork : IDisposable, IUnitOfWork
    {
      readonly MyContext _context = new MyContext();
      ICustomerRepository _customerRepository;
      IOrderRepository _orderRepository;
    
      public ICustomerRepository CustomerRepository
      {
        get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
      }
    
      public IOrderRepository OrderRepository
      {
        get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
      }
    
      public void Dispose()
      {
        if (_context != null)
        {
          _context.Dispose();
        }
      }
    
      public void Save()
      {
        _context.SaveChanges();
      }
    }
    
    public class CustomerService : ICustomerService
    {
      readonly IUnitOfWork _unitOfWork;
    
      public CustomerService(IUnitOfWork unitOfWork)
      {
        _unitOfWork = unitOfWork;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _unitOfWork.CustomerRepository.Add(customer);
        _unitOfWork.Save();
      }
    }
    

    It isn’t hard to imagine how this approach was conceived given it closely mirrors the typical implementation of the DbContext instance you find in Entity Framework guidance where public instances of DbSet are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.

    First, this approach leads to opaque dependencies. Due to the fact that classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).

    Second, this violates the Open/Closed Principle. To add new aggregate roots to the system requires modifying the UnitOfWork each time.

    Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or roll back a set of operations atomically. The instantiation and management of repositories or any other component which may wish to enlist in a unit of work is a separate concern.

    Lastly, this results in a nominal abstraction which is semantically coupled with Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), this approach is more of a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction had you started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following the Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.

    Injected Unit of Work and Repositories

    For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.

    Here is an example:

    public class CustomerService : ICustomerService
    {
      readonly IUnitOfWork _unitOfWork;
      readonly ICustomerRepository _customerRepository;
    
      public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
      {
        _unitOfWork = unitOfWork;
        _customerRepository = customerRepository;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        _unitOfWork.Save();
      }
    }
    

    While this approach improves upon the opaque design of the Repository Manager, there are several issues I find with this approach as well.
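
    For context, the IUnitOfWork being injected in this strategy is typically little more than a thin wrapper exposing SaveChanges() on a DbContext instance shared, via the DI container, with the repositories. A minimal sketch (with assumed names, and mirroring the SaveChanges-style abstraction under discussion rather than a commit/rollback boundary) might look like:

    public interface IUnitOfWork
    {
      void Save();
    }

    public class EntityFrameworkUnitOfWork : IUnitOfWork
    {
      readonly MyStoreContext _context;

      // The same container-scoped MyStoreContext instance is injected into the repositories.
      public EntityFrameworkUnitOfWork(MyStoreContext context)
      {
        _context = context;
      }

      public void Save()
      {
        // Flushes all pending INSERTs, UPDATEs, and DELETEs tracked by the shared context.
        _context.SaveChanges();
      }
    }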

    Similar to the first example, this UnitOfWork implementation is still semantically coupled to how Entity Framework is urging you to think about things. Entity Framework wants you to call SaveChanges() whenever you’re ready to flush any INSERT, UPDATE, or DELETE operations you’ve issued against the database and this abstraction basically surfaces this behavior. If you were to use an alternate framework that supported a different flushing model (e.g. NHibernate), you likely wouldn’t end up with the same abstraction.

    Moreover, this approach has no definitive Unit of Work boundary. With this approach, you aren’t defining a logical Unit of Work, but are merely injecting a UnitOfWork you can participate within. When you invoke the underlying DbContext.SaveChanges() method, it isn’t explicit what work will be committed.

    While this approach corrects a few design issues I find with the Repository Manager, overall I like this approach even less. At least with the Repository Manager approach you have a defined Unit of Work boundary which is kind of the whole point. My recommendation would be to avoid this approach as well.

    Repository SaveChanges Method

    The next strategy is basically a variation on the previous one. Rather than injecting a separate type whose sole purpose is to provide an indirect way to call the SaveChanges() method, some implementations simply expose this method on the repository itself:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
    
      public CustomerService(ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);
        _customerRepository.SaveChanges();
      }
    }
    

    This approach shares many of the same issues with the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to be a virtual collection of all the entities within your system of a given type, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.

    Unit of Work Per Request

    A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever mechanism is used to facilitate a Unit of Work is configured with a DI container using a per-HttpRequest lifetime scope; the Unit of Work boundary is opened when the first component is injected with the UnitOfWork and is committed or rolled back when the HttpRequest lifetime scope is disposed by the container.

    There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:

    builder.RegisterType<MyDbContext>()
            .As<DbContext>()
            .InstancePerRequest()
            .OnActivating(x =>
            {
              // start a transaction
            })
            .OnRelease(context =>
            {
              try
              {
                // commit or rollback the transaction
              }
              catch (Exception e)
              {
                // log the exception
                throw;
              }
            });
    
    public class SomeService : ISomeService
    {
      public void DoSomething()
      {
        // do some work
      }
    }
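
    To make the pseudo-code above slightly more concrete, the hooks might be filled in with Entity Framework 6’s transaction API along the following lines. This is only a sketch; notably, as written, the release hook has no reliable way of knowing whether the request actually succeeded, which is exactly the weakness discussed below:

    builder.RegisterType<MyDbContext>()
            .As<DbContext>()
            .InstancePerRequest()
            .OnActivating(e =>
            {
              // begin a transaction as soon as the context is first resolved for the request
              e.Instance.Database.BeginTransaction();
            })
            .OnRelease(context =>
            {
              try
              {
                // commit whatever work was performed during the request
                var transaction = context.Database.CurrentTransaction;
                if (transaction != null)
                {
                  transaction.Commit();
                }
              }
              finally
              {
                // OnRelease replaces the container's normal disposal, so dispose explicitly;
                // disposing an uncommitted transaction rolls it back
                context.Dispose();
              }
            });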
    
    

    While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, the biggest issue with it arises when an error occurs. When the application can’t successfully commit a transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this occurs, you may end up telling your customer that an operation succeeded when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.

    While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.

    Instantiated Unit of Work

    The next strategy involves instantiating a UnitOfWork implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6 to define a transaction boundary within the application service. Here’s an example:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
    
      public CustomerService(ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        using (var unitOfWork = new UnitOfWork())
        {
          try
          {
            var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            _customerRepository.Add(customer);
            unitOfWork.Commit();
          }
          catch (Exception)
          {
            unitOfWork.Rollback();
            throw;
          }
        }
      }
    }
    

    Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.

    Injected Unit of Work Factory

    This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows for an alternate implementation to be provided by the factory:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
      readonly IUnitOfWorkFactory _unitOfWorkFactory;
    
      public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
        _unitOfWorkFactory = unitOfWorkFactory;
      }
    
      public void CreateCustomer(CreateCustomerRequest request)
      {
        using (var unitOfWork = _unitOfWorkFactory.Create())
        {
          try
          {
            var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            _customerRepository.Add(customer);
            unitOfWork.Commit();
          }
          catch (Exception)
          {
            unitOfWork.Rollback();
            throw;
          }
        }
      }
    }
    

    While I personally prefer to invert such concerns, I consider this to be a sound approach.
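
    The factory itself is usually trivial. A minimal sketch, assuming the TransactionScope-based UnitOfWork shown earlier (names are again placeholders):

    public interface IUnitOfWorkFactory
    {
      IUnitOfWork Create();
    }

    public class UnitOfWorkFactory : IUnitOfWorkFactory
    {
      public IUnitOfWork Create()
      {
        // Each call opens a new Unit of Work boundary for the caller to commit or roll back.
        return new UnitOfWork();
      }
    }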

    As a side note, if you decide to use this approach, you might also consider utilizing your DI container to inject a Func<IUnitOfWork> instead, avoiding the overhead of maintaining an IUnitOfWorkFactory abstraction and implementation.
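
    For example, with a container that supports resolving Func<T> automatically (Autofac does this out of the box), the service might simply take a Func<IUnitOfWork> dependency. A rough sketch:

    public class CustomerService : ICustomerService
    {
      readonly ICustomerRepository _customerRepository;
      readonly Func<IUnitOfWork> _createUnitOfWork;

      public CustomerService(Func<IUnitOfWork> createUnitOfWork, ICustomerRepository customerRepository)
      {
        _customerRepository = customerRepository;
        _createUnitOfWork = createUnitOfWork;
      }

      public void CreateCustomer(CreateCustomerRequest request)
      {
        using (var unitOfWork = _createUnitOfWork())
        {
          var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
          _customerRepository.Add(customer);

          // If an exception is thrown before Commit(), disposing the Unit of Work rolls the work back.
          unitOfWork.Commit();
        }
      }
    }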

    Unit of Work ActionFilterAttribute

    For those who prefer to invert the Unit of Work concerns as I do, the following approach provides an easy-to-implement solution for those using ASP.Net MVC and/or Web API. This technique involves creating a custom Action filter which can be used to control the boundary of a Unit of Work at the Controller action level. The particular implementation may vary, but here’s a general template:

    public class UnitOfWorkFilter : ActionFilterAttribute
    {
      public override void OnActionExecuting(ActionExecutingContext filterContext)
      {
        // begin transaction
      }
    
      public override void OnActionExecuted(ActionExecutedContext filterContext)
      {
        // commit/rollback transaction
      }
    }
    

    The benefits of this approach are that it’s easy to implement and that it eliminates the need for introducing repetitive infrastructure code into your application services. The attribute can be registered with the global action filters or, for more selective use, placed only on actions resulting in state changes to the database. Overall, this would be my recommended approach for Web applications: it’s simple, easy to implement, and keeps your code clean.
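
    As a rough sketch of how the template might be filled in, assuming the DbContext is registered with a per-request lifetime and is resolvable through MVC’s DependencyResolver (the resolution strategy and member usage here are illustrative assumptions, not the only way to wire this up):

    public class UnitOfWorkFilter : ActionFilterAttribute
    {
      public override void OnActionExecuting(ActionExecutingContext filterContext)
      {
        // begin a transaction on the request-scoped context before the action runs
        var context = DependencyResolver.Current.GetService<DbContext>();
        context.Database.BeginTransaction();
      }

      public override void OnActionExecuted(ActionExecutedContext filterContext)
      {
        var context = DependencyResolver.Current.GetService<DbContext>();
        var transaction = context.Database.CurrentTransaction;
        if (transaction == null) return;

        if (filterContext.Exception == null)
        {
          // the action completed normally; flush and commit the Unit of Work
          context.SaveChanges();
          transaction.Commit();
        }
        else
        {
          transaction.Rollback();
        }

        transaction.Dispose();
      }
    }

    // e.g. registered globally in Global.asax.cs:
    // GlobalFilters.Filters.Add(new UnitOfWorkFilter());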

    Unit of Work Decorator

    A similar approach to the use of a custom ActionFilterAttribute is the creation of a custom decorator. This approach can be accomplished by utilizing a DI container to automatically decorate specific application service interfaces with a class which implements a Unit of Work boundary.

    Here is a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container, which presumes that some form of command/command-handler pattern is being utilized (e.g. frameworks like MediatR, ShortBus, etc.):

    // DI Registration
    builder.RegisterGenericDecorator(
         typeof(TransactionRequestHandler<,>), // the decorator instance
         typeof(IRequestHandler<,>), // the types to decorate
        "requestHandler", // the name of the key to decorate
         null); // the name of the key to this decorator
    
    
    
    public class TransactionRequestHandler<TRequest, TResponse> : IRequestHandler<TRequest, TResponse> where TResponse : ApplicationResponse
    {
      readonly DbContext _context;
      readonly IRequestHandler<TRequest, TResponse> _decorated;
    
      public TransactionRequestHandler(IRequestHandler<TRequest, TResponse> decorated, DbContext context)
      {
        _decorated = decorated;
        _context = context;
      }
    
      public TResponse Handle(TRequest request)
      {
        TResponse response;
    
        // Open transaction here
    
        try
        {
          response = _decorated.Handle(request);
    
          // commit transaction
    
        }
        catch (Exception e)
        {
          //rollback transaction
          throw;
        }
    
        return response;
      }
    }
    
    
    public class SomeRequestHandler : IRequestHandler<SomeRequest, ApplicationResponse>
    {
      public ApplicationResponse Handle(SomeRequest request)
      {
        // do some work
        return new SuccessResponse();
      }
    }
    

    While this approach requires a bit of setup, it provides an alternate means of facilitating the Unit of Work pattern through a decorator which can be used by other consumers of the application layer aside from just ASP.Net (i.e. Windows services, CLI, etc.). It also provides the ability to move the Unit of Work boundary closer to the point of need for those who would rather provide any error handling prior to returning control to the application service client (e.g. the Controller actions), as well as giving more control over the types of operations decorated (e.g. IQueryHandler vs. ICommandHandler). For Web applications, I’d recommend trying the custom Action Filter approach first, as it’s easier to implement and doesn’t presume upon the design of your application layer, but this is certainly a good approach if it fits your needs.

    Conclusion

    Out of the approaches I’ve evaluated, there are several that I see as sound approaches which maintain some minimum adherence to good design practices. Of course, which approach is best for your application will be dependent upon the context of what you’re doing and to some extent the design values of your team.

    Introducing NUnit.Specifications


     

    I recently started working with a new team that uses NUnit as their testing framework.  While I think NUnit is a solid framework, I don’t think the default API and style lead to effective tests.

    As an advocate of Test-Driven Development, I’ve always appreciated how context/specification-style frameworks such as Machine.Specifications (MSpec) allow for the expression of executable specifications which model how a system is expected to be used rather than the typical unit-test style of testing which tends to obscure the overall purpose of the system.

    To facilitate a context/specification-style API, I created a base class which makes use of the hooks provided by the NUnit testing framework to emulate MSpec.  I’ve published this code under the project name NUnit.Specifications.

    The following is an example NUnit test written using the ContextSpecification base class from NUnit.Specifications along with the Should assertion library:

    [image: example ContextSpecification-style test]
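
    In rough terms, a spec written in this style looks something like the following sketch. The Establish/Because/It members follow the MSpec conventions the library emulates, the assertions come from the Should library, and the CustomerService/Customer types are purely hypothetical:

    public class when_creating_a_customer : ContextSpecification
    {
      static CustomerService _service;
      static Customer _customer;

      Establish context = () => _service = new CustomerService();

      Because of = () => _customer = _service.CreateCustomer("Smith");

      It should_return_the_new_customer = () => _customer.ShouldNotBeNull();
      It should_assign_the_last_name = () => _customer.LastName.ShouldEqual("Smith");
    }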

    One nice benefit of building on top of NUnit is the wide-spread tool support available.  Here is the test as seen through various test runners:

    Resharper Test Runner:

    [image: test results in the ReSharper test runner]

    TestDriven.Net: (see notes below)

    [image: test results in TestDriven.Net]

    NUnit Test Runner:

    [image: test results in the NUnit test runner]

    NUnit Test Adapter for Visual Studio:

    [image: test results in the NUnit Test Adapter for Visual Studio]

     

    One caveat I discovered with the TestDriven.Net runner is its failure to recognize tests unless the specification references types from the NUnit.Framework namespace (e.g. TestFixtureAttribute, CategoryAttribute, use of Assert, etc.).  That is to say, it didn’t seem to be enough that the spec inherited from a base type with NUnit attributes; something in the derived class had to reference a type from the NUnit.Framework namespace for the test to be recognized.  Therefore, the TestDriven.Net results shown above were actually achieved by annotating the class with [Category("component")] explicitly.

     

    Other Stuff

    As a convenience, NUnit.Specifications also provides attributes for denoting categories of Unit, Component, Integration, Acceptance, and Subcutaneous, as well as a Catch class (similar to the one provided by the MSpec library) for working with exceptions.
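
    For example, assuming Catch mirrors MSpec’s familiar Catch.Exception() usage, an exception-focused spec can be written along these lines (again with hypothetical service types):

    public class when_creating_a_customer_with_no_request : ContextSpecification
    {
      static CustomerService _service;
      static Exception _exception;

      Establish context = () => _service = new CustomerService();

      // Catch.Exception captures the thrown exception rather than failing the spec outright.
      Because of = () => _exception = Catch.Exception(() => _service.CreateCustomer(null));

      It should_reject_the_request = () => _exception.ShouldBeType<ArgumentNullException>();
    }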

    You can obtain NUnit.Specifications from NuGet or grab the source from GitHub.
