Dependency Management in .Net: install2

On September 27, 2011, in Uncategorized, by derekgreer

Inspired by Rob Reynolds’ awesome post on extending NuGet’s command line, I decided to create my own extension for facilitating application-level, build-time retrieval of external dependencies along with all transitive dependencies. I struggled a bit with what to call the command since what it does is really what I believe the regular install command should be doing (i.e. installing transitive dependencies), so I decided to just call it install2. Here’s how to use it:

Step 1: Install the NuGet Extension package by running the following command:

$> NuGet.exe Install /ExcludeVersion /OutputDir %LocalAppData%\NuGet\Commands AddConsoleExtension

Step 2: Install the extension by running the following command:

$> NuGet.exe addExtension nuget.install2.extension

Step 3: Create a plain-text packages file (e.g. dependencies.config) listing out all the dependencies you need. For example:

NHibernate
Moq          4.0.10827

Step 4: Execute Nuget with the install2 extension command:

$> NuGet.exe install2 dependencies.config

If all goes well, you should see the following output:

Attempting to resolve dependency 'Iesi.Collections'.
Successfully installed 'Iesi.Collections'.
Successfully installed 'NHibernate'.
Successfully installed 'Moq 4.0.10827'.



Dependency Management in .Net: Get

On September 21, 2011, in Uncategorized, by derekgreer

[Update: This article refers to a tool which will no longer be maintained. Until such time as NuGet is updated to natively support these capabilities, consider using the plug-in described here.]

In my last article, I demonstrated how my team is currently using NuGet.exe from our rake build to facilitate application-level, build-time retrieval of external dependencies.  Since not everyone uses rake for their build process, I decided to create a simple tool that could be consumed by any build process.

To see how the tool works, follow these steps:

Step 1: From the command line, execute the following:

$> nuget install Get

Step 2: Change directory to the Get tools folder.

Step 3: Create a plain text file named dependencies.config and add the following package references:

Moq          4.0.10827

Step 4: Execute the following command:

$> get dependencies.config

The tool currently supports NuGet’s -Source, -ExcludeVersion, and -OutputDirectory switches.  From here, you just need to have it download to a central lib folder and adjust your project references as necessary.  Now stop checking in those assemblies! 🙂


In my last article, I discussed some of my previous experiences with dependency management solutions and set forth some primary objectives I believe a dependency management tool should facilitate. In this article, I’ll show how I’m currently leveraging NuGet’s command line tool to help facilitate my dependency management goals.

First, it should be noted that NuGet was designed primarily to help .Net developers more easily discover, add, update, and remove dependencies to externally managed packages from within Visual Studio. It was not designed to support build-time, application-level dependency management outside of Visual Studio. While NuGet wasn’t designed for this purpose, I believe it currently represents the best available option for accomplishing these goals.

Approach 1

My team and I first started using NuGet for retrieving application dependencies at build-time a few months after its initial release, though we’ve evolved our strategy a bit over time. Our first approach used a batch file we named install-packages.bat that used NuGet.exe to process a single packages.config file located in the root of our source folder and download the dependencies into a standard \lib folder. We would then run the batch file after adding any new dependencies to the packages.config and proceed to make assembly references as normal from Visual Studio. We also use Mercurial as our VCS and added a rule to our .hgignore file to keep from checking in the downloaded assemblies. To ensure a freshly downloaded solution obtained all of its needed dependencies, we just added a call to our batch file from a Pre-build event in one of our project files. Voilà!
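The Pre-build event itself was just a call to the batch file; something along these lines (the relative path is hypothetical and depends on your source layout):

```shell
call "$(ProjectDir)..\..\build\install-packages.bat"
```

Since Visual Studio evaluates the $(ProjectDir) macro at build time, any developer cloning the repository gets their dependencies pulled down on the first build without any extra setup.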

Here’s an example of our single packages.config file (note, it’s just a regular NuGet config file, the same kind NuGet normally stores in each project folder):

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Antlr" version="" />
  <package id="Castle.Core" version="2.5.1" />
  <package id="Iesi.Collections" version="" />
  <package id="NHibernate" version="" />
  <package id="FluentNHibernate" version="" />
  <package id="Machine.Specifications" version="" />
  <package id="Machine.Fakes" version="" />
  <package id="Machine.Fakes.Moq" version="" />
  <package id="Moq" version="4.0.10827" />
  <package id="Moq.Contrib" version="0.3" />
  <package id="SeleniumDotNet-2.0RC" version="" />
  <package id="AutoMapper" version="" />
  <package id="Autofac" version="" />
  <package id="Autofac.Mvc3" version="" />
  <package id="Autofac.Web" version="" />
  <package id="CassiniDev" version="" />
  <package id="NDesk.Options" version="0.2.1" />
  <package id="log4net" version="1.2.10" />
  <package id="MvcContrib.Mvc3.TestHelper-ci" version="" />
  <package id="NHibernateProfiler" version="" />
  <package id="SquishIt" version="0.7.1" />
  <package id="AjaxMin" version="4.13.4076.28499" />
  <package id="ExpectedObjects" version="" />
  <package id="RazorEngine" version="2.1" />
  <package id="FluentMigrator" version="" />
  <package id="Firefox" version="3.6.6" />
</packages>


Here’s the batch file we used:

@echo off
set SCRIPT_DIR=%~dp0
set NUGET=%SCRIPT_DIR%..\tools\NuGet\NuGet.exe
set PACKAGES=%SCRIPT_DIR%..\src\packages.config
set DESTINATION=%SCRIPT_DIR%..\lib\
set LOCALCACHE=C:\Packages\
set CORPCACHE=\\corpShare\Packages\

echo [Installing NuGet Packages]

echo [Installing From Local Machine Cache]
%NUGET% install %PACKAGES% -OutputDirectory %DESTINATION% -Source %LOCALCACHE%

echo [Installing From Corporate Cache]
%NUGET% install %PACKAGES% -OutputDirectory %DESTINATION% -Source %CORPCACHE%

echo [Installing From Internet]
%NUGET% install %PACKAGES% -OutputDirectory %DESTINATION%

echo [Copying To Local Machine Cache]
xcopy /y /d /s %DESTINATION%*.nupkg %LOCALCACHE%

echo Done


This batch file uses NuGet to retrieve dependencies first from a local cache, then from a corporate-level cache, then from the default NuGet feed. It then copies any newly retrieved packages to the local cache.  I don’t remember whether NuGet had caching when this was first written, but we decided to keep our own local cache because NuGet only seemed to cache packages retrieved from the default feed. We used the corporate cache as a sort of poor man’s private repository for things we didn’t want to push up to the public feed.

The main drawback to this approach was that we had to keep up with all of the transitive dependencies. When specifying a packages.config file, NuGet.exe only retrieves the packages listed in the file. It doesn’t retrieve any of the dependencies of the packages listed in the file.
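To make the gap concrete, here’s a toy Ruby sketch of the transitive walk that NuGet.exe skips when given a packages.config file. The package graph below is hypothetical (the real dependency metadata lives in each package’s .nuspec file on the feed); the point is only that listing a single top-level package should pull down everything reachable from it:

```ruby
# Hypothetical dependency graph; in reality this comes from .nuspec metadata.
DEPENDENCIES = {
  "FluentNHibernate" => ["NHibernate"],
  "NHibernate"       => ["Iesi.Collections", "Antlr"],
  "Iesi.Collections" => [],
  "Antlr"            => []
}

# Walk the graph depth-first, collecting every package reachable from root.
def resolve(package, seen = [])
  return seen if seen.include?(package)
  seen << package
  DEPENDENCIES.fetch(package, []).each { |dep| resolve(dep, seen) }
  seen
end

# Listing only FluentNHibernate still pulls down all four packages.
puts resolve("FluentNHibernate").inspect
```

With a packages.config, NuGet.exe at the time performed none of this walk, which is why every package in the graph had to be listed explicitly.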

Approach 2

In an attempt to improve upon this approach, we moved the execution of NuGet.exe into our rake build. In doing so, we were able to eliminate the need to specify transitive dependencies by ditching the use of the packages.config file in favor of a Ruby dictionary. We also removed the Pre-Build rule in favor of just running rake prior to building in Visual Studio.

Here is our dictionary which we store in a packages.rb file:

packages = [
[ "FluentNHibernate",              "" ],
[ "Machine.Specifications",        "" ],
[ "Moq",                           "4.0.10827" ],
[ "Moq.Contrib",                   "0.3" ],
[ "Selenium.WebDriver",            "2.5.1" ],
[ "Selenium.Support",              "2.5.1" ],
[ "AutoMapper",                    "" ],
[ "Autofac",                       "" ],
[ "Autofac.Mvc3",                  "" ],
[ "Autofac.Web",                   "" ],
[ "NDesk.Options",                 "0.2.1" ],
[ "MvcContrib.Mvc3.TestHelper-ci", "" ],
[ "NHibernateProfiler",            "" ],
[ "SquishIt",                      "0.7.1" ],
[ "ExpectedObjects",               "" ],
[ "RazorEngine",                   "2.1" ],
[ "FluentMigrator",                "" ],
[ "Firefox",                       "3.6.6" ],
[ "FluentValidation",              "" ],
[ "log4net",                       "1.2.10" ]
]

configatron.packages = packages


Here are the pertinent sections of our rakefile:

require 'rubygems'
require 'configatron'

LIB_PATH = "lib"      # destination folder for downloaded packages; adjust to your tree
TOOLS_PATH = "tools"  # location of NuGet.exe; adjust to your tree
FEEDS = ["//corpShare/Packages/", "" ]

require './packages.rb'

task :default => ["build:all"]

namespace :build do

	task :all => [:clean, :dependencies, :compile, :specs, :package]

	task :dependencies do
		configatron.packages.each do | package |
			FEEDS.each do | feed |
				!(File.exists?("#{LIB_PATH}/#{package[0]}")) and
					sh "#{TOOLS_PATH}/NuGet/nuget Install #{package[0]} -Version #{package[1]} -o #{LIB_PATH} -Source #{feed} -ExcludeVersion" do | cmd, results | cmd end
			end
		end
	end
end



Another change we made was to use the -ExcludeVersion switch, which enables us to set up the Visual Studio references one time without having to change them every time we upgrade versions. Ideally, I’d like to avoid having to reference transitive dependencies altogether, but I haven’t come up with a clean way of doing this yet.

Approach 2: Update

As of version 1.4, NuGet will now resolve a package’s dependencies (i.e. transitive dependencies) from any of the provided sources (see workitem 603). This allows us to modify the above script to issue a single call to nuget:

    task :dependencies do
        feeds = FEEDS.map { |x| "-Source " + x }.join(' ')
        configatron.packages.each do | package |
            next if File.exists?("#{LIB_PATH}/#{package[0]}")
            sh "nuget Install #{package[0]} -Version #{package[1]} -o #{LIB_PATH} #{feeds} -ExcludeVersion" do | cmd, results | cmd end
        end
    end


While NuGet wasn’t designed to support build-time, application-level dependency management outside of Visual Studio in the way demonstrated here, it suits my team’s needs for now. My hope is NuGet will eventually support these scenarios more directly.


Dependency Management in .Net

On September 18, 2011, in Uncategorized, by derekgreer

I started my career as a programmer developing on Unix platforms, primarily writing applications in ANSI C and C++.  Due to a number of factors, including the platform dependency of C/C++ libraries, the low-level nature of the language and the immaturity of the Internet, code reuse in the form of reusable libraries wasn’t as prevalent as it is today.  Most of the projects I developed back then didn’t have a lot of external dependencies and the code I reused across projects was checked out and compiled locally as part of my build process.  Then came Java.

When I first started developing in Java, I remember being excited over the level of community surrounding the platform.  The Java platform inspired numerous open source projects, due both to the platform’s architecture and the increasing popularity of the Internet.  The Apache Jakarta Project in particular was a repository for many of the most popular frameworks at the time.  The increase in the use of open source libraries during this time, along with some conditioning from the past, helped forge a new approach to dependency management.

The Unix development community had long since established best practices around the use of source control, and one of the practices long discouraged was checking in binaries and assets generated by your project.  Helping facilitate this practice was Apache’s Ant framework, an XML-based Java build library.  One of the tasks provided by Ant was <get>, which allowed for the retrieval of files over HTTP.  A typical scenario was to set up an internal site which hosted all the versioned libraries shared by an organization and to use Ant build files to download the libraries locally (if not already present) when compiling the application.  The task used for retrieving the dependencies effectively became the manifest for what was required to reproduce the build represented by a particular version of an application.  The shortcoming of this approach, however, was the lack of standards around setting up distribution repositories and dealing with caching.  Enter Maven.  Maven was a second-generation Java build framework which standardized the dependency management process.  Among other things, Maven introduced a schema for denoting project dependencies, local caching, and recommendations around repository setup and versioning conventions.
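As a sketch of that Ant-era pattern (the repository URL and jar name below are hypothetical), a build file target might have looked something like this, using the usetimestamp attribute so the file is only fetched when missing or stale:

```xml
<!-- Fetch shared libraries from an internal HTTP repository if they
     aren't already present locally; paths and URL are hypothetical. -->
<target name="get-dependencies">
  <mkdir dir="lib"/>
  <get src="http://repo.example.com/libs/commons-logging-1.0.4.jar"
       dest="lib/commons-logging-1.0.4.jar"
       usetimestamp="true"/>
</target>
```

Because the versioned jar names were spelled out in the build file, the target doubled as the manifest of exactly which library versions a given build required.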

After developing on the Java platform for several years, I landed in a group which decided to rewrite the project I was assigned to from Java to .Net.  After some reorganization, I found myself working alongside new team members whose background was primarily in Microsoft based technologies.  I soon discovered that the typical practice within the Microsoft community was to check in any dependencies needed by a project.  This certainly added a level of convenience for getting projects set up, but no strategy existed for effectively managing versioned distributions of common libraries or easily discovering what versions of what dependencies a project used.

Around this time, Microsoft released beta 2 of the .Net framework and my team decided to upgrade our fledgling project to the new version.  Along with the 2.0 version of the framework came MSBuild, Microsoft’s new build engine.  While a port of Ant was available for the .Net framework at the time, my team decided to go with MSBuild since Visual Studio used it as its underlying build solution.  Unfortunately, MSBuild didn’t provide tasks for downloading dependencies, so I set out to write my own set of tasks which allowed us to manage dependencies “Maven-style”.  While these new tasks provided the desired capability, the strategy proved to be too foreign a concept for the rest of my team resulting in a return to just checking in all dependencies.  Several years later I made another attempt at introducing dependency management to a different .Net team, this time using NAnt, though I believe the group decided to return to using MSBuild and checking everything in again after I left the company.

Around mid-2010, I heard that a new .Net package management system was in the works named “OpenWrap”.  It wasn’t ready at the time, but I was excited that the community seemed to be moving in the right direction.  Not too long after the announcement of OpenWrap came an announcement from Microsoft that they had joined forces with an existing .Net package management project called Nubular (Nu).  The Nu project was a command line .Net package management system built upon RubyGems.  Nu was rewritten to remove the Ruby dependency and re-branded as NuPack which was shortly thereafter re-branded as NuGet.

NuGet was first released in January of 2011 and seems to have been well-received by many in the .Net community.  Its reception is likely due to the fact that it was designed to accommodate how the majority of .Net developers were already working.   Primarily designed as a Visual Studio extension, NuGet adds a new menu item under the project ‘References’ context-sensitive menu for referencing packages, along with adding a Package Manager Console for integrating PowerShell usage and (as of version 1.4) a Package Visualizer which provides various graphical diagrams for visualizing dependencies.  The NuGet team also provides a separate command-line utility (NuGet.exe) which adds the ability to create and publish your own packages.

The availability of a good .Net dependency management tool has been long overdue and NuGet addresses this need in a way palatable to most .Net development teams.  That said, there are some dependency management scenarios I wish the NuGet team had put more emphasis on, namely build-time retrieval of dependencies and application level management independent of Visual Studio integration.

NuGet works a little differently than the other approaches I’ve used in the past in that its primary focus isn’t to facilitate the build-time retrieval of dependencies, but rather to make it easy to add, update, and remove project references to external libraries from within Visual Studio.  When using NuGet, it’s expected that you’ll still be checking in any dependencies referenced by your project (though solutions have been set forth to facilitate source-only commits).  While the NuGet.exe command line tool can be used to facilitate a more traditional approach to dependency management, the NuGet team’s focus on Visual Studio integration imposes some limitations on what can be done without a bit of supplemental infrastructure and perhaps a bit of compromise.

While I appreciate the value offered through NuGet’s Visual Studio integration (without which the tool may have suffered in its reception), I would have preferred the team had started with the following key scenarios:

  1. Provide a command line tool to retrieve, update, and remove assets along with transitive dependencies, independent of any coupling with Visual Studio.

  2. Use a single, plain-text manifest file for listing dependencies to retrieve.

  3. Allow transitive dependencies to be retrieved from any specified source.

  4. Provide options for extracting to versioned or non-versioned destination folders as well as a single target destination folder (e.g. “lib”).

The support of these scenarios would certainly have influenced the evolution of NuGet’s Visual Studio integration, but while the underlying implementation may have differed, I believe a similar user experience could still have been implemented.

In my next article, I’ll show how my team is currently leveraging NuGet’s command line tool to facilitate dependency management needs apart from the tool’s Visual Studio integration.  Stay tuned!


Effective Tests: Acceptance Tests

On September 5, 2011, in Uncategorized, by derekgreer

In the last installment of our series, we discussed the topic of Context Obscurity along with strategies for avoiding the creation of obscure tests. As the final topic of this series, we’ll take an introductory look at the practice of writing Automated Acceptance Tests.

Acceptance Tests

Acceptance testing is the process of validating the behavior of a system from end-to-end. That is to say, acceptance tests ask the question: “When all the pieces are put together, does it work?” Often, when components are designed in isolation at the component or unit level, issues are discovered at the point those components are integrated together. Regression in system behavior can also occur after an initial successful integration of the system due to on-going development without the protection of an end-to-end regression test suite.

Automated acceptance testing is accomplished by scripting interactions with a system along with verification of an observable outcome. For Graphical User Interface applications, acceptance tests typically employ the use of UI-automated testing tools. Examples include Selenium, Watir and WatiN for testing Web applications; ThoughtWorks’ Project White for testing Windows desktop applications; and Window Licker for Java Swing-based applications.

The authorship of acceptance tests falls into two general categories: collaborative and non-collaborative. Collaborative acceptance tests use tools which separate a testing grammar from the test implementation, allowing non-technical team members to collaborate on the writing of acceptance tests. Tools used for authoring collaborative acceptance tests include FitNesse, Cucumber, and StoryTeller. Non-collaborative acceptance tests combine the grammar and implementation of the tests; they are typically written in the same language as the application being tested and can be authored using traditional xUnit testing frameworks.

Acceptance Test Driven Development

Acceptance Test Driven Development (A-TDD) is a software development process in which the features of an application are developed incrementally to satisfy specifications expressed in the form of automated acceptance tests, where the feature implementation phase is carried out using the Test-Driven Development methodology (i.e. Red/Green/Refactor).

The relationship of A-TDD to TDD can be expressed as two concentric processes where the outer process represents the authoring of automated acceptance tests which serve as the catalyst for an inner process which represents the implementation of features developed following the TDD methodology. The following diagram depicts this relationship:




When following the A-TDD methodology, one or more acceptance tests are written to reflect the acceptance criteria of a given user story. Based upon the expectations of the acceptance tests, one or more components are implemented using the TDD methodology until the acceptance tests pass. This process is depicted in the following diagram:




An Example

The following is a simple example of an automated acceptance test for testing a feature to be developed for a Web commerce application. This test will validate that the first five products in the system are displayed to the user upon visiting the landing page for the site. The tools we’ll be using to implement our acceptance test are the Machine.Specifications library (a.k.a. MSpec) and the Selenium Web UI testing library.

A Few Recommendations

When testing .Net Web applications on a Windows platform, the following are a few recommendations you may want to consider for establishing your acceptance testing infrastructure:

  • Use IIS Express or a portable Web server such as CassiniDev.
  • Use a portable instance of Firefox or other targeted browser supported by your selected UI automated testing library.
  • Establish testing infrastructure which allows single instances of time-consuming processes to be started once for all acceptance tests. Examples of such processes would include starting up Web server and/or browser processes and configuring any Object-Relational Mapping components (e.g. NHibernate SessionFactory initialization).
  • Establish testing infrastructure which performs a complete setup and tear down of the application database. Consider setting up multiple strategies for running full test suites vs. running the current use case under test.


To keep our example simple, we’ll assume our web site is already deployed as the default site on localhost port 80, that the database utilized by the application is already installed and configured, that our current machine has Firefox installed and that each test will be responsible for launching an instance of Selenium’s FirefoxDriver.

Here’s our acceptance test:

    [Subject("List Products")]
    public class when_a_user_requests_the_default_view
    {
        static Database _database;
        static FirefoxDriver _driver;

        Establish context = () =>
            {
                // Establish known database state
                _database = new Database().Open();
                Enumerable.Range(0, 10).ForEach(i => _database.AddProduct(i));

                // Start the browser
                _driver = new FirefoxDriver();
            };

        Cleanup after = () => _driver.Close();

        Because of = () => _driver.Navigate().GoToUrl("http://localhost/");

        It should_display_the_first_five_products = () =>
            _driver.FindElements(By.ClassName("customer")).Count().ShouldEqual(5);
    }

In this test, the Establish delegate is used to set up the expectations for this scenario (i.e. the context). Those expectations include the presence of at least five product records in the database and that a Firefox browser is ready to use. The Because delegate is used to initiate our single action which should trigger our expected outcome. The It delegate is then used to verify that a call to the Selenium WebDriver’s FindElements() method returns exactly five elements with a class of ‘customer’. Upon completion of the test, the Cleanup delegate is used to close the browser.


The intent of this article was merely to introduce the topic of automated acceptance testing since a thorough treatment of the subject is beyond the scope of this series. For further reading on this topic, I highly recommend the book Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. While written from a Java perspective, this book is an excellent guide to Acceptance Test Driven Development and test writing practices in general.

This article concludes the Effective Tests series.  I hope it will serve as a useful resource for the software development community.
