Aspiring Craftsman

pursuing well-crafted software

    Perhaps Too Much Validation


    Several factors have influenced my coding style over the years, leaving me with a preference for lean code syntax. I’ve been developing for quite a while, so it would be hard to pinpoint exactly when, where, or from whom I’ve picked up various preferences, but to name a few, I prefer: code that only includes comments for public APIs or to explain algorithms; code that is free of regions, explicit default access modifiers, and unused using statements; reliance upon convention over configuration (both to eliminate repetitive tasks and to eliminate unnecessary code); encapsulating excessive parameters into a Parameter Object; avoidance of excessive use of attributes/annotations (actually, I’d eliminate them completely if I could); and, of course, deleting dead code. One special case of dead code that I often see is superfluous validation.

    Perhaps you’ve seen code like this:

    public class MyService
    {
        public void DoSomething(IDependencyA dependencyA, IDependencyB dependencyB, IDependencyC dependencyC)
        {
            if(dependencyA is null)
            {
                throw new ArgumentNullException(nameof(dependencyA));
            }
    
            if(dependencyB is null)
            {
                throw new ArgumentNullException(nameof(dependencyB));
            }
    
            if(dependencyC is null)
            {
                throw new ArgumentNullException(nameof(dependencyC));
            }
        }
    
        
    }
    

    Perhaps you even think this is a best practice. Is it? As with many things, the answer is really: it depends. One of the things that has greatly shaped my views on several aspects of software development over the years is adopting Test-Driven Development. The “test” part of the name is really a hold-over from adapting the practice of writing Unit Tests to driving design. With Unit Testing, you’re testing the code you’ve written. With Test-Driven Development, you’re constraining the design of the code to meet a set of specifications. It’s really quite a difference, and one you may not fully appreciate unless you fully buy into the practice for an extended period of time.

    One of the side-effects of practicing TDD is that you don’t write code unless it’s needed to satisfy a failing test. The use of code coverage tools is basically superfluous for TDD practitioners. What, however, does this have to do with validation?

    When driving out an implementation through a series of executable specifications (i.e. an objective list of exactly how the software should work), we may end up writing code which technically could be called in a way that would result in exceptions or logical errors, but in practice never is. As it relates to this topic, all the code we write can be grouped into two categories: public and private. In this sense I’m not talking about the access modifiers we place upon the code artifacts themselves, but the intended use of the code. Is the code you’re writing going to be used by others, or is it just code we’re calling internally within our applications? If it’s code you’re driving out through TDD which others will be calling, then you should have specifications which describe how the code will react when used correctly as well as incorrectly, and thus you will have the appropriate amount of validation. If it isn’t code anyone else will be, or currently is, calling (see also YAGNI), then the components which do call it will have been designed such that they don’t call the component incorrectly, rendering such validation useless.
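    For the public case, a specification drives out the guard clause rather than the guard clause appearing speculatively. The following is a minimal sketch using xUnit (an assumption; any test framework works), with a single dependency standing in for brevity:

```csharp
using System;
using Xunit;

public interface IDependencyA { }

public class MyService
{
    public void DoSomething(IDependencyA dependencyA)
    {
        // This guard exists because a failing spec demanded it.
        ArgumentNullException.ThrowIfNull(dependencyA);
    }
}

public class MyServiceSpecs
{
    // An executable specification describing how the public API
    // reacts when used incorrectly.
    [Fact]
    public void DoSomething_throws_when_dependencyA_is_null()
    {
        var ex = Assert.Throws<ArgumentNullException>(
            () => new MyService().DoSomething(null!));

        Assert.Equal("dependencyA", ex.ParamName);
    }
}
```

For purely internal code driven out the same way, no such spec exists, and therefore no such guard is ever written.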

    Let’s consider our code again:

    public class MyService
    {
        public void DoSomething(IDependencyA dependencyA, IDependencyB dependencyB, IDependencyC dependencyC)
        {
            if(dependencyA is null)
            {
                throw new ArgumentNullException(nameof(dependencyA));
            }
    
            if(dependencyB is null)
            {
                throw new ArgumentNullException(nameof(dependencyB));
            }
    
            if(dependencyC is null)
            {
                throw new ArgumentNullException(nameof(dependencyC));
            }
        }
    
        
    }
    

    If this is an internal service that isn’t going to be called by any code except other components within your application, we have 14 lines of unneeded code that are just adding noise. I’ve worked in shops where every class in an application or library was coded this way, effectively adding hundreds or thousands of lines of unneeded code. Like regions, comments, or poorly factored code, this adds to the cognitive load required to read through and understand the code, and it is ultimately unnecessary.

    We could, of course, lessen the syntax noise by using the ThrowIfNull() method of System.ArgumentNullException as demonstrated in the following code:

    using static System.ArgumentNullException;
    
    public class MyService
    {
        public void DoSomething(IDependencyA dependencyA, IDependencyB dependencyB, IDependencyC dependencyC)
        {
            ThrowIfNull(dependencyA);
            ThrowIfNull(dependencyB);
            ThrowIfNull(dependencyC);
        }
    
        
    }
    

    If we needed validation other than checking for null, we could even write our own custom code providing similarly terse improvements. This, however, misses the point. If you’re practicing TDD and this component is only ever called by other code within this project, it’s effectively unused code. Sure, it would guard against bad practices such as reusing the code within a different context (potentially without adding equivalent specifications ensuring the code is used properly), having others modify the project code without using TDD, etc., but that’s speculative planning for a future that may never come.
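    For completeness, such a hand-rolled guard might look like the following. This is a hypothetical sketch; the Guard type and its method names are my own, not an established API, and it relies on the CallerArgumentExpression attribute available in .NET 6+:

```csharp
using System;
using System.Runtime.CompilerServices;

// Hypothetical guard helper providing terse, reusable validation.
public static class Guard
{
    public static T NotNull<T>(T value,
        [CallerArgumentExpression("value")] string? name = null)
        where T : class
        => value ?? throw new ArgumentNullException(name);

    public static string NotNullOrEmpty(string value,
        [CallerArgumentExpression("value")] string? name = null)
        => string.IsNullOrEmpty(value)
            ? throw new ArgumentException("Value cannot be null or empty.", name)
            : value;
}

// Usage (illustrative):
// public MyService(IDependencyA dependencyA)
// {
//     _dependencyA = Guard.NotNull(dependencyA);
// }
```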

    The bottom line is, you already have guards against the code being used incorrectly. It’s called the tests!

    For Whom is this Container?


    Several of the messaging platforms in the .Net space have pretty rudimentary APIs (e.g. RabbitMq, Kafka) which require quite a bit of boilerplate code to get a simple message published and subscribed. You could turn to one of the Conforming Abstraction libraries such as NServiceBus or MassTransit, but perhaps you don’t really want a lowest-common-denominator API, you don’t like something about how it creates the messages or topic/queue artifacts, or you simply want a fluent API expressed in terms of the native platform’s nomenclature and behavior. This might lead you down the road of creating your own KafkaBus, SQSBus, RabbitMqBus, etc. that feels like the API you wish the original development team had provided to begin with. Ah, but now you have a dilemma: frameworks such as this tend to require a number of components you’ll need to compose, many of which you may want to allow users to configure (e.g. serialization needs, consumer class conventions, logging, produce and consume pipelines, etc.). You could write hand-rolled factories, builders, singletons, etc. to facilitate the configuration and building of instances of your components, but you know that using a dependency injection container would make both development and long-term maintenance of your library much easier. But now you have another dilemma: are you going to tie your project to some open-source container? If so, which one? Should you support a handful of the most popular ones? Should you just rely upon the Service Locator pattern and provide configuration for end users should they want to resolve from their own containers?

    This was essentially the dilemma the ASP.Net Core team found themselves in when they set out to develop .Net Core. They had a fairly sizable framework with a lot of moving parts, many of which they wanted to allow the end user to configure. Earlier versions of ASP.Net MVC were built using a Service Locator pattern implementation which facilitated resolving from an open-source container of your choice. This, however, would no doubt have presented various design limitations, in addition to a lack of elegance in the resulting codebase, so the team decided to build the new platform from the ground up using dependency injection. They couldn’t, however, feasibly couple their framework to one of the already mature and successful open-source DI containers, for various reasons. This prompted them to write their own.

    One of the keys to understanding the capabilities offered by .Net Core’s container compared to other libraries is recognizing that they built it for their needs, not yours. There was no doubt recognition of the usefulness to some developers of having an out-of-the-box DI container, but they didn’t set out to build a container to compete with already extremely mature frameworks such as Autofac, StructureMap, or Ninject. For instance, because they weren’t developing user-interactive, client-facing applications, they didn’t have needs such as convention-based scanning registration, multi-tenancy support, or decorators. Their needs were pretty much limited to known types with lifetimes of transient, singleton, or scoped per request.
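    Those needs map directly onto the Microsoft container’s API. Here is a minimal console sketch, assuming the Microsoft.Extensions.DependencyInjection package; the IClock and Worker types are illustrative:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

public class Worker
{
    public Worker(IClock clock) { }
}

public static class Program
{
    public static void Main()
    {
        // Known types, explicitly registered with one of three lifetimes —
        // essentially the extent of what the framework itself needed.
        var services = new ServiceCollection();
        services.AddSingleton<IClock, SystemClock>();
        services.AddTransient<Worker>();

        using var provider = services.BuildServiceProvider();
        var worker = provider.GetRequiredService<Worker>();
    }
}
```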

    Oddly, there is now a whole new generation of .Net developers who have never used a DI container other than the one provided by the Microsoft Extensions suite, and who are missing out on exposure to solutions that containers like Autofac, Lamar, and others facilitate fairly easily. Largely, I believe, this is because no one has ever really told them: Microsoft didn’t really write that for you.
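    By contrast, convention-based scanning is a one-line affair in a container like Autofac. The following sketch uses Autofac’s RegisterAssemblyTypes API (the package is assumed as a dependency; the “Service” naming convention is illustrative):

```csharp
using System.Reflection;
using Autofac;

public static class CompositionRoot
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // Register every type whose name ends in "Service" against its
        // interfaces — new components are picked up automatically, with
        // no bootstrapper edits required.
        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
               .Where(t => t.Name.EndsWith("Service"))
               .AsImplementedInterfaces()
               .InstancePerLifetimeScope();

        return builder.Build();
    }
}
```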

    Pragmatic Deferral


    Software engineering is often about selecting the right trade-offs. While deferring feature development is often somewhat straightforward, based upon speculation about return on investment, and generally decided by the customer, marketing, sales, or product people, low-level implementation decisions are typically made by the development team or individual developers and can often prove a bit more contentious among teams with a plurality of strong opinions. This is where principles like YAGNI (You Aren’t Going to Need It) or the Rule of Three have often been set forth as guiding heuristics.

    While I generally advise the teams I coach to allow the executable specifications (i.e. the tests) to drive emergent design, and to defer the introduction of ancillary libraries, frameworks, patterns, and custom infrastructure until they’re needed, there is a level of pragmatism that I employ when determining when to introduce such things.

    I’ve been a fan of Test-Driven Development for some time now and have practiced it for over a decade. One of the primary benefits of Test-Driven Development is having an objective measure guiding what needs to get built. For example, if the acceptance criteria for a User Story concern building a new Web API for a company’s custom B2B solution, your specs are going to drive out some sort of HTTP-based API. What the specs won’t dictate, however, are decisions such as whether to use an MVC framework or an IOC container, or whether to introduce a fluent validation library or an object-mapping library. Should we adhere strictly to principles like YAGNI or the Rule of Three for guidance here? My answer is: it depends.

    Deferring software decisions comes with quite a range of consequences. Some decisions, such as whether to select ASP.NET MVC at the outset of a .Net-based Web application, could cause quite a bit of rework if deferred until working with lower-level components started to reveal friction or duplication. Other decisions, such as deferring the introduction of an object-mapping library (e.g. AutoMapper) until the shape of the objects you’re returning actually differs from your entities, have essentially only positive consequences. But how do we know?

    The YAGNI principle is very similar to the firearm safety rule “The Gun is Always Loaded”. No, the gun isn’t always loaded … but it’s best to treat it like it is. Similarly, “You aren’t going to need it” doesn’t literally mean you won’t need it; it’s intended to help you avoid unnecessary work. That is, until it causes more work.

    In software engineering, the more you code, the more you’ll have to maintain. The Art of Not Doing Stuff, when correctly applied, can save companies as much or more money than building the right things. While I’m not religious these days, there’s a definition of the term “Hermeneutics” that I heard years ago from a Christian radio personality, Hank Hanegraaff. He would say: “Hermeneutics is the art and science of biblical interpretation”. He would go on to explain, it’s a science because it’s guided by a system of rules, but it’s an art in that you get better at it the more you do it. Having heard that explanation years ago, I have long felt these properties are equally descriptive of software development.

    For myself, I take a pragmatic approach to YAGNI in that I make selections for a number of things at the outset of a new project which I’ve recognized, through experience, have resulted in less friction down the road; and I defer choices which I reason to have little to no cost by implementing at the point a given User Story’s acceptance criteria drives the need. For example, I do start off setting up a Web project using ASP.NET MVC. I do set up end-to-end testing infrastructure. I do add an open source DI container and set up convention-based registration. These are things which I’ve found actually cause me more friction if I pretend I’m not going to need them. I don’t want to implement my own IHttpHandler and wait until I see the need for a robust routing and pipeline framework and have to go back and reimplement everything. I don’t want to be hand-rolling factories over and over and have to go back and modify code at the point enough duplication reveals the need for dependency injection, and I don’t want to edit a Startup.cs or other bootstrapper component each time a component has a new dependency. Outside of these few concerns, however, I do typically defer things until needed.

    Ultimately, this pragmatism isn’t an exception to the YAGNI rule so much as it is a judicious application of YAGNI within a larger strategy of practicing the art of maximizing the amount of work not done. In short, apply YAGNI when it makes you more agile, not less.

    Magical Joy


    In a segment of an interview with host Byron Sommardahl on The Driven Developer Podcast, recorded in the summer of 2021, Byron and I discussed a bit about a pattern I introduced to our project when we worked together in 2010 which Byron later dubbed “The Magical Joy Bus” 😂. That pattern was the Command Dispatcher pattern. We unfortunately didn’t have the time I would have liked to fully unpack my thoughts and experiences with using this pattern over the years, so I thought I’d share that here.

    In brief, the Command Dispatcher pattern is one where a central component is used to decouple a message issuer from a message handler. Many .Net developers have become familiar with this pattern through Jimmy Bogard’s open-source library MediatR. While I’ve never personally used the MediatR library, I have used a far simpler implementation throughout the years. Since my implementation was essentially a single class, I never felt particularly motivated to release it as an open-source library. I did, however, share my code with a former colleague a few years ago, who has since packaged up a slightly modified version of my original here.
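    In spirit, the core of such a dispatcher is tiny. The following is a hypothetical minimal sketch of the pattern (not my original implementation or MediatR’s API): a central component maps message types to handlers so the issuer never references a handler directly.

```csharp
using System;
using System.Collections.Generic;

// A handler for messages of type TMessage.
public interface IHandler<TMessage>
{
    void Handle(TMessage message);
}

// Central dispatch component: decouples message issuers from handlers.
public class Dispatcher
{
    private readonly Dictionary<Type, object> _handlers = new();

    public void Register<TMessage>(IHandler<TMessage> handler)
        => _handlers[typeof(TMessage)] = handler;

    public void Dispatch<TMessage>(TMessage message)
    {
        // The issuer only knows the message; the dispatcher locates the handler.
        if (_handlers.TryGetValue(typeof(TMessage), out var handler))
            ((IHandler<TMessage>)handler).Handle(message);
        else
            throw new InvalidOperationException(
                $"No handler registered for {typeof(TMessage).Name}.");
    }
}
```

Real implementations typically resolve handlers from a DI container by convention rather than registering them by hand, which is precisely where the “magic” discussed below comes from.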

    Back in 2010 and the following years, my motivation for using the pattern within the context of .Net Web applications was primarily to write clean controller actions, facilitate adherence to the Single Responsibility Principle within the Application Layer, and to eliminate the need for injecting extraneous controller or Application Service dependencies. For earlier versions of ASP.Net MVC, I still see it as a worthwhile pattern to implement. It certainly, however, has its drawbacks.

    As alluded to by Byron in my interview, the team I was working with back then didn’t quite like the “magic” involved with the design. The primary issue for my teammates was that you couldn’t easily navigate from a controller action to the message handler directly via Visual Studio’s “Edit.GoToDefinition” (i.e. F12) shortcut. This was an unfortunate shortcoming of the approach, but one over which I’ve never experienced a large degree of angst, as it was, in essence, no different from the process one must go through to locate the controller action invoked as the result of a given Web request. All convention-over-configuration approaches suffer some degradation in discoverability and navigation. Of course, the frequency with which developers find themselves needing to navigate from controllers to components within an Application Layer is really where the issue lies. A secondary issue with this pattern is the ceremony involved in declaring the messages and handlers such that they can be discovered at run time; it’s just a lot of extra typing that invites frequent mistakes. Applying convention-over-configuration to things such as DI registration or entity registration with Entity Framework perpetually reaps the benefit of less friction, whereas Command Dispatcher implementations typically result in perpetually more friction.

    We didn’t get around to discussing Byron’s intuition about the design all those years ago in the podcast, but Byron and my former colleagues weren’t alone in how they felt about the pattern. Over the years, I’ve introduced the pattern to two other teams, both of which expressed some of the same feelings of disdain over its impact on the codebase. Eventually, I came to the conclusion that, while I still saw the same benefits in the pattern’s implementation, there really was just too much friction in getting teams on board with its adoption.

    Fortunately, with the advent of .Net Core, which introduced the [FromServices] attribute, we can achieve the same benefits mentioned earlier by injecting handlers directly into controller actions:

        [HttpGet]
        public async Task<IActionResult> GetWidgets([FromQuery] GetPaginatedWidgetsRequest request, [FromServices] GetWidgetsRequestHandler handler)
        {
            return await handler.Handle(request).ToResult(r => new OkObjectResult(r), r => BadRequest());
        }
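    The handler itself stays small and focused. The following is a hypothetical sketch; the request, widget, and repository types are assumptions matching the names in the snippet above:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public record Widget(int Id, string Name);
public record GetPaginatedWidgetsRequest(int Page, int PageSize);

public interface IWidgetRepository
{
    Task<IReadOnlyList<Widget>> GetPageAsync(int page, int pageSize);
}

// Hypothetical Application Layer handler: one request, one responsibility,
// resolved via [FromServices] at the call site.
public class GetWidgetsRequestHandler
{
    private readonly IWidgetRepository _repository;

    public GetWidgetsRequestHandler(IWidgetRepository repository)
        => _repository = repository;

    public Task<IReadOnlyList<Widget>> Handle(GetPaginatedWidgetsRequest request)
        => _repository.GetPageAsync(request.Page, request.PageSize);
}

// Registration (assumed, in Program.cs or Startup.cs):
// services.AddScoped<GetWidgetsRequestHandler>();
```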
    

    This is my preferred approach today. While it allows us to keep our controllers clean; to write small, focused Application Layer handler classes; and to avoid injection of unused dependencies; it’s also easy for developers at any level to work with and maintains the standard navigation and debugging experience. Win-win!

    Nine Years Remote


    A recent inquiry from a recruiter about accepting a partially-remote position prompted me to reflect upon 9 years of working remotely as a software developer.

    When I first started working from home, attitudes were quite different than they are in today’s post COVID-19 world. Full time remote software development jobs were few and far between, and most employers that allowed working remotely full time did so due to factors other than a belief that it was more productive and cost-effective. Studies since have overwhelmingly shown that the majority were simply wrong.

    One interesting side-effect of the COVID-19 political entanglement of the past few years is the degree to which it forced an entire generation of closed-minded, micro-managing executives to consider (through necessity) that remote workforces, especially for primarily thoughtwork-based positions, were not only viable, but perhaps even superior.

    When our entire society started shutting down due to concerns over the COVID-19 virus, I actually hardly noticed at first. Having transitioned to full-time remote work in early 2014, I had long since become accustomed to working remotely by the time society started shutting down. Before landing my first full-time remote position, I had worked at a couple of companies which allowed working remotely a couple of days a week, so I had some notion of its viability even then.

    While I was already used to working remotely, the pandemic actually helped improve the lives of remote developers by remedying many of the productivity nuisances that plagued fully-remote and mixed teams. To a large extent, the primary issue remote workers faced prior to everything shutting down was the lack of remote-workforce accommodations, namely mature or provided collaboration tools (e.g. Slack, Zoom, Miro, etc.) and equal participation of remote workers on mixed teams. While David Fullerton, in a StackOverflow blog article written back in 2013, had proffered the wisdom that “If even one person on the team is remote, every single person has to start communicating online”, joining a mixed team still often meant the remote worker was marginalized: being the only one on a call while all their co-workers debated approaches around a conference table, watching a whiteboarding design session over a video camera while trying to make out what everyone was saying, or simply being left out of key social interactions and thus professionally disadvantaged in key business decisions due to the formation of cliques or absence from unplanned discussions. Conscientious employees working from home already knew they were far more efficient at home than in the office, and knew that non-conscientious workers were just as likely, or more so, to screw off at work as at home, but it took everyone being forced to do it for an extended period of time to hammer that into the heads of many executives who felt uncomfortable conducting business differently than they had in the 20th century.

    One absolutely huge thing that goes seemingly undiscussed is the financial impact of working remotely vs. commuting. While I commuted to the office for 20 years before transitioning to full-time remote, it wasn’t until I had become accustomed to working from home and was confronted with the idea of returning back to the world of the commuting zombies that my perspective changed with respect to that commute time. Prior to accepting a full time remote job in early 2014, my commute time was approximately 1 hour one way, and that was on a good day when there wasn’t some minor traffic incident which could easily (and fairly regularly did) add an extra 20-30 minutes to my time. Once I had become accustomed to working remotely, the idea of tacking on an extra 5-10 hours a week in commute time to switch back to a job requiring you to work in the office seemed more like giving my time away for free. Prior to that, all those hours in the vehicle dealing with idiots on the road was just an assumed necessity. Driving to work was like driving anywhere else. Of course in the 20th century you had to drive to buy a new pair of shoes. Of course you had to drive to see a newly released movie. Of course you had to drive to go get a cheeseburger meal at McDonald’s. And of course, you had to drive to get to work. You didn’t think twice about it. You didn’t view commuting to work as 5-10 hours of your personal time given over to your employer for free for the privilege of employment any more than you’d have thought that McDonald’s owed you money for driving to their store to eat. Sure, you could listen to music, or talk radio, or a podcast, or an audio book. It wasn’t, however, really what you would have chosen to be doing at 6:30 in the morning. It wasn’t your time.

    Prior to COVID, trying to explain this perspective to those still in the office world was very much like Morpheus trying to explain to Neo that he’s in the Matrix. Sure, recruiters or employers could understand the logic of an argument that commuting is time given to an employer essentially for free, but many would think it ridiculous for you to go so far as to demand a higher salary for accepting a position requiring a commute (when you knew it wasn’t really required to do the job). This doesn’t even account for wear and tear on vehicles, gas expenses, or the little micro-batches of time you end up spending on things like food prep, additional “get ready” time, more laundry, etc. that you wouldn’t otherwise spend if you were staying home for the day. Moreover, even when you compensate someone for their time, there’s a threshold beyond which the standard hourly rate isn’t worth it. Okay, you may be willing to commute if your employer is going to compensate you for the extra 5-10 hours on top of the 40 you’re going to spend sitting in their cube farm under their fluorescent lighting (“Not near a window, Jim, because those seats are reserved for managers!”). Are you, however, willing to exchange that extra 5-10 hours a week for money to sit in the office for 45 hours? How about 50 hours? 60? At some point, it isn’t about whether you’re compensated or not. Hell, 40 hours a week really is too damn many hours to begin with. Add to that the meager time off that Americans get on average compared to much of the rest of the developed world. Hell, even plumbers and HVAC workers get paid for their commute time, and their job isn’t something that can be done remotely.

    Imagine if everyone actually accounted for these additional expenses when factoring in the pay they are willing to accept. This would likely amount to an extra 25-30% pay increase, accounting for time and travel expenses. For businesses on the fence about whether remote is better than on-site for their bottom line, this would certainly tip the scales. Currently, however, they aren’t forced to think this way. Or at least, many are still operating in a mindset that says they don’t have to think this way. When it really comes down to it, a culture of requiring anyone who can do their job remotely to work in the office is really stealing from your employees. Fortunately, COVID has corrected this situation: enough eyes have been opened to the benefits of remote work, and enough businesses have seen the waste that goes into buying or renting commercial real estate, that even as many businesses have begun attempting to force employees back into the office, enough employers now offer remote opportunities to give people a real choice.

    User Stories


    The use of User Stories has become fairly commonplace in the software industry. First introduced as an agile requirements-gathering process by Extreme Programming, User Stories arguably owe their popularity most to the adoption of the Scrum framework for which User Stories have become the de facto expression of its prescribed backlog.

    So what exactly is a User Story? Put simply, User Stories are a lightweight approach to expressing the desired needs of a software system. The idea behind them, introduced as simply “Stories” in the book Extreme Programming Explained: Embrace Change by Kent Beck, was to move away from rigid requirements gathering in process, form, and nomenclature alike. Beck explained that the very word “requirement” was an inhibitor to embracing change because of its connotations of absolutism and permanence. At their inception, the intended form of a story was an index card containing a short title, a simple description written in prose, and an estimate.

    The Three-Part Template

    In the late 1990s, a software company named Connextra was an early adopter of Extreme Programming. In contrast to the distinct roles defined by the Scrum framework, XP doesn’t prescribe any specific roles, but is intended to adapt to existing roles within an organization (e.g. project managers, product managers, executives, technical writers, developers, testers, designers, architects, etc.).

    Most of Connextra’s stories originated with members of their Marketing and Sales departments, who wrote down a simple description of the features they desired. This posed a problem for the development team, however: when the time came to have a conversation about a feature, the team often had difficulty locating the original stakeholder to begin the conversation. This led the team to formulate a 3-part template to help address the friction resulting from ambiguous requirement sources. Their 3-part template is as follows:

    	As a [type of user]
    	I want to [do something]
    	So that I can [get some benefit]
    

    Ironically, while the 3-part template has since become the de facto standard for authoring User Story descriptions, Scrum’s “Product Owner” role, most often filled by product development specialists acting as customer proxies, along with the use of agile planning tools such as Confluence, Planview, and Azure DevOps Boards, which capture who created a given story, has greatly diminished the need from which the template originated. Many teams, in cargo-cult fashion, continue to utilize the 3-part template even though the original need, identifying the author of the story in order to start the conversation, no longer exists. Change has occurred, but because many never understood the underlying impetus for the 3-part template, they were incapable of adapting to that change.

    Jeff Patton writes the following concerning the prevalent use of the 3-part story template in his book “User Story Mapping”:

    “… the template has become so ubiquitous, and so commonly taught, that there are those who believe that it’s not a story if it’s not written in that form. … All of this makes me sad. Because the real value of stories isn’t what’s written down on the card. It comes from what we learn when we tell the story.”

    Mike Cohn, author of many books on agile processes including “User Stories Applied” and “Agile Estimating and Planning” writes similarly:

    “Too often team members fall into a habit of beginning each user story with “As a user…” Sometimes this is the result of lazy thinking and the story writers need to better understand the product’s users before writing so many “as a user…” stories.”

    Cohn’s observations are spot on. In my experience, not only does this happen “too often”, it’s the rule, not the exception. It’s really just human nature: the moment a process becomes formulaic, teams begin to go through the motions without engaging their minds. This can be fine for manual tasks like brick-laying or cleaning a house, but it is detrimental to processes intended to promote communication. Sadly, many teams spend an inordinate amount of time on trappings like ensuring their requirements follow the 3-part story template rather than using the story as a tool for its original intent: a placeholder for a conversation.

    There and Back Again

    While not explicitly stated, the original idea behind Stories in Extreme Programming was to facilitate a conversation, not to define an objective goal. The agile movement started as a way to address issues in the industry’s largely failing attempts to apply manufacturing processes to software development. In particular, Stories were intended to address the underlying motivation for requirements (i.e. how teams determine what to build), not to themselves be requirements.

    In many ways, today’s User Stories have become the antithesis of what Kent Beck originally intended. Sadly, much of what is marketed as “agile” today has been corrupted by traditional-minded business analysts, product managers, and marketing agencies who never really understood the agile movement. User Stories have, to a large extent, become a casualty of these groups. We’ve gone from requirements to stories and back again. As described by Jeff Patton, “Stories aren’t a way to write better requirements, but a way to organize and have better conversations.”

    The Better Way

    Ultimately, the question companies seek to answer is: How do we determine the features which provide the best ROI for the business? While it may seem counterintuitive to some, customers aren’t generally the best source for determining what features to build. They can be a source, but they aren’t generally a team’s best source. Customers are, however, the best source for determining how customers currently work, what problems they face, and what friction is involved in any current processes. Various analysis techniques can be used to solicit customer opinions on desired features, but it’s best to rely upon such techniques merely as means to distill the problems currently faced by customers. From there, stories are best created with a simple title and a description of the customer’s problem written in prose with the intent for the description to serve as a starting point for a conversation with the team.

    The best way to determine what to build is as a member of a mature agile team. The operative word here is mature. What makes for a mature team is a Product Owner with a background in the problem domain, a Team Coach with deep knowledge of agile and lean processes, and 3-5 cross-functional developers weighted toward senior experience who have gone through the forming, storming, norming, and performing phases.

    User Stories shouldn’t be feature requests, but rather a placeholder for a conversation. A conversation with whom? With your team. About what? About how to iteratively solve the problems you learned from customers in small steps with frequent feedback. Product Owners should not bring requirements to a development team. There’s great power in collaboration. A smart team of 5 to 7 individuals including a subject matter expert (what the Product Owner should bring to the table) and a coach is a far better source for what features to build than just the customer or the Product Owner.

    An Example

    The following is an example story which more closely follows the original intent of Stories.

    Our scenario involves a company which provides a website allowing customers to create wedding and gift registries to send to others. In its current form, the site allows customers to pick from among existing vendors, but the company frequently receives requests from customers about specific products they’d like to see included. The current process involves the Sales team creating tickets for their Operations team to add new vendors to the site which involves updating the production database directly. Additionally, the work currently falls to one person whose job entails other operation tasks which often results in a delay to the timely fulfillment of customer requests.

    The following represents the story:

    Easily Manage Registry Products


    Description

    Our customers often want to add products that aren't part of our current vendor product list. This causes the sales team to constantly have to put in tickets and currently Margret is the only one that is working the tickets. We need a better solution!

    Note how the description is written in prose (i.e. in normal conversational language) and doesn’t follow the wooden 3-part template. Note also that the story doesn’t prescribe how to solve the problem. It just provides background on what the problem is and who it affects. It isn’t just that the story doesn’t dictate implementation details, but that it doesn’t dictate the solution at all. This is the ideal starting point for most stories. It’s a placeholder for a conversation about how to solve the problem.

    From here, the team would collaborate on the story to determine the best solution that results in the smallest feature increment which adds value to the end user. Several ideas may be discussed. The system could integrate with a 3rd-party content management system, allowing people within the company without SQL experience to update content. Alternately, the team may decide that adding a feature to allow customers to add custom products directly to their personal event registry is both easier, and scales far better than solutions requiring company employees to work tickets.

    As part of a story refinement session, the team may update the story with acceptance criteria to guide the implementation:

    Easily Manage Registry Products


    Description

    Our customers often want to add products that aren't part of our current vendor product list. This causes the sales team to constantly have to put in tickets and currently Margret is the only one that is working the tickets. We need a better solution!

    Acceptance Criteria

    When the customer navigates to the edit registry view
      it should contain a link for adding custom products

    When the customer clicks the add custom product link
      it should navigate to the add custom product view (note: see balsamiq wireframe attached)

    When the customer adds a new custom product with valid inputs
      it should add the custom product to the customers registry
      it should display a success message in the application banner
      it should navigate back to the edit registry page

    When the customer enters invalid custom product parameters
      it should show standard field level error messages
      it should not enable the save button

    While an Acceptance Criteria section isn’t mandatory, it can often be valuable for helping to frame the scope of the story, serving as a reminder to the team of the high-level plans discussed for deferred work, and/or serving as the team’s Definition of Done. For small teams involving just a few members, or for highly adaptive and collaborative teams, it may be enough to just write “We decided to add a feature to allow the customer to add their own products!”. The team may very well take the initial story description and rapidly iterate on a solution, deciding together when they think it’s done! (Gasp!) Of course, this level of informality is probably only suited to highly cohesive, highly functioning teams. For inexperienced to moderately experienced teams, some denotation of Acceptance Criteria would be advisable. The key point is, the story didn’t arrive to the team in the form of requirements, but as a placeholder for a conversation.

    Conclusion

    As the adoption of agile frameworks such as Scrum has become more mainstream, a number of practices have become formulaic, adopted by teams via a cargo-cult onboarding to agile practices without truly grasping what it means to be agile. The User Story has all but lost its original intent at the hands of teams who have done little more than slap agile labels onto Waterfall manufacturing processes. User Stories were never intended to be requirements, but rather a placeholder for a conversation with the development team. Let’s do better.

    .Net Project Builds with Node Package Manager


    A few years ago, I wrote an article entitled Separation of Concerns: Application Builds & Continuous Integration wherein I discussed the benefits of separating project builds from CI/CD concerns by creating a local build script which lives with your project. Not long after writing that article, I was turned on to what I’ve come to believe is one of the easiest tools I’ve encountered for managing .Net project builds thus far: npm.

    Most development platforms provide a native task-based build technology. Microsoft’s tooling for these needs is MSBuild: a command-line tool whose build files double as Visual Studio’s project and solution definition files. I used MSBuild briefly for scripting custom build concerns for a couple of years, but found it to be awkward and cumbersome. Around 2007, I abandoned use of MSBuild for creating builds and began using Rake. While it had the downside of requiring a bit of knowledge of Ruby, it was a popular choice among those willing to look outside of the Microsoft camp for tooling and had community support for working with .Net builds through the Albacore library. I’ve used a few different technologies since, but about 5 years ago I saw a demonstration of the use of npm for building .Net projects at a conference and I was immediately sold. When used well, it really is the easiest and most terse way to script a custom build for the .Net platform I’ve encountered.

    “So what’s special about npm?” you might ask. The primary appeal of using npm for building applications is that it’s easy to use. Essentially, it’s just an orchestration of shell commands.

    Tasks

    With other build tools, you’re often required to know a specific language in addition to learning special constructs peculiar to the build tool in order to create build tasks. In contrast, npm’s expected package.json file simply defines a map of named shell commands:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "echo Clean the project.",
        "restore": "echo Restore dependencies.",
        "compile": "echo Compile the project.",
        "test": "echo Run the tests.",
        "dist": "echo Create a distribution."
      },
      "author": "Some author",
      "license": "ISC"
    }
    

    As with other build tools, npm provides the ability to define dependencies between build tasks. This is done using pre- and post- lifecycle scripts. Simply put, for any script npm runs, it will first execute a script of the same name prefixed with “pre” when present, and will subsequently execute a script of the same name prefixed with “post” when present. For example:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "echo Clean the project.",
        "prerestore": "npm run clean",
        "restore": "echo Restore dependencies.",
        "precompile": "npm run restore",
        "compile": "echo Compile the project.",
        "pretest": "npm run compile",
        "test": "echo Run the tests.",
        "prebuild": "npm run test",
        "build": "echo Publish a distribution."
      },
      "author": "Some author",
      "license": "ISC"
    }
    

    Based on the above package.json file, issuing “npm run build” will result in running the tasks of clean, restore, compile, test, and build in that order by virtue of each declaring an appropriate dependency.
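    To make that ordering concrete, the pre- chain can be resolved mechanically. The following is a small illustrative sketch in plain Node (an approximation for explanation, not npm’s actual implementation) that walks the pre- chain for a given script name:

```javascript
// The scripts map from the package.json above (echo commands stand in
// for real build steps).
const scripts = {
  clean: "echo Clean the project.",
  prerestore: "npm run clean",
  restore: "echo Restore dependencies.",
  precompile: "npm run restore",
  compile: "echo Compile the project.",
  pretest: "npm run compile",
  test: "echo Run the tests.",
  prebuild: "npm run test",
  build: "echo Publish a distribution."
};

// Walk the pre- chain for a script name to compute the execution order.
function executionOrder(name, order = []) {
  const pre = scripts["pre" + name];
  if (pre) {
    // A pre- script of the form "npm run X" recursively triggers X's own chain.
    const match = pre.match(/npm run (\w+)/);
    if (match) executionOrder(match[1], order);
  }
  order.push(name);
  return order;
}

console.log(executionOrder("build").join(", "));
// clean, restore, compile, test, build
```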

    If you’re okay with giving up a fully-specified dependency chain in which a subset of the build can be initiated at any stage (e.g. running “npm run test” and having it trigger clean, restore, and compile first), the above orchestration can be simplified by installing the npm-run-all node dependency and defining a single pre- lifecycle script for the main build target:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "echo Clean the project.",
        "restore": "echo Restore dependencies.",
        "compile": "echo Compile the project.",
        "test": "echo Run the tests.",
        "prebuild": "npm-run-all clean restore compile test",
        "build": "echo Publish a distribution."
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    In this example, issuing “npm run build” will result in the prebuild script executing npm-run-all with the parameters clean, restore, compile, and test, which it will execute in the order listed.

    Variables

    Aside from understanding how to utilize the pre- and post- lifecycle scripts to denote task dependencies, the only other thing you really need to know is how to work with variables.

    Node’s npm command facilitates the definition of variables both via command-line parameters and via declared package properties. When npm executes, each of the properties declared within package.json is flattened and prefixed with “npm_package_”. For example, the standard “version” property can be used as part of a dotnet build to set the project version by referencing ${npm_package_version}:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "configuration": "Release",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=${npm_package_version}"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    
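    As a rough sketch of the idea (an approximation for illustration, not npm’s actual implementation), the flattening amounts to prefixing each property and joining nested property names with underscores:

```javascript
// Approximation of how npm exposes package.json properties as environment
// variables: each property is prefixed with "npm_package_" and nested
// objects are flattened with underscores.
function flattenPackage(obj, prefix = "npm_package") {
  const env = {};
  for (const [key, value] of Object.entries(obj)) {
    const name = `${prefix}_${key}`;
    if (value !== null && typeof value === "object") {
      Object.assign(env, flattenPackage(value, name));
    } else {
      env[name] = String(value);
    }
  }
  return env;
}

const env = flattenPackage({ name: "example", version: "1.0.0" });
console.log(env.npm_package_version); // 1.0.0
```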

    Command-line parameters can also be passed to npm and are similarly prefixed with “npm_config_” with any dashes (“-”) replaced with underscores (“_”). For example, the previous version setting could be passed to dotnet.exe in the following version of package.json by issuing the below command:

    npm run build --product-version=2.0.0
    
    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "configuration": "Release",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=${npm_config_product_version}"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    (Note: the parameter --version is an npm parameter for printing the version of npm being executed and therefore can’t be used as a script parameter.)
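    The naming rule itself is easy to sketch (illustrative only; the helper function is made up for demonstration):

```javascript
// Sketch of npm's flag-to-variable naming rule: strip the leading dashes,
// replace any remaining dashes with underscores, and prefix "npm_config_".
function configVariableName(flag) {
  return "npm_config_" + flag.replace(/^-+/, "").replace(/-/g, "_");
}

console.log(configVariableName("--product-version")); // npm_config_product_version
```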

    The only other important thing to understand about the use of variables with npm is that the method of dereferencing depends upon the shell used. When using npm on Windows, the default shell is cmd.exe. If using the default shell on Windows, the version parameter would need to be dereferenced as %npm_config_product_version%:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "configuration": "Release",
      "scripts": {
        "build": "dotnet build ./src/*.sln /p:Version=%npm_config_product_version%"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5"
      }
    }
    

    Until recently, I used a node package named “cross-env” which lets you normalize how you dereference variables regardless of platform. However, for several reasons (cross-env being placed in maintenance mode, the added dependency overhead, the syntax noise, and the need for advanced variable expansion cases such as default values), I’d now recommend supporting cross-platform execution by simply standardizing on a single shell (e.g. Bash). With the introduction of the Windows Subsystem for Linux and the virtual ubiquity of git for version control, most Windows development systems already have the bash shell. To configure npm to use bash at the project level, just create a file named .npmrc at the package root containing the following line:

    script-shell=bash
    

    Using Node Packages

    While not strictly necessary, there are many CLI node packages that can easily be leveraged to aid in authoring your builds. For example, a package named “rimraf”, which functions like Linux’s “rm -rf” command, can be used to implement a clean script that recursively deletes any temporary build folders created by previous builds. In the following package.json, the build script packs a NuGet package and outputs it to a dist folder in the package root. The rimraf command deletes this folder as part of the build script’s dependencies:

    {
      "name": "example",
      "version": "1.0.0",
      "description": "",
      "scripts": {
        "clean": "rimraf dist",
        "prebuild": "npm run clean",
        "build": "dotnet pack ./src/ExampleLibrary/ExampleLibrary.csproj -o dist /p:Version=${npm_package_version}"
      },
      "author": "John Doe",
      "license": "ISC",
      "devDependencies": {
        "npm-run-all": "^4.1.5",
        "rimraf": "^3.0.2"
      }
    }
    

    If you’d like to see a more complete example of npm at work, you can check out the build for ConventionalOptions which supports tasks for building, testing, packaging, and publishing NuGet packages for both release and prerelease versions of the library.

    Conventional Options


    I’ve really enjoyed working with the Microsoft Configuration libraries introduced with .Net Core approximately 5 years ago. The older XML-based API was quite a pain to work with, so the ConfigurationBuilder and associated types filled a long-overdue need for the platform.

    I had long since adopted a practice of creating discrete configuration classes populated and registered with a DI container over direct use of the ConfigurationManager class within components, so I was pleased to see the platform nudge developers in this direction through the introduction of the IOptions type.

    There were a few aspects of the prescribed use of the IOptions type I wasn't particularly fond of: needing to inject IOptions rather than the actual options type, taking a dependency upon the Microsoft.Extensions.Options package from my library packages, and the ceremony of binding the options to the IConfiguration instance. To address these concerns, I wrote some extension methods which took care of binding the type to my configuration by convention (i.e. binding a type with a suffix of Options to a section corresponding to the option type's prefix) and registering it with the container.

    I’ve recently released a new version of these extensions supporting several of the most popular containers as an open source library. You can find the project here.

    The following are the steps for using these extensions:

    Step 1

    Install ConventionalOptions for the target DI container:

    $> nuget install ConventionalOptions.DependencyInjection
    

    Step 2

    Add Microsoft’s Options feature and register option types:

      services.AddOptions();
      services.RegisterOptionsFromAssemblies(Configuration, Assembly.GetExecutingAssembly());
    

    Step 3

    Create an Options class with the desired properties:

        public class OrderServiceOptions
        {
            public string StringProperty { get; set; }
            public int IntProperty { get; set; }
        }
    

    Step 4

    Provide a corresponding configuration section matching the prefix of the Options class (e.g. in appsettings.json):

    {
      "OrderService": {
        "StringProperty": "Some value",
        "IntProperty": 42
      }
    }
    

    Step 5

    Inject the options into types resolved from the container:

        public class OrderService
        {
            public OrderService(OrderServiceOptions options)
            {
                // ... use options
            }
        }
    

    Currently ConventionalOptions works with Microsoft’s DI Container, Autofac, Lamar, Ninject, and StructureMap.

    Enjoy!

    Collaboration vs. Critique


    While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of the problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique.

    To help illustrate what I’ve seen happen countless times both in catch-up design sessions and code reviews, consider the following two scenarios:

    Scenario 1

    Tom and Sally are both developers on a team maintaining a large-scale application. Tom takes the next task in the development queue which happens to have some complex processes that will need to be addressed. Being the good development team that they are, both Tom and Sally are aware of the requirements of the application (i.e. how the app needs to work from the user’s perspective), but they have deferred design-level discussions until the time of implementation. After Tom gets into the process a little, seeing that the problem is non-trivial, he pings Sally to help him brainstorm different approaches to solving the problem. Tom and Sally have been working together for over a year and have become accustomed to these sorts of ad-hoc design sessions. As they begin discussing the problem, they each start tossing ideas out on the proverbial table, resulting in multiple approaches to compare and contrast. The nature of the discussion is such that neither Tom nor Sally is embarrassed or offended when the other points out flaws in a given design idea because there’s a sense of safety in their mutual understanding that this is a brainstorming session and that neither has thought in depth about the solutions being set forth yet. Tom throws out a couple of ideas, but ends up shooting them down himself as he uses Sally as a sounding board for the ideas. Sally does the same, but toward the end of the conversation suggests a slight alteration to one of Tom’s initial suggestions that they think may make it work after all. They end the session with a sense that they’ve worked together to arrive at the best solution.

    Scenario 2

    Bill and Jake are developers on another team. They tend to work in a more siloed fashion, but they do rely upon one another for help from time to time and they are required to do code reviews prior to their code being merged into the main branch of development. Bill takes the next task in the development queue and spends the better part of an afternoon working out a solution with a basic working skeleton of the direction he’s going. The next day he decides that it might be good to have Jake take a look at the design to make him aware of the direction. Seeing where Bill’s design misses a few opportunities to make the implementation more adaptable to changes in the future, Jake points out where he would have done things differently. Bill acknowledges that Jake’s suggestions would be better and would have probably been just as easy to implement from the beginning, but inwardly he’s a bit disappointed that Jake didn’t like his design as-is and that he has to do some rework. In the end, Bill is left with a feeling of critique rather than collaboration.

    Whether it’s a high-level UML diagram or working code, how one person tends to perceive feedback on the ideas comprising a potential solution has everything to do with timing. It can be the exact same feedback they would have received either way, but when the feedback occurs often makes a difference between whether it’s perceived as collaboration or critique. It’s all about when the conversation happens.

    Ditch the Repository Pattern Already


    One pattern that still seems particularly common among .Net developers is the Repository pattern. I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago.

    I had read several articles over the years advocating abandoning the Repository pattern in favor of other suggested approaches which served as a pebble in my shoe for a few years, but there were a few design principles whose application seemed to keep motivating me to use the pattern.  It wasn’t until a change of tooling and a shift in thinking about how these principles should be applied that I finally felt comfortable ditching the use of repositories, so I thought I’d recount my journey to provide some food for thought for those who still feel compelled to use the pattern.

    Mental Obstacle 1: Testing Isolation

    What I remember being the biggest barrier to moving away from the use of repositories was writing tests for components which interacted with the database.  About a year or so before I actually abandoned use of the pattern, I remember trying to stub out a class derived from Entity Framework’s DbContext after reading an anti-repository blog post.  I don’t remember the details now, but I remember it being painful and even exploring use of a 3rd-party library designed to help write tests for components dependent upon Entity Framework.  I gave up after a while, concluding it just wasn’t worth the effort.  It wasn’t as if my previous approach was pain-free, as at that point I was accustomed to stubbing out particularly complex repository method calls, but as with many things we often don’t notice friction to which we’ve become accustomed for one reason or another.  I had assumed that doing all that work to stub out my repositories was what I should be doing.

    Another principle that I picked up from somewhere (maybe the big xUnit Test Patterns book? … I don’t remember) that seemed to keep me bound to my repositories was that you shouldn’t mock types you don’t own.  I believed at the time that I should be writing tests for Application Layer services (which later morphed into discrete dispatched command handlers) and the idea of stubbing out either NHibernate or Entity Framework violated my sensibilities.

    Mental Obstacle 2: The Dependency Inversion Principle Adherence

    The Dependency Inversion Principle seems to be a source of confusion for many, which stems in part from the similarity of wording with the practice of Dependency Injection as well as from the fact that the principle’s formal definition reflects the platform from whence the principle was conceived (i.e. C++).  One might say that the abstract definition of the Dependency Inversion Principle was too dependent upon the details of its origin (ba dum tss).  I’ve written about the principle a few times (perhaps my most succinct being this Stack Overflow answer), but put simply, the Dependency Inversion Principle has as its primary goal the decoupling of the portions of your application which define policy from the portions which define implementation.  That is to say, this principle seeks to keep the portions of your application which govern what your application does (e.g. workflow, business logic, etc.) from being tightly coupled to the portions of your application which govern the low-level details of how it gets done (e.g. persistence to a Sql Server database, use of Redis for caching, etc.).
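    As a minimal sketch of the idea (illustrative JavaScript with hypothetical names, not production code or the principle’s canonical form): the policy layer depends only on an abstract contract it owns, while the implementation detail is supplied from outside.

```javascript
// Policy layer: governs WHAT the application does. It depends only on an
// abstract "notifier" contract, not on any concrete delivery mechanism.
class OrderWorkflow {
  constructor(notifier) {
    this.notifier = notifier;
  }
  complete(order) {
    // ...business logic would go here...
    this.notifier.send(`Order ${order.id} completed`);
  }
}

// Implementation layer: governs HOW it gets done. Swapping this for an
// SMTP- or queue-based notifier requires no change to the policy layer.
const inMemoryNotifier = {
  sent: [],
  send(message) {
    this.sent.push(message);
  }
};

new OrderWorkflow(inMemoryNotifier).complete({ id: 42 });
console.log(inMemoryNotifier.sent[0]); // Order 42 completed
```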

    A good example of a violation of this principle, which I recall from my NHibernate days, was that once upon a time NHibernate was tightly coupled to log4net.  This was later corrected, but at one time the NHibernate assembly had a hard dependency on log4net.  You could use a different logging library for your own code if you wanted, and you could use binding redirects to use a different version of log4net if you wanted, but at one time if you had a dependency on NHibernate then you had to deploy the log4net library.  I think this went unnoticed by many due to the fact that most developers who used NHibernate also used log4net.

    When I first learned about the principle, I immediately recognized that it seemed to have limited advertised value for most business applications in light of what Udi Dahan labeled The Fallacy Of ReUse.  That is to say, properly understood, the Dependency Inversion Principle has as its primary goal the reuse of components and keeping those components decoupled from dependencies which would keep them from being easily reused with other implementation components, but your application and business logic isn’t something that is likely to ever be reused in a different context.  The takeaway from that is basically that the advertised value of adhering to the Dependency Inversion Principle is really more applicable to libraries like NHibernate, Automapper, etc. and not so much to that workflow your team built for Acme Inc.’s distribution system.  Nevertheless, the Dependency Inversion Principle had practical value for implementing an architecture style Jeffrey Palermo labeled the Onion Architecture. Specifically, in contrast to traditional 3-layered architecture models where UI, Business, and Data Access layers precluded using something like Data Access Logic Components to encapsulate an ORM to map data directly to entities within the Business Layer, inverting the dependencies between the Business Layer and the Data Access Layer provided the ability for the application to interact with the database while also seemingly abstracting away the details of the data access technology used.

    While I always saw the fallacy in strictly trying to apply the Dependency Inversion Principle to invert the implementation details of how I got my data from my application layer so that I’d someday be able to use the application in a completely different context, it seemed the academically astute and in-vogue way of doing Domain-driven Design at the time, it seemed consistent with the GoF’s advice to program to an interface rather than an implementation, and it provided an easier way to write isolation tests than trying to partially stub out ORM types.

    The Catalyst

    For the longest time, I resisted using Entity Framework.  I had become fairly proficient at using NHibernate, the early versions of Entity Framework were years behind in features and maturity, it didn’t support Domain-driven Design well, and there was a fairly steep learning curve with little payoff. A combination of things happened, however, that began to make it harder to ignore. First, a lot of the NHibernate supporters (like many within the Alt.Net crowd) moved on to other platforms like Ruby and Node. Second, despite it lacking many features, .Net developers began flocking to the framework in droves due to its backing and promotion by Microsoft. So, eventually I found it impossible to avoid, which led to me trying to apply the same patterns I’d used before with this newer-to-me framework.

    To be honest, once I adapted my repository implementation to Entity Framework everything mostly just worked, especially for the really simple stuff. Eventually, though, I began to see little ways I had to modify my abstraction to accommodate differences between how Entity Framework did things and how NHibernate did things.  What I discovered was that, while my repositories allowed my application code to be physically decoupled from the ORM, the way I was using the repositories was in small ways semantically coupled to the framework.  I wish I had kept some sort of record every time I ran into something, as the only real thing I can recall now were motivations with certain design approaches to expose the SaveChanges method for Unit of Work implementations. I don’t want to make more of the semantic coupling argument against repositories than it’s worth, but observing little places where my abstractions were leaking, combined with the pebble in my shoe from developers I felt were far better than me saying I shouldn’t use them, led me to begin rethinking things.

    More Effective Testing Strategies

    It was actually a few years before I stopped using repositories that I stopped stubbing them out.  Around 2010, I learned that you can use Test-Driven Development to achieve 100% test coverage for the code for which you’re responsible, but when you plug your code in for the first time with a team that wasn’t designing to the same specification and wasn’t writing any tests at all, things may not work.  It was then that I got turned on to Acceptance Test Driven Development.  What I found was that writing high-level subcutaneous tests (i.e. skipping the UI layer, but otherwise end-to-end) was overall easier, was possible to align with acceptance criteria contained within a user story, provided more assurance that everything worked as a whole, and was easier to get teams on board with.  Later on, I surmised that I really shouldn’t have been writing isolation tests for components which, for the most part, are just specialized facades anyway.  All an isolation test for a facade really says is “did I delegate this operation correctly” and if you’re not careful you can end up just writing a whole bunch of tests that basically just validate whether you correctly configured your mocking library.

    So, by the time I started rethinking my use of repositories, I had long since stopped using them for test isolation.

    Taking the Plunge

    It was actually about a year after I had become convinced that repositories were unnecessary, useless abstractions that I started working with a new codebase I had the opportunity to steer.  Once I eliminated them from the equation, everything got so much simpler.   Having been repository-free for about two years now, I think I’d have a hard time joining a team that had an affinity for them.

    Conclusion

    If you’re still using repositories and you don’t have some other hangup you still need to get over like writing unit tests for your controllers or application services then give the repository-free lifestyle a try.  I bet you’ll love it.
