Monday, December 30, 2013

Simple.Web F# Helpers

If you're using the new Simple.Web F# project template and haven't used Simple.Web, here are some little helpers to get going.

Static Content

Using the current NuGet package for Simple.Web, you have to set up static content like so (this comes with the template at the moment):

open Simple.Web

type StaticContentStartupTask() =
    interface IStartupTask with
        member __.Run(config, env) =
            let pf f = config.PublicFolders.Add(PublicFolder f) |> ignore
            ["/Scripts"; "/Content"; "/App"] |> List.iter pf

Unfortunately, the template currently leaves out one line of code needed to make this work. For now (I'm sorry I didn't add this, please forgive me!), you must add a line to the OwinAppSetup class.

type OwinAppSetup() =
    static member Setup(useMethod:UseAction) =
        let run x y = Application.Run(x, y)
        Application.LegacyStaticContentSupport <- true  // This is the line missing
        run |> useMethod.Invoke

You need to add the "Application.LegacyStaticContentSupport" line to tell Simple.Web (well, Fix, really) to serve the static content. There will be an OWIN-specific way of doing this in a later release.

Public File Mappings

One of the cool things about Simple.Web is we can map a URI to a static HTML file very easily.

open System.Collections.Generic

type PublicFileMappingStartup() =
    interface IStartupTask with
        member __.Run(config, env) =
            let pf (ep, f) = config.PublicFileMappings.Add(KeyValuePair(ep, PublicFile f))
            [("/", "/Views/index.html");
             ("/account/login", "/Views/Account/login.html");
             ("/account/create", "/Views/Account/create.html")] |> List.iter pf

Overall I really like the F# syntax for setting these sorts of things up better than C#'s. I think it maps cleanly and, at least to me, feels a lot lighter. The only portion that I can't seem to make more F#-ish is the StructureMap setup. If anybody has some tips, I'd love to hear them! I'd love something like:

open Simple.Web.StructureMap
type StructureMapStartup() =
    inherit StructureMapStartupBase()
    override __.Configure(config) =
        let register<'a, 'b>() = config.For<'a>().Use<'b>() |> ignore
        register()

Much Thanks

Much thanks must be given to Mark Rendle and his awesome design of Simple.Web. It has made web programming fun, again.

Friday, December 27, 2013

Simple.Web F# Project Template - Part 2

Much thanks to Daniel Mohl for his help in creating this project template. After some cleanup, the issues I ran into were cleared up and I was able to produce a functioning template.

After receiving Daniel's feedback (all in my Twitter feed), this was pretty easy:

  • Install Template Builder 1.0.3.22-beta or higher;
  • Change the Is-TemplateSubFolder value to "Simple.Web";
  • Make the Repository Id in the VSTemplate file match the Product Id in the manifest file;
  • Remove references to Packages.config from the fsproj and vstemplate files;
  • Make all nupkg files have a build action of "Content", and set "Include in VSIX" to true;
  • Set "ReplaceParameters" to "true" in the VSTemplate file for AssemblyInfo.fs.

So, here it is, my first Visual Studio extension, and project template: Simple.Web F#. Please let me know what you think or if you have any issues.

Tuesday, December 24, 2013

Simple.Web F# Project Template - Part 1

After working with the new ASP.NET F# community template with Simple.Web and some chatter on Twitter, I decided to attempt to put together a Simple.Web F# project template. I wanted to write out the steps I took to get started to see if anybody can point out what I might be doing wrong or how I could do it better.

Since the community templates are based on SideWaffle, I started with a video made by Sayed Hashimi on how to create a SideWaffle project:

Based on the video, I created a new project based on the F# Web API project, removed almost all the NuGet packages, and added all of the Simple.Web packages needed for a new site. Next, I forked the GitHub repository, cloned it, and copied the Mvc5 folder into a new Simple.Web folder. I had to delete most of the stuff in the new solution that I copied over and added the extracted template items. (See the video.)

Finally, I cleaned up the project to ensure the right license was included, etc., and wound up at this state. (Update: and like an idiot, I deleted the branch in that link. The changes can still be seen, though, in the repo.) Unfortunately, after building the new solution and installing the VSIX (again, see the video), I can't see the new template in the File->New Project dialog box.

Any ideas? (Pull requests are welcomed!)

Update

With the help of Daniel Mohl, I was able to get this taken care of. Here's part 2.

Friday, December 20, 2013

Pure F# ASP.NET Web Applications and Simple.Web

So, there has been some awesome work done by Daniel Mohl on a pure F# project for ASP.NET. Unfortunately, I have been spoiled by Simple.Web and wondered if I could make the project work with it. It was not as hard as I thought it would be, at all. Fortunately, Mark Rendle and company have given us the Simple.Web.AspNet NuGet package which makes this trivial.

The biggest hurdle of it all is that the NuGet package adds a C# file called OwinAppSetup.cs which sets up the OWIN bindings, on top of which Simple.Web runs. It is a really simple file, actually:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Simple.Web;

namespace FsWebApp
{
    using UseAction = Action<Func<IDictionary<string, object>, Func<IDictionary<string, object>, Task>, Task>>;

    public class OwinAppSetup
    {
        public static void Setup(UseAction use)
        {
            use(Application.Run);
        }
    }
}

Converting this to F# was fairly straightforward; it is easily replaced with:

module OwinAppSetup

open System
open System.Collections.Generic
open System.Threading.Tasks
open Simple.Web

type UseAction = Action<Func<IDictionary<string, obj>, Func<IDictionary<string, obj>, Task>, Task>>

type OwinAppSetup() =
    static member Setup(useMethod:UseAction) =
        let run x y = Application.Run(x, y)
        run |> useMethod.Invoke

It is pretty much like the original version except that F#'s type system doesn't let us pass the run function into the UseAction directly and call it like a method. Instead, you have to call the Invoke method explicitly. (Easy stuff.) Other than deleting a few of the folders from the project template, like the Models folder, everything runs perfectly. I also removed a few of the MVC/Web API specific assemblies from the project, but that was all that was needed to get going.

Update

There is now some semblance of a Simple.Web F# project template available.

Tuesday, November 5, 2013

Tricky LINQ ToDictionary Where Action is the Value

This is going to be a quick one, but something I wanted to jot down before I forgot it. I needed a Dictionary<MyEnum, Action> and figured I would use LINQ's awesome ToDictionary<,>() method. I've used this plenty of times before and so I figured this would be a snap:

    private readonly Dictionary<MyEnum, Action> _actions;

    public Keyboard()
    {
        _actions = Enum.GetValues(typeof(MyEnum))
                       .Cast<MyEnum>()
                       .ToDictionary(x => x, x => () => { });
    }

Feeling good about my awesome LINQ statement, I went to move on when I noticed that my precious had red squigglies under it. Ah, probably just a typo or something. I mean, I am susceptible to those sometimes, I suppose. Upon further inspection, it was not a typo but a fun little error:

Cannot convert lambda expression to type 'System.Collections.Generic.IEqualityComparer<Foo.Bar.MyEnum>' because it is not a delegate type

Even more fun, this method was insisting I was trying to create a Dictionary<MyEnum, MyEnum>. No I'm not! What the heck!

Just Breathe...

Ah yes, it is easy to forget that there are four overloads of ToDictionary. Two of them are ToDictionary&lt;TSource, TKey&gt; (one of which takes an IEqualityComparer&lt;TKey&gt;), and the other two are ToDictionary&lt;TSource, TKey, TElement&gt;. My implementation above confused the compiler: since it could not infer TElement from a lambda with no natural type, it fell back to the overload that takes a comparer and concluded I was trying to pass an Action in as an IEqualityComparer. So, we need to force the compiler to pick the right overload by specifying all the Ts:

        _actions = Enum.GetValues(typeof(MyEnum))
                       .Cast<MyEnum>()
                       .ToDictionary<MyEnum, MyEnum, Action>(x => x, x => () => { });

Simple really.
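An alternative that avoids spelling out all three type arguments is to cast the element selector's result, which gives the compiler enough to infer TElement on its own. A self-contained sketch (MyEnum here is a stand-in for the real enum):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

enum MyEnum { A, B, C }

class Program
{
    static void Main()
    {
        // Casting the lambda's body to Action gives the element selector a
        // natural type, so the ToDictionary<TSource, TKey, TElement>
        // overload is chosen without explicit type arguments.
        Dictionary<MyEnum, Action> actions =
            Enum.GetValues(typeof(MyEnum))
                .Cast<MyEnum>()
                .ToDictionary(x => x, x => (Action)(() => { }));

        Console.WriteLine(actions.Count); // one entry per enum value
    }
}
```

Either spelling works; the cast just trades explicit generics for an explicit delegate type.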

Thursday, October 31, 2013

I Hate Null References. I Hate Them!

I have been thinking hard about how to introduce my hatred for "null" references without it sounding like mere complaining. We'll see how this goes.

Null references, for the (lucky) unacquainted, occur when one dereferences a pointer that does not point to anything. Instead of swallowing the "error" or providing some default behavior, we get an exception. When not caught and handled, this will cause big problems. The fun part (for various definitions of fun) is that these exceptions are a pain to hunt down. So, most of the time, we add guards at the beginning of code blocks to check for null references. Thus, our code is sprinkled with things like:

void Foo(Bar bar, Baz baz)
{
    if(bar == null) throw new ArgumentNullException("bar");
    if(baz == null) throw new ArgumentNullException("baz");
    // if we get here, we're ok.
}

which doesn't remove the exception. Ugh. I really wish C# would implement non-nullables or something like C++'s references. There has been some talk about how to introduce this into C# but, as of today, we do not have this capability. I'm really looking forward to any sort of implementation!
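Until the language grows non-nullable references, the best we can do is centralize the checks. A small, hypothetical helper (the Guard name and the surrounding types are my own, not from any library) keeps the guard clauses to one line each:

```csharp
using System;

static class Guard
{
    // Throws if value is null; otherwise returns it, so the guard can be
    // used inline in assignments and constructor bodies.
    public static T NotNull<T>(T value, string name) where T : class
    {
        if (value == null) throw new ArgumentNullException(name);
        return value;
    }
}

class Bar { }
class Baz { }

class Consumer
{
    private readonly Bar _bar;
    private readonly Baz _baz;

    public Consumer(Bar bar, Baz baz)
    {
        _bar = Guard.NotNull(bar, "bar");
        _baz = Guard.NotNull(baz, "baz");
    }
}

class Program
{
    static void Main()
    {
        var ok = new Consumer(new Bar(), new Baz());
        Console.WriteLine("constructed: " + (ok != null));

        try
        {
            var bad = new Consumer(null, new Baz());
        }
        catch (ArgumentNullException ex)
        {
            Console.WriteLine("caught: " + ex.ParamName);
        }
    }
}
```

This doesn't eliminate the exception, of course; it just makes the repetition cheaper.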

Doing some learning

After digging around some (and being heavily inspired by Eric Lippert's posts on monads), I have been using the Maybe "pattern" and decided to share my implementation. As an aside, there are 13 posts in that series, but they are all great reads and I highly recommend checking them out. I am, in no way, an expert on monads, but felt like sharing what I have learned up to this point. Before we move on, here is the code.

public struct Maybe<T>
{
    public static readonly Maybe<T> Nothing = new Maybe<T>();

    public readonly bool HasValue;
    public readonly T Value;

    public override string ToString()
    {
        return ToString(string.Empty);
    }

    public string ToString(string nothing)
    {
        return HasValue ? Value.ToString() : nothing;
    }

    public static implicit operator Maybe<T>(T value)
    {
        return new Maybe<T>(value);
    }

    public Maybe(T value)
    {
        Value = value;
        HasValue = !ReferenceEquals(null, value);
    }
}

There is nothing really fancy going on here, but using a struct for the wrapper ensures that we won't have to worry about null references. I have allowed for the implicit conversion from "T" to Maybe<T> so we can keep the "Maybe"s contained.

public class Foo
{
    public static void Bar(Maybe<string> baz)
    {
        // Use baz
    }
}

public void Main()
{
    var myString = "Some string, not a maybe.";
    Foo.Bar(myString); // No explicit cast required.
}
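To see the implicit conversion in action, here's a trimmed, self-contained sketch of the struct with a quick check: a null reference converts to a "nothing" maybe, and a real value converts to a "something".

```csharp
using System;

struct Maybe<T>
{
    public static readonly Maybe<T> Nothing = new Maybe<T>();

    public readonly bool HasValue;
    public readonly T Value;

    public Maybe(T value)
    {
        Value = value;
        // Negated: HasValue is true only when value is NOT null.
        HasValue = !ReferenceEquals(null, value);
    }

    public static implicit operator Maybe<T>(T value)
    {
        return new Maybe<T>(value);
    }
}

class Program
{
    static void Main()
    {
        Maybe<string> something = "hello";      // implicit conversion
        Maybe<string> nothing = (string)null;   // also implicit, yields Nothing

        Console.WriteLine(something.HasValue); // True
        Console.WriteLine(nothing.HasValue);   // False
    }
}
```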

Let's make it better

This helps with arguments, but we can take this a step further and use some LINQ trickery, with short-circuiting logic, to (appear to) safely dereference the maybe and any properties it may have. Let's say we have a (poorly) written object graph that looks like this:

public class X { }
public class Y { public X X { get; set; } }
public class Z { public Y Y { get; set; } }

and we need to use the properties in a method. Without our special maybe, we might have something like this:

public void Foo(Z z)
{
    if(z == null) throw new ArgumentNullException("z");

    if(z.Y != null)
    {
        if(z.Y.X != null)
        {
            var x = z.Y.X; // finally...
        }
    }
}

What if the idea of nothing (null, in this case) is ok? We're throwing an exception but might not mean it. And then we have the nested if-statements to finally get to the value we're after. For fun, we can use some extension methods to get some LINQ goodness and remove the explicit null checks. (This portion was greatly inspired by this post by Mike Hadlow.)

public static class MaybeExtensions
{
    public static Maybe<T> ToMaybe<T>(this T value)
    {
        return value;
    }

    public static Maybe<R> Select<T, R>(this T value, Func<T, Maybe<R>> func)
    {
        Maybe<T> maybe = value;
        return maybe.HasValue ? func(value) : Maybe<R>.Nothing;
    }

    public static Maybe<R> Select<T, R>(this T value, Func<T, R> func)
    {
        Maybe<T> maybe = value;
        return maybe.HasValue ? func(value) : Maybe<R>.Nothing;
    }

    public static Maybe<R> Select<T, R>(this Maybe<T> value, Func<T, R> func)
    {
        return value.HasValue ? func(value.Value) : Maybe<R>.Nothing;
    }

    public static Maybe<V> SelectMany<T, U, V>(this Maybe<T> value, Func<T, Maybe<U>> k, Func<T, U, V> s)
    {
        // This could be cleaned up with a "Bind" method, but I wanted to leave it off for now
        if(value.HasValue == false) return Maybe<V>.Nothing;
        var kMaybe = k(value.Value);
        return kMaybe.HasValue ?
            s(value.Value, kMaybe.Value) :
            Maybe<V>.Nothing;
    }
}

We can now chain the maybes, using LINQ, without fear of a null reference exception.

class X { }
class Y { public X X { get; set; } }
class Z { public Y Y { get; set; } }

void Foo(Z z)
{
    var x = from zz in z.ToMaybe()
            from y in zz.Y.ToMaybe()
            from xx in y.X.ToMaybe()
            select xx;

    // Or

    x = z.ToMaybe()
         .Select(m => m.Y)
         .Select(y => y.X); // Same thing as above
}

No matter whether z, z.Y, or z.Y.X is null, we will get a Maybe<X> out of this. Other than having to invoke the ToMaybe() extension method, I like this code much better than a bunch of null checks. Maybe that's just me. This means we can either keep the Maybe<T> internal and use the ToMaybe() extension method, or we can take in a Maybe<T> directly. As long as this doesn't turn into a check on HasValue that requires us to throw an exception, it seems ok. Of course, there are times when a null reference just is not an acceptable "pre-condition" and an exception is (because our hands are tied) acceptable. What are your thoughts?

Wednesday, October 23, 2013

Zip with LINQ

I really love LINQ. In fact, LINQ, mixed with extension methods, is really enjoyable and makes it hard to leave C#. (Among other things.) At any rate, one of my favorite LINQ functions is Zip and I got to use it the last few days to solve a problem that would have been annoying otherwise. The basic idea of Zip is to take two collections (IEnumerable) and combine their elements with a selection function. Zip's signature is:

public static IEnumerable<TResult> Zip<T, U, TResult>(
    this IEnumerable<T> first,
    IEnumerable<U> second,
    Func<T, U, TResult> selector)

Sometimes extension methods can be hard to read. Basically, this method is invoked on an IEnumerable of some type T. The first parameter is the other IEnumerable, of some type U, that you want to "zip" together with the first. The final parameter, Func<T, U, TResult>, says it takes a method with two parameters (of type T and U, respectively) that returns something of type TResult. Since T and U are the element types of the two IEnumerables we're zipping together, one can think of it as a method that takes one element from each of the collections and returns something else. Anything, really.

As a web developer, I have created myriad collections of items that have an associated count with each item. I don't know how many times I wrote something like (using Razor syntax here):

@{ var counter = 1; }
@foreach(var elem in Elements)
{
    <div>@elem.SomeProp - @(counter++)</div>
}

Too easy

Sure, this is a pretty trivial example, but my views have sometimes become jam-packed with this sort of "logic." (You know, cause this is "view" stuff, right?) But, had I given it some thought, I could have created a better view model that had this information in it. (And, in turn, kept my view a bit cleaner.)

One of the things to note about Zip is that it will only join items up to the length of the shortest collection and nothing more. We can use this to our advantage in situations like our previous pretend view. Let's say we want to take a collection of strings and list them out in a console app. We could do something like:

public void Main()
{
    var names = new[] { "Tom", "Dick", "Jane" };

    for (int i = 0; i < names.Length; ++i)
    {
        Console.WriteLine("{0}-{1}", i + 1, names[i]);
    }

    // or

    int counter = 1;
    foreach(var name in names)
    {
        Console.WriteLine("{0}-{1}", counter++, name);
    }
}

Again, this is trivial so it doesn't seem too painful. But let's look at how we can do it with Zip:

public void Main()
{
    var names = new[] { "Tom", "Dick", "Jane" };
    var counts = new[] { 1, 2, 3 };
    var strings = names.Zip(counts, (x, y) => string.Format("{0}-{1}", x, y));

    foreach(var val in strings)
    {
        Console.WriteLine(val);
    }
}
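Before going further, it's worth seeing the shortest-collection behavior directly. A small sketch with deliberately mismatched lengths:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var names = new[] { "Tom", "Dick", "Jane" };
        var counts = new[] { 1, 2 }; // deliberately shorter

        // Zip only pairs elements while BOTH sequences have items, so
        // "Jane" is simply dropped -- no exception, no padding.
        var zipped = names.Zip(counts, (n, c) => c + "-" + n).ToArray();

        foreach (var s in zipped)
            Console.WriteLine(s); // 1-Tom, 2-Dick
    }
}
```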

A little more

So, the same results as above with a little bit more typing (in this trivial case). But, since we know that Zip will only combine items up to the length of the smaller collection, we can do some interesting things. Since counting items has been a theme for this post, I'll stick with it. Let's say we want to take a collection of strings of variable length and get the same effect as the previous two examples. Well, we could use a for-loop in every spot, or we can take a functional approach and create a generic method that will let us accomplish this. The Enumerable static class has a helper method called Range. It returns an IEnumerable<int> that yields a given count of sequential integers starting from whatever value you pass in. Since the sequence is lazily evaluated, it is not loaded into memory all at once, so we can do something like this:

public static IEnumerable<TResult> ZipWithCounter<T, TResult>(
    this IEnumerable<T> first,
    Func<T, int, TResult> selector)
{
    return first.Zip(Enumerable.Range(1, int.MaxValue), selector);
}

// And its use:

var names = new[] { "Tom", "Dick", "Jane" };
var zipped = names.ZipWithCounter((name, i) => string.Format("{0} {1}", i, name));

(This is of course simplified and skimps on error checking for brevity.)

There are a lot of use cases for Zip that clean up how we join two collections together into some arbitrary third collection. It definitely follows more of a functional style of programming than using a looped counter and can make code much easier to read by centralizing repeated logic. Zip is one of those underutilized tools in the LINQ tool belt.

Tuesday, October 15, 2013

EStroids - An Entity System Game

After having stumbled upon entity systems, I have been a little bit infatuated with the design concept. I decided to spike out a little game for fun and brush off my math skills; thus EStroids was born. It is just a quick and dirty little game, doesn't have all the defensive programming checks that would normally go into an app or game, but was pretty fun to write. (It also has things that will make a mathematician cry, such as multiplying a 4×4 matrix and a vector3.)

I have the little project hosted on GitHub in case I decide to poke around it some more. Overall, it was just fun and nice to see how easy it is to put little games together. I have absolutely no real-world experience that tells me this sort of approach is scalable, but I do see merits to it.

And without further ado, the link: https://github.com/jjvdangelo/EStroids

Systems

Each of the systems inherits from SystemBase, which just takes in a manager. (Like I said, just quick and dirty.)

public abstract class SystemBase
{
    private readonly EntityManager _manager;
    protected EntityManager Manager { get { return _manager; } }

    public virtual void Initialize() { }
    public virtual void Shutdown() { }
    public abstract void Frame(double dt);

    protected SystemBase(EntityManager manager)
    {
        _manager = manager;
    }
}

Here's the "lifetime" system that manages objects that have a timed lifespan:

public class LifetimeSystem : SystemBase
{
    public override void Frame(double dt)
    {
        var toDestroy = new List<Entity>();
        foreach (var entity in Manager.GetEntitiesWithComponent<Expiration>())
        {
            var expiration = entity.GetComponent<Expiration>();
            expiration.TimeAlive += dt;

            if (expiration.TimeAlive < expiration.MaxLifetime) continue;

            toDestroy.Add(entity);
        }

        toDestroy.ForEach(x => Manager.DestroyEntity(x));
    }

    public LifetimeSystem(EntityManager manager)
        : base(manager) { }
}

So much cleaner than trying to shove this into some base class and figure out where this behavior should live within an object oriented hierarchy.

Components

And this is a simple component (the Expiration component) used by the LifetimeSystem:

public class Expiration
{
    public double TimeAlive { get; set; }
    public double MaxLifetime { get; set; }
}
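The lifetime bookkeeping can be exercised in isolation, too. A minimal sketch (the EntityManager plumbing is elided; the list below stands in for entities carrying an Expiration component):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Expiration
{
    public double TimeAlive { get; set; }
    public double MaxLifetime { get; set; }
}

class Program
{
    static void Main()
    {
        // Three "entities", reduced to just their Expiration components.
        var components = new List<Expiration>
        {
            new Expiration { MaxLifetime = 1.0 },
            new Expiration { MaxLifetime = 5.0 },
            new Expiration { MaxLifetime = 0.5 },
        };

        // One frame, 2 seconds later: the same bookkeeping LifetimeSystem
        // does per Frame(dt) call.
        const double dt = 2.0;
        foreach (var e in components) e.TimeAlive += dt;

        var expired = components.Count(e => e.TimeAlive >= e.MaxLifetime);
        Console.WriteLine(expired); // 2
    }
}
```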

Basically, we work to separate the data away from the behavior and away from the entities themselves. This is the complete opposite of the type of work I've done for so many years, trying to efficiently pull all of these things together. In traditional OOP, having "Manager" objects is almost always a code smell that quickly leads to an anemic object model; in an entity system, that separation is the point.

Make Sense?

For anybody who may have experience with entity systems, does this make sense? Am I interpreting the intention of entity systems correctly? I feel like I'm picking up the idea, but it's always good to have a sanity check here and there!

Monday, September 30, 2013

Exploring Entity Systems

I am always interested in exploring ways of writing code. Lately, I have been trying to break the monotony of writing "business apps" and decided to take a look at writing a few games. It has been a long time since I have written a non-web app so I started up by doing a little research. Since I am a fan of CQRS/DDD/event sourcing/etc, I wanted to learn more about writing systems that do not require giant object graphs and "god objects." My previous exposure to game systems (a long time ago) seemed to run on magic, duct tape, and a lot of luck. But, maybe that was just programming in general.

In my research, what I have come across has been pretty interesting. I ran across a way of developing using "Entity Systems." It is like an OOP-less way of writing applications with the added benefit of built-in Aspect Oriented Programming (AOP). There are plenty of blog posts covering introductory information, so I won't rehash it here. (I may leave that to another blog post later.)

One of my favorite ways of exploring new ideas is to write up a quick spike. In that fashion, I have written a simple little test app that throws around some colored dots on the screen. It's not perfect, doesn't do everything I would want in a real system, but was fun to write. Entity systems will definitely remain on my radar for a while. Well, without further ado, here is my quick little app (in C#).

Links

For a nice overview of entity systems, check out the following links:

(While these are "old," they still contain some really good information.)

Monday, September 23, 2013

Placing Objects Where They Belong

Incorrect object graphs

It is very convenient, under the guise of Object Oriented Programming (OOP), to build out some pretty extensive object graphs because we try to mimic "the real world." I find this idea to be faulty, at best, because we usually find we are not modeling the real world but, instead, pushing a mixture of relational database and OOP design into a giant ball of mud. To give an example, think about any business application where we want a customer service rep to be able to leave a note on a customer's account. With the advent of Object Relational Mappers (ORMs), I see a tendency to construct giant object graphs. Take, for example, the following:

public class Note
{
    public int Id { get; set; }
    public int CsrId { get; set; }
    public string Title { get; set; }
    public string Message { get; set; }
    public DateTime Created { get; set; }
}

public class Customer
{
    public List<Note> Notes { get; set; }

    public Customer()
    {
        Notes = new List<Note>();
    }
}

So, on the customer's page we can load up their notes and display them nicely to make informed business decisions. But, is this correct? Do customers really have notes? We're modeling a "Has-A" relationship here and, while it is easier for small-ish apps, this can cause a lot of headache. Inevitably, when we show that we can leave a note behind, somebody in the business is going to want to leave behind an automated note. Now, for our contrived example, this isn't too bad, but we will find, suddenly, that almost every part of our app must consume an IRepository<Customer> just to get access to its notes. Ew.

A better way

Let's look at this problem from a different point of view. How would we solve this issue without a computer? Would a CSR have to call up a customer and ask them for all the notes we are currently keeping on them? Would we do some work, call up the customer, and then ask them to keep a note for us? If not, why in the world do we treat our objects the same way? It would be a painful business process to have to involve the customer in everything we do for them just so they can keep track of our notes. Instead, we would put the notes where they belong. The customer doesn't own the notes we hold on them; something else does. (In other, nerdy, words, think of it as a bard in your system.)

Projections

I am a great fan of modeling my applications using a CQRS approach. It practically saves us from the headache above, if done correctly. For the initial CSR note-taking example, we can put a command in the system that the CSR object handles. It is, after all, the CSR that cares about the notes, not the customer. But, why are we leaving a note about the customer? Is there some interesting business information that we are hiding because we have a generic note-taking command? If so, then we have effectively diluted our system with a generic CRUD-like command and we lose a lot of information.

Continuing down the CQRS approach, what if, instead of having a property on either the CSR or the customer class, we just used a projection? This projection could be a collection of all the notes and give us a view of interesting information, all contained in a nice, clean, place. By listening to events, we can create automated notes, informing the CSR of information that might otherwise be hidden or lost.

public class CustomerNotesProjection : IRespondTo<CustomerJoined>,
                                       IRespondTo<CustomerChangedServices>,
                                       IRespondTo<ManuallyEnteredCsrNoteCreated>
{
    public void RespondTo(CustomerJoined e)
    {
        // Load up the view and add a note...
    }

    public void RespondTo(CustomerChangedServices e)
    {
        // Same as above...
    }

    public void RespondTo(ManuallyEnteredCsrNoteCreated e)
    {
        // I sense a theme here...
    }
}

Now we have a single place to which we can turn when we need to capture some event and add a new note. The best part about using this sort of system, especially if you're using Event Sourcing, is that you can capture a lot more interesting information almost for free. On a current project, we have an event store set up that will automatically rebuild our views if it detects a change. All of the events get replayed, and thus we can come back later and ask the system questions we hadn't thought about in the first place, and the information is all there.

public class CustomerNotesProjection : IRespondTo<CustomerJoined>,
                                       IRespondTo<CustomerChangedServices>,
                                       IRespondTo<ManuallyEnteredCsrNoteCreated>,
                                       IRespondTo<CustomerSubscriptionCancelled>
{
    // ...

    public void RespondTo(CustomerSubscriptionCancelled e)
    {
        // yada, yada, yada...
    }
}

Easier to test

One of the problems with the extremely large object graph approach is we now have an excuse not to test our code. It is painful. Really painful.

When we want to test a piece of our code that might leave behind an automated note, we now have to access the customer object directly. Given our analysis that this is probably not the best place for notes to live, we can't even argue, in a domain driven design approach, that the customer is an aggregate. When we ignore this, though, we are now passing around a customer repository (because we can mock it, right?) and then have to build up the mock to make it all work. This adds so much more complexity to our testing that we decide it's just not worth it. It's so much easier to test simpler code, so we bulk up on those tests to make our test coverage look good, overall.

Instead, we should listen to that pain and re-think our approach. (Even if not on this project, certainly the next, right?) Using a projection approach, we can easily test the projection. Given this event, I expect this note to be added to the view. Simple. And, all of the note testing is in one spot! Need a new automated note to be left behind? We have one test file to open with all of the examples of how this was done in the past. No more searching around the code base to find out how it has been done before. An added benefit is that it makes refactoring, and even things like code reviews, much easier.
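As a sketch of what such a test might look like (the event, view, and projection shapes here are hypothetical stand-ins mirroring the snippets above, not types from any real framework):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical event and view types for illustration only.
class CustomerJoined
{
    public string CustomerId;
    public DateTime When;
}

class NotesView
{
    public readonly List<string> Notes = new List<string>();
}

class CustomerNotesProjection
{
    private readonly NotesView _view;
    public CustomerNotesProjection(NotesView view) { _view = view; }

    public void RespondTo(CustomerJoined e)
    {
        // Load up the view and add a note.
        _view.Notes.Add("Customer " + e.CustomerId + " joined on "
                        + e.When.ToShortDateString());
    }
}

class Program
{
    static void Main()
    {
        // Given this event, I expect this note to be added to the view.
        var view = new NotesView();
        var projection = new CustomerNotesProjection(view);

        projection.RespondTo(new CustomerJoined
        {
            CustomerId = "42",
            When = new DateTime(2013, 9, 23)
        });

        if (view.Notes.Count != 1) throw new Exception("expected exactly one note");
        Console.WriteLine(view.Notes[0]);
    }
}
```

No repositories, no mocks: one event in, one assertion on the view out.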

Conclusion

While OOP and object graphs are a great tool, it would seem that this is definitely a situation in which we have had a hammer pounding in screws. It takes a little bit more work to find areas in your code where something like this can be cleaned up, but it is well worth it. Having a single place that handles one thing, really, really, well will pay off handsomely. By actually adhering to the Single Responsibility Principle (SRP)—instead of just giving it lip-service—we can actually simplify a task that otherwise can be a painful experience.

The example given here might seem trivial, but it is something that can quickly tank a project. A bunch of little situations like this one can cause a maintenance nightmare or cripple the ability to add new features. Not every project requires the extra little bit of up-front complexity that is required by a CQRS or event sourcing approach, obviously. But it is often good to think and reason about our code before solidifying into a painful object graph in the name of "Object Oriented Programming."

Thursday, August 15, 2013

Stable Hash Codes and Abstract Base Classes

I like to use abstract base classes for my value objects. One of my most used is for identities:

public abstract class Identity<TId>
    where TId : Identity<TId>
{
    private readonly string _prefix;
    public abstract string Value { get; protected set; }

    /* ... */

    protected Identity(string value, string prefix)
        : this(prefix)
    {
        Value = value;
    }

    // So we can deserialize and have access to the prefix
    protected Identity(string prefix)
    {
        _prefix = prefix; 
    }
}

Since the identities are value objects, we need to make sure that any two identities of the same type with the same values are considered equal.

public abstract class Identity<TId> : IEquatable<TId>
    where TId : Identity<TId>
{
    private readonly string _prefix;
    public abstract string Value { get; protected set; }

    public bool Equals(TId other)
    {
        if (ReferenceEquals(null, other)) return false;
        return ReferenceEquals(this, other) || Equals(Value, other.Value);
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        return ReferenceEquals(this, obj) || Equals(obj as TId);
    }

    public override int GetHashCode()
    {
        unchecked
        {
            var hash = 17;
            hash += hash * 23 + Value.GetHashCode();
            hash += hash * 23 + GetType().GetHashCode();
            return hash;
        }
    }

    public override string ToString() { return string.Format("{0}/{1}", _prefix, Value); }

    protected Identity(string value, string prefix)
        : this(prefix)
    {
        Value = value;
    }

    // So we can deserialize and have access to the prefix
    protected Identity(string prefix)
    {
        _prefix = prefix; 
    }
}

Almost...

This works for most situations but, recently, I found out that there's a little bit of a bug with the implementation of GetHashCode(). The overall algorithm is pretty sound for most cases, minus one little piece: GetType().GetHashCode(). The way I found out this doesn't work is by having to persist the hash between uses of an app. Bringing the app down and back up caused GetType().GetHashCode() to return a different value. Whoops!

(The reason for GetType().GetHashCode() was so that two identities of different types with the same Value would return different hashes.)

I switched it up, for now, to use GetType().Name. That fixed the issue for me.

public abstract class Identity<TId> : IEquatable<TId>
    where TId : Identity<TId>
{
    /* ... */
    public override int GetHashCode()
    {
        unchecked
        {
            var hash = 17;
            hash += hash * 23 + Value.GetHashCode();
            hash += hash * 23 + GetType().Name.GetHashCode();
            return hash;
        }
    }
    /* ... */
}

And, to wrap up the use of this class, here's a sample of how to use it (fully serializable, too):

[DataContract(Namespace = "my-namespace")]
public class ProductId : Identity<ProductId>
{
    [DataMember(Order = 1)]
    public override string Value { get; protected set; }

    public ProductId(long id)
        : base(id.ToString(CultureInfo.InvariantCulture), "product") { }

    private ProductId()
        : base("product") { }
}
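To see the value semantics in action, here is a standalone sketch using a hypothetical OrderId (and a trimmed-down copy of the Identity<TId> base, just enough to compile): two instances with the same Value compare equal and hash identically within a run.

```csharp
using System;

// Trimmed-down version of the Identity<TId> base above,
// just enough to demonstrate value equality and hashing.
public abstract class Identity<TId> : IEquatable<TId>
    where TId : Identity<TId>
{
    private readonly string _prefix;
    public abstract string Value { get; protected set; }

    public bool Equals(TId other)
    {
        if (ReferenceEquals(null, other)) return false;
        return ReferenceEquals(this, other) || Equals(Value, other.Value);
    }

    public override bool Equals(object obj) { return Equals(obj as TId); }

    public override int GetHashCode()
    {
        unchecked
        {
            var hash = 17;
            hash += hash * 23 + Value.GetHashCode();
            hash += hash * 23 + GetType().Name.GetHashCode();
            return hash;
        }
    }

    public override string ToString() { return string.Format("{0}/{1}", _prefix, Value); }

    protected Identity(string value, string prefix) : this(prefix) { Value = value; }
    protected Identity(string prefix) { _prefix = prefix; }
}

// Hypothetical concrete identity, analogous to ProductId above.
public class OrderId : Identity<OrderId>
{
    public override string Value { get; protected set; }
    public OrderId(long id) : base(id.ToString(), "order") { }
}

public static class Program
{
    public static void Main()
    {
        var a = new OrderId(42);
        var b = new OrderId(42);
        Console.WriteLine(a.Equals(b));                        // True: same type, same Value
        Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // True: hashes agree
        Console.WriteLine(a);                                  // order/42
    }
}
```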

Thursday, August 8, 2013

Registering Open Generics to a Factory Method with StructureMap Using Expression Trees

I have been working with a system lately that requires a factory method to create a lot of open generics. To keep it simple, at the beginning, I registered each type of instance with something like the following:

var singletonFactory = new OpenGenericFactory();

StructureMap.ObjectFactory.Configure(cfg =>
{
    cfg.For<IOpenGeneric<int, int>>().Use(singletonFactory.GetGeneric<int, int>());
    cfg.For<IOpenGeneric<int, long>>().Use(singletonFactory.GetGeneric<int, long>());
    cfg.For<IOpenGeneric<long, long>>().Use(singletonFactory.GetGeneric<long, long>());
    cfg.For<IOpenGeneric<long, int>>().Use(singletonFactory.GetGeneric<long, int>());
});

The actual classes don't matter much, other than that it is an open generic with two generic arguments and there would be a lot of registering. Not fun at all, especially because it is easy to forget to go register them and it's a really boring manual task. A fairly simple solution is to bind the open generic like so:

var singletonFactory = new OpenGenericFactory();
var getGenericMethod = singletonFactory.GetType().GetMethod("GetGeneric");

StructureMap.ObjectFactory.Configure(cfg =>
{
    cfg.For(typeof(IOpenGeneric<,>)).Use(x =>
    {
        var requestedType = x.BuildStack.Current.RequestedType;
        var genericArgs = requestedType.GetGenericArguments();
        var genericMethod = getGenericMethod.MakeGenericMethod(genericArgs);
        return genericMethod.Invoke(singletonFactory, null);
    });
});

This turns out to be orders of magnitude slower than registering each type by hand. One could go ahead and cache the "genericMethod" in a dictionary, but it is still pretty slow. Instead, we can create a Func out of the method and cache that. This one-time compilation of an expression tree (cached, of course) makes everything much faster.

using System.Collections.Concurrent; // For ConcurrentDictionary
using System.Linq.Expressions; // This is where the Expression tools live

var singletonFactory = new OpenGenericFactory();
var factoryType = singletonFactory.GetType();
var getGenericMethod = factoryType.GetMethod("GetGeneric");
var cache = new ConcurrentDictionary<Type, Func<OpenGenericFactory, object>>();

StructureMap.ObjectFactory.Configure(cfg =>
{
    cfg.For(typeof(IOpenGeneric<,>)).Use(x =>
    {
        var requestedType = x.BuildStack.Current.RequestedType;
        var func = cache.GetOrAdd(requestedType, type =>
        {
            var genericArgs = type.GetGenericArguments();
            var genericMethod = getGenericMethod.MakeGenericMethod(genericArgs);
            var input = Expression.Parameter(factoryType, "input");
            return Expression.Lambda<Func<OpenGenericFactory, object>>(Expression.Call(input, genericMethod), input).Compile();
        });
        return func(singletonFactory);
    });
});

So, after a one-time reflection payment, we can use the method as if we were calling it directly. Fun!
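Outside of StructureMap, the same caching trick can be demonstrated in isolation. This is a sketch with a made-up NumberFactory standing in for the OpenGenericFactory above: close the generic method over the requested type, compile a delegate for it once, and reuse it from a cache.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;

// Hypothetical factory with a generic method, standing in for the
// OpenGenericFactory in the post.
public class NumberFactory
{
    public string Describe<T>() { return "factory of " + typeof(T).Name; }
}

public static class Program
{
    // Cache: requested type argument -> compiled delegate.
    private static readonly ConcurrentDictionary<Type, Func<NumberFactory, object>> Cache =
        new ConcurrentDictionary<Type, Func<NumberFactory, object>>();

    public static object Describe(NumberFactory factory, Type arg)
    {
        var func = Cache.GetOrAdd(arg, t =>
        {
            // Close the generic method over the requested type argument...
            var method = typeof(NumberFactory).GetMethod("Describe").MakeGenericMethod(t);
            // ...and compile "input => input.Describe<T>()" exactly once.
            var input = Expression.Parameter(typeof(NumberFactory), "input");
            return Expression.Lambda<Func<NumberFactory, object>>(
                Expression.Call(input, method), input).Compile();
        });
        return func(factory);
    }

    public static void Main()
    {
        var factory = new NumberFactory();
        Console.WriteLine(Describe(factory, typeof(int)));  // factory of Int32
        Console.WriteLine(Describe(factory, typeof(long))); // factory of Int64
    }
}
```

The first call for each type pays the reflection and compilation cost; every call after that is just a dictionary lookup and a delegate invocation.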

Tuesday, July 30, 2013

Using protobuf-net with Inheritance

I have been doing some work with Google's Protocol Buffers using Marc Gravell's protobuf-net. It is a very easy-to-use package and has proven to be lightning fast at both serializing and deserializing my objects. As much as I have enjoyed it, this post is not a sales pitch about the goodness of protocol buffers but is, instead, to show an issue I ran across and how I got around it.

I was having a problem with inheritance and deserializing my objects. Basically, I have a value object generic that I use for identities and they were not deserializing properly.

[ProtoContract]
public abstract class Identity<TId> : IIdentity
    where TId : Identity<TId>
{
    [ProtoMember(1)]
    public string Value { get; private set; }

    protected Identity(string value)
    {
        Value = value;
    }
}

When the identity object would be deserialized, the Value property was always null. So, after some reading, I found out why. It was because I had to let protobuf-net know about my inheritance chain. One way of doing it is with attributes. (Specifically, the ProtoIncludeAttribute.)

[ProtoContract]
public class FooId : Identity<FooId>
{
    private FooId() { /* Just for serialization */ }
    public FooId(string value)
        : base(value) { }
}

[ProtoContract]
[ProtoInclude(2, typeof(FooId))]
public abstract class Identity<TId> : IIdentity
    where TId : Identity<TId>
{
    [ProtoMember(1)]
    public string Value { get; private set; }

    protected Identity(string value)
    {
        Value = value;
    }
    protected Identity() { /* Just for serialization */ }
}

For one, it threw me off that the inheritance attribute goes on the base class. DataContractSerializer and XmlSerializer come with the same issue with attributes and inheritance, so I should have seen it coming. For one-off inheritance chains, this really isn't too bad of a trade-off, especially for what you get. For me, though, I would have had to append an uncountable number of ProtoIncludeAttributes to my Identity class. It was not going to work. Luckily, there is a little trick to make the serialization work the way I wanted it to. Since this is an abstract class, we can make the exposed property abstract and move the ProtoMember attribute to the child class. (Be sure to note the change from private to protected on the Value property.)

[ProtoContract]
public class FooId : Identity<FooId>
{
    [ProtoMember(1)]
    public override string Value { get; protected set; }

    private FooId() { /* Just for serialization */ }
    public FooId(string value)
        : base(value) { }
}

public abstract class Identity<TId> : IIdentity
    where TId : Identity<TId>
{
    public abstract string Value { get; protected set; }

    protected Identity(string value)
    {
        Value = value;
    }
    protected Identity() { /* Just for serialization */ }
}

It also cleans up the base class and rids it of the attributes.

Sunday, July 21, 2013

Fun With IDisposable and Action

IDisposable is a pretty neat tool in the .NET framework. For those that don't know what it is, a class that implements IDisposable can be wrapped in a using statement. When the code exits the block created by the using statement, the object's Dispose method is guaranteed to be called, automatically. (This includes the block being exited because of an exception.)
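For instance, Dispose runs between the throw and the catch when an exception escapes the block. A minimal sketch (logging to a list so the ordering is visible):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch: Dispose runs when the using block exits,
// even if an exception is thrown inside it.
public class Scope : IDisposable
{
    public static readonly List<string> Log = new List<string>();
    public void Dispose() { Log.Add("disposed"); }
}

public static class Program
{
    public static void Main()
    {
        try
        {
            using (new Scope())
            {
                Scope.Log.Add("inside");
                throw new InvalidOperationException("boom");
            }
        }
        catch (InvalidOperationException)
        {
            Scope.Log.Add("caught");
        }

        // Dispose ran between the throw and the catch:
        Console.WriteLine(string.Join(", ", Scope.Log)); // inside, disposed, caught
    }
}
```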

Every now and then I need to create some Razor extension methods (especially to avoid re-typing HTML wrappers around articles, or what-not). If you use Razor with ASP.NET MVC, you are probably familiar with Html.BeginForm(). It allows us to wrap an HTML form and emits the form's opening and closing tags around our elements.

@model MyNameSpace.MyModel
@using (Html.BeginForm())
{
    @Html.TextBoxFor(x => x.SomeProperty)
}

While I won't get into specifics of what extensions I have created in the past in this post, I wanted to discuss a way of cleaning up our code if we have a bunch of these extensions. Let's say we have two extension methods that make use of IDisposable. I have seen—and written myself—a few implementations like the following.

public static class HtmlHelperExtensions
{
    public static FooCloser Foo(this HtmlHelper helper)
    {
        // Do work
        return new FooCloser(helper);
    }

    public static void CloseFoo(this HtmlHelper helper)
    {
        // Finish up
    }

    public static BarCloser Bar(this HtmlHelper helper)
    {
        // Do work
        return new BarCloser(helper);
    }

    public static void CloseBar(this HtmlHelper helper)
    {
        // Finish up
    }
}

public class FooCloser : IDisposable
{
    private readonly HtmlHelper _helper;

    public void Dispose ()
    {
        _helper.CloseFoo();
    }

    public FooCloser (HtmlHelper helper)
    {
        _helper = helper;
    }
}

public class BarCloser : IDisposable
{
    private readonly HtmlHelper _helper;

    public void Dispose ()
    {
        _helper.CloseBar();
    }

    public BarCloser (HtmlHelper helper)
    {
        _helper = helper;
    }
}

After a while, all those "Closer" classes build up. But, there is a pattern that emerges that we can take advantage of. Instead of each set of helper methods getting their own IDisposable "closer," we can merge them all into one and refactor our code a bit by making use of Action.

public static class HtmlHelperExtensions
{
    public static IDisposable Foo(this HtmlHelper helper)
    {
        // Do work
        return new Closer(helper.CloseFoo);
    }

    public static void CloseFoo(this HtmlHelper helper)
    {
        // Finish up
    }

    public static IDisposable Bar(this HtmlHelper helper)
    {
        // Do work
        return new Closer(helper.CloseBar);
    }

    public static void CloseBar(this HtmlHelper helper)
    {
        // Finish up
    }
}

public class Closer : IDisposable
{
    private readonly Action _action;

    public void Dispose ()
    {
        _action();
    }

    public Closer (Action action)
    {
        _action = action;
    }
}

Now we have one universal "closer" that we can use anywhere we want to return an IDisposable—and not just with Razor extensions.
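As a quick example of using the Closer outside of Razor, here it guarantees a closing tag is written to a StringBuilder (the tag names are made up):

```csharp
using System;
using System.Text;

// The universal Closer from above, reused outside of any web framework.
public class Closer : IDisposable
{
    private readonly Action _action;
    public Closer(Action action) { _action = action; }
    public void Dispose() { _action(); }
}

public static class Program
{
    // Builds a tiny snippet of markup; the closing tag is guaranteed
    // by the Closer, even if the body code were to throw.
    public static string Render()
    {
        var sb = new StringBuilder();
        sb.Append("<article>");
        using (new Closer(() => sb.Append("</article>")))
        {
            sb.Append("Hello");
        }
        return sb.ToString();
    }

    public static void Main()
    {
        Console.WriteLine(Render()); // <article>Hello</article>
    }
}
```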

Simple.Web - Extending Handler Behaviors

I have continued working with Simple.Web and recently integrated a project with Azure Active Directory. To my surprise, it was actually pretty easy to provision AD on Azure and then use Windows Identity Foundation (WIF) to integrate it into the project. Since I can run Simple.Web with ASP.NET, it all "Just Works." There are some annoying things about Azure AD (that go above and beyond the annoyances of AD itself) but I'll save that for another post.

At any rate, the reason for this post is to talk about extending the behaviors that come packaged in Simple.Web. When I first got Azure AD integrated, it was easy to slap an IRequireAuthentication behavior on my handler, run the app, and then get redirected to the Azure AD login page. My project, though, is a multi-tenant app and I need the tenant id as well as the user's id. I could easily create a new behavior and slap it onto my handlers without much thought, but I wanted to avoid the situation where there are a million-and-one interfaces on each handler since they're all required. So, I tried extending the IRequireAuthentication behavior and, to my surprise, it worked like a charm.

So, the first thing to do was create the interface:

using System;
using Simple.Web.Behaviors;

public interface IRequireTenant : IRequireAuthentication
{
    Guid TenantId { set; }
}

The next step was to create the "implementation":

using System;
using System.Security.Claims;
using Simple.Web.Http;

public static class SetTenant
{
    public static bool Impl(IRequireTenant handler, IContext context)
    {
        var cp = ClaimsPrincipal.Current;
        Guid tenantId;
        if (Guid.TryParse(cp.FindFirst("http://schemas.microsoft.com/identity/claims/tenantid").Value, out tenantId))
        {
            handler.TenantId = tenantId;
            return true;
        }

        return false;
    }
}

And then go back and decorate the interface with the implementation:

[RequestBehavior(typeof(SetTenant))]
public interface IRequireTenant : IRequireAuthentication { ... }

Simple.Web still treats the handler as though it has the IRequireAuthentication behavior (it still triggers the AuthenticationProvider and redirects if needed) and I don't need to have a bunch of interfaces all over my handlers.

Wednesday, June 26, 2013

Simple.Web Serializers - Save Yourself Some Time!

Well, after a few hours of chasing my tail the past few days, I figured I needed to share some of my learnings. As I have blogged before, I have been working with Simple.Web to build out my projects lately. Yesterday, I started to write an API for a product and, when I attempted to test it, got a lovely 415 error from the application. Well, I forgot to add the XML and JSON serializers to the project so the application didn't know how to respond to the request.

So, if attempting to use Simple.Web for an HTTP API, don't forget to add your serializers! (They are easily installed via NuGet, just like the Simple.Web core.)

For JSON requests, you can use either the Simple.Web.JsonNet package or the Simple.Web.JsonFx package. For XML requests, you can use the Simple.Web.Xml package. Of course, Simple.Web is flexible enough that you could also write your own if you don't like the current ones.

So, if you're building an HTTP API using Simple.Web, save yourself some time and install the serializers up front so you don't waste as much time as I did!

Thursday, June 6, 2013

Abusive use of interfaces

So, I started working on a new project this week and, after exploring the code a bit, I noticed something that I find keeps popping up in .NET code: an overabundance (and misuse) of interfaces. Everywhere. Even to the point that the action methods take in IModels and such. After talking to one of the leads who had to rescue the code base, I learned that the original lead developer considered the "new" keyword bad. So, to that person, everything had to be an interface, and dependency injection had to be used everywhere. I mean, everywhere. And nothing is programmed against a concrete object—not even in the tests! To give an idea of the types of objects and interfaces, here is an example:

public interface IPerson
{
    string FirstName { get; set; }
    string LastName { get; set; }
    int Age { get; set; }
}

public class Person : IPerson
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

And then this class is used in controllers:

public class PersonController : Controller
{
    public ActionResult AddPerson(IPerson person)
    {
        if (ModelState.IsValid) { /* Add the person */ }
        // Do something else
    }
}

Well, for those not familiar with ASP.NET MVC, this use of interfaces in the action method (public ActionResult AddPerson) causes a whole lot of extra work. The model binder can't create an arbitrary object, so it can't bind the incoming data to an interface. Instead, a concrete class is required so it can be instantiated and then bound. So, to make MVC not choke, you have to create a custom model binder. (A topic for a different post.) So, instead of using the built-in goodness of ASP.NET MVC and its model binders, we're giving ourselves a lot more work with absolutely no value added.
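To make the underlying problem concrete: the default binder ultimately needs to instantiate the model type, roughly the way Activator.CreateInstance does, and that simply cannot work for an interface. (A simplified sketch; the real binder does quite a bit more.)

```csharp
using System;

public interface IPerson { string FirstName { get; set; } }
public class Person : IPerson { public string FirstName { get; set; } }

public static class Program
{
    public static void Main()
    {
        // The default model binder effectively does this to build the model:
        var ok = Activator.CreateInstance(typeof(Person)); // fine: concrete class
        Console.WriteLine(ok.GetType().Name); // Person

        try
        {
            // This is what binding directly to IPerson would require:
            Activator.CreateInstance(typeof(IPerson));
        }
        catch (MemberAccessException)
        {
            Console.WriteLine("cannot instantiate an interface");
        }
    }
}
```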

The other complete abuse here is that interfaces should describe the behavior and capabilities of an object, not what it is. Interfaces usually describe the verbs available on objects, and classes describe what the object is. Sure, there are edge cases and other tricks that utilizing an interface allows one to pull off, but this is definitely not a good use of them.

Wednesday, May 8, 2013

Asynchronous Command Handlers, Object Identity, and CQRS

I have been working on a (closed-source, unfortunately) project in my spare time and have been using the Command Query Responsibility Segregation (CQRS) pattern. In most places, it is pretty easy to utilize asynchronous command handlers, but there was one part that tripped me up a little bit. When a user creates a new aggregate root via, say, a web page, how do we handle the asynchronous nature of the command handler?

At one point, I thought about making these creation-type command handlers synchronous so the user could be warned if the creation failed or if there were errors. I was not very happy with this because it meant that I couldn't just place the command on a bus and move on. It meant I couldn't offload that work somewhere else, and the user had to sit and wait for this thing to go on. Basically, an all-around uncomfortable situation.

Why?

So, what made me care about this? Why is it important to think about? Well, it has some implications. First off, it makes one consider how an object's identity is created. If we have the domain create the id, then we have to sit around and wait for it to complete. (Hence, my dilemma.) Overall, this could work, but I was not satisfied with it. After all, my goal was to allow the web application to do as little processing as possible, other than serving up pages and communicating with the service bus.

Where do we go from here? I was a bit stumped and almost gave in to the idea that I would have to accept this band-aid fix as a permanent solution. Luckily, after some thought, and drawing some pictures, I realized I was looking at the problem from the wrong point of view. Shifting my thinking, the answer seemed pretty obvious and I wasn't sure why I didn't see it in the first place!

The First Step

The first step to my clarity was deciding that the domain did not need to be in charge of creating an identity. Instead, why can't we pass in, say, a globally unique id (GUID) and tell the domain that we expect something to be created with this id? Now, we don't have to sit around and wait on some database to assign an id and filter it back to the user. So, as part of our command, we can create a new identity and pass it into the domain from the web server. Now, since the server has the targeted identity, we can forward the user to a "Success!" page with the new identity as a hidden field. We can either set a timer to forward the user to the read-only model or provide a link on which the user can click.

But, what if it fails?

What if the creation fails? Well? Who cares? What does that actually mean in the domain? For me, in my current project, it didn't matter. We can display an "Oops! We screwed up!" page with a link back to the creation page. We could go so far as to reload the creation page with the data passed in (since we have the command, after all). Even if a user tries to re-use an identity to create an aggregate, maybe hoping to cheat the system, we can detect it (the aggregate cannot be created when it has already been created!) and show the user an error page.

Wait, create what was already created?

Let's say a malicious user wants to try to trick the system into recreating an aggregate in hopes of gaining access to it. Well, we have to be careful here. In my solution, aggregates are made by newing up the target type, loading the event stream from the backing store, re-running the events, and then calling some method on the aggregate. This includes "create" methods. The constructor doesn't actually put the object in a created state. So, instead, we have something like:

public abstract class Aggregate
{
    public void ReplayEvents(EventStream stream) { ... }
    protected void PlayEvent(Event target) { ... }
}

public class Foo : Aggregate
{
    private bool _created;

    public Foo()
    {
        // Just get the underlying code ready,
        // but don't set the state to created.
    }

    public void Create(FooCreateParams params)
    {
        // Validate the params and all that fun stuff
        // and, if all is well, fire a created event.
        // If this has already been created, throw an
        // exception or, maybe, fire an error event.

        if (_created) { /* Blow up! */ }

        PlayEvent(new CreatedEvent(params.Id)); // You get the gist.
    }

    private void PlayEvent(CreatedEvent target)
    {
        // React to the event here.
        _created = true;
    }
}

So, if the object has already been created, we don't want to mess with the state. Depending on your domain and the context of your application, you could either fail silently, fire an error event, or even throw an exception. No matter what you do, though, if a user somehow messes with the system (or you happen to have an identity clash) and tries to execute a Create on an already created object, we do not want to hint that the user actually hit upon an actual id.

Conclusion

With a little bit of thought, we are able to clear up a seemingly complex operation down to a pretty easy solution that allows us to keep our asynchronous operations. Now we have a pretty clean set of operations: User hits submit, we load the command on the bus with a new id, redirect the user to a "success" page with some way of referencing the new object, and then let the user move from there. On error, regardless of why, we let the user know an error happened, and provide them with some information to make a decision on how to move forward.

Tuesday, May 7, 2013

Refactoring? Or Rewriting?

I was recently asked in an interview what it meant to refactor code and what the prerequisite was to refactoring. I have to admit, I was a bit thrown off being asked what the "prerequisite" is, since it seems to me to be reflexive software development nowadays. To me, refactoring is an interesting concept in software development that a lot of other engineering-type professions don't get to leverage as freely as we do. But, I think it has become (incorrectly) synonymous with rewriting code. (Or maybe refactoring is instead used as a guise to rewrite.)

Refactoring is very simple and is akin to rewording the language in your code to express the same idea, or original intent, in a different statement. Rewriting your code is changing the intended behavior of the code—the opposite of refactoring. I have spent a lot of time refactoring some code on my current project lately, and I have run across the following code a lot.

public List<Foo> GetFoos(SomethingElse[] somethingElses)
{
    var retval = new List<Foo>();

    foreach (var item in somethingElses)
    {
        retval.Add(new Foo() { Bar = item.Bar });
    }

    return retval;
}

So, this is a pretty trivial sample of code, but it is really easy to see what is going on and what the intent of the code is. Basically, it is mapping a collection of one type of object to a collection of another type. We create a list and then iterate through the original collection, appending a new instance of Foo, and then return it. Using LINQ, we can actually simplify this just a tad and even type less code to get the same effect. (Refactor it.)

using System.Linq;

public List<Foo> GetFoos(SomethingElse[] somethingElses)
{
    return somethingElses
        .Select(x => new Foo() { Bar = x.Bar })
        .ToList();
}

Again, this is a trivial example, but it shows how we can refactor our code, keeping the original desired behavior intact while using different syntactic sugar to clean it up. (I prefer using LINQ, personally, over foreach, especially in nested situations.) But, besides this being a trivial situation, what confidence do we have that we did not silently introduce a bug into our system? What assurance do we have? Well, before refactoring our code, we should put barriers in place to help us reason about our code. In the simplest sense, we should have a unit test in place, before we make our changes, to assert that we did not introduce a bug. Of course, if we're good TDDers we would have this unit test in place already, and have a nice regression test to fall back on. If not, it would behoove us to get one in place, quickly.

[Test]
public void given_an_array_of_something_elses_it_should_return_a_list_of_foos()
{
    var somethingElses = Enumerable.Range(0, 5)
        .Select(i => new SomethingElse() { Bar = i })
        .ToArray();

    // Let's assume GetFoos is defined as a static method on Baz
    var result = Baz.GetFoos(somethingElses);

    for (int i = somethingElses.Length - 1; i >= 0; --i)
    {
        Assert.That(result[i].Bar, Is.EqualTo(somethingElses.ElementAt(i).Bar));
    }
}

Well, that's refactoring, but what about rewriting? If we look at our simple method, what happens when we pass in a null array of SomethingElse? At this point, our method doesn't care and will attempt to iterate over it anyway. This, of course, results in a null reference exception and we have to track down how this happened. But, let's say we decide to change the behavior of this method. We, instead, will throw an exception if the array is null because the precondition has not been met. Since we are changing the method's behavior, we are rewriting our code. One hint that this is a rewrite is the fact that we need to introduce a new unit test.

[Test]
public void given_a_null_array_it_should_throw_an_argument_null_exception()
{
    var exception = Assert.Throws<ArgumentNullException>(() => Baz.GetFoos(null));
    Assert.That(exception.ParamName, Is.EqualTo("somethingElses"));
}

I have used NUnit-style syntax in this post, but it should be fairly clear what is being tested.

Now, why is this important? Well, as we're writing our code, we tend to learn a lot about it. We see new ways to implement things that allow us to, later, extend the behavior of something without changing the original meaning. It also allows us to nicely execute test-driven development: "Red -> Green -> Refactor."

Monday, April 29, 2013

Pits of Success

The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks. To the extent that we make it easy to get into trouble we fail.

—Rico Mariani, MS Research MindSwap Oct 2003.

For anybody who has designed a system, it can get really annoying when users turn around and do really dumb things with our designs. It can be frustrating, especially when we have made our design perfectly clear (in our minds). I often overhear arguments between some of the programmers around me and our customer about this very thing. (Programmers arguing with the customer—topic for another post.)

One of my big pushes at work is to change the way we view our software. It is very easy to put blinders on and just get the job done. We deal with a lot of very pertinent aircraft data and messing that data up has major consequences. As this system was started in the 70s using COBOL, there was not a lot of screen real estate. As such, there are times when a field on the screen serves double duty, but only has a label for one of its jobs. An example is a date field on one of our screens. It can house a date, or the user can input a command to increment the count in a field next to it. Needless to say, if I find the whole thing confusing and I have access to the programmers and source, I can't imagine how hard it is to be a user.

I have been getting into the "Lean Startup" movement and found a term used in Toyota's Lean Manufacturing process: Poka-yoke. It exactly describes my argument for making user screens easier to use and more intuitive. Gone are the days of needing 500 disparate fields on the screen because it is easier than creating a new screen altogether. Much like in writing code, our user interaction points need to have low coupling and high cohesion. I am, by no means, a user experience expert, but when the same situations keep arising where users are incorrectly using our systems, it is usually because of poor design, not stupid users. We should aim to develop our systems such that users find themselves walking in the pit of success rather than walking a tightrope trying to avoid our mistakes.

Sunday, April 28, 2013

Opening Worlds, Sharing, Teaching, Learning

I'm probably late to the game here, but I saw this Code.org short video a few days ago and wanted to share it.

Great coders are today's rap stars.

— will i. am

I personally think everybody should learn how to code. Or at least learn about how a computer operates. I am not a fan of using "magic" to explain things and I do not think that technology is so unapproachable that it has to be explained away as being magic. I was lucky enough to be taught how to code at a very young age and believe it opened up my world so much more than if I had been robbed of that time.

Sunday, April 21, 2013

Object Oriented State

It is very easy to lull ourselves into the feeling that we are writing object oriented code, especially in languages like Java and C#, without giving much thought to what that really means. The idea behind object oriented programming is to combine behavior with state—or rather, encapsulate state within an object and expose behavior. Juxtapose this with procedural languages, like C, where state and behavior are separated. This "feeling" that we are writing object oriented code usually seems to stem from the idea that we are wrapping our code in classes.

Please, do not get me wrong, I am not bashing procedural code. In fact, I believe that procedural code, used correctly, can be easier to understand and reason through. Indeed, it is used pretty heavily in object oriented programming to implement procedural pieces. That is probably why we tend to write so much procedural code, even in object oriented languages. Unfortunately, I think too many people currently view their work as object oriented programming simply because they are using classes. I think a distinction needs to be made because we should reason about our code more often. While I will save that argument for another post, what I will cover is a few examples of how we can make our code more object oriented, if that is our end goal.

Anemic

Without digging too much into domain driven design (DDD), command query responsibility segregation (CQRS), or other topics, I want to explore some code and compare the differences and consequences of the design. Let's start with a shopping cart that can hold products. We'll use a dictionary with the product as the key and the quantity as the value.

public class Product
{
    public decimal Price { get; set; }
}

public class ShoppingCart
{
    public Dictionary<Product, int> Items { get; set; }
}

If we stick to this (anemic) "object oriented" design and, say, try to sum the items in a shopping cart, we are going to need something like this (using LINQ to simplify the code):

public class ShoppingCartController : Controller
{
    ...
    public ActionResult GetCartSubTotal()
    {
        var cart = cartRepository.GetCartForUser();  // Use your imagination here.
        var total = cart.Items.Select(x => x.Key.Price * x.Value).Sum();
        return PartialView(total);
    }
}

Which doesn't seem too bad, I suppose. But now we need the subtotal in another portion of the application, and after a while we have a few places where this code is repeated. So we decide to refactor and expose a method that returns the subtotal.

A little better

public class ShoppingCart
{
    public Dictionary<Product, int> Items { get; set; }

    public decimal GetSubTotal()
    {
        return Items.Select(x => x.Key.Price * x.Value).Sum();
    }
}

It seems this is where most developers stop. There is more we can do, though, to clean up our design. If we are truly worried about encapsulation, why, oh why, can the rest of the world manipulate our dictionary of items? What happens if we decide to create a new class that holds a reference to a product and the current quantity, thus using a List instead? Now we have to go back through the application and clean it up. But, why do we even go through that in the first place? What if, instead, we all agree that (unless absolutely necessary), we will not use public properties? We could transform our cart into something like:

public class ShoppingCart
{
    private readonly Dictionary<Product, int> _items = new Dictionary<Product, int>();

    public decimal GetSubTotal() { ... }
    public int AddProductToCart (Product product, int quantity) { ... }
    public void RemoveProductFromCart (Product product) { ... }
    public void RemoveProductFromCart (Product product, int quantity) { ... }
}

Thus, we have successfully pulled all of the algorithms to perform the different functions on a shopping cart into one place. We can very easily test this code, and we can refactor it without touching the rest of our application. This allows us to follow "tell, don't ask," where we tell our shopping cart to add an item instead of asking it for the underlying items so we can, via external code, add an item or update the quantity.
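The method bodies above are elided, so here is one minimal, self-contained sketch of how they might look (the Product class is repeated so the snippet compiles on its own, and the int return from AddProductToCart is my own guess at the contract: the product's new quantity):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public decimal Price { get; set; }
}

public class ShoppingCart
{
    private readonly Dictionary<Product, int> _items = new Dictionary<Product, int>();

    public decimal GetSubTotal()
    {
        return _items.Select(x => x.Key.Price * x.Value).Sum();
    }

    // One plausible contract for the int return: the new quantity in the cart.
    public int AddProductToCart(Product product, int quantity)
    {
        int current;
        _items.TryGetValue(product, out current);
        _items[product] = current + quantity;
        return _items[product];
    }

    public void RemoveProductFromCart(Product product)
    {
        _items.Remove(product);
    }

    public void RemoveProductFromCart(Product product, int quantity)
    {
        int current;
        if (!_items.TryGetValue(product, out current)) return;
        if (current <= quantity) _items.Remove(product);
        else _items[product] = current - quantity;
    }
}
```

Notice that every rule about quantities (merging duplicate adds, removing a product entirely when its count hits zero) now lives in exactly one place.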

Exposure

But, I bet you're wondering, how is the rest of the world supposed to know about the items we have in this cart if we don't expose the collection of items? Good question! We can create a read-only view model, or a snapshot, of the underlying data that is safe for the rest of the world to use.

public class CartEntry
{
    public string ProductName { get; private set; }
    public decimal Price { get; private set; }
    public int Quantity { get; private set; }

    public CartEntry(string productName, decimal price, int quantity)
    {
        ProductName = productName;
        Price = price;
        Quantity = quantity;
    }
}

public class CartView
{
    public IEnumerable<CartEntry> Items { get; private set; }
    public decimal SubTotal { get; private set; }

    public CartView (IEnumerable<CartEntry> items, decimal subTotal)
    {
        Items = items;
        SubTotal = subTotal;
    }
}

public class ShoppingCart
{
    private readonly Dictionary<Product, int> _items = new Dictionary<Product, int>();

    public CartView GetView()
    {
        var items = _items.Select(x => new CartEntry(x.Key.Name, x.Key.Price, x.Value)).ToList();
        return new CartView(items, GetSubTotal());
    }
}

It may seem like a lot more code, but when you see how much repeated logic you pull into one place, and how much more testable this is, it will be worth it. There are always exceptions and use cases where something like this doesn't make sense. But when those use cases are found, it is usually because one has reasoned about the code and has a good reason not to put in the little bit of effort to truly follow an object oriented approach.

Friday, April 19, 2013

Using Strongly Typed Properties

When my team and I started to port a Web Forms application over to the MVC side of the ASP.NET stack, I specifically set out to solve an issue we continued to run into. We have these fields on the screen that tie back to some property on an underlying "program," usually exposed as some sort of primitive, such as an int or string. These properties are found in many programs and usually follow some sort of business rules. Let's say one of these properties is called a Foo. In the Web Forms implementation, we would see something such as the following.

public class SomeProgram : Program
{
    public string Foo { get; set; }
}

public class SomeOtherProgram : Program
{
    public string Foo { get; set; }
}

A "Foo" has a very specific meaning in the domain of the application. In the implementation described above, it is very easy to accidentally pass the wrong string to a property because you don't get a red squiggly telling you that you just set that "Foo" with a "Bar"; they're both just strings.

Instead, in our rework, we created a base class called CommonType. (These are basically just value types in the domain driven design world, in that we don't care about their individual identity.) From that CommonType, we created other "types" to represent our data. So, instead of our properties looking like they did above, we now have something like this:

public class SomeProgram : Program
{
    public Foo Foo { get; set; }
}

public class SomeOtherProgram : Program
{
    public Foo Foo { get; set; }
}
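The post doesn't show what CommonType itself looks like; here is one hedged sketch of how such a value type might be defined, with value-based equality and no implicit conversion from string, so a raw string can't silently become a Foo. Everything beyond the names Foo and CommonType is my own assumption, not the post's actual implementation.

```csharp
using System;

// Hypothetical sketch of the CommonType base described above.
public abstract class CommonType
{
    public string Value { get; private set; }

    protected CommonType(string value)
    {
        if (value == null) throw new ArgumentNullException("value");
        Value = value;
    }

    public override bool Equals(object obj)
    {
        // Value types in the DDD sense: equal when both the concrete
        // type and the wrapped value match, so a Foo never equals a Bar.
        return obj != null
            && obj.GetType() == GetType()
            && ((CommonType)obj).Value == Value;
    }

    public override int GetHashCode() { return Value.GetHashCode(); }
    public override string ToString() { return Value; }
}

public class Foo : CommonType
{
    public Foo(string value) : base(value) { }
}

public class Bar : CommonType
{
    public Bar(string value) : base(value) { }
}
```

With something like this in place, `someProgram.Foo = new Bar("x")` is a compile error rather than a silent data bug.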

Now we can't accidentally pass a "Bar" to a "Foo" without some sort of conversion, and we get the help of the compiler telling us when we're mapping things incorrectly. The other benefit is that these fields need to be represented the same way in the UI. Instead of repeating the same hand-coded (usually inline) styles, we can create editor templates and model binders, and tell MVC how to play nicely with our types.

@model X.Y.Z.Program
@using (Html.BeginForm())
{
    @* The original snippet's body was lost; e.g. render the Foo via its editor template: *@
    @Html.EditorFor(m => m.Foo)
}

Now we have one, and only one, place that defines what a "Foo" looks like in the application instead of relying on Find-All.

Tuesday, April 9, 2013

Agile Software Development

As I am currently in a transition phase out of my current job, I have been looking at my résumé a lot. I noticed that I do not have the word "Agile" on it at all. This is interesting to me because it was something that I tried to promote just a few years ago. I think that has a lot to do with how I view software development now. While listening to a .NET Rocks round-table podcast a few months back, the question, "Is agile dead?" was posed. The answers were pretty interesting, but the one that stuck out to me the most was something along the lines that agile isn't dead, it is just the way we do software development now.

On the show's page, a listener, "bashmohandes," made a comment that I have been thinking about for a very long time:

I don't think Agile is dead, but I think it got ruined by companies that think they are doing it but they are just fooling themselves, or using it as an excuse to keep changing requirements on the poor developers.

Most of the places I worked at, have some remnants of agile methods, like daily scrums, but pretty much nothing more, no fixed size sprints, or spring planning meetings, or retrospective meetings at the end of the sprint, or pair programming.

Specifically, that companies are "using it as an excuse to keep changing requirements." While I am leaving my current position, this is something that I have been struggling to keep from affecting my team. There are a few people in place who could fix this—but they don't. In fact, I don't think they know how to. I was recently told that we develop software in "a more waterfall approach." This made me laugh a bit, as we don't even follow waterfall. That whole requirements-gathering thing is a joke. The developers are currently expected to read the customer's ever-changing mind, and it is difficult to pin them down.

My current project has been around since the 70s and supports a major command of the USAF. The failure of the application means that troops and supplies are halted and commanders can lose situational awareness of their fleets. But, even with how critical this application is, there is hardly any structure when it comes to defining requirements and paving a path forward. Listening to the podcast really hit home. So many managers seem to want a "spray on agile" solution as if it is some sort of magic. Well, it isn't and "spray on solutions" don't work. In fact, they make things worse.

On the web development side of the shop, I have been able to focus everybody's attention on unit testing, pair programming, and actually talking with the stake holders. Unfortunately, the planning portion is still lacking. Our side of the house has been strategically made void of information silos. Unfortunately, on the COBOL side of things, they still exist. Each COBOL programmer works at their own pace, makes changes, and throws them at the web team to handle. Cause we're agile.

While I believe that true agile is the way to go in software development, I also take a pragmatic approach to things. When initially implementing an agile methodology, it is usually very difficult to get everybody on board, up front, and willing to make all the changes necessary to truly support the change. But, it is important to keep going, keep adding things, keep changing, keep growing, keep learning. It really takes a substantial commitment from management, the customer, and the development team, to realize that they are all in it together.

Thursday, April 4, 2013

Personal Brand - Part 2

In part one, I gave a little back story about myself and how, by failing to manage my "personal brand," I set myself up for a really rough transition out of my current job. One of the great things about owning up to your own mistakes is that you can then do something about it. It is yours to own, it is yours to fix.

I have resolved to do a few things this year, both to promote myself and showcase my capabilities, and to give back to those who have helped me. I have used a lot of open source, but I have failed, miserably, to give back. That is really wrong and something I want to turn around this year. Without thinking, it is easy to pull down everybody else's hard work through NuGet and have an app up and running so quickly these days. It is very easy to forget that you're being a heavy consumer, even when you're producing.

Well, I would love to get involved in open source and actually contribute something. My hesitation has been mostly driven by my current employment situation. If something I wrote in open source looked like something I used at work, it would cause some issues. Well, I am on my way out, starting a new chapter, and to hell with it. I want to give back to the community and, hopefully, help somebody out there on the interwebz have a better development experience.

Something else that I have failed to do was keep some sort of blog. It is a bit narcissistic to have one and so I have always had trouble with the idea of putting my random thoughts out on the wire and expecting people to want to read it. But, I realized, this is for me. I am writing to keep track of my own personal progress. It is a way for me to measure my personal growth, keep track of things I'd otherwise forget, and, again, hopefully give back to somebody out there.

While I would love for this experience to produce some magical employment opportunity in the future, that's not really what it is about.

So, I will be working to blog more, tweet more, get involved in open source, and be an all around more giving person on the internet (and IRL). If you have a project on which you want help, post a comment, send me an email, hit me up on Twitter, something. For the time being, I will probably troll around GitHub and look around some more. I really like what Mark Rendle is doing with Simple.Web, so I might fiddle with that a bit until I settle down on a project or two.

I realize that my "lack of experience" on my résumé is going to haunt me for a while. I could, technically, have skipped the Air Force and just focused solely on me and what I want, but that wouldn't have been any good either. So, here I am, taking my first steps to being a better person, managing my own personal brand, and expecting to make a change—somewhere.

Wednesday, April 3, 2013

Personal Brand - Part 1

I have been using computers, programming, and enjoying technology since I was a very little kid. I recently started to look for a new job and realized, very quickly, that I have failed to "manage my personal brand." Here's a little story on how I came to realize it, and, in part two, what I plan to do to fix it.

After having successfully built up a customer base while serving in the US Air Force, and later transitioning from military to civilian life on the strength of that same clientele, I have found it extremely difficult to explain to employers why I am not a "Junior Programmer." Looking at my résumé, it would appear that I only have three years of development experience. Which is really hard to explain. Really hard.

I successfully built up a great customer base that supported me as I transitioned out of military life to civilian life. Upon deciding to have a child, my wife and I found it prudent for her to come home and me to "get a job." We really wanted a few stable things like healthcare and the ability for me to work a normal 40-hour work week. Had we held off on expanding our family, I would probably still be growing my consultant work and building up a product to bring to market.

Back to work...

Upon posting my résumé online, I received about 20 phone calls from recruiters and had a new job lined up in less than 24 hours. I found myself back on a military base, working on an application that supports a major command (MAJCOM) in the Air Force. I joined the team in their rush to complete a website written using ASP.NET Web Forms, with three months left in their year-long development cycle. It was a pretty nasty beast with no structure—but it was a "job."

After hitting the release date (which was actually considered a miracle) we went into maintenance mode. The team started to fall apart and, eventually, everybody quit a few months later. Well, everybody except me, the new guy, and a federal employee we worked with. Unfortunately for us, the work load didn't change and the customer still expected the continued support of the application. I became the "lead," as a contractor with the recommendation and support of the "big boss."

Over the next year, I was able to build up a new team and actually competed for, and was hired as, a federal employee. This gave the team some stability, and me the opportunity to change things. I was able to untangle the ball of mud into a pretty elegant ASP.NET Web API and MVC3 solution. My new team was very supportive and open to my guidance. Together we created a composable system that obfuscated the 30+ year old COBOL code and started to drag functionality out of the legacy monolith into C#.

Needless to say, I have been working heavily in .NET for a long time and using, almost exclusively, C#. I know it very well and use my own personal time to research and increase my knowledge. I write code, even when I know I'm going to throw it away, just to practice and learn something new. I explore open source, non-Microsoft frameworks, along with the stuff Microsoft hands us developers. I do not believe that Microsoft has all of the answers and am not a developer that follows them blindly (when able).

Transition

I am now going on a combined two years on this project and was always a bit leery of writing a blog or exposing too much about what I do. My project is not accessible to the public, but it is used world-wide by people from "wrench-turners" to executives in the military. If my application were to suddenly stop, it can keep aircraft from flying, personnel from being moved, and people from working—all over the world. But now I'm leaving.

I decided to stay in the private sector a bit more before jumping back out to form my own company, again. And, after having successfully managed the development and life-cycle of a multi-million dollar application, I am having to answer questions about the difference between a class and an interface. The interviews feel like I won a spot as a player on a very poor C# quiz show. They are very shallow, at best. But I have pushed on through them.

Rejection

Today, I received a rejection notice for a "Software Engineer III" position, from a recruiter:

Hi Jim,

We got feedback from your submission at [redacted]. The hiring manger said that from his perspective you are a little light on the C/.NET development side. He broke it down a little more for us, citing the COBOL application interface, VB6 and independent front end web application work and that he has worked with many engineers in this type of environment for many years. That being said, he needs someone with a more development heavy C/.NET background.

I'll keep looking and will let you know the next time I have a position that suites your skillset. [sic]

I don't even write COBOL! Or VB6 for that matter. Oi! But it is enough to motivate me to manage my personal brand.

Tuesday, April 2, 2013

Introduction to the Decorator Pattern

One of my favorite software development patterns is the Decorator Pattern. After having taken on so-called brown-field projects, and discussing design patterns with others, I notice so many areas where this pattern could have saved a big ball of mud from forming. It happens very often, even when code starts off cleanly written, that new requirements are added. Without much thought, it is easy to go into the areas affected and edit the classes that have already been written. (Very quickly violating the Open/Closed and Single Responsibility (SRP) principles.)

We will start with something simple here just to get a feel for the pattern itself. So, without further ado, let's take a look at some code. Let's say we have a class called Handler and its job is to handle some command given to the system. We will avoid the use of generics for now to keep it simple.

namespace Patterns.Decorator
{
    public abstract class Handler
    {
        public abstract void Handle (object command);
    }
}

And let's say we have some concrete implementations of Handler that each talk to the database to update some record. (Null checks avoided to simplify the code.)

namespace Patterns.Decorator.Concrete
{
    public class FooUpdater : Handler
    {
        public override void Handle (object command)
        {
            // Foo is just some made-up class to give this method a bit of a body
            if (command is Foo)
            {
                var asFoo = (Foo)command;
                var record = database.Load(asFoo.Id); // get a record from the database
                record.Bar = asFoo.Bar;              // update the record
                database.Save(record);               // and save it
            }
        }
    }

    public class BazUpdater : Handler
    {
        public override void Handle (object command)
        {
            // Baz is just some made-up class, as well
            if (command is Baz)
            {
                var asBaz = (Baz)command;
                var record = // Okay, pretty much the same thing as above
                ...
            }
        }
    }
}

Everything is working well, but this was quickly implemented to get it in front of the customer. The functionality has been "blessed" and we realize we need to log exceptions. What I have seen, all too often, is something such as:

public class FooUpdater : Handler
{
    public override void Handle (object command)
    {
        try
        {
            if (command is Foo)
            {
                var asFoo = (Foo)command;
                var record = database.Load(asFoo.Id); // get a record from the database
                record.Bar = asFoo.Bar;              // update the record
                database.Save(record);               // and save it
            }
        }
        catch (Exception ex)
        {
            ErrorLogger.LogException(ex);
        }
    }
}

For a very simple system, this might work just fine. But this is not a maintainable solution. We have just broken the Open/Closed principle along with SRP. Our Handle method has taken a dependency on ErrorLogger (good luck testing this method now) and has to be changed every time we want to add some functionality. Even worse, that same boiler-plate code has to be copy-and-pasted all through our project. Yuck!

One way to fix this is to rewrite the Handler class:

public abstract class Handler
{
    protected abstract void HandleImpl (object command);

    public void Handle (object command)
    {
        try
        {
            this.HandleImpl(command);
        }
        catch(Exception ex)
        {
            ErrorLogger.LogException(ex);
        }
    }
}

But then our base class becomes flooded with all sorts of mixed behaviors (it tends to become a god object) and quickly becomes a maintenance nightmare. Imagine if we wanted to add some functionality that logged some information based on the type of command given. We would probably find ourselves with a nasty, brittle, switch statement. Once it was written, we would never want to add functionality because it would be too scary. You can't even test this mess to get some sort of regression tests without a bunch of pain. (No wonder some people don't like writing unit tests!)

Fear not! The Decorator Pattern is here to help save us from this unwieldy jumble of code. Think of each piece of functionality that we were trying to implement. The pieces wrap around each other, kind of like decorating a cake: one layer of frosting, or functionality, at a time. We can achieve this same effect by pulling the layers out into their own classes that look like the original base class. But, instead of having a default constructor, they will take in a Handler, so we can layer functionality.

namespace Patterns.Decorator.Concrete
{
    public class FooUpdater : Handler
    {
        public override void Handle (object command)
        {
            if (command is Foo)
            {
                ... // Do the work like we originally did, above.
            }
        }
    }
}

namespace Patterns.Decorator.Logging
{
    public class Logger : Handler
    {
        private readonly Handler _decorated;

        public override void Handle (object command)
        {
            try
            {
                _decorated.Handle(command);
            }
            catch (Exception ex)
            {
                ErrorLogger.LogException(ex);
            }
        }

        public Logger(Handler decorated)
        {
            _decorated = decorated;
        }
    }
}

Now to add the logging functionality, we can leave the original FooUpdater class alone and just wrap it with new functionality. Easy as cake!

var handler = new Logger(new FooUpdater());
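Decorators also compose, so a second cross-cutting concern is just another wrapper. Here is a self-contained sketch of the whole arrangement; the NullGuard decorator and the Handled counter on FooUpdater are my own inventions for demonstration, not part of the original design:

```csharp
using System;

public abstract class Handler
{
    public abstract void Handle(object command);
}

// Innermost handler: stands in for the real work.
public class FooUpdater : Handler
{
    public int Handled; // counts successful calls, just for demonstration

    public override void Handle(object command)
    {
        Handled++;
    }
}

// Decorator: rejects null commands before the inner handler runs.
public class NullGuard : Handler
{
    private readonly Handler _decorated;
    public NullGuard(Handler decorated) { _decorated = decorated; }

    public override void Handle(object command)
    {
        if (command == null) throw new ArgumentNullException("command");
        _decorated.Handle(command);
    }
}

// Decorator: logs exceptions thrown by anything it wraps.
public class Logger : Handler
{
    private readonly Handler _decorated;
    public Logger(Handler decorated) { _decorated = decorated; }

    public override void Handle(object command)
    {
        try { _decorated.Handle(command); }
        catch (Exception ex) { Console.WriteLine("logged: " + ex.Message); }
    }
}
```

Layering is then just nesting constructors, outermost concern first: `new Logger(new NullGuard(new FooUpdater()))`. A null command never reaches FooUpdater, and the exception the guard throws is caught and logged by the outer layer.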

So, we have seen how to add functionality to a system without modifying the code we wrote originally. Of course, requirements change and we might need to modify the original code—but we have crafted our solution so that the original code only needs to be modified for one reason (the business rules have changed). Using the decorator pattern we can safely maintain the Open/Closed principle and SRP. Now we won't be so afraid to add that next layer of functionality.

Sunday, March 24, 2013

Simple.Web and F#

I have become increasingly more interested in F# and hope, someday, to transition over from C# to F# in my day-to-day work. (More on that later.)

I have also become increasingly more interested in Simple.Web by Mark Rendle and hope, someday, to transition over from ASP.NET MVC to Simple.Web in my day-to-day work. (More on that later, as well.)

Given my fondness of both F# and Simple.Web, I decided to try to marry the two. I created the normal ASP.NET Empty Project and added the Simple.Web.Aspnet NuGet package along with the Simple.Web.Razor NuGet package. Then I created a handlers project (F# Class Library), referenced it from the first project, and added the same NuGet packages. I created a simple handler for the site's index in the first project.

namespace SimpleTest.Web

open Simple.Web

[<UriTemplate("/")>]
type Class1() =
    interface IGet with
        member this.Get() = Status.OK

I also added a Razor page to handle my get request and then hit F5 to fire it up. Since it was so simple, I was expecting to see my fake homepage and see my little test succeed. Wrong.

Instead, I am greeted with a yellow screen of death telling me "Object reference not set to an instance of an object." Hmmm...

I created the same class in C# and everything worked fine. I was beginning to wonder if this was going to work out at all. Well, after some fiddling, I got it to work. Why it works, I am not really sure. (I'm far from completely comfortable in F#, let alone an expert.) All I had to do was add a member function that mirrored my interface version of Get().

namespace SimpleTest.Web

open Simple.Web

[<UriTemplate("/")>]
type Class1() =
    member this.Get() = Status.OK

    interface IGet with
        member this.Get() = Status.OK

Of course that means I can just return the result of the member function from the interface implementation. But, that got me to thinking. I decided, in the interface implementation, to throw a NotImplementedException:

namespace SimpleTest.Web

open System
open Simple.Web

[<UriTemplate("/")>]
type Class1() =
    member this.Get() = Status.OK

    interface IGet with
        member this.Get() = raise(new NotImplementedException())

To my amusement, it still loaded my page. I understand that to access an interface-defined method in F# you have to explicitly cast the instance to the interface you're targeting. I just did not know that applied across the language boundaries as well when doing whatever magic Simple.Web is doing to find the applicable method.
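For C# readers, the F# behavior is analogous to explicit interface implementation, where the interface method is only reachable through the interface type and a public member of the same name is a separate method. This is a sketch of the analogy only (the interface and class names are mine), not a claim about what Simple.Web actually does internally:

```csharp
using System;

public interface IGetLike
{
    string Get();
}

public class Explicit : IGetLike
{
    // Explicit implementation: not part of the class's public surface,
    // reachable only through an IGetLike reference.
    string IGetLike.Get() { return "interface"; }

    // Separate public member with the same name; this is what plain
    // member access (and reflection over public methods) finds.
    public string Get() { return "member"; }
}
```

In F#, all interface implementations are explicit in this sense, which would explain why a framework discovering methods by name could hit the public member while the interface slot throws.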

Time to hunt that down next.