May 2014

Volume 29 Number 5

C# Best Practices: Dangers of Violating SOLID Principles in C#

Brannon King | May 2014

As the process of writing software has evolved from the theoretical realm into a true engineering discipline, a number of principles have emerged. And when I say principle, I’m referring to a feature of the computer code that helps maintain the value of that code. Pattern refers to a common code scenario, whether good or bad.

For example, you might value computer code that works safely in a multi-threaded environment. You may value computer code that doesn’t crash when you modify code in another location. Indeed, you might value many helpful qualities in your computer code, but encounter the opposite on a daily basis.

There have been some fantastic software development principles captured under the SOLID acronym—Single responsibility, Open for extension and closed for modification, Liskov substitution, Interface segregation, and Dependency inversion. You should have some familiarity with these principles, as I’ll demonstrate a variety of C#-specific patterns that violate them. If you’re unfamiliar with the SOLID principles, you might want to review them quickly before proceeding. I’ll also assume some familiarity with the architectural terms Model and ViewModel.

The SOLID acronym and the principles encompassed within did not originate with me. Thank you, Robert C. Martin, Michael Feathers, Bertrand Meyer, James Coplien and others, for sharing your wisdom with the rest of us. Many other books and blog posts have explored and refined these principles. I hope to help amplify the application of these principles.

Having worked with and trained many junior software engineers, I’ve discovered there’s a large gap between the first professional coding endeavors and sustainable code. In this article, I’ll try to bridge that gap in a lighthearted way. The examples are a bit silly, with the goal of helping you recognize that the SOLID principles apply to all forms of software.

The professional development environment brings many challenges for aspiring software engineers. Your schooling has taught you to think about problems from a top-down perspective. You’ll take a top-down approach to your initial assignments in the world of hearty, corporate-sized software. You’ll soon find your top-level function has grown to an unwieldy size. To make the smallest change requires full working knowledge of the entire system, and there’s little to keep it in check. Guiding software principles (of which only a partial set is mentioned here) will help keep the structure from outgrowing its foundation.

The Single Responsibility Principle

The Single Responsibility Principle is often defined as: An object should only have one reason to change; the longer the file or class, the more difficult it will be to achieve this. With that definition in mind, look at this code:

public IList<IList<Nerd>> ComputeNerdClusters(
  List<Nerd> nerds,
  IPlotter plotter = null) {
  ...
  foreach (var nerd in nerds) {
    ...
    if (plotter != null)
      plotter.Draw(nerd.Location, 
      Brushes.PeachPuff, radius: 10);
    ...
  }
  ...
}

What’s wrong with this code? Is software being written or debugged? It may be that this particular drawing code is only for debugging purposes. It’s nice that it’s in a service known only by interface, but it doesn’t belong. The brush is a good clue. As beautiful and widespread as puffs of peach may be, it’s platform-specific. It’s outside the type hierarchy of this computational model. There are many ways to segregate the computation and associated debugging utilities. At the very least, you can expose the necessary data through inheritance or events. Keep the tests and test views separate.
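
One way to achieve that separation, sketched under the assumption that Nerd exposes a Point-valued Location (the class and event names here are illustrative, not part of the original service):

```csharp
// Hypothetical reshaping: the computation raises an event instead of drawing.
public class NerdClusterer
{
  // Debug or UI code subscribes; the math stays platform-free.
  public event Action<Point> NerdVisited;

  public IList<IList<Nerd>> ComputeNerdClusters(List<Nerd> nerds)
  {
    var clusters = new List<IList<Nerd>>();
    foreach (var nerd in nerds)
    {
      var handler = NerdVisited;
      if (handler != null)
        handler(nerd.Location); // No brushes or radii in here
      // ... clustering work ...
    }
    return clusters;
  }
}

// The PeachPuff drawing moves out to the debugging code:
// clusterer.NerdVisited += p => plotter.Draw(p, Brushes.PeachPuff, radius: 10);
```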

Here’s another faulty example:

class Nerd {
  public int IQ { get; protected set; }
  public double SuspenderTension { get; set; }
  public double Radius { get; protected set; }
  /// <summary>Get books for growing IQ</summary>
  public event Func<Nerd, IBook> InTheMoodForBook;
  /// <summary>Get recommendations for growing Radius</summary>
  public event Func<Nerd, ISweet> InTheMoodForTwink;
  public IList<Nerd> FitNerdsIntoPaddedRoom(
    IList<Nerd> nerds, IList<Point> boundary)
  {
    ...
  }
}

What’s wrong with this code? It mixes what’s called “school subjects.” Remember how you learned about different topics in different classes in school? It’s important to maintain that separation in the code—not because they’re entirely unrelated, but as an organizational effort. In general, don’t put any two of these items in the same class: mathematics, models, grammar, views, physical or platform adapters, customer-specific code, and so on.

You can see a general analogy to things you build in school with sculpture, wood and metal. They need measurements, analysis, instruction and so on. The previous example mixes math and model—FitNerdsIntoPaddedRoom doesn’t belong. That method could easily be moved to a utility class, even a static one. You shouldn’t have to instantiate models in your math test routines.
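
That move might look like this hypothetical static utility; the packing math gains no dependencies on books, sweets or events, so it can be tested with plain stub data (the helper and its geometry are placeholders):

```csharp
public static class NerdPacking
{
  // Pure math: no books, no sweets, no events.
  public static IList<Nerd> FitNerdsIntoPaddedRoom(
    IList<Nerd> nerds, IList<Point> boundary)
  {
    var fitted = new List<Nerd>();
    foreach (var nerd in nerds)
    {
      if (FitsInside(nerd, boundary))
        fitted.Add(nerd);
    }
    return fitted;
  }

  private static bool FitsInside(Nerd nerd, IList<Point> boundary)
  {
    // ... point-in-polygon test against nerd.Radius ...
    return true;
  }
}
```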

Here’s another multiple responsibilities example:

class AvatarBotPath
{
  public IReadOnlyList<ISegment> Segments { get; private set; }
  public double TargetVelocity { get; set; }
  public bool IsReverse { get { return TargetVelocity < 0; } }
  ...
}
public interface ISegment // Elsewhere
{
  Point Start { get; }
  Point End { get; }
  ...
}

What’s wrong here? Clearly there are two different abstractions represented by a single object. One of them relates to traversing a shape, the other represents the geometric shape itself. This is common in code. You have a representation and separate use-specific parameters that go with that representation.

Inheritance is your friend here. You can move the TargetVelocity and IsReverse properties to an inheritor and capture them in a concise IHasTravelInfo interface. Alternatively, you could add a general collection of features to the shape. Those needing velocity would then query the features collection to see if it’s defined on a particular shape. You could also use some other collection mechanism to pair representations with travel parameters.
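
A sketch of the inheritance route, with a hypothetical IHasTravelInfo interface capturing the traversal parameters:

```csharp
public interface IHasTravelInfo
{
  double TargetVelocity { get; }
  bool IsReverse { get; }
}

// The geometric path keeps only its shape.
public class AvatarBotPath
{
  public IReadOnlyList<ISegment> Segments { get; protected set; }
}

// An inheritor carries the traversal parameters.
public class TraversablePath : AvatarBotPath, IHasTravelInfo
{
  public double TargetVelocity { get; set; }
  public bool IsReverse { get { return TargetVelocity < 0; } }
}
```

Consumers that need velocity then test for IHasTravelInfo rather than assuming every path has one.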

The Open Closed Principle

That brings us to the next principle: open for extension, closed for modification. How is it done? Preferably not like this:

void DrawNerd(Nerd nerd) {
  if (nerd.IsSelected)
    DrawEllipseAroundNerd(nerd.Position, nerd.Radius);
  if (nerd.Image != null)
    DrawImageOfNerd(nerd.Image, nerd.Position, nerd.Heading);
  if (nerd is IHasBelt) // a rare occurrence
    DrawBelt(((IHasBelt)nerd).Belt);
  // Etc.
}

What’s wrong here? Well, you’ll have to modify this method every time a customer needs new things displayed—and they always need new things displayed. Nearly every new software feature requires some sort of UI element. After all, it was the lack of something in the existing interface that prompted the new feature request. The pattern displayed in this method is a good clue, but you can move those if statements into the methods they guard and it won’t make the problem go away.

You need a better plan, but how? What will it look like? Well, you have some code that knows how to draw certain things. That’s fine. You just need a general procedure for matching those things with the code to draw them. It will essentially come down to a pattern like this:

readonly IList<IRenderer> _renderers = new List<IRenderer>();
void Draw(Nerd nerd)
{
  foreach (var renderer in _renderers)
    renderer.DrawIfPossible(_context, nerd);
}

There are other ways to add to the list of renderers. The point of the code, however, is to write drawing classes (or classes about drawing classes) that implement a well-known interface. The renderer must have the smarts to determine if it can or should draw anything based on its input. For example, the belt-drawing code can move to its own “belt renderer” that checks for the interface and proceeds if necessary.
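
The belt renderer described above might be sketched like this; the IRenderer interface, the Belt type and the drawing context are assumptions for illustration:

```csharp
public interface IRenderer
{
  // Returns true if this renderer handled the nerd.
  bool DrawIfPossible(DrawingContext context, Nerd nerd);
}

public class BeltRenderer : IRenderer
{
  public bool DrawIfPossible(DrawingContext context, Nerd nerd)
  {
    var belted = nerd as IHasBelt;
    if (belted == null)
      return false; // Not this renderer's job
    DrawBelt(context, belted.Belt);
    return true;
  }

  private void DrawBelt(DrawingContext context, Belt belt)
  {
    // ... actual belt drawing ...
  }
}
```

Adding belt support then means registering one new renderer; the Draw loop never changes.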

You might need to separate the CanDraw from the Draw method, but that won’t violate the Open Closed Principle, or OCP. The code using the renderers shouldn’t have to change if you add a new renderer. It’s that simple. You should also be able to add the new renderer in the correct order. While I’m using rendering as an example, this also applies to handling input, processing data and storing data. This principle has many applications through all types of software. The pattern is more difficult to emulate in Windows Presentation Foundation (WPF), but it’s possible. See Figure 1 for one possible option.

Figure 1 Example of Merging Windows Presentation Foundation Renderers into a Single Source

public abstract class RenderDefinition : ViewModelBase
{
  public abstract DataTemplate Template { get; }
  public abstract Style TemplateStyle { get; }
  public abstract bool SourceContains(object o); // For selectors
  public abstract IEnumerable Source { get; }
}
public void LoadItemsControlFromRenderers(
    ItemsControl control,
    IEnumerable<RenderDefinition> defs) {
  control.ItemTemplateSelector = new DefTemplateSelector(defs);
  control.ItemContainerStyleSelector = new DefStyleSelector(defs);
  var compositeCollection = new CompositeCollection();
  foreach (var renderDefinition in defs)
  {
    var container = new CollectionContainer
    {
      Collection = renderDefinition.Source
    };
    compositeCollection.Add(container);
  }
  control.ItemsSource = compositeCollection;
}

Here’s another foul example:

class Nerd
{
  public void WriteName(string name)
  {
    var pocketProtector = new PocketProtector();
    WriteNameOnPaper(pocketProtector.Pen, name);
  }
  private void WriteNameOnPaper(Pen pen, string text)
  {
    ...
  }
}

What’s wrong here? The problems with this code are vast and sundry. The main issue I want to point out is there’s no way to override creating the PocketProtector instance. Code like this makes it difficult to write inheritors. You have a few options for dealing with this scenario. You can change the code to:

  • Make the WriteName method virtual. That would also require you to make WriteNameOnPaper protected in order to meet the goal of instantiating a modified pocket protector.
  • Make the WriteNameOnPaper method public, but that will maintain the broken WriteName method on your inheritors. This isn’t a good option unless you get rid of WriteName, in which case the option devolves into passing an instance of PocketProtector into the method.
  • Add an additional protected virtual method whose sole purpose is to construct the PocketProtector.
  • Give the class a generic type T that’s a type of PocketProtector and construct it with some kind of object factory. Then you’ll have the same need to inject the object factory.
  • Pass an instance of PocketProtector to this class in its constructor or via a public property, instead of constructing it within the class.

The last option listed is generally the best plan, assuming you can reuse PocketProtector. The virtual creation method is also a good and easy option.
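
Combining those two recommended options, here’s a minimal sketch: constructor injection with a protected virtual creation method as the fallback (assuming PocketProtector is safe to reuse):

```csharp
class Nerd
{
  private PocketProtector _protector;

  // Callers (and tests) may inject; null means "use the default."
  public Nerd(PocketProtector protector = null)
  {
    _protector = protector;
  }

  // Inheritors override this to supply a modified protector.
  protected virtual PocketProtector CreatePocketProtector()
  {
    return new PocketProtector();
  }

  public void WriteName(string name)
  {
    if (_protector == null)
      _protector = CreatePocketProtector(); // Deferred so overrides apply
    WriteNameOnPaper(_protector.Pen, name);
  }

  protected void WriteNameOnPaper(Pen pen, string text)
  {
    // ...
  }
}
```

Creation is deferred to first use rather than done in the constructor, because a virtual call inside a constructor would run before the inheritor’s constructor has finished.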

You should consider which methods to make virtual to accommodate the OCP. That decision is often left until the last minute: “I’ll make the methods virtual when I need to call them from an inheritor I don’t have at the moment.” Others may choose to make every method virtual, hoping that will allow extenders the ability to work around any oversight in the initial code.

Both approaches are wrong. They exemplify an inability to commit to an open interface. Having too many virtual methods limits your ability to change the code later. A lack of methods you can override limits the extensibility and reusability of the code. That limits its usefulness and lifespan.

Here’s another common example of OCP violations:

class Nerd
{
  public void DanceTheDisco()
  {
    if (this is ChildOfNerd)
            throw new CoordinationException("Can't");
    ...
  }
}
class ChildOfNerd : Nerd { ... }

What’s wrong here? The Nerd has a hard reference to its child type. That’s painful to see, and an unfortunately common mistake for junior developers. You can see it violates the OCP. You have to modify multiple classes to enhance or refactor ChildOfNerd.

Base classes should never directly reference their inheritors; when they do, behavior stops being consistent across inheritors. A great way to avoid this conflict is to put the inheritors of a class in separate projects. That way, the structure of the project reference tree disallows this unfortunate scenario.

This issue isn’t limited to parent-child relationships. It exists with peer classes as well. Suppose you have something like this:

class NerdsInAnArc
{
  public bool Intersects(NerdsInAnLine line)
  {
    ...
  }
  ...
}

Arcs and lines are typically peers in the object hierarchy. They shouldn’t know any non-inherited intimate details about each other, as those details are often needed for optimal intersection algorithms. Keep yourself free to modify one without having to change the other. This again brings up a single-responsibility violation. Are you storing arcs or analyzing them? Put analysis operations in their own utility class.

If you need this particular cross-peer ability, then you’ll need to introduce an appropriate interface. Follow this rule to avoid the cross-entity confusion: You should use the “is” keyword with an abstraction instead of a concrete class. You could potentially craft an IIntersectable or INerdsInAPattern interface for the example, although you’d likely still defer to some other intersection utility class for analyzing data exposed on that interface.
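
Such an interface might expose just enough geometry for a shared utility class to do the analysis; all of the names here are hypothetical:

```csharp
public interface INerdsInAPattern
{
  // Just enough geometry for analysis; no peer-specific details.
  IReadOnlyList<Point> SamplePoints(int count);
}

public static class NerdIntersection
{
  public static bool Intersects(
    INerdsInAPattern first, INerdsInAPattern second)
  {
    // ... generic test over the sampled geometry ...
    return false;
  }
}
```

NerdsInAnArc and NerdsInALine each implement the interface and never reference each other.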

The Liskov Substitution Principle

The Liskov Substitution Principle defines some guidelines for maintaining inheritor substitution. Passing an object’s inheritor in place of the base class shouldn’t break any existing functionality in the called method. You should be able to substitute all implementations of a given interface with each other.

C# doesn’t allow an overriding method to modify the return type or parameter types of the method it overrides (not even safely, such as returning an inheritor of the base method’s return type). It therefore won’t struggle with the most common substitution violations: covariance of method arguments (an overrider narrowing a parameter to an inheritor of the parent’s parameter type) and contravariance of return types (an overrider widening its return type to a base of the parent’s return type). Liskov permits the opposite directions, contravariant arguments and covariant returns, but C# forbids any signature change in an override. However, it’s common to try to work around this limitation:

class Nerd : Mammal {
  public double Diopter { get; protected set; }
  public Nerd(int vertebrae, double diopter)
    : base(vertebrae) { Diopter = diopter; }
  protected Nerd(Nerd toBeCloned)
    : base (toBeCloned) { Diopter = toBeCloned.Diopter; }
  // Would prefer to return Nerd instead:
  // public override Mammal Clone() { return new Nerd(this); }
  public new Nerd Clone() { return new Nerd(this); }
}

What’s wrong here? The behavior of the object changes when it’s called through an abstraction reference. The new Clone method isn’t virtual, so it isn’t the one executed when cloning through a Mammal reference. Method hiding with the new keyword is nominally a feature, but if you don’t control the base class, how can you guarantee proper execution?

C# has a few workable alternatives, although they’re still somewhat distasteful. You can define a generic interface (something like IComparable&lt;T&gt;) and implement it explicitly in every inheritor. However, you’ll still need a virtual method that does the actual cloning operation, so that the clone matches the derived type. C# does support the Liskov-friendly variances, covariant return types and contravariant method arguments, when binding method groups to delegates and events, but that won’t help you change the exposed interface through class inheritance.
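
That pattern might be sketched as follows; ICloneable&lt;T&gt; here is a hypothetical generic interface (not the framework’s non-generic ICloneable), and CloneCore is the single virtual that inheritors override:

```csharp
public interface ICloneable<T>
{
  T Clone();
}

class Mammal : ICloneable<Mammal>
{
  public int Vertebrae { get; protected set; }
  public Mammal(int vertebrae) { Vertebrae = vertebrae; }
  protected Mammal(Mammal toBeCloned) : this(toBeCloned.Vertebrae) { }

  public Mammal Clone() { return CloneCore(); }
  // The one virtual that inheritors override:
  protected virtual Mammal CloneCore() { return new Mammal(this); }
}

class Nerd : Mammal, ICloneable<Nerd>
{
  public double Diopter { get; protected set; }
  public Nerd(int vertebrae, double diopter)
    : base(vertebrae) { Diopter = diopter; }
  protected Nerd(Nerd toBeCloned)
    : base(toBeCloned) { Diopter = toBeCloned.Diopter; }

  Nerd ICloneable<Nerd>.Clone() { return (Nerd)CloneCore(); }
  protected override Mammal CloneCore() { return new Nerd(this); }
}
```

A Mammal reference now always dispatches to CloneCore, so cloning a Nerd through its base type still yields a Nerd.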

Judging from that code, you might think C# includes the return type in the signature the method resolver uses. It doesn’t: you can’t declare multiple methods with the same name and input types but different return types. Generic constraints are likewise not part of the signature. Figure 2 shows an example of plausible-looking code that won’t compile because the two extension methods collide.

Figure 2 An Ambiguous Method Signature

interface INerd {
  int Smartness { get; set; }
}
static class Program
{
  public static string RecallSomeDigitsOfPi<T>(
    this IList<T> nerdSmartnesses) where T : struct
  {
    var smartest = Convert.ToInt32(nerdSmartnesses.Max());
    return Math.PI.ToString("F" + Math.Min(14, smartest));
  }
  // CS0111: same name and parameter types as the method above;
  // the differing constraints don't differentiate the two
  public static string RecallSomeDigitsOfPi<T>(
    this IList<T> nerds) where T : INerd
  {
    var smartest = nerds.OrderByDescending(n => n.Smartness).First();
    return Math.PI.ToString("F" + Math.Min(14, smartest.Smartness));
  }
  static void Main(string[] args)
  {
    IList<int> list = new List<int> { 2, 3, 4 };
    var digits = list.RecallSomeDigitsOfPi();
    Console.WriteLine("Digits: " + digits);
  }
}

The code in Figure 3 shows how the ability to substitute might be broken. Consider your inheritors. One of them could modify the isMoonWalking field at random. If that were to happen, the base class runs the risk of missing a critical cleanup section. The isMoonWalking field should be private. If inheritors need to know, there should be a protected getter property that provides access, but not modification.

Figure 3 An Example of How the Ability to Substitute Might Be Broken

class GrooveControl: Control {
  protected bool isMoonWalking;
  protected override void OnMouseDown(MouseButtonEventArgs e) {
    isMoonWalking = CaptureMouse();
    base.OnMouseDown(e);
  }
  protected override void OnMouseUp(MouseButtonEventArgs e) {
    base.OnMouseUp(e);
    if (isMoonWalking) {
      ReleaseMouseCapture();
      isMoonWalking = false;
    }
  }
}

Wise and occasionally pedantic programmers will take this a step further. Seal the mouse handlers (or any other method that relies on or modifies private state) and let inheritors use events or other virtual methods that aren’t must-call methods. The pattern of requiring a base call is admissible, but not ideal. We’ve all forgotten to call expected base methods on occasion. Don’t allow inheritors to break the encapsulated state.
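
Applying those suggestions to Figure 3 might look like this sketch: the field becomes private with a protected read-only view, the mouse handlers are sealed, and inheritors get virtual extension points that require no base call:

```csharp
class GrooveControl : Control
{
  private bool _isMoonWalking; // Inheritors can look, but not touch

  protected bool IsMoonWalking { get { return _isMoonWalking; } }

  protected sealed override void OnMouseDown(MouseButtonEventArgs e)
  {
    _isMoonWalking = CaptureMouse();
    base.OnMouseDown(e);
    OnGrooveStarted(); // Safe extension point; no base call required
  }

  protected sealed override void OnMouseUp(MouseButtonEventArgs e)
  {
    base.OnMouseUp(e);
    if (_isMoonWalking)
    {
      ReleaseMouseCapture();
      _isMoonWalking = false;
    }
    OnGrooveStopped();
  }

  protected virtual void OnGrooveStarted() { }
  protected virtual void OnGrooveStopped() { }
}
```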

Liskov Substitution also requires inheritors to not throw new exception types (although inheritors of exceptions already thrown in the base class are fine). C# has no way to enforce this.

The Interface Segregation Principle

Each interface should have a specific purpose. You shouldn’t be forced to implement an interface when your object doesn’t share that purpose. By extrapolation, the larger the interface, the more likely it includes methods that not all implementers can achieve. That’s the essence of the Interface Segregation Principle. Consider an old and common interface pair from the Microsoft .NET Framework:

public interface ICollection<T> : IEnumerable<T> {
  void Add(T item);
  void Clear();
  bool Contains(T item);
  void CopyTo(T[] array, int arrayIndex);
  bool Remove(T item);
}
public interface IList<T> : ICollection<T> {
  T this[int index] { get; set; }
  int IndexOf(T item);
  void Insert(int index, T item);
  void RemoveAt(int index);
}

The interfaces are still somewhat useful, but there’s an implicit assumption that if you’re using these interfaces, you want to modify the collections. Oftentimes, whoever creates these data collections wants to prevent anyone from modifying the data. It’s actually very useful to separate interfaces into sources and consumers.

Many data stores would like to share a common, indexable non-writable interface. Consider data analysis or data searching software. They typically read in a large log file or database table for analysis. Modifying the data was never part of the agenda.

Admittedly, the IEnumerable interface was intended to be the minimal, read-only interface. With the addition of the LINQ extension methods, it has started to fulfill that destiny. Microsoft has also recognized the gap in indexable collection interfaces, and addressed it in the .NET Framework 4.5 with IReadOnlyList&lt;T&gt;, now implemented by many framework collections.

You’ll remember these beauties in the old ICollection interface:

public interface ICollection : IEnumerable {
  ...
  object SyncRoot { get; }
  bool IsSynchronized { get; }
  ...
}

In other words, before you can iterate the collection, you must first potentially lock on its SyncRoot. A number of inheritors even implemented those particular members explicitly, as if to hide their shame at having to implement them at all. In practice, the expectation in multi-threaded scenarios became that you lock on the collection itself everywhere you use it, rather than on the SyncRoot.

Most of you want to encapsulate your collections so they can be accessed in a thread-safe fashion. Instead of exposing the collection for foreach, encapsulate the multi-threaded data store and expose only a ForEach method that takes a delegate. Fortunately, newer collection classes, such as the concurrent collections in the .NET Framework 4 and the immutable collections available for the .NET Framework 4.5 (through NuGet), have eliminated much of this madness.
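
Such an encapsulation might be sketched like this hypothetical wrapper; consumers hand in a delegate and never see the lock or the underlying list:

```csharp
public class SynchronizedNerdStore
{
  private readonly object _gate = new object();
  private readonly List<Nerd> _nerds = new List<Nerd>();

  public void Add(Nerd nerd)
  {
    lock (_gate) { _nerds.Add(nerd); }
  }

  // No enumerator escapes; iteration happens inside the lock.
  public void ForEach(Action<Nerd> action)
  {
    lock (_gate)
    {
      foreach (var nerd in _nerds)
        action(nerd);
    }
  }
}
```

Because the callback runs inside the lock, it should stay short; iterating a snapshot copy is an alternative when it can’t.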

The .NET Stream abstraction shares the same faults of being way too large, including both readable and writable elements and synchronization flags. However, it does include properties to determine the writability: CanRead, CanWrite, CanSeek and so on. Compare if (stream.CanWrite) to if (stream is IWritableStream). For those of you creating streams that aren’t writable, the latter is certainly appreciated.

Now, look at the code in Figure 4.

Figure 4 An Example of Unnecessary Initialization and Cleanup

// Up a level in the project hierarchy
public interface INerdService {
  Type[] Dependencies { get; }
  void Initialize(IEnumerable<INerdService> dependencies);
  void Cleanup();
}
public class SocialIntroductionsService: INerdService
{
  public Type[] Dependencies { get { return Type.EmptyTypes; } }
  public void Initialize(IEnumerable<INerdService> dependencies)
  { ... }
  public void Cleanup() { ... }
  ...
}

What’s the problem here? Your service initialization and cleanup should come through one of the fantastic inversion of control (IoC) containers commonly available for the .NET Framework, instead of being reinvented. For this example’s sake, nobody cares about Initialize and Cleanup other than the service manager, container or bootstrapper: whatever code loads up these services. That’s the code that cares. You don’t want anyone else calling Cleanup prematurely. C# has a mechanism called explicit interface implementation to help with this. You can implement the service more cleanly like this:

public class SocialIntroductionsService: INerdService
{
  Type[] INerdService.Dependencies { 
    get { return Type.EmptyTypes; } }
  void INerdService.Initialize(IEnumerable<INerdService> dependencies)
  { ... }
  void INerdService.Cleanup() { ... }
  ...
}

Generally, you want to design your interfaces with some purpose other than pure abstraction of a single concrete class. This gives you the means to organize and extend. However, there are at least two notable exceptions.

First, interfaces tend to change less often than their concrete implementations. You can use this to your advantage. Put the interfaces in a separate assembly. Let the consumers reference only the interface assembly. It helps compilation speed. It helps you avoid putting properties on the interface that don’t belong (because inappropriate property types aren’t available with a proper project hierarchy). If corresponding abstractions and interfaces are in the same file, something has gone wrong. Interfaces fit in the project hierarchy as parents of their implementations and peers of the services (or abstractions of the services) that use them.

Second, by definition, interfaces don’t have any dependencies. Hence, they lend themselves to easy unit testing through object mocking/proxy frameworks. That brings me to the next and final principle.

The Dependency Inversion Principle

Dependency Inversion means to depend on abstractions instead of concrete types. There’s a lot of overlap between this principle and the others already discussed. Many of the previous examples include a failure to depend on abstractions.

In his book, “Domain-Driven Design” (Addison-Wesley Professional, 2003), Eric Evans outlines some object classifications that are useful in discussing Dependency Inversion. To summarize: it’s useful to classify each of your objects into one of three groups: values, entities or services.

Values refer to objects with no dependencies that are typically transient and immutable. They’re generally not abstracted and you can instantiate them at will. However, there’s nothing wrong with abstracting them, especially if you can get all the benefits of abstractions. Some values might grow into entities over time. Entities are your business Models and ViewModels. They’re built from value types and other entities. It’s useful to have abstractions for these items, especially if you have one ViewModel that represents several different variants of a Model or vice versa. Services are the classes that contain, organize, service and use the entities.

With this classification in mind, Dependency Inversion deals primarily with services and the objects that need them. Service-specific methods should always be captured in an interface. Wherever you need to access that service, you access it via the interface. Don’t use a concrete service type in your code anywhere other than where the service is constructed.

Services generally depend on other services. Some ViewModels depend on services, especially container and factory-type services. Therefore, services are generally difficult to instantiate for testing because you need the full service tree. Abstract their essence into an interface. Then all references to services should be made through that interface so they can be easily mocked up for testing purposes.

You can create abstractions at any level in the code. When you find yourself thinking, “Wow, it’s going to be painful for A to support B’s interface and B to support A’s interface,” that’s the perfect time to introduce a new abstraction in the middle. Make usable interfaces and rely on them.

The adapter and mediator patterns can help you conform to the preferred interface. It sounds like extra abstractions bring extra code, but generally that’s not true. Taking partial steps toward interoperability helps you organize code that would’ve had to exist for A and B to talk to each other anyway.

Years ago, I read that a developer should “always reuse code.” It seemed too simple at the time. I couldn’t believe such a simple mantra could penetrate the spaghetti all over my screen. Over time, though, I’ve learned. Look at the code here:  

private readonly IRamenContainer _ramenContainer; // A dependency
public bool Recharge()
{
  if (_ramenContainer != null)
  {
    var toBeConsumed = _ramenContainer.Prepare();
    return Consume(toBeConsumed);
  }
  return false;
}

Do you see any repeated code? There’s the double read of _ramenContainer. Technically, the just-in-time compiler might eliminate the second read with an optimization called common sub-expression elimination, but don’t count on it. Suppose the field weren’t readonly and you were running multi-threaded, with the field reads actually repeated: you’d run the risk that the field is set to null between the check and the use.

How do you fix this? Introduce a local reference above the if statement. This rearrangement requires you to add a new item at or above the outer scope. The principle is the same in your project organization! When you reuse code or abstractions, you eventually arrive at a useful scope in your project hierarchy. Let the dependencies drive the inter-project reference hierarchy.
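
The fixed method, assuming the field were not readonly and could be swapped by another thread, reads the field exactly once:

```csharp
public bool Recharge()
{
  var container = _ramenContainer; // Single read of the shared field
  if (container != null)
  {
    var toBeConsumed = container.Prepare();
    return Consume(toBeConsumed);
  }
  return false;
}
```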

Now, look at this code:

public IList<Nerd> RestoreNerds(string filename)
{
  if (File.Exists(filename))
  {
    var serializer = new XmlSerializer(typeof(List<Nerd>));
    using (var reader = new XmlTextReader(filename))
      return (List<Nerd>)serializer.Deserialize(reader);
  }
  return null;
}

Is it depending on abstractions?

No, it isn’t. It begins with a static reference to the file system. It's using a hardcoded deserializer with hardcoded type references. It expects exception handling to occur outside the class. This code is impossible to test without the accompanying storage code.

Typically, you would move this into two abstractions: one for the storage format and one for the storage medium. Some examples of storage formats include XML, JSON and Protobuf binary data. Storage mediums include direct files on a disk and databases. A third abstraction is also typical in this type of system: some kind of rarely changing memento representing the object to be stored.
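
Those abstractions might be sketched like this (all of the interface and class names are hypothetical):

```csharp
public interface INerdFormat // Storage format: XML, JSON, Protobuf, ...
{
  IList<Nerd> ReadNerds(Stream input);
  void WriteNerds(IList<Nerd> nerds, Stream output);
}

public interface INerdStore // Storage medium: disk file, database, ...
{
  bool Exists(string key);
  Stream OpenRead(string key);
}

public class NerdRepository
{
  private readonly INerdFormat _format;
  private readonly INerdStore _store;

  public NerdRepository(INerdFormat format, INerdStore store)
  {
    _format = format;
    _store = store;
  }

  public IList<Nerd> RestoreNerds(string key)
  {
    if (!_store.Exists(key))
      return null;
    using (var stream = _store.OpenRead(key))
      return _format.ReadNerds(stream);
  }
}
```

For testing, both abstractions can be mocked; a MemoryStream-backed store exercises the repository with no file system at all.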

Consider this example:

class MonsterCardCollection
{
  private readonly IMsSqlDatabase _storage;
  public MonsterCardCollection(IMsSqlDatabase storage)
  {
    _storage = storage;
  }
  ...
}

Can you see anything wrong with these dependencies? The clue is in the dependency name. It’s platform-specific. The service isn’t platform-specific (or at least it’s attempting to avoid a platform dependency by using an external storage engine). This is a situation where you need to employ the adapter pattern.

When dependencies are platform-specific, the dependents end up with their own platform-specific code. You can avoid this with one additional layer. That layer helps you organize the projects so the platform-specific implementation lives in its own project (with all its platform-specific references). Only the start-up application project then needs to reference that platform-specific project. Platform wrappers tend to be large; don’t duplicate them more than necessary.

Dependency Inversion brings together the entire set of principles discussed in this article. It uses clean, purposeful abstractions you can fill with concrete implementations that don’t break the underlying service state. That’s the goal.

Indeed, the SOLID principles are generally overlapping in their effects upon sustainable computer code. The vast world of intermediate (meaning easily decompiled) code is fantastic in its ability to reveal the full extent to which you may extend any object. A number of .NET library projects fade over time. That’s not because the idea was faulty; they just couldn’t safely extend into the unanticipated and varying needs of the future. Take pride in your code. Apply the SOLID principles and you’ll see your code’s lifespan increase.


Brannon B. King has worked as a full-time software developer for 12 years, eight of which have been spent deep in C# and the .NET Framework. His most recent work has been with Autonomous Solutions Inc. (ASI) near Logan, Utah (asirobots.com). ASI is unique in its ability to foster a contagious love of C#; the crew at ASI takes passion in fully utilizing the language and pushing the .NET Framework to its limits. Reach him at countprimes@gmail.com.

Thanks to the following technical experts for reviewing this article: Max Barfuss (ASI) and Brian Pepin (Microsoft)
Brian Pepin has been working as a software engineer at Microsoft Corporation since 1994, focusing mostly on developer APIs and tools. He has worked on Visual Basic, Java, .NET Framework, Windows Forms, WPF, Silverlight and the Windows 8 XAML designer in Visual Studio. Currently he works on the Xbox team focusing on the Xbox operating system components and enjoys spending free time in the Seattle area with his wife Danna and son Cole.
Max Barfuss is a software craftsman dedicated to the belief that good coding, design and communication habits are the things that distinguish great software engineers from the rest. He has sixteen years of software development experience, including eleven years in the land of .NET.