

Patterns in Practice

Design For Testability

Jeremy Miller


The Value Proposition
What Is Testability?
Isolate the Ugly Stuff
Using Fakes to Establish Boundary Conditions
The Gateway Pattern
Separate Deciding from Doing
Small Tests before Big Tests
The Big Picture

Don't adjust your dial; this is still a column about software design fundamentals. This time around I'd like to talk about testability, both as an important quality of your designs and as another tool you can use to create and think through designs. There is undeniable value in doing automated testing, but the question is whether the benefits of automated testing are worth the time and manpower investment. The benefits and costs of automated testing are greatly affected by design and architecture choices. In this column I will examine the design issues, patterns, and principles that can enhance testability.

This column does not assume that you will be using, or want to use, the design practice of Test Driven Development (TDD). I would hope that this column would be useful to anyone who is interested in adding more automated testing to their process or is just interested in software design in general.

The Excessive Test Setup Smell

In the ProcessWorkflowEvent sample, I'm trying to make the point that coupling to infrastructure concerns will increase the unit testing effort. There's a more general concern evident in this sample. Excessive work to set up a unit test is a "code smell." Excessive unit test setup is often caused by harmful coupling or a lack of cohesion. Many times you will be better served to refactor the code in question before writing a unit test with a great deal of setup.

The Value Proposition

Developers write code, but the real mission is to ship software. The effort to ship software is the sum of the time spent designing, coding, debugging, and testing the software. Lean programming has taught that you shouldn't be doing local optimization. Instead of only worrying about the coding effort, keep your eye on the total effort. The value proposition of automated testing is to reduce the total time of coding + debugging + testing. Your team will come out ahead as long as automated testing is saving more time in debugging and testing than it's costing in extra coding time.

In the United States, we say that the only certainties in life are death and taxes. Well, add another one. There will certainly be bugs in the code you and your team just wrote (and mine, too). Maybe it's because of a misunderstood requirement, a genuine programming mistake, an unforeseen edge case, or a third-party API that doesn't quite do what you thought it would do. Regardless of how those bugs got into your code, you've got to find those problems and get rid of them.

In the end, testability is all about creating rapid and effective feedback cycles in your development process to find problems in your code. It's an axiom in software development that problems are cheaper to fix the earlier they are detected. Since detecting and fixing errors is such a major part of the development effort, it's worth our time as developers to invest in feedback mechanisms to find errors more quickly. When comparing the relative ease of developing in one codebase compared to another, I would go so far as to say that the single most important factor is how short the feedback cycle is between writing a little code and getting some sort of feedback about whether the code you just wrote is working correctly.

What Is Testability?

I'm going to define testability as the quality of a software design that allows for automated testing in a cost-effective manner. The end goal of testability is to create rapid feedback cycles in your development process in order to find and eliminate flaws in your code. Here are a few aspects of testability:

Repeatable Automated tests must be repeatable. You must be able to measure an expected outcome for known inputs. You cannot do automated testing on systems that cannot be set up into a known state.

Easy to Write The tests should be mechanically easy to write. The equation is very simple: if it takes a lot of work to set up the inputs to an automated test for a unit of code, you aren't going to recoup your investment in writing that test.

Easy to Understand The tests should be readable and intention-revealing. Ideally, the automated tests that you write should also serve as a useful form of design and requirements documentation. In the real world, changing an existing codebase will at some point break existing automated tests. When that happens, intention-revealing tests help developers understand why a test is broken and how to fix it. Automated testing coverage should make teams more confident in their ability to change code, not afraid of breaking inscrutable tests.

Fast The automated tests should run quickly. Again, the goal is to make the overall development process faster by establishing rapid feedback cycles to find problems. Slow-running tests will drag down your productivity.

These four points are directly related to design decisions, specifically to being mindful in regard to the classic design qualities of separation of concerns, cohesion, and coupling—things that you generally want anyway. That said, there are certain design patterns, such as inversion of control and the various forms of separated presentation, that can greatly enhance testability.

Mocking Best Practices

Fake objects come in many different flavors. Most of the examples in this column use stubs—simple objects that provide pre-canned answers and return values. Another type of fake is a "mock" object that is used to record or validate the interaction between classes. Mocks are one of the most valuable—but confusing and misused—concepts in automated testing. Here are some best practices for getting the most out of mocks in your tests:

  • Do not mock or fake out calls to fine-grained or chatty interfaces like ADO.NET that require a lot of calls. The effort involved far outstrips the reward, and those tests are generally unreadable. Only mock coarse-grained interfaces like my IEmailGateway example.
  • Do not mock any interface or class that you do not completely understand. Be especially cautious about mocking interfaces from outside your codebase. It's often better to write your own adapter class or gateway around external APIs that expresses the functionality of that external API in terms of your own architecture.
  • If you find yourself repeating the same mocking setup across multiple tests, you may want to change your design to separate the code that forces the repetitive mocking setup.
  • Do use mock objects as placeholders for classes that don't yet exist, especially as a way to help determine what the public interface should be. Mock objects (or stubs) allow you to investigate the shape and signature of an API without having to build the real implementation first.
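As a sketch of the second point, suppose your application needs sales tax amounts from an external vendor API. Rather than mock the vendor's own interface, define a narrow interface in your application's terms and confine the vendor API to one adapter class. All of the names below are hypothetical:

```csharp
// Hypothetical external API you don't own and don't fully control
public class VendorTaxApi {
  public decimal Calculate(string region, string sku, decimal amount, bool b2b) {
    // stand-in for a call out to the vendor's system
    return amount * 0.08m;
  }
}

public class Order {
  public string Region { get; set; }
  public string Sku { get; set; }
  public decimal Amount { get; set; }
}

// Your own coarse-grained interface, in your application's terms.
// THIS is the seam you mock or stub in unit tests.
public interface ITaxService {
  decimal TaxFor(Order order);
}

// The adapter is the only class that knows about the vendor API;
// cover it once with a few focused integration tests
public class VendorTaxServiceAdapter : ITaxService {
  private readonly VendorTaxApi _api = new VendorTaxApi();

  public decimal TaxFor(Order order) {
    return _api.Calculate(order.Region, order.Sku, order.Amount, false);
  }
}
```

Classes that need tax amounts depend only on ITaxService, so a stub returning a canned value is all a unit test needs.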

For more about mocks and test doubles, see Mark Seemann's MSDN Magazine article "Unit Testing: Exploring the Continuum of Test Doubles".

As in all things, there are some trade-offs. Many of these patterns are controversial because they don't quite fit the current direction of Microsoft tooling, are relatively new, or seem to negate the usefulness of some traditional .NET approaches. I'd like to use the rest of this column to present some sample scenarios and design patterns, and try to explain why these patterns are considered so important by teams that value TDD and automated testing.

Isolate the Ugly Stuff

Five years ago I considered myself to be strong in object-oriented design. Then I worked on a couple of projects that used TDD and learned firsthand just how little I really knew. Before going into these projects, I had read quite a bit about writing unit tests with NUnit that showed examples like this:

public class HolyHandGrenadeTester {
  public void users_should_count_to_three_after_pulling_the_pin() {
    new HolyHandGrenade().TheNumberToCountTo().ShouldEqual(3);
  }
}

Armed with these examples, I went into my first TDD project and immediately ran into trouble. My C# code was interacting with databases, Web services, the ASP.NET runtime, and external systems. Those things were a lot harder to test than code that runs completely inside a single CLR AppDomain.

I specifically remember having a lot of trouble writing unit tests for a little custom workflow subsystem. The workflow data was fairly complex and hierarchical, so I decided at the time to store the state of the workflow as XML data in an Oracle CLOB field. The workflow code itself knew to pull out the XML data from the database, work on it, update the XML, then invoke more ADO.NET code to save the changes back to the database. My code looked like the contrived example shown in Figure 1.

Figure 1 Workflow Code

public void ProcessWorkflowEvent(
  string newStatus, long workflowId) {

  // First, go get the XML out of the database
  OracleConnection connection = 
    new OracleConnection(ProjectConfiguration.ConnectionString);
  OracleCommand command = connection.CreateCommand();
  command.CommandType = CommandType.StoredProcedure;
  command.CommandText = "sp_that_returns_the_workflow_information";
  // and set the necessary parameters

  connection.Open();
  command.ExecuteNonQuery();

  string xml = (string) command.Parameters["@xml"].Value;
  XmlDocument document = new XmlDocument();
  document.LoadXml(xml);

  // depending upon what the new status is, make changes
  // to the Xml document, decide whether or not to send an email,
  // then save the changed Xml document with another set
  // of ADO.NET calls
}


In many of the unit test samples, I'm using the extension methods from the SpecUnit library. Among other things, SpecUnit provides a series of extension methods that help make unit tests much more readable than the classic xUnit Assert.AreEqual(expected, actual) syntax.

Here's a sample from SpecUnit:

public static object ShouldEqual(
    this object actual, object expected) {
  Assert.AreEqual(expected, actual);
  return expected;
}

And this method would be used like this within unit tests:

someCalculatedValue.ShouldEqual(theExpectedValue);
SpecUnit is usable from any of the popular xUnit tools for .NET. My team is using SpecUnit plus many extension methods that are specific to our application to make unit tests a little bit easier to write and much easier to read.
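An application-specific extension method might look something like this. It is a hypothetical example in the spirit of our helpers, not actual SpecUnit code; WorkflowState is the workflow class used elsewhere in this column:

```csharp
// A hypothetical application-specific assertion extension
public static class WorkflowSpecificationExtensions {
  // Lets a test read as state.ShouldBeUrgent() instead of
  // Assert.AreEqual("Urgent", state.Priority)
  public static WorkflowState ShouldBeUrgent(this WorkflowState state) {
    if (state.Priority != "Urgent") {
      throw new ApplicationException(
        "Expected an Urgent workflow, but the priority was " + state.Priority);
    }
    return state;
  }
}
```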

Let's think about what you would have to do to write automated unit tests for this code. The only way to feed test input into this code was to create a properly formed XML document and then put that XML into a database row. There was a lot of business logic that would change the state of the XML, and therefore a lot of permutations of previous state and user input that I had to test.

As I said before, I struggled mightily when I tried to write unit tests for this code. My real issue was getting the business rules about the workflow state changes correct. The need to set up the XML data in the database was slowing me down. This is when I learned my first important lesson about testability: some things are just plain ugly to deal with in testing.

What I needed to do was isolate the ugly stuff. All I really mean by "ugly stuff" is any kind of code or infrastructure that is complicated or laborious or just plain inconvenient to get into a test harness, or that makes tests run very slowly. My partial list of ugly stuff would be:

  • Database access. As most TDD practitioners will tell you, tests that involve a database will run an order of magnitude slower than tests that can run completely within a single CLR AppDomain. Setting up data in a database is much more time intensive than simply building objects in memory because of referential integrity and data constraints. When you test against the database, you will often find yourself adding setup data that has nothing to do with the actual test to the database just to satisfy referential integrity and value constraints.
  • GUI technologies, such as Windows Presentation Foundation (WPF) or testing Web apps directly in the browser. It is possible to test the user interface itself, but it's a significant investment in time and the tests run even slower than database tests.
  • Active Directory access.
  • Web services. It's hard enough to set up a known state for testing your own code. Coordinating with a completely separate team to set up tests across two or more systems is more difficult. My advice is to make sure that the functionality of your Web services can be tested independent of the actual Web service protocols. For Web services external to your code, I strongly advise having your code depend on an abstracted interface for external Web services rather than use a Web service proxy class directly. This will enable you to replace the external Web service with your own stubbed implementation during internal testing.
  • Configuration files.

Now, all of those things are important, and all of them should be tested at some point in time. The key, though, is to decouple as much of your core application as possible away from this infrastructure to at least make your core application code easy to test. I should be able to test my data access in isolation once, then write simple tests for the business logic without any concern for the database.

Back to my sample workflow subsystem. At the time I was struggling to get the business logic correct, but the XML data setup and data access was slowing me down. What I really needed to do was walk right up to the business logic code and work with it entirely in memory with quick-running tests. The first thing I should have done was to isolate the business logic away from the infrastructure and XML:

public class WorkflowState {
  public DateTime LastChanged { get; set; }
  public string CurrentStatus { get; set; }
  public string Priority { get; set; }
  // Other properties

  public void ChangeStatus(string newStatus) {
    // WorkflowState will update itself based on
    // the current state of the WorkflowState object
    // and the newStatus
  }
}

Let's say that you have a business rule that says you should escalate the priority of the WorkflowState to Urgent if the customer is upset. Now the workflow business logic can be tested in isolation from the database infrastructure and the XML like this:

public class WorkflowStateTester {
  public void 
    the_priority_should_be_escalated_to_urgent_when_the_customer_is_angry() {
    WorkflowState state = new WorkflowState(){Priority = 
      "Low", CurrentStatus = "NotStarted"};

    // "CustomerIsAngry" stands in for whatever status value
    // triggers the escalation rule
    state.ChangeStatus("CustomerIsAngry");

    state.Priority.ShouldEqual("Urgent");
  }
}


The workflow business logic is easier to test, but what else did you get out of this change? You've made the business logic easier to write and understand because it's now independent from the data access infrastructure and the XML format. You now have a much greater ability to change the data access and the business logic independently.
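To make that concrete, the elided body of ChangeStatus might hold rules along these lines (the status values here are my invention for illustration):

```csharp
// Inside WorkflowState: a hypothetical sketch of the status rules.
// Pure in-memory logic like this needs no database to test.
public void ChangeStatus(string newStatus) {
  // the escalation rule: an upset customer makes the item Urgent
  if (newStatus == "CustomerIsAngry") {
    Priority = "Urgent";
  }

  CurrentStatus = newStatus;
  LastChanged = DateTime.Now;
}
```

Every permutation of previous state and input becomes a quick test that constructs a WorkflowState, calls ChangeStatus, and checks the properties.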

Testability is extremely important in areas of the code that have a lot of variability, require many permutations of input to adequately test, or are volatile in regard to changing requirements.

In the original ProcessWorkflowEvent method shown in Figure 1, the act of loading and persisting the XML document to and from the database is basically the same no matter what is happening with the workflow business logic. I can write a small handful of tests against that persistence code and be confident that it works. It took many more test scenarios to adequately test all the code paths through the business logic. I could have saved myself quite a bit of effort on the original project if I'd been able to unit test the business logic in complete isolation.

Using Fakes to Establish Boundary Conditions

You might have noticed that the second version of the workflow logic isn't complete. It still needs to interact with the database and possibly an e-mail server as well. I would rewrite the original ProcessWorkflowEvent method to something like the method on a new WorkflowService class shown in Figure 2. Instead of doing the data access and sending e-mail directly in WorkflowService, I'll have it delegate to some new services named IRepository and IEmailGateway that I'll explain very soon.

Figure 2 Processing Workflow Events

public class ChangeWorkflowStatusMessage {
  public string NewStatus { get; set; }
  public long WorkflowId { get; set; }
}

public class WorkflowService {
  private readonly IEmailGateway _emailGateway;
  private readonly IRepository _repository;

  public WorkflowService(IEmailGateway emailGateway, IRepository repository) {
    _emailGateway = emailGateway;
    _repository = repository;
  }

  public void ChangeStatus(ChangeWorkflowStatusMessage message) {
    // Fetch the correct WorkflowState object from the database 
    WorkflowState workflow = _repository.Find<WorkflowState>(message.WorkflowId);
    workflow.ChangeStatus(message.NewStatus);

    // email the owner of this workflow and tell them that something happened
    // if this is an Urgent item
    if (workflow.Priority == "Urgent") {
      // workflow.Owner is the user who is responsible for 
      // this workflow
      _emailGateway.SendChangeStatusEmailTo(workflow, workflow.Owner);
    }

    // Save any changes to the WorkflowState object
    _repository.Save(workflow);
  }
}

One of the prerequisites for automated testing is establishing known inputs to the test. To fully test the WorkflowService.ChangeStatus functionality, I need to set up test inputs that would work on WorkflowState objects that are and are not Urgent. I could set up the database for those test inputs on each test run, but there's an easier way. Let's just use a fake database that can be more easily controlled in testing scenarios by using the Repository pattern to mediate between the application's domain model and the data access services.

My team has consistently chosen to use a Repository pattern with an IRepository interface:

public interface IRepository {
  T Find<T>(long id) where T : Entity;
  void Delete<T>(T target);
  T[] Query<T>(Expression<System.Func<T, bool>> where);
  void Save<T>(T target);
}

The real implementation of Repository uses NHibernate and LINQ for NHibernate to transfer information between the Entity objects and the database tables. In unit testing scenarios, all you really care about is that an Entity was saved, deleted, or available from this IRepository interface because that's all that the rest of the application knows about. For testing, you can create a known database state by using an implementation of IRepository (see Figure 3) that simply stores objects in memory and uses LINQ-to-Objects instead of LINQ for NHibernate for querying.

Figure 3 Fake Database from IRepository

public class InMemoryRepository : IRepository {
  private readonly Cache<Type, object> _types;
  private MockUnitOfWork _lastUnitOfWork;

  public InMemoryRepository() {
    _types = new Cache<Type, object>(type => {
      Type listType = typeof(List<>).MakeGenericType(type);
      return Activator.CreateInstance(listType);
    });
  }

  private IList<T> listFor<T>() {
    return (IList<T>)_types.Get(typeof(T));
  }

  public T Find<T>(long id) where T : Entity {
    return listFor<T>().FirstOrDefault(t => t.Id == id);
  }

  public void Delete<T>(T target) {
    listFor<T>().Remove(target);
  }

  public T[] Query<T>(Expression<Func<T, bool>> where) {
    var query = from item in listFor<T>() select item;
    return query.Where(where.Compile()).ToArray();
  }

  public void Save<T>(T target) {
    if (!listFor<T>().Contains(target)) {
      listFor<T>().Add(target);
    }
  }
}

In unit tests, you construct objects that depend on IRepository with an instance of the InMemoryRepository that was set up with the desired data. Figure 4 shows a sample unit test that uses InMemoryRepository to simulate a database state.

Figure 4 Test that Simulates a Database State

public void find_the_user_by_user_name_and_password() {
  // Set up an InMemoryRepository with a User
  InMemoryRepository repository = new InMemoryRepository();
  User user = new User(){UserId = "jeremy"};
  user.ResetPassword("thePassword"); // the password is hashed
  repository.Save(user);

  // Construct a new instance of SecurityDataService using the
  // InMemoryRepository from above
  SecurityDataService service = new SecurityDataService(repository);
  service.Authenticate("jeremy", "wrong").ShouldBeFalse();
  service.Authenticate("jeremy", "thePassword").ShouldBeTrue();
  service.Authenticate("wrong", "thePassword").ShouldBeFalse();
  service.Authenticate("wrong", "and wrong").ShouldBeFalse();
}

The sample tests the functionality of the SecurityDataService class (see Figure 5). Internally, it needs to query an IRepository for the existence of a user with a username and password.

Figure 5 Class to Be Tested

public class SecurityDataService : ISecurityDataService {
  private readonly IRepository _repository;

  // The IRepository dependency is set up via Constructor Injection.
  // In our real application, both the real NHibernateRepository
  // and the SecurityDataService would be constructed by
  // an Inversion of Control container
  public SecurityDataService(IRepository repository) {
    _repository = repository;
  }

  public bool Authenticate(string username, string password) {
    return _repository.Query<User>(u => u.UserId == 
      username && u.Password == User.HashPassword(password)).Length > 0;
  }
}

In the real application, the SecurityDataService uses the NHibernate Repository object that accesses the database. But in testing you can happily substitute a fake Repository that can easily be controlled to more quickly simulate multiple test cases. You can only do this type of substitution with fake objects if the classes that use IRepository only depend on the declared interface of IRepository. If the classes that use IRepository depend even indirectly on the fact that the normal IRepository does NHibernate-specific things, you won't be able to cleanly substitute the fake object.

Fake objects, such as stubs or mocks, are a huge advantage for doing emergent or continuous design by allowing you to build, and even design, a system incrementally by standing in for services that you haven't yet created.

The Gateway Pattern

Almost every project I work on contains at least one feature that requires the application to send automated e-mail notifications to the user depending on some sort of action and state in the system. But I most definitely do not want my application code getting hung up with e-mail functionality.

I like to wrap up e-mail functionality by using the Gateway pattern. In the Gateway pattern, an object encapsulates access to an external system or resource. I want to pull every single piece of code it takes to send e-mails, including SMTP-type configuration, and hide it away from the rest of my code inside an IEmailGateway object, like this:

public class EmailMessage {
  public string Body { get; set; }
  public string[] To { get; set; }
}

public interface IEmailGateway {
  void SendChangeStatusEmailTo(WorkflowState workflow, User owner);
}

There are a lot of existing solutions for sending e-mails from .NET code, so I'm not really that concerned about the actual act of sending the e-mail, but I'm very concerned about whether my application code decided to send an e-mail, to whom, and what the message body was. What I need to test in the WorkflowService is the simple fact that it did or did not choose to send an e-mail based on the final status of the WorkflowState object.

Separate Deciding from Doing

One of the best things you can do in your code is to treat performing an action and deciding to take an action as two separate responsibilities. In the case of WorkflowService, it is only responsible for deciding to send the e-mail:

// email the owner of this workflow and tell them
// that something happened if this is an Urgent item
if (workflow.Priority == "Urgent") {
  // workflow.Owner is the user who is responsible for 
  // this workflow
  _emailGateway.SendChangeStatusEmailTo(workflow, workflow.Owner);
}

You could just let the code send the e-mail to the right people, but it's generally pretty annoying to set up fake e-mail accounts or run around and ask people if they just got an e-mail from your system. Instead, use a mock object in place of the IEmailGateway just to check whether WorkflowService decided to send the e-mail notification (see Figure 6 ).

Figure 6 Testing whether E-Mail Was Sent

public void send_an_email_if_the_WorkflowState_is_urgent() {
  // setting up the WorkflowState
  var theOwner = new User();
  var theWorkflowState = new WorkflowState{Priority = 
    "Urgent", Id = 5, Owner = theOwner};

  // emailGateway is a mock IEmailGateway (Rhino Mocks syntax here);
  // the repository is an InMemoryRepository seeded with theWorkflowState
  var emailGateway = MockRepository.GenerateMock<IEmailGateway>();
  var repository = new InMemoryRepository();
  repository.Save(theWorkflowState);
  var service = new WorkflowService(emailGateway, repository);

  // Exercise the ChangeStatus() method
  service.ChangeStatus(new ChangeWorkflowStatusMessage(){
    NewStatus = "Urgent", WorkflowId = 5});

  // Verify that the change status email was sent
  emailGateway.AssertWasCalled(x => 
    x.SendChangeStatusEmailTo(theWorkflowState, theOwner));
}

Separating the e-mail functionality into IEmailGateway made it easy to unit test the business logic, but did it give you anything else? Pulling all access to the actual e-mail sending code into a single class helps in the following ways:

  • It prevents duplication in the code. Without it, repetitive code is required to find configuration data and set up the System.Net.Mail.SmtpClient object every time an e-mail is sent.
  • It makes the code that needs to send the e-mails somewhat simpler. WorkflowService just needs to say: send this e-mail to this user.
  • You can change the e-mail infrastructure. What if you want to use a third-party tool instead of System.Net.Mail.SmtpClient for e-mailing? What if you want extra logging for each e-mail, want to implement a retry capability for failures, or want to queue up e-mails to send in a batch? If you consistently use the same Gateway pattern interface for access to sending e-mails, you can add more robust infrastructure later without changing the classes that depend on IEmailGateway.
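A concrete class behind the interface might look roughly like this. This is only a sketch using System.Net.Mail: IEmailGateway, WorkflowState, and User are the types from earlier in the column, the EmailAddress property on User is assumed for this example, and the real class would pull the host and credentials from configuration:

```csharp
using System.Net.Mail;

// Sketch of a concrete Gateway; all SMTP knowledge lives here
public class SmtpEmailGateway : IEmailGateway {
  private readonly string _host;

  // In the real class the host (and credentials) would come from
  // configuration, hidden from the rest of the application
  public SmtpEmailGateway(string host) {
    _host = host;
  }

  public void SendChangeStatusEmailTo(WorkflowState workflow, User owner) {
    // owner.EmailAddress is a hypothetical property for this sketch
    var message = new MailMessage(
      "workflow@yourcompany.com",
      owner.EmailAddress,
      "Workflow status changed",
      "Workflow " + workflow.Id + " is now " + workflow.CurrentStatus);

    new SmtpClient(_host).Send(message);
  }
}
```

Because WorkflowService sees only IEmailGateway, nothing in this class is exercised by the unit tests; a handful of integration tests against a test SMTP server covers it once.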

Resources for Increasing App Testability

Unit Testing with Mocks

Better Testing with Mocks

Tame Your Software Dependencies for More Flexible Apps

A Unit Testing Walkthrough with Visual Studio Team Test

Small Tests before Big Tests

You want fast and accurate feedback loops on your code to flush out problems quickly. Any automated test should tell you that there is a problem when something doesn't work as expected, but a small, focused test can tell you exactly where something is wrong. When a big end-to-end test fails, you have a lot of different factors to consider, which often makes debugging harder. You still need the big end-to-end test, but my strong advice is to structure your application in such a way that you can write and test granular units of code first to flush out problems quickly before you attempt to execute end-to-end tests.

In the WorkflowService example, before attempting to run the actual WorkflowService from end to end, I would do the following tests:

  1. Write the WorkflowState object and test all of its business logic with various permutations of state.
  2. Write a couple of tests to prove that I can correctly save and load a WorkflowState object from the database.
  3. Write some interaction tests against the WorkflowService to make sure that it is correctly coordinating the IEmailGateway, IRepository, and WorkflowState objects.
  4. Inside the concrete EmailGateway class, I would write unit tests to verify that it is forming the correct e-mail contents for a given WorkflowState and User object independent of sending the e-mail.
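Step 1 above, for example, is nothing more than a pile of small, state-based tests (the status values here are invented for illustration):

```csharp
public class WorkflowStatePermutationTester {
  // One tiny test per permutation of previous state and input
  public void an_angry_customer_escalates_a_low_priority_item() {
    var state = new WorkflowState { Priority = "Low", CurrentStatus = "NotStarted" };
    state.ChangeStatus("CustomerIsAngry");
    state.Priority.ShouldEqual("Urgent");
  }

  public void an_ordinary_status_change_leaves_the_priority_alone() {
    var state = new WorkflowState { Priority = "Low", CurrentStatus = "NotStarted" };
    state.ChangeStatus("Started");
    state.Priority.ShouldEqual("Low");
  }
}
```

When one of these fails, it points directly at the broken rule; when only the end-to-end test fails, the debugging hunt is much wider.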

Only once I am reasonably confident that every one of the pieces of WorkflowService functions independently will I try to execute a full test of WorkflowService that uses the real database and sends e-mails.

Smaller tests are cheaper to create, easier to understand, faster to run, and much simpler to debug. It sounds like more work to write a bunch of unit tests than a single integrated test, but I've frequently found it more efficient to test small units of code before attempting to integrate those units in a bigger test.

The Big Picture

I would say that my focus on testability over the past five years has had more impact on how I approach software design than anything else. Granted, there are times when I have to go out of my way in a design for no other purpose than to make the code testable. In the end, though, I think that testability goes hand in hand with the classical definition of a good design.

Carefully considering the question "How can I test this in isolation?" is yet another tool that will help you arrive at the classic design qualities of cohesion, coupling, and separation of concerns in your codebase.

Send your questions and comments to mmpatt@microsoft.com.

Jeremy Miller, a Microsoft MVP for C#, is also the author of the open-source StructureMap (structuremap.sourceforge.net) tool for Dependency Injection with .NET and the forthcoming StoryTeller (storyteller.tigris.org) tool for supercharged FIT testing in .NET.