2: Unit Testing: Testing the Inside

patterns & practices Developer Center


What drives Fabrikam's development process is the desire to quickly satisfy their customer's changing demands. As the more agile of the two companies in our story, Fabrikam's online users might see updates every couple of weeks or even days. Even in their more conventional projects, clients are invited to view new features at frequent intervals.

These rapid cycles please the customers, who like to see steady progress and bugs fixed quickly. They also greatly reduce the chances of delivering a product that doesn't quite meet the client's needs. At the same time, they improve the quality of the product.

At Contoso, the more traditional of the two, employees are suspicious of Fabrikam's rapid cycles. They like to deliver a high quality product, and testing is taken very seriously. Most of their testing is done manually, exercising each user function by following scripts: click this button, enter text here, verify the display there. It can't be repeated every night. They find it difficult to see how Fabrikam can release updates so rapidly and yet test properly.

But they can, thanks to automation. Fabrikam creates lots of tests that are written in program code. They test as early as possible, on the development machine, while the code is being developed. They test not only the features that the user can see from the outside, but also the individual methods and classes inside the application. In other words, they do unit testing.

Unit tests are very effective against regression—that is, functions that used to work but have been disturbed by some faulty update. Fabrikam's style of incremental development means that any piece of application code is likely to be revisited quite often. This carries the risk of regression, which is the principal reason that unit tests are so popular in Fabrikam and agile shops like them.

This is not to deny the value of manual testing. Manual tests are the most effective way to find new bugs that are not regressions. But the message that Fabrikam would convey to their Contoso colleagues is that by increasing the proportion of coded tests, you can cycle more rapidly and serve your customers better.

For this reason, although we have lots of good things to say about manual testing with Visual Studio, we're going to begin by looking at unit testing. Unit tests are the most straightforward type of automated test; you can run them standalone on your desktop by using Visual Studio.


Unit testing does involve coding. If your own speciality is testing without writing code, you might feel inclined to skim the rest of this chapter and move rapidly on to the next. But please stick with us for a while, because this material will help you become more adept at what you do.

In this chapter

The topics for this chapter are:

  • Unit testing on the development machines.
  • Checking the application code and tests into the source code store.
  • The build service, which performs build verification tests to make sure that the integrated code in the store compiles and runs, and also produces installer files that can be used for system tests.



To run unit tests, you need Visual Studio on your desktop.

The conventions by which you write and run unit tests are determined by the unit testing framework that you use. MSTest comes with Visual Studio, and so our examples use that.

But if you already use another unit testing framework such as NUnit, Visual Studio 2012 will recognize and run those tests just as easily, through a uniform user interface. It will even run them alongside tests written for MSTest and other frameworks. (Visual Studio 2010 needs an add-in for frameworks other than MSTest, and the integration isn't as good as in Visual Studio 2012.)

Later in this chapter, we'll talk about checking tests and code into the source repository, and having the tests run on your project's build service. For that you need access to a team project in Visual Studio Team Foundation Server, which is installed as described in the Appendix.

Unit tests in Visual Studio

A unit test is a method that invokes methods in the system under test and verifies the results. A unit test is usually written by a developer, who ideally writes the test either shortly before or not long after the code under test is written.

To create an MSTest unit test in Visual Studio, pull down the Test menu, choose New Test, and follow the wizard. This creates a test project (unless you already had one) and some skeleton test code. You can then edit the code to add tests:

[TestClass]
public class RooterTests
{
    [TestMethod] // This attribute identifies the method as a unit test.
    public void SignatureTest()
    {
        // Arrange: Create an instance to test:
        var rooter = new Rooter();

        // Act: Run the method under test:
        double result = rooter.SquareRoot(0.0);

        // Assert: Verify the result:
        Assert.AreEqual(0.0, result);
    }
}

Each test is represented by one test method. You can add as many test methods and classes as you like, and call them what you like. Each method that has a [TestMethod] attribute will be called by the unit test framework. You can of course include other methods that are called by test methods.

If a unit test finds a failure, it throws an exception that is logged by the unit test framework.

Running unit tests

You can run unit tests directly from Visual Studio, and they will (by default) run on your desktop computer. (More information can be found in the MSDN topic Running Unit Tests with Unit Test Explorer.) Press CTRL+R, A to build the solution and run the unit tests. The results are displayed in the Test Explorer window in Visual Studio:


Unit test results

(The user interface is different in Visual Studio 2010, but the principles are the same.)

If a test fails, you can click the test result to see more detail. You can also run it again in debug mode.

The objective is to get all the tests to pass so that they all show green check marks.

When you have finished your changes, you check in both the application code and the unit tests. This means that everyone gets a copy of all the unit tests. Whenever you work on the application code, you run the relevant unit tests, whether they were written by you or your colleagues.

The checked-in unit tests are also run on a regular basis by the build verification service. If any test fails, the service raises the alarm by sending email notifications.

Debugging unit tests

When you use Run All, the tests run without the debugger. This is preferable because the tests run more quickly that way, and you don't want passing tests to slow you down.

However, when a test fails, you might choose Debug Selected Tests. Don't forget that the tests might run in any order.


Running or debugging tests

Test-first development

Writing unit tests before you write the code—test-first development—is recommended by most developers who have seriously tried it. Writing the tests for a method or interface makes you think through exactly what you want it to do. It also helps you discuss with your colleagues what is required of this particular unit. Think of it as discussing samples of how your code will be used.

For example, Mort, a developer at Fabrikam, has taken on the task of writing a method deep in the implementation of an ice-cream vending website. This method is a utility, likely to be called from several other parts of the application. Julia and Lars will be writing some of those other components. They don't care very much how Mort's method works, so long as it produces the right results in a reasonable time.


Mort wants to make sure that he understands how people want to use his component, so he writes a little example and circulates it for comment. He reasons that although they aren't interested in his code, they do want to know how to call it. The example takes the form of a test method. His idea is that the test forms a precise way to discuss exactly what's needed.

Julia and Lars come back with some comments. Mort adjusts his test, and writes another to demonstrate different aspects of the behavior they expect.

Julia and Lars can be confident that they know what the method will do, and can get on with writing their own code. In a sense, the tests form a contract between the writer and users of a component.


Think of unit tests as examples of how the method you're about to write will be used.

Mort frequently writes tests before he writes a piece of code, even if his own code is the only user. He finds it helps him get clear in his mind what is needed.

Limitations of test-first development?

Test-first development is very effective in a wide variety of cases, particularly APIs and workflow elements where there's a clear input and output. But it can feel less practical in other cases; for example, to check the exact text of error reports. Are you really going to write an assertion like:

Assert.AreEqual("Error 1234: Illegal flavor selected.", errorMessage);

Baseline tests

A common strategy in this situation is to use the baseline test. For a baseline test, you write a test that logs the output of your application to a file. After the first run, you verify manually to see that it looks right. For subsequent runs, you write test code that compares the new output to the old log, and fails if anything has changed. It sounds straightforward, but typically you have to write a filter that allows for changes like time of day and so on.

Many times, a failure occurs just because of some innocuous change, and you get used to looking over the output, deciding there's no problem, and resetting the baseline file. Then on the sixth time it happens, you miss the crucial thing that's actually a bug; and from that point onwards, you have a buggy baseline. Use baseline tests with caution.
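As a rough sketch, a baseline test might look like the following. The ReportGenerator class, the baseline file path, and the timestamp filter are illustrative assumptions, not part of the example application:

```csharp
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ReportBaselineTests
{
    [TestMethod]
    public void ReportMatchesBaseline()
    {
        // Produce the current output:
        string actual = new ReportGenerator().Render();

        // Filter out values that legitimately vary, such as timestamps:
        actual = Regex.Replace(
            actual, @"\d{4}-\d{2}-\d{2} \d{2}:\d{2}", "<TIME>");

        string baselineFile = @"Baselines\report.txt";
        if (!File.Exists(baselineFile))
        {
            // First run: record the output, then inspect it by hand.
            File.WriteAllText(baselineFile, actual);
            Assert.Inconclusive("Baseline created; verify it manually.");
        }

        // Subsequent runs: fail if anything has changed.
        Assert.AreEqual(File.ReadAllText(baselineFile), actual);
    }
}
```

Note that the filtering step is where most of the real work lies; every value that varies legitimately between runs needs its own rule.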

Tests verify facts about the application, not exact results

Keep in mind that a test doesn't have to verify the exact value of a result. Ask yourself what facts you know about the result. Write down these facts in the form of a test.

For example, let's say we're developing an encryption method. It's difficult to say exactly what the encrypted form of any message would be, so we can't in practice write a test like this:

string actualEncryption = Encrypt("hello");
string expectedEncryption = "JWOXV";
Assert.AreEqual(expectedEncryption, actualEncryption);

But wait. Here comes a tip:


Think of facts you know about the result you want to achieve. Write these as tests.

What we can do is verify a number of separate required properties of the result, such as:

// Encryption followed by decryption should return the original:
Assert.AreEqual (plaintext, Decrypt(Encrypt(plaintext)));

// In this cipher, the encrypted text is as long as the original:
Assert.AreEqual (plaintext.Length, Encrypt(plaintext).Length);

// In this cipher, no character is encrypted to itself:
for(int i = 0; i < plaintext.Length; i++)
   Assert.AreNotEqual(plaintext[i], Encrypt(plaintext)[i]);

Using assertions like these, you can write tests first after all.

How to use unit tests

In addition to test-first (or at least, test-soon) development, our recommendations are:

  • A development task isn't complete until all the unit tests pass.
  • Expect to spend about the same amount of time writing unit tests as writing the code. The effort is repaid with much more stable code, with fewer bugs and less rework.
  • A unit test represents a requirement on the unit you're testing. (We don't mean a requirement on the application as a whole here, just a requirement on this unit, which might be anything from an individual method to a substantial subsystem.)
    • Separate these requirements into individual clauses. For example:
      • Return value multiplied by itself must equal input AND
      • Must throw an exception if input is negative AND ….
    • Write a separate unit test for each clause, like the separate tests that Mort wrote in the story. That way, your set of tests is much more flexible, and easier to change when the requirements change.
  • Work on the code in such a way as to satisfy a small number of these separate requirements at a time.
  • Don't change or delete a unit test unless the corresponding requirement changes, or you find that the test does not correctly represent the intended requirement.
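For instance, the two clauses above might each become a test of their own. This is a sketch reusing the Rooter example from earlier in the chapter; the exception type is an assumption:

```csharp
[TestMethod]
public void ResultSquaredEqualsInput()
{
    var rooter = new Rooter();
    double result = rooter.SquareRoot(16.0);
    // Clause 1: return value multiplied by itself must equal input.
    Assert.AreEqual(16.0, result * result, 1e-9);
}

[TestMethod]
public void ThrowsOnNegativeArgument()
{
    var rooter = new Rooter();
    try
    {
        // Clause 2: must throw an exception if input is negative.
        rooter.SquareRoot(-4.0);
        Assert.Fail("No exception thrown for negative input");
    }
    catch (ArgumentOutOfRangeException)
    {
        // Expected.
    }
}
```

When the requirement about negative inputs changes, only the second test needs to be deleted or rewritten; the first survives untouched.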

Testing within a development task

The recommended cycle for a development task is therefore:

  1. Check code out of source control.

  2. Run the existing tests to make sure they pass. If you change the code and then find there are tests failing, you could spend a long time wondering what you did wrong.

  3. Delete any existing unit tests for requirements that are no longer valid.
    For example, suppose Mort's requirement changes so that negative inputs just return a result of zero. He deletes the test ThrowsOnNegativeArgument, but keeps the BasicRooterTest.

  4. Loop:

    1. Red: Write a new unit test and make sure it fails.

      Write a new unit test:

      - To test the new feature that you're about to implement.

      - To extend the range of data that you use to test. For example, test a range of numbers rather than just one.

      - To exercise code that has not previously been exercised. See the section on code coverage below.
      Run the test and make sure that it fails. This is a good practice that avoids the mistake of forgetting to put an assertion at the end of the test method. If it definitely fails, then you know you've actually achieved something when you eventually get it to pass.

    2. Green: Update your application to make the tests pass.

      Make sure that all the tests pass—not just the new ones.

    3. Refactor: Review the application code to make it easy to read and update.

      Review the code to make sure that it's easy to read and update, and performs well. Then run the tests again.

    4. Perform a code coverage check.

    } until most of the code is covered by tests, and all the requirements are tested, and all the tests pass.

Code coverage

It is important to know how much of your code is exercised by the unit tests. Code coverage tools give you a measure of what percentage of your code has been covered by a unit test run and can highlight in red any statements that have not been covered.

Low coverage means that some of the logic of the code has not been tested. High coverage does not necessarily imply that all combinations of input data will be correctly processed by your code; but it nevertheless indicates that the likelihood of correct processing is good.

Aim for about 80%.

To see code coverage results, go to the Unit Test menu and choose Analyze Code Coverage. After you run tests, you'll see a table that shows the percentage of the code that has been covered, with a breakdown of the coverage in each assembly and method.


Code coverage results

Choose the Coverage coloring button to see the most useful feature, which shows you which bits of the code you have not exercised. Consider writing more tests that will use those parts.

Check-in policies

The best way to keep the server builds clean and green is to avoid checking bad code into source control. Set check-in policies to remind you and your team colleagues to perform certain tasks before checking in. For example, the testing policy requires that a given set of unit tests have passed. In addition to the built-in policies, you can define your own and download policies from the web. For more information, see the MSDN topic Enhancing Code Quality with Team Project Check-in Policies.

Users can override policies when they check in code; however, they have to write a note explaining why, and the event shows up in a report.

To set a check-in policy, on the Team menu in Visual Studio, choose Team Project Settings, then Source Control, and click the Check-in Policy tab.


Add check-in policy

How to write good unit tests

A lot has been written about what makes a good unit test, and we don't have space to replicate it all here. If you're looking for more information, search the web for "unit test patterns."

However, there are some particularly useful tips.

Arrange – Act – Assert

The general form {Arrange; Act; Assert} is favored by many developers:

  • Arrange: Set up test data.
  • Act: Call the unit under test.
  • Assert: Compare the expected and actual results, and log the result of the comparison as a pass or fail.

For example:

[TestMethod]
public void TestSortByFlavor()
{
    // Arrange: Set up test data:
    var catalog = new IceCreamCatalog(Flavor.Oatmeal, Flavor.Cheese);

    // Act: Exercise the unit under test:
    catalog.SortByFlavor();

    // Assert: Verify and log the result:
    Assert.AreEqual(Flavor.Cheese, catalog.Items[0].Flavor);
}

Test one thing with each test

Don't be tempted to make one test method exercise more than one aspect of the unit's behavior. Doing so leads to tests that are difficult to read and update. It can also lead to confusion when you are interpreting a failure.

Keep in mind that the MSTest test framework does not by default guarantee a specific ordering for the tests, so you cannot transfer state from one test to another. However, in Visual Studio, you can use the Ordered Test feature to impose a specific sequence.
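One consequence is that each test should build its own fixture rather than rely on state left behind by a previous test. A sketch, again using the Rooter example:

```csharp
[TestClass]
public class IndependentTests
{
    // Avoid shared mutable state such as:
    //   static Rooter shared = new Rooter();
    // It silently couples the tests to a particular execution order.

    [TestMethod]
    public void FirstTest()
    {
        var rooter = new Rooter();   // Each test creates its own instance,
        Assert.AreEqual(2.0, rooter.SquareRoot(4.0), 1e-9);
    }

    [TestMethod]
    public void SecondTest()
    {
        var rooter = new Rooter();   // so the tests pass in any order.
        Assert.AreEqual(3.0, rooter.SquareRoot(9.0), 1e-9);
    }
}
```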

Use test classes to verify different behavioral areas of the code under test

Separate tests for different behavioral features into different test classes. This usually works well because you need different shared utility methods to test different features. If you want to share some methods between test classes, you can of course derive them from a common abstract class.

Each test class can have a TestInitialize and TestCleanup method. (These are the MSTest attributes; there are equivalents in other test frameworks, typically named Setup and Teardown.) Use the initialize or setup method to perform the common tasks that are always required to set up the starting conditions for a unit test, such as creating an object to test, opening the database or connections, or loading data. The cleanup or teardown method is always called, even if a test method fails; this is a valuable feature that saves you having to write all your test code inside try…finally blocks.
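A sketch of this pattern in MSTest follows; the IceCreamCatalog class is borrowed from the earlier example, and its details are assumed:

```csharp
[TestClass]
public class CatalogBehaviorTests
{
    private IceCreamCatalog catalog;   // Shared fixture for this test class.

    [TestInitialize]
    public void Setup()
    {
        // Runs before every test method in this class:
        catalog = new IceCreamCatalog(Flavor.Oatmeal, Flavor.Cheese);
    }

    [TestCleanup]
    public void Teardown()
    {
        // Runs after every test, even if the test failed.
        // Release connections, files, and other resources here:
        catalog = null;
    }

    [TestMethod]
    public void CatalogStartsWithTwoItems()
    {
        Assert.AreEqual(2, catalog.Items.Count);
    }
}
```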

Test exception handling

Test that the correct exceptions are thrown for invalid actions or inputs.

You could use the [ExpectedException] attribute, but be aware that a test with that attribute will pass no matter what statement in the test raises an exception.

A more reliable way to test for exceptions is shown here:

        [TestMethod, Timeout(2000)]
        public void TestMethod1()
        {   ...
            AssertThrows<InvalidOperationException>(delegate { ... });
        }

        internal static void AssertThrows<TException>(Action method)
                                    where TException : Exception
        {
            try
            {
                method.Invoke();
            }
            catch (TException)
            {
                return; // Expected exception.
            }
            catch (Exception ex)
            {
                Assert.Fail("Wrong exception thrown: " + ex.Message);
            }
            Assert.Fail("No exception thrown");
        }

A function similar to AssertThrows is built into many testing frameworks.

Don't only test one input value or state

By verifying that 2.0==MySquareRootFunction(4.0), you haven't truly verified that the function works for all values. The code coverage tool might show that all your code has been exercised, but it might still be the case that other inputs, or other starting states, or other sequences of inputs, give the wrong results.

Therefore, you should test a representative set of inputs, starting states, and sequences of action.

Look for boundary cases: those where there are special values or special relationships between the values. Test the boundary cases, and test representative values between the boundaries. For example, inputs of 0 and 1 might be considered boundary cases for a square root function, because there the input and output values are equal. So test, for example, -10, -1, -0.5, 0, 0.5, 1, and 10.

Test also across the range. If your function should work for inputs up to 4096, try 4095 and 4097.
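One simple pattern is to loop over a set of boundary and representative values in a single test. This is a sketch using the Rooter example; the tolerance expression is illustrative:

```csharp
[TestMethod]
public void SquareRootValueRange()
{
    var rooter = new Rooter();

    // Boundary cases (0 and 1) plus representative values
    // between and beyond the boundaries:
    double[] inputs = { 0.0, 0.5, 1.0, 10.0, 4095.0, 4096.0 };

    foreach (double input in inputs)
    {
        double result = rooter.SquareRoot(input);
        Assert.AreEqual(input, result * result,
            1e-6 * input + 1e-9,
            "Failed for input " + input);
    }
}
```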

The science of model-driven testing divides the space of inputs and states by these boundaries, and seeks to generate test data accordingly.

For objects more complex than a simple numeric function, you need to consider relationships between different states and values: for example, between a list and an index of the list.

Separate test data generation from verification

A postcondition is a Boolean expression that should always be true of the input and output values, or the starting and ending states of a method under test. To test a simple function, you could write:

public void TestValueRange()
{
    string startState;
    double inputValue;
    while (GenerateDataForThisMethod(out startState, out inputValue))
    {
        TestOneValue(startState, inputValue);
    }
}

// Parameterized test method:
public void TestOneValue(string startState, double inputValue)
{
    // Arrange - Set up the initial state:
    var objectUnderTest = new ObjectUnderTest(startState);

    // Act - Exercise the method under test:
    var outputValue = objectUnderTest.MethodUnderTest(inputValue);

    // Assert - Verify the outcome:
    Assert.IsTrue(
        PostConditionForThisMethod(startState, inputValue, outputValue));
}

// Verify the relationship between input and output values and states:
private bool PostConditionForThisMethod
          (string startState, double inputValue, double outputValue)
{
    // Accept any of a range of results within specific constraints:
    return startState.Length > 0 && startState[0] == '+'
                    ? outputValue > inputValue
                    : outputValue < inputValue;
}

To test an object that has internal state, the equivalent test would set up a starting state from the test data, and call the method under test, and then invoke the postcondition to compare the starting and ending states.

The advantages of this separation are:

  • The postcondition directly represents the actual requirement, and can be considered separately from issues of what data points to test.
  • The postcondition can accept a range of values; you don't have to specify a single right answer for each input.
  • The test data generator can be adjusted separately from the postcondition. The most important requirement on the data generator is that it should generate inputs (or states) that are distributed around the boundary values.

Pex generates test data

Take a look at Pex, which is an add-in for Visual Studio.

There's also a standalone online version.

Pex automatically generates test data that provides high code coverage. By inspecting the code under test, Pex generates interesting input-output values. You provide it with the parameterized version of the test method, and it generates test methods that invoke the test with different values.

(The website also talks about Moles, an add-in for Visual Studio 2010, which has been replaced by an integrated feature, fakes, for Visual Studio 2012.)

Isolation testing: fake implementations

If the component that you are developing depends on another component that someone else is developing at the same time, then you have the problem that you can't run your component until theirs is working.

A less serious issue is that even if the other component exists, its behavior can be variable depending on its internal state, which might depend on many other things; for example, a stock market price feed varies from one minute to the next. This makes it difficult to test your component for predictable results.

The solution is to isolate your component by replacing the dependency with a fake implementation. A fake simulates just enough of the real behavior to enable tests of your component to work. The terms stub, mock, and shim are sometimes used for particular kinds of fakes.

The principle is that you define an interface for the dependency. You write your component so that you pass it an instance of the interface at creation time. For example:

// This interface enables isolation from the stock feed:
public interface IStockFeed
{
    int GetSharePrice(string company);
}

// This is the unit under test:
public class StockAnalyzer
{
    private IStockFeed stockFeed;

    // Constructor takes a stockfeed:
    public StockAnalyzer(IStockFeed feed) { stockFeed = feed; }

    // Some methods that use the stock feed:
    public int GetContosoPrice() { ... stockFeed.GetSharePrice(...) ... }
}

By writing the component in this way, you make it possible to set it up with a fake implementation of the stock feed during testing, and a real implementation in the finished application. The key thing is that the fake and the real implementation both conform to the same interface. If you like interface diagrams, here you go:


Interface injection

This separation of one component from another is called "interface injection." It has the benefit of making your code more flexible by reducing the dependency of one component on another.

You could define FakeStockFeed as a class in the ordinary way:

// In test project.
class FakeStockFeed : IStockFeed
{
    public int GetSharePrice(string company) { return 1234; }
}

And then in your test, you'd set up your component with an instance of the fake:

[TestClass]
public class StockAnalysisTests
{
    [TestMethod]
    public void ContosoPriceTest()
    {
        // Arrange:
        var componentUnderTest = new StockAnalyzer(new FakeStockFeed());

        // Act:
        int actualResult = componentUnderTest.GetContosoPrice();

        // Assert:
        Assert.AreEqual(1234, actualResult);
    }
}

However, there's a neat mechanism called Microsoft Fakes that makes it easier to set up a fake, and reduces the clutter of the fake code.

Microsoft Fakes

If you're using MSTest in Visual Studio 2012, you can have the stub classes generated for you.

In Solution Explorer, expand the test project's references, and select the assembly for which you want to create stubs—in this example, the Stock Feed. You can select another project in your solution, or any referenced assembly, including system assemblies. On the shortcut menu, choose Add Fakes Assembly. Then rebuild the solution.

Now you can write a test like this:

[TestClass]
public class TestStockAnalyzer
{
    [TestMethod]
    public void TestContosoStockPrice()
    {
        // Arrange:
        // Create the fake stockFeed:
        IStockFeed stockFeed =
             new StockAnalysis.Fakes.StubIStockFeed() // Generated by Fakes.
             {
                 // Define each method:
                 // Name is original name + parameter types:
                 GetSharePriceString = (company) => { return 1234; }
             };

        // In the completed application, stockFeed would be a real one:
        var componentUnderTest = new StockAnalyzer(stockFeed);

        // Act:
        int actualValue = componentUnderTest.GetContosoPrice();

        // Assert:
        Assert.AreEqual(1234, actualValue);
    }
}

The special piece of magic here is the class StubIStockFeed. For every public type in the referenced assembly, the Microsoft Fakes mechanism generates a stub class. The type name is the same as the original type, with "Stub" as a prefix.

This generated class contains a delegate for each message defined in the interface. The delegate name is composed of the name of the method plus the names of the parameter types. Because it's a delegate, you can define the method inline. This avoids explicitly writing out the class in full.

Stubs are also generated for the getters and setters of properties, for events, and for generic methods. Unfortunately IntelliSense doesn't support you when you're typing the name of a delegate, so you will have to open the Fakes assembly in Object Browser in order to check the names.

You'll find that a .fakes file has been added to your project. You can edit it to specify the types for which you want to generate stubs. For more details, see Isolating Unit Test Methods with Microsoft Fakes.


Mocks

A mock is a fake with state. Instead of giving a fixed response to each method call, a mock can vary its responses under the control of the unit tests. It can also log the calls made by the component under test. For example:

[TestClass]
public class TestMyComponent
{
    [TestMethod]
    public void TestVariableContosoPrice()
    {
        // Arrange:
        int priceToReturn = 0;
        string companyCodeUsed = "";
        var componentUnderTest = new StockAnalyzer(
            new StockAnalysis.Fakes.StubIStockFeed()
            {
                GetSharePriceString = (company) =>
                {
                    // Log the parameter value:
                    companyCodeUsed = company;
                    // Return the value prescribed by this test:
                    return priceToReturn;
                }
            });
        priceToReturn = 345;

        // Act:
        int actualResult = componentUnderTest.GetContosoPrice();

        // Assert:
        Assert.AreEqual(priceToReturn, actualResult);
        Assert.AreEqual("CTSO", companyCodeUsed);
    }
}


Shims

Stubs work if you are able to design the code so that you can call it through an interface. This isn't always practical, especially when you are calling a platform method whose source you can't change.

For example, DateTime.Now is not accessible for us to modify. We'd like to fake it for test purposes, because the real one inconveniently returns a different value at every call. So we'll use a shim:

[TestClass]
public class TestClass1
{
    [TestMethod]
    public void TestCurrentYear()
    {
        using (ShimsContext.Create())
        {
            // Arrange:
            // Redirect DateTime.Now to return a fixed date:
            System.Fakes.ShimDateTime.NowGet = () =>
                { return new DateTime(2000, 1, 1); };

            var componentUnderTest = new MyComponent();

            // Act:
            int year = componentUnderTest.GetTheCurrentYear();

            // Assert:
            Assert.AreEqual(2000, year);
        }
    }
}

// Code under test:
public class MyComponent
{
    public int GetTheCurrentYear()
    {
        // During testing, this call will be redirected to the shim:
        DateTime now = DateTime.Now;
        return now.Year;
    }
}

What happens is that any call to the original method gets intercepted and redirected to the shim code.

Shims are set up in the same way as stubs. For example, to create a shim for DateTime, begin by selecting the reference to System in your test project, and choose Add Fakes Assembly.

Notice the ShimsContext: when it is disposed, any shims you created while it was active are removed.

Shim class names are made up by prefixing "Shim" to the original type name.

You might see an error stating that the Fakes namespace does not exist. This can happen because the fakes assembly is generated at build time: fix any other errors, rebuild, and the error will disappear.

The shim in this example modifies a static property. You can also create shims to intercept calls to constructors; and to methods or properties of all instances of a class, or to specific instances. See the MSDN topic Using shims to isolate calls to non-virtual functions in unit test methods.

Shims in Visual Studio 2010

In Visual Studio 2010, you have to obtain an add-in called Moles. The naming conventions are slightly different: the generated shim class has "M" instead of "Shim" as its prefix. Also, you must apply an attribute to the test:

[TestClass]
public class TestClass1
{
    [TestMethod]
    [HostType("Moles")]
    public void TestSetup()
    {
        // Redirect DateTime.Now to return a fixed date:
        MDateTime.NowGet = () => { return new DateTime(2000, 1, 1); };
    }
}

The Moles add-in injects patches into the code under test, intercepting calls to the methods that you specify. It generates a set of mirrors of the classes that are referenced by the code under test. To specify a particular target class, prefix its name with "M" like this: MDateTime or System.MFile. You can target classes in any assembly that is referenced from yours. Use IntelliSense to choose the method that you want to redirect. Property redirects have "Get" and "Set" appended to the property's name. Method redirects include the parameter type names; for example MDateTime.IsLeapYearInt32.

Just as with mocks, you can get your mole to log calls and control its response dynamically from the test code.


Moles in Visual Studio 2010 need rework to turn them into fakes for Visual Studio 2012.

Testing and debugging

When you run unit tests routinely, we recommend that you use the command that runs them without the debugger. Typically you expect the tests to pass, and debugging just slows things down.

If a test fails and you are using the Visual Studio built-in MSTest framework, select the failing tests in the Test Results window and use the Debug Checked Tests command to rerun them under the debugger.

If you are using another test framework, you can usually find add-ins that integrate the test framework with Visual Studio debugging.

Without such an add-in, to run the debugger with another testing framework, the default steps are:

  1. Set the test project as the solution's Startup project.
  2. In the test project's properties, on the Debug tab, set the Start Action to start your test runner, such as NUnit, and open it on the test project.
  3. When you want to run tests, start debugging from Visual Studio, and then select the tests in the test runner.


IntelliTrace

IntelliTrace keeps a log of the key events as your tests and code execute in debug mode, and also logs variable values at those events. You can step back through the history of execution before the test failed, inspecting the values that were logged.

To enable it, on the Visual Studio Debug menu, choose Options and Settings, IntelliTrace. You can also vary the settings to record more or less data.


Using IntelliTrace

IntelliTrace is particularly useful when the developer debugging the code was not present when the bug was discovered. When we come to discuss the build service, we will see how a test failure can automatically save the state of the environment at the failure, and attach the state to a bug work item. During manual testing, the tester can do the same at the touch of a button. IntelliTrace enables the developer to look not just at the state of the system at the instant of failure, but also at the states that led up to that point.

Coded UI tests

Unit tests typically work by calling methods in the interface of the code under test. However, if you have developed a user interface, a complete test must include pressing the buttons and verifying that the appropriate windows and content appear. Coded UI tests (CUITs) are automated tests that exercise the user interface. See the MSDN topic Testing the User Interface with Automated Coded UI Tests.

How to create and use coded UI tests

Create a coded UI test

To create a coded UI test, you have to create a Coded UI Test Project. In the New Project dialog, you'll find it under either Visual Basic\Test or Visual C#\Test. If you already have a Coded UI Test project, add a new Coded UI Test to it.

In the Generate Code dialog, choose Record Actions. Visual Studio is minimized and the Coded UI Test builder appears at the bottom right of your screen.

Choose the Record button, and start the application you want to test.


Recording a coded UI test

Perform a series of actions that you want to test. You can edit them later.

You can also use the Target button to create assertions about the states of the UI elements.

The Generate Code button turns your sequence of actions into unit test code. Once the code is generated, you can edit the sequence as much as you like. For example, you can delete anything you did accidentally.

Running coded UI tests

Coded UI tests run along with your other unit tests in exactly the same way. When you check in your source code, you should check in coded UI tests along with other unit tests, and they will run as part of your build verification tests.


Keep your fingers off the keyboard and mouse while a CUIT is playing. Sitting on your hands helps.

Edit and add assertions

Your actions have been turned into a series of statements. When you run this test, your actions will be replayed in simulation.

What's missing at this stage is assertions. But you can now add code to test the states of UI elements. You can use the Target button to create proxy objects that represent UI elements that you choose. Then you write code that uses the public methods of those objects to test the element's state.

Extend the basic procedure to use multiple values

You can edit the code so that the procedure you recorded will run repeatedly with different input values.

In the simplest case, you simply edit the code to insert a loop, and write a series of values into the code.

But you can also link the test to a separate table of values, which you can supply in a spreadsheet, XML file, or database. In a spreadsheet, for example, you provide a table in which each row is a set of data for each iteration of the loop. In each column, you provide values for a particular variable. The first row is a header in which the data names are identified:

In the Properties of the coded UI test, create a new Data Connection String. The connection string wizard lets you choose your source of data. Within the code, you can then write statements such as

var flavor = TestContext.DataRow["Flavor"].ToString();  
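As a sketch, a CSV-bound test might look like the following. The file name IceCream.csv and the Price column are hypothetical; the DataSource attribute is the MSTest mechanism for binding a test method to tabular data:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class IceCreamUITests
{
    // MSTest sets this property before each test runs:
    public TestContext TestContext { get; set; }

    // Hypothetical data file IceCream.csv, with a header row "Flavor,Price".
    [DeploymentItem("IceCream.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\IceCream.csv", "IceCream#csv",
                DataAccessMethod.Sequential)]
    [TestMethod]
    public void OrderEachFlavor()
    {
        // Each row of the table supplies one iteration of the test:
        var flavor = TestContext.DataRow["Flavor"].ToString();
        var price = TestContext.DataRow["Price"].ToString();

        // Replay the recorded UI actions using this row's values,
        // then assert on the resulting state of the UI.
    }
}
```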


Isolate

As with any unit tests, you can isolate the component or layer that you are testing—in this case, the user interface—by providing a fake business layer. This layer should simply log the calls and be able to change states so that your assertions can verify that the user interface passed the correct calls and displayed the state correctly.


Well-isolated unit tests
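A minimal sketch of such a fake layer, assuming a hypothetical IOrderService interface between the user interface and the business logic:

```csharp
using System.Collections.Generic;

// Hypothetical interface through which the UI calls the business logic.
public interface IOrderService
{
    void PlaceOrder(string flavor, int quantity);
    string Status { get; }
}

// Fake implementation for UI tests: it logs the calls it receives,
// and lets the test set the state that the UI should display.
public class FakeOrderService : IOrderService
{
    public List<string> Log = new List<string>();

    public void PlaceOrder(string flavor, int quantity)
    {
        Log.Add(string.Format("PlaceOrder({0}, {1})", flavor, quantity));
    }

    // The test sets this value, then asserts that the UI displays it:
    public string Status { get; set; }
}
```

The coded UI test launches the application configured with the fake, drives the UI, and then asserts both that Log contains the expected calls and that the UI displayed the Status value the test set.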

Test first?

You might think this isn't one of those cases where you can realistically write the tests before you write the code. After all, you have to create the user interface before you can record actions in the Coded UI Test Builder.

This is true to a certain extent, especially if the user interface responds quite dynamically to the state of the business logic. But nevertheless, you'll often find that you can record some actions on buttons that don't do much during your recording, and then write some assertions that will only work when the business logic is coupled up.

Coded UI tests: are they unit or system tests?

Coded UI tests are a very effective way of quickly writing a test. Strictly speaking, they are intended for two purposes: testing the UI by itself in isolation (with the business logic faked); and system testing your whole application (which we'll discuss in Chapter 5, "Automating System Tests").

But coded UI tests are such a fast way of creating tests that it's tempting to stretch their scope a bit. For example, suppose you're writing a little desktop application—maybe it accesses a database or the web. The business logic is driven directly from the user interface. Clearly, a quick way of creating tests for the business logic is to record coded UI tests for all the main features, while faking out external sources of variation such as the web or the database. And you might decide that your time is better spent doing that than writing the code for the business logic.

Cover your ears for a moment against the screams of the methodology consultants. What agonizes them is that if you were to test the business logic by clicking the buttons of the UI, you would be coupling the UI to the business logic and undoing all the good software engineering that kept them separate. If you were to change your UI, they argue, you would lose the unit tests of your business logic.

Furthermore, since coded UI tests can only realistically be created after the application is running, following this approach wouldn't allow you to follow the test-first strategy, which is very good for focusing your ideas and discussions about what the code should do.

For these reasons, we don't really recommend using coded UI tests as a substitute for proper unit tests of the business logic. We recommend thinking of the business logic as being driven by an API (that you could drive from another code component), and the UI as just one way of calling the operations of the API. And to write an API, it's a good idea to start by writing samples of calling sequences, which become some of your test methods.

But it's your call; if you're confident that your app is short-lived, small, and insignificant, then coded UI tests can be a great way to write some quick tests.


Coded UI test for application testing

Designing for coded UI tests

When you run a test, the CUIT engine has to find each control that your actions use. It does so by navigating the presentation tree, using the names of the UI elements. If the user interface is redesigned, the tests might not work because the elements cannot be found. Although the engine has some heuristics for finding moved elements, you can improve its chances of working.

  • In HTML, make sure every element has an ID.
  • In Windows presentation technologies, support Accessibility.
  • If you design a custom control, define a CUIT extension to help the recorder interpret user gestures (see the MSDN topic Enable Coded UI Testing of Your Custom Controls). For example, when you use a file selection control, the recorder does not record a sequence of mouse clicks, but instead records which file was selected. In the same way, you can define a recorder extension that encodes the user's intentions when using your control.

Maintaining coded UI tests

A drawback of CUITs is that they must be recreated whenever there are significant changes to the user interface definition. You can minimize the effort needed:

  • Make separate recordings, and thereby separate methods, for different forms or pages, and for groups of no more than a dozen actions.
  • If a change occurs, locate the affected methods and rerecord just those methods.
  • Use the CUIT Editor to update the code. It is also possible to edit the code directly, but the result is more reliable using the editor.

This is a brief overview of CUITs. For more information, see the MSDN topic How to: Edit a Coded UI Test Using the Coded UI Test Editor.

Continuous integration with build verification tests

Build verification tests are sometimes called the rolling build or the nightly build. On a regular or a continuous basis, the build service compiles and tests the software that has been checked into the source tree. If a test fails—or worse, if the source doesn't compile—then the service sends plaintive emails to everyone.

Source control helps team members avoid overwriting each other's work, and lets a team of people work on a single body of software. As soon as you have installed TFS and created a team project, you can use Visual Studio to create a source tree and add code to it, and assign permissions to other users.

But to make sure that the code does what is expected of it, you must set up regular builds. Typically, you will have more than one set up: a continuous ("rolling") build that runs most of the unit tests; and a nightly build that runs more extensive tests, including performance and load tests, and automated systems tests (which we will discuss in the next chapter).

If you're a test professional, you won't need us to tell you this, but just to confirm the point: The only way to deliver quality software on time is to never let code be checked in without good test coverage; to run the tests all the time during development; and never to let bugs go unfixed. Anyone who has been around a while knows of projects where they let a moraine of untested code be pushed back towards the end of the project, and knows the pain that caused.

So these are the rules about making changes to the source:

  • Check in all your code at least every week, and preferably more often. Plan your development tasks so that you can contribute a small but complete extension or improvement to the system at each check-in.
  • Before checking in a change, use the Get Latest Version command (on the shortcut menu of your project in Solution Explorer) to integrate updates that have been made by other team members while you were working on your changes. Rebuild everything and run the unit tests.
  • Use the Run All Impacted Tests command, which determines what tests are affected by changes that you have made or imported. (See Streamline Testing Process with Test Impact Analysis on MSDN.) Your changes can affect tests that you didn't write, and changes made by others can affect tests you have written.
    To use this command, you must initialize a baseline when you check out code. (See the MSDN topic How to: Identify the Test Impact of Code Changes During Development.)
  • Switch on Code Coverage and check that at least 80% of your code has been exercised by the tests. If not, use the coloring feature to find code that has not been used. Write more tests.
  • Do not check in your changes unless 80% coverage has been achieved and the tests all pass. Some instances of lower coverage are allowed. For example, where code is generated, it is sometimes reasonable to take coverage of one generated item as verification of another. But if you propose to make an exception, the proposal must be reviewed by a colleague with a skeptical personality.
    To enforce this rule, create a testing check-in policy.
  • If the rolling build breaks after you checked in code, you (and everyone else) will get a notification by email. If your changes might be the cause of the problem, undo your check-in. Before working on any other code, check in a fixed version. If it turns out that the fix will take some time, reopen the work item related to this change; you can no longer claim it is complete.
  • Don't go home immediately after checking in. You don't want to come back in the morning to find everyone's mailbox full of build failures.
    This doesn't apply if your team uses gated check-ins, where your code isn't actually integrated into the main body of the source until the tests pass on an auxiliary build server.

How to set up a build in Team Foundation Server

In Team Explorer, in the Builds window, choose New Build Definition.


New build definition

In the wizard, you can select how you want it to start. Typically, you'd choose a Scheduled build to run the biggest set of tests every night. Continuous integration runs a build for every check-in; rolling builds require fewer resources, running no more than one at a time. Gated builds are a special case: each check-in is built and tested on its own, and is only merged into the source tree if everything passes.


Check-in triggers

Create a drop folder on a suitable machine, and set sharing permissions so that anyone on your project can read it and the build service can write to it.

Specify this folder in the build definition:


Create a drop folder

Under Process, you can leave the default settings, though you might want to check them.


Automated test settings

Save the definition.

The build definition appears in Team Explorer. You can run any build on demand.

Code coverage

You'll want to make sure that code coverage analysis is included in the build's test settings so that you can see code coverage results in the build reports.


Code coverage analysis

Third-party build frameworks

You can get extensions that allow you to execute the build with third-party frameworks. For example, if you use JUnit, or if you perform builds using Ant or Maven, you can integrate them with Team Foundation Server. For more information, go to the MSDN Visual Studio Gallery and search for Team Foundation Server Build Extensions Power Tool.

Generate an installer from the build

Each build should generate an installer of the appropriate type—typically a Microsoft Windows Installer (setup) package, but it could also be, for example, a Visual Studio Extension (.vsix) or, for websites, a Microsoft Internet Information Services (IIS) deployment package. The type of installer you need varies with the type of project.

For a website, choose Publish from the shortcut menu of your Visual Studio project. The Publish Web Application wizard will walk you through generating the installer. The installer will be generated in the output folder of the project. The regular build on the build server will create an installer in the same way.


Creating a website installer

For a desktop application, you have to add an installer project to your solution. In the New Project dialog, look under Other Project Types, Setup. The details are in the MSDN page Windows Installer Deployment. Set the project properties to define what you want to include in the installer. When the project is built—either on your development machine or in the build server—the output folder of that project will contain a setup.exe and associated folders, which can be copied to wherever you want to deploy.

For a distributed system, you will need more than one setup project, and the automated build will need a deployment script. We'll discuss that in a later chapter.

In addition, there is a separate publication mechanism, ClickOnce Deployment. We'll discuss this more in Chapter 5, "Automating System Tests." There is additional information in the MSDN topic ClickOnce Security and Deployment.

Test the installer

Write a basic unit test to verify that an installer file was successfully generated. (The main unit tests aren't, of course, dependent on the installer; they work by directly running the .dll or .exe that was created by the compiler.)
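Such a test can be very small; this sketch simply checks that the setup project's output exists (the path is hypothetical and depends on your solution layout):

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InstallerTests
{
    // Hypothetical path to the setup project's output:
    const string InstallerPath = @"..\..\..\MyApp.Setup\Release\setup.exe";

    [TestMethod]
    public void InstallerWasGenerated()
    {
        // Fails the build verification run if no installer was produced:
        Assert.IsTrue(File.Exists(InstallerPath),
            "Expected an installer at: " + InstallerPath);
    }
}
```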

Why do we recommend you generate an installer from every build?

Firstly, there are many bugs that emerge when you deploy your system in an environment that is not the build machine. In the next chapter, we're going to recommend that system testing should always begin by installing the system on a clean machine.

Secondly, in good iterative software development practice, you should deliver something to your stakeholders at the end of each iteration. The installer is what you're delivering. When they install and run it, they should be able to work through the user stories that were planned at the start of the iteration.

Maybe we should add, for those with long memories, that generating installers is a lot easier in Visual Studio 2012 than it used to be back in the bad old days.

Monitoring the build

There's no point in having a regular build unless you know whether it passed or failed. There are four ways to find out:

  • Build Notifications tool on your desktop. Once you've pointed it at your server, it sits hidden in your taskbar and pops up a toast notification when a build completes. You can set it to show only builds initiated by you or your check-ins.
    After you check in some code, it's reassuring, a while later, to see the "build OK" flag appear in the corner of your screen.
    You'll find it on the Start menu under Microsoft Visual Studio > Team Foundation Server Tools. Before you can point it to a particular server, you must have connected to that server at least once using Team Explorer.
  • Email. You can get notifications sent to the team. This is particularly useful if a build fails, because you want to fix it as a matter of high priority. Unlike the build notification tool, you won't miss it if you're away from your desktop for a while.
    To set up email notifications, see the next section.
  • Build Explorer. Monitor the build by opening the Builds node in Team Explorer. You can also define a new build and start a build run from here.
  • Build Reports appear under the Reports node in Team Explorer. A particularly useful report is Build Quality Indicators. It tells you about test coverage in recent builds. Make sure the coverage remains high.
    If you can't see the Reports node, look again at the section about enabling reports in the previous chapter.

How to set up build failure or completion emails

You need to enable emails on the server, and then set up the details from a client machine.

On the Team Foundation Server machine

Open Team Foundation Server Administration console, and under your server machine name, select Application Tier.

Make sure that Service Account is set to an account that has permission on your domain to send email messages. If it isn't, use Change Account. Don't use your own account name, because you will change your password from time to time. (If you ever have to change the password on the service account, notice that there's an Update Password command. This doesn't change the password on that account; it changes the password that Team Foundation Server presents when it tries to use that account.)


Service account settings

Scroll down to Email Alert Settings and choose Alert Settings. At this point, you'll need the address of an SMTP server in your company's network. Get one from your domain administrator.


Email alert settings

On your desktop machine

From Team Explorer, choose Home, Settings, Project Alerts. Your team project site will open in your browser on your own alerts. To set the team alerts choose the Administer Server icon at the top right or the Advanced Alerts Management Page link.


Administer server

Choose Alerts, My Alerts, Build Alerts, and then New. In the dialog box, select an alert template to create an alert for A build fails. Add a new clause to the query: Requested For | Contains | [Me].


Build alerts

To set up build alerts in Visual Studio 2010

Install the Team Foundation Server Power Tools. On your development machine, go to the Visual Studio Gallery and find Team Foundation Server Power Tools. Close Visual Studio before installing.

Open Visual Studio and from the Team menu open Alerts Explorer. In the toolbar of the Alerts Explorer window, choose New Alert. Under Build Alerts, choose Failed Build. In the next dialog, provide your team's group email address.

Responding to build failure alarms

When you get an email notification, click the link near the top of the message:


Email notification

In the build summary, you'll see Associated Changesets. This is the list of changes that were made since the previous successful build. One of them almost certainly caused the failure. The person responsible for each changeset is also listed. Enquire of them what they propose to do.

If it's you, undo your recent change, fix the problem, and then check in again.

The builds check-in policy

You might be tempted to fix the problem and check in the fix without first undoing the change. Maybe you like working under pressure with alarms going off every few minutes as someone checks in another change. But it's not an advisable tactic, because while your bug is in the build, it masks any other problem. And of course a quick fix often turns out not to be as quick as you first thought, and can be unreliable. So if the build on the server fails, it's better to back out the associated changesets that are listed in the log, to get the build back to green. Then you can think about the issue calmly.

You can set a check-in policy that insists on this procedure. In Visual Studio on the Team menu, choose Team Project Settings, Source Control. (If you can't see that menu item, make sure that Team Explorer is connected to your project.) In the dialog, in the Check-in Policies tab, add the Builds policy. Then if anyone tries to check in code while a build is failing that uses that code, they get a discouraging message. Like other check-in policies, you can override it if you must. But the idea is that while the build is failing, you should fix it only by undoing recent changes.

The build report

You can obtain the results of the latest server builds through the Builds section of Team Explorer. The project portal website also includes reports of the build results. (See the MSDN topic Build Dashboard (Agile).)

In addition to the automatically generated test pass/fail results, the build report also shows a result that you can set manually in the log of each run. Typically, a member of the test or development team will regularly "scout" for a recent build that completed properly, that has good performance, and is suitable for further testing.

When you want to deploy your system for manual testing or for creating samples or demonstrations, look at the report of recent builds. The report includes links to the folder where you can find the built assemblies or installers.

Spreading the load of many unit tests

If you have more than a few hundred unit tests, you can spread the load of running them across multiple machines. (See the MSDN topic, Running Unit Tests on Multiple Machines Using a Test Controller and Test Agents.)

To spread the load, set up several machines that contain test agents, and a machine that has a test controller. These items can be found on the Visual Studio installation DVD. We will meet them again when we discuss setting up a lab environment. Configure the test agents to talk to the test controller. In Visual Studio, in the Test menu, open Manage Test Controller and select your controller.

When you run the unit tests, they will run on the machines that contain the test agents. The test controller will separate the tests into batches of 100, and run each batch on a different machine.


Summary

Unit testing is a crucial engineering practice, ensuring not only that the system being built compiles correctly, but also that the expected behavior can be validated in every build. Visual Studio 2012 provides the following capabilities for an application development lifecycle:

  • Unit testing is integrated into the development environment.
  • Code coverage reports help you test every code path.
  • Fakes allow you to isolate units, allowing parallel development of units.
  • Coded UI Tests create test code from recorded manual test sessions.
  • Integration with third-party unit testing frameworks.
  • IntelliTrace reports help you find out how a fault occurred.
  • Continuous integration build service.
  • Build reports and alerts show you anything that fails.
  • Automatic load spreading when there are many unit tests.

Differences between Visual Studio 2010 and Visual Studio 2012

In Visual Studio 2012:

  • Unit Test Runner. The interface for running unit tests is substantially changed from Visual Studio 2010. It isn't too difficult to find your way around either of them. However, you can write test methods using the MSTest framework (where tests are introduced with the [TestMethod] attribute) in exactly the same way. For more information, see the MSDN topic Running Unit Tests with Test Explorer.
  • Third-party test frameworks such as NUnit are supported. The tests from any framework appear in Unit Test Explorer, provided there is an adapter for it. Adapters for several popular frameworks are available, and you can write your own. In Visual Studio 2010, tests in other frameworks have to be run from their own user interface, although you can get add-ins to improve the integration. See the MSDN topic, How to: Install Third-Party Unit Test Frameworks.
  • Fakes (stubs and shims) are built-in features. For Visual Studio 2010, you have to get the Moles add-in, which is not compatible with Fakes. See the MSDN topic, Isolating Unit Test Methods with Microsoft Fakes.
  • C++ and native code tests can be created and run. They are not available in Visual Studio 2010. See Writing Unit tests for C/C++ with the Microsoft Unit Testing Framework for C++ on MSDN.
  • Unit test projects. There are separate project types for unit tests, coded UI tests, load tests, and so on. In Visual Studio 2010, there is just one type into which you can put different types of test. See the MSDN topic, How to: Create a Unit Test Project.
  • Windows apps are supported with specialized unit testing features.
  • Compatibility. Unit tests and test projects created in Visual Studio 2010 will run on Visual Studio 2012. You can't use the Visual Studio 2010 Express edition for unit tests. See Upgrading Unit Tests from Visual Studio 2010 on MSDN.


Where to go for more information

There are a number of resources listed in text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away.


Last built: Aug 6, 2012