June 2011

Volume 26 Number 06

Agile C++ - Agile C++ Development and Testing with Visual Studio and TFS

By John Socha-Leialoha | June 2011

You’re a developer or tester working on an application built in Visual C++. As a developer, wouldn’t it be great if you could be more productive, produce higher-quality code, and rewrite your code as needed to improve the architecture without fear of breaking anything? And as a tester, wouldn’t you love to spend less time writing and maintaining tests so you have time for other test activities?

In this article, I’ll present a number of techniques that our team here at Microsoft has been using to build applications.

Our team is fairly small. We have 10 people working on three different projects at the same time. These projects are written in both C# and C++. The C++ code is mostly for the programs that must run inside Windows PE, which is a stripped-down version of Windows often used for OS installation. It’s also used as part of a Microsoft System Center Configuration Manager task sequence to run tasks that can’t run inside the full OS, such as capturing a hard disk to a virtual hard disk (VHD) file. That’s a lot for a small team, so we need to be productive.

Our team uses Visual Studio 2010 and Team Foundation Server (TFS) 2010. We use TFS 2010 for version control, work tracking, continuous integration, and code-coverage gathering and reporting.

When and Why Our Team Writes Tests

I’ll start by looking at why our team writes tests (the answers might be different for your team). The specific answer is a little different for our developers and testers, but perhaps not as different as you might at first think. Here are my goals as a developer:

  • No build breaks
  • No regressions
  • Refactor with confidence
  • Modify the architecture with confidence
  • Drive design through test-driven development (TDD)

Of course, quality is the big “why” behind these goals. When these goals are met, life as a developer is a lot more productive and fun than when they’re not.

For our testers, I’m going to focus on just one aspect of an Agile tester: writing automated tests. The goals for our testers when they write automated tests include no regressions, acceptance-driven development, and gathering and reporting code coverage.

Of course, our testers do much more than just write automated tests. Our testers are responsible for code-coverage gathering because we want code-coverage numbers to include the results from all tests, not just unit tests (more on this later).

In this article, I’m going to cover the different tools and techniques our team uses to achieve the goals stated here.

Eliminating Build Breaks with Gated Check-Ins

In the past, our team used branches to ensure that our testers always had a stable build to test. However, there is overhead associated with maintaining the branches. Now that we have gated check-ins, we only use branching for releases, which is a nice change.

Using gated check-ins requires that you’ve set up build control and one or more build agents. I’m not going to cover this topic here, but you can find details on the MSDN Library page, “Administering Team Foundation Build,” at bit.ly/jzA8Ff.

Once you have build agents set up and running, you can create a new build definition for gated check-ins by following these steps from within Visual Studio:

  1. Click View in the menu bar and click Team Explorer to ensure the Team Explorer tool window is visible.

  2. Expand your team project and right-click Build.

  3. Click New Build Definition.

  4. Click Trigger on the left and select Gated Check-in, as shown in Figure 1.

    Figure 1 Select the Gated Check-in Option for Your New Build Definition

  5. Click Build Defaults and select the build controller.

  6. Click Process and select the items to build.

Once you’ve saved this build definition—we called ours “Gated Checkin”—you’ll see a new dialog box after you submit your check-in (see Figure 2). Clicking Build Changes creates a shelveset and submits it to the build server. If there are no build errors and all the unit tests pass, TFS will check in your changes for you. Otherwise, it rejects the check-in.

Figure 2 Gated Check-in Dialog Box

Gated check-ins are really nice because they ensure you never have build breaks. They also ensure that all unit tests pass. It’s all too easy for a developer to forget to run all the tests before check-in. But with gated check-ins, that’s a thing of the past.

Writing C++ Unit Tests

Now that you know how to run your unit tests as part of a gated check-in, let’s look at one way you can write these unit tests for native C++ code.

I’m a big fan of TDD for several reasons. It helps me focus on behavior, which keeps my designs simpler. I also have a safety net in the form of tests that define the behavioral contract. I can refactor without fear of introducing bugs that are a result of accidentally violating the behavioral contract. And I know that some other developer won’t break required behavior they didn’t know about.

One of the developers on the team had a way to use the built-in test runner (mstest) to test C++ code. He was writing Microsoft .NET Framework unit tests using C++/CLI that called public functions exposed by a native C++ DLL. What I present in this section takes that approach much further, allowing you to directly instantiate native C++ classes that are internal to your production code. In other words, you can test more than just the public interface.

The solution is to put the production code into a static library that can be linked into the unit test DLLs as well as into the production EXE or DLL, as shown in Figure 3.

Figure 3 The Tests and Product Share the Same Code via a Static Library

Here are the steps required to set up your projects to follow this procedure. Start by creating the static library:

  1. In Visual Studio, click File, click New and click Project.
  2. Click Visual C++ in the Installed Templates list (you’ll need to expand Other Languages).
  3. Click Win32 Project in the list of project types.
  4. Enter the name of your project and click OK.
  5. Click Next, click Static library and then click Finish.

Now create the test DLL. Setting up a test project requires a few more steps. You need to create the project, but also give it access to the code and header files in the static library.

Start by right-clicking the solution in the Solution Explorer window. Click Add, then click New Project. Click Test under the Visual C++ node in the template list. Type the name of the project (our team adds UnitTests to the end of the project name) and click OK.

Right-click the new project in Solution Explorer and click Properties. Click Common Properties in the tree on the left. Click Add New Reference. Click the Projects tab, select the project with your static library and click OK to dismiss the Add Reference dialog.

Expand the Configuration Properties node in the tree on the left, then expand the C/C++ node. Click General under the C/C++ node. Click the Configuration combo box and select All Configurations to ensure you change both Debug and Release versions.

Click Additional Include Directories and enter a path to your static library, where you’ll need to substitute your static library name for MyStaticLib:

$(SolutionDir)\MyStaticLib;%(AdditionalIncludeDirectories)

Click the Common Language Runtime Support property in the same property list and change it to Common Language Runtime Support (/clr).

Click on the General section under Configuration Properties and change the TargetName property to $(ProjectName). By default, this is set to DefaultTest for all test projects, but it should be the name of your project. Click OK.

You’ll want to repeat the first part of this procedure to add the static library to your production EXE or DLL.
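
To make the sharing shown in Figure 3 concrete, here’s a minimal sketch of what the production executable side might look like once it links against the static library. MyClass here stands in for a class defined in the library (the next section defines just such a class); the point is that the product and the tests exercise exactly the same code:

#include "MyClass.h" // this header lives in the static library project

int main() {
  // The product uses the same class the unit tests exercise
  MyClass something;
  return something.SomeValue(0);
}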

Writing Your First Unit Test

You should now have everything you need in order to write a new unit test. Your test methods will be .NET methods written in C++, so the syntax will be a little different from native C++. If you know C#, you’ll find it’s a blend of C++ and C# syntax in many ways. For more details, check out the MSDN Library documentation, “Language Features for Targeting the CLR,” at bit.ly/iOKbR0.

Let’s say you have a class definition you’re going to test that looks something like this:

#pragma once
class MyClass {
  public:
    MyClass(void);
    ~MyClass(void);

    int SomeValue(int input);
};

Now you want to write a test for the SomeValue method to specify behavior for this method. Figure 4 shows what a simple unit test might look like, showing the entire .cpp file.

Figure 4 A Simple Unit Test

#include "stdafx.h"
#include "MyClass.h"
#include <memory>
using namespace System;
using namespace Microsoft::VisualStudio::TestTools::UnitTesting;

namespace MyCodeTests {
  [TestClass]
  public ref class MyClassFixture {
    public:
      [TestMethod]
      void ShouldReturnOne_WhenSomeValue_GivenZero() {
        // Arrange
        std::unique_ptr<MyClass> pSomething(new MyClass);
 
        // Act
        int actual = pSomething->SomeValue(0);
 
        // Assert
        Assert::AreEqual<int>(1, actual);
      }
  };
}

If you aren’t familiar with writing unit tests, I’m using a pattern known as Arrange, Act, Assert. The Arrange part sets up the preconditions for the scenario you want to test. Act is where you call the method you’re testing. Assert is where you check that the method behaved the way you want. I like to add a comment in front of each section for readability, and to make it easy to find the Act section.

Test methods are marked by the TestMethod attribute, as you can see in Figure 4. These methods, in turn, must be contained inside a class marked with the TestClass attribute.

Notice that the first line in the test method creates a new instance of the native C++ class. I like to use the unique_ptr standard C++ library class to ensure this instance is deleted automatically at the end of the test method. As you can see, you can mix native C++ with your C++/CLI .NET code. There are, of course, restrictions, which I’ll outline in the next section.

Again, if you haven’t written .NET tests before, the Assert class has a number of useful methods you can use to check different conditions. I like to use the generic version to be explicit about the data type I expect from the result.
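
For example, a test method might combine several of these checks. The following is only an illustrative sketch (the expected values and the method name are hypothetical, and the method belongs inside a class marked with TestClass, like the one in Figure 4), but each Assert overload shown here exists in the Microsoft::VisualStudio::TestTools::UnitTesting namespace:

[TestMethod]
void ShouldIllustrateAssertVariants() {
  std::unique_ptr<MyClass> pSomething(new MyClass);

  int actual = pSomething->SomeValue(0);

  Assert::AreEqual<int>(1, actual);                      // generic form makes the expected type explicit
  Assert::AreEqual<int>(1, actual, "SomeValue(0)");      // overload that adds a message to any failure
  Assert::IsTrue(actual > 0, "Should be positive");      // boolean check with an explanatory message
  Assert::IsFalse(actual < 0, "Should not be negative"); // the negated form
}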

Taking Full Advantage of C++/CLI Tests

As I mentioned, there are some limitations you’ll need to be aware of when you mix native C++ code with C++/CLI code. These limitations stem from the difference in memory management between the two environments. Native C++ uses the C++ new operator to allocate memory, and you’re responsible for freeing that memory yourself. Once you allocate a piece of memory, your data will always be in the same place.

On the other hand, pointers in C++/CLI code behave very differently because of the garbage-collection model C++/CLI inherits from the .NET Framework. You create new .NET objects in C++/CLI using the gcnew operator instead of the new operator; gcnew returns an object handle, not a pointer to the object. Handles are basically pointers to a pointer. When the garbage collector moves managed objects around in memory, it updates the handles with the new location.
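
Here’s a tiny side-by-side sketch of the two allocation models (illustrative only, reusing the MyClass type from earlier):

// Native allocation: new returns a raw pointer, and you must delete it yourself.
MyClass *pNative = new MyClass;
delete pNative;

// Managed allocation: gcnew returns a handle (^); the garbage collector owns
// the object, may move it in memory and eventually frees it.
System::String ^managed = gcnew System::String(L"hello");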

You have to be very careful when mixing managed and native pointers. I’ll cover some of these differences, and give you tips and tricks to get the most out of C++/CLI tests for native C++ objects.

Let’s say you have a method you want to test that returns a pointer to a string. In C++ you might represent the string pointer with LPCTSTR. But a .NET string is represented by String^ in C++/CLI. The caret after the class name signifies a handle to a managed object.

Here’s an example of how you might test the value of a string returned by a method call:

// Act
LPCTSTR actual = pSomething->GetString(1);
 
// Assert
Assert::AreEqual<String^>("Test", gcnew String(actual));

The last line contains all the details. There’s an AreEqual method that accepts managed strings, but there’s no corresponding method for native C++ strings. As a result, you need to use managed strings. The first parameter to the AreEqual method is a managed string, so it’s actually a Unicode string even though it’s not marked as a Unicode string using _T or L, for example.

The String class has a constructor that accepts a C++ string, so you can create a new managed string that will contain the actual value from the method you’re testing, at which point AreEqual ensures they’re the same value.

The Assert class has two methods that might look very attractive: IsNull and IsNotNull. However, the parameter for these methods is a handle, not an object pointer, which means you can only use them with managed objects. Instead, you can use the IsTrue method, like this:

Assert::IsTrue(pSomething != nullptr, "Should not be null");

This accomplishes the same thing, but with slightly more code. I add a comment so the expectation is clear in the message that appears in the test results window, which you can see in Figure 5.

Figure 5 Test Results Showing the Additional Comment in the Error Message

Sharing Setup and Teardown Code

Your test code should be treated like production code. In other words, you should refactor tests just as much as production code in order to keep the test code easier to maintain. At some point you may have some common setup and teardown code for all the test methods in a test class. You can designate a method that will run before each test, as well as a method that runs after each test (you can have just one of these, both or neither).

The TestInitialize attribute marks a method that will be run before each test method in your test class. Likewise, TestCleanup marks a method that runs after each test method in your test class. Here’s an example:

[TestInitialize]
void Initialize() {
  m_pInstance = new MyClass;
}
 
[TestCleanup]
void Cleanup() {
  delete m_pInstance;
}

MyClass *m_pInstance;

First, notice that I used a simple pointer to the class for m_pInstance. Why didn’t I use unique_ptr to manage the lifetime?

The answer, again, has to do with mixing native C++ and C++/CLI. Instance variables in C++/CLI are part of a managed object, and therefore can only be handles to managed objects, pointers to native objects or value types. You have to go back to the basics of new and delete to manage the lifetime of your native C++ instances.
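
As an illustration of that restriction, here’s a sketch of what the compiler will and won’t accept as members of a managed class (the class and member names are hypothetical; the commented-out line is the part that won’t compile):

[TestClass]
public ref class ExampleFixture {
  MyClass *m_pInstance;                   // OK: a pointer to a native object

  // std::unique_ptr<MyClass> m_instance; // won't compile: a native class can't be
                                          // embedded by value in a managed type
};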

Using Pointers to Instance Variables

If you’re using COM, you might run into a situation where you want to write something like this:

[TestMethod]
void Test() {
  ...
  HRESULT hr = pSomething->GetWidget(&m_pUnk);
  ...
}

IUnknown *m_pUnk;

This won’t compile, and it will produce an error message like this:

cannot convert parameter 1 from 'cli::interior_ptr<Type>' to 'IUnknown **'

The address of a C++/CLI instance variable has the type interior_ptr<IUnknown *> in this case, which isn’t a type compatible with native C++ code. Why, you ask? I just wanted a pointer.

The test class is a managed class, so instances of this class can be moved in memory by the garbage collector. So if you had a pointer to an instance variable, and then the object moved, the pointer would become invalid.

You can lock the object for the duration of your native call like this:

cli::pin_ptr<IUnknown *> ppUnk = &m_pUnk;
HRESULT hr = pSomething->GetWidget(ppUnk);

The first line pins the instance in place until the variable goes out of scope, which then allows you to pass a pointer to the instance variable to native C++ code, even though that variable is contained inside a managed test class.

Writing Testable Code

At the beginning of this article I mentioned the importance of writing testable code. I use TDD to ensure my code is testable, but some developers prefer to write tests soon after they write their code. In either case, it’s important to think not just about unit tests, but about the entire test stack.

Mike Cohn, a well-known and prolific author on Agile, has drawn a test-automation pyramid that provides an idea of the types of tests and how many should be at each level. Developers should write all or most of the unit and component tests, and perhaps some integration tests. For details about this pyramid of testing, read Cohn’s blog post, “The Forgotten Layer of the Test Automation Pyramid” (bit.ly/eRZU2p).

Testers are typically responsible for writing acceptance and UI tests. These are also sometimes called end-to-end, or E2E, tests. In Cohn’s pyramid, the UI triangle is smallest compared with the areas for the other types of tests. The idea is that you want to write as few automated UI tests as you can. Automated UI tests tend to be fragile and expensive to write and maintain. Small changes to the UI can easily break UI tests.

If your code isn’t written to be testable, you can easily end up with an inverted pyramid, where most of the automated tests are UI tests. This is a bad situation, but the bottom line is that it’s a developer’s job to ensure that testers can write integration and acceptance tests below the UI.

Additionally, for whatever reason, most of the testers I’ve run across are very comfortable writing tests in C#, but shy away from writing tests in C++. As a result, our team needed a bridge between the C++ code under test and the automated tests. The bridge is in the form of fixtures, which are C++/CLI classes that appear to the C# code to be just like any other managed class.

Building C# to C++ Fixtures

The techniques here aren’t much different from the ones I covered for writing C++/CLI tests. They both use the same type of mixed-mode code. The difference is how they’re used in the end.

The first step is to create a new project that will contain your fixtures:

  1. Right-click the solution node in Solution Explorer, click Add and click New Project.
  2. Under Other Languages, Visual C++, CLR, click Class Library.
  3. Enter the name to use for this project, and click OK.
  4. Repeat the steps for creating a test project to add a reference and the include files.

The fixture class itself will look somewhat similar to the test class, but without the various attributes (see Figure 6).

Figure 6 C# to C++ Test Fixture

#include "stdafx.h"
#include "MyClass.h"
using namespace System;
 
namespace MyCodeFixtures {
  public ref class MyCodeFixture {
    public:
      MyCodeFixture() {
        m_pInstance = new MyClass;
      }
 
      ~MyCodeFixture() {
        delete m_pInstance;
      }
 
      !MyCodeFixture() {
        delete m_pInstance;
      }
 
      int DoSomething(int val) {
        return m_pInstance->SomeValue(val);
      }
 
      MyClass *m_pInstance;
  };
}

Notice that there’s no header file! This is one of my favorite features of C++/CLI. Because this class library builds a managed assembly, information about classes is stored as .NET-type information, so you don’t need header files.

This class also contains both a destructor and a finalizer. The destructor here really isn’t a destructor in the usual C++ sense. Instead, the compiler rewrites the destructor into an implementation of the Dispose method in the IDisposable interface. Any C++/CLI class that has a destructor, therefore, implements the IDisposable interface.

The !MyCodeFixture method is the finalizer, which is called by the garbage collector when it decides to free this object, unless you previously called the Dispose method. You can either employ the using statement to control the lifetime of your embedded native C++ object, or you can let the garbage collector handle the lifetime. You can find more details about this behavior in the MSDN Library article, “Changes in Destructor Semantics” at bit.ly/kW8knr.

Once you have a C++/CLI fixture class, you can write a C# unit test that looks something like Figure 7.

Figure 7 A C# Unit Test That Uses the C++/CLI Fixture

using Microsoft.VisualStudio.TestTools.UnitTesting;
using MyCodeFixtures;
 
namespace MyCodeTests2 {
  [TestClass]
  public class UnitTest1 {
    [TestMethod]
    public void TestMethod1() {
      // Arrange
      using (MyCodeFixture fixture = new MyCodeFixture()) {
        // Act
        int result = fixture.DoSomething(1);
 
        // Assert
        Assert.AreEqual<int>(1, result);
      }
    }
  }
}

I like employing a using statement to explicitly control the lifetime of the fixture object instead of relying on the garbage collector. This is especially important in test methods to ensure that tests don’t interact with other tests.

Capturing and Reporting Code Coverage

The final piece I outlined at the start of this article is code coverage. My team’s goal is to have code coverage automatically captured by the build server, published to TFS and easily available to everyone.

My first step was to find out how to capture C++ code coverage from running tests. Searching the Web, I found an informative blog post by Emil Gustafsson titled “Native C++ Code Coverage Reports Using Visual Studio 2008 Team System” (bit.ly/eJ5cqv). This post shows the steps that are required to capture code-coverage information. I turned those steps into a CMD file I can run at any time on my development machine:

REM Instrument the test DLL (and the product code linked into it) for code coverage
"%VSINSTALLDIR%\Team Tools\Performance Tools\vsinstr.exe" Tests.dll /COVERAGE
REM Start the coverage monitor and wait for it to be ready before continuing
"%VSINSTALLDIR%\Team Tools\Performance Tools\vsperfcmd.exe" /START:COVERAGE /WaitStart /OUTPUT:coverage
REM Run the tests; the monitor records coverage while they execute
mstest /testcontainer:Tests.dll /resultsfile:Results.trx
REM Shut down the monitor, which writes out the coverage file
"%VSINSTALLDIR%\Team Tools\Performance Tools\vsperfcmd.exe" /SHUTDOWN

You’ll want to replace Tests.dll with the actual name of your DLL that contains tests. You’ll also need to prepare your DLLs to be instrumented:

  1. Right-click the test project in the Solution Explorer window.
  2. Click Properties.
  3. Select the Debug configuration.
  4. Expand Configuration Properties, then expand Linker and click Advanced.
  5. Change the Profile property to Yes (/PROFILE).
  6. Click OK.

These steps enable profiling, which you need turned on in order to instrument the assemblies so you can capture code-coverage information.

Rebuild your project and run the CMD file. This should create a coverage file. Load this coverage file into Visual Studio to ensure you’re able to capture code coverage from your tests.

Performing these steps on the build server and publishing the results to TFS requires a custom build template. TFS build templates are stored in version control and belong to a specific team project. You’ll find a folder called BuildProcessTemplates under each team project that will most likely have several build templates.

To use the custom build template included in the download, open the Source Control Explorer window. Navigate to the BuildProcessTemplates folder in your team project and ensure you have it mapped to a directory on your computer. Copy the BuildCCTemplate.xaml file into this mapped location. Add this template to source control and check it in.

Template files must be checked in before you can use them in build definitions.

Now that you have the build template checked in, you can create a build definition to run code coverage. C++ code coverage is gathered using the vsperfcmd command, as shown earlier. Vsperfcmd listens for code-coverage information from all instrumented executables that run while it’s active. Therefore, you don’t want other instrumented tests running at the same time. You should also ensure you have only one build agent running on the machine that will process these code-coverage runs.

I created a build definition that would run nightly. You can do the same by following these steps:

  1. In the Team Explorer window, expand the node for your team project.
  2. Right-click Builds, which is a node under your team project.
  3. Click New Build Definition.
  4. In the Trigger section, click Schedule and select the days on which you want to run code coverage.
  5. In the Process section, click Show details in the section called Build process template at the top and then select the build template you checked into source control.
  6. Fill out the other required sections and save.

Adding a Test Settings File

The build definition will also need a test settings file. This is an XML file that lists the DLLs for which you want to capture and publish results. Here are the steps to set up this file for code coverage:

  1. Double-click the Local.testsettings file to open the Test Settings dialog box.
  2. Click Data and Diagnostics in the list on the left side.
  3. Click Code Coverage and check the check box.
  4. Click the Configure button above the list.
  5. Check the box next to your DLL that contains your tests (which also contains the code the tests are testing).
  6. Uncheck Instrument assemblies in place, because the build definition will handle this.
  7. Click OK, Apply and then Close.

If you want to build more than one solution or you have more than one test project, you’ll need a copy of the test settings file that includes the names of all the assemblies that should be monitored for code coverage.

To do that, copy the test settings file to the root of your branch and give it a descriptive name, such as CC.testsettings. Edit the XML. The file will contain at least one CodeCoverageItem element from the previous steps. You’ll want to add one entry for each DLL you want captured. Note that the paths are relative to the location of the project file, not the location of the test settings file. Check this file into source control.

Finally, you need to modify the build definition to use this test settings file:

  1. In the Team Explorer window, expand the node for your team project, then expand Builds.
  2. Right-click the build definition you created earlier.
  3. Click Edit Build Definition.
  4. In the Process section, expand Automated Tests, then 1. Test Assembly, and click TestSettings File. Click the … button and select the test settings file you created earlier.
  5. Save your changes.

You can test this build definition by right-clicking and selecting Queue New Build to start a new build right away.

Reporting Code Coverage

I created a custom SQL Server Reporting Services report that displays code coverage, as shown in Figure 8 (I’ve blurred the names of actual projects to protect the guilty). This report uses a SQL query to read data in the TFS warehouse and display the combined results for both C++ and C# code.

Figure 8 The Code-Coverage Report

I won’t go into all of the details on how this report works, but there are a couple of aspects I do want to mention. The database contains too much information from the C++ code coverage for two reasons: the results include the test-method code itself, and they include standard C++ library code pulled in from header files.

I added code in the SQL query that filters out this extra data. If you look at the SQL inside the report, you’ll see this:

and CodeElementName not like 'std::%'
and CodeElementName not like 'stdext::%'
and CodeElementName not like '`anonymous namespace''::%'
and CodeElementName not like '_bstr_t%'
and CodeElementName not like '_com_error%'
and CodeElementName not like '%Tests::%'

These lines exclude code-coverage results for specific namespaces (std, stdext and anonymous) and a couple of classes shipped with Visual C++ (_bstr_t and _com_error), as well as any code that’s inside a namespace that ends with Tests.

The latter, excluding namespaces that end with Tests, excludes any methods that are in test classes. When you create a new test project, because the project name ends with Tests, all test classes by default will be within a namespace that ends with Tests. You can add other classes or namespaces here that you want excluded.

I’ve only scratched the surface of what you can do—be sure to follow our progress on my blog at blogs.msdn.com/b/jsocha.           


John Socha-Leialoha is a developer in the Management Platforms & Service Delivery group at Microsoft. His past achievements include writing the Norton Commander (in C and assembler) and writing “Peter Norton’s Assembly Language Book” (Brady, 1987).

Thanks to the following technical expert for reviewing this article: Rong Lu