Simplified Unit Testing for Native C++ Applications

Maria Blees

This article discusses:

  • Unit testing philosophy
  • Setting up WinUnit
  • Building test fixtures
  • Implementation and customization
This article uses the following technologies:
C++, Visual Studio

Code download available at: WinUnit2008_02.exe (1,438 KB)


Start Testing Today
Getting Started with WinUnit
Fixtures: Setup and Teardown
Running WinUnit
Implementation Details
Going Further...

These days it can be hard not to feel downright oppressed as a native code developer—it seems like the developers using the Microsoft® .NET Framework get all the cool tools!

I've always been interested in good engineering practices, but I've been frustrated by so-called "engineering experts" who extol the virtues of unit testing, yet can offer little more than hand waving when asked for tool recommendations for native code. What I really wanted, and what I thought would be the easiest to integrate into an automated build system, was the equivalent of NUnit for native code. That is, I wanted to be able to make a DLL with only tests in it, and have an external test-runner that would run those tests and take care of the reporting and logging. I also wanted to be able to declare each test only once, and to have a minimum of extra code in my test DLL.

So I built a native code unit testing tool I call WinUnit. I'll go into more detail later, but here's a little preview of how easy it is to create and run a test DLL using WinUnit. To start, create a single CPP file—let's call it DummyTest.cpp:

#include "WinUnit.h"

BEGIN_TEST(DummyTest)
{
    WIN_ASSERT_TRUE(3 > 4, _T("You should see this error message."));
}
END_TEST

Build it into a DLL with this command:

cl /EHsc /LD DummyTest.cpp

Then execute WinUnit on it:

>WinUnit DummyTest.dll
Processing [DummyTest.dll]...
  (DummyTest)
  DummyTest.cpp(5): error : WIN_ASSERT_TRUE failed: "3 > 4".
  You should see this error message.
  FAILED: DummyTest.
[DummyTest.dll] FAILED. Tests run: 1; Failures: 1.
There were errors. Tests run: 1; Failed: 1.

Changing 3 > 4 to the true expression 4 > 3 will of course get rid of the failure.

WinUnit will accept any number of DLLs or directories, process them, report the total results, and return with an exit code. By the way, if you haven't guessed, WinUnit only works on Windows®.

I concede that the world was not entirely devoid of options for C++ unit testing before I wrote my own tool. The problem I found with existing offerings was that they generally placed a higher priority on portability than on being easy to understand and use out of the box. CppUnitLite is my favorite among the portable set, but because it's designed to be very customizable, there was more overhead than I would have liked in getting it set up. The line between test-runner and tests is blurred because everything runs within the same binary, and the code that actually invokes the test-running functionality needs to be written and put somewhere. In addition, the tricky macro implementation was hard to explain to my coworkers and thus made adoption difficult. However, if you require portability to other operating systems, CppUnitLite is definitely worth taking a look at.

Other options that have been available for C++ unit testing for quite some time are the same options available for testing .NET code, most notably NUnit and the Visual Studio® Team Test unit testing framework. Since both will execute any properly attributed .NET assembly, you can use these tools with tests written in managed C++ (now C++/CLI). The main inconvenience I find with this approach is that the tools are highly geared toward .NET development and not toward native code. This should not be a surprise. But it is hard to use most of the provided assert methods with native C++ because they expect .NET-style objects, and as a developer you will need to have at least a passing familiarity with C++/CLI (or Managed Extensions for C++) to write the tests. You'll find more details about limitations with this approach in the footnote on the Visual Studio 2005 Product Feature Comparisons page (msdn2.microsoft.com/vstudio/aa700921.aspx). As mentioned previously, I find the "ease of convincing my coworkers to use it" factor important, and using the managed tools provides an extra hurdle in this area.

Start Testing Today

Code Coverage and Convenience

I'd like to briefly address a feature of Visual Studio Team System (VSTS) that deserves a mention for its relevance to unit testing: its set of code coverage tools. While the exact significance of code coverage numbers is debatable, reviewing your coverage after you've carefully constructed thoughtful tests can be, if nothing else, an excellent sanity check.

Unfortunately, VSTS does not make it easy to use (or even to know about) the code coverage tools for native code. Along with the code for this project, I've included a set of macros that will make it easy for you to do code coverage on native projects—in conjunction with WinUnit as well as with any other scenarios you have for exercising your code.

To use these macros, simply load WinUnit.vsmacros into Macro Explorer in Visual Studio. In the _Variables module you'll need to set variables for your particular local machine, which includes the path to the Visual Studio performance tools (including the code coverage tools) and the paths to WinUnit.exe and WinUnit.h. You'll find more details in readme.txt as well as the _Readme module in the macro project.

The CodeCoverage module includes macros for starting and stopping coverage data collection, instrumenting binaries (you can specify a list of binaries to be instrumented for each project), and launching a .coverage file with the results.

The RunningTests module includes macros for running the test containing the cursor in the currently opened document, all tests in the currently opened file, and all tests in the selected project. Other macros provide shortcuts for changing relevant project settings and for running different sets of tests with code coverage turned on.

Before getting into WinUnit usage, I want to discuss my philosophy about unit testing in general and unit testing native code in particular. If you're like me, you may have noticed that the clean, elegant, and modular classes shown in unit testing examples seem to have nothing in common with the blob of legacy code you typically have to work with. If you think that you must rework your entire code base before you can unit test, I can tell you those tests are going to be a long time coming. The good news is that a testable unit doesn't have to be a clean, elegant class. It's simply the smallest unit of code that can be tested in isolation. If that's your whole application, well, there's no better time to start than now!

Seriously, if your application consists of exactly one executable and its only interface is through a GUI, you may have to do a bit of refactoring to make the non-GUI part automatable. However, it can certainly be done incrementally. I recommend reading the whitepaper "The Humble Dialog Box," by Michael Feathers, for a sensible approach to making a GUI application more testable (objectmentor.com/resources/articles/TheHumbleDialogBox.pdf).

Your ultimate goal should be to have automated developer tests running as part of your build process, making it as easy as possible for developers to add tests as they fix bugs or write new code. Once the infrastructure is set up, you can work on refactoring existing code to allow better test coverage in problem areas. The book Working Effectively with Legacy Code, also by Michael Feathers (Prentice Hall, 2004), is an excellent resource for guiding you through this process (and many of the examples are in C++).

If your application is already split up into static libraries with clearly defined interfaces, you can link these directly into test DLLs and you'll be able to get at the individual classes to test them. Consider having one test DLL per library to maintain clear interface boundaries.

If your application contains separate DLLs, or is itself a library provided in DLL form, there are two ways you can access the exports to exercise them in tests. If you use import libraries, you can link those into test DLLs and have transparent access to the production functions for testing. If you do not use import libraries, use GetProcAddress in conjunction with LoadLibrary and FreeLibrary to get the DLL exports to call directly in your tests.

Note that if your DLL is a COM DLL, it doesn't need to be registered to be tested. Windows XP and later offer a mechanism called "Registration-Free COM," which you could use to get around the usual registration requirement for COM DLLs. You'll find details at msdn2.microsoft.com/ms973913. As a bonus, once you've gotten it working for tests, you can use the feature in your production code.

If your application is a monolithic executable, you probably want to split it up so you have at least one library. But you might also want to test the executable in its entirety to check for proper exit codes, for example. In my unit tests for WinUnit itself, I included some tests that exercised the entire executable. You can find these within the TestWinUnit project—in the MainTest.cpp file—in the accompanying code download.

At risk of coming across as the style police, there's one point about physically organizing production code that I would be remiss if I did not mention. In my years at Microsoft working on large C++ projects, I've found that one thing that made modularity (and testability) exceedingly difficult was having multiple classes jumbled together across arbitrary .cpp and .h files. It's up to you how strictly you adhere to this practice in your production code, but I find associating one and only one class with a similarly named pair of .cpp and .h files to be helpful. As a bonus, it simplifies keeping your tests organized, with one matching test file for each class. You can see some examples in my projects WinUnitLib and TestWinUnit.

There is one other book I recommend reading if the whole concept of unit testing has just passed you by, or if you're a developer in need of a refresher on how to think like a tester: Pragmatic Unit Testing with C# in NUnit, by Andrew Hunt and David Thomas (Pragmatic Bookshelf, 2006), is an excellent primer. (There's also a Java counterpart.) The meat of the book is actually language-independent, despite the language-specific title. A cute mnemonic from the book is that unit tests should be A-TRIP: Automatic, Thorough, Repeatable, Independent, and Professional. Repeatable means each test produces the same results any time it is run. Independent means tests should be able to be run in any order, with no dependencies on each other. This point will be important to remember later.

Getting Started with WinUnit

The first thing you'll need to do is build WinUnit and place WinUnit.exe and WinUnit.h in known locations on your machine. You can refer to the readme.txt included with the code for more information. Next, to get started writing tests, create a test project. As I demonstrated earlier, this is just a regular C++ DLL. For information on creating a native C++ DLL project in Visual Studio, see the walkthrough at msdn2.microsoft.com/ms235636.

As with other unit testing frameworks, several assert macros are provided with WinUnit for your test-writing convenience, such as the WIN_ASSERT_TRUE you saw earlier. (I'll discuss the other WIN_ASSERT macros you can use with WinUnit later in this section.) The asserts perform their magic using C++ exceptions, so you should keep the Visual Studio 2005 default compiler option of /EHsc when building your test DLL. However, there's no requirement for your production code to use C++ exception handling, even if it's linked into your test DLL. (Note that the Visual C++® 2005 toolset or later is required to use the provided assert macros.) You'll also want to add WinUnit's Include directory to your test project's include path (see Figure 1), or to your global include paths.

Figure 1 Setting Project Include Path


I designed WinUnit to work well from the command line, which means you can set up Visual Studio to run WinUnit on your test project after every build. To do so, go to the project Properties of your test project, and under Configuration Properties | Build Events, select Post-Build Event (see Figure 2). For Command Line, type the full path to where you copied WinUnit.exe followed by "$(TargetPath)" (include the double quotes). For Description, type "Running WinUnit..." (without the quotes). If you add the folder containing WinUnit.exe to your global executables path, you don't need to specify the full path here.

Figure 2 Configuring WinUnit as a Post-Build Event


Alternatively (or in addition), you may wish to add WinUnit.exe to your Tools menu in order to run your tests. To accomplish this, go to Tools | External Tools and click Add. For Title, type "&WinUnit". For Command, browse to WinUnit.exe. For Arguments, type "$(TargetPath)" (include the double quotes). For Initial Directory, type $(TargetDir). Uncheck Close on Exit and check Use Output Window (see Figure 3). At this point in the process you can select WinUnit from the Tools menu, and it will execute on whichever project is currently selected.

Figure 3 Adding WinUnit to External Tools


Finally, to set up your project for debugging, go to project Properties | Configuration Properties | Debugging, and type the full path to WinUnit.exe in the Command textbox. For Command Arguments, type "$(TargetPath)" (include the double quotes).

To verify everything in your project is set up correctly, you can add DummyTest.cpp to your project and build. The project should build, but if you've typed in the false assertion line as presented, you'll clearly see a test failure on that line.

Now that the setup is out of the way, it's time to look at the different ways of using WinUnit features to write tests. You may want to switch to WinUnitComplete.sln for the moment to follow along with my examples. The projects containing the examples are SampleLib and TestSampleLib. SampleLib is a static library that happens to contain exactly one class, BinaryNumber. TestSampleLib is a test DLL that links with SampleLib.lib and includes tests for the BinaryNumber class along with a few other examples.

General WinUnit test functions always begin with BEGIN_TEST(TestName) and end with END_TEST. Within each test, one or more WIN_ASSERT macros are used in order to verify various bits of functionality.

My example BinaryNumber class has two constructors. One takes an unsigned short, and one takes a string comprising the characters 1 and 0. I intend for the constructors to result in equivalent objects when passed equivalent values. So I may have a test like this:

BEGIN_TEST(BinaryNumberConstructorsShouldBeEquivalent)
{
    unsigned short numericValue = 7;
    BinaryNumber bn1(numericValue);
    BinaryNumber bn2("111");

    WIN_ASSERT_EQUAL(bn1.NumericValue, bn2.NumericValue,
        _T("Both values should be %u."), numericValue);
    WIN_ASSERT_STRING_EQUAL(bn1.StringValue, bn2.StringValue);
}
END_TEST

This test will fail if either of the two assert lines is false. Notice that the WIN_ASSERT_EQUAL macro in this example is passed the two values being compared, plus two extra arguments. These comprise an informational message that will be shown if the assert fails. All WIN_ASSERT macros take an optional printf-style format string and arguments for this purpose.

Since I implemented operator '==' for the BinaryNumber class, I can also use the following construction (which is what I have in my sample file, BinaryNumberTest.cpp):

BEGIN_TEST(BinaryNumberConstructorsShouldBeEquivalent)
{
    BinaryNumber bn1(7);
    BinaryNumber bn2("111");
    WIN_ASSERT_EQUAL(bn1, bn2);
}
END_TEST

Here's a different type of assert, WIN_ASSERT_THROWS. Suppose I use exceptions for my error handling in my production code, and I want to force an error condition in my test to ensure that the proper exception is thrown. I might make a test like this:

BEGIN_TEST(BinaryNumberPlusRecognizesIntegerOverflow)
{
    unsigned short s = USHRT_MAX / 2 + 1;
    BinaryNumber bn1(s);
    BinaryNumber bn2(s);
    WIN_ASSERT_THROWS(bn1 + bn2, BinaryNumber::IntegerOverflowException)
}
END_TEST

I know that my BinaryNumber class only holds an unsigned short. I also know that the operator '+' should detect when I'm trying to add two numbers together that are too big. WIN_ASSERT_THROWS takes an expression that should throw a C++ exception, along with the exception type itself. The test fails if the exception is not thrown.

Figure 4 lists the WIN_ASSERT macros available in WinUnit.h, as well as one non-assert, WIN_TRACE, that can be used to provide tracing through the OutputDebugString API function in your tests.

Figure 4 Assert and Trace Macros in WinUnit.h

Macro Description
WIN_ASSERT_EQUAL(expected, actual, ...) Compares expected and actual using ==; fails if they are not equal.
WIN_ASSERT_NOT_EQUAL(notExpected, actual, ...) Compares notExpected and actual using !=; fails if they are equal.
WIN_ASSERT_STRING_EQUAL(expected, actual, ...) Does a case-sensitive string comparison of expected and actual. Strings to be compared can be either wide-character or not, but both must be the same "wideness". (This does not affect the "wideness" of the optional message.)
WIN_ASSERT_ZERO(zeroExpression, ...) Compares zeroExpression to 0; fails if they are not equal.
WIN_ASSERT_NOT_ZERO(nonzeroExpression, ...) Compares nonzeroExpression to 0; fails if they are equal.
WIN_ASSERT_NULL(nullExpression, ...) Compares nullExpression to NULL; fails if they are not equal. Only works with pointers.
WIN_ASSERT_NOT_NULL(notNullExpression, ...) Compares notNullExpression to NULL; fails if they are equal. Only works with pointers.
WIN_ASSERT_FAIL(message, ...) Always fails; informational message is required.
WIN_ASSERT_TRUE(trueExpression, ...) Succeeds if trueExpression evaluates to true.
WIN_ASSERT_FALSE(falseExpression, ...) Succeeds if !falseExpression evaluates to true.
WIN_ASSERT_WINAPI_SUCCESS(trueExpression, ...) Succeeds if trueExpression evaluates to true. Use this with Windows functions whose documentation says to call GetLastError for more error information on failure. This macro/function calls GetLastError and includes the string associated with the error code as part of the message.
WIN_ASSERT_THROWS(expression, exceptionType, ...) Succeeds if expression throws a C++ exception of type exceptionType.
WIN_TRACE(message, ...) Used to output informational messages for debugging purposes. Its use does not cause a test failure. "message" is a printf-style format string, followed by any arguments.

Note that for either WIN_ASSERT_EQUAL or WIN_ASSERT_NOT_EQUAL, if a numeric literal is passed in as one of the values and an unsigned number as the other, you'll get a signed/unsigned mismatch, as numeric literal integers always template-match to int. To get around this, postfix numeric literals with 'U' so they will match as unsigned.

All asserts take an optional printf-style format string, plus arguments. If _UNICODE is defined, these message strings will be wchar_t*, so use the _T("") macro around the format strings, or L"" if you're building Unicode-only.

You may have noticed that my test names begin with the name of the class being tested, followed by the name or description of the method being tested, followed by a partial sentence describing what the test is supposed to show. This convention serves two purposes. First, it is an easy way to make clear exactly what you're trying to test, which can be useful when looking at test output. Second, it's a way to group tests to be run together at varying levels of granularity. WinUnit does not inherently have the concept of groups of tests, as do its .NET Framework-based equivalents. The way to tell WinUnit to run a subset of tests in a project is with the -p (prefix) option. You specify a prefix string, and WinUnit will run all the tests whose names start with that string. By naming your tests with words ordered least to most specific, you will be able to easily run related groups of tests from the command line.

Fixtures: Setup and Teardown

In real life, the things you're trying to test are often not as simple as the examples I've shown so far. There may be cases where several setup steps are required to get to the point where you can actually execute the functionality you want to verify. For example, if I were testing a function that deletes all the files in a directory, I might first need to create a directory and put files in it. Or I might want to set an environment variable required for the functionality I'm testing.

To maintain the independence and repeatability of tests, it's important that whatever setup work you do at the beginning of a test is undone at the end of the test. WinUnit supports the concept of single-test fixtures. These are setup and teardown function pairs that will be executed at the start and finish of each test associated with them. This is especially helpful if you have several tests that require the same setup and cleanup.

Figure 5 shows an example of a fixture, with setup and teardown functions implemented and two tests associated with it. Note that this example also shows that you can use the various WIN_ASSERT macros even in fixtures, so you can report failure if the fixture you define does not work correctly.

Figure 5 Setup and Teardown Test Fixture

// DeleteFileTest.cpp
#include "WinUnit.h"
#include <windows.h>

// Fixture must be declared.
FIXTURE(DeleteFileFixture);

namespace
{
    TCHAR s_tempFileName[MAX_PATH] = _T("");
    bool IsFileValid(TCHAR* fileName);
}

// Both SETUP and TEARDOWN must be present.
SETUP(DeleteFileFixture)
{
    // This is the maximum size of the directory passed to GetTempFileName.
    const unsigned int maxTempPath = MAX_PATH - 14;
    TCHAR tempPath[maxTempPath + 1] = _T("");
    DWORD charsWritten = GetTempPath(maxTempPath + 1, tempPath);
    // (charsWritten does not include null character)
    WIN_ASSERT_TRUE(charsWritten <= maxTempPath && charsWritten > 0,
        _T("GetTempPath failed."));

    // Create a temporary file
    UINT tempFileNumber = GetTempFileName(tempPath, _T("WUT"),
        0, // This means the file will get created and closed.
        s_tempFileName);

    // Make sure that the file actually exists
    WIN_ASSERT_WINAPI_SUCCESS(IsFileValid(s_tempFileName),
        _T("File %s is invalid or does not exist."), s_tempFileName);
}

// TEARDOWN does the inverse of SETUP, as well as undoing
// any side effects the tests could have caused.
TEARDOWN(DeleteFileFixture)
{
    // Delete the temp file if it still exists.
    if (IsFileValid(s_tempFileName))
    {
        // Ensure file is not read-only
        DWORD fileAttributes = GetFileAttributes(s_tempFileName);
        if (fileAttributes & FILE_ATTRIBUTE_READONLY)
        {
            WIN_ASSERT_WINAPI_SUCCESS(
                SetFileAttributes(s_tempFileName,
                    fileAttributes ^ FILE_ATTRIBUTE_READONLY),
                _T("Unable to undo read-only attribute of file %s."),
                s_tempFileName);
        }

        // Since I'm testing DeleteFile, I use the alternative CRT file
        // deletion function in my cleanup.
        WIN_ASSERT_ZERO(_tremove(s_tempFileName),
            _T("Unable to delete file %s."), s_tempFileName);
    }

    // Clear the temp file name.
    ZeroMemory(s_tempFileName,
        ARRAYSIZE(s_tempFileName) * sizeof(s_tempFileName[0]));
}

BEGIN_TESTF(DeleteFileShouldDeleteFileIfNotReadOnly, DeleteFileFixture)
{
    WIN_ASSERT_WINAPI_SUCCESS(DeleteFile(s_tempFileName));
    WIN_ASSERT_FALSE(IsFileValid(s_tempFileName),
        _T("DeleteFile did not delete %s correctly."), s_tempFileName);
}
END_TESTF

BEGIN_TESTF(DeleteFileShouldFailIfFileIsReadOnly, DeleteFileFixture)
{
    // Set file to read-only
    DWORD fileAttributes = GetFileAttributes(s_tempFileName);
    WIN_ASSERT_WINAPI_SUCCESS(
        SetFileAttributes(s_tempFileName,
            fileAttributes | FILE_ATTRIBUTE_READONLY));

    // Verify that DeleteFile fails with ERROR_ACCESS_DENIED
    // (according to spec)
    WIN_ASSERT_FALSE(DeleteFile(s_tempFileName));
    WIN_ASSERT_EQUAL(ERROR_ACCESS_DENIED, GetLastError());
}
END_TESTF

namespace
{
    bool IsFileValid(TCHAR* fileName)
    {
        return (GetFileAttributes(fileName) != INVALID_FILE_ATTRIBUTES);
    }
}

In this example, I'm pretending I was the writer of the Windows DeleteFile function and I'm writing some test functions for it. In my setup function, I first create a temporary file. In my test functions, I exercise the DeleteFile functionality by trying to delete the temporary file. Although I expect the file to have been deleted by the end of one of the tests, I still check whether the file exists and delete it in the teardown function. The teardown should undo whatever the setup did and should not count on the tests succeeding. This ensures that your tests are independent and repeatable.

As you can see, there's a slightly different syntax for tests that use fixtures. Instead of the BEGIN_TEST and END_TEST construction, BEGIN_TESTF and END_TESTF are used, with the fixture name following the name of the test in BEGIN_TESTF. The setup and teardown functions are indicated by the all-caps SETUP and TEARDOWN, with the name of the fixture in parentheses.

You'll notice I've used the WIN_ASSERT_WINAPI_SUCCESS macro a few times in this example. As indicated in Figure 4, this macro is to be used in conjunction with Windows functions whose documentation explicitly says to call GetLastError for more information in the case of failure. The first parameter to the assert should be an expected true statement related to the function call. (It can be the function call itself, if it returns a Boolean.) If the function itself is not part of the expression, it's a good idea to include a message indicating the name of the function that has failed if this assert fires, to make it easier to see at a glance from the output exactly what happened. The assert implementation calls GetLastError and retrieves the message string associated with the error code and adds that to the test results.

To see a fixture used in a different way, see BinaryNumberTest.cpp in the TestSampleLib project. There I used it to open and close a data provider from which I read rows of test data to be used in my tests. In this case, the data provider is just a text file, and each row is a line, but it could just as easily be an XML file or even a database table. You should consider using a similar approach if you are testing functionality that can benefit from large amounts of data being run through it.

One difference between WinUnit fixtures and those of other unit test frameworks is that they are never automatically associated with tests—the association must be explicit. This also means that it is possible to have more than one fixture in a single file.

Running WinUnit

Now let's take a look at how WinUnit works. WinUnit uses one of the only reflection-like features available for native code on Windows: discovering the exports of a DLL. There was a great two-part article by Matt Pietrek in MSDN® Magazine in February and March 2002 ("Inside Windows: An In-Depth Look into the Win32® Portable Executable File Format"), which I referred to extensively in figuring out how to do this. It works for both 32-bit and 64-bit executables, but be sure to use a 64-bit build of WinUnit to run 64-bit test binaries (a 32-bit process can't load 64-bit DLLs).

The command line syntax for WinUnit is:

WinUnit [options] {dllName | directoryName}+

The tool takes one or more DLLs or directories over which all contained DLLs will be enumerated. It discovers the exports and executes the ones whose name—which must be undecorated—starts with TEST_. This restriction is intended to ensure that only functions you meant to run as tests will be run, since the function prototype is assumed. The expected function prototype is as follows:

bool __cdecl TEST_TestName(wchar_t* errorBuffer, size_t bufferSize);

The errorBuffer parameter receives error output from the test; bufferSize is the size of the buffer in wide characters. If the test function returns false (or throws a structured exception handling (SEH) exception), it's considered a failed test, and whatever output was put in the errorBuffer is displayed.

Besides DLLs and directory names, WinUnit takes several optional command-line arguments, which are shown in Figure 6. By default the output of WinUnit goes to the console. Informational messages go to stdout; error messages go to stderr. After WinUnit has processed the specified DLLs, it exits with a code of 0 for success and non-zero otherwise (see Figure 7).

Figure 7 WinUnit Exit Codes

Code Description
0 No errors (any tests that were run succeeded).
1 One or more tests failed.
2 An unhandled exception caused premature termination.
-1 Usage error.

Figure 6 WinUnit Command-Line Arguments

Parameter Description
-q Do not output to console.
-n No interactive UI (error dialogs are turned off as much as possible). Recommended for automated builds.
-b Output to debugger (by default, output goes only to console).
-p string Only run tests starting with the prefix string (it's case-insensitive).
-x Ignore the requirement for the TEST_ prefix of exports; process all exports regardless of name. (This only works in conjunction with the -s option, to avoid the danger of running functions with the wrong function prototype.)
-s Show test names only. Default is to run them.
-o filename Put the output in the given file. This is unrelated to the -q and -b options—output can go to any or all of the console, a file, or the debugger.
-v number Run at the given verbosity. 1 is the default. 0 will show you only failing tests and total results.
-l customLoggerString The custom logger string is the name of a DLL in which you've implemented a custom logger, followed optionally by a colon and an initialization string to be passed to the logger's Initialize function.
--variable value This option will set environment variable variable to value value for the lifetime of the WinUnit process. This can be used for passing data to tests via the command line.
-e ( exactTestName* ) (Note: This option is included for completeness but is intended only for automation purposes. Use -p for regular use.) Run only tests specified by their full names (case-sensitively), space-delimited, and enclosed in parentheses.

The BEGIN_TEST macros used in the examples prepend TEST_ to the test names and use 'extern "C" __declspec(dllexport)' to export the function names undecorated. This is the equivalent of putting them in a .def file for the DLL.

The WIN_ASSERT macros work via C++ exceptions; however, all exceptions are caught within the test functions themselves, so WinUnit doesn't care whether exceptions were used or not. I used exceptions because they make it easy to exit out of any block at any point and still ensure that proper cleanup occurs. You could implement your own test functions that would work just as well with WinUnit entirely without macros by using the function prototype shown earlier, ensuring that the names started with TEST_, and exporting them in undecorated form from a DLL.

The BEGIN_TEST and END_TEST macros declare the function and set up a try/catch block. The WIN_ASSERT macros throw an AssertException class exception on failure, which holds a message string describing the error. If a thrown AssertException is caught, the message is copied into the buffer and the function returns false.

Implementation Details

I initially thought it would be elegant to have no parameters to the test functions, and instead just throw exceptions directly out of the functions and catch and process them in the test runner. This would have meant there was no need for an END_TEST macro. Unfortunately, stack unwinding did not work properly when exceptions were thrown across the DLL boundary, so it ended up working out better to have the exceptions as an implementation detail and catch them within the tests themselves.

Implementing the fixture concept was another challenge. My first idea was to rely on C++ automatic storage and declare an object at the top of the test function whose constructor and destructor would do setup and teardown. However, I soon discovered firsthand why you're not supposed to throw exceptions in destructors. Part of the C++ specification states that when an exception is thrown by a destructor function during stack unwind, the terminate function is called. This state could occur if I implemented fixtures as described and had a fixture object with exception-throwing asserts in its destructor. If the asserts in the fixture object's destructor fired during the stack unwind due to an exception thrown in the body of the function, terminate would be called.

The remedy for this was to ensure that the fixture object's destructor would never be called during a stack unwind by putting it alone in its own try/catch block outside the main try/catch for the function. This necessitated the special fixture syntax using the BEGIN_TESTF and END_TESTF pair to house two try/catch blocks and a fixture object declaration.

Running across the unpleasant terminate behavior made me think of something else I also wanted to make sure to handle: I wanted to provide the option of running in non-interactive mode (currently the -n command-line option). WinUnit generally runs non-interactively, but I noticed that when the terminate function was called, it put up a dialog box. I thought dialog boxes like this would be inconvenient in automation scenarios because they would potentially block further automated tasks.

You can see what I did to disable such error messages in ErrorHandler.cpp in the WinUnit project. Interestingly, some of the dialogs that can come up in error conditions are associated with the C Runtime (CRT). This means that setting a global variable in the CRT to turn them off isn't going to do any good if the variable is set in the test-running tool (WinUnit) and the test binary is using a different CRT. If you want to ensure that CRT-specific error dialogs will be turned off, you will need to be sure you are using the same Runtime Library option for building both WinUnit and your test binaries, and that it's one of the DLL options. This option can be found in project Properties | Configuration Properties | C/C++ | Code Generation. By default, it is /MD for release builds and /MDd for debug builds. If you build both WinUnit and your test binaries in Visual Studio 2005 with the same option here, you will have the same CRT instance.

I wanted to have some trace functionality to aid in debugging tests. I decided to take advantage of the fact that my tool is meant to run on Windows and that I know the PE format. I therefore hook (override) OutputDebugStringA and OutputDebugStringW in the test DLL being executed, sending the output wherever I want. I borrowed this part heavily from the code that comes with the book Debugging Applications for Microsoft .NET and Microsoft Windows (Microsoft Press®, 2003), by John Robbins, who kindly gave me permission to use it in this project. I then provided a WIN_TRACE macro which funnels its arguments to OutputDebugString.

I also wanted to have some way to pass information from the command line into the tests. I thought this might be useful if, for example, you were using the tool as part of an automated build, and there was some known location that contained test data files, and you wanted the tests to know about that location. I decided on environment variables. Arbitrary variables can be set for the process using the "--" command-line option; they can be retrieved within the tests using WinUnit::Environment::GetString (a wrapper on GetEnvironmentVariable), GetEnvironmentVariable itself, or getenv_s/_wgetenv_s.

Finally, I wanted to offer the ability for a user to write a custom logger. Internally, a logger chain is used, so you can opt to send output to console, to OutputDebugString, to a file, or to any combination of those. A fourth option is to provide your own custom logger, which I'll talk about next.

Going Further...

WinUnit should solve your basic native C++ unit testing needs, but there are a few things you may want to consider adding to it.

You may have noticed the -l command-line option in Figure 4 for passing in a custom logger. You might want to use this feature to send WinUnit output to an XML file or even to a database for use in a reporting system. Also, the default loggers do not write Unicode; the -o option creates an ANSI text file. If you want your output in Unicode, you will need to write a custom logger for that. To implement a custom logger feature, simply create a DLL that implements one or more of a set of ten logger-related functions, and then pass the path to the DLL to WinUnit via the -l option. You can see my project SampleLogger.cpp for more details.

You might also wish to add your own custom asserts, using the existing ones as examples. You could make an additional header file or a static library to be linked into your test DLLs. It might be interesting to implement a more in-depth set of string asserts, similar to the StringAssert class that comes with the Visual Studio unit testing framework for .NET-based code, which includes string comparisons with regular expressions. A FileAssert class could be useful as well—comparing entire files or attributes of files. You'll notice that I used an Assert class with static methods for the implementation of the asserts, and the WIN_ASSERT macros are thin wrappers around them. I would have liked not to have used macros at all, but I wanted to be able to easily include the file and line number. Of course, macros pollute the global namespace, so if you're going to make your own it would be a good idea to start them with a different prefix.

Another handy Visual Studio add-in would be one that allowed you to run tests individually (with or without code coverage) by right-clicking on them, similar to how TestDriven.NET works for unit tests in the .NET Framework. See the sidebar "Code Coverage and Convenience" for a short discussion of code coverage tools.

My goal has been to show that unit testing native C++ can be easy and fun, and that you can get started right now. To give you an idea of the power of WinUnit, take a look through the TestWinUnit project, where I used WinUnit to test itself. Those examples are completely real world and will show you advanced usage you can apply to your own unit tests. If you've been struggling with your native C++ unit testing, WinUnit makes it easy—and any time you can make testing easy, you're far more likely to actually do it.

Maria Blees has been a developer at Microsoft for 10 years and has a special fondness for native code and engineering excellence. She can be reached at the address listed in the code and welcomes bug reports and suggestions for this tool.