Fuzz Testing

Create a Custom Test Interface Provider for Team System

Dan Griffin

Code download available at: Fuzzing2007_11.exe (301 KB)

This article discusses:
  • Visual Studio Team Edition for Testers
  • Fuzz testing
  • Writing a Test Interface Provider
  • Building, testing, and deploying a fuzzer
This article uses the following technologies:
Visual Studio 2005 SDK, VSTS


What Is Visual Studio Team System?
What Is Fuzz Testing?
Writing a Test Interface Provider
We've Got a Fuzzer ...

While attending the Microsoft® Tech•Ed 2007 conference in Orlando, I had the privilege of working in one of the developer booths at The Learning Center. What struck me the most about that experience was the buzz surrounding the latest Application Lifecycle Management (ALM) tools. There was also plenty of chatter about hot methodologies such as Agile Programming and Test-Driven Development (TDD). And, as a result, there was tremendous interest in the latest ALM suite from Microsoft, the Visual Studio® Team System (VSTS) products.

VSTS offers some great features and extensibility opportunities for testers; that's the focus of this article. Having said that, I should note that this is not an article about TDD! Instead, I'll show how to extend the software testing capabilities of VSTS. I'll use as my example a subset of security testing known as "fuzzing" (more on that later).

What Is Visual Studio Team System?

The server-side component of VSTS is called Team Foundation Server (TFS). The client-side component can be any of the team editions of Visual Studio. For more information about VSTS and TFS, check out msdn2.microsoft.com/teamsystem.

From a technical perspective, the Visual Studio Team Edition (VSTE) products comprise a set of extensions to the Visual Studio 2005 IDE. The different editions provide extensions relevant to a specific discipline—Architect, Developer, Tester, Database Professional. The intent is to wed role-specific functionality to a consistent user interface.

One of the great things about Visual Studio is that it has become more readily extensible with each new release. VSTE is an example of this; however, a better litmus test for product extensibility is how easily someone outside the company that created the product can extend it.

To that end, my discussion here of the extensibility of Visual Studio 2005 Team Edition for Software Testers will explore the modification of the existing Test Interface Provider (TIP) sample that's included in the latest Visual Studio SDK. I'll also go into detail about testing, deploying, and debugging the plug-in. My final goal is a TIP that implements a testing strategy that's recently become quite popular, especially among security testers: fuzz testing.

What Is Fuzz Testing?

A good way to explain fuzz testing (fuzzing) is by example. These days, fuzzing is most frequently used to validate two types of data parsers: file and network.

An example of a file parser is Microsoft Word. What would fuzz testing the Word file parser entail? Imagine creating a million Word documents of random length containing random data. Suppose that these documents aren't created by Word itself, but rather by literally piping a random source of binary data into test files on disk. Open each of those random documents in Word and then see what happens.
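A naive file fuzzer of that sort is only a few lines of code. Here's a rough sketch in Python; the directory layout, file count, and size cap are arbitrary choices for illustration:

```python
import os
import random

def make_random_docs(out_dir, count=10, max_len=4096):
    """Generate `count` files of random length filled with random bytes.

    Each file is a candidate input for the parser under test; most will be
    rejected as corrupt, and the interesting cases are the ones that crash it.
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i in range(count):
        length = random.randint(0, max_len)
        path = os.path.join(out_dir, "fuzz_%06d.doc" % i)
        with open(path, "wb") as f:
            f.write(os.urandom(length))   # literally pipe random bytes to disk
        paths.append(path)
    return paths
```

A test harness would then open each generated file in the target application and watch for the third outcome: a crash.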

That experiment has three likely outcomes. First, for the majority of the test files, Word will give some error that the file format is unrecognized or that the file is corrupt and can't be opened. Second, a few of the files out of a million might actually contain some combination of recognizable control and printable characters. Those files will simply open without any problems. Third—and most interesting—it may be that a couple of the files contain data that the document parser doesn't expect. In this case, the program may crash.

To a security tester, that third class of test documents is interesting indeed. If the file parser crashes, that means there's a bug in it. And if there's a bug, it may be exploitable. For example, the bug may result in a stack or heap corruption leading to the crash. In general, this type of bug is of great concern in a networked environment since a wide variety of file types can be distributed as e-mail attachments. Experience shows that most users will open an attachment regardless of its nature.

A problem of similar scope exists in the realm of network parsers. Consider the sheer number of protocols understood by a typical PC, server, or embedded network device: DHCP (Dynamic Host Configuration Protocol), DNS, HTTP, SMB (Server Message Block), SMTP, and so on. Each protocol implementation includes a parser for packet data originating on the network, and each parser is likely to entail some complex logic. This kind of code tends to have lots of edge cases, too, making it difficult to validate.

One approach to testing network protocol code is through fuzzing. However, in this case I'll propose a variation on the just-throw-random-data-at-it approach I described in the context of the Word parser above.

Suppose I want to test a DHCP client. Suppose further that there's a specific portion of the DHCP client code that's particularly nasty, but that piece of code is somewhat protected from the network by another portion of the parser that's considered to be more robust. In this case, fuzzing the DHCP client with purely random data isn't likely to be an efficient use of time, since I expect that the robust piece of code will filter out most of the malformed data. Instead, I take a more targeted approach.

In fact, in my experience, fuzzing is most effective when conducted in this manner. Continuing the DHCP example, instead of bombarding the client with purely random data, I configure my fuzz test (fuzzer) to use a combination of known valid data, intentionally bad data, and random data. The purpose of the known valid data is to bypass the components of the parser that don't interest me. The purpose of intentionally bad data is to take advantage of what I know or suspect to be a specific flaw in the code. And finally, the purpose of the random data is to just see what happens.
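The three-way mix is easy to picture in code. The sketch below builds a packet from a made-up layout—it is not real DHCP, just an illustration of combining known-valid, intentionally bad, and random bytes in one buffer:

```python
import os
import random
import struct

def build_fuzz_packet():
    """Assemble one fuzz packet from the three classes of data.

    The field layout here is hypothetical, chosen only to show the technique:
    a valid-looking header, a deliberately implausible length field, and a
    random payload tail.
    """
    # Known-valid data: gets us past the robust outer portion of the parser.
    valid_header = struct.pack(">4sB", b"HDR1", 1)
    # Intentionally bad data: a length field no legitimate sender would emit.
    bad_length = struct.pack(">H", 0xFFFF)
    # Random data: just see what happens.
    random_tail = os.urandom(random.randint(0, 64))
    return valid_header + bad_length + random_tail
```

A real fuzzer would send each packet at the target's port and monitor the target process for crashes or hangs.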

During the course of fuzzing, my method evolves as I home in on suspicious behavior. For example, where some portion of my test DHCP packet originally included random data, I might subsequently use known valid data to get me deeper into the parser. If I discover an error, I might send intentionally bad data to try to exploit it. This heuristic approach gives me the best of both worlds: human intuition plus machine automation.

Fuzzing is indeed a powerful tool, yet it is still in relatively uncharted territory with respect to test coverage. I therefore issue this call to action (to be implemented after reading the rest of the article, of course): pick a technology that you're familiar with, or at least interested in, and write a fuzzer for it. More likely than not, you'll find bugs.

Here's an example: I attended a clever fuzzing demonstration at the RSA conference this year in San Francisco. It inspired me to rush back to my hotel room and write a fuzzer for a technology I had worked closely with. Using a custom fuzzer, I applied the iterative approach described above—start with random data and then fine-tune it—and hit pay dirt within a few hours. As a user, I was able to cause an access violation in code running as LocalSystem.

Writing a Test Interface Provider

The sample code that accompanies this article is an implementation of a simple fuzzer. The intent is twofold: to show fuzz testing in action and to show an example of fuzz testing integration with the VSTE automation framework.

The sample code contains the following four projects:

  1. ConsoleApp—this is the application being tested. The code is short and sweet. This application responds either by outputting a line of text or by throwing an unhandled exception.
  2. MyTest—this is the custom test type provider. The code is mostly unchanged from the sample, except that I renamed MyTestAdapter to FuzzTestAdapter and added all the code to conduct the actual test.
  3. MyTestUI—the C++ code for the custom test user interface. It's unchanged from the original sample.
  4. TestProject—the Startup Project for debugging. It's easy to re-create: just create a new Test Project then add the custom test type.

I discuss these projects in more detail later in this article.

The Visual Studio 2005 SDK is required in order to build the sample. The SDK can be downloaded from microsoft.com/downloads/details.aspx?familyid=51A5C65B-C020-4E08-8AC0-3EB9C06996F4. The current version of the SDK is 4.0, dated 2/28/2007. It includes the original sample VSTE TIP upon which mine is based; however, I had some problems getting the original sample to build, install, and run. While the changes required to turn the existing "MyTest" sample into the "FuzzTest" sample discussed in this article were minor, the effort required to get to the point of debugging the running plug-in was not. Help from the product team, and a lot of trial and error, solved those problems and I'll describe the steps in detail in a moment.

Again, the changes that I made to the original MyTest sample to turn it into the FuzzTest sample that accompanies this article are easy to grasp. In fact, running windiff.exe with the –T switch to compare the original and modified sample directory trees shows a short list of changes that is both interesting and instructive.

The first necessary change was to get two files that are missing from the base MyTest sample, at least in the version of the Visual Studio SDK referenced previously. The files are mytest.ico and mytestui.rc. Both files were provided to me by the VSTS product team and are included in my FuzzTest sample code. They both go into the solution mytestui subdirectory.

The second code change was to the mytestui project file. Specifically, a post-build command was using a relative path and was doomed to fail when I attempted to build the sample project in a directory other than its default location (that is, in the SDK installation tree). The command in question launches the Command Table Configuration compiler (ctc.exe), a tool included with the Visual Studio SDK. For more information about CTC files, see msdn2.microsoft.com/bb165048.

To address this problem, I modified the project to launch ctc.exe from its default absolute path—an approach that admittedly has its own shortcomings (for example, if the SDK isn't installed to the C: drive)—but at least you'll know what to look for!

As an aside, I mention the above changes since they're relevant not only to the FuzzTest sample, but also for getting the original sample built in case additional experimentation is desired. There are some important deployment and debugging issues to be aware of as well, but I'll get to those later in the article.

In turning the base MyTest sample into a new test plug-in, the next step was to replace the original MyTestAdapter class (in mytest\myhostadapter.cs, which I removed) with the new FuzzTestAdapter implementation in mytest\fuzztestadapter.cs. Copying those two files to a temporary directory and running windiff.exe on mytestadapter.cs and fuzztestadapter.cs shows essentially all of the core code changes required to create the new TIP. And in fact, the majority of the interesting changes exist within the Run function (see Figure 1).

Figure 1 Implementation of the IBaseAdapter::Run Method

[SuppressMessage("Microsoft.Design", "CA1031")]
public void Run(ITestElement testElement, ITestContext testContext)
{
    MyTestAssertHelper.ParameterNotNull(testElement, "testElement");
    MyTest test = testElement as MyTest;
    MyTestAssertHelper.ParameterNotNull(test, "testElement");
    MyTestResult result = new MyTestResult(
        new ComputerInfo(Environment.MachineName), m_runId, test);
    Stopwatch timer = null;

    try
    {
        // Start the timer for this test.
        timer = new Stopwatch();
        timer.Start();

        // This is a FUZZ Test: loop through the Unicode chars, passing each
        // into the target method until the test breaks, then report back.
        bool ErrorFound = false;
        char ch;
        for (int i = 0; i < char.MaxValue; i++)
        {
            ch = Convert.ToChar(i);
            string testParam = ch.ToString();
            string CommandLine = test.CommandLine.Replace(
                "<FuzzTestString>", "\"" + testParam + "\"");
            int spaceIndex = CommandLine.ToUpper().IndexOf(".EXE ") + 4;

            Process p = new Process();
            p.StartInfo.FileName = CommandLine.Substring(0, spaceIndex);
            p.StartInfo.Arguments = CommandLine.Substring(spaceIndex + 1);
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.RedirectStandardInput = true;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.RedirectStandardError = true;
            p.Start();

            StreamWriter sIn = p.StandardInput;
            StreamReader sOut = p.StandardOutput;
            StreamReader sErr = p.StandardError;
            sIn.AutoFlush = true;
            string OutputString = sOut.ReadToEnd();
            string ErrorString = sErr.ReadToEnd();
            p.WaitForExit();

            if (ErrorString != "")
            {
                // Record the failure, including the input that caused it.
                ErrorFound = true;
                result.ProcessExitCode2 = -1; // failed
                result.Outcome = TestOutcome.Failed;
                result.ErrorMessage =
                    "Test failed with input: " +
                    CommandLine +
                    ", Error Message: " +
                    ErrorString;
                break;
            }
        }

        if (!ErrorFound)
        {
            // All tests succeeded.
            result.ProcessExitCode2 = 0; // success
            result.Outcome = TestOutcome.Passed;
        }
    }
    catch (Exception ex)
    {
        result.Outcome = TestOutcome.Failed;
        result.ErrorMessage = ex.ToString();
    }
    finally
    {
        if (timer != null)
        {
            result.Duration = timer.Elapsed;
        }
    }
}

Analyzing the code for the function Run in Figure 1 reveals how the new FuzzTestAdapter works. In short, when a test of type FuzzTestAdapter is to be executed in VSTE, the first step (after some object initialization at the very beginning) is to start a timer. Test run duration isn't necessarily the most interesting statistic relating to a fuzz test (as opposed to, say, a performance test), but it's supported as a built-in feature of the TestResultMessage class, so I left it in.

The next step in Run is to extract the command-line configuration option for the target test instance, which indicates how to run it. Notice that, following the full path to the target test binary, the plug-in is expecting to find a placeholder string, "<FuzzTestString>". The string as a whole is retrieved via the CommandLine member of the MyTest class. The purpose of that string is to expose to the user a convenient way to reuse the TIP for different tests. This becomes even more convenient in the context of my Fuzzer TIP, since the command line changes each time the test loops.
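The placeholder substitution and the subsequent ".EXE " split that the adapter performs can be sketched as follows (shown here in Python for brevity; this mirrors, but is not, the shipping C# code):

```python
def expand_command_line(template, test_param):
    """Substitute the fuzz input for the <FuzzTestString> placeholder,
    quoting it the same way the adapter does."""
    return template.replace("<FuzzTestString>", '"' + test_param + '"')

def split_command_line(command_line):
    """Split a '<path>.exe <args>' command line at the first '.EXE ' into
    the executable path and its arguments, as the adapter does."""
    space_index = command_line.upper().index(".EXE ") + 4
    return command_line[:space_index], command_line[space_index + 1:]
```

For example, expanding the template with the single character "A" and splitting the result yields the executable path and the argument string 'ProcessString "A"'.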

Continuing with the description of the Run method, the next step is to instantiate a System.Diagnostics.Process object to launch the test program on the local machine. If, as a result of the test input (or for any other reason), the test process throws an exception or returns an error, a test failure will be flagged. Note that I adorned my implementation of the Run method with a [SuppressMessage] attribute. That attribute is necessary in this context since the execution of the test process is wrapped by a handler that will catch all exceptions. This is generally considered bad practice, and is by default flagged as such by the VSTS static code analyzer, but it's required in this case to ensure that all non-positive test completion scenarios are caught and tracked as failures.
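For readers who want to experiment with this launch-and-capture pattern outside of VSTE, the same idea can be sketched with Python's subprocess module (a standalone analog, not part of the TIP):

```python
import subprocess

def run_target(exe, args):
    """Launch the target process once and capture its exit code, stdout,
    and stderr -- the analog of the adapter's redirected Process streams.
    A non-empty stderr or nonzero exit code marks the input as a failure."""
    proc = subprocess.run([exe] + list(args), capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr
```

A fuzzing loop would call this once per generated input and flag any run where stderr is non-empty.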

To wrap up the discussion of Run, and to describe the specific type of fuzz test implemented by my TIP, observe in Figure 1 that most of the logic described in the previous two paragraphs is wrapped in a loop that iterates through each possible value that can be taken by the managed char type. The net result is that the test binary (ConsoleApp.exe in this sample) is called with each possible char value, continuing until either the loop completes or the test program fails to correctly handle one of the values in that range. Naturally, for the sake of the demo, I've ensured that the test will fail; otherwise, it wouldn't be an interesting example!

The ConsoleApp test demonstrates how fuzzing can be used to expose errors even in simple programmatic interfaces. In this case, the interface was written based on an implicit and risky assumption about the range of character values to be handled. The following snippet, adapted from the ConsoleApp code, shows this assumption:

// Array is sized to hold counts only up through 'z' (0x7A).
Int32 arraySize = Math.Max(Math.Max('Z', 'A'), Math.Max('z', 'a')) + 1;
charCountArray = new Int32[arraySize];
char c = inputString[0];
charCountArray[c] += 1; // out of bounds for any character above 'z'

In this code, the inputString variable is the second command-line argument received from the FuzzTestAdapter. Any character value greater than lower-case 'z' (0x7A) will result in an IndexOutOfRangeException. As the fuzzer loops through its character values, it will quickly arrive at one that causes the code to fail. When that happens, the ConsoleApp process terminates with an unhandled exception. That result is retrieved by the fuzzer via its Process p object and returned to VSTE.
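A quick way to convince yourself of the failure boundary is to redo ConsoleApp's sizing arithmetic (shown here in Python for brevity):

```python
# Reproduce the ConsoleApp array-sizing arithmetic.
array_size = max(ord('Z'), ord('A'), ord('z'), ord('a')) + 1  # ord('z') + 1 == 123
char_counts = [0] * array_size

def is_in_bounds(c):
    """True when a ConsoleApp-style lookup char_counts[c] would succeed."""
    return ord(c) < array_size
```

The first character the fuzzer's ascending loop finds that fails is chr(123), '{'; everything below it, including the control characters, indexes safely.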

Once the MyTestAdapter class in the original sample code was replaced by the FuzzTestAdapter class, all that remained to do in my implementation was to rename references to the former by references to the latter.

Now that I've explained the changes required to turn the original TIP sample into my FuzzTestAdapter, let's look at building, deploying, and debugging.


In order to build the sample FuzzTestSample.sln solution, open it in VSTE and then select Build | Build Solution from the top-level menu. If you want to just start with the basic sample solution included with the SDK, you'll first need to grab the two missing files that are included with the code download accompanying this article.


Both the original TIP sample and the updated fuzzer version include a deployment batch script. The script is the recommended way to install the TIP binaries for testing. The script assumes that it's being run on the same machine as the binaries were built and from within a Visual Studio command window.

Before running the script (but after building the solution, of course), it's safest to shut down Visual Studio for the deployment phase. This is admittedly inconvenient, but when I skipped this step, some of the file copy operations during deployment failed.

To run a Visual Studio command window, select Start | All Programs | Microsoft Visual Studio 2005 | Visual Studio Tools | Visual Studio 2005 Command Prompt. Then run the deploy.bat setup script. I made minor changes to the script; you may find that the original one needs some tweaks, too, depending on where you run it. Take a look at the relative paths used in the copy and xcopy commands for deploying the binaries. Also, note that the last command performed by the script is a call to regpkg.exe, based on a relative path to the Visual Studio Integration Tools directory in the SDK. I changed this to an absolute path using the %ProgramFiles% Windows environment variable, based on the assumption that the default installation directory for the SDK was used.

Finally, don't forget this last step, since it's not part of the batch file! From within the Visual Studio command window, run the devenv /setup command. This command ensures that your plug-in's resource metadata is merged with the metadata of other installed Visual Studio packages and completes the installation of the new plug-in.

In preparing this article, I asked the Visual Studio product group how the TIP is actually consumed. Each Test Type must define a TIP, which will be instantiated by TMI (Test Management Interface). TMI will use this TIP to load tests from storage, create new tests, save tests, and interpret results.


Once deployment is completed, the next step is to create and run a test of the new type. Here are the steps I used to accomplish this:

  1. Reopen the FuzzTestSample solution in VSTE.
  2. Ensure that TestProject is set as the default start-up project. To do this, right-click on TestProject in the Solution Explorer window and select Set as Startup Project. After doing this, the TestProject name should appear in bold lettering in Solution Explorer, as shown on the right-hand side of the screenshot in Figure 2.
  3. Right-click on TestProject again in Solution Explorer and select Add | New Test.
  4. In the New Test window that appears, you should see a test type entitled My Test. That's the new test type from the sample TIP, and its presence implies that the registration steps were successful. Select the My Test icon and click OK.
  5. In the Command Line field of the new test, replace the default entry with the full path to the ConsoleApp.exe test program, appended with the expected command-line values (as discussed in the previous section regarding the Run routine). For example, you might enter "c:\projects\FuzzTestSample\ConsoleApp\bin\Debug\ConsoleApp.exe ProcessString <FuzzTestString>". See the left-center pane in Figure 2.
  6. Run the test by pressing the F5 key.

Figure 2 VSTE with Sample Test Result


This approach has the benefit of running the TIP in the debugger. For example, prior to pressing F5, I can set a breakpoint on the Run function in MyTest\MyTest\FuzzTestAdapter.cs and watch it launch ConsoleApp.exe with the various character values. Of course, in production, I wouldn't expect the plug-in to be run in the debugger since that slows down test execution considerably.

This particular test takes a few moments to complete. Once it's complete, the test failure pane—shown at the bottom of the screenshot in Figure 2—becomes visible. From the error text shown in the test failure, it's clear that a non-alphabetic character caused an IndexOutOfRange exception, just as expected.

It's worth mentioning the method I learned for making code changes to the custom test type after a successful initial deployment. Based on experimentation, this sequence of steps appears to be the minimum required for redeploying the plug-in in such a way that it can be debugged again.

  1. Make the desired code changes and rebuild the solution (incremental is fine).
  2. Shut down Visual Studio.
  3. Copy the updated file, Microsoft.VisualStudio.QualityTools.Samples.MyTestTIP.dll, from the solution output directory to its deployed location. By default, the latter is C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies.
  4. Reopen the solution in Visual Studio.
  5. Add a new test (for example, MyTest2.mytest).

We've Got a Fuzzer ...

Effective automation is considered the ultimate goal of most software Quality Assurance teams. Yet discovery of many classes of defects, including security bugs, has proven particularly difficult to automate. A heuristic approach, such as the adaptive fuzzing method described at the beginning of the article, combined with an automation framework such as Visual Studio Team Edition, provides the best of both worlds. That is, human intuition, plus a machine's tolerance for boring repetition, equals good test coverage!

I'd like to thank Euan Garden and Jeff Wang at Microsoft, as well as Juan Perez at Personify Design, for their input into this article.

Dan Griffin is a software security consultant in Seattle, WA. He previously spent seven years at Microsoft on the Windows Security development team. Dan can be contacted at jwsecure.com.