Getting started with test-driven development

I'm at the Build conference in Anaheim this week, and I was in the platform booth when a customer asked me a question I'd not been asked before: "How do you get started with test-driven development?"  My answer was simply: "Just start - it doesn't matter how much existing code you already have; just start writing tests alongside your new code.  Get a good unit test framework like the one in Visual Studio - but it really doesn't matter which framework you use, just start writing the tests."

This morning, I realized I ought to elaborate on my answer a bit.

I'm a huge fan of Test-Driven Development.  Of all the "eXtreme Programming" methodologies, TDD is by far the one that makes the most sense.  I started using TDD back in Windows 7.  I had read about TDD over the years and was intrigued by the concept, but like the customer, I didn't really know where to start.  My previous project had extensive unit tests, but we didn't follow any particular methodology when developing them.  When it came time to develop a new subsystem of the audio stack for Windows 7 (the feature that eventually became the "capture monitor/listen to" feature), I decided to apply TDD while developing the feature, just to see how well it worked.  The results far exceeded my expectations.

To be fair, I don't follow the classic TDD paradigm where you write the tests first, then write just enough code to make the tests pass.  Instead, I write the tests at the same time I'm writing the code.  Sometimes I write the tests before the code, sometimes the code before the tests, but they're really written together.

In my case, I was fortunate because the capture monitor was a fairly separate piece of the audio stack - it is essentially bolted onto the core audio engine.  That meant that I could develop it as a stand-alone system.  To ensure that the capture monitor could be tested in isolation, I developed it as a library with a set of clean APIs, and the audio engine interfaced with it only through those APIs.  By limiting the APIs the capture monitor exposed, I kept the public surface I needed to test small.

But I still needed to test the internal bits.  The good news is that because it was a library, it was easy to add test hooks that let me drive deep into the capture monitor implementation.  I simply made my test classes friends of the implementation classes, so the test code could call into the protected members of the various capture monitor classes.  That let me build test cases that could simulate internal state changes, which in turn made the tests far more thorough.

I was really happy with how well the test development went, but the real proof of TDD's benefits showed when the capture monitor shipped as part of the product.

During the development of Windows 7, extremely few bugs (maybe a half dozen?) were found in the capture monitor that weren't first found by my unit tests.  And because I had such an extensive library of tests, I was able to add regression test cases for each of those externally found bugs.

I've since moved on from the audio team, but I'm still using TDD - I'm currently responsible for two tools in the Windows build system/SDK and both of them have been developed with TDD.  One of them (the IDL compiler used by Windows developers for creating Windows 8 APIs) couldn't be developed using the same methodology as I used for the capture monitor, but the other (mdmerge, the metadata composition tool) was.  Both have been successful - while there have been more bugs found externally in both the IDL compiler and mdmerge than were found in the capture monitor, the regression rate on both tools has been extremely low thanks to the unit tests.

As I said at the beginning, I'm a huge fan of TDD - while there's some upfront cost associated with creating unit tests as you write the code, it absolutely pays off in the long run with higher initial quality and a dramatically lower bug rate.

Comments

  • Anonymous
    September 15, 2011
    I'm a huge fan of "Working Effectively with Legacy Code" by Michael Feathers for getting started with TDD if you already have an existing code base. Ignore the title, it's a book that teaches you how to go from the code base you have to the code base you want while introducing unit tests all along.

  • Anonymous
    September 15, 2011
    There's a lot to like about the "test early, test often" approach you describe, but it isn't TDD.

  • Anonymous
    September 15, 2011
    I find the tests almost equally valuable for forcing the code to be testable, which has the automatic effect of making the code much better designed: isolated, componentized, etc.  It's much more difficult to write testable spaghetti code.  Unfortunately, the converse is true: it's much more difficult to test spaghetti code.

  • Anonymous
    September 15, 2011
    @Ben Voigt: "Instead I write the tests at the same time I'm writing the code.  Sometimes I write the tests before the code, sometimes the code before the tests, but they're really written at the same time."  Sounds a bit more like he's bouncing back and forth, so unless he never lets the tests drive development, he's at least some of the time doing some form of TDD.

  • Anonymous
    September 20, 2011
    A few years back I wrote my own ultra-lightweight plain-old-C unit testing library.  I've migrated my web site to Google Sites so I'm not quite sure how to make it easily viewable in pieces, but you can see it on some of my previous hosts where I still have accounts: davearonson.x10hosting.com/.../cut.html davearonson.heliohost.org/.../cut.html mysite.verizon.net/.../cut.html I'd like to get your opinions on its usability and usefulness.

  • Anonymous
    September 20, 2011
    Test before development is an important part of the TDD concept, because it allows you to write your tests before you start making decisions about how the code will work. You will, of course, still run into the issue that your development decisions will change the behaviour of your code such that the tests no longer work, but you will be able to modify your tests to handle that, and still maintain the lack of assumptions in your tests. I'm tempted to suggest that it might be nice to write the documentation first, too.

  • Anonymous
    September 20, 2011
    @Alun: The decisions about how the code will work should have been made when you were writing the dev spec for the feature.  IMHO a dev spec should contain enough information that a knowledgeable developer can completely implement the feature, which means that all the algorithmic decisions should have already been made.  

  • Anonymous
    September 26, 2011
    +1 on TDD, I've also been using it for a few years now, and it certainly reduces "bug production." :)  One challenge I have had, though, is keeping my tests organized in a logical structure, given that adding new tests is really cheap.  I've found some guidance around this in Gerard Meszaros's book "xUnit Test Patterns" - a must-read once you have a few months of TDD under your belt.