Know Thy Code
Simplify Data Layer Unit Testing using Enterprise Services
Roy Osherove
This article uses the following technologies: COM+ 1.5, C#, Visual Studio, Testing
Contents
The State of Database Testing
Keeping Consistent State
Database Restore
ADO.NET Transaction Object
COM+ Transaction—Inherit ServicedComponent
Simple, Easy, and Painless
COM+ 1.5
Unit Testing in Visual Studio 2005 Team System
Conclusion
Despite all the hype surrounding unit testing and Test Driven Development (TDD), many developers don't see them as useful processes that are applicable to the production of real-world applications. One reason is that once you get down to the intricacies of application development, all of the easy testing you've heard so much about can vanish. You may find yourself writing unit tests that are hard to implement against real code. This is often the case in database testing, where TDD can quickly lose its appeal and force you to revert to the methodologies your team previously employed. Here, I'll present some techniques to make your testing process easier, even when developing real-world applications, using unit testing.
The State of Database Testing
Most well-architected data-driven systems have some sort of data access layer (DAL). This could be a specific class or even a separate class library project that is responsible for all the communications with the specific database. When I refer to database unit testing, I am actually talking about writing unit tests that exercise and drive the development of that specific DAL component.
Testing DALs is trickier than checking simple logic and algorithmic code. There are two possible ways to run unit tests against a DAL. The first is to replace the database API that your DAL is talking to with an object of your own creation that mimics the database's interface, but does not actually perform any tasks. This method is often referred to as "mocking" or "stubbing." Usually, a mock of an object has specific behavior and expects various actions to be performed on the replaced object, while a stub is merely a silent replacement for something that is not used to test logical actions. A stub just helps you decouple the tested object from interacting with expensive or time-consuming resources. Mocking usually means that you are performing interaction testing, in which you test interactions between objects. For more information on mock objects, see Mark Seemann's article in the October 2004 issue of MSDN®Magazine, available at Unit Testing: Mock Objects to the Rescue! Test Your .NET Code with NMock.
Figure 1 State-Based Testing
The second testing method involves your DAL calling the real database behind the scenes, which entails various potential problems. This usually means that you are performing state-based testing. You don't test logical calls between objects (namely, your DAL and the ADO.NET infrastructure), but simply perform actions and check the state of your tested objects afterward to make sure they fit your requirements. Figure 1 shows how all the objects and layers take part in the test. For more on state-based testing, see Martin Fowler's article at Mocks Aren't Stubs.
Keeping Consistent State
There are a number of things you should keep in mind when you're writing unit tests.
Always know the initial state before testing A test should always start with a predictable initial state. For example, if you're testing functionality that deletes rows from a specific database table, make sure that before the test begins you have known data in the table that you can delete in your test. This means that before each test you need to create your own initial state, be it data in the database or fresh instances of tested objects. You cannot rely on anything but the current running test to do this.
This is usually accomplished with the aid of a setup method. Think of the setup method as a constructor that is invoked for each test case in your test fixture. If you have five test cases, the setup method will be invoked five times—once before each test case starts. With NUnit, you create a setup method by marking a method with SetUpAttribute. This is the place to set up fresh instances of objects to be tested, thus decoupling each test's state from every other test's state.
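To make this concrete, here is a minimal sketch of a NUnit test fixture that uses a setup method to give every test a fresh instance. CustomerManager and its HasPendingChanges property are hypothetical names standing in for whatever class you're testing:

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerTests
{
    // Hypothetical class under test; replace with your own.
    private CustomerManager mgr;

    [SetUp]
    public void Setup()
    {
        // Runs before every [Test] method, so each test case
        // starts from its own fresh, predictable state.
        mgr = new CustomerManager();
    }

    [Test]
    public void NewManagerHasNoPendingChanges()
    {
        Assert.IsFalse(mgr.HasPendingChanges);
    }
}
```

Because the [SetUp] method recreates the object each time, no test can accidentally observe leftovers from a previous test.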
Each unit test is independent of the others A test should not depend on any other unit test to do its job, and a test should not be required to run in a specific order. Therefore, you should be able to run each of your unit tests separately in any order and get the same results every single time. The NUnit framework does not guarantee that your tests will be run in any particular order. This means that you will need to write each test as if it were the one and only test in the whole testing project.
A unit test fails or passes in a consistent, predictable manner If a valid test fails, it should continue to fail until you fix the bug that causes the failure. Likewise, if a test passes once, it should always pass until you introduce a change into the system that causes it to no longer pass. Running a unit test several times on the same compiled code and having it fail on some of them and pass on others can both lull you into a false sense of security and frustrate you to no end. Again, always know the initial state before testing.
The problem with testing against a live database is that many unit tests you write will make changes to the state of the database—changes that will be visible to all of the other unit tests that run against the same database. Suppose you have a test that deletes a row in the database. You might have a separate test that relies on the fact that a specific number of rows exists in the database before running (perhaps it checks that a read operation succeeds by checking the number of rows returned), which will fail if it was run right after your deletion test. Since this is a database, possibly shared with other developers and testers, you can't just add fresh objects into your test state using a setup method. This external state dependency has to somehow be consolidated before you run the next test. Creating a new object instance isn't going to help here; you need to actually return the database to its previous state.
All the rules I've mentioned so far underscore how important it is to keep the database state consistent between the tests. You have to undo any changes that are performed against the database in each test, even though it is this rolling back that is the most problematic and can lead to unmaintainable code.
There are several solutions to these problems. Since this article examines one solution in detail, I'll only touch briefly on the others. Fortunately, they're quite simple.
Database Restore
The perfect way to restore a database to a known state is, of course, to actually restore it from a previously made backup copy. If you run a simple RESTORE DATABASE script before each test, you could ensure the tests will be run each time on the same database. Unfortunately, though, this is quite time-consuming. If you assume that it takes five seconds to do a simple restore of a small SQL Server™ database, and multiply that by a couple hundred tests (each test triggers a setup method that restores the database) the restore process itself would add more than 16 minutes of overhead to all those tests. This is hardly a great way to spend your day, especially if you want to be able to run the tests as you develop your code. However, this solution is handy for specific cases in which you have tests that perform a significant amount of data input into the database (usually in load tests). This is typically not the case with the unit tests I am describing here, which are much simpler (insert/update/delete one or a few rows in the database each test).
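If you do take the restore route, a setup method can issue the RESTORE DATABASE script itself. The following sketch assumes a test database named TestDB with a backup file on local disk; the connection string and paths are placeholders for your environment. Note that the restore must run over a connection to master, not to the database being restored:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class RestoreBasedTests
{
    [SetUp]
    public void RestoreDatabase()
    {
        // Connect to master so TestDB can be taken offline and restored.
        using (SqlConnection conn = new SqlConnection(
            "Server=(local);Database=master;Integrated Security=SSPI"))
        {
            conn.Open();
            string sql =
                "ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;" +
                "RESTORE DATABASE TestDB FROM DISK = 'C:\\Backups\\TestDB.bak' " +
                "WITH REPLACE;" +
                "ALTER DATABASE TestDB SET MULTI_USER;";
            new SqlCommand(sql, conn).ExecuteNonQuery();
        }
    }
}
```

Each test then begins against an identical copy of the database, at the cost of those several seconds per test discussed above.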
Things get a little tricky and performance begins to degrade if you're testing against multiple databases in the same testing project (multiply that restore time by the number of databases and see the frightening result). The same is true if a number of developers are testing against the same database, as each developer's tests will interfere with everyone else's tests.
If you choose not to restore the database completely for each test, you are left with the option of using a transaction against the database operations to roll back your changes. There are several ways you can accomplish this.
ADO.NET Transaction Object
Transactions are a great way to deal with this problem. In his book Test Driven Development with Microsoft .NET (Microsoft Press®, 2004), James Newkirk talks about using transactions to undo the effects unit tests have had on your database. He explains, however, how to use ADO.NET 1.x transactions to achieve this, which in my opinion is not the cleanest way. The idea is that you explicitly open a transaction inside your test setup method and then pass that IDbTransaction object into your tested object, which uses the open connection to execute its tasks. At the end of the test, you simply call the transaction's Rollback method to undo any changes made to the database during the test.
This method is overly complex and is hard to implement and maintain for each specific project. It also requires you to modify your tested object's API to receive such a transaction. This is something that may be feasible in some systems but certainly is not possible in many other systems in which the API must remain closed. This is particularly true for systems that use COM+ transactions to achieve their database functionality. Newkirk admits that this isn't the most elegant solution, and I agree. I'm not that keen on changing my whole product's APIs just so their test effects can be rolled back. Of course, there are other ways to pass information into methods other than through method signatures, such as using static fields or thread-local storage (TLS). If you use ADO.NET transactions for your testing purposes, you might consider these as an alternative.
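The pattern looks roughly like the following sketch. CategoriesManager and the widened InsertCategory signature are hypothetical, and illustrate exactly the intrusiveness discussed above: the DAL's API has to be changed to accept the open transaction:

```csharp
using System.Data;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class AdoTransactionTests
{
    private SqlConnection conn;
    private SqlTransaction tx;

    [SetUp]
    public void Setup()
    {
        // Placeholder connection string for your test database.
        conn = new SqlConnection(
            "Server=(local);Database=TestDB;Integrated Security=SSPI");
        conn.Open();
        tx = conn.BeginTransaction();
    }

    [Test]
    public void InsertCategory()
    {
        // The DAL must be modified to accept and use the transaction.
        CategoriesManager mgr = new CategoriesManager();
        int newID = mgr.InsertCategory("MyCategory", tx);
        Assert.IsTrue(newID != 0, "returned ID should be more than zero");
    }

    [TearDown]
    public void Teardown()
    {
        tx.Rollback();   // undo everything the test wrote
        conn.Close();
    }
}
```

The rollback in the teardown erases the test's changes, but every method in the call chain has to be willing to carry that transaction parameter.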
COM+ Transaction—Inherit ServicedComponent
COM+ transactions are a great way to overcome the obstacles inherent in the previous approach. You need to somehow make your unit tests run inside a COM+ transaction and then abort that transaction during your test's teardown method (a teardown method runs at the end of your test, and is denoted in NUnit through the TeardownAttribute). Plus, your tested APIs don't have to change. It should just work. Remarkably, this is one of the easiest things to implement.
To participate in a transaction, you need a class that inherits from the System.EnterpriseServices.ServicedComponent class. Your test fixture class, which tests the DAL object, is the perfect candidate. Simply make it inherit ServicedComponent, thereby allowing it to participate in COM+ transactions using the ContextUtil class found in the System.EnterpriseServices namespace. The convenient thing about using a COM+ ServicedComponent is that to enable transactions, all you have to do is put a simple attribute on top of the class declaration. In this case, you can put up an attribute that tells COM+ that each public method that is invoked on an instance of this class must run inside a new transaction context, as shown here:
[TestFixture]
[Transaction(TransactionOption.RequiresNew)]
public class TransactionalTests : ServicedComponent
{
[Test]
public void Insert()
{
//insert something into the database
}
}
Each time you run this specific test fixture, each test will be invoked inside a transaction.
Now all you need is to make sure that after each test is run, the transaction it participated in will be rolled back. This is easily done using a [TearDown] method, which uses ContextUtil.SetAbort to roll back the transaction, as shown in Figure 2.
Alternative Testing Frameworks
There are multiple unit-testing frameworks that replace the NUnit framework and that allow you to run transactional unit tests through a simple attribute called RollbackAttribute. These frameworks use COM+ 1.5 to control the transactional unit tests, and as such they require the same set of operating systems as the ones I've mentioned.
MbUnit MbUnit is a recent and welcome addition to the testing frameworks world. It is written by Jonathan de Halleux and can be downloaded at mbunit.tigris.org. The framework includes a RollbackAttribute that uses the same techniques that have been described in this article.
It also includes a SqlRestoreAttribute that performs a database restore action for each specific test, as mentioned in the first method in the article. MbUnit also comes with many other attributes that can make a test developer's life easier, including RepeatAttribute and PerfCounterAttribute, which allow for testing against minimum or maximum performance counter values. Although it is still in its infancy, MbUnit has many benefits, including the ability to run NUnit tests.
XtUnit XtUnit is an add-on library you can download and use in your own code. It is an elegant way of adding rollback capabilities to your tests, no matter what test framework you are working with. It uses .NET interception to achieve this. It is also extensible, allowing you to easily create attributes of your own to be executed on your tests. You can get more information on this framework at teamagile.com/mainpages/downloads.html.
NunitX NunitX is a variant of the NUnit testing framework, including the GUI and console runners, and has been slightly enhanced to include a new attribute named RollbackAttribute. By applying this attribute to specific unit tests, you run the test inside a transaction context. The framework uses COM+ 1.5 SWC. The same principles you saw here are implemented in the framework. It can be downloaded from www.osherove.com.
Make Your Own Attribute
The latest version of the NUnit framework (2.2.1) includes the ability to extend it with additional attributes. This lets you add your own rollback attribute quite easily for your own tests.
Figure 2 Rolling Back Transactions
[TestFixture]
[Transaction(TransactionOption.RequiresNew)]
public class TransactionalTests : ServicedComponent
{
[Test]
public void Insert()
{
// Perform your magic against the database
CategoriesManager mgr = new CategoriesManager();
int newID = mgr.InsertCategory("MyCategory");
Assert.IsTrue(newID != 0, "returned ID should be more than zero");
}
[TearDown]
public void Teardown()
{
if(ContextUtil.IsInTransaction)
{
// Abort the running transaction
ContextUtil.SetAbort();
}
}
}
Any code, including any DAL code that you call, and any changes to the database resulting from the execution of this code, will be rolled back. Figure 3 shows the test flow using a COM+ transaction as a rollback mechanism.
Figure 3 Using a COM+ Transaction to Roll Back Changes
Simple, Easy, and Painless
There are several drawbacks to keep in mind when using this method. Because using serviced components in this fashion means that all the tests in your test fixture run inside a transaction, even tests that only perform read operations with no changes to the database will use transactions. And transactions, though not as time-consuming as a full database restore, still incur a cost. To avoid that overhead, you'd need to write a whole new test fixture just to separate the "write" tests from the "read" tests, which makes your unit testing code more complex.
If the code you are testing has specific requirements that conflict with the COM+ transaction requirements, you must assume no rollback will be available. A few common scenarios cause this conflict. First, you could have code in your DAL object that itself makes use of TransactionOption.RequiresNew. That causes the DAL object to disregard the transaction started by your test fixture and to create a new transaction—one that you cannot control from within your test. Second, you could have code in your DAL object that uses TransactionOption.Disabled or TransactionOption.NotSupported. This means the code will perform its actions outside of any given transaction context; therefore, any changes to the database cannot be rolled back using the transaction context.
Classes that inherit from ServicedComponent are required to be inside a strongly named assembly. That means your tests will have to be signed. Also, any assembly that is referenced by a strongly named assembly is required to be strongly named as well. When I just want to do some simple database tests against a simple proof-of-concept assembly, I often don't have the time or willpower for signing and naming assemblies. Furthermore, there is no way to test an assembly that isn't strongly named without first disassembling it and then reassembling it with a strong name key.
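For reference, strong naming in the .NET Framework 1.x era amounts to generating a key pair with the sn.exe tool and pointing the assembly at it. The key file name and relative path below are placeholders:

```csharp
// Generate a key pair once from the command line:
//   sn -k TestKey.snk
// Then reference it from AssemblyInfo.cs (the .NET 1.x convention):
[assembly: System.Reflection.AssemblyKeyFile(@"..\..\TestKey.snk")]
[assembly: System.Reflection.AssemblyVersion("1.0.0.0")]
```

Every assembly your test project references must go through the same process, which is exactly the friction described above.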
Additionally, it seems that, for whatever reason, the current version of NUnit (version 2.2.1) does not always play well with serviced components, throwing a variety of exceptions when running tests inside the GUI driver. For those using NUnit as a testing harness, this problem makes this method practically unusable in many cases (though it might work just fine on some machines and in some circumstances).
COM+ 1.5
So how can you overcome problems such as the inability to use the NUnit GUI runner to run the tests or the need to strongly name the assemblies? Enter COM+ 1.5. COM+ 1.5 is an enhancement to the COM+ Enterprise Services you know and love. To take advantage of the features I'm about to demonstrate, you need to be running either Windows® XP Service Pack 2 or Windows Server™ 2003 or later. This really shouldn't pose any configuration problems, since the only machines that need COM+ 1.5 in this case are the testing/build/development machines, not the customer machines. If you attempt to use these features on earlier versions of Windows, a nasty PlatformNotSupportedException will be thrown.
Among other things, COM+ 1.5 lets you use Enterprise Services without inheriting ServicedComponent explicitly with a feature called Services Without Components (SWC). Instead, all you have to do is get familiar with two really simple classes that reside in the System.EnterpriseServices namespace: ServiceConfig and ServiceDomain. These two classes let you enlist in transactions and abort them without ever needing to change your inheritance chain (that is—create a class that inherits from ServicedComponent). Figure 4 shows some very simple code that accomplishes exactly what the code in the previous samples did: it makes a test run inside a transaction.
Figure 4 COM+ 1.5 Transaction Code
[SetUp]
public void Setup()
{
// Enter a new transaction without inheriting from ServicedComponent
Console.WriteLine("Attempting to enter a transactional context...");
ServiceConfig config = new ServiceConfig();
config.Transaction = TransactionOption.RequiresNew;
ServiceDomain.Enter(config);
Console.WriteLine("Attempt succeeded!");
}
[Test]
public void Insert()
{
// Perform your magic against the database
CategoriesManager mgr = new CategoriesManager();
int newID = mgr.InsertCategory("MyCategory");
Assert.IsTrue(newID != 0, "returned ID should be more than zero");
}
[TearDown]
public void Teardown()
{
Console.WriteLine("Attempting to Leave transactional context...");
if(ContextUtil.IsInTransaction)
{
// Abort the running transaction
ContextUtil.SetAbort();
}
ServiceDomain.Leave();
Console.WriteLine("Left context!");
// Trying to access ContextUtil now will yield an exception
}
There's no ServicedComponent in sight; instead, there is a very simple use of the two classes introduced in COM+ 1.5. ServiceDomain is used to enter or leave a transactional context. Within a transactional context, you can use the ContextUtil object at any time to manipulate transactions. The static Enter method takes an argument of the ServiceConfig type, which essentially tells the transactional context the requirements of your component. In fact, it's a complete replacement of the [Transaction] attribute that you put on top of your class in the previous example. You simply specify a property on the ServiceConfig class that tells the context that you require a new transaction. And the beautiful thing is that you can enter or leave a ServiceDomain at any time, with any code between those two calls able to commit or roll back the current transaction.
You can specify which tests participate in a transaction and which don't. Right now all tests do because SetUp and TearDown are called for each test. But entering and leaving a transaction domain takes time, so it's better if tests that don't need this rollback functionality don't actually do their work inside a transaction. If you just put individual calls in each test case to open and close a transaction domain instead, you'd get better granularity, at the cost of writing more repetitive code. See the "Alternative Testing Frameworks" sidebar for information on frameworks such as XtUnit and MbUnit that provide built-in attributes with rollback abilities.
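One way to get that per-test granularity, sketched below under the same SWC assumptions as Figure 4, is to move the Enter/Leave calls into a pair of helper methods and invoke them only from the tests that actually write to the database:

```csharp
using System;
using System.EnterpriseServices;
using NUnit.Framework;

[TestFixture]
public class MixedTests
{
    // Helper pair so only tests that modify the database pay the
    // cost of transactional enlistment; read-only tests skip both calls.
    private static void EnterTransaction()
    {
        ServiceConfig config = new ServiceConfig();
        config.Transaction = TransactionOption.RequiresNew;
        ServiceDomain.Enter(config);
    }

    private static void LeaveTransaction()
    {
        if (ContextUtil.IsInTransaction)
        {
            // Abort so the test's changes are rolled back
            ContextUtil.SetAbort();
        }
        ServiceDomain.Leave();
    }

    [Test]
    public void Insert_RunsInTransaction()
    {
        EnterTransaction();
        try
        {
            // write to the database here; changes are rolled back
        }
        finally
        {
            LeaveTransaction();
        }
    }

    [Test]
    public void Read_NoTransactionNeeded()
    {
        // plain read; no transaction overhead
    }
}
```

The try/finally guarantees the domain is left even if the test fails, at the cost of a few repeated lines in each writing test.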
If your test performs a significant number of writes into the database, you run the risk of making the rollback and the transactional enlistment take a long time. In such cases, using a database restore may be faster or perhaps even preferable.
SWC is the most elegant of the three transaction-based methods I've discussed and allows the most flexibility. It is currently my favorite method for performing unit tests that involve database interactions, although the new System.Transactions namespace in the .NET Framework 2.0 may take its place in my bag of tricks (for more information on System.Transactions, see John Papa's column in the February 2005 issue of MSDN Magazine, at Data Points: ADO.NET and System.Transactions).
Unit Testing in Visual Studio 2005 Team System
Visual Studio® 2005 Team System provides a built-in unit testing framework for testing managed code, available through a new kind of project type: the test project. This is a special project in which you can create unit tests, load tests, and a variety of other kinds of tests, to exercise your application and return results. Every solution that contains a test project also contains a test run configuration file, which stores the settings used for running tests. These settings include the computer on which to run tests (local or remote), the hosting process (such as ASP.NET), and whether to gather code-coverage results. Team System also lets you run tests from either a command line or within the Visual Studio IDE. You can then see the results, along with the reasons for any failed tests, and publish the results so that they can be analyzed by other team members or incorporated into reports.
After you have run unit tests, you can see the unit test code coverage — how much of your code has been exercised by the tests you ran. Code coverage is reported to you in two ways, through statistics for what percentage of your code was actually run and through color coding of the source displayed in Visual Studio. Code that was run is highlighted in green and code not run is highlighted in red.
Two special kinds of unit tests are also available in Team System: data-driven unit tests and ASP.NET unit tests. A data-driven test is a unit test that you configure to be called repeatedly for each row of data from some data source. ASP.NET unit tests are used to exercise code in an ASP.NET application as it responds to page requests. ASP.NET unit tests are hosted inside the process of the ASP.NET application being tested.
Some new attributes in Team System are similar to what you've used in other unit-test harnesses. Instead of NUnit's TestFixtureAttribute, you now have TestClassAttribute, and instead of NUnit's TestAttribute, you have TestMethodAttribute. The same goes for SetUpAttribute and TearDownAttribute, which are replaced by TestInitializeAttribute and TestCleanupAttribute, respectively.
In Team System, you can automatically generate unit tests from the classes used in the source code you want to test. Generated unit tests contain minimal code, and need to be edited before they produce meaningful test results. However, when you edit unit tests, you are aided by the classes and methods of the Unit Testing Framework of Team System. It offers, for example, new flavors of Assert methods, which you can use to pinpoint the results that differ from what you expect. The Assert class should be familiar to anyone who has used NUnit.
Unfortunately, there are currently no special features in Visual Studio 2005 that improve database unit testing, so you'll probably still end up performing some of the tasks discussed in this article. Figure 5 shows a simple test class that uses exactly the same rollback mechanism as employed in the previous demos, using SWC. The syntax is only slightly different. As with all new features in Visual Studio 2005, changes could occur before the final release.
Figure 5 Rolling Back with SWC
[TestClass]
public class TransactionalTests2005
{
[TestInitialize]
public void Initialize()
{
ServiceConfig config = new ServiceConfig();
config.Transaction = TransactionOption.RequiresNew;
ServiceDomain.Enter(config);
}
[TestMethod]
public void UpdateCategoriesTest()
{
// Insert something into the database
}
[TestCleanup]
public void Cleanup()
{
if (ContextUtil.IsInTransaction)
{
ContextUtil.SetAbort();
}
ServiceDomain.Leave();
}
}
Conclusion
Database testing can be a challenge unless you use the right tools and techniques. Database unit testing is an integral part of the testing process. Using COM+ 1.5 makes it very easy to make the code work as you envisioned it. Many of the techniques I've described here will probably remain unchanged through the next release of Visual Studio. And while Visual Studio 2005 won't necessarily change the code you write in your tests, it will help you quickly get up and running with the unit testing process for your apps.
Roy Osherove is the Principal of Team Agile, a consultancy devoted to agile software development and .NET architecture. He also maintains a blog with related information at www.iserializable.com. You can reach Roy at Roy@TeamAgile.com.