How long will it take?
When a team gets close to shipping a product and discovers a critical defect that must be fixed, one of the first questions the management team asks is, "How long will it take?" From a testing perspective, they generally want to know two things: what has to be done to reduce perceived risk (or increase their confidence that the fix hasn't introduced new defects), and how long that will take. Unfortunately, many teams take a swag at the time element and proceed to beat on the product until the magically agreed-upon time expires.
I have seen a lot of test case templates, and one field I often see omitted from the template (or ignored by the test designer) is duration: a reasonable approximation of how long it should take an experienced tester to complete a particular test, whether it is a discrete functional test or a complex user or business scenario. Adding an estimated completion time to each test allows us to better approximate (within an hour or so) how long it will take to execute a particular test suite, or a portion of that suite.
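To make the idea concrete, the sketch below uses a hypothetical TestCase class (the name and fields are illustrative, not from any particular framework or template) with an EstimatedDuration field; summing those estimates yields a rough execution time for the whole suite.

using System;
using System.Collections.Generic;

namespace TestingMentor.Examples
{
    // hypothetical test case record; the duration field holds the
    // estimated time for an experienced tester to complete the test
    class TestCase
    {
        public string Id;
        public int Priority;
        public TimeSpan EstimatedDuration;

        public TestCase(string id, int priority, TimeSpan estimatedDuration)
        {
            Id = id;
            Priority = priority;
            EstimatedDuration = estimatedDuration;
        }
    }

    class SuiteEstimateExample
    {
        static void Main(string[] args)
        {
            List<TestCase> suite = new List<TestCase>();
            suite.Add(new TestCase("TC-001", 1, TimeSpan.FromMinutes(10)));
            suite.Add(new TestCase("TC-002", 2, TimeSpan.FromMinutes(25)));
            suite.Add(new TestCase("TC-003", 1, TimeSpan.FromMinutes(15)));

            // sum the estimated durations to approximate suite execution time
            TimeSpan total = TimeSpan.Zero;
            foreach (TestCase test in suite)
            {
                total += test.EstimatedDuration;
            }
            Console.WriteLine("Estimated suite execution time: " + total);
        }
    }
}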
For automated tests written in C#, getting the time for a test to complete is now simple with the Stopwatch class introduced in the .NET Framework 2.0. The Stopwatch class is exactly what the name implies: it provides methods and properties to accurately measure the elapsed time between starting and stopping a stopwatch instance. The following code snippet provides a simple example of how a Stopwatch instance might be used in a functional or behavioral automated test.
using System;
using System.Diagnostics;

namespace TestingMentor.Examples
{
    class StopwatchExample
    {
        static void Main(string[] args)
        {
            // initialize a new stopwatch instance
            Stopwatch myTotalTestTime = new Stopwatch();

            // start the stopwatch
            myTotalTestTime.Start();

            // test code goes here - simulated by a sleep
            System.Threading.Thread.Sleep(3500);

            // stop the stopwatch
            myTotalTestTime.Stop();

            // log the elapsed time to the log file
            // (simulated by writing to the console window in this example)
            Console.WriteLine("Total test time: " +
                GetElapsedTestTime(myTotalTestTime.Elapsed));
        }

        // format the elapsed time as hours:minutes:seconds:hundredths
        private static string GetElapsedTestTime(TimeSpan timeSpanObject)
        {
            string time = String.Format("{0:00}:{1:00}:{2:00}:{3:00}",
                timeSpanObject.Hours, timeSpanObject.Minutes,
                timeSpanObject.Seconds, timeSpanObject.Milliseconds / 10);
            return time;
        }
    }
}
It is also possible to create several instances, or to start and stop several stopwatches within a single test, to measure not only the duration of the complete test but also how long it takes to complete a particular task within the test. This is extremely valuable for performance testing (a test used to determine the time required to perform and complete a particular task), or for stress testing to measure mean time to failure (MTTF) and mean time between failures (MTBF).
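As a rough sketch (the sleep calls below are placeholders for real test steps, and the task names are illustrative), one stopwatch can time the whole test while a second is reset and reused around each individual task:

using System;
using System.Diagnostics;
using System.Threading;

namespace TestingMentor.Examples
{
    class MultipleStopwatchExample
    {
        static void Main(string[] args)
        {
            // one stopwatch for the whole test, one reused per task
            Stopwatch totalTestTime = Stopwatch.StartNew();
            Stopwatch taskTime = new Stopwatch();

            // first task (simulated by a sleep)
            taskTime.Start();
            Thread.Sleep(1200);
            taskTime.Stop();
            Console.WriteLine("Task 1 time: " + taskTime.Elapsed);

            // reset and reuse the same stopwatch for the next task
            taskTime.Reset();
            taskTime.Start();
            Thread.Sleep(800);
            taskTime.Stop();
            Console.WriteLine("Task 2 time: " + taskTime.Elapsed);

            totalTestTime.Stop();
            Console.WriteLine("Total test time: " + totalTestTime.Elapsed);
        }
    }
}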
Adding a stopwatch to the tests in your automated test suite enables you to accurately determine not only the time required to execute a complete test suite (such as a build verification test suite or an automated regression suite), but also the time required to execute a particular subset of tests, such as all priority 1 tests or the tests in a particular functional area.
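Reusing the hypothetical TestCase type from the earlier sketch, estimating such a subset reduces to a filtered sum; the helper below is illustrative only:

// assumes the hypothetical TestCase type from the earlier sketch
static TimeSpan EstimateSubset(List<TestCase> suite, int priority)
{
    TimeSpan total = TimeSpan.Zero;
    foreach (TestCase test in suite)
    {
        // include only tests matching the requested priority
        if (test.Priority == priority)
        {
            total += test.EstimatedDuration;
        }
    }
    return total;
}

// usage: Console.WriteLine("Priority 1 tests: " + EstimateSubset(suite, 1));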
Comments
Anonymous
July 25, 2007
Yeah - Stopwatch is a pretty nifty thing. Used it in our dev code only, though. We use VSTT for our testing and we get the time spent on each test method automatically. I guess most test frameworks do that for you. But having the stopwatch helps if you want to debug and determine the lazier parts of your test, i.e. which parts are taking longer than anticipated. We did this and found an unexpected culprit!
Anonymous
July 26, 2007
Discovery of a (any) bug at any stage of development means an "unknown" amount of dev and test effort is required to get a sense of "confidence" that "this has been fixed". Automated tests at the unit level/BVT style are one way to "get a sense of confidence" that the defect fix did exactly what was intended, nothing more, nothing less ... But note that you are covering only the code part ...
Shrini
Anonymous
July 26, 2007
Hi Shrini,

Your comment is completely tangential to the post, but I appreciate your comments (even though I don't agree with you sometimes). I would argue there are some defects for which we can reasonably predict (given the appropriate resources and bandwidth) how long it will take to effect a fix and test the fix to restore a similar (or potentially greater) level of confidence or reduced risk, derived from historical reference or experience. (Confidence does not imply perfection, or that testing has found all defects.) But I do agree there are some types of defects (memory leaks, etc.) for which we cannot predict how long it will take to fix and test.

To your second point, well-designed automated test suites at the unit, BVT, or regression level not only demonstrate "the defect fix did exactly what was intended," but can also demonstrate the fix did not destabilize other collateral features or functionality of the product. So, in fact, they can do something more! (Of course, this also assumes that the tester is smart enough and skilled enough to understand the fix, understand the implications of the fix, has in-depth domain and system knowledge to understand dependencies and potential collateral areas that may be affected by the fix, etc., and designs any additional tests that may be required to adequately assess the implications of the fix.)

But this post is about a test design consideration (for either manual or automated tests): including the duration of tests so that professional testers can better estimate specific tasks. It also provides a simple illustration of the Stopwatch class in the .NET Framework for use in automated tests (if the automated test harness or framework does not include that functionality). As a professional tester, I find it much better to go into a meeting with factual, qualified data to back up my estimation or position rather than trying to defend a point based on personal emotions, wild speculation, or nebulous excuses.