Notes on stable automation

In this post I'd like to touch on the subject of Automation Stability, which is crucial for any quality testbed. What sort of stability are we after? Simply put: reducing the number of incomplete test results you get from testcase crashes, UI timing issues, and other non-product failures. More generally, any case where the testcase terminates without completing its job of verifying the application you are testing.

First, let's do the math. Say your automation completes without a problem 80% of the time, and each of the remaining failures takes 10 minutes to analyze. For 100 testcases we get:

(1 - 0.80) * 100 = 20 failures, and 20 failures * 10 minutes = 200 minutes, which is over three hours of analysis.
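
For those who like to see it as code, here is that same arithmetic as a small Python sketch (the numbers are just the example figures from above):

    # Back-of-the-envelope cost of unstable automation.
    completion_rate = 0.80      # fraction of runs that finish cleanly
    total_testcases = 100
    minutes_per_failure = 10    # analysis time per incomplete run

    failures = (1 - completion_rate) * total_testcases
    analysis_hours = failures * minutes_per_failure / 60

    print(f"{failures:.0f} failures, {analysis_hours:.1f} hours of analysis")
    # prints: 20 failures, 3.3 hours of analysis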

Now if you are in a more mature organization, you probably have on the order of hundreds (if not thousands) of testcases, and depending on the domain, 10 minutes might not be much analysis time at all. My point is that testers should strive to keep all of their tests stable, if for no other reason than to make their own jobs easier.

In the equation above, you'll notice three variables, each of which plays a crucial role.

Time

The time it takes to identify a non-product failure and modify your testcase is important, and there are two key ways to target this metric. The first is proper use of logging: by logging which 'scenario' or 'step' your testcase is on, you can much more quickly narrow down where in the code the failure occurred.
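
As a rough sketch of what I mean (the test, step names, and logger below are hypothetical, built on Python's standard logging module):

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("checkout_test")

    def log_step(step):
        # Record which scenario step the testcase has reached.
        log.info("STEP: %s", step)

    def test_checkout():
        log_step("log in as test user")
        # ... UI driver calls ...
        log_step("add item to cart")
        # ... UI driver calls ...
        log_step("submit payment")
        # If the run dies here, the last STEP line in the log points
        # straight at the payment flow rather than leaving you to guess.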

Another thing to consider is how the testcase code itself is structured. Just like 'regular' software, test automation should be built with proper modularization and design patterns. It is easy to bang out a plethora of automation that is just a script, but the problem lies in maintenance: if you copy and paste the same 10 steps into every single testcase and step 7 changes, now you need to update your entire testbed. If those setup steps were baked into a class library, however, you would only need to make the change once.
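
A minimal sketch of the idea, with made-up class and step names (assume 'driver' is whatever handle your UI framework gives you):

    class AppSetup:
        # Shared setup steps, written once instead of pasted into every test.
        def __init__(self, driver):
            self.driver = driver

        def run(self):
            self.launch_app()
            self.log_in()
            self.load_test_data()
            # ... the rest of the shared steps; change step 7 here, once ...

        def launch_app(self):
            pass  # driver calls would go here

        def log_in(self):
            pass

        def load_test_data(self):
            pass

    def test_purchase(driver):
        AppSetup(driver).run()   # one line replaces the pasted 10 steps
        # testcase-specific actions and verification go here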

Core failure rate

Oh gosh. Rather than opening that can of worms here, I'll devote an entire post to this subject later.

Number of testcases

When a lot of testers start out, there is a mantra of 'more tests mean better coverage', but that is not always the case. Redundant testcases will find redundant bugs, at the cost of more time spent maintaining that automation. Finding the right tests to keep and throwing out the dupes is a skill well worth having.
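
One hedged illustration of hunting for dupes: if you can record which features (or code areas) each testcase exercises, exact overlaps become easy candidates for review. The testcase names and footprints below are invented:

    from collections import defaultdict

    # Map each testcase to the set of features it exercises.
    coverage = {
        "test_login_basic": frozenset({"auth", "session"}),
        "test_login_retry": frozenset({"auth", "session"}),   # same footprint
        "test_checkout":    frozenset({"auth", "cart", "payment"}),
    }

    # Group testcases that cover exactly the same footprint.
    by_footprint = defaultdict(list)
    for name, footprint in coverage.items():
        by_footprint[footprint].append(name)

    for tests in by_footprint.values():
        if len(tests) > 1:
            print("Possible duplicates:", tests)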

Writing quick, stable, and thorough automation could easily fill a book, but for now I'll keep it to this blog post. If you have anything you would like to add, please feel free to post a comment. Thanks!