Measuring Test Automation ROI

I just finished reading Implementing Automated Software Testing by E. Dustin, T. Garrett, and B. Gauf. Overall it is a good read that offers well-thought-out arguments for beginning an automation project and strategic perspectives for managing one. The first chapter makes several excellent points, such as:

  • Automated software testing “is software development.”
  • Automated software testing “and manual testing are intertwined and complement each other.”
  • And, “The overall objective of AST (automated software testing) is to design, develop, and deliver an automated test and retest capability that increases testing efficiencies.”

Of course, I was also pleased to read the section on test data generation, since I design and develop test data generation tools as a hobby. The authors correctly note that random test data increases flexibility, improves functional testing, and avoids the limited scope and error-prone nature of manually produced test data.
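
As a minimal sketch of the idea (not the authors' tooling, and not my own tools; every name here is invented for illustration), a randomized test data generator can be as small as this:

```python
import random
import string

def random_strings(count, min_len=0, max_len=64,
                   alphabet=string.printable, seed=None):
    """Yield `count` random strings to feed to the code under test.

    A fixed seed makes failing inputs reproducible, and randomized
    inputs reach corner cases (empty strings, control characters)
    that hand-written fixtures tend to miss.
    """
    rng = random.Random(seed)
    for _ in range(count):
        length = rng.randint(min_len, max_len)
        yield "".join(rng.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Print a few samples; in practice each string would be passed
    # to a parser, validator, or other input-handling routine.
    for s in random_strings(3, seed=42):
        print(repr(s))
```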

There is also a chapter on presenting the business case for an automation project by calculating a return on investment (ROI) measure via various worksheets. I have two essential problems with ROI calculations within the context of test automation. First, if business managers don't understand the value of automation within a complex software project (especially one that will go through multiple iterations), they should read a book on managing software development projects. I really think most managers understand that test automation would benefit their business (in most cases). I suspect many managers have experienced less-than-successful automation projects but don't understand how to establish a more successful automation effort. I also suspect really bright business managers are not overly impressed with magic beans.

Magic beans pimped by a zealous huckster are the second essential problem with automation ROI calculations. Let's be honest: the numbers produced by these worksheets or other automation ROI calculators are simply magic beans. Why do I make this statement? Because the numbers plugged into the calculators or worksheets are ROMA data. I mean, really, how many of us can realistically predict the number of atomic tests for any complex project? Do all tests take the same amount of time, and will all tests be executed for the same number of iterations? Does it take the same amount of time to develop every automated test, and how does one go about predicting a realistic running time for all automated tests? And of course, how many of those tests will be automated? (Actually, that answer is easy: the number of automated tests should be 100% of the tests that should be automated.)
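
To see why such calculators produce magic beans, here is a minimal sketch assuming a generic worksheet-style formula, ROI = (manual effort avoided − automation investment) / investment. Every input below is an invented guess, which is exactly the point: equally "plausible" guesses swing the answer from a handsome profit to a heavy loss.

```python
def automation_roi(num_tests, manual_mins_per_run, runs,
                   dev_hours_per_test, hourly_rate):
    """Worksheet-style ROI: (savings - investment) / investment."""
    savings = num_tests * (manual_mins_per_run / 60.0) * runs * hourly_rate
    investment = num_tests * dev_hours_per_test * hourly_rate
    return (savings - investment) / investment

# Optimistic guesses: 10 manual minutes per test, 30 regression runs,
# 2 hours to automate each test.
print(automation_roi(500, 10, 30, 2.0, 100))   # 1.5  -> a 150% return

# Pessimistic guesses for the very same project: 5 minutes per test,
# 10 runs, 4 hours to automate each test.
print(automation_roi(500, 5, 10, 4.0, 100))    # -0.79 -> a 79% loss
```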

Personally, I think test managers should not waste their time trying to convince their business manager of the value of a test automation project, especially with magic beans produced from ROMA data. Instead, test managers should start helping their team members think about ROI at the level of individual tests. In other words, teach your team how to make smart decisions about which tests to automate and which tests should not be automated because they can be tested more effectively via other approaches.
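
As one hypothetical illustration of that per-test thinking (this is a sketch of mine, not a formula from the book or from the follow-up post), a team might weigh expected manual effort saved against the cost of building and maintaining the automated test:

```python
def automation_score(runs_expected, manual_minutes,
                     automation_hours, stability):
    """Crude per-test heuristic: manual hours saved, discounted by how
    stable the tested interface is (stability in [0, 1]), divided by
    the hours needed to automate. Scores well below 1 suggest the test
    is better run by hand or covered another way. All weights invented.
    """
    saved_hours = runs_expected * (manual_minutes / 60.0) * stability
    return saved_hours / automation_hours

# A stable API check run on every build scores far higher than a
# volatile UI flow that will only run a handful of times.
print(automation_score(runs_expected=200, manual_minutes=15,
                       automation_hours=3, stability=0.9))   # 15.0
print(automation_score(runs_expected=5, manual_minutes=15,
                       automation_hours=8, stability=0.4))   # 0.0625
```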

In my next post I will outline some factors that testers and test managers can use to help decide which tests to consider automating. The bottom line here is that an automated test should provide significant value to the tester and the organization, and should help free up the tester's time in order to increase the breadth and/or scope of testing.

Comments

  • Anonymous
    August 24, 2009
    "In other words, teach your team how to make smart decisions about what tests to automate and what tests should not be automated because they can be more effectively tested via other approaches." Well said! If you need an ROI analysis to convince business management that test automation is a good thing when used intelligently, than you have already lost. Automation is just one of many techniques that good testers have in their arsenal.  Knowing when (and when not) to use automation as important as knowing how. -joe

  • Anonymous
    August 25, 2009
    Hi, I agree, all the concerns you stated are valid. However, I didn't see the main point of concern. Anyone can somehow automate test cases, and their number is not a problem. The problem begins when you try to keep using them against a continuously changing application. That is why Maintenance and Robustness are as important as Coverage. "I suspect many managers have experienced less than successful automation projects but don't understand how to establish a more successful automation effort." For sure, they were unaware of those non-business requirements for Test Automation. Thanks.
  • Anonymous
    August 25, 2009
    Hi Joe, thanks for your comments. Hi Albert, thank you also for your feedback. A continuously changing application is one of many risks in a software project, and it could also affect an automation effort. As a risk, I suspect the business manager would expect the test manager to figure out how best to mitigate this potential problem in the tactical implementation of the automation strategy. (Good managers solve problems; great managers prevent problems!) Of course, a changing application is one of the factors we need to take into consideration when deciding which tests to automate, and it will be discussed further in the next post. But often when testers refer to a "continuously changing application" they mean fluctuations in the UI design. There is a multitude of functional automated tests that can be designed and executed completely independently of the UI. In fact, the most effective automated functional tests, those designed to evaluate computational logic, often run below the UI layer. So, if the UI is in constant flux, then perhaps behavioral testing and UI automation need to occur later in the cycle. If the underlying architecture is in constant flux, then the project is probably doomed anyway.
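
As a minimal sketch of the kind of below-the-UI test described above (the business rule here is hypothetical; in a real project it would be imported from the product's logic layer rather than defined in the test file):

```python
import unittest

def compute_discount(order_total, is_member):
    """Hypothetical business rule standing in for real product logic."""
    rate = 0.10 if is_member else 0.0
    if order_total >= 100:
        rate += 0.05
    return round(order_total * rate, 2)

class DiscountRules(unittest.TestCase):
    # These assertions exercise computational logic directly, so a
    # redesigned UI cannot break them.
    def test_member_large_order(self):
        self.assertEqual(compute_discount(200, True), 30.00)

    def test_guest_small_order(self):
        self.assertEqual(compute_discount(50, False), 0.00)

if __name__ == "__main__":
    unittest.main()
```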

  • Anonymous
    August 26, 2009
    Hi Justin, great article. I have spoken a lot about combinatorial testing at conferences, using advanced features of our tool PICT. We have also had tremendous success using covering arrays in combinatorial testing situations. Tools and techniques are generally very effective when people are properly trained to use them correctly and in the right context. Although we haven't done a study such as yours with combinatorial testing, we have data regarding detection of latent issues, coverage data, and also time/cost savings. It's great to see more empirical data on the subject. Regarding my probabilistic stochastic test data generation method, I have only collected anecdotal evidence, and the approach has been presented at several conferences. My Babel tool is used by some teams within Microsoft, and SDETs here have reported finding several string parsing issues. (I am working on an update to that tool this weekend as I wind down my vacation.) I haven't heard of any real skepticism about my approach. My approach has been peer reviewed, and was also reviewed at a conference in Dusseldorf, Germany. It will also be published in the Proceedings of CONQUEST 2008, the 12th International Conference on Quality Engineering in Software Technology. I am always looking for constructive feedback, and I look forward to chatting with you more. Of course, no testing approach or technique is perfect, but I suspect there are those who will criticize and ridicule any testing approach or technique with arguments based on faulty/biased experiments, one-off (generally out-of-context) examples, or emotional whining. :-)
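
For readers unfamiliar with the combinatorial testing mentioned in this thread, here is a rough sketch of the pairwise idea (a naive greedy selection, not PICT's actual algorithm, and the parameter values are invented): every pair of parameter values gets covered by far fewer tests than exhaustive combination would require.

```python
from itertools import combinations, product

def pairs(test):
    """Every (parameter-index, value) pair a single test case covers."""
    return set(combinations(enumerate(test), 2))

def pairwise_suite(domains):
    """Naive greedy pairwise selection (not PICT's algorithm; fine only
    for small models). domains: one list of values per parameter."""
    candidates = list(product(*domains))
    uncovered = set().union(*(pairs(t) for t in candidates))
    suite = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(candidates, key=lambda t: len(pairs(t) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite

domains = [["IE", "Firefox", "Chrome"],
           ["XP", "Vista", "Win7"],
           ["en-US", "de-DE", "ja-JP"]]
suite = pairwise_suite(domains)
# Roughly 9-10 tests cover every value pair, versus 27 exhaustive combos.
print(len(suite), "of", len(list(product(*domains))))
```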