You get what you measure - and bring your boots.

Elisabeth Hendrickson recently posted a roll-up of several interesting articles about software quality metrics.  It's well worth the read if you're managing a team and looking for ways to track progress.

I've been thinking a lot about metrics lately.  I believe the responsibility of the QA manager is to provide insight into the following 3 aspects of the project:

  1. Quality - what state is your product in?
  2. Confidence - how sure are you that your measurement of quality is accurate?  How do you know?
  3. What's left? - how much is left to do before you can ship, and what resources will it take?

I've mentioned confidence in quality a few times in past posts, but I don't believe I've ever really explained my thoughts around the idea.  It's one thing to run tests and report a high pass rate.  It's an entirely different thing to stand up in front of your upper management team and put your name on the dotted line stating that your product is ready to ship.  In the former case, you've run tests that report a "green light" but may or may not have actually exercised the product.  Clearly, the latter is more valuable.

A single metric alone will tell you nothing.  For example, a high pass rate could mean your product is ready to ship, or it could mean your tests are hard-coded to return true in every case.

Metrics combined and correlated will tell you a little more.  For example, a high pass rate with high code and arc coverage should give you more confidence that your measurements are reasonably accurate.  Combine that with other factors such as code churn reports and bug metrics (e.g. incoming rate, number of outstanding "resolved" bugs, etc.), and you start to see a much more "real" picture of your product's state.  This is something Team Foundation Server can really help with (yes, I had to insert the sales pitch here).  Read Sam's book for a great explanation.
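To make the idea concrete, here's a minimal sketch of what correlating a few signals might look like.  The metric names and thresholds below are entirely made up for illustration - the point is simply that disagreement between metrics is a warning sign, not that any one number matters on its own.

```python
# Toy illustration: no single number below is meaningful by itself,
# but when the metrics don't corroborate each other, that's worth a look.

def assess_confidence(pass_rate, code_coverage, churned_files, resolved_unverified_bugs):
    """Return warnings when the metrics disagree.

    All thresholds here are arbitrary examples, not recommendations.
    """
    warnings = []

    # A green test run means little if the tests barely touch the code.
    if pass_rate > 0.95 and code_coverage < 0.50:
        warnings.append("High pass rate but low coverage - are the tests exercising the product?")

    # Heavy recent churn means yesterday's results may no longer apply.
    if churned_files > 100:
        warnings.append("Heavy code churn - existing test results may be stale.")

    # Resolved-but-unverified bugs are work the pass rate doesn't see.
    if resolved_unverified_bugs > 25:
        warnings.append("Many 'resolved' bugs still awaiting verification.")

    return warnings


if __name__ == "__main__":
    for w in assess_confidence(pass_rate=0.98, code_coverage=0.42,
                               churned_files=180, resolved_unverified_bugs=30):
        print("WARNING:", w)
```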

Once you know where you are and how sure you are of that, the final step is to assess how much work remains to get where you need to be.  Think of running a QA project like going for a hike: it's much better to plan your trip, bring along the necessary resources (water, a compass, a map, hiking boots, etc.), and follow the path from start to finish without wandering off into the weeds.  Take away any of those aspects, and you can find yourself standing in a big pile of poison ivy before you realize it, with no clue how to get back to the trail - or, in the worst case, no pertinent information to pass along to the rescuers when you call for help!

So don't get lost - set up a good metrics program quickly and find your way back to the trail!

Do you track quality metrics?  If so, which metrics do you track?  Which do you find most useful?