KPIs and Metrics

Hey guys,

Metrics and KPIs can be a tricky thing to get adopted in a team. I think a general way of getting them accepted is to make sure they are used to help the team’s progress, not as a ‘big brother’ mechanism.

A lot of times, I think the most useful metrics already exist and are being gathered manually. Implementing them as KPIs just helps reduce the tedious amount of work necessary to report on them.

I got some good interest in my last KPI blog posting, so I thought I would try to post about a new KPI every week.

Tentatively, the list of KPIs I want to blog about is:

  • % code coverage – this one is pretty easy to pull out of the warehouse; the idea is to show what code coverage percentage we are achieving as a team, and what the trend is: are we covering more this week than last?
  • code churn – this goes hand in hand with the one above: how much code is changing in our system, and what is the trend – more or less than the previous week?
  • test run wall time – as a whole, how long is it taking our tests to run? This one may be a trickier KPI to implement. When I worked on Windows Server 2003 as a developer, our test team manually built a report that showed how long their test run took to complete. It was also a very primitive way of measuring the performance of our code. Because the report didn’t have great trending, nobody noticed our test runs taking longer and longer to execute, so it took longer than necessary for us to realize there was a fairly simple performance bottleneck in our code. I’d like to create a KPI to measure test runs as a whole; I’d also like to create individual KPIs for each test in the system. The latter might require delving into SQL Server’s DTS object model.
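To illustrate the trending idea behind all three KPIs, here is a minimal sketch that computes week-over-week deltas for a KPI series. The numbers are made-up sample data; in practice the values would come out of the warehouse rather than a hard-coded list.

```python
# Hypothetical weekly samples for one KPI (e.g. % code coverage), oldest first.
# In a real setup these would be queried from the Team Foundation Server warehouse.
coverage_by_week = [62.0, 64.5, 63.8, 66.1]

def week_over_week_deltas(samples):
    """Return the rounded change between each consecutive pair of samples."""
    return [round(curr - prev, 1) for prev, curr in zip(samples, samples[1:])]

deltas = week_over_week_deltas(coverage_by_week)
trend = "up" if deltas[-1] > 0 else "down"
print(deltas)  # [2.5, -0.7, 2.3]
print(trend)   # up
```

The same delta calculation works for code churn or test run wall time; only the sign you want to see changes (coverage should trend up, wall time should not).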

That set will get us started. Once we get a good set of KPIs together, I’ll do some posting about how to update the cube on a schedule to keep the KPIs up to date, and also post about the various ways we can display these KPIs.

If anyone has any ideas for metrics, please let me know.

Thanks!

Eric.

Comments

  • Anonymous
    July 17, 2006
    One useful "quality" metric is "Complexity Debt", or "over-complexity". If you measure complexity at every level (code, class, package, assembly, ...) and set thresholds, you can then get a measure of the over-complexity.

    This is a good counterbalance to the other "process" metrics, e.g. to make sure that we're not meeting schedules and coverage targets at the expense of structural complexity, which will slow everything down on future iterations.
  • Anonymous
    July 17, 2006
    I like that idea - what is the best way to measure complexity? Would you use something like cyclomatic complexity or function points?

    Thanks,

    Eric.
  • Anonymous
    July 17, 2006
    We use cyclomatic complexity at the lines-of-code level, and extend the same principle up through the higher levels by counting the number of edges in the dependency graph at each level of design breakout. For example, for a class we use the number of inter-method dependencies for the methods of that class; for a package, the number of inter-class dependencies for the classes contained by that package. The same works for non-leaf packages, assemblies, etc.

    By setting a threshold at each level we get a normalized degree of over-complexity for every item. We also like to relate this number to the amount of code contained by the item, so that we can realistically compare e.g. a method-level problem with a package-level problem.
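    The normalized over-complexity idea above can be sketched like this. The item names, edge counts, and threshold are all made-up illustrative values, not the output of a real analysis tool:

    ```python
    # Hypothetical (name, dependency_edge_count, lines_of_code) triples for one
    # level of the design breakout, e.g. the classes inside a package.
    items = [
        ("OrderParser",     12, 300),
        ("OrderValidator",   4, 120),
        ("OrderRepository", 25, 450),
    ]

    EDGE_THRESHOLD = 10  # assumed acceptable edge count at this level

    def over_complexity(edges, loc, threshold=EDGE_THRESHOLD):
        """Dependency edges above the threshold, normalized by the amount of
        code in the item so that problems at different levels (method vs.
        package) can be compared on a common scale."""
        excess = max(0, edges - threshold)
        return excess / loc

    for name, edges, loc in items:
        print(name, round(over_complexity(edges, loc), 4))
    ```

    Items under the threshold score zero, so only genuinely over-complex items show up, and dividing by size keeps a huge package from dwarfing a small but badly tangled method.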
  • Anonymous
    July 19, 2006
    David Lemphers blogged about a brainstorm he had recently to use Team Foundation Server to create...