Assessing Tester Performance

Using context-free software product measures as personal performance indicators (KPI) is about as silly as pet rocks!

Periodically a discussion of assessing tester performance surfaces on various discussion groups. Some people offer advice such as counting bugs (or some derivation thereof), the number of tests written in x amount of time, the number of tests executed, the percentage of automated tests compared to manual tests, and (one of my least favorite measures of individual performance) the percentage of code coverage.

The problem with all these measures is that they lack context and tend to ignore dependent variables. It is also highly likely that an astute tester can easily game the system and potentially cause detrimental problems. For example, if my manager measured my performance by the number of bugs found per week, I would ask how many I had to find per week to satisfy the 'expected' criteria. Then each week I would report 2 or 3 more bugs than the 'expected' or 'average' number (in order to 'exceed' expectations), and any additional bugs I found that week I would sit on in case I fell below my quota the following week. Of course, this means that bug reports are being artificially delayed, which may negatively impact the overall product schedule.
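The quota-gaming strategy described above can be sketched as a toy simulation. All the numbers here are hypothetical, and `report_with_quota` is simply an illustration of the behavior I describe, not any real team's data:

```python
# Toy simulation of gaming a bugs-per-week quota.
# The tester reports quota + 2 bugs per week (to 'exceed' expectations)
# and banks any extras for lean weeks. The total is unchanged, but
# individual reports are delayed, which can hurt the schedule.

def report_with_quota(found_per_week, quota):
    """Return the number of bugs reported each week under the gaming strategy."""
    banked = 0
    reported = []
    for found in found_per_week:
        available = banked + found
        to_report = min(available, quota + 2)  # slightly 'exceed' the quota
        banked = available - to_report          # hold the rest for later weeks
        reported.append(to_report)
    return reported

found = [8, 3, 10, 1, 6]                # actual bugs discovered each week
reported = report_with_quota(found, quota=4)
print("found:   ", found)
print("reported:", reported)            # smoothed to [6, 5, 6, 5, 6]
```

The reported numbers look steady and slightly above quota every week, so the metric rewards the tester, while the real discovery pattern (and the timely reporting of bugs found early) is lost.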

The issue at hand is the bizarre desire of some simple-minded people for an easy solution to a difficult problem. But there is no simple formula for measuring the performance of an individual. Individual performance assessments are often somewhat subjective, and influenced by external factors identified through Human Performance Technology (HPT) research such as motivation, tools, inherent ability, processes, and even the physical environment.

A common problem I often see is unrealistic goals such as "Find the majority of bugs in my feature area." (How do we know what the majority is? What if the majority doesn't include the most important issues? etc.) Another problem I commonly see is individuals over-promising and under-delivering relative to their capabilities. I also see managers who dictate the same set of performance goals to all individuals. While there may be a few common goals, as a manager I would want to tap into the potential strengths of each individual on my team. I also expect different levels of contribution from individuals depending on where they are in their career, and on their career aspirations.

So, as testers we must learn to establish SMART goals with our managers that include:

  • goals that align with our manager's goals
  • goals that align with the immediate goals of the product team or company
  • and stretch goals that illustrate continued growth and personal improvement relative to the team, group, or company goals

(This last one may be controversial; however, we shouldn't be surprised that individual performance is never constant in relation to our peer group.)

But (fair or not), for a variety of reasons most software companies do (at least periodically) evaluate their employees' performance in some manner. The key to success lies in HPT and in agreeing on SMARTer goals up front.

Comments

  • Anonymous
    May 04, 2009
    Hi Shrini, You are mistaken when you equate a technique with a recipe. A technique is a systematic process generally based on one or more heuristic patterns and/or fault models to help solve a particular problem. You're right, a person should not confuse abstraction with a technique. However, learning to create abstract models of software is extremely useful in the application of various testing techniques. For example, state transition testing is a well-known functional testing technique that requires the tester to model important states and the traversals between those states. Other 'techniques' also require in-depth analysis and abstraction in their application. Those who think techniques are as simple as following a recipe are doing it wrong!
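    (Editorially, the state transition testing mentioned above can be sketched in a few lines: model the states and allowed traversals, then derive one test per transition. The media-player model below is hypothetical, purely for illustration.)

```python
# Minimal sketch of state transition testing: a state model as a table of
# (state, event) -> next_state, plus a derived test list that exercises
# every transition exactly once. The player model is hypothetical.

TRANSITIONS = {
    ("stopped", "play"):  "playing",
    ("playing", "pause"): "paused",
    ("playing", "stop"):  "stopped",
    ("paused",  "play"):  "playing",
    ("paused",  "stop"):  "stopped",
}

def next_state(state, event):
    """Return the resulting state, or raise on an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

def all_transition_tests():
    """One test case per modeled transition: (start, event, expected)."""
    return [(s, e, t) for (s, e), t in TRANSITIONS.items()]

# Transition coverage: drive every modeled traversal once.
for start, event, expected in all_transition_tests():
    assert next_state(start, event) == expected
print(f"covered {len(TRANSITIONS)} transitions")
```

    The abstraction is doing the real work here: deciding which states and events matter is the analysis the comment describes, and the coverage loop falls out of the model almost for free.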

  • Anonymous
    May 06, 2009
    Hi Rikard, Clever man! Unfortunately, I would argue your assessment of SMART goals is short-sighted and one-dimensional.
    Specific - helps us define clear expectations between us and others (mainly our managers) of how we intend to demonstrate added value (importance) to the project/team/organization.
    Measurable - helps us define what success looks like. Just achieving our goal is great, but exceeding a goal is usually demonstrated with those fluffy, good things you mention...we call those "scooby snacks!"
    Attainable - goals that are achievable are much more psychologically motivating than goals that are clearly not attainable. There are different types of goals; some are independent and some are holistic. We also encourage people to set a variety of goals that span a spectrum from easily achievable to 'stretch' goals that are intended to self-motivate a person to learn new things and expand beyond their current capabilities. It is good to have a mix of goals.
    Realistic - the goals we set are only limited by our ambition and our capacity to achieve those goals.
    Timely - I have 6-month goals, 1-year goals, and 3-year goals. The 3-year goals are reevaluated every 6 months. We encourage new testers to think in terms of both short-term and long-term goals, and to have a healthy mix of both. Newer testers will generally have more short-term goals, but should have at least 1 long-term stretch goal. More senior testers may have several short-term goals and usually 2 or more long-term stretch goals.
    Some goals are narrow in scope with a single concrete deliverable; once such a goal is complete we move on to different goals. Other goals are ongoing (constantly improving). Either way, any successful manager needs a way to track progress toward a goal regardless of whether it is short-term or long-term, and SMART goals help both us and our managers know when we hit specific milestones (trustworthiness).

  • Anonymous
    May 07, 2009
    Hi Rikard, I absolutely agree that there are rarely one-size-fits-all goals for everyone. There may be some general team-wide or project-wide goals that apply to most people in the group, but those should be a minor part of individual goals. Oftentimes new employees need a little direction establishing goals for personal improvement and subsequent evaluation against their peer group. In those cases, managers may offer some common goals, but they should also encourage the employee to draft at least one personal stretch goal. Unfortunately, I have seen some managers use a check-list type approach to goals for all of the people who report to them. Any moron can produce a list of ill-conceived goals or objectives. Ultimately, goal setting is a personal endeavor, and is essentially a contract between you and your manager. You and your manager must both agree those goals are appropriate, and your manager should provide guidance on whether they are the right goals for your personal growth. Yes, I know that some people don't want very clear goals. Generally, the more senior the person, the fuzzier the goals. However, even senior-level people can benefit from clear individual goals, because such goals help your manager evaluate your individual performance in relation to your peer group and bring more objectivity to performance evaluations.