The Problems with Task-Based Performance Metrics in IT – Updated

Task-based performance metrics, at their root, measure someone's performance by quantity rather than quality. The attraction is how easy completed tasks are to count, but using this type of metric in IT causes several problems.

What are the problems with task-based performance metrics in IT?


They devalue harder tasks and tickets that require extra troubleshooting, unless you alter the metrics to give extra weight to more complex work. Otherwise, technicians create extra tickets for the extra work just to get credit.
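One mitigation the paragraph hints at is weighting closed tickets by complexity rather than counting them raw. A minimal sketch, where the task categories and weight values are purely illustrative assumptions, not a recommended scheme:

```python
# Hypothetical complexity weights; real values would come from the
# organization's own ticket categories and judgment.
COMPLEXITY_WEIGHTS = {
    "password_reset": 1.0,    # rote, well-documented task
    "software_install": 2.0,  # some variation machine to machine
    "escalated_issue": 5.0,   # open-ended troubleshooting
}

def weighted_score(closed_tickets):
    """Sum complexity weights instead of counting raw tickets.

    Unknown categories default to weight 1.0 (a raw count).
    """
    return sum(COMPLEXITY_WEIGHTS.get(t, 1.0) for t in closed_tickets)

# Ten password resets score the same as two escalated issues,
# so harder work is no longer devalued by a raw ticket count.
print(weighted_score(["password_reset"] * 10))  # 10.0
print(weighted_score(["escalated_issue"] * 2))  # 10.0
```

Even a scheme this simple reduces the incentive to cherry-pick easy tickets, though the weights themselves become the next thing people argue about.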


Alternatively, they drive segregation of first-, second-, and third-level technical support, so that subject matter experts are held to a different standard than general tech support. Rote tasks get completed more quickly by people who know less overall, which limits their ability to properly handle complex issues, or issues easily mistaken for routine ones.


Relying on ticket counts as a measure of performance hurts customer service: it drives employees to close tickets as quickly as possible, even when customers need more hand-holding.


Teams may automate repetitive tasks to speed them up instead of simplifying, streamlining, and improving the overall process so that fewer tasks are needed in the first place.


The shift to task-based metrics and segregated first-level support encourages the development of troubleshooting scripts, along with hiring lower-skilled call-center staff to work through those scripts quickly. Service levels go down, while the same volume of tickets is closed more cheaply.


There is an incentive to send someone a link to a how-to document and call the ticket done, regardless of how little it actually helps. Hey, they called back: another ticket! Constantly resetting someone's password becomes a positive for the help desk, a familiar task to knock out, compared with spending ten more minutes finding out why it keeps being necessary.
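The repeated-password-reset pattern above is easy to surface from ticket data, if anyone is measured on doing so. A hypothetical sketch, assuming the ticket system can export (user, category) pairs; the category name and threshold are illustrative:

```python
from collections import Counter

def repeat_reset_candidates(tickets, threshold=3):
    """Flag users whose passwords keep being reset.

    tickets: iterable of (user, category) tuples from a ticket export.
    Returns users with `threshold` or more password-reset tickets,
    i.e. candidates for a root-cause look rather than another reset.
    """
    counts = Counter(user for user, category in tickets
                     if category == "password_reset")
    return [user for user, n in counts.items() if n >= threshold]

tickets = [
    ("alice", "password_reset"),
    ("bob", "software_install"),
    ("alice", "password_reset"),
    ("alice", "password_reset"),
]
print(repeat_reset_candidates(tickets))  # ['alice']
```

Under a pure ticket-count metric, nobody is rewarded for running a report like this; the repeat tickets are a feature, not a bug.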


Preventative maintenance is neglected when system administrators' performance is tracked this way. Even when it remains a priority to management, the goals focus on how many patches were applied, how many servers were touched, or how long the work took, not necessarily the quality of the work.


When the number of software defects found in testing is the metric by which performance is measured, expect every trivial bug to be reported, and even gaps in documentation to be logged as bugs, to drive the count up.


When the number of software test steps completed, or the percentage complete, is the metric, the focus shifts to finishing tests regardless of their importance to the software's critical functions. "We ran through 50 reports without error. Sure, that data import failed, but look how we're 95% through!"