Measuring agility: what to measure? Which agile indicator? (2)



Previously, in measuring agility...

What can we measure to assess our ability to become more agile? And what good will it do us?

In the previous article, we talked about three agile indicators: the evolution of the added value perceived by the users, the evolution of the cycle time and the evolution of the traversal time.

Here are three more.

Agile indicator #4: The evolution of the number of reported anomalies

Watching the evolution of the number of reported anomalies can help us understand whether the work we are doing as a team, program or organization is consistent with the level of quality we want for our users.

What should this agile indicator take into account? Only the anomalies reported by users in production? Also those detected by the team, including the Product Owner during pre-delivery checks? Or by the continuous integration platform, when corrupted code has been pushed without prior verification? There are several schools of thought, so it is up to you to decide where to draw the line.

No code or process is ever 100% free of anomalies. Even the languages on which our programs are built are constantly being improved, for reasons of functionality or performance. The introduction of new code means the potential introduction of new bugs.

The question to ask is: how can we produce as few anomalies as possible over time, so that we continue to satisfy our users and our technical debt does not explode?

Measuring agility with an agile indicator: The evolution of the number of reported anomalies

Why measure it?

An increase in the number of reported anomalies over time can indicate a decrease in the quality of what we produce, which makes us lose agility (less ability to respond to user needs, for example). We then need to understand the root causes of this evolution, and implement the appropriate corrective actions.

How to measure it?

This could be a measurement, once per period (week, month, quarter...), of the number of anomalies reported during that period, plotted on a graph like the one above.
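As a minimal sketch of that per-period count (assuming anomaly reports are available as a hypothetical list of timestamped records), grouping by month could look like this:

```python
from collections import Counter
from datetime import date

# Hypothetical anomaly report dates: one entry per reported anomaly.
reports = [date(2023, 1, 5), date(2023, 1, 20), date(2023, 2, 3),
           date(2023, 2, 14), date(2023, 2, 28), date(2023, 3, 9)]

# Count anomalies per month ("YYYY-MM"), then print the evolution.
per_month = Counter(d.strftime("%Y-%m") for d in reports)
for month in sorted(per_month):
    print(f"{month}: {per_month[month]} reported anomalies")
```

Feeding the monthly counts into any plotting tool then gives the evolution graph described above.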

How to improve it?

The automation of a certain number of tests (unit, integration, functional...) can help identify anomalies as early as possible, and therefore correct them before they disturb users.

Agile indicator #5: The evolution of the code coverage rate

Code coverage, in a software context, is the percentage of the product's source code that is executed by your automated tests. Let's say your product has 1,000 lines of code. Your tests are automated. Once executed, they exercise 700 lines of that code (leaving the other 300 lines in doubt). Your code coverage is 70%.
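The arithmetic behind that figure is simple; as a sketch, using the article's numbers:

```python
def coverage_rate(lines_executed: int, lines_total: int) -> float:
    """Return code coverage as a percentage of lines executed by tests."""
    return 100 * lines_executed / lines_total

# The example above: 700 of 1,000 lines exercised by the automated tests.
print(coverage_rate(700, 1000))  # 70.0
```

In practice you would not compute this by hand: dedicated tools (mentioned below) report it for you, often broken down per file or module.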

This is a more technical agile indicator than the previous ones. But even this one could be adapted to non-technical teams. How? For example, by measuring the proportion of steps in a process that have an associated verification procedure. In our HR team: do we have a checklist for the qualification step? And for the interview stage with the candidate?

The evolution of the code coverage rate - an agile indicator to measure

It is also an agile indicator that is perhaps more controversial than the previous ones, because it is easier to "fake": simply adding unnecessary tests inflates the score, without actually reducing the number of untested cases in any rational way.

But I mention it because, very concretely, within organizations of various sizes that I have supported, I have seen a real link between code coverage and a team's ability to be agile, as long as the tests are well written, for a good reason, and in a relevant order and quantity.

Why measure it?

Better code coverage normally means an increased ability to detect anomalies in the code, and therefore increased reactivity to correct them before the faulty code is even pushed to the repository. In the long run, it also means less time spent retesting the same procedures, and therefore more time spent creating new value for users. An agile team should therefore aim to improve its code coverage rate up to a certain point, and to maintain it at a level it considers satisfactory over the long term.

How to measure it?

Code coverage is measured by dedicated tools, which usually vary depending on the technology your product is based on.

How to improve it?

There is no magic: if the coverage rate seems insufficient, you should probably spend more time implementing the tests covering your code, and therefore generally less time developing new features. It's a fine balance to strike. You probably don't need to aim for 100% coverage. On the other hand, you should prioritize the implementation of tests for the most critical parts of the product, and make sure that the coverage rate increases or remains stable over time. Indeed, as you develop new features, the number of lines of code should increase: if the coverage rate decreases, it may be a sign that there are no tests implemented for these new features, which is a problem.

Agile indicator #6: The evolution of user and employee satisfaction

Agility is about consistency and continuous improvement. And contrary to the interpretation some may make of certain terms, such as the famous "sprint" of Scrum, adopting an agile approach is not synonymous with going as fast as possible, but with keeping a sustainable pace over time, ideally "indefinitely" if we refer to the agile principles (#8).

As such, it is important to regularly "take the temperature" of the people involved in this journey. There are the users on the one hand. But there are also the people who work for these users: developers, testers, graphic designers, facilitators, product owners, managers, etc. And this holds just as well for approaches that have nothing to do with software development.

Why measure it?

A decrease in this satisfaction over time is a signal that something is wrong with the way we function. Measuring it allows us to take action before the situation deteriorates too much, sometimes past the point of no return (burnout, resignation...).

How to measure it?

Again, there are several avenues. The Squad Health Check model, popularized by Henrik Kniberg, allows team members to express themselves collectively on a number of points and to rate their satisfaction with each one. The Niko Niko calendar, from Management 3.0, is perhaps less interesting from this point of view, but it is faster to implement. As for user satisfaction, a simple survey with a score, such as the Net Promoter Score (NPS), can already provide a result.
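As a sketch of the NPS calculation on a hypothetical batch of 0-to-10 survey answers (the standard formula: percentage of promoters, who rate 9 or 10, minus percentage of detractors, who rate 0 to 6):

```python
def net_promoter_score(ratings: list) -> int:
    """NPS: % of promoters (9-10) minus % of detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical survey: 5 promoters, 3 passives (7-8), 2 detractors.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 5, 3]))  # 30
```

Note that passives (7-8) count in the denominator but in neither group, which is why the score can move even when only passives change their answers.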

Agile indicator to track: The evolution of user and employee satisfaction.

How to improve it?

There is no magic: if the score doesn't seem good enough, you have to listen to the feedback and roll up your sleeves to act on it. This is precisely where an exercise like the Squad Health Check, more collective and based on exchange, is more interesting than a Niko Niko. Beware: listening without acting is not enough.

Evolution, a sign of the direction taken

The six agile indicators presented in this article and the previous one were all expressed as "the evolution of something" (traversal time, number of anomalies, etc.). Why?

A number, taken out of context, can mean anything or nothing. In an improvement approach, what should interest us is knowing to what extent we are able to progress in the right direction, by acting to decrease the value of some indicators and increase the value of others.

If your user satisfaction is 7/10, that can be an acceptable score. If, over the previous twelve months, your users' average satisfaction was 5/10, then congratulations: you have made good progress and should continue in this direction. But if that same twelve-month average was 9.7/10, what could have happened since then? And what should you change to stop the downward slide?
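That comparison can be sketched as a trivial helper, assuming you track a trailing average of the score alongside its latest value:

```python
def trend(current: float, trailing_average: float) -> str:
    """Read the direction by comparing the latest score to its trailing average."""
    if current > trailing_average:
        return "improving"
    if current < trailing_average:
        return "deteriorating"
    return "stable"

# The same 7/10 reads very differently depending on its context.
print(trend(7, 5))    # improving
print(trend(7, 9.7))  # deteriorating
```

The point is not the code, which is trivial, but the habit: always interpret the number against its own history.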

You may be embarking on a product or service journey that will last one, two, five, ten years. Don't be too impatient. Don't take the numbers at face value. Look for improvement in a direction rather than toward a specific target. Visualizing the evolution of indicators is what will allow you to make the right decision at the right time: it adds a welcome context that the number alone does not provide.

To be continued, and concluded, in the next episode

How did things evolve at Scholesoft, Paul's company? What results have been achieved? Stay tuned for future articles on this topic.

Elie THEOCARI

Consultant, coach and trainer in agility

"I train and coach companies of all sizes and industries, with the goal of gaining agility."


