All articles in our "Measuring Agility" series:
- #1 Measuring agility: what does it mean?
- #2 Measuring agility: a good idea?
- #3 Measuring agility: what pitfalls to avoid (1)
- #4 Measuring agility: what pitfalls to avoid (2)
- #5 Measuring agility: what to measure (1)
- #6 Measuring agility: what to measure (2)
- #7 Measuring agility: the final word
Paul thought he had done things right. He understood what it meant to "measure the agility" of his teams and his organization. What's more, he also knew how to get the message across internally: measurement is part of the learning and improvement process his teams were entitled to. Finally, he had researched different indicators that his teams could easily measure, each with a real purpose in terms of the message it sent.
That was three months ago. Since then, to say that everything has not gone as planned would be an understatement.
Genesis of a disaster
Paul began by presenting a few indicators to the company's executive committee: the ones he planned to ask each team to share. Alex, the manager, was ecstatic. So much so that he asked for more indicators: at the corporate level, at the program level, at the team level, and at the individual level within each team. From six indicators per team, they arrived at a dashboard of twenty indicators at different levels of granularity, some of them focused on individual performance.
A new tool was created: the KILLMI, the KPI Instantiator for a Lean Learning Mission-based Industry. Twice a day, team members must log in to it and enter a significant amount of data to help prepare the COKARD: velocity, remaining work, time spent on each request, gross margin... It's a great tool. In just two hours a day, the teams can save thirty minutes of weekly preparation time for the COKARD. And they are guaranteed to arrive with the right indicators.
The COKARD, or Rationalized Agile KPI Operational Committee for Management: let's talk about it. It was set up so that each team could share its indicators with the company's management committee. It now takes place once a week and lasts four hours. That is, the time it takes for everyone to share and justify their progress (or lack thereof). The leader of each team is required to attend this meeting.
The indicators presented during the COKARD are carefully reviewed during the session. Any figure that differs by more than 5% from the initial estimates or commitments is discussed, and the team leader must justify the values presented.
At the end of each COKARD, a podium of the company's best-performing teams is displayed on every screen in the company, then emailed to all employees as well as to shareholders. Any member of a team that keeps its place on the podium for more than two consecutive weeks receives a bonus equivalent to 5% of their salary. And it goes up to 15% for that team's manager.
"That was only three months ago...", Paul thinks to himself as he glances thoughtfully at the resignation letter he printed a week ago and still hasn't decided to send.
What could have gone so wrong? What traps could Paul and his organization have fallen into?
Measuring agility without knowing why
We talked about it in the previous article: measuring the agility of a team or an organization is not, in itself, shocking. Being agile is about delivering value on a regular basis. It's about having the ability to adapt. But it also means being able to react to the unexpected, while keeping in mind that we are human beings. Because the well-being of our teams and the satisfaction of our users are both crucial to the success of our business.
We can make regular measurements in order to have a usable baseline for improvement. But it is essential to do so for the right reasons and to analyze the results through the right lens. Otherwise, things can get out of hand very quickly. I wouldn't be surprised if you told me that Paul's situation, described above, sounds sadly familiar.
Many teams and companies today measure metrics that they don't even use. They put together nice dashboards in Jira, Power BI or Microsoft Excel, but are unable to use them to make well-founded decisions that would improve the way they operate and, ultimately, their results.
It's the "I was asked to" syndrome. In Paul's company, the big boss Alex wants to see indicators. So metrics are produced, and lots of them please, to show that "we're not a joke". Why? Why these ones? No idea.
And the fact that Alex asked for so many more indicators than Paul had originally proposed is also a symptom of another problem, which we are getting to. Because once we remember the WHY of measurement, it must also point us to the WHO and the WHAT. We'll talk about the WHAT (which indicators) in a future article.
Confusing transparency with surveillance
In the meantime, WHO is going to choose the measures to be carried out? WHO will be responsible for the implementation of these measures? WHO will be concerned by the values of the indicators and their evolution? And finally, WHO will have access to them?
If a team sets up certain indicators at its level, it is to improve its own operation. As such, it is up to the team to decide which indicators to track. It is up to the team to take collective responsibility for monitoring them. It is up to the team to decide on the actions that will allow it to improve its operating mode when one of the indicators indicates a decrease in quality, responsiveness, user satisfaction, etc.
This should avoid a return to a "command and control" policy, with teams closely man-marked as soon as an indicator seems to stray from its ideal trajectory. If something like the COKARD exists to scrutinize each team's indicators in detail, it is probably because Paul's company did not understand, in the first place, that an agile team is one that is empowered, self-organized and free to choose how it acts to accomplish a much larger goal.
An agile team should not jealously hide its indicators either; there is nothing confidential about them. But that is exactly what can happen when a team feels it is losing ownership of its own improvement process, hijacked by someone's desire to shine, or to denigrate others, for political purposes.
In any case, an agile team should be responsible for its own measurement process: What are we measuring? Why? What decisions does it allow us to make? And with whom do we share this process?
Putting teams in competition with each other
When you start comparing teams against each other, especially when the goal is to rank as high as possible and even earn a bonus based on the result, can you really expect mutual aid, benevolence, transparency, or transfer of skills between the different groups? All the things that should allow the whole organization to rise, and not just a small group of people looking to get ahead?
This is also where we see the emphasis on so-called vanity metrics. These indicators are not always without interest, but they are not necessarily the most useful to track. Yet we tend to put them forward, because they are the ones that make us appear more successful.
Imagine one support team boasting of an increase in the number of tickets handled from one year to the next, while another support team makes its users more autonomous, which reduces how often it needs to intervene. Does the team that handles 1000 tickets perform better than the one that handles 500? You tell me.
This is also where certain reflexes appear, such as "fudging" certain indicators: not really trying to improve, but triggering actions simply intended to inflate the numbers. For example: only calling customers whose satisfaction level we already know for our annual survey. Or implementing dozens of meaningless automated tests instead of focusing on the most critical parts of our application. Or delivering a package every day, even without any real change, to give the impression that we deliver more often...
And of course, this is also where we see managers putting insane pressure on their teams, because their goal is to finish at the top of Mount Olympus. It can be obvious, or it can be sneaky. But either way, this is where we end up with exhausted teams, burnout, and the end of the sustainable pace that allows us to produce real quality work in good conditions.
Teams should never be put in competition with, or compared against, each other. A team should never compare itself to anyone but itself. It is up to each team to know what its ambition is, and how far it is willing to go in improving its indicators.
Measuring agility: next episode
The first three pitfalls discussed in this article:
- Measuring without knowing why
- Confusing transparency with surveillance
- Putting teams in competition with each other
What other pitfalls should be avoided when measuring indicators related to the agility of our teams? What metrics should we measure? Stay tuned for future articles on this topic.