All articles in our "Measuring Agility" series:
- #1 Measuring agility: what does it mean?
- #2 Measuring agility: a good idea?
- #3 Measuring agility: what pitfalls to avoid (1)
- #4 Measuring agility: what pitfalls to avoid (2)
- #5 Measuring agility: what to measure (1)
- #6 Measuring agility: what to measure (2)
- #7 Measuring agility: the final word
Paul understands a little better what "measuring agility" means. But another thought is now haunting him: how will it be perceived when he starts asking for measurements of a number of agility indicators?
And that scares him a little. Paul is the manager of three teams, each composed of six to eight people. And even though they are nice people, you have to admit that they seem to live in a slightly different world, talking about self-organization and horizontal management, holding roasts in the middle of a meeting, or gathering around a starfish twice a month.
As we explained in our previous article, Paul already has a dashboard of fairly standard indicators. He is autonomous in measuring those, since he has access to the estimates made at the start of the project, the time logged by each person, and the time remaining on each task. To measure these new values, however, he will have to rely on the people in his teams.
Paul is afraid that his teams will perceive new agility indicators as a form of surveillance, and that this will provoke hostile reactions. How can he present the idea calmly, and how will it be received?
Measurement, an essential step to improvement?
Agility means continuous improvement. Improvement of our practices, of our postures, and ideally, of our results. And how can we really perceive an improvement if we do not compare a "before" with an "after"?
Perhaps this was the view of William Edwards Deming, the 20th-century American statistician, author, professor and engineer, when he introduced in Japan the famous wheel that now bears his name, the Deming wheel, also known as the Shewhart cycle: the PDCA, for Plan - Do - Check - Act.
In a cyclical, iterative, empirical model, the goal is to plan a minimum of things, implement them, check the outcome, and act accordingly to improve things for the next cycle. This is radically different from the sequential, V-cycle approach, in which we plan everything from the start and then execute the plan.
The "Check" step is of crucial importance. Behind this word lies the capacity of an individual, a team or an organization to take a retrospective point of view, for instance by comparing the current state with the target they seek to reach. If that target is tangible and measurable, then our current position should be as well.
We will come back to the precision of the "Check" designation in a future article.
Measurement in agile approaches
In the same vein, aren't the three pillars of Scrum transparency, inspection and adaptation? Inspection, which could very well consist of measuring a certain number of indicators on which we are transparent, and comparing them with our target, in order to verify that we are going in the right direction, and to be able to adapt.
Other approaches such as Kanban for IT also focus on indicators. Throughput time, cycle time, control charts... all this information should allow us, as a team, to give visibility, to limit the variability of our process... and to adapt if the indicators are not good enough.
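To make these Kanban indicators concrete, here is a minimal sketch, using the common definitions of these metrics and invented ticket dates (nothing here comes from Paul's actual dashboard): cycle time as the elapsed time between starting and finishing a work item, and throughput as the number of items finished in a given period.

```python
from datetime import date

# Hypothetical completed tickets: (started, finished) dates.
tickets = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 9)),
    (date(2024, 3, 5), date(2024, 3, 8)),
]

# Cycle time: elapsed days between start and finish of each item.
cycle_times = [(done - started).days for started, done in tickets]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: number of items finished within a chosen window.
window_start, window_end = date(2024, 3, 1), date(2024, 3, 7)
throughput = sum(1 for _, done in tickets if window_start <= done <= window_end)

print(f"average cycle time: {avg_cycle_time:.1f} days")  # → 4.3 days
print(f"throughput (first week): {throughput} item(s)")  # → 1 item
```

Tracked over several weeks, the spread of cycle times is what a control chart visualizes, and reducing that spread is what "limiting the variability of our process" means in practice.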
The notion of velocity, which most Scrum teams use and which Martin Fowler, co-author with Kent Beck (the creator of XP) of the book Planning Extreme Programming, presents as a practice stemming from Extreme Programming, is also a measure.
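Velocity itself is usually computed as the story points completed per sprint, averaged over recent sprints. A small sketch with invented sprint data (the figures are purely illustrative):

```python
import math

# Hypothetical story points completed over the last five sprints.
completed_points = [21, 18, 25, 20, 22]

# Velocity: average completed points per sprint.
velocity = sum(completed_points) / len(completed_points)

# A rough planning use: sprints needed to burn a 60-point backlog.
backlog = 60
sprints_needed = math.ceil(backlog / velocity)

print(f"velocity: {velocity} points/sprint")   # → 21.2 points/sprint
print(f"sprints for backlog: {sprints_needed}")  # → 3
```

Note that velocity is a planning aid for the team itself, not a productivity score to compare teams against each other, which is precisely the kind of misuse the next articles will discuss.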
In short, being agile does not mean doing things "on the fly" or without paying regular attention to the outcome of our work and our performance. Inattention to results is one of the major dysfunctions of a team as described by the American author Patrick Lencioni in his book The Five Dysfunctions of a Team.
Agile teams have always measured a certain number of indicators, not out of vanity, not to please a manager, but to improve their practices, processes and postures.
Is measuring agility a good idea?
In this sense, it should come as no shock to anyone that we try to measure a number of indicators at the level of a team or an organization. To be successful, a person, a team or an organization should set at least a few objectives and be able to reach them, regularly checking (for example through measurement) whether they are on the right path.
Unfortunately, this is what is sorely lacking in V-cycle initiatives today: the ability to lift one's head from the handlebars for a few minutes and ask what is working well, what is working less well, and how to improve what needs improving.
No: we have a plan, thought out beforehand, that we "unfold" for months, independently of the indicators we measure, because if we started looking at them too closely, we might be tempted to put down the pen and correct our trajectory... which would call the entire plan into question. Any resemblance to real events (happening right now) is purely coincidental!
So the fact that Paul wants his teams to measure a certain number of indicators should not bother them... as long as Paul and his teams are measuring for the right reasons, and the results are analyzed through the right lens. This will be the topic of the third article in our series.
What pitfalls should you not fall into? Which indicators should be measured? Stay tuned for future articles on the subject.