All articles in our "Measuring Agility" series:
- #1 Measuring agility: what does it mean?
- #2 Measuring agility: a good idea?
- #3 Measuring agility: what pitfalls to avoid (1)
- #4 Measuring agility: what pitfalls to avoid (2)
- #5 Measuring agility: what to measure (1)
- #6 Measuring agility: what to measure (2)
- #7 Measuring agility: the final word
Paul thought he had a better idea. He understood what it meant to "measure the agility" of his teams and his organization, and he understood how to get that message across internally: measurement is part of the learning and improvement process his teams were entitled to. He then researched various indicators his teams could easily measure, each with a real purpose in terms of the message it sent.
That was three months ago. Since then, to say that things have not gone according to plan would be an understatement. Between COKARD, KILLMI, performance bonuses and branding, there are plenty of reasons to worry.
The first part of this article can be found here, with the first three traps highlighted:
- Measuring without knowing why
- Confusing transparency with surveillance
- Putting teams in competition with each other
Let's move on...
#1 Agile measurement trap: Measuring too often, or not often enough
This is a real issue in any measurement approach: how often should we monitor our indicators, and make decisions based on them? There is no single right answer. It all depends on your context, your current situation, the indicator you are measuring, and how quickly you are aiming to improve.
In Paul's company, there is a COKARD once a week and a tool that requests new data daily. This forces teams to constantly feed data and monitor all the metrics they have at their disposal. Not to help them make informed decisions, but simply because their management asks them to. This is probably not the best thing to do.
This can be compared to using a bathroom scale. Will you weigh yourself every six hours to see how much weight you have lost, or gained, in the last quarter of the day? Will you weigh yourself once a year? Most people who use a scale are most likely somewhere in between. And others may never use a scale. Simply because they are satisfied with their physical appearance and have never needed one before.
Not using an indicator is always a possibility! Let's say you have a great team, a great atmosphere, the right balance of seriousness and craziness. Will you try to measure a weekly team satisfaction indicator? It's up to you, but remember that any measurement you do will take time from everyone involved, and without real action to take behind it, it's potentially time wasted.
#2 Agile measurement trap: Measuring too precisely
Visualizing and analyzing an indicator is supposed to help us make informed decisions. The question that legitimately arises is: how precise do we need to be in order to make a decision?
Let's use the scale analogy again. Let's say my healthy weight is 75 kg. If I weigh more than that, I will (perhaps) decide to limit my intake of fatty or sugary foods. Or I may decide to exercise more. My scale tells me that my current weight is 77.1 kg. Would I have made a different decision if it said 77.14 kg, or 77.137 kg?
It's exactly the same with the metrics that Paul and his teams track. How many requests have been processed in a given period? What is the team's remaining workload this morning? We need to have a fairly accurate picture of the situation. However, we are not sending a satellite into orbit.
There's a reason why much of what we do in an agile approach, such as estimating or planning, remains deliberately coarse and imprecise. Many teams and organizations have struggled for years to produce estimates and follow-ups to the nearest hour or the nearest euro. And to what end? Not necessarily a better result than in teams that today do without this needless precision.
Keep in mind that if calculating an indicator imprecisely takes one hour a week, while calculating it precisely takes four (beyond the calculation itself, there is also the work required from employees to consolidate the indicator: data entry in particular), then the three hours of difference between the two methods are "lost". Is it really worth it? Not always.
#3 Agile measurement trap: PDCA vs. PDSA
In the previous article we talked about the Deming wheel, the famous PDCA: Plan - Do - Check - Act. The truth is that Deming did not invent PDCA, as his institute's website points out (see in particular the excellent document referenced at the end of that article, which I recommend).
William Edwards Deming, inspired by his colleague Walter Shewhart, an American statistician, and by the Shewhart cycle, is the originator of the Deming wheel: an empirical approach that consists of designing a product, building it, selling it, and evaluating, through market studies and real-world tests, what users think of it. It was Japanese industry executives who transformed it into PDCA in the early 1950s. Deming himself tried to distance himself from the paternity of PDCA in the early 1990s by introducing PDSA: Plan - Do - Study - Act.
The difference may seem small, but it was important enough for Deming to make it. To check means to verify, to make sure of something, to control, to test, to inspect; to study means to analyze, to examine, to work through.
There is potentially a world of difference between these two terms, and between the postures they can induce or legitimize, more or less awkwardly: between consolidating indicators in order to verify, inspect and control, and consolidating indicators in order to study and analyze.
And just as in Paul's company, it is very easy to cross the line into a situation where hitting the indicators becomes the lifeblood of the business. Where it becomes the objective in itself, regardless of the actions decided by the people on the ground and implemented to contribute to the continuous improvement of the organization and its products and services.
Management 3.0 could be defined as "a mindset, combined with a collection of games, tools and practices, to help any worker manage the organization". It is a movement that has been gaining momentum since the early 2010s and the publication of Jurgen Appelo's book "Management 3.0: Leading Agile Developers, Developing Agile Leaders". Beyond the name, which may make you smile, Management 3.0 shares a number of practices that are interesting to know.
Here is a rough translation of the "twelve rules of measurement" as proposed by Management 3.0:
- Measure for a reason
- Reduce the unknown (not everything is measurable, but we can always look for clues that help us understand a situation)
- Seek to improve
- Delight all stakeholders (not just one of them)
- Don't trust the numbers (they may be fudged; keep a degree of skepticism when analyzing them)
- Set imprecise targets (more a direction than a hard goal to achieve)
- Maintain ownership of your indicators (YOU measure for YOU)
- Don't tie indicators to rewards
- Promote values and transparency (this reduces the temptation to rig the system)
- Visualize and humanize (to help and encourage everyone to interact with the indicators)
- Measure early and often (so problems don't escalate unchecked)
- Try something else (change indicators from time to time for a different perspective)
These principles may not be perfect, but they are as good a starting point as any, and they have the merit of framing the measurement of indicators within a reasonable and consistent frame of mind.
Which indicators should be measured? Stay tuned for future articles on the subject.