“How can we tell how far along we are with our agile adoption?”
I heard this question again the other day.
Usually, the person who asks the question starts to answer it:
– Number of teams using agile
– Number of people trained in agile
– Number of projects using agile
– Number of certified coaches
Metrics like these won’t tell you what you need to know. More likely, they will lead you astray. How? Let me tell you a story.
Years ago, I worked for a company that was “installing” a Big Methodology from a Big Company. (The fact that they thought they were “installing” a methodology was probably the first warning sign.)
Everyone in the department attended Big Methodology training. (This practice is sometimes called “Sheep Dip” training.)
The VP mandated that all projects would use the Big Methodology.
The Installation Team audited to ensure that project managers and teams were complying and producing the required “work products” in accordance with the Required Work Products grid in the back of the very large Big Methodology binder.
Of course, there was some grumbling (from the people the Installation Team referred to as “Change Resisters”). Eventually, people did comply. Everyone went to training. Project managers filled out the required templates and checked the appropriate boxes. The metrics looked grand!
The VP declared, “Big Methodology is now business as usual!”
At the time, I scoffed at that statement. It was clear to me that people were not using Big Methodology, and that the promised benefits were nowhere in sight. The only things that had really changed were some check boxes and some names (documents became “work products” or “job aids”).
But, now, I realize that the VP’s statement was TRUE!
We had Big Methodology, and things went on as they had–business as usual! Well, maybe a little worse because people were spending time producing the many documents specified on the Required Work Products grid.
The metrics the VP tracked were easy to count. But they only revealed surface compliance. They didn’t say anything about whether the organization was achieving the improvements promised by Big Methodology and hoped for by the VP.
So when you think about assessing how far along you are in your agile transformation, consider what you are trying to achieve.
I often suggest that managers track three metrics to understand how well their organization is functioning, and whether it is trending in the right direction. (A rough sketch of how you might compute all three follows the third metric below.)
The ratio of fixing work to feature work. How much time are people spending developing valuable new features vs. fixing stuff that wasn’t done right the first time? If you can figure out the sources of fixing work and make some progress there, you get a boost in productivity. Agile methods can address some of the sources of fixing work…but not all of them.
Cycle time. How long does it take to go from an idea to a valuable product in the hands of a customer? Again, agile methods can help with delivery. But if the upstream process–planning for products and releases–is broken, you may not see improvement until you address those issues as well as the development process.
Number of defects escaping to production. This is a category of fixing work, and a falling count is a direct indicator that the quality of the development process is improving.
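If it helps to see the arithmetic, here is a minimal sketch of computing all three metrics from exported work items. The record format and field names are invented for the example; adapt them to whatever your tracker actually provides.

```python
from datetime import date

# Hypothetical export from a work tracker. Field names are invented
# for this sketch; most trackers can produce something equivalent.
work_items = [
    {"type": "feature", "hours": 40, "idea": date(2024, 1, 2), "delivered": date(2024, 2, 9)},
    {"type": "fix", "hours": 16, "idea": date(2024, 2, 1), "delivered": date(2024, 2, 12), "found_in": "production"},
    {"type": "fix", "hours": 8, "idea": date(2024, 2, 5), "delivered": date(2024, 2, 14), "found_in": "test"},
]

# Metric 1: ratio of fixing work to feature work.
fix_hours = sum(i["hours"] for i in work_items if i["type"] == "fix")
feature_hours = sum(i["hours"] for i in work_items if i["type"] == "feature")
fixing_ratio = fix_hours / feature_hours

# Metric 2: cycle time, idea to delivery, averaged over delivered items.
cycle_days = [(i["delivered"] - i["idea"]).days for i in work_items]
avg_cycle_time = sum(cycle_days) / len(cycle_days)

# Metric 3: defects that escaped to production.
escaped = sum(1 for i in work_items if i.get("found_in") == "production")

print(f"fixing/feature ratio: {fixing_ratio:.2f}")
print(f"average cycle time:   {avg_cycle_time:.1f} days")
print(f"escaped defects:      {escaped}")
```

The point is not the program; it is that all three numbers fall out of data most organizations already collect.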
For each of these metrics, it is the trend that matters, not any absolute number. The trend will tell you whether your attempts at improvement are having an effect. Remember, most changes take time to take hold. If the trend doesn’t move in a month, that doesn’t necessarily mean you have taken the wrong action and need to change direction. If the trend isn’t moving over a longer period, then examine what is happening in the development area, but also look at other aspects of the system. There are few one-to-one cause-and-effect relationships in complex systems, and the trend you see may or may not be directly related to your change.
One company I worked with was alarmed to see that defects released to production went up after they started using agile methods. It turned out that prior to the effort to measure defects released to production, no one had paid much attention unless a defect brought down a customer site. The rise in the defect trend was a reporting effect, not a failure to improve quality.
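One simple way to look at the trend rather than any single number is a rolling average over the monthly counts, which damps one-month noise. A sketch, with invented monthly escaped-defect counts:

```python
# Monthly escaped-defect counts (invented numbers). A 3-month rolling
# average smooths single-month noise so the underlying trend shows through.
monthly_escapes = [14, 16, 12, 15, 11, 9, 10, 7, 8, 6]

window = 3
rolling = [
    sum(monthly_escapes[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(monthly_escapes))
]

# A falling sequence suggests the improvement is taking hold, even
# though individual months (like the 15 above) bounce around.
print([round(x, 1) for x in rolling])
```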
I find that the three metrics above are generally useful for understanding how a software development organization is functioning as a system. But your reasons for adopting agile methods may be different. Consider the goals you are trying to achieve. What signals would tell you that you are moving in the right direction? How might you measure those? When you think about measures, be wary of target numbers. Measuring against targets almost always causes distortion. That means that people will behave so as to reach the target, perhaps in ways that are counter to the actual goal behind the target. Distortion will keep you from seeing the real picture, and may also cause real harm to your organization.
Useful metrics give you a window into how the system is functioning, and whether your change is having an effect. The numbers themselves are neither good nor bad. They are information: a signal to go and find out, to investigate and reason about the system.
Hi Esther,
Interesting article. A couple of questions –
1. For the first metric, are you talking about a ratio of effort or of calendar time? If effort, does that mean we should be tracking effort in a Kanban system? This is a question that often comes up in client discussions!
2. Since you are referring to the time it takes to deliver to the customer, shouldn’t it be lead time rather than cycle time?
I absolutely agree with you that it is the trend that management should be watching, not the value at any point in time.
Regards,
Mahesh Singh
Digite, Inc.
Nice article. It’s very true that the effectiveness of agile adoption should be measured in terms of results rather than process adoption.
Regarding the metric “ratio of fixing work to feature work,” I have some comments. Continuous refactoring is an essential engineering practice for agile teams. Would you consider refactoring “fixing work”? Agile teams do just enough design and avoid over-engineering. Most good designs are extensible, so they need less rework; even so, refactoring is essential in practice.
In addition to the above, where do engineering metrics like code quality, self-documentation, and scalable design fit? Should those also be metrics for measuring agile engineering adoption?
The people factor is also an essential dimension of agile adoption. Employee job satisfaction, “big picture” involvement and thinking (since everyone on the team is aware of what’s going on in the project, rather than just what they work on), and empowerment are also good measures of an agile implementation. What are your thoughts on this?
Thanks,
Prash.
Thanks for a very good post! Very interesting.
I think cycle time is a good metric. Shorter cycle time improves learning and innovation, and it also improves our flexibility to adapt to new or changed market opportunities. I am trying to convince people in my organization to start measuring cycle time for our software development.
The problem I face is that business managers only speak the “language of money”. How do you convince them that shorter cycle time is good for the business?
One approach to translating cycle time into the “language of money” is to use Cost of Delay (CoD), as advocated by Don Reinertsen: how much will it cost us, in extra cost or lost revenue, if cycle time gets, for example, one week longer than today?
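For example, the kind of back-of-the-envelope arithmetic I have in mind (all the figures below are invented placeholders; estimating them is the hard part):

```python
# Back-of-the-envelope Cost of Delay, in the spirit of Reinertsen.
# Both numbers here are made up; your business has to supply real ones.
weekly_gross_profit = 50_000   # profit the feature earns per week once live
market_window_factor = 1.0     # raise this if a delay risks missing a market window

def cost_of_delay(weeks_late: float) -> float:
    """Money lost if delivery slips by `weeks_late` weeks."""
    return weeks_late * weekly_gross_profit * market_window_factor

# The price tag of one extra week of cycle time: 50000.0
print(cost_of_delay(1))
```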
I have tried to define Cost of Delay for our business, but it is really hard. There are so many parameters to consider. Do you know of any guidelines available to help with defining CoD? It would be really powerful to show the price, in USD or EUR, of a shorter or longer cycle time.
Do you have the same experience or are you using other techniques to speak to business managers in a language they understand?
Thanks,
Henrik
Here are some ideas for measuring agile adoption:
– check how much of the Agile Manifesto a team follows, e.g. have you tried the Agile Karlskrona test?
– check how little Taylorism a company follows, e.g. are you still building top-down control structures?
– measure positive system behaviour, e.g. value streams, product quality, end user satisfaction
– measure negative system behaviour, e.g. waste produced, stressed and unsatisfied staff
The biggest problem is getting stuck in the old way of thinking and being “agile by name only.” There can be no improvement if you merely give old roles new names; agility will not come without fundamental change in an organisation. I suggest multiple metrics over a longer time, since agile/lean is about both people and the system they operate in.
Just some alternatives, hope this helps 🙂
I liked your article, but I do have a comment, which I posted on my own web site. The link is:
http://www.chrisshayan.com/my/index.php?option=com_content&view=article&id=524:metrics-for-agile&catid=38:iparadise-development-pattern&Itemid=65
Chris,
Perhaps you might read Why Not Velocity as an Agile Metric https://estherderby.com/2011/10/why-not-velocity-as-an-agile-metric.html.
Esther
Thanks Esther, but respectfully I disagree, because:
1) Agile is based on trust; if we think velocity is going to be misused by Scrum team members, then we are going against the Agile Manifesto.
2) The main use of velocity is better commitment planning; improving velocity also means that next time we can commit to as many stories as our velocity can afford.
3) We are using agile methodologies to get away from managerial red tape.
Ah, yes…in the best of all possible worlds.
Velocity is useful. But not as an indicator to tell you how an agile adoption across an organization is going.
Why not? If we use velocity together with the number of defects escaping to production, can’t we measure the adoption of agile in an organization over a time-box of a year or two?
Chris,
Perhaps you could say more about why you think it is useful to measure velocity as an agile adoption metric.
Velocity is very useful for planning. And it can be a signal of problems, inside or outside the team. But it is not a goal. It just is. If the goal is to improve the organization’s ability to deliver valuable software, looking at system-level metrics is more helpful than looking at a team-level metric.
When you look at velocity /as a metric/ it’s very easy to destroy its value in planning, because people start gaming it. That’s a fact, not a commentary on character. It’s called Goodhart’s Law (see Lee Copeland’s recent article in Better Software).
Here’s why I stay away from it, as a measure of progress for the organization as a whole.
Velocity is easy to manipulate.
Want velocity to go up? Fudge the definition of done and you finish more stories. Change the scale and you complete more points (what was once a 2-point story is now a 5-point story).
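A toy example of that second trick, with made-up numbers:

```python
# Ten identical stories, delivered in one sprint, scored two ways.
# The work is the same; only the point scale changed.
stories_done = 10

old_scale_points = 2   # what the team used to call this size of story
new_scale_points = 5   # the same story, re-labeled after "recalibration"

print(stories_done * old_scale_points)  # velocity last quarter: 20
print(stories_done * new_scale_points)  # velocity this quarter: 50, a "150% improvement"
```

Nothing about the organization’s ability to deliver changed, yet the metric says it did.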
Velocity is easy to misuse.
Managers who don’t see organizations as systems can use it to compare teams or to punish teams, neither of which is helpful.
Velocity—as an agile adoption metric—puts the focus in the wrong place.
Focus on velocity implies that if velocity isn’t improving, there is something wrong with the team. In some cases, that might be true. But I don’t want people to look there by default. When velocity isn’t improving, or is erratic, it’s often due to factors that aren’t in the team’s direct control. There might be a problem with the way work is flowing into the team. Or the team may be interrupted every hour with production support calls (or whatever). Or the team may not have the tools they need to do their work. That’s something for the team and team coach to work on, or to raise as an impediment (where managers can work on it at the system level).
For assessing the progress of an agile adoption, I choose metrics that emphasize system performance, to help managers make the shift from “work harder” thinking to “optimize the whole system” thinking. Managers, after all, are responsible for creating the environment (structures, policies) and the enabling conditions for teams to be successful. To do that, they need a way to assess how the system is functioning. Because I presume the point isn’t being “agile” but delivering valuable software.
For more about using velocity as a measure, see my post Working Hard or Hardly Working.
Thanks a lot for your nice response.
I agree with these assumptions:
1) Velocity is easy to manipulate.
2) Velocity is easy to misuse.
Even so, these assumptions run against the trust that underlies agile methodologies like XP and Scrum.
Besides, you are talking about a concept called “cycle time,” which, as you precisely described it, is: how long does it take to go from an idea to a valuable product in the hands of a customer?
In Scrum, velocity is how much product backlog effort a team can handle in one sprint. It can be estimated by looking at previous sprints, assuming team composition and sprint duration are kept constant.
I think cycle time and velocity are the same thing. Do you agree?
I agree with you that velocity can be used to plan projects and forecast release and product completion dates; in other words, it can be used for commitment-based planning.
I would like to insist on the Agile Manifesto principles, especially this one:
Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.
So the best measurement is how fast and how soon we can deliver.
Respectfully, I still think velocity is a must-have, and that it can also be used as a metric. Besides, Scrum is not for managers who do not know the meaning of Scrum and its ingredients; if an organization is like that, then they need training.
Hello Chris.
I do not believe that cycle time and velocity are the same thing. In fact, conflating the two can lead to considerable harm.
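To illustrate, here is a toy sketch with invented numbers, using item start and done dates as a stand-in for the fuller idea-to-customer span. Velocity goes up from one sprint to the next while cycle time gets worse:

```python
from datetime import date

# Invented records for one team across two sprints. Velocity (points per
# sprint) doubles, while cycle time (days from start to done) also doubles,
# i.e. gets worse. The two measures are not the same thing.
items = [
    {"sprint": 1, "points": 3, "start": date(2024, 3, 1), "done": date(2024, 3, 6)},
    {"sprint": 1, "points": 5, "start": date(2024, 3, 1), "done": date(2024, 3, 8)},
    {"sprint": 2, "points": 8, "start": date(2024, 3, 11), "done": date(2024, 3, 22)},
    {"sprint": 2, "points": 8, "start": date(2024, 3, 11), "done": date(2024, 3, 25)},
]

for sprint in (1, 2):
    batch = [i for i in items if i["sprint"] == sprint]
    velocity = sum(i["points"] for i in batch)
    avg_cycle = sum((i["done"] - i["start"]).days for i in batch) / len(batch)
    print(f"sprint {sprint}: velocity={velocity}, avg cycle time={avg_cycle:.1f} days")
```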
It is fine with me if you disagree with me. You are free to believe what ever you wish to believe.
Esther
Thanks, I learnt some good stuff from you.
What do you think about this? http://www.chrisshayan.com/my/my/ppts/software-kpi.pdf
Please share your insights.
Esther,
When you say “escaped defects,” there is a lot of context and meaning that isn’t entirely clear.
Is that “discovered by the customer,” or “noticed by the customer,” or “delivered without a release-note warning,” “discovered by an external QC group,” “discovered but not addressed”…?
I know it “could be any of these,” and that the classic definition would be “non-duplicate defects discovered in production,” but I thought I’d add some nuance here since I get this question a lot.