Don’t Measure the Work of Fixing Defects, Just Fix Them

I found this commentary from Ron Jeffries in a user group forum on how to measure the work of fixing defects in sprints. I think it is a great perspective, so I asked Ron if I could share it in blog form (so it retains its raw dialogue). Ron is a great thinker and possesses the heart of a teacher. He puts many things in perspective that you probably already know, but he states them in a way that gets you thinking deeper. So, without further ado… Ron Jeffries:

Yes. The work of fixing defects distorts the picture of progress. It slows us down.

The solution is not to measure it and estimate this work. The solution is to eliminate it.

Defects are not a “fact of life”. They do not drop as the gentle rain upon the code beneath. Each defect is the result of flawed work, mistakes, on the part of one or more people. Each defect can be made less probable, very likely 10x to 100x less probable, by the specific ways the team works.

Scrum requires that the team produce a //shippable// increment of code in every Sprint. Not every few Sprints, not after five really grueling Sprints at the end of the project. Every Sprint.

Yes. If we spend more time fixing defects, we will deliver fewer completed stories. Velocity declines due to doing any kind of work that is not implementing stories. It would also decline if we spend more hours playing Doom.

End of “Story”. Velocity, counted as stories /Done/, correctly records all forms of wasted time, as well as our general capability, just by counting the stories.

We are interested in Done stories. We count them as good. We do not desire defects. If we know during the Sprint that a story we are working on is defective, we do not demonstrate it, and we do not count it as done. Our velocity declines immediately, because we didn’t get that one done.

If somehow a story slips out and then a defect is found, that story has already been recorded in velocity. Velocity is artificially high. What should we do? One possibility is to reduce our progress bar by that amount, and insert that story back when and if it is fixed. That might be more accurate, but it is not necessary.

All we do, instead, is count only the stories done next time. Time spent fixing the broken one does not count. Our progress line drops by one story, starting now. The line was artificially high for a while, but now it is back in line.

Note that fixing the defect may take less time than it took to build the story wrong, or it may take more. Either way, the time necessary to complete the story correctly and the number of stories completed are back in balance with respect to that story. No estimation is required, and no rationalizing about why we should treat what is obviously waste as if it were somehow a good thing.

Just count what’s done. Time spent fixing bugs is waste, and it will be shown by the amount of work done being reduced.
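To make the counting rule concrete, here is a minimal sketch with made-up sprint data (the `velocity` function and the `history` records are illustrative assumptions, not from Ron). Velocity is simply done stories over elapsed sprints; a sprint partly eaten by fixing a slipped defect just shows fewer done stories, with no estimating and no retroactive adjustment.

```python
# Hypothetical sketch: velocity counts only stories completed in a
# sprint. Defect-fix work is never added back in as "accomplishment";
# it shows up only as fewer stories done.

def velocity(sprints):
    """Average done stories per sprint over the given sprint records."""
    return sum(s["stories_done"] for s in sprints) / len(sprints)

# Sprint 3 lost capacity to fixing a defect that slipped out earlier.
# We simply count fewer done stories -- no estimate, no correction.
history = [
    {"sprint": 1, "stories_done": 5},
    {"sprint": 2, "stories_done": 6},
    {"sprint": 3, "stories_done": 3},  # defect fixing ate the capacity
]

print(velocity(history))  # 14 stories over 3 sprints
```

The progress line was artificially high while the broken story was counted; counting only done stories from now on brings it back in line automatically.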

If a decision is made to fix a defect, that decision should be handled as if it were a story; that is, it should come in through the planning session. It should not be accounted for on the product burndown as if it were new accomplishment, because it isn’t new accomplishment.

Extreme Programming, specifically the C3 project, specifically Ron, Chet, and Ann, introduced the notion of velocity in Extreme Programming Installed, ca. 2000 C.E.

We invented it. We’re here to tell you what it is, and why it is what it is.

Velocity is /defined/ as the number of stories the team completes per unit time. The notion is specifically designed not to be “the amount of work” the team completes, because the focus should be on what the customer (product owner) wants.

(It is perhaps worth mentioning here as well, that “story” is an idea from XP also. A story is something that the customer (PO) wants in the system. A feature. Somewhat like a PBI.)

We use velocity for two purposes, one of which is primary: Project Velocity.

Project Velocity is the rate, in stories per week or equivalent, at which the team is producing done stories.

In XP, mind you, all stories have complete unit tests and automated acceptance tests. They are really done. And when, as inevitably happens, a defect slips through, an XP team examines its process and practices, determines what tests and other actions would have prevented that defect, writes those tests, takes those actions, and ups its game.

They do this because velocity is a measure of //done stories//, and anything less makes it a measure of stories that may or may not be done. That’s not as useful a measure.

Project Velocity = stories per unit time. Punkt, Schluss, neuer Absatz (period, full stop, new paragraph).

There is another slightly useful application for velocity, the number of stories done: it can be used as part of making the team’s commitment for the upcoming Sprint.

We should note here that Scrum does not ask the team to attain some velocity. Scrum asks the team to //commit// to the Sprint backlog.

You can use the number of stories done in the past as part of deciding how much to commit to. It is not wise to treat this number as a large component of the decision, as there are many more influences on how much work the team will get done.

In Sprint planning, velocity is therefore only somewhat useful, if it is useful at all.
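A rough sketch of that limited use: past done-story counts give only a starting point, which the team then adjusts for everything else it knows about the coming Sprint. The function name, the choice of median, and the `adjustment` parameter are all assumptions made for this illustration.

```python
# Hypothetical sketch: recent done-story counts are one input among
# many to the team's commitment, not a target handed to the team.

def rough_commitment(recent_done, adjustment=0):
    """Median of recent sprints, nudged by known upcoming factors
    (holidays, staffing changes, all-hands meetings, etc.)."""
    counts = sorted(recent_done)
    mid = len(counts) // 2
    if len(counts) % 2:
        median = counts[mid]
    else:
        median = (counts[mid - 1] + counts[mid]) / 2
    return max(0, median + adjustment)

# Three recent sprints, minus one story for a known holiday week.
print(rough_commitment([5, 6, 3], adjustment=-1))  # 4
```

The point stands: the number informs the commitment; it does not make it.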

Now then. You express repeated concern that somehow, fixing defects is a good thing and needs to be taken into account in velocity. The, pardon the expression, error in this thinking is that fixing defects is //already// taken into account in velocity, that is, the number of stories done.

It is trivially true that if the team is working on a lot of bugs, then they won’t get many new stories done. In such a Sprint, their project velocity will go down, and it should go down, because they are progressing more slowly toward whatever the sum total of features needed for release may be.

Velocity already includes work spent fixing defects. It also includes time spent playing Doom, or snuggling in the closet. Velocity is the work actually done.

Therefore no other activity, no matter how important, can be treated as adding to velocity. Counting any other activity makes the measure of true progress, stories completed, worse, not better.

Velocity is stories done per unit time. And that’s all there is.

————————– Pause to Breathe, and Think ————————–

Now then. We might ask “shouldn’t we account for the time spent doing other things, like fixing defects?” Perhaps we should: it’s sometimes useful. We should also, perhaps, account for time spent in all hands meetings, or working on Project Beta instead of this project.

Sure. If it’s significant, account for it. I like a swim lane diagram for that, or a bar chart.

Still, none of this other work is “stories”. None of it. Yes, it’s important. Yes, we might want to account for it. It might be useful. But it’s not stories, and therefore it isn’t velocity.

Ron Jeffries
Logic is overrated as a system of thought.


9 thoughts on “Don’t Measure the Work of Fixing Defects, Just Fix Them”

  1. Hi Lance,

    Although I don’t agree with you, as I think the time for fixing defects should definitely be measured (probably because of my Waterfall background), I think your post is great.

    I would like to republish your post on PM Hut (under the Scrum category), where a lot of project managers will be able to read it and learn about your point of view.

    • Hi,

      Absolutely, feel free to share. As for your first sentence, I am not saying that you don’t track defects; you simply don’t count them the way you count new features. The tracking, in this instance, shows up in the reduced velocity the team will see if it is constantly fixing defects. The point is to have a team that analyzes each defect and tries to find ways to prevent it in the future, bringing the defect rate down to a nominal level where tracking defects is not necessary. Too many teams feel that defects are a way of life, and I love how Ron challenges us to step up our engineering practices and adapt our processes so that minimal defects enter the system.



  2. Good points! Defects are bad, we all know that. Unfortunately, even the best development teams will release defective code at times. Often, defects aren’t found until much later in the project, perhaps even after the project is completed.

    Defects need to be logged and tracked using an appropriate tool (e.g. Bugzilla). If you choose to add defects into the backlog (thus affecting velocity), you must also show defect rate and defects over time in order to get a complete picture of how the team is really doing.

    Having said that, I agree that simply fixing defects and taking the hit on velocity is the preferred approach.

    • Thanks Vin. Defects are bad, they do happen, the point is to find out why they happened and continually close the gap and get better.

  3. Pingback: Рубрика «Полезное чтиво». Выпуск 5 « XP Injection

  4. Great blog. I fully agree that velocity, delivering stories per unit time, is what matters.

    It is important for a team to continuously learn and improve, and defects can provide information that helps you do so. Sometimes basic measurements, like “number of defects that have been analyzed for their causes” (e.g. using Root Cause Analysis) and “learnings/improvements that have been made”, can help you make your improvement progress visible. If your learning/improvements go down (maybe due to time pressure, priorities, etc.), this will surely impact your velocity. Measuring improvement can give you an early warning indicator to watch your progress.

    But be sure to measure as little as possible, and actually use your measurements; nothing’s worse than a measurement that is only reported!

  5. I don’t know if I would call fixing bugs waste (at least not all of it). Yes, we could have written the code properly the first time but it would probably have taken longer. If the time taken to fix the bug is equivalent to the extra time it would have taken to do it right the first time, then you cannot talk about that time as being waste.

  6. @Antony Agree, but you often don’t know up front how much time it will cost to do it right the first time, or to solve bugs if you “do not do it right the first time”.

    The rules of thumb that I use are:
    – Ok to make mistakes, as long as you learn from them (don’t make the same mistakes over and over again)
    – Only improve your way of working when your waste is too high (don’t try to learn from every mistake)

  7. Pingback: DFW Scrum’s July MeetUp | DFW Scrum User Group
