In one of my blog postings just over a year ago I suggested that mature teams may not need to use Story Points. However, at the end of that post, I wrote:
[If] your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.
(You can read the full post here: link)
What I’d like to do with this post is cover a few pointers on how to make sure you’re getting the most out of Planning Poker.
First, keep in mind that Planning Poker is meant to be a very simple and quick process for a team to assign Story Points to a given story. It’s not meant to provide absolute precision; the Story Points assigned are meant to be relative to the sizings given to other User Stories on your team’s backlog.
Next, because Planning Poker is an estimation technique, please don’t assume that everyone on the team must be 100% in agreement with the number of Story Points being assigned to a given Story. Let me explain: typically, when a team uses Planning Poker to assign Points to a User Story, there will be a smattering of “votes.” As an example, let’s say that, after the very first vote for a given Story, there are a couple of 3’s, a 5, a couple of 8’s, and a 13. The scrum master should ask the folks who voted 3 why they thought the size was relatively small, and then ask the person who voted 13 why they thought the size was relatively big (i.e., over 4 times the estimated size given by those who voted 3). After a brief discussion, the scrum master should ask the team to vote again. Let’s say this time there’s one 3, several 5’s, and a couple of 8’s. A good scrum master should say something like, “OK team, let’s go with a 5 for this story and move on to the next story on the backlog.”

What the scrum master is looking for is “harmonic convergence.” Notice how, in the second vote, there were fewer 3’s and the 13 went away. The bulk of the votes were gravitating toward a 5, so you can see the team getting closer (“converging”) on a given number. Good enough! Don’t waste time by voting again, and again, and again (ad infinitum and ad nauseam) until the team members all vote the same – it’s not worth it. Estimation is not meant to be precise, and spending more and more time trying to be precise with estimates is a waste of time.
And here’s the main point that I’d like to make – let’s say you’re one of the team members who voted an 8 for this story even though the number finally assigned was a 5 – should you be upset? Should you start an argument? Should you want to keep voting until everyone agrees with you and assigns the Story an 8? No, no, and no. Even though you may really think the story should have been sized at an 8, you can see the team was converging on a 5, so just let it go and press on. Planning Poker is not a contact sport.
Finally, once teams get the hang of Planning Poker, most of them vote no more than twice on any given story, and they usually don’t spend more than a couple of minutes per story. Here’s the high-level process (a small sketch of the convergence check follows the list):
Step 1. If you've not assigned Story Points to any User Stories on your backlog then, as a team, quickly pick out what you all think is the smallest story on the backlog and automatically assign it a 1. All other Stories on the backlog will be sized relative to this Story.
Step 2. Grab the next User Story from the backlog.
Step 3. Have a brief discussion about the Story regarding the anticipated effort relative to the first story (the story itself should be familiar to the team since, if the writing of the Stories was done correctly, the team participated together in the writing of the given Story).
Step 4. Vote.
Step 5. If there appears to be a fairly strong consensus on a given number, go with that number, and then go back to Step 2….
Step 6. Otherwise, if there’s a fair amount of disparity in the votes cast, then have another brief discussion focusing on asking those who voted with the lowest number, and those who voted with the highest number, to give some insights into their respective thinking.
Step 7. Vote again – you’ll likely see some harmonic convergence on a specific number after the second vote, and then just go with that number. Now go back to Step 2 (lather, rinse, repeat)….
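For anyone who likes to see the loop spelled out, here is a minimal, purely illustrative sketch of the “fairly strong consensus” check in Steps 5–7. The vote values and the 50% threshold are my own assumptions, not part of any Planning Poker standard:

```python
from collections import Counter

def converged(votes, threshold=0.5):
    """Return the winning estimate if enough votes cluster on a single
    value to call it a "fairly strong consensus," otherwise None."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count / len(votes) >= threshold else None

# First vote: votes are spread out, so discuss the outliers and re-vote.
print(converged([3, 3, 5, 8, 8, 13]))    # None
# Second vote: most of the team has gravitated toward a 5 -- good enough.
print(converged([3, 5, 5, 5, 5, 8, 8]))  # 5
```

The point of the threshold is exactly the “good enough” judgment described above: once a clear majority clusters on one number, stop voting and move on.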
In sum, Planning Poker is a great technique to use to quickly assign Story Points to your User Stories. Just be sure that the team doesn’t over-engineer its use.
As always, please feel free to ask questions and share your experiences. Leslie and I look forward to hearing from you!
I have been a strong advocate over the years of using Story Points to track team velocity. In spite of many of the self-inflicted problems teams face when using Story Points (e.g., trying to initially assign some measure of time to a Story Point, or trying to normalize Story Points across teams, or confusing Story Points and task-hours, etc.), when used correctly, Story Points (and the velocity calculations they enable) make project tracking and projection incredibly easy and relatively accurate as well. (As a side note, check out the following article by Jeff Sutherland: link.)
What I’d like to cover in this posting is that using Story Points is not the only way to track a team’s velocity. For teams that have been writing User Stories for a while, and who have gotten good at breaking their work down into small enough chunks such that one or more stories can be completed (“Done!”) in an iteration, I would like to recommend that they forgo the effort of sizing User Stories with Story Points. They’ve already shown that they know how to start with an Epic and break it down into User Stories that can be completed in an iteration. Thus, they’ve demonstrated a relative amount of uniformity in the way that they come up with their backlog of User Stories – their stories are all in the same range regarding the amount of time and effort required to complete each one of them.
With that in mind, such teams could simply measure velocity based on completed User Stories – the principle is the same as with Story Points, but mature teams can forgo the effort required to size each User Story with Story Points (e.g., through Planning Poker).
Let me give a couple of examples – the first one using Story Points. Team A has 50 Story Points’ worth of User Stories on its backlog, and they’ve demonstrated over the course of time that they can generally complete about 10 Story Points’ worth of stories in an iteration (velocity = 10). In order to determine how long it will take to complete the remaining stories, it’s easy to divide 50 by 10 and come up with an estimate of around 5 iterations. If a customer is asking how long before the team can get to a feature that the customer is interested in, and the User Story associated with that feature appears about 30 points down the rank-ordered backlog of stories, then the team can tell the customer that it will be about 3 iterations (30 divided by 10).
The second example is that Team B has 10 stories on its backlog and they’ve demonstrated that they typically complete about 2 User Stories every iteration (velocity = 2). To determine how long it will take to complete the remaining stories, divide 10 by 2 and come up with an estimate of 5 iterations. If a customer is asking how long before the team can get to a feature that the customer is interested in, and the User Story associated with that feature appears about 6 stories down the rank-ordered backlog of stories, then the team can tell the customer that it will be about 3 iterations (6 divided by 2).
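Both projections boil down to the same one-line division; here’s a trivial sketch just to underline that the only difference between the two approaches is the unit of measure (the backlog sizes and velocities are the ones from the examples above):

```python
def iterations_remaining(backlog_size, velocity):
    """Project how many iterations the remaining work will take. Units
    just have to match: Story Points with points-per-iteration, or
    story counts with stories-per-iteration."""
    return backlog_size / velocity

# Team A, sized in Story Points: 50 points of backlog, velocity of 10.
print(iterations_remaining(50, 10))  # 5.0 iterations to clear the backlog
print(iterations_remaining(30, 10))  # 3.0 iterations to the customer's feature

# Team B, counting completed stories: 10 stories of backlog, velocity of 2.
print(iterations_remaining(10, 2))   # 5.0 iterations
print(iterations_remaining(6, 2))    # 3.0 iterations to the customer's feature
```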
If you’re part of a fairly experienced team, and if your team already has an established velocity, consider trying this approach in lieu of assigning Story Points – it can help save some time by not going through the sizing exercises. However, if your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.
As always, please feel free to comment, provide suggestions and recommendations, or even tell us about your experiences using Story Points and/or other ways of tracking velocity. Thanks!
Leslie and I will be hosting a webcast entitled “How Do Agile Teams Keep From ‘Waterfalling Backward’?” as part of the Global Rational User Community (GRUC) webcast series.
"Waterfalling backward" describes the situation where a team that has started down the Agile path reverts to waterfall ways of thinking when difficult situations arise. We'll cover some examples of what "waterfalling backward" looks like as well as several easy-to-adopt techniques that help teams re-align with Agile if they've started "waterfalling backward."
Note that three attendees will win a copy of our book, Being Agile!
Register here: https://attendee.gotowebinar.com/register/113940558500710914
We are looking forward to the session and hope you can join us!
Ever since our days in elementary school, we’ve been taught to fear failure. How many of us ever looked forward to going home on the day we received our report card and showing our parents the “F” we received for one (or more) of our classes? Hopefully none of you ever experienced that – but was it the fear of failing that drove you to do everything you could to ensure that you passed the classes you were taking?
Now think back to the days of the waterfall methodology in software development. Was there ever much worry amongst developers regarding failure of the project they were working on? Hardly… Most developers just did what they were told to do by the software architect, finished writing their code and threw it over the wall to the testers, and then went on to work on something else. Rarely (if ever) did they hear about how well customers liked (or didn’t like) the final offering since customers didn’t get their hands on the offering until months after the developers had finished writing the code, and there was typically a separate support team that handled any customer issues that came up after the offering was released. Additionally, most developers never interacted with customers (I even know of some organizations where developers were prohibited from talking to customers!).
Now, with the advent of Agile and DevOps, everyone is much more closely in tune with what customers think of your offerings – and this is a good thing! It clearly reflects the very first principle of the Agile Manifesto:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
The key word here is “valuable.” How do you know what you’re delivering to your customers is valuable? After doing some initial investigation of the marketplace, discussing options with current and potential customers, and doing some brainstorming within the team on creative ways to try to satisfy the needs, the best way to determine if what you’re creating is valuable is to deliver it! Put it out there – and see how your customers react. But, what if they don’t like it? Isn’t that just a failure…?
If you’ve adopted some of the main tenets of Agile, you’re likely delivering new features on a regular basis with very short intervals (vs. putting out a big bag of features once a year like in the old waterfall days). This is the “continuous delivery” that the agile principle above is getting at. Thus, even if you find out quickly that your customers don’t like something you’ve just delivered, it’s not a failure! You’ve just gotten some important feedback that you can immediately use to determine what needs to be changed, or perhaps determine that the new feature needs to be removed altogether. Being able to get quick feedback from your customers, and make quick changes to address the feedback you get (even if the feedback is unfavorable), is how you succeed.
In sum, put out small features on a regular basis. This allows you to get immediate feedback from your customers regarding the value of the features that have been delivered, and use your continuous delivery capabilities to make quick changes in response to the feedback. Like I said, even if customers don’t give you positive feedback, the feedback is still important so that you can respond quickly with better value. This is not failure – it’s the road to success! Thus the popularity of the phrase, “Fail fast, succeed sooner!”
As always, please feel free to ask questions and share your experiences. We look forward to hearing from you!
Agile offers many benefits, but one of the most irresistible is the promise of higher productivity. Once you become more productive, how do you know that you are, in fact, more productive? Most agile enthusiasts agree that high-performing agile teams move faster than traditional technology teams. Many aspects of agile contribute to improved performance, including the practice of maintaining working software, queuing theory, eliminating waste and multitasking, whole-team practices, and more. But given all these changes, how do you verify that it really works? I have been asked to “prove” that a team is more productive, and our teams found a way.
During the intake process, each team estimates the size of the incoming request. I will call the request an “intake item” to avoid confusion with the meaning of a story or an epic. Intake items can range dramatically, from something that takes a team less than a sprint to something that takes a team many, many sprints. Our teams call the estimates “SWAGs” to reinforce the idea that the estimate is an educated guess, but still a guess.
I have seen teams that SWAG by the dollar cost to do the work and teams that SWAG by the approximate length of time to complete the work. Both methods seem to work, but the important point is that the SWAG ranges are large enough to delineate typical work and reduce estimation error. For instance, one team might delineate SWAGs like the following:
Small: less than $100,000
Medium: $100,000 - $500,000
Large: $500,000 - $1,000,000
XLarge: greater than $1,000,000
Rules similar to those for user story writing also apply to making SWAGs: representatives of the team that will do the work estimate the SWAGs, the sizing is relative to other SWAG sizes, and once the SWAG is in place, the team does not change it. After the intake item is complete and rolled out into production, the item is marked as complete. The following shows a team that has used this mechanism for over a year.
The graph shows how many items the team completed each quarter standardized on time. It is quickly apparent that the team accomplished more each quarter. In other words, if the team size remains the same, then their productivity increased over time. It also shows that this team got better at doing really large items of work.
The next graph is another view of the same information but standardized on the number of items. That is, each item counts the same regardless of its SWAG size. This demonstrates that the majority of items completed are small.
The metrics show the results for one software product regardless of the size of the product or the size of the team building the product. We have tried this same approach with products developed by a single sprint team of fewer than 10 people, and with a larger team of over 150 people broken into many sprint teams. In all cases, we get consistent and believable results.
The value of this mechanism is that we show team productivity over time. Lines could be added to the chart to identify individual intake items. To add even more clarity we can associate real product deliverables to each of the items in the chart. This has proved to be a very simple way to validate that our migration to agile is having a positive impact. We typically combine this view with quality metrics to ensure that we maintain high quality as we increase output.
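There are many ways to do the bookkeeping behind charts like these; a minimal sketch might look like the following. The SWAG weights and the sample data are invented for illustration – a real team would calibrate its own weights, for example from the midpoints of the dollar bands above:

```python
from collections import defaultdict

# Assumed relative weights per SWAG size (purely illustrative).
WEIGHTS = {"Small": 1, "Medium": 3, "Large": 6, "XLarge": 10}

# Hypothetical completed intake items: (quarter, SWAG size).
completed = [
    ("2014Q1", "Small"), ("2014Q1", "Medium"),
    ("2014Q2", "Small"), ("2014Q2", "Medium"), ("2014Q2", "Large"),
    ("2014Q3", "Small"), ("2014Q3", "Small"), ("2014Q3", "Large"),
    ("2014Q3", "XLarge"),
]

# Weighted throughput per quarter ("standardized on time").
throughput = defaultdict(int)
for quarter, size in completed:
    throughput[quarter] += WEIGHTS[size]

for quarter in sorted(throughput):
    print(quarter, throughput[quarter])  # rising totals => rising productivity
```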
Agile teams can move the needle on large software projects once they are cross-skilled and can pull from the same backlog. I have seen this several times and the results are amazing. The overall throughput from backlog to production increases because the value provided by each team member increases and high-priority backlog items don’t get logjammed.
To get the higher throughput that agile promises, teams are encouraged to become cross-skilled so they can do any kind of work in any part of the code base. This enables them to work together better as a team and tackle the most pressing needs that arise, versus waiting for a specifically skilled team to become available. Once teams are cross-skilled, they increase their throughput because they are no longer limited in the backlog items they can take on. And it works. But be warned that the team will actually slow down at first because they need time to learn. They may slow down for several months depending on the size of the code base and the team’s ability to absorb new domain knowledge and skills. But then they will start to speed up.
It can be a tough sell to move to this type of team organization from a setup where teams had backlogs specific to one part of the code base, because stakeholders often have what they consider to be “their own team”: teams often work within a particular part of a larger code base and consequently have their own backlog. Product owners understandably love this because they can get “their” critical work done. But at the same time, they want “more” work to be completed.
Moving teams to be cross-skilled may be a hard sell to both the teams and the stakeholders because it is a very new way for both parties to work. And since the transformation takes time, it will be important to validate the improvements once teams actually are able to accomplish more. Once they are cross-skilled, multiple teams can work from a single backlog because any team can take the next item in it. This enables work to move around as teams become available. This is very good news. Order the backlog by priority and have highly skilled teams work their way through it.
But… that means that “my” item has to compete with other items in the backlog. That did not happen with the focused-team setup. Once teams work from a single backlog, stakeholders have to work together to prioritize their items into it. And they will find that some of their items do not make their way to the top. What does that say about those items? Maybe teams were doing the work “they could do,” but it may not have been the most valuable work across the code base.
With cross-skilled teams, you are getting “your stuff,” and fast. And it is “the stuff” that stakeholders agree is the most valuable. Once you get there, you may still have to find a way to show your stakeholders that they are getting a better result.
First off, let me start by saying that I am a big fan of user stories. In working with teams over the years I have found that user stories are *very* helpful in ensuring that product managers, engineers, executives, and others, all keep the customer in focus when discussing new features and capabilities. It’s just too easy for folks to embellish a feature with their own ideas about what the feature should provide when they lose sight of the customer. User stories also help by focusing everyone (including customers) on the absolute minimum that is necessary in order to meet the stated needs (thus saving everyone time and effort, and allowing more time to focus on additional items instead of trying to “boil the ocean” for just one feature).
However, I believe that there are times when creating user stories to delineate and track work is not needed. Let me provide one quick example. Several years ago I was working with a team that had responsibility for a code compiler that was built to run on one platform. As new operating systems came into existence, the team was asked if they could create a version of the compiler that would run on one of the newer operating systems. Since this team was just beginning their agile adoption journey, they thought they should create a backlog of user stories for their work and estimate the stories with story points, and so they contacted me for help. Since there were no new features being built – it was just that the compiler was, in essence, being ported to another platform – I suggested that they skip creating stories since they already had a list of compiler features from their current offering and knew that the same features would need to be part of this newer version. I told them to simply use their existing list and check off the features as they were completed. This was a bit of a surprise to the team since they thought that they had to use user stories in order to “be agile.” I told them that since they already understood their customers, and completely understood what was required in the way of features, that the effort to create and size user stories would be redundant.
That being said, I did work with them to ensure that they still adopted other agile practices. In this case, they were to work on only one feature at a time, make sure that each feature was as complete as possible before moving on to the next feature on the list, ensure that the new compiler actually ran on the new operating system at all times as the features were being completed, and avoid building up any technical debt (for example, by delaying fixes for defects or performance issues).
My recommendation is that if you already have a well-defined and well-understood list of items that need to be completed – for example, when repeating a list of items that have been done before – then creating a backlog of user stories is likely not needed. In these situations, if you do choose to omit the creation of user stories, please don’t omit the other agile practices such as: rank-ordering the list so that the most important items are at the top; working on – and completing – one thing at a time before moving on to the next item on the rank-ordered list; ensuring that the environment is always operational; and not piling up technical debt.
As always, please feel free to ask questions and share your experiences. We look forward to hearing from you!
A lot has been written about how many of the agile practices contribute to helping teams and organizations become more efficient. And this is no surprise since agile is built on top of many of the Lean principles that helped manufacturing become much more efficient.
In addition to the tremendous impact adopting agile practices can have on efficiency, another benefit (and one that doesn’t get quite as much “press”) is the reduction of risk that accompanies agile.
First I’d like to address how risk is significantly reduced by adopting the agile practice of “small batches.” When teams move to coding/testing/deploying small batches of functionality on a regular basis, they typically find defects much earlier than they would have had they still been using large batches. Many of you will likely recall the old waterfall days when tens of thousands of lines of code were written before the first test against the code was ever executed. Back then, when testing began, typically large numbers of defects were found in a short period of time and it would take a very long time to get the list of defects fixed. With the adoption of small batches, defects are often found within minutes of the code being checked in and the impact of any defect that is found, as well as the impact of the time required to fix the defect, is minimal since the code is fresh in everyone’s mind, and more code hasn’t been written that gets in the way of actually fixing the defect. The risk reduction is significant on just this point alone.
Another risk-reduction benefit of adopting small batches is the ability to get faster feedback from customers due to the fact that the new functionality is made available much faster than it would be if lots of different functions were packaged together into a big batch and released/deployed only once in a while. If a team puts out a small improvement, or a new feature, and gets feedback from customers that it doesn’t meet their needs, then the team has gained some valuable insights very quickly and their current investment is small. With large batches, if customers don’t like something, the team won’t know about it for quite some time because of the length of time from when the feature was built to when it was released/deployed along with all the other features in the big batch. Big batches are very risky due to the delay in getting feedback, as well as the increased costs of making changes (just think of all the additional testing that would have to take place if a change needed to be made and the batch contained ten features vs. just one feature).
The last risk-reduction benefit of adopting small batches concerns the pressure to add features. If only large batches of features are released infrequently, then typically product management wants to cram as many new features as possible into the current plan because, if the additional desired features don’t make it into the current plan, then it could be a very long time until the next plan is completed and the additional desired features are finally made available. Adopting small batches allows for continual re-ranking of the backlog of requested features based on customer feedback, new technologies, and market conditions, thus significantly reducing the pressure to cram tons and tons of stuff into a “big batch plan.”
Another agile practice that results in a significant reduction in risk is the adoption of “whole teams.” By “whole teams” I’m referring to teams whose members span disciplines and functions. While this is certainly nothing new in agile, how adopting a whole-team approach can reduce risk is something that doesn’t get discussed much. For example, when team members used to go off into their own little silos for months on end and rarely interacted with other team members, it was very difficult for anyone on the team to have any insight into what was being done by anyone else. And if one of the team members suddenly had to be away for a period of time (e.g., due to an illness or a family emergency), it was almost impossible for anyone else on the team to immediately pick up that person’s work.
With whole teams, not only is there regular communication and sharing of knowledge, but if a situation comes up where someone does have to be away for a lengthy period of time, the impact of the absence will be far less than it would be otherwise because others on the team will have a much better understanding of the work that person was doing and will be able to pick up the work with much less difficulty – thus reducing risk.
In conclusion, I would urge teams to include a focus on risk reduction as part of their regular reflections – it’s part of agile’s focus on continuous improvement.
Please feel free to comment on other areas that you’ve seen where adopting any of the agile practices has led to a significant reduction in risk. Thank you!
A couple of years ago a well-known franchise experienced a significant computer outage that affected hundreds of their stores throughout the U.S. The impact of the outage lasted for nearly a day, and the problem made the news headlines all over the place. Obviously these are the kinds of problems that businesses don’t want to have happen to them. Loss of sales, unhappy customers, and bad press don’t make for a good day…
With that in mind, let’s assume that it had been years since this franchise had experienced any kind of outage. Do you think customers (as well as shareholders) would have been happy to hear the company’s executives say something like, “But it’s been years since we’ve had any kind of outage!”? Probably not… Was anyone focused on how long it had been since the last outage, or were folks more interested in the “here and now”? Given that the outage made the headlines, it’s obvious that the lengthy delay in getting back to normal was the primary concern.
Now, let’s look at a hypothetical situation where a company experiences several computer outages every day, but the length of each outage is only one second (or less). Would anyone even notice? Probably not… Because the recovery time was so fast, the only folks who would likely even be aware of any of the outages would be the operations folks, and only then because they were analyzing the logs from the monitoring tools in place.
Hopefully you can see where I’m going with this – a company that experiences a failure only once every couple of years (i.e., having a very good Mean Time to Failure, or MTTF), but that takes a day or more to recover from an outage (i.e., having a very poor Mean Time to Recovery, or MTTR), is likely to see more negative results than a company that has a very poor MTTF but an extremely good MTTR.
So, just to be clear, I’m not suggesting that MTTF should be ignored – referring to my second example, I’d really like to know why outages are occurring so frequently, and then work to reduce the number of outages (even if they do only last a second or so). What I am suggesting is that MTTR should be one of your “front and center” metrics. The faster you can recover from an outage, the less noticeable the outage will be and, therefore, the more negligible the impact.
If you haven’t started measuring MTTR for your offering, please allow me to suggest that you need to begin doing so ASAP. And, once you have MTTR measurements in place, then begin working to improve them (no matter how good they may be right now).
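If your monitoring tools log when each outage starts and ends, both metrics are just averages over that data. Here’s a minimal sketch – the outage timestamps are made up purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical outage log: (start, end) pairs from monitoring data,
# in chronological order.
outages = [
    (datetime(2015, 3, 1, 9, 0, 0),   datetime(2015, 3, 1, 9, 0, 1)),
    (datetime(2015, 3, 1, 14, 30, 0), datetime(2015, 3, 1, 14, 30, 2)),
    (datetime(2015, 3, 2, 11, 15, 0), datetime(2015, 3, 2, 11, 15, 1)),
]

# MTTR: average time from the start of an outage to recovery.
mttr = sum((end - start for start, end in outages), timedelta()) / len(outages)

# MTTF: average uptime between the end of one outage and the start of the next.
gaps = [outages[i + 1][0] - outages[i][1] for i in range(len(outages) - 1)]
mttf = sum(gaps, timedelta()) / len(gaps)

print(f"MTTR: {mttr.total_seconds():.1f} s, "
      f"MTTF: {mttf.total_seconds() / 3600:.1f} h")
```

This made-up log matches the second scenario above: outages happen often (poor MTTF) but recovery takes a second or two (excellent MTTR), so customers barely notice.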
Here is an update to my Part 1 blog posting on whether or not teams are really DONE when they release software. Just to recap, I worry that agile teams do not have a reliable mechanism to ensure that new features or capabilities released into production succeed as planned. Without a verification tool in place, teams may not be able to confirm success. And worse, once teams get focused on new work they may stop tracking what was released: “out of sight, out of mind.” Of course, most teams respond to issues that get reported, but if nothing is reported, does that mean success or not?
A team experimenting with this decided to add a verification safeguard to their DONE criteria. Each story has to have a “plan to monitor feature success” in production. For this particular team, monitoring typically requires that the software log success and failure data and that the team have a way to track what is logged over time. Their DONE criteria ensure that they add logging abilities when they design the feature, and they test that the logged data delivers the information required. When the new feature is in production, the team uses a monitoring tool to report on usage and failures. Once an average logging pattern is established, they add alerts to flag problems.
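The specifics will vary by stack, but the logging half of such a plan can be very small. Here’s a minimal sketch – the feature name, the Order type, and the process() stand-in are all hypothetical, not this team’s actual code:

```python
import logging
from collections import namedtuple

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout.one_click")  # hypothetical feature name

Order = namedtuple("Order", "id")

def process(order):
    """Stand-in for the team's existing business logic."""
    return {"status": "ok", "order": order.id}

def one_click_checkout(order):
    """Wrap the feature so every success and failure is logged; a
    monitoring tool can then trend usage and error rates post-release."""
    try:
        result = process(order)
        log.info("one_click_checkout success order=%s", order.id)
        return result
    except Exception:
        log.exception("one_click_checkout failure order=%s", order.id)
        raise

one_click_checkout(Order(id=42))  # logs a success line for order 42
```

The monitoring tool then only has to count the success and failure lines over time to establish the baseline pattern and raise alerts.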
Not every team has this type of logging and monitoring capability in place, but adding this DONE criterion can serve as a driver to make that happen. It took the team mentioned above multiple attempts to get the right tools in place, but the effort was very beneficial. They have already used the new methodology to find an issue (and fix it) before their users did. And by looking at success trends, they have been able to verify that uptake of their features was as planned.
If your team does not have a strategy to validate success of new features or capabilities, I recommend that you add this initiative to your backlog. You may have to start from the beginning – get the right design strategy in your code and then add the tools to monitor. Use this new DONE criteria as a mechanism to get started. A story is DONE when there is a defined plan for monitoring the solution in production. Check!
One of the reasons I found agile compelling when I first heard about it was the notion that you are “DONE” with a feature only when your customers are successfully using it in production and are pleased with the capability. I have coached teams to establish success criteria for user stories but have not required that those criteria include validation of success in production. That feels like an oversight now. I am discovering scenarios where the desired feature does not deliver the success promised and there is no automatic mechanism to address the shortfall.
Now that I work in eCommerce, the business pays keen attention to data that validates that the software is successful. Even with this attention, our technology teams often lose the connection between getting capabilities into production and verifying the value they do or do not produce.
I now recommend that teams take a new look at success criteria and include a measure of post-production success. Maybe teams add a “post-release validation” criterion to their DONE list. The first challenge is for teams to identify useful data that enables them to track and validate success. The second challenge is to figure out when to assess the data and how to respond to it.
Solving the first problem requires that the software or customer feedback mechanism provide a way to measure success. The process to collect data is usually hardwired into eCommerce platforms, so measurement should not be hard. The software could track how often the capability is used, or how well the capability works by tracking errors as a percentage of successes.
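The error-tracking arithmetic is a one-liner. Here’s a trivial sketch, with invented sample counts, reading the metric as failures over total attempts:

```python
def error_rate(errors, successes):
    """Errors as a share of all attempts; 0.0 when the feature is unused."""
    total = errors + successes
    return errors / total if total else 0.0

print(error_rate(3, 997))  # 0.003 -> 0.3% of attempts failed
```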
Success may be harder to track when no metrics are accessible to the team. In these scenarios teams can measure the speed of uptake of a new release, leverage customer surveys, or hold post-release customer feedback sessions. None of these options provide the validation that data offers, but they would provide the team with a tool to ensure post-release success.
It is important to re-establish our focus on post release success validation. I recommend that each story include not only success criteria for release but also for post-release.
One of the things that has been extremely popular in software organizations for a long time is counting the number of defects found via testing. I propose that teams stop doing this, for two reasons.
First, one of the lines of thinking behind counting defects is that it is somehow a measure of quality. If we were running an assembly line in a manufacturing plant, and the assembly line didn’t change from one week to the next but the number of defects increased, we would know there was a quality problem somewhere – something on the assembly line was likely broken. Despite some similarities, software development is not an assembly line – unlike on an assembly line, every line of code that is written and tested has never been written or tested before, so defect counts simply don’t correlate with quality.

If a team found 25 defects last week and 50 this week, what does that tell us? Is the code quality really getting worse, or did the team just do a lot more testing? If the latter, did the tests executed reflect real-world usage, or were the tests all edge cases? Did the team deploy the code into a new environment that had never been used before? I’m sure you can come up with many more scenarios… The possible reasons for the higher number of defects this week are almost boundless, and the time spent trying to determine why the variation occurred would be better used to: work more with customers to better understand their usage patterns and particular needs; create more needed functionality; improve processes further; cross-train team members; adopt a new tool; etc. All of which leads to my second point…
Agile software development has its foundations in Lean Thinking, and one of the Lean principles is to eliminate waste. Does counting defects, and spending time trying to figure out what causes variations in the numbers of defects from one time period to the next, contribute directly to the success of the project? If not, then the time spent doing so should be viewed as a waste – time that could be better spent doing more important work.
In conclusion, I’d like to leave you with two thoughts regarding defects. First, when defects are found, they should be fixed immediately – period. Don’t allow a backlog of defects to accrue. If it makes sense to do some root-cause analysis to determine why a particular defect occurred, that’s fine – do so when it makes sense, but it likely does NOT make sense for the vast majority of the defects found. Second, a mature agile team should be able to have “difficult” conversations when needed: “Hey Scott, I’ve noticed that I’m finding a lot of defects in your code this week – is there anything that’s distracting you, or anything I can help you with?” Instead of taking umbrage at such comments, I should be thankful that my team is willing to raise the issue and have the discussion.
So, the next time you’re asked to track the number of defects found, ask “Why…?” The answer will likely shed light on ways to eliminate waste and help overcome the “That’s the way we’ve always done it!” mentality, as well as foster a relentless, continuous improvement mindset.
As always, thoughts, comments, and questions are most welcome! Thank you!
I recently read a fascinating article in a farm journal that made me think about software engineering. You are likely thinking that there’s not much that farming and software engineering have in common and, for the most part, you’re right. However, in this particular instance, the story relates directly to typical practices in the software industry today.
The article told the story of a group of research scientists who had spent years working with bell peppers. The scientists embarked on a program to cultivate peppers that were more disease resistant, could flourish under sub-optimal conditions such as poor soil, limited water, and lack of full sun, and that would also produce greater numbers of peppers than the original.
After quite a bit of time, effort, and funding they managed to achieve some moderate success. Then someone suggested that they take some of their latest crop of peppers to a local restaurant and get some input from one of the chefs. While the article didn’t go into specifics about the chef’s reaction, it was clear that the reaction was akin to throwing the pepper in the garbage and suggesting that the scientists could have spent their time better elsewhere. The one thing that mattered most to the chef – the taste of the pepper itself – had not even been considered as one of the criteria to be taken into account by the research scientists. The scientists got caught up in their own little world of science and totally forgot that peppers were food and not specimens.
This immediately reminded me of the software industry and how so many teams have forgotten why they’re creating software. It’s not to simply write code, or execute test cases, or try out some new tool, or deploy a capability to a website – the software should ultimately meet customers’ needs. And the best way to do that is to work directly with customers as the capabilities are being created to ensure what’s being developed meets their needs.
Fortunately, the move to Agile and DevOps is making it much easier to work with customers due to the practices of continuous development and continuous deployment, as well as an intense focus on customer engagement and the monitoring of customer usage of the offerings.
The moral of the story…? Don’t forget the customer!
Just like the drip, drip, drip of a leaky faucet, we lose productive time by being regularly distracted. Some distractions can't be eliminated, but many can... I recently wrote an article for InformIT that was adapted from one of the chapters in the book that Leslie and I co-authored entitled Being Agile. The article discusses a pervasive form of regular distractions that many people have taken for granted nowadays. I thought I'd pass along a link here:
I hope you enjoy the article and that it provides some food for thought... As always, comments and questions are welcome. Thanks!
Short feedback loops are one of the keys to the success of agile because they enable teams to learn fast and adjust. This technique is leveraged in numerous practices used by agile teams, including the daily standup, sprint demos, code check-in, build and validation cycles, pair programming, and continuous delivery. One of the key benefits that convinced me to try agile was its claim to deliver greater value with higher quality. After I had experimented with agile for a while, I started to understand just how critical short feedback loops are to achieving these goals.
Short feedback loops enable you to learn fast so that you can adjust while the costs are low. It has long been understood in software development that the cost to fix a defect increases – in many cases dramatically – the longer you wait to fix it. Some of the reasons: more code may have been added, making the defect more complex to fix; the new code requires additional testing and fixing; and by the time you get to the fix, you may have lost familiarity with the code in question. To combat this, it is critical to find defects soon after the code is checked in. One short feedback loop used here is to automatically run the code through a series of progressive test gates. Additionally, if the code is checked in frequently (another short feedback loop), developers get quick feedback on quality.
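As a concrete (and entirely hypothetical) sketch of progressive test gates – the gate names and pytest commands below are placeholders for whatever your pipeline actually runs – the idea is simply cheapest-first, fail-fast:

```python
import subprocess
import sys

# Hypothetical gates, ordered cheapest-first so a bad check-in fails fast.
GATES = [
    ("unit tests",        ["pytest", "tests/unit"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("system tests",      ["pytest", "tests/system"]),
]

for name, command in GATES:
    print(f"Running {name}...")
    if subprocess.run(command).returncode != 0:
        print(f"Gate failed: {name} -- feedback goes straight back to the team.")
        sys.exit(1)

print("All gates passed.")
```

Stopping at the first failing gate is what keeps the loop short: the developer hears about the problem minutes after check-in, not weeks later.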
Another manifestation of short feedback loops is the daily standup, which enables teams to identify critical blockers and get help fixing them before the next meeting. Someone has a problem; someone jumps in to help fix it. Shorten the time to fix a problem; reduce the cost.
To ensure that agile teams are meeting the needs of their customers, they demo their new work at the end of each sprint to get timely feedback. This version of a short feedback loop is fairly well known to agile teams, but it may be surprising how few teams actually do these demos with their end users. These demos give the product users ongoing updates on how the new code is progressing and the opportunity to provide feedback. This kind of feedback is like bug fixing: the sooner it is applied, the more cost-effective it is. Going a step further, releasing functionality frequently enables users to get traction and provide feedback before the functionality is complete. Customers have to be willing to experiment and respond, but the win for them is that they get an active voice in getting the value they require.
As teams continue their agile adoption, it is useful to remember why we adopt various techniques to make sure that we get the value promised. To deliver higher value with better quality we use short feedback loops so that we learn fast and adjust.