This week I thought I'd re-post an article that appeared earlier on DeveloperWorks.
Early on, when I first started coaching teams on their transition to Agile, it was quite common for teams to come to me and tell me that they hadn't completed any User Stories by the end of an iteration. Initially, teams wanted to use this as a reason to increase the length of their iterations (I recommend two-week iterations – and teams always seem to want longer iterations). What they were failing to realize is that there were fundamental reasons why they weren't completing stories, and iteration length by and large had nothing to do with it. Let me explain....
One of the first questions I'd ask a team in this situation was whether they were assigning User Stories to team members with the idea that each member was responsible for completing his or her User Story independently of others on the team. Almost invariably, the answer was "Yes, but how did you know that...?" What was happening in these situations was that each member of the team would work on his own User Story and, at the end of an iteration, the team would have six, or seven, or eight partially-completed User Stories. Nothing was completely "Done!" by the end of the iteration – just a bunch of partially-completed stories. And since each team member was focused on completing his own story, there was no "incentive" to jump in and help another team member get a story done.
In this situation, we would remind the team about the importance of the Agile practice of "whole teams." Part of this practice recommends that a team should try to work together on one User Story at a time, complete it, and then, as a team, move on to the next story on the backlog. This way, the team is constantly completing User Stories and will have, at most, one unfinished User Story at the end of an iteration.
In order to pull this off, there's another practice that we recommend teams adopt – and that is the creation of small implementation tasks for each User Story. At the beginning of an iteration, the whole team goes through the process of creating the various tasks needed to complete a story (development tasks, test tasks, user documentation tasks, automation tasks, code review tasks, etc.). These tasks should be small (on the order of 4 to 8 hours, and no more than 16), which then allows for a lot of "parallelization" of the work. For example, instead of having one developer work on a coding task for five days, you could have five developers working in parallel on five one-day development tasks and, at the end of a single day, have all the coding completed that otherwise wouldn't have been finished until the end of five days. Additionally, there should be corresponding documentation, test, automation, and code review tasks for each of the development tasks. This way, a team achieves a very tight integration between dev, doc, and QA throughout each User Story (not to mention each iteration), and the team also always has working code since it completes one User Story before moving on to the next. By the way, this approach is not something that comes naturally to a team (especially if they're just beginning the transition to Agile), but give the team time and encouragement to try it out – the effort is worth it.
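To put rough numbers behind the parallelization idea, here is a minimal sketch; the task names and sizes are invented for illustration, not taken from any real backlog:

```python
# Illustrative sketch: a story broken into small tasks, worked in parallel.
# Task names and hour estimates are hypothetical examples.

tasks = [
    ("code: input validation", 8),      # hours
    ("code: persistence layer", 8),
    ("test: functional cases", 8),
    ("automation: regression suite", 4),
    ("docs: user guide update", 4),
    ("review: code review", 4),
]

total_hours = sum(hours for _, hours in tasks)

# One person working the tasks end-to-end:
serial_days = total_hours / 8                       # 8-hour workdays

# The whole team swarming, one task per person (ignoring dependencies):
parallel_days = max(hours for _, hours in tasks) / 8

print(f"Total effort: {total_hours} hours")
print(f"Serial:       {serial_days:.1f} days for one person")
print(f"Swarming:     {parallel_days:.1f} days elapsed, same total effort")
```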
A few closing comments: obviously, piling everyone on a scrum team onto one story won't always work, especially with very small User Stories. The goal – even with a very small User Story – should be to put two or three engineers on that story while the rest of the team works on an additional story. Keep the number of stories being worked on at the same time to the smallest number possible. This will go a long way toward helping you consistently finish User Stories by the end of every iteration.
We welcome any comments and suggestions you may have -- thanks!
Years ago, waterfall projects would typically shun requests for new features or capabilities being added to a project that was underway. After all, project success was typically measured by conformance to "the plan" – and a new request was not part of the plan. These waterfall teams would complete the opening sentence something like this: "When we get a new feature request, we put it on our list of features for consideration in the next release." Thus, an opportunity to respond to changing conditions, changing customer needs and expectations, or changing competitive situations in a timely manner was lost.
Other waterfall projects tried to add the new feature alongside the work that was originally committed in "the plan," but that would often spell disaster since teams were typically already working feverishly just to meet the original commitments. In order to add something else to an already-booked plan, there were very few options – mainly overtime and/or cutting corners – neither of which is a good thing. Oh, sure, there was always a planned "buffer" added into "the plan" to account for such contingencies, but when have you ever seen it work out the way it was intended? The skeptic in me would guess rarely, if ever… These teams would complete the opening sentence something like this: "When we get a new feature request, we add it to the plan and start mandating extensive overtime." Pretty soon, employees are burnt out, frustrated at not having a life, and begin to actively seek employment opportunities elsewhere.
Mature agile teams complete the sentence something like this: "When we get a new feature request, we see it as an opportunity to get ahead of the competition as well as make our customers happy and successful." Typically new requests surface because of changes in the competitive landscape, changing customer needs and expectations, new market opportunities, or even some combination of these and other factors. And when these new requests arrive at the door of an agile team, there's no drama at all, no worries about overtime, no cutting corners, no consternation about not meeting "the plan," etc. Agile teams have great flexibility to handle new requests because they are always working on just one thing at a time – not numerous features in parallel. Waterfall teams typically worked on lots of features in parallel since all the features were committed up-front and teams had to show progress on all of them from the very beginning of the project. Thus, if a new request arrived, there was no flexibility to drop a less important feature from the plan since everything was already underway.
Agile teams, however, work on one thing at a time. And the way they do this is by having the features rank-ordered from the most important to the least important. If a new request arrives, the team finishes working on the feature currently in progress – it then has complete freedom to begin working on the brand new request. If a particular ship-date has to be met, then the team drops the least important feature from the list and replaces it with the new request. This way, the team is always working on the most important feature and has incredible flexibility to handle changing circumstances. After all, conformance to “the plan” is not the right measure of success – meeting customer needs and adapting to changing market conditions is a much better measure of success.
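To picture the mechanics, here is a tiny, hypothetical sketch of a rank-ordered backlog absorbing a new request under a fixed ship date (the feature names and the helper function are invented for the example):

```python
# Hypothetical sketch of a rank-ordered backlog, most important first.
backlog = ["feature A", "feature B", "feature C", "feature D"]

def handle_new_request(backlog, request, rank, fixed_ship_date=True):
    """Insert a new request at its rank; if the ship date is fixed,
    drop the least important item to keep the committed scope."""
    backlog.insert(rank, request)
    if fixed_ship_date:
        backlog.pop()  # drop the lowest-ranked feature
    return backlog

print(handle_new_request(backlog, "urgent customer request", 0))
# -> ['urgent customer request', 'feature A', 'feature B', 'feature C']
```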
To summarize, working on one thing at a time (always the highest-ranked, most important item) and getting it done before starting on something new is the most efficient and effective way to work. It also allows for the greatest flexibility – especially in markets where changes can occur lightning fast.
One thing at a time, one thing at a time, one thing at a time...
No doubt you’ve already heard about DevOps. What you may not be aware of is how DevOps builds on Agile principles and practices. For example, Software-as-a-Service (SaaS) projects have adopted DevOps in order to have the ability to frequently update their websites with additional features and capabilities. Some SaaS offerings are updating their production websites multiple times a day – which is a far cry from the once-a-year release cycle that was rather typical for products built using waterfall.
You might be thinking to yourself, “Multiple times a day…? How does that happen…?” I’ll touch briefly on three essentials of Agile that contribute to this capability. The first is breaking work down into small batches and completing one small batch of work prior to moving to another. You’ll no doubt recognize this as a Lean principle derived from queuing theory. Simply stated, Lean has shown that small batches of work transitioning through a system are far more efficient than an occasional big batch of work. For our purposes, this means that having small amounts of code written, tested, and automated, where any defects have been fixed, and where any necessary user-documentation has been written, is far more efficient and productive than just having a large batch of code written that hasn’t been tested or documented.
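As a back-of-the-envelope illustration (my own simplified arithmetic, not a formal queuing-theory result), imagine ten equal work items that each take one day:

```python
# Illustrative arithmetic: ten equal work items, one day of effort each.
items = 10

# Small batches: ship each item as it finishes (item i is done on day i).
small_batch_avg = sum(range(1, items + 1)) / items  # 5.5 days

# One big batch: nothing ships until everything is done on day 10.
big_batch_avg = items                               # 10.0 days

print(f"Average delivery time, small batches: {small_batch_avg} days")
print(f"Average delivery time, one big batch: {big_batch_avg} days")
```

On average, each item reaches users in roughly half the time when shipped individually, even though the total effort is identical.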
The second point is Agile’s emphasis on “working software as the primary measure of progress” (see the 7th Principle of The Agile Manifesto). This ties in directly with the previous point – as small amounts of code are written, the goal is to get that code to production-level status as quickly as possible (“working software” that is “Done!”) so that the new code can be provided to customers with high confidence that it will work as desired, have the right level of quality, and help customers address a particular need.
The last point regards the tremendous emphasis in Agile on automation. With DevOps, the need for automation increases significantly because of the desire to deploy new features and capabilities on such a short timetable. Not only are builds and testing automated, but test environments are automatically provisioned, code that passes the automated test suite is automatically deployed into production and, if a problem occurs anywhere along the way, the process is stopped, the code is automatically rolled back, and appropriate personnel are automatically notified of the problem. The ultimate goal is to make a small code change, “push a button,” and have the new feature or capability automatically rolled out without any further human intervention.
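As a deliberately simplified sketch of what such a pipeline might look like (the stage names and helper functions are hypothetical placeholders, not the API of any real CI/CD tool):

```python
# A deliberately simplified deployment-pipeline sketch. All stage names and
# helpers below are invented for illustration.

def build(change):           return True  # compile and package the change
def provision_env(change):   return True  # stand up a test environment
def automated_tests(change): return True  # run the full automated suite
def deploy(change):          return True  # push to production

def rollback(change):     print(f"Rolling back {change}")
def notify_team(message): print(f"ALERT: {message}")

def run_pipeline(change, stages):
    """Run each stage in order; on any failure, stop, roll back, notify."""
    for stage in stages:
        if not stage(change):
            rollback(change)
            notify_team(f"{stage.__name__} failed for {change}")
            return False
    return True

if run_pipeline("change-123", [build, provision_env, automated_tests, deploy]):
    print("change-123 rolled out with no further human intervention")
```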
No doubt you can tie additional Agile principles and practices to the DevOps initiative, but I thought I’d touch on just these few to show how DevOps is nothing more than a logical next step after Agile – they are not two independent approaches to building software. As always, please feel free to share your thoughts! We look forward to hearing from you!
Just like the drip, drip, drip of a leaky faucet, we lose productive time by being regularly distracted. Some distractions can't be eliminated, but many can... I recently wrote an article for InformIT that was adapted from one of the chapters in Being Agile, the book that Leslie and I co-authored. The article discusses a pervasive form of regular distraction that many people take for granted nowadays. I thought I'd pass along a link here:
I hope you enjoy the article and that it provides some food for thought... As always, comments and questions are welcome. Thanks!
You might recall that I ended the last blog post with the following:
One thing at a time, one thing at a time, one thing at a time…
Why are we so convinced that this is the best way to work? Hopefully my previous post made it clear that the flexibility teams have to respond to changing circumstances by working on one thing at a time is an excellent benefit. There are additional, significant benefits to working on one thing at a time – two of which I explore in this post.
How do you know that what you’re creating will do well in the marketplace, meet customers’ needs, and generally provide high customer satisfaction? In the old, waterfall days we used to have “beta programs” where customers would be granted access to some pre-release version of the software that they could use in-house and then provide feedback. The biggest problems with beta programs revolve around the lateness of the feedback received from customers. Typically, if customers didn’t like something, or wanted something different, it was too late to make any major changes – the team was too busy fixing the backlog of defects, and the committed ship-date was often very close when the beta program started. Customers were often told, “We’ll get to that in the next release.”
However, by doing one thing at a time, your team has a much greater opportunity to understand, with a fairly high degree of certainty, that what you’re creating will meet customers’ needs. And the reason this is so is that you can involve your customers in your project throughout the entire lifecycle. When the team completes a small feature, or completes even a small part of a larger feature, that feature can be demo’d to customers. It’s common for agile teams to be demo-ing some completed functionality to customers at the end of their very first iteration. And customers love it! It gives the customers the ability to “put their fingerprints” on the product – they get to see what’s being built, provide immediate feedback, and then see the development organization respond quickly to that feedback. In other words, customers are able to provide real-time, on-going feedback so that when the product finally does ship, customers will be very comfortable in knowing that the product provides what is actually needed.
One other benefit is that customers can actually request that the product be made available earlier than planned. Say, for example, a team plans on three major features for its next release. By working on one feature at a time, it’s quite possible that customers could ask that the product be released as soon as the first two features are complete – or even just the first one! If the feature is needed, and customers can see how they can immediately make use of it, and the team has actually completed the feature, then releasing just the very first feature (instead of waiting until all three planned features are complete) may make the most business sense. But the ability to even have such a business discussion is only possible if one feature at a time is being worked on. If all three features are started in parallel, then the team has created inflexibility since nothing can ship until all three features are completed (yes, dark launches address this problem, but the additional overhead of ensuring what’s “dark” doesn’t interfere with what’s been completed must be considered).
So, to sum up, how do you know? You know because your customers are providing you real-time, on-going feedback throughout the project to help ensure that what you deliver is what they need. It’s a win-win situation for everyone!
Velocity is a great tool for agile teams, but it is easy to "over-engineer" the concept -- or even misuse it. Velocity is a mechanism for understanding the pace of a team that uses story points to size its user stories.
First, a quick comment regarding story points – Leslie and I advocate the usage of “unit-less” sizings of user stories, meaning that story points are not associated with any measure of time. Story points are simply an assignment of size relative to other user stories that a team works on. Thus, a 3-point story should take approximately 3 times the amount of effort for the team as a 1-point story -- irrespective of how long it takes to complete either one. Note that story point sizings are just relative sizings and they work no matter how long it takes to actually complete a given story of a given size.
Back to velocity… Velocity, simply put, is the term used to describe a team’s observed capacity for completing work. Velocity is calculated by taking the average number of story points a team completes each iteration. For example, if a team completes 8 story points’ worth of user stories in its first iteration, 12 in the second, and 10 in its third, then the team’s velocity is 10 (the average of 8, 12, and 10). Note that several iterations had to be completed first in order for the team to determine its velocity (we advocate a minimum of three iterations before starting to use velocity for project planning and tracking).
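In code, the calculation is as simple as it sounds; a quick sketch:

```python
def velocity(points_per_iteration):
    """A team's velocity: the average story points completed per iteration."""
    return sum(points_per_iteration) / len(points_per_iteration)

print(velocity([8, 12, 10]))  # -> 10.0, after a minimum of three iterations
```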
Even though the concept is simple, I’ve seen teams misuse velocity and, as a consequence, reduce its value. I’ll address three of the most common misuses in the following paragraphs:
First, velocity is incorrectly used as a "report card" each iteration. Assume the example I used above where the team has a velocity of 10. Let’s say that in their next iteration they complete 6 story points. In organizations where velocity is not understood, the team will likely be called on the carpet and asked to explain what happened (the thought being that their productivity dropped by 40%!). Given that the assignment of story points is an estimation technique and not something to be measured to 26 digits of precision, there should be an understanding that the number of points a team completes will vary from iteration to iteration. Perhaps in this iteration the team is working on a story that they underestimated. Big deal… Chances are in a subsequent iteration they’ll work on a story that was overestimated. Over the course of several iterations, the team's velocity will average out. Please do not use the number of story points a team completes in any given iteration as a report card. Use the average and save everyone a lot of headaches from micromanagement.
Secondly, and closely related to the first, is teams erroneously claiming "partial credit" for a story. When teams see the number of story points completed in a given iteration used as a report card, they tend to start trying to claim partial credit for an incomplete story. In this scenario a team might say, “We got the code done, so that’s worth 2 points out of this 5-point story.” Even though the testing, defect fixing, automation, etc., haven’t yet been completed, they're trying to boost the total number of points claimed in the iteration. Unfortunately, this approach defeats a number of the benefits of adopting agile. First, it breaks down team synergy by reverting back to an us-vs.-them mentality. When the whole team gets “credit” for completing a user story only when it’s actually complete, the team is motivated to help each other out. If partial credit is allowed, this can pit parts of the team against other parts. The second problem is that it violates the agile principle of working software as the measure of progress. Code that has been written, but is untested, is not “working software.” Code that has been written and tested, but where defects haven’t been fixed, is not “working software.” Working software should be the focus for the team…

Let me close this section with another simple example: let’s say a team is targeting completion of two 5-point stories this iteration (their velocity has been ~10 points an iteration, so targeting 10 points’ worth of stories makes good sense). Let’s say they complete the first story and are almost done with the second. If the team takes partial credit, they might claim 4 of the 5 points for the second story and, thus, their story point total for this iteration would be 9. Next iteration, they’d plan to finish off the last bit of the second story and then most likely target an additional 10 points' worth of new stories (since they have been averaging 10 story points so far and they almost completed 10 points last iteration). Let’s say they complete all that work in the next iteration for a total of 11 points. What’s their velocity for the two iterations? It’s 10. Now let’s take the “no partial credit” approach. The team would get 5 story points for its first iteration and 15 for its second. What’s the velocity for the two iterations calculated this way? Right, it’s also 10. This is why focusing on velocity as an average is so liberating for teams – they’re not constantly under the microscope and don’t feel compelled to play games with partial credit.
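If you'd like to check that arithmetic yourself, here it is in a few lines:

```python
# Story points claimed per iteration under the two approaches above.
partial_credit    = [9, 11]  # iter 1: 5 + 4 "partial" points; iter 2: 1 + 10
no_partial_credit = [5, 15]  # credit only when a story is actually complete

average = lambda xs: sum(xs) / len(xs)
print(average(partial_credit))     # -> 10.0
print(average(no_partial_credit))  # -> 10.0 -- same velocity, no games needed
```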
The last abuse of story points and velocity that I’ve seen is teams getting their velocity "dictated" to them. They’re told to achieve some level of velocity before even knowing what their velocity is. Remember, as mentioned at the opening, velocity reflects the team’s observed capacity to complete work each iteration. Typically, when teams are dictated a velocity, it’s in “fixed content/fixed date” projects (which are anathema to agile). Teams wind up working tremendous amounts of overtime to meet their dictated velocity (which is also anathema to agile).
Finally, there are legitimate ways to go about increasing velocity, as well as illegitimate ways. The most prevalent illegitimate way is forcing a team to work overtime – this is not sustainable and will only cause problems down the road. The legitimate way to increase velocity is to pursue what I call “enabling” practices: increasing test automation, improving builds by adopting continuous integration, adopting test-driven development and pair programming, automating the provisioning of test environments, automating deployments, and many others. Yes, putting effort into maturing these practices may mean you slow down the amount of functionality you produce in the short run, but the long-term benefits (such as sustained velocity improvements) will be well worth the initial investment. Velocity is simple – keep it simple. Increasing velocity is where the hard work is.
Leslie and I will be hosting a webcast entitled How Do Agile Teams Keep From "Waterfalling Backward?" as part of the Global Rational User Community (GRUC) webcast series.
"Waterfalling backward" describes the situation where a team that has started down the Agile path reverts to waterfall ways of thinking when difficult situations arise. We'll cover some examples of what "waterfalling backward" looks like as well as several easy-to-adopt techniques that help teams re-align with Agile if they've started "waterfalling backward."
Note that three attendees will win a copy of our book, Being Agile!
Register here: https://attendee.gotowebinar.com/register/113940558500710914
We are looking forward to the session and hope you can join us!
First of all we would like to wish everyone a Happy New Year! We trust that this year will be full of good challenges and significant achievements for each of you!
Regarding metrics, if there's one thing that engineers love, it's numbers... And if there's one thing that managers love, it's status. Put the two together and oftentimes you have teams that track tons of metrics across all facets of their projects.
While it may give the impression of being really thorough and providing all sorts of deep insights, if you find yourself in an organization that tracks lots and lots of metrics, I would encourage you to take a step back and limit the focus to just what's really important.
One of the principles of the Agile Manifesto reads, "Working software is the primary measure of progress." If you stop and think about it, that principle makes perfect sense. Customers aren't going to buy untested code and they're not going to be too happy if they buy something that falls down broken all the time because major defects haven't been fixed. All other metrics should pale in comparison to the one metric of having working software. Putting focus on this one metric will drive many of the benefits that agile promises -- three of which I'll touch on here.
First, focusing primarily on having working software provides a shared, common goal for the organization. This means that no one on the team gets any credit for his or her individual efforts, which, in turn, provides plenty of motivation for helping each other out so that the team can take credit for having working software. A lot of the old "us vs. them" mentality of the waterfall days is eliminated when the focus is placed on working software as the primary measure of progress.
Next, having the focus on working software means that teams will actually achieve working software much earlier in a project than was typical with waterfall projects. Having working software provides plenty of opportunities to involve customers earlier in the project since customers can actually see demos of the functionality while it is being implemented and can even download it and test-drive it in their own environments. Getting customer feedback early is a huge benefit, especially if customers tell you you're going down the wrong path since you'll be able to make mid-course corrections prior to releasing the product.
Finally, having working software enables teams to move into the DevOps space where automation takes on an even more significant role, where releases are more frequent (up to multiple times a day for some Software-as-a-Service offerings), and where opportunities for doing things like "A/B" testing can be pursued. None of this is possible without having working software.
To conclude, if you agree that focusing on working software as the primary measure of project progress makes sense, but you're not quite sure how to ditch many of the metrics you may be currently tracking, I'll leave you with this analogy that Leslie uses with teams when dealing with this very issue. Think of cleaning out your closet -- you pull everything out and then put back only those things you wish to keep. Everything else is either thrown away or donated to charity. So, start with a "clean slate," put working software at the top of the list, and then add only those additional metrics that are absolutely necessary to ensure project success. Tracking anything else beyond just what's absolutely necessary should be considered a waste.
We would be interested in your thoughts and comments, especially if you've already made the transition to tracking working software as your primary metric. Thank you!
Scott and I are thrilled to tell you about our new book, Being Agile: Eleven Breakthrough Techniques to Keep You From Waterfalling Backward. You can click on the link below to get access to the book.
Our goal for the book is to help teams that have adopted the standard practices and call themselves agile, but have not gained the real benefits that agile promises. We believe that adopting agile requires a change in thinking, not just adopting a set of practices. When teams don't get immediate results, they often fall back into old habits that will reduce their chances of success. Our book offers eleven breakthrough techniques to help overcome reflexive thinking so that you will start to respond with an agile mindset that prioritizes delivering customer value, ensuring high quality, and continuously improving.
The book consists of eleven chapters that focus on some of the critical topics for agile success. In each chapter we discuss the basic principles that are the foundation for the topic, followed by the corresponding practices. Each chapter closes with a breakthrough technique to help teams achieve the real benefits of the aspect of agile covered in the chapter. It was not our goal for the book to be an agile primer, or even to cover all of agile thoroughly, as there are much better references for that. Our goal was to focus on principles critical to agile adoption that teams struggle with and, for each area, give teams the means to succeed by using an agile mindset.
Scott and I have both had our breakthrough moments, and so have the teams that we have led and coached. We share experiences from those teams to provide a real-world context for the ideas. Most of what we have learned came from working with numerous teams, and it is our hope that we continue the conversation with you. Please share your "A-ha!" moments on our blog so that we can all continue to learn together.
Being Agile (IBM Press)
When teams complain that they “do not have enough people,” that may be a sign that the team composition has not resulted in an agile whole team. Agile’s mandate for working software requires that teams be composed of the right set of people to deliver on this goal. That generally means developers, testers, writers, user experience experts, and so forth. It also means that there are team members from each of the component technology disciplines so that, as a team, they can deliver end-to-end functionality.
Large teams moving to agile often have to form several smaller agile teams, with each team delivering completed user stories. Teams that have traditionally been organized by technology or by discipline are reluctant to move to a cross-functional structure because there may not be enough technical and domain knowledge in each of the resulting teams. What they quickly discover is that the majority of the knowledge and skills in the team resides with very few people.
The solution to this problem is to cross-train the team members so that they are better able to deliver product capability with the right set of skills and domain knowledge. Many teams resist this solution because it requires upfront costs and will slow the team down for a while. In fact, many teams are so convinced this will not work that they make no effort to cross-train and end up with teams that cannot deliver working software. This is what leads them to believe that they “don’t have enough people.”
It is a very hard sell, but cross-training your teams so that they can deliver a greater range of product functionality is well worth the upfront costs. Software engineers usually have a product area in which they prefer to work or a technology that they favor. But good software engineers have the skills to learn new technologies as required. And it benefits a team when engineers expand their skills: not only are they able to help complete additional work, but their expertise and perspectives are beneficial when building new software and solving problems.
It is important to let management know there will be a short-term slowdown (maybe as much as several months) when cross-training team members. But when teams are willing to cross-learn in a results-oriented way, they become more productive and tend to move faster.
There are ways to cross-train and continue to move forward at the same time. Pair programming is a great way to introduce a developer to a new technology or a new area of the code. Developers can also try to fix defects in an area outside of their domain, and then request a code review of the completed fix before it gets checked in. Figuring out how to fix a problem really forces one to learn how the code works. Engineers can also test and write test code outside their current domains so that they learn additional areas of the product. No doubt there are many other ways you can come up with, and we would love to hear your ideas. But teams that are not willing to take this step will struggle to make agile stick, and will continue to complain that they “do not have enough people.”
Just a quick plug for the upcoming IBM Rational Innovate Software Engineering Conference next week in Orlando at the Disney Swan & Dolphin Resort. I'll be leading the Agile Track Kickoff Session on Monday, 2 June, at 11am, and following that up with a book signing from 12 noon to 1pm at the book table. It promises to be a great conference again this year! Please stop by and say "Hi!" if you'll be attending!
Main conference website: http://www-01.ibm.com/software/rational/innovate/
Agile Track Kickoff Session: https://www-950.ibm.com/events/global/innovate/agenda/preview.html?sessionid=DAG-2459
Podcast (12 minutes) covering the Agile Track: http://public.dhe.ibm.com/software/info/television/swtv/Rational_Software/podcasts/rttu/gist_moore_innovate_scaleagile_enterprise-2014-0520.mp3
Thanks -- and hope to see you there!!
Short feedback loops are one of the keys to the success of agile because they enable teams to learn fast and adjust. This technique is leveraged in numerous practices used by agile teams, including the daily standup, sprint demos, code check-in, build and validation cycles, pair programming, and continuous delivery. One of the key benefits that convinced me to try agile was its claim to deliver greater value with higher quality. After I had experimented with agile for a while, I started to understand just how critical short feedback loops are to achieving these goals.
Short feedback loops enable you to learn fast so that you can adjust while the costs are low. It has been well understood in software development that the cost to fix a defect increases, and in many cases increases dramatically, the longer you wait to fix it. Some of the reasons are that more code may have been added, making the defect more complex to fix; the new code requires additional testing and fixing; and by the time you get to the fix, you may have lost familiarity with the code in question. To combat this issue, it is critical to find defects soon after the code is checked in. One short feedback loop used to combat this is to automatically test the code through a series of progressive test gates. Additionally, if the code is checked in frequently (another short feedback loop), developers get quick feedback on quality.
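As a rough sketch of the progressive-gates idea (the gate names are invented, and real gates would invoke actual test suites), the point is simply to order the gates fastest-first and stop at the first failure so feedback arrives as quickly as possible:

```python
# Illustrative sketch of progressive test gates, fastest first. The gate
# names and pass/fail stubs below are hypothetical placeholders.

gates = [
    ("unit tests",        lambda: True),  # seconds
    ("integration tests", lambda: True),  # minutes
    ("system tests",      lambda: True),  # hours
]

def validate_checkin(gates):
    """Run each gate in order; a failure stops promotion immediately."""
    for name, run in gates:
        if not run():
            print(f"Stopped at gate: {name} -- fix before promoting further")
            return False
    print("All gates passed; code promoted")
    return True

validate_checkin(gates)
```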
Another manifestation of short feedback loops is the daily standup, which enables teams to identify critical blockers and get help fixing them before the next meeting. Someone has a problem; someone jumps in to help fix it. Shorten the time to fix a problem; reduce the cost.
To ensure that agile teams are meeting the needs of their customers, they demo their new work at the end of each sprint to get timely feedback. This version of a short feedback loop is fairly well known to agile teams, but it may be surprising how few teams actually do these demos with their end users. These demos give the product users ongoing updates on how the new code is progressing and the opportunity to provide feedback. This kind of feedback is like bug fixing: the sooner it is applied, the more cost-effective it is. Going a step further, releasing functionality frequently enables users to get traction and provide feedback before the functionality is complete. Customers have to be willing to experiment and respond, but the win for them is that they get an active voice in getting the value they require.
As teams continue their agile adoption, it is useful to remember why we adopt various techniques to make sure that we get the value promised. To deliver higher value with better quality we use short feedback loops so that we learn fast and adjust.
When a project is big enough to have two or more teams working on the same project, we recommend that each team have its own backlog of User Stories and each team size its own stories with Story Points. Of course, this inevitably raises the idea of having each team conform to some “story point standard” so that project progress and team progress can be tracked "uniformly." Doing this more or less means that teams have to normalize story points across all the teams so that, for example, a 5-point story for one team is the same as a 5-point story for every other team on the project; an 8-point story is the same for every team, etc., etc., etc.
All I have to say to this idea is: “Ugggghhh… What a colossal waste of time!" I’ve seen teams try to do this, and the effort put into trying to ensure uniformity of sizings across multiple teams is enormous.
I’m happy to report that there’s an easier way that still allows for overall project progress to be easily seen as well as individual team progress. In essence, instead of normalizing based on story points we suggest “normalizing” based on the calendar. Let me explain: let’s assume, for example, there are two teams on a project, each having its own backlog that has been sized with story points. Team A has 2000 story points on its backlog and Team B has 500. In this case you would have three burndown charts to track progress: one that’s combined (showing 2500 total points), a second one for just Team A (showing 2000 total points), and a third one for just Team B (showing just 500 points). In order to determine relative progress for Team A and Team B, you base it on the number of points completed vs. the number of iterations... Thus, if you're half-way through the project (e.g., 5 of 10 planned iterations are complete), you would expect Team A to be ~1000 points complete (50% of its total) and Team B to be ~250 points complete (50% of its total).
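Here is that calculation as a small sketch, using the made-up numbers above plus some hypothetical "points done" figures:

```python
# Sketch of "normalizing by the calendar": 5 of 10 planned iterations are
# complete, so every team should be ~50% through its own backlog,
# whatever its point scale. The "points done" numbers are invented.
iterations_done, iterations_planned = 5, 10
calendar_pct = iterations_done / iterations_planned

teams = {"Team A": (2000, 980), "Team B": (500, 260)}  # (total, done)

for name, (total, done) in teams.items():
    expected = total * calendar_pct
    print(f"{name}: expected ~{expected:.0f} pts done, actual {done} "
          f"({done / total:.0%} of backlog vs {calendar_pct:.0%} of schedule)")
```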
Where things get to be a little more complicated is when you have numerous teams working on a project (for some enterprise application projects we’ve worked with, we’ve seen over 40 teams working on the same project). With two teams, it’s fairly easy to bounce back and forth between Team A’s release burndown chart and Team B’s release burndown chart. When you have more than that, you can create a very simple chart (using any spreadsheet tool) that shows “percent calendar complete” and “percent story points complete” for each team. Following are two examples that we’ve used (they show the same data, just in different formats):
You’ll notice in the first chart that the schedule is 60% complete (the black bars) and then each team’s relative progress is shown. In the second chart, a stacked bar is used to show overall team progress. In this made-up example, Team 1 is slightly ahead, Team 5 is slightly behind, and the other teams are perhaps struggling...
I’m sure you can think of other, different ways of showing the data, so do whatever’s most helpful. The key is that this approach is so much easier and simpler than trying to normalize the sizing of story points across multiple teams.
One of the things that has been extremely popular in software organizations for a long time is to count the number of defects that have been found via testing. I propose that teams no longer do this for two reasons.
First, one of the lines of thinking behind counting defects is that it is somehow a measure of quality. If we were running an assembly line in a manufacturing plant, and the assembly line didn’t change from one week to the next, but the number of defects increased, we would know there was a quality problem somewhere – something on the assembly line was likely broken. Despite some similarities, software development is not an assembly line – unlike an assembly line, every line of code that is written and tested has never been written or tested before, so defect counts simply don’t correlate with quality the way they do on an assembly line. If a team found 25 defects last week, and 50 this week, what does that tell us? Is the code quality really getting worse, or did the team just do a lot more testing? If the latter, did the tests that were executed reflect real-world usage, or were they all edge cases? Did the team deploy the code into a new environment that had never been used before? I’m sure you can come up with many more scenarios… Anyway, the possible reasons for the higher number of defects this week are almost boundless, and the time spent trying to determine why the variation occurred would be better used to: work more with customers to better understand their usage patterns and particular needs; create more needed functionality; improve their processes further; cross-train team members; adopt a new tool; etc., etc., etc. All of which leads to my second point…
Agile software development has its foundations in Lean Thinking, and one of the Lean principles is to eliminate waste. Does counting defects, and spending time trying to figure out what causes variations in the numbers of defects from one time period to the next, contribute directly to the success of the project? If not, then the time spent doing so should be viewed as a waste – time that could be better spent doing more important work.
In conclusion, I’d like to leave you with two thoughts regarding defects. First, when defects are found, they should be fixed immediately – period. Don’t allow a backlog of defects to accrue. If it makes sense to do some root-cause analysis to determine why a particular defect occurred, then that’s fine – do so when it makes sense, but it likely does NOT make sense for the vast majority of the defects found. The second point is that a mature, Agile team should be able to have “difficult” conversations when needed: “Hey Scott, I’ve noticed that I’m finding a lot of defects in your code this week – is there anything that’s distracting you, or anything I can help you with?” Instead of taking umbrage at such comments, I should be thankful that my team is willing to raise the issue and have the discussion.
So, the next time you’re asked to track the number of defects found, ask “Why…?” The answer will likely shed light on ways to eliminate waste and help overcome the “That’s the way we’ve always done it!” mentality, as well as foster a relentless, continuous improvement mindset.
As always, thoughts, comments, and questions are most welcome! Thank you!
“If you’re not better this month than you were last month, don’t tell me you’re Agile!”
I forget exactly where I heard that quote, but it came from someone well known in the industry during a webcast I was listening to several years ago. It caught my attention and helped me start to focus more on why Continuous Improvement is considered a foundational principle of agile. My research led me back to the Agile Manifesto, specifically to its principles, one of which reads:
“At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” [emphasis added]
No doubt you’ve heard of, and hopefully engaged in, regular reflections (sometimes called “retrospectives”). If you haven’t done so, or have tried them and not gotten much value, then I’d like to provide a few ways to make them valuable and to help you and your organization adopt a Continuous Improvement mindset. And even if you do regularly hold reflections and find them useful, the hope is that the following is something you can add to your “bag of tricks” to make them even more valuable and compelling.
First, we recommend that reflections occur at the end of every iteration. These meetings are team meetings and shouldn’t take more than an hour (a half-hour seems to be fairly typical in our circles).
Next, the meetings are not meant to be “gripe sessions.” Yes, sometimes a little bit of griping can be cathartic, but the goal of a reflection meeting is more than just walking away happy that you had the chance to vent your spleen… The goal is to figure out a way to improve between now and the next reflection meeting.
The meeting facilitator should simply go around the table and ask everyone to list the pain points they’re facing. The facilitator keeps a list of all the pain points brought up and, after the last person has spoken, shows the full list – ranked from those with the highest number of mentions down to those with the fewest. The item at the top of the list is what the team focuses on, since that is the one having the biggest negative impact on the team.
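For what it's worth, the tallying is simple enough to sketch in a few lines (the pain points are invented examples):

```python
from collections import Counter

# Hypothetical pain points, as gathered going around the table.
mentions = [
    "slow builds", "flaky tests", "slow builds",
    "unclear stories", "slow builds", "flaky tests",
]

# Rank from most to fewest mentions; the top item is the team's focus.
ranked = Counter(mentions).most_common()
for pain, count in ranked:
    print(f"{count}x  {pain}")

print(f"\nImprovement focus for next iteration: {ranked[0][0]}")
```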
Now that the biggest pain point for the team has been identified, the team brainstorms on some specific actions to take to improve in that one area and includes those actions as part of the iteration plan for the next iteration.
At the end of the next iteration, during the reflection meeting, the team determines whether that pain point has been resolved. If not, the team brainstorms on some additional actions to take in the next iteration. The key here is to NOT start trying to solve another pain point until the first one has been solved to everyone’s satisfaction. Once the biggest pain point has been solved, only then should the team move to the next biggest pain point on the list. Lather, rinse, repeat every iteration – and soon Continuous Improvement will be the norm…
I’ll have some additional suggestions in the next post. In the meantime, please comment on your experiences with reflections, as well as on any helpful practices you’ve discovered. Thanks!