Modified by ScottWill
You might recall that I ended the last blog post with the following:
One thing at a time, one thing at a time, one thing at a time…
Why are we so convinced that this is the best way to work? Hopefully my previous post made it clear that the flexibility teams have to respond to changing circumstances by working on one thing at a time is an excellent benefit. There are additional, significant benefits to working on one thing at a time – two of which I explore in this post.
How do you know that what you’re creating will do well in the marketplace, meet customers’ needs, and generally provide high customer satisfaction? In the old, waterfall days we used to have “beta programs” where customers would be granted access to some pre-release version of the software that they could use in-house and then provide feedback. The biggest problems with beta programs revolve around the lateness of the feedback received from customers. Typically, if customers didn’t like something, or wanted something different, it was too late to make any major changes – the team was too busy fixing the backlog of defects, and the committed ship-date was often very close when the beta program started. Customers were often told, “We’ll get to that in the next release.”
However, by doing one thing at a time, your team has a much greater opportunity to understand, with a fairly high degree of certainty, that what you’re creating will meet customers’ needs. And the reason this is so is that you can involve your customers in your project throughout the entire lifecycle. When the team completes a small feature, or completes even a small part of a larger feature, that feature can be demo’d to customers. It’s common for agile teams to be demo-ing some completed functionality to customers at the end of their very first iteration. And customers love it! It gives the customers the ability to “put their fingerprints” on the product – they get to see what’s being built, provide immediate feedback, and then see the development organization respond quickly to that feedback. In other words, customers are able to provide real-time, on-going feedback so that when the product finally does ship, customers will be very comfortable in knowing that the product provides what is actually needed.
One other benefit is that customers can actually request that the product be made available earlier than planned. Say, for example, a team plans on three major features for its next release. By working on one feature at a time, it’s quite possible that customers could ask that the product be released as soon as the first two features are complete – or even just the first one! If the feature is needed, and customers can see how they can immediately make use of it, and the team has actually completed the feature, then releasing just the very first feature (instead of waiting until all three planned features are complete) may make the most business sense. But the ability to even have such a business discussion is only possible if one feature at a time is being worked on. If all three features are started in parallel, then the team has created inflexibility since nothing can ship until all three features are completed (yes, dark launches address this problem, but the additional overhead of ensuring what’s “dark” doesn’t interfere with what’s been completed must be considered).
So, to sum up, how do you know? You know because your customers are providing you real-time, on-going feedback throughout the project to help ensure that what you deliver is what they need. It’s a win-win situation for everyone!
Years ago, waterfall projects would typically shun requests for new features or capabilities being added to a project that was underway. After all, project success was typically measured by conformance to “the plan” – and a new request was not part of the plan. These waterfall teams would complete the opening sentence something like this, “When we get a new feature request, we put it on our list of features for consideration in the next release.” Thus, an opportunity to respond to changing conditions, changing customer needs and expectations, or changing competitive situations in a timely manner was lost.
Other waterfall projects tried to add the new feature along with the work that was originally committed in “the plan,” but that often spelled disaster since teams were typically already working feverishly just to meet the original commitments. In order to add something else to an already booked plan, there were very few options – mainly overtime and/or cutting corners – neither of which is a good thing. Oh, sure, there was always a planned “buffer” added into “the plan” to account for such contingencies, but when have you ever seen it work out the way it was intended? The skeptic in me would guess rarely, if ever… These teams would complete the opening sentence something like this, “When we get a new feature request, we add it to the plan and start mandating extensive overtime.” Pretty soon, employees are burnt out, frustrated at not having a life, and begin to actively seek employment opportunities elsewhere.
Mature agile teams complete the sentence something like this: “When we get a new feature request, we see it as an opportunity to get ahead of the competition as well as make our customers happy and successful.” Typically new requests surface because of changes in the competitive landscape, changing customer needs and expectations, new market opportunities, or even some combination of these and other factors. And when these new requests arrive at the door of an agile team, there’s no drama at all, no worries about overtime, no cutting corners, no consternation about not meeting “the plan,” etc. Agile teams have great flexibility to handle new requests because they are always working on just one thing at a time – not numerous features in parallel. Waterfall teams typically worked on lots of features in parallel since all the features were committed up-front and since teams typically had to show progress on all the committed features from the very beginning of the project. Thus, if a new request arrived, there was no flexibility to drop a less important feature from the plan since everything was already underway.
Agile teams, however, work on one thing at a time. And the way they do this is by having the features rank-ordered from the most important to the least important. If a new request arrives, the team finishes working on the feature currently in progress – it then has complete freedom to begin working on the brand new request. If a particular ship-date has to be met, then the team drops the least important feature from the list and replaces it with the new request. This way, the team is always working on the most important feature and has incredible flexibility to handle changing circumstances. After all, conformance to “the plan” is not the right measure of success – meeting customer needs and adapting to changing market conditions is a much better measure of success.
To summarize, working on one thing at a time (always the highest-ranked, most important item) and getting it done before starting on something new is the most efficient and effective way to work. It also allows for the greatest flexibility – especially in markets where changes can occur lightning fast.
One thing at a time, one thing at a time, one thing at a time...
When a project is big enough to have two or more teams working on the same project, we recommend that each team have its own backlog of User Stories and each team size its own stories with Story Points. Of course, this inevitably raises the idea of having each team conform to some “story point standard” so that project progress and team progress can be tracked "uniformly." Doing this more or less means that teams have to normalize story points across all the teams so that, for example, a 5-point story for one team is the same as a 5-point story for every other team on the project; an 8-point story is the same for every team, etc., etc., etc.
All I have to say to this idea is: “Ugggghhh… What a colossal waste of time!” I’ve seen teams try to do this, and the effort put into trying to ensure uniformity of sizings across multiple teams is enormous.
I’m happy to report that there’s an easier way that still allows for overall project progress to be easily seen as well as individual team progress. In essence, instead of normalizing based on story points we suggest “normalizing” based on the calendar. Let me explain: let’s assume, for example, there are two teams on a project, each having its own backlog that has been sized with story points. Team A has 2000 story points on its backlog and Team B has 500. In this case you would have three burndown charts to track progress: one that’s combined (showing 2500 total points), a second one for just Team A (showing 2000 total points), and a third one for just Team B (showing just 500 points). In order to determine relative progress for Team A and Team B, you base it on the number of points completed vs. the number of iterations... Thus, if you're half-way through the project (e.g., 5 of 10 planned iterations are complete), you would expect Team A to be ~1000 points complete (50% of its total) and Team B to be ~250 points complete (50% of its total).
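The calendar-based check above is simple arithmetic, and can be sketched in a few lines of Python (the team names and point totals are just the made-up figures from the example):

```python
# "Normalize" on the calendar, not on story-point scales: each team is expected
# to burn down its own backlog in proportion to iterations elapsed.
def expected_points(total_points, iterations_done, iterations_planned):
    """Expected completed points if the team burns down evenly over the release."""
    return total_points * iterations_done / iterations_planned

backlogs = {"Team A": 2000, "Team B": 500}
done, planned = 5, 10  # half-way through the release

for team, total in backlogs.items():
    print(f"{team}: expect ~{expected_points(total, done, planned):.0f} "
          f"of {total} points complete")
```

Note that the two teams’ point scales never have to agree; each team is compared only against its own total.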
Where things get to be a little more complicated is when you have numerous teams working on a project (for some enterprise application projects we’ve worked with, we’ve seen over 40 teams working on the same project). With two teams, it’s fairly easy to bounce back and forth between Team A’s release burndown chart and Team B’s release burndown chart. When you have more than that, you can create a very simple chart (using any spreadsheet tool) that shows “percent calendar complete” and “percent story points complete” for each team. Following are two examples that we’ve used (they show the same data, just in different formats):
You’ll notice in the first chart that the schedule is 60% complete (the black bars) and then each team’s relative progress is shown. In the second chart, a stacked bar is used to show overall team progress. In this made-up example, Team 1 is slightly ahead, Team 5 is slightly behind, and the other teams are perhaps struggling...
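A table of the kind described can be produced with a short spreadsheet formula or script. Here is a minimal Python sketch; all the per-team figures below are invented for illustration:

```python
# Percent-calendar-complete vs. percent-story-points-complete, one row per team.
# Replace these invented (total, completed) point figures with your own data.
teams = {
    "Team 1": (800, 520),
    "Team 2": (600, 350),
    "Team 3": (450, 240),
    "Team 4": (700, 380),
    "Team 5": (300, 160),
}
calendar_pct = 60  # e.g., iteration 6 of 10 is complete

print(f"{'Team':<8}{'Calendar %':>12}{'Points %':>10}")
for name, (total, done) in teams.items():
    print(f"{name:<8}{calendar_pct:>12}{100 * done / total:>10.0f}")
```

Any team whose points-complete percentage trails the calendar percentage by a wide margin warrants a conversation.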
I’m sure you can think of other, different ways of showing the data, so do whatever’s most helpful. The key is that this approach is so much easier and simpler than trying to normalize the sizing of story points across multiple teams.
First of all we would like to wish everyone a Happy New Year! We trust that this year will be full of good challenges and significant achievements for each of you!
Regarding metrics, if there's one thing that engineers love, it's numbers... And if there's one thing that managers love, it's status. Put the two together and oftentimes you have teams that track tons of metrics across all facets of their projects.
While it may give the impression of being really thorough, and providing all sorts of deep insights, if you find yourself in an organization that tracks lots and lots of metrics, then I would encourage you to take a step back and limit the focus to just what's really important.
One of the principles of the Agile Manifesto reads, "Working software is the primary measure of progress." If you stop and think about it, that principle makes perfect sense. Customers aren't going to buy untested code and they're not going to be too happy if they buy something that falls down broken all the time because major defects haven't been fixed. All other metrics should pale in comparison to the one metric of having working software. Putting focus on this one metric will drive many of the benefits that agile promises -- three of which I'll touch on here.
First, focusing primarily on having working software provides a shared, common goal for the organization. This means that no one on the team gets any credit for his or her individual efforts which, in turn, provides plenty of motivation for helping each other out so that the team can take credit for having working software. A lot of the old "us vs. them" mentality of waterfall days is eliminated when focus is placed on working software as the primary measure of progress.
Next, having the focus on working software means that teams will actually achieve working software much earlier in a project than was typical with waterfall projects. Having working software provides plenty of opportunities to involve customers earlier in the project since customers can actually see demos of the functionality while it is being implemented and can even download it and test-drive it in their own environments. Getting customer feedback early is a huge benefit, especially if customers tell you you're going down the wrong path since you'll be able to make mid-course corrections prior to releasing the product.
Finally, having working software enables teams to move into the DevOps space where automation takes on an even more significant role, where releases are more frequent (up to multiple times a day for some Software-as-a-Service offerings), and where opportunities for doing things like "A/B" testing can be pursued. None of this is possible without having working software.
To conclude, if you agree that focusing on working software as the primary measure of project progress makes sense, but you're not quite sure how to ditch many of the metrics you may be currently tracking, I'll leave you with this analogy that Leslie uses with teams when dealing with this very issue. Think of cleaning out your closet -- you pull everything out and then put back only those things you wish to keep. Everything else is either thrown away or donated to charity. So, start with a "clean slate," put working software at the top of the list, and then add only those additional metrics that are absolutely necessary to ensure project success. Tracking anything else beyond just what's absolutely necessary should be considered a waste.
We would be interested in your thoughts and comments, especially if you've already made the transition to tracking working software as your primary metric. Thank you!
Velocity is a great tool for agile teams but it is easy to "over-engineer" the concept -- or even misuse it. Velocity is a mechanism to understand the pace of a team that uses story points to size their user stories.
First, a quick comment regarding story points – Leslie and I advocate the usage of “unit-less” sizings of user stories, meaning that story points are not associated with any measure of time. Story points are simply an assignment of size relative to other user stories that a team works on. Thus, a 3-point story should take approximately 3 times the amount of effort for the team as a 1-point story -- irrespective of how long it takes to complete either one. Note that story point sizings are just relative sizings and they work no matter how long it takes to actually complete a given story of a given size.
Back to velocity… Velocity, simply put, is the term used to describe a team’s observed capacity for completing work. Velocity is calculated by taking the average number of story points a team completes each iteration. For example, if a team completes 8 story points’ worth of user stories in its first iteration, 12 in the second, and 10 in its third, then the team’s velocity is 10 (the average of 8, 12, and 10). Note that several iterations had to be completed first in order for the team to determine its velocity (we advocate a minimum of three iterations before starting to use velocity for project planning and tracking).
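The calculation is a plain average, as this one-liner shows (using the 8, 12, 10 figures from the example above):

```python
# Velocity = average story points completed per iteration.
def velocity(points_per_iteration):
    """Average completed points; only meaningful after several iterations."""
    return sum(points_per_iteration) / len(points_per_iteration)

print(velocity([8, 12, 10]))  # → 10.0
```

Some teams prefer a rolling average over the last few iterations rather than the whole history; either way, it is the average, not any single iteration's total, that matters.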
Even though the concept is simple, I’ve seen teams misuse velocity and, as a consequence, reduce its value. I’ll address three of the most common misuses in the following paragraphs:
First, velocity is incorrectly used as a "report-card" each iteration. Assume the example I used above where the team has a velocity of 10. Let’s say that in their next iteration they complete 6 story points. In organizations where velocity is not understood, the team will likely be called on the carpet and asked to explain what happened (the thought being that their productivity dropped by 40%!). Given that the assignment of story points is an estimation technique and not something to be measured to 26 digits of precision, there should be an understanding that the number of points a team completes will vary from iteration to iteration. Perhaps in this iteration the team is working on a story that they underestimated. Big deal… Chances are in a subsequent iteration they’ll work on a story that was overestimated. Over the course of several iterations, the team's velocity will average out. Please do not use the number of story points a team completes in any given iteration as a report-card. Use the average and save everyone a lot of headaches from micromanagement.
Secondly, and closely related to the first, is teams erroneously claiming "partial credit" for a story. When teams see the number of story points completed in a given iteration used as a report-card, they tend to start claiming partial credit for an incomplete story. In this scenario a team might say, “We got the code done, so that’s worth 2 points out of this 5-point story.” Even though the testing, defect fixing, automation, etc., haven’t yet been completed, they're trying to boost the total number of points claimed in the iteration.

Unfortunately, this approach defeats a number of the benefits of adopting agile. First, it breaks down team synergy by reverting back to an us-vs.-them mentality. When the whole team gets “credit” for completing a user story only when it’s actually complete, the team is motivated to help each other out. If partial credit is allowed, it can pit parts of the team against other parts. The second problem with this approach is that it violates the agile principle of working software as the measure of progress. Code that has been written, but is untested, is not “working software.” Code that has been written and tested, but where defects haven’t been fixed, is not “working software.” Working software should be the focus for the team…

And let me close this section with another simple example: let’s say a team is targeting completion of two 5-point stories this iteration (their velocity has been ~10 points an iteration, so targeting 10 points’ worth of stories makes good sense). Let’s say they complete the first story and are almost complete with the second. If the team tries to take partial credit, they might claim 4 of the 5 points for the second story and, thus, their story-point total for this iteration would be 9.
Next iteration, they’d plan to finish off the last bit of the second story and then most likely target an additional 10 points' worth of new stories (since they have been averaging 10 story points so far and they almost completed 10 points last iteration). Let’s say they complete all that work in the next iteration for a total of 11 points. What’s their velocity for the two iterations? It’s 10. Now let’s take the “no partial credit” approach. The team would get 5 story points for its first iteration and 15 for its second. What’s the velocity for the two iterations calculated this way? Right, it’s also 10. This is why focusing on velocity as an average is so liberating for teams – they’re not constantly under the microscope and don’t feel compelled to play games with partial credit.
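The arithmetic behind that last point is worth seeing explicitly: over the two iterations, the partial-credit and no-partial-credit accounting produce exactly the same average.

```python
def velocity(points_per_iteration):
    """Average story points completed per iteration."""
    return sum(points_per_iteration) / len(points_per_iteration)

partial_credit = [9, 11]     # claim 4 of 5 points early, finish the story next iteration
no_partial_credit = [5, 15]  # a story counts only when it is actually done

print(velocity(partial_credit))     # → 10.0
print(velocity(no_partial_credit))  # → 10.0 -- same average either way
```

Since the averages converge, partial credit buys nothing except a distorted picture of what is actually done.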
The last abuse of story points and velocity that I’ve seen is that teams get their velocity "dictated" to them. They’re told to achieve some level of velocity before even knowing what their velocity is. Remember, as mentioned at the opening, velocity reflects the team’s observed capacity to complete work each iteration. Typically, when teams are dictated a velocity, it’s in “fixed content/fixed date” projects (which are anathema in agile). Teams wind up working tremendous amounts of overtime to meet their dictated velocity (which is also anathema to agile).
Finally, there are legitimate ways to go about increasing velocity, as well as illegitimate ways. The most prevalent illegitimate way is forcing a team to do overtime – this is not sustainable and will only cause problems down the road. The legitimate ways to increase velocity involve pursuing what I call “enabling” practices: increasing test automation, improving builds by adopting continuous integration, adopting test-driven development and pair-programming, automating provisioning of test environments, automating deployments, and many others. Yes, putting effort into maturing these practices may mean you slow down the amount of functionality you produce in the short run, but the long-term benefits (such as sustained velocity improvements) will be well worth the initial investment. Velocity is simple - keep it simple. Increasing velocity is where the hard work is.