Modified on by ScottWill
This week I thought I'd re-post an article that appeared earlier on DeveloperWorks.
Early on, when I first started coaching teams on their transition to Agile, it was quite common for teams to come to me and tell me that they hadn't completed any User Stories by the end of an iteration. Initially, teams wanted to use this as a reason to increase the length of their iterations (I recommend two-week iterations – and teams always seem to want longer iterations). What they were failing to realize is that there were fundamental reasons why they weren't completing stories, and iteration length by and large had nothing to do with it. Let me explain....
One of the first questions I'd ask a team in this situation was whether they were assigning User Stories to team members with the idea that each member was responsible for completing his or her User Story independently of others on the team. Almost invariably, the answer was “Yes, but how did you know that...?” What was happening in these situations was that each member of the team would work on his own User Story and, at the end of an iteration, the team would have six, or seven, or eight partially-completed User Stories. In this case, nothing was completely “Done!” by the end of the iteration – just a bunch of partially-completed stories. Since each team member was focused on completing his own story, there was no “incentive” for anyone to jump in and help another team member get a story done.
In this situation, we would remind the team about the importance of the Agile practice of “whole teams.” Part of this practice recommends that a team should try to work together on one User Story at a time, complete it, and then as a team move on to the next story on the backlog. This way, the team is constantly completing User Stories and will have, at most, one unfinished User Story at the end of an iteration.
In order to pull this off, there's another practice that we recommend teams adopt – and that is the creation of small implementation tasks for each User Story. At the beginning of an iteration, the whole team goes through the process of creating the various tasks needed to complete a story (development tasks, test tasks, user documentation tasks, automation tasks, code review tasks, etc.). These tasks should be small (on the order of 4 to 8 hours, and no more than 16), which then allows for a lot of “parallelization” of the work. The concept is, for example, instead of having one developer work on a coding task for five days, you could have five developers work in parallel on five one-day development tasks and, at the end of a single day, have all the coding completed that otherwise wouldn't have been complete until the end of five days. Additionally, there should be corresponding documentation, test, automation, and code review tasks for each of the development tasks. This way, a team achieves a very tight integration between dev, doc, and QA throughout each User Story (not to mention each iteration), and the team also always has working code as the team completes one User Story before moving on to the next. By the way, this approach is not something that comes naturally to a team (especially if they're just beginning the transition to Agile), but give the team time and encouragement to try it out – the effort is worth it.
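A toy calculation makes the parallelization point concrete (the one-day task durations here are invented for the example):

```python
# Toy illustration of splitting one large task into small parallel ones.
# Durations are in days and are hypothetical.
serial_task = 5                    # one developer, one 5-day coding task
parallel_tasks = [1, 1, 1, 1, 1]   # five 1-day tasks, one per developer

serial_elapsed = serial_task            # 5 days of calendar time
parallel_elapsed = max(parallel_tasks)  # tasks run concurrently: 1 day

print(serial_elapsed, parallel_elapsed)  # 5 1
```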
A few closing comments: obviously, piling everyone on a scrum team onto one story won't always work, especially with very small User Stories. The goal – even with a very small User Story – should be to put two or three engineers on that story while the rest of the team works on an additional story. Keep the number of stories being worked on at the same time by the team to the smallest number possible. This should go a long way to helping you consistently finish User Stories at the end of every iteration.
We welcome any comments and suggestions you may have -- thanks!
Years ago, waterfall projects would typically shun requests for new features or capabilities being added to a project that was underway. After all, project success was typically measured by conformance to “the plan” – and a new request was not part of the plan. These waterfall teams would complete the opening sentence something like this, “When we get a new feature request, we put it on our list of features for consideration in the next release.” Thus, an opportunity to respond to changing conditions, changing customer needs and expectations, or changing competitive situations in a timely manner was lost.
Other waterfall projects tried to add the new feature alongside the work that was originally committed in “the plan,” but that would often spell disaster since teams were typically already working feverishly just to meet the original commitments. In order to add something else to an already booked plan, there were very few options – mainly overtime and/or cutting corners – neither of which is a good thing. Oh, sure, there was always a planned “buffer” added into “the plan” to account for such contingencies, but when have you ever seen it work out the way it was intended? The skeptic in me would guess rarely, if ever… These teams would complete the opening sentence something like this, “When we get a new feature request, we add it to the plan and start mandating extensive overtime.” Pretty soon, employees are burnt out, frustrated at not having a life, and begin to actively seek employment opportunities elsewhere.
Mature agile teams complete the sentence something like this: “When we get a new feature request, we see it as an opportunity to get ahead of the competition as well as make our customers happy and successful.” Typically new requests surface because of changes in the competitive landscape, changing customer needs and expectations, new market opportunities, or even some combination of these and other factors. And when these new requests arrive at the door of an agile team, there’s no drama at all, no worries about overtime, no cutting corners, no consternation about not meeting “the plan,” etc. Agile teams have great flexibility to handle new requests because they are always working on just one thing at a time – not numerous features in parallel. Waterfall teams typically worked on lots of features in parallel since all the features were committed up-front and since teams typically had to show progress on all the committed features from the very beginning of the project. Thus, if a new request arrived, there was no flexibility to drop a less important feature from the plan since everything was already underway.
Agile teams, however, work on one thing at a time. And the way they do this is by having the features rank-ordered from the most important to the least important. If a new request arrives, the team finishes working on the feature currently in progress – it then has complete freedom to begin working on the brand new request. If a particular ship-date has to be met, then the team drops the least important feature from the list and replaces it with the new request. This way, the team is always working on the most important feature and has incredible flexibility to handle changing circumstances. After all, conformance to “the plan” is not the right measure of success – meeting customer needs and adapting to changing market conditions is a much better measure of success.
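The backlog discipline described above can be sketched in a few lines (the feature names and the `accept_request` helper are made up for illustration):

```python
# Sketch of a rank-ordered backlog: a new request is slotted in at its
# rank, and when the ship date is fixed, the least important feature
# drops off the bottom to make room.
backlog = ["feature A", "feature B", "feature C", "feature D"]  # most to least important

def accept_request(backlog, request, rank, fixed_ship_date=True):
    if fixed_ship_date:
        backlog.pop()              # drop the least important feature
    backlog.insert(rank, request)  # slot the new request in at its rank
    return backlog

accept_request(backlog, "new request", 1)
print(backlog)  # ['feature A', 'new request', 'feature B', 'feature C']
```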
To summarize, working on one thing at a time (always the highest-ranked, most important item) and getting it done before starting on something new is the most efficient and effective way to work. It also allows for the greatest flexibility – especially in markets where changes can occur lightning fast.
One thing at a time, one thing at a time, one thing at a time...
No doubt you’ve already heard about DevOps. What you may not be aware of is how DevOps builds on Agile principles and practices. For example, Software-as-a-Service (SaaS) projects have adopted DevOps in order to have the ability to frequently update their websites with additional features and capabilities. Some SaaS offerings are updating their production websites multiple times a day – which is a far cry from the once-a-year-release cycle that was rather typical for products built using waterfall.
You might be thinking to yourself, “Multiple times a day…? How does that happen…?” I’ll touch briefly on three essentials of Agile that contribute to this capability. The first is breaking work down into small batches and completing one small batch of work prior to moving to another. You’ll no doubt recognize this as a Lean principle derived from queuing theory. Simply stated, Lean has shown that small batches of work transitioning through a system are far more efficient than an occasional big batch of work. For our purposes, this means that having small amounts of code written, tested, and automated, where any defects have been fixed, and where any necessary user-documentation has been written, is far more efficient and productive than just having a large batch of code written that hasn’t been tested or documented.
The second point is Agile’s emphasis on “working software as the primary measure of progress” (see the 7th Principle of The Agile Manifesto). This ties in directly with the previous point – as small amounts of code are written, the goal is to get that code to production-level status as quickly as possible (“working software” that is “Done!”) so that the new code can be provided to customers with high confidence that it will work as desired, have the right level of quality, and help customers address a particular need.
The last point regards the tremendous emphasis in Agile on automation. With DevOps, the need for automation increases significantly because of the desire to deploy new features and capabilities on such a short timetable. Not only are builds and testing automated, but test environments are automatically provisioned, code that passes the automated test suite is automatically deployed into production and, if a problem occurs anywhere along the way, the process is stopped, the code is automatically rolled back, and appropriate personnel are automatically notified of the problem. The ultimate goal is to make a small code change, “push a button,” and have the new feature or capability automatically rolled out without any further human intervention.
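A minimal sketch of such a pipeline might look like the following; the stage functions are hypothetical stand-ins, not calls into any real CI/CD tool:

```python
# Hypothetical sketch of the "push a button" pipeline described above:
# each stage must pass, and any failure stops the process, rolls the
# change back, and notifies the appropriate people.

def run_build(change):
    print(f"building {change}")
    return True  # pretend the automated build succeeded

def run_tests(change):
    print(f"testing {change}")
    return True  # pretend the automated test suite passed

def deploy(change):
    print(f"deploying {change} to production")
    return True

def rollback(change):
    print(f"rolling back {change}")

def notify(who, message):
    print(f"notify {who}: {message}")

def pipeline(change):
    for stage in (run_build, run_tests, deploy):
        if not stage(change):
            rollback(change)
            notify("on-call", f"{stage.__name__} failed for {change}")
            return False
    return True  # rolled out with no further human intervention

pipeline("feature-123")
```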
No doubt you can tie additional Agile principles and practices to the DevOps initiative, but I thought I’d touch on just these few to show how DevOps is nothing more than a logical next step after Agile – they are not two, independent approaches to building software. As always, please feel free to share your thoughts! We look forward to hearing from you!
Just like the drip, drip, drip of a leaky faucet, we lose productive time by being regularly distracted. Some distractions can't be eliminated, but many can... I recently wrote an article for InformIT that was adapted from one of the chapters in the book that Leslie and I co-authored entitled Being Agile. The article discusses a pervasive form of regular distractions that many people have taken for granted nowadays. I thought I'd pass along a link here:
I hope you enjoy the article and that it provides some food for thought... As always, comments and questions are welcome. Thanks!
I have been a strong advocate over the years of using Story Points to track team velocity. In spite of many of the self-inflicted problems teams face when using Story Points (e.g., trying to initially assign some measure of time to a Story Point, or trying to normalize Story Points across teams, or confusing Story Points and task-hours, etc.), when used correctly, Story Points (and the velocity calculations they enable) make project tracking and projection incredibly easy and relatively accurate as well. (As a side note, check out the following article by Jeff Sutherland: link.)
What I’d like to cover in this posting is that using Story Points is not the only way to track a team’s velocity. For teams that have been writing User Stories for a while, and who have gotten good at breaking their work down into small enough chunks such that one or more stories can be completed (“Done!”) in an iteration, I would like to recommend that they forego the effort of sizing User Stories with Story Points. They’ve already shown that they know how to start with an Epic, and break an Epic down into User Stories that can be completed in an iteration. Thus, they’ve demonstrated that they have a relative amount of uniformity in the way that they come up with their backlog of User Stories – their stories are all in the same range regarding the amount of time and effort required to complete each one of them.
With that in mind, such teams could simply measure velocity based on completed User Stories – the principle is the same as Story Points, but mature teams can forego the effort required to size each User Story with Story Points (e.g., through Planning Poker).
Let me give a couple of examples – the first one using Story Points. Team A has 50 Story Points’ worth of User Stories on its backlog, and they’ve demonstrated over the course of time that they can generally complete about 10 Story Points’ worth of stories in an iteration (velocity = 10). In order to determine how long it will take to complete the remaining stories, it’s easy to divide 50 by 10 and come up with an estimate of around 5 iterations. If a customer is asking how long before the team can get to a feature that the customer is interested in, and the User Story associated with that feature appears about 30 points down the rank-ordered backlog of stories, then the team can tell the customer that it will be about 3 iterations (30 divided by 10).
The second example is that Team B has 10 stories on its backlog and they’ve demonstrated that they typically complete about 2 User Stories every iteration (velocity = 2). To determine how long it will take to complete the remaining stories, divide 10 by 2 and come up with an estimate of 5 iterations. If a customer is asking how long before the team can get to a feature that the customer is interested in, and the User Story associated with that feature appears about 6 stories down the rank-ordered backlog of stories, then the team can tell the customer that it will be about 3 iterations (6 divided by 2).
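The projection arithmetic from both examples can be sketched in a few lines (rounding up, since a partially-used iteration still costs a full iteration):

```python
import math

def iterations_needed(remaining, velocity):
    """Round up: a partial iteration still costs a full iteration."""
    return math.ceil(remaining / velocity)

# Team A: Story Points (velocity = 10 points per iteration)
print(iterations_needed(50, 10))  # 5 iterations to finish the backlog
print(iterations_needed(30, 10))  # 3 iterations until the customer's feature

# Team B: story counts (velocity = 2 stories per iteration)
print(iterations_needed(10, 2))   # 5
print(iterations_needed(6, 2))    # 3
```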
If you’re part of a fairly experienced team, and if your team already has an established velocity, consider trying this approach in lieu of assigning Story Points – it can help save some time by not going through the sizing exercises. However, if your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.
As always, please feel free to comment, provide suggestions and recommendations, or even tell us about your experiences using Story Points and/or other ways of tracking velocity. Thanks!
You might recall that I ended the last blog post with the following:
One thing at a time, one thing at a time, one thing at a time…
Why are we so convinced that this is the best way to work? Hopefully my previous post made it clear that the flexibility teams have to respond to changing circumstances by working on one thing at a time is an excellent benefit. There are additional, significant benefits to working on one thing at a time – two of which I explore in this post.
How do you know that what you’re creating will do well in the marketplace, meet customers’ needs, and generally provide high customer satisfaction? In the old, waterfall days we used to have “beta programs” where customers would be granted access to some pre-release version of the software that they could use in-house and then provide feedback. The biggest problems with beta programs revolve around the lateness of the feedback received from customers. Typically, if customers didn’t like something, or wanted something different, it was too late to make any major changes – the team was too busy fixing the backlog of defects, and the committed ship-date was often very close when the beta program started. Customers were often told, “We’ll get to that in the next release.”
However, by doing one thing at a time, your team has a much greater opportunity to understand, with a fairly high degree of certainty, that what you’re creating will meet customers’ needs. And the reason this is so is that you can involve your customers in your project throughout the entire lifecycle. When the team completes a small feature, or completes even a small part of a larger feature, that feature can be demo’d to customers. It’s common for agile teams to be demo-ing some completed functionality to customers at the end of their very first iteration. And customers love it! It gives the customers the ability to “put their fingerprints” on the product – they get to see what’s being built, provide immediate feedback, and then see the development organization respond quickly to that feedback. In other words, customers are able to provide real-time, on-going feedback so that when the product finally does ship, customers will be very comfortable in knowing that the product provides what is actually needed.
One other benefit is that customers can actually request that the product be made available earlier than planned. Say, for example, a team plans on three major features for its next release. By working on one feature at a time, it’s quite possible that customers could ask that the product be released as soon as the first two features are complete – or even just the first one! If the feature is needed, and customers can see how they can immediately make use of it, and the team has actually completed the feature, then releasing just the very first feature (instead of waiting until all three planned features are complete) may make the most business sense. But the ability to even have such a business discussion is only possible if one feature at a time is being worked on. If all three features are started in parallel, then the team has created inflexibility since nothing can ship until all three features are completed (yes, dark launches address this problem, but the additional overhead of ensuring what’s “dark” doesn’t interfere with what’s been completed must be considered).
So, to sum up, how do you know? You know because your customers are providing you real-time, on-going feedback throughout the project to help ensure that what you deliver is what they need. It’s a win-win situation for everyone!
Velocity is a great tool for agile teams but it is easy to "over-engineer" the concept -- or even misuse it. Velocity is a mechanism to understand the pace of a team that uses story points to size their user stories.
First, a quick comment regarding story points – Leslie and I advocate the usage of “unit-less” sizings of user stories, meaning that story points are not associated with any measure of time. Story points are simply an assignment of size relative to other user stories that a team works on. Thus, a 3-point story should take approximately 3 times the amount of effort for the team as a 1-point story -- irrespective of how long it takes to complete either one. Note that story point sizings are just relative sizings and they work no matter how long it takes to actually complete a given story of a given size.
Back to velocity… Velocity, simply put, is the term used to describe a team’s observed capacity for completing work. Velocity is calculated by taking the average number of story points a team completes each iteration. For example, if a team completes 8 story points’ worth of user stories in its first iteration, 12 in the second, and 10 in its third, then the team’s velocity is 10 (the average of 8, 12, and 10). Note that several iterations had to be completed first in order for the team to determine its velocity (we advocate a minimum of three iterations before starting to use velocity for project planning and tracking).
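The calculation is exactly as simple as it sounds, using the numbers from the example:

```python
# Velocity is the running average of story points completed per iteration.
completed_points = [8, 12, 10]  # iterations 1-3 from the example above

velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 10.0
```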
Even though the concept is simple, I’ve seen teams misuse velocity and, as a consequence, reduce its value. I’ll address three of the most common misuses in the following paragraphs:
First, velocity is incorrectly used as a "report-card" each iteration. Assume the example I used above where the team has a velocity of 10. Let’s say that in their next iteration they complete 6 story points. In organizations where velocity is not understood, the team will likely be called on the carpet and asked to explain what happened (the thought being that their productivity dropped by 40%!). Given that the assignment of story points is an estimation technique and not something to be measured to 26 digits of precision, there should be an understanding that the number of points a team completes each iteration will vary from iteration to iteration. Perhaps in this iteration the team is working on a story that they underestimated. Big deal… Chances are in a subsequent iteration they’ll work on a story that was overestimated. Over the course of several iterations, the team's velocity will average out. Please do not use the number of story points a team completes in any given iteration as a report-card. Use the average and save everyone a lot of headaches from micromanagement.
Secondly, and closely related to the first, is teams erroneously claiming "partial credit" for a story. When teams see the number of story points completed in a given iteration used as a report-card, they tend to start trying to claim partial credit for an incomplete story. In this scenario a team might say, “We got the code done, so that’s worth 2 points out of this 5-point story.” Even though the testing, defect fixing, automation, etc., haven’t yet been completed, they're just trying to boost the total number of points claimed in the iteration. Unfortunately, this approach defeats a number of benefits of adopting agile. First, it breaks down team synergy by reverting back to an us-vs.-them mentality. When the whole team gets “credit” only when a user story is actually complete, the team is motivated to help each other out. If partial credit is allowed, then this can pit parts of the team against each other. The second problem with this approach is that it violates the agile principle of working software as the measure of progress. Code that has been written, but is untested, is not “working software.” Code that has been written and tested, but where defects haven’t been fixed, is not “working software.” Working software should be the focus for the team… And let me close this section with another simple example: let’s say a team is targeting completion of two 5-point stories this iteration (their velocity has been ~10 points an iteration, so targeting 10 points’ worth of stories makes good sense). Let’s say they complete the first story and are almost complete with the second. If the team tries to take partial credit, they might claim 4 of the 5 points for the second story and, thus, their story point total for this iteration would be 9.
Next iteration, they’d plan to finish off the last bit of the second story and then most likely target an additional 10 points' worth of new stories (since they have been averaging 10 story points so far and they almost completed 10 points last iteration). Let’s say they complete all that work in the next iteration for a total of 11 points. What’s their velocity for the two iterations? It’s 10. Now let’s take the “no partial credit” approach. The team would get 5 story points for its first iteration and 15 for its second. What’s the velocity for the two iterations calculated this way? Right, it’s also 10. This is why focusing on velocity as an average is so liberating for teams – they’re not constantly under the microscope and don’t feel compelled to play games with partial credit.
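The arithmetic is easy to check; a quick sketch, with the per-iteration point totals taken straight from the scenario above:

```python
# Both bookkeeping styles from the example converge to the same average.
partial_credit = [9, 11]     # 5 + 4 "partial" points, then 1 + 10
no_partial_credit = [5, 15]  # 5 points, then 5 (finished story) + 10

print(sum(partial_credit) / len(partial_credit))        # 10.0
print(sum(no_partial_credit) / len(no_partial_credit))  # 10.0
```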
The last abuse of story points and velocity that I’ve seen is that teams get their velocity "dictated" to them. They’re told to achieve some level of velocity before even knowing what their velocity is. Remember, as mentioned at the opening, velocity reflects the team’s observed capacity to complete work each iteration. Typically, when teams are dictated a velocity, it’s in “fixed content/fixed date” projects (which are anathema to agile). Teams wind up working tremendous amounts of overtime to meet their dictated velocity (which is also anathema to agile).
Finally, there are legitimate ways to go about increasing velocity, as well as illegitimate ways. The most prevalent illegitimate way is forcing a team to do overtime – this is not sustainable and will only cause problems down the road. The legitimate way to increase velocity is to pursue what I call “enabling” practices: increasing test automation, improving builds by adopting continuous integration, adopting test-driven development and pair-programming, automating provisioning of test environments, automating deployments, and many others. Yes, putting effort into maturing these practices may mean you slow down the amount of functionality you produce in the short run, but the long-term benefits (such as sustained velocity improvements) will be well worth the initial investment. Velocity is simple – keep it simple. Increasing velocity is where the hard work is.
Leslie and I will be hosting a webcast entitled, How Do Agile Teams Keep From "Waterfalling Backward?" as part of the Global Rational User Community (GRUC) webcast series.
"Waterfalling backward" describes the situation where a team that has started down the Agile path reverts to waterfall ways of thinking when difficult situations arise. We'll cover some examples of what "waterfalling backward" looks like as well as several easy-to-adopt techniques that help teams re-align with Agile if they've started "waterfalling backward."
Note that three attendees will win a copy of our book, Being Agile!
Register here: https://attendee.gotowebinar.com/register/113940558500710914
We are looking forward to the session and hope you can join us!
First of all we would like to wish everyone a Happy New Year! We trust that this year will be full of good challenges and significant achievements for each of you!
Regarding metrics, if there's one thing that engineers love, it's numbers... And if there's one thing that managers love, it's status. Put the two together and oftentimes you have teams that track tons of metrics across all facets of their projects.
Tracking lots and lots of metrics may give the impression of being really thorough and providing all sorts of deep insights. But if you find yourself in an organization that does this, I would encourage you to take a step back and limit the focus to just what's really important.
One of the principles of the Agile Manifesto reads, "Working software is the primary measure of progress." If you stop and think about it, that principle makes perfect sense. Customers aren't going to buy untested code and they're not going to be too happy if they buy something that falls down broken all the time because major defects haven't been fixed. All other metrics should pale in comparison to the one metric of having working software. Putting focus on this one metric will drive many of the benefits that agile promises -- three of which I'll touch on here.
First, focusing primarily on having working software provides a shared, common goal for the organization. This means that no one on the team gets any credit for his or her individual efforts which, in turn, provides plenty of motivation for helping each other out so that the team can take credit for having working software. A lot of the old "us vs. them" mentality of waterfall days is eliminated when focus is placed on working software as the primary measure of progress.
Next, having the focus on working software means that teams will actually achieve working software much earlier in a project than was typical with waterfall projects. Having working software provides plenty of opportunities to involve customers earlier in the project since customers can actually see demos of the functionality while it is being implemented and can even download it and test-drive it in their own environments. Getting customer feedback early is a huge benefit, especially if customers tell you you're going down the wrong path since you'll be able to make mid-course corrections prior to releasing the product.
Finally, having working software enables teams to move into the DevOps space where automation takes on an even more significant role, where releases are more frequent (up to multiple times a day for some Software-as-a-Service offerings), and where opportunities for doing things like "A/B" testing can be pursued. None of this is possible without having working software.
To conclude, if you agree that focusing on working software as the primary measure of project progress makes sense, but you're not quite sure how to ditch many of the metrics you may be currently tracking, I'll leave you with this analogy that Leslie uses with teams when dealing with this very issue. Think of cleaning out your closet -- you pull everything out and then put back only those things you wish to keep. Everything else is either thrown away or donated to charity. So, start with a "clean slate," put working software at the top of the list, and then add only those additional metrics that are absolutely necessary to ensure project success. Tracking anything else beyond just what's absolutely necessary should be considered a waste.
We would be interested in your thoughts and comments, especially if you've already made the transition to tracking working software as your primary metric. Thank you!
Scott and I are thrilled to tell you about our new book called Being Agile: Eleven Breakthrough Techniques to Keep You From Waterfalling Backward. You can click on the link below to get access to the book or use the handy link in the Links section of this page.
Our goal for the book is to help teams that have adopted the standard practices and call themselves agile, but have not gained the real benefits that agile promises. We believe that adopting agile requires a change in thinking, not just adopting a set of practices. When teams don't get immediate results, they often fall back into old habits that will reduce their chances of success. Our book offers eleven breakthrough techniques to help overcome reflexive thinking so that you will start to respond with an agile mindset that prioritizes delivering customer value, ensuring high quality, and continuously improving.
The book consists of eleven chapters that focus on some of the critical topics for agile success. In each chapter we discuss the basic principles that are the foundation for the topic followed by the corresponding practices. Each chapter closes with a breakthrough technique to help teams achieve the real benefits of the aspect of agile covered in the chapter. It was not our goal for the book to be an agile primer or even to cover all of agile thoroughly as there are much better references for that. Our goal was to focus on principles critical to agile adoption with which teams struggle and for each area, give them the means to succeed via using an agile mindset.
Scott and I have both had our breakthrough moments, and so have the teams that we have led and coached. We share experiences from those teams to provide a real-world context for the ideas. Most of what we have learned came from working with numerous teams, and it is our hope that we continue the conversation with you. Please share your "A-ha!" moments on our blog so that we can all continue to learn together.
Being Agile (IBM Press)
Modified on by LeslieEkas
When teams complain that they “do not have enough people”, that may be a sign the team composition has not resulted in an agile whole team. Agile’s mandate for working software requires that teams be composed of the right set of people to deliver on this goal. That generally means developers, testers, writers, user experience experts, and so forth. It also means that there are team members from each of the component technology disciplines so that, as a team, they can deliver end-to-end functionality.
Large teams moving to agile often have to form several smaller agile teams, with each team delivering completed user stories. Teams that have traditionally been organized by technology or by discipline are reluctant to move to a cross-functional structure because there may not be enough technical and domain knowledge in each of the resulting teams. What they quickly discover is that the majority of the knowledge and skills in the team resides with very few people.
The solution to this problem is to cross-train the team members so that they are better able to deliver product capability with the right set of skills and domain knowledge. Many teams resist this solution because it requires upfront costs and will slow the team down for a while. In fact, many teams are so convinced this will not work that they make no effort to cross-train and end up with teams that cannot deliver working software. This is what leads them to believe that they “don’t have enough people”.
It is a very hard sell, but cross-training your teams so that they can deliver a greater range of product functionality is well worth the upfront costs. Software engineers usually have a product area in which they prefer to work or a technology that they favor. But good software engineers have the skills to learn new technologies as required. And it benefits a team when engineers expand their skills: not only are they able to help complete additional work, but their expertise and perspectives are beneficial when building new software and solving problems.
It is important to let management know there will be a short-term slowdown (maybe as much as several months) when cross-training team members. But when teams are willing to cross-train in a results-oriented way, they become more productive and tend to move faster.
There are ways to cross-train and continue to move forward at the same time. Pair programming is a great way to introduce a developer to a new technology or a new area of the code. Developers can also try to fix defects in an area outside their domain, and then request a code review of the completed fix before it gets checked in. Figuring out how to fix a problem really forces one to learn how the code works. Engineers can also test and write test code outside their current domains so that they learn additional areas of the product. There are no doubt many other approaches you can come up with, and we would love to hear your ideas. But teams that are not willing to take this step will struggle to make agile stick, and will continue to complain that they “do not have enough people”.
Modified on by ScottWill
In one of my blog postings just over a year ago I suggested that mature teams may not need to use Story Points. However, at the end of that post, I wrote:
[If] your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.
(You can read the full post here: link)
What I’d like to do with this post is cover a few issues on how to make sure you’re getting the most out of using Planning Poker.
First, keep in mind that Planning Poker is meant to be a very simple and quick process to be used by a team for assigning Story Points to a given story. It’s not meant to provide absolute precision regarding the assignment of Story Points, and the assignment of Story Points itself is meant to be relative to the sizings given to other User Stories on your team’s backlog.
Next, because Planning Poker is an estimation technique, please don’t assume that everyone on the team must be 100% in agreement with the number of Story Points being assigned to a given Story. Let me explain: typically, when a team uses Planning Poker to assign Points to a User Story, there will be a smattering of “votes.” As an example, let’s say that, after the very first vote for a given Story, there are a couple of 3s, a 5, a couple of 8s, and a 13. The scrum master should ask why the folks who voted 3 thought the size was relatively small, and then ask the person who voted 13 why he thought the size was relatively big (i.e., over four times the estimated size of those who voted 3). After a brief discussion, the scrum master should ask the team to vote again. Let’s say this time there’s one 3, several 5s, and a couple of 8s. A good scrum master should say something like, “OK team, let’s go with a 5 for this story and move on to the next story on the backlog.” What the scrum master is looking for is “harmonic convergence.” Notice how, in the second vote, there were fewer 3s and the 13 went away. The bulk of the votes were gravitating toward a 5, so you can see the team is getting closer (“converging”) on a given number. Good enough! Don’t waste time voting again, and again, and again (ad infinitum and ad nauseam) until all the team members vote the same – it’s not worth it. Estimation is not meant to be precise, and spending more and more time trying to be precise with estimates is a waste of time.
And here’s the main point that I’d like to make – let’s say you’re one of the team members who voted an 8 for this story even though the number finally assigned was a 5 – should you be upset? Should you start an argument? Should you want to keep voting until everyone agrees with you and assigns the Story an 8? No, no, and no. Even though you may really think the story should have been sized at an 8, you can see the team was converging on a 5, so just let it go and press on. Planning poker is not a contact sport.
Finally, once teams get the hang of Planning Poker, most of them vote no more than twice on any given story, and they usually don’t spend more than just a couple of minutes per story. Here’s the high-level process:
Step 1. If you've not assigned Story Points to any User Stories on your backlog then, as a team, quickly pick out what you all think is the smallest story on the backlog and automatically assign it a 1. All other Stories on the backlog will be sized relative to this Story.
Step 2. Grab the next User Story from the backlog.
Step 3. Have a brief discussion about the Story regarding the anticipated effort relative to the first story (the Story itself should already be familiar to the team since, if the writing of the Stories was done correctly, the team participated together in writing it).
Step 4. Vote.
Step 5. If there appears to be a fairly strong consensus on a given number, go with that number, and then go back to Step 2….
Step 6. Otherwise, if there’s a fair amount of disparity in the votes cast, then have another brief discussion focusing on asking those who voted with the lowest number, and those who voted with the highest number, to give some insights into their respective thinking.
Step 7. Vote again – you’ll likely see some harmonic convergence on a specific number after the second vote, and then just go with that number. Now go back to Step 2 (lather, rinse, repeat)….
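The convergence heuristic behind these steps can be sketched in code. This is a hypothetical illustration only – the vote values, the 60% majority threshold, and the helper names `consensus` and `spread` are my assumptions, not part of any standard Planning Poker tool:

```python
from collections import Counter

def consensus(votes, threshold=0.6):
    """Return the winning Story Point value if a strong majority of the
    team has converged on one number, otherwise None (vote again)."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count / len(votes) >= threshold else None

def spread(votes):
    """Identify the low and high voters who should explain their
    thinking before the next round."""
    return min(votes), max(votes)

# First round: votes are scattered, so no consensus yet.
round1 = [3, 3, 5, 8, 8, 13]
assert consensus(round1) is None
low, high = spread(round1)  # ask the 3-voters and the 13-voter to explain

# Second round: the bulk of votes gravitate to 5 -- good enough, move on.
round2 = [3, 5, 5, 5, 8, 5]
assert consensus(round2) == 5
```

The point of the sketch is the stopping rule: as soon as a clear majority gravitates to one number, take it and move to the next Story rather than voting again and again.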
In sum, Planning Poker is a great technique to use to quickly assign Story Points to your User Stories. Just be sure that the team doesn’t over-engineer its use.
As always, please feel free to ask questions and share your experiences. Leslie and I look forward to hearing from you!
Modified on by ScottWill
A lot has been written about how many of the agile practices contribute to helping teams and organizations become more efficient. And this is no surprise since agile is built on top of many of the Lean principles that helped manufacturing become much more efficient.
In addition to the tremendous impact adopting agile practices can have on efficiency, another benefit (and one that doesn’t get quite as much “press”) is the reduction of risk that accompanies agile.
First, I’d like to address how risk is significantly reduced by adopting the agile practice of “small batches.” When teams move to coding/testing/deploying small batches of functionality on a regular basis, they typically find defects much earlier than they would have had they still been using large batches. Many of you will likely recall the old waterfall days, when tens of thousands of lines of code were written before the first test against the code was ever executed. Back then, when testing began, large numbers of defects were typically found in a short period of time, and it would take a very long time to work through the list of fixes. With small batches, defects are often found within minutes of the code being checked in, and the impact of any defect found – as well as the time required to fix it – is minimal, since the code is fresh in everyone’s mind and no further code has been written that gets in the way of actually fixing the defect. The risk reduction is significant on this point alone.
Another risk-reduction benefit of adopting small batches is the ability to get faster feedback from customers, because the new functionality is made available much sooner than it would be if lots of different functions were packaged together into a big batch and released/deployed only once in a while. If a team puts out a small improvement, or a new feature, and gets feedback from customers that it doesn’t meet their needs, then the team has gained some valuable insights very quickly and its current investment is small. With large batches, if customers don’t like something, the team won’t know about it for quite some time because of the delay between when the feature was built and when it was released/deployed along with all the other features in the big batch. Big batches are very risky due to the delay in getting feedback, as well as the increased cost of making changes (just think of all the additional testing that would have to take place if a change needed to be made and the batch contained ten features vs. just one).
The last risk-reduction benefit of adopting small batches concerns the pressure to add features. If large batches of features are released only infrequently, product management will typically want to cram as many new features as possible into the current plan – because if a desired feature doesn’t make it into the current plan, it could be a very long time before the next plan is completed and that feature is finally made available. Adopting small batches allows for continual re-ranking of the backlog of requested features based on customer feedback, new technologies, and market conditions, thus significantly reducing the pressure to cram tons of stuff into a “big batch plan.”
Another agile practice that results in a significant reduction in risk is the adoption of “whole teams.” By “whole teams” I’m referring to teams composed of members with cross-discipline and cross-functional responsibilities. While this is certainly nothing new in agile, how adopting a whole-team approach can reduce risk is something that doesn’t get discussed much. For example, when team members used to go off into their own little silos for months on end and rarely interact with other team members, it was very difficult for anyone on the team to have any insight into what was being done by anyone else. And if one of the team members suddenly had to be away for a period of time (e.g., due to an illness or a family emergency), it was almost impossible for anyone else on the team to immediately pick up that person’s work.
With whole teams, not only is there regular communication and sharing of knowledge, but if a situation comes up where someone does have to be away for a lengthy period of time, the impact of the absence will be far less than it would be otherwise because others on the team will have a much better understanding of the work that person was doing and will be able to pick up the work with much less difficulty – thus reducing risk.
In conclusion, I would urge teams to include a focus on risk reduction as part of their regular reflections – it’s part of agile’s focus on continuous improvement.
Please feel free to comment on other areas that you’ve seen where adopting any of the agile practices has led to a significant reduction in risk. Thank you!
Modified on by ScottWill
Just a quick plug for the upcoming IBM Rational Innovate Software Engineering Conference next week in Orlando at the Disney Swan & Dolphin Resort. I'll be leading the Agile Track Kickoff Session on Monday, 2 June, at 11am, and following that up with a book signing from noon to 1pm at the book table. It promises to be a great conference again this year! Please stop by and say "Hi!" if you'll be attending!
Main conference website: http://www-01.ibm.com/software/rational/innovate/
Agile Track Kickoff Session: https://www-950.ibm.com/events/global/innovate/agenda/preview.html?sessionid=DAG-2459
Podcast (12 minutes) covering the Agile Track: http://public.dhe.ibm.com/software/info/television/swtv/Rational_Software/podcasts/rttu/gist_moore_innovate_scaleagile_enterprise-2014-0520.mp3
Thanks -- and hope to see you there!!
Modified on by LeslieEkas
Short feedback loops are one of the keys to the success of agile because they enable teams to learn fast and adjust. This technique is leveraged in numerous practices used by agile teams, including the daily standup, sprint demos, code check-in, build and validation cycles, pair programming, and continuous delivery. One of the key benefits that convinced me to try agile was its claim to deliver greater value with higher quality. After I had experimented with agile for a while, I started to understand just how critical short feedback loops are to achieving these goals.
Short feedback loops enable you to learn fast so that you can adjust while the costs are low. It has long been well understood in software development that the cost to fix a defect increases – in many cases dramatically – the longer you wait to fix it. Some of the reasons: more code may have been added, making the defect more complex to fix; the new code requires additional testing and fixing; and by the time you get to the fix, you may have lost familiarity with the code in question. To combat this, it is critical to find defects soon after the code is checked in. One short feedback loop used here is to automatically test the code through a series of progressive test gates. Additionally, if the code is checked in frequently (another short feedback loop), developers get quick feedback on quality.
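The idea of progressive test gates can be sketched as follows. This is a minimal, hypothetical illustration – the gate names, their ordering, and the `run_gates` helper are my assumptions, not a reference to any particular build tool. The principle is to run the cheapest checks first and stop at the first failure, so the developer hears about a problem within minutes of checking in:

```python
def run_gates(change, gates):
    """Run test gates in order of increasing cost; stop at the first
    failure so the developer gets feedback as early as possible."""
    for name, check in gates:
        if not check(change):
            return f"FAILED at gate: {name}"
    return "PASSED all gates"

# Hypothetical gates, ordered cheapest-first; each is a predicate on the change.
gates = [
    ("compile",         lambda c: c["compiles"]),
    ("unit tests",      lambda c: c["unit_tests_pass"]),
    ("integration",     lambda c: c["integration_pass"]),
    ("full regression", lambda c: c["regression_pass"]),
]

change = {"compiles": True, "unit_tests_pass": False,
          "integration_pass": True, "regression_pass": True}
print(run_gates(change, gates))  # stops at the unit-test gate; the slower
                                 # integration and regression gates never run
```

Because the expensive gates only run once the cheap ones pass, most broken check-ins are reported while the code is still fresh in the developer's mind.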
Another manifestation of short feedback loops is the daily standup, which enables teams to identify critical blockers and get help fixing them before the next meeting. Someone has a problem; someone else jumps in to help fix it. Shorten the time to fix a problem; reduce the cost.
To ensure that agile teams are meeting the needs of their customers, they demo their new work at the end of each sprint to get timely feedback. This version of a short feedback loop is fairly well known to agile teams, but it may be surprising how few teams actually do these demos with their end users. The demos give product users ongoing updates on how the new code is progressing and the opportunity to provide feedback. This kind of feedback is like bug fixing: the sooner it is applied, the more cost-effective it is. Going a step further, releasing functionality frequently enables users to get traction and provide feedback before the functionality is complete. Customers have to be willing to experiment and respond, but the win for them is that they get an active voice in getting the value they require.
As teams continue their agile adoption, it is useful to remember why we adopt various techniques to make sure that we get the value promised. To deliver higher value with better quality we use short feedback loops so that we learn fast and adjust.