The goal of the Being Agile blog is to discuss what it means to be agile and to help each other achieve the benefits that agile promises. Scott Will and I were each introduced to agile software development in early 2007, when IBM hired Mary and Tom Poppendieck to train a few lucky groups in Software Group. Since then we have had the opportunity to both practice and coach agile.
We have learned that being successful with agile is no easy feat: agile requires continuous discipline, it requires focus in order to get something done (really done), and it requires a productive, interactive team environment. To succeed with agile you have to think differently about planning and executing software projects, as well as about reacting to problems. You can never become complacent, and you must relentlessly look for ways to improve.
LeslieEkas
LeslieEkas Tags: waterations whole-teams whole-team backlog defect waterfall get-to-done
Teams migrating to agile from waterfall have a significant learning curve to overcome, so they tend to adopt the easier-to-manage practices first (the daily standup, time-boxed iterations, and reflections, to name some of the typical ones). If they do not also tackle practices like regularly having working software, getting to "Done!" every iteration, and not accumulating debt, then the benefits they realize will likely be minimal. Stated a different way: if a team's agile adoption does not produce some clear improvements in a timely fashion, then the team is not likely to make the effort to continually get better.
A sure sign of this "stagnant" situation is teams who are doing "waterations." Waterations are iterations that look like mini-waterfalls. They typically start with design in the first couple of days, then proceed to coding for a number of days, followed by testing, and so on. Teams that operate this way don't normally get to "Done!" until the last day of the iteration (if they get there at all).
To get real benefit from adopting agile -- and to make sure that it "sticks" -- teams should move beyond just the basics and continually look to adopt the more rigorous practices: keeping debt at or near zero throughout a project; a concerted "whole team" focus, where progress is measured at the team level instead of at the individual level; coding and testing done throughout an iteration instead of being separated into phases within an iteration; and having comprehensive "Done!" criteria and sticking to them. These will help teams realize more significant benefits, and we'll be addressing each of these topics in future posts.
LeslieEkas Tags: programming whole-team software pair cross agile train defects whole-teams
When teams complain that they “do not have enough people,” that may be a sign that the team composition has not produced an agile whole team. Agile’s mandate for working software requires that teams be composed of the right set of people to deliver on this goal. That generally means developers, testers, writers, user experience experts and so forth. It also means that there are team members from each of the component technology disciplines so that, as a team, they can deliver end-to-end functionality.
Large teams moving to agile often have to form several smaller agile teams, with each team delivering completed user stories. Teams that have traditionally been organized by technology or by discipline are reluctant to move to a cross-functional structure because there may not be enough technical and domain knowledge in each of the resulting teams. What they quickly discover is that the majority of the knowledge and skills in the team resides with very few people.
The solution to this problem is to cross-train the team members so that each team has the right set of skills and domain knowledge to deliver product capability. Many teams resist this solution because it carries upfront costs and will slow the team down for a while. In fact, many teams are so convinced it will not work that they make no effort to cross-train and end up with teams that cannot deliver working software. This is what leads them to believe that they “don’t have enough people”.
It is a very hard sell, but cross-training your teams so that they can deliver a greater range of product functionality is well worth the upfront costs. Software engineers usually have a product area in which they prefer to work or a technology that they favor, but good software engineers have the skills to learn new technologies as required. And it benefits a team when engineers expand their skills: not only can they help complete additional work, but their expertise and perspectives are valuable when building new software and solving problems.
It is important to let management know that there will be a short-term slowdown (maybe as much as several months) while cross-training team members. But when teams are willing to cross-train in a results-oriented way, they become more productive and tend to move faster.
There are ways to cross-train and continue to move forward at the same time. Pair programming is a great way to introduce a developer to a new technology or a new area of the code. Developers can also fix defects in an area outside their domain, then request a code review of the completed fix before it gets checked in; figuring out how to fix a problem really forces one to learn how the code works. Engineers can also test, and write test code, outside their current domains so that they learn additional areas of the product. There are no doubt many other approaches you can come up with, and we would love to hear your ideas. But teams that are not willing to take this step will struggle to make agile stick, and will continue to complain that they “do not have enough people”.
ScottWill Tags: communication practice lean parallelization shared-goal waste one-thing-at-a-time eliminate standup coordination whole-team status share daily
A while ago I started working with a team that had stopped having “standup” meetings altogether. Initially, the team started with the typical daily team meetings. After a little while they moved from having daily meetings to having them only three times a week. Subsequently they went to just once a week. When I asked why they had decreased the frequency of their meetings, and then ultimately stopped having them altogether, I was told that no one thought they were very useful and some of the team members had already stopped attending.
ScottWill Tags: blocking-issues complete today whole-team plan daily offer standup planning help problem meeting questions status blockers three
In the last post I did a quick overview of the purpose and benefits of the “daily standup” meeting. As for conducting the meeting itself, I like the idea of using "The 3 Questions” to guide the meeting: What did you complete since the last meeting? What will you work on before the next meeting? What is blocking your progress?
Each team member provides answers to the three questions – keep in mind that the answers should focus on information important to other team members. Telling the team that you’ve completed a development task and that the code is ready for testing is important to the team. On the other hand, telling the team that you had your weekly 1-on-1 meeting with your manager yesterday really isn’t. What I want to focus on here, though, is the third question...
What is your experience when a team member raises a blocking issue during the daily team meeting? Do team members immediately offer to help with getting rid of it? (It is a blocking issue after all…) Or do you hear the sounds of crickets chirping? If the latter, then permit me to make a recommendation: I would encourage you to offer to help, especially if it’s an area you know: “Hey Scott, I hit a similar problem Tuesday – I can help you with that.” Even if it’s outside your area of expertise, you can still offer to help: “Hey Scott, I don’t know if I can provide immediate relief for your blocking issue or not, but I’ll catch up with you right after this meeting and see if there’s anything I can do.” Consider it an opportunity to learn as well as an opportunity to help.
This is how an agile “whole team” should operate – blocking issues shouldn’t fester, and team members should be willing to help others on the team when needed.
While the sounds of silence may be wonderful in some instances (i.e., a peaceful walk in the woods), the sounds of silence when blocking issues are raised should be deafening. Be willing to help a team member out when needed and, by doing so, you’ll not only be setting a good example, you’ll be helping the team continue to make progress on your project.
ScottWill Tags: rank-order together whole-teams criteria backlog userstories agile tasks iteration small documentation work completing test development
This week I thought I'd re-post an article that appeared earlier on DeveloperWorks.
Early on, when I first started coaching teams on their transition to Agile, it was quite common for teams to come to me and tell me that they hadn't completed any User Stories by the end of an iteration. Initially, teams wanted to use this as a reason to increase the length of their iterations (I recommend two-week iterations, and teams always seem to want longer ones). What they failed to realize is that there were fundamental reasons why they weren't completing stories, and iteration length by and large had nothing to do with it. Let me explain....
One of the first questions I'd ask a team in this situation was whether they were assigning User Stories to team members with the idea that each member was responsible for completing his or her User Story independently of others on the team. Almost invariably, the answer was “Yes, but how did you know that...?” What was happening in these situations was that each member of the team would work on his or her own User Story and, at the end of an iteration, the team would have six, or seven, or eight partially-completed User Stories. Nothing was completely “Done!” by the end of the iteration – just a bunch of partially-completed stories. And since each team member was focused on completing his or her own story, there was no “incentive” for anyone to jump in and help another team member get a story done.
In this situation, we would remind the team about the importance of the Agile practice of “whole teams.” Part of this practice recommends that a team try to work together on one User Story at a time, complete it, and then as a team move on to the next story on the backlog. This way, the team is constantly completing User Stories and will have, at most, one unfinished User Story at the end of an iteration.
In order to pull this off, there's another practice that we recommend teams adopt – the creation of small implementation tasks for each User Story. At the beginning of an iteration, the whole team goes through the process of creating the various tasks needed to complete a story (development tasks, test tasks, user documentation tasks, automation tasks, code review tasks, etc.). These tasks should be small (on the order of 4 hours, 8 hours, to no more than 16 hours), which then allows for a lot of “parallelization” of the work. For example, instead of having one developer work on a coding task for five days, you could put five developers working in parallel on five one-day development tasks and, at the end of a single day, have all the coding completed that otherwise wouldn't have been complete until the end of five days. Additionally, there should be corresponding documentation, test, automation, and code review tasks for each of the development tasks. This way, a team achieves a very tight integration between dev, doc, and QA throughout each User Story (not to mention each iteration), and the team also always has working code, since it completes one User Story before moving on to the next. By the way, this approach is not something that comes naturally to a team (especially if they're just beginning the transition to Agile), but give the team time and encouragement to try it out – the effort is worth it.
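The parallelization arithmetic above can be sketched in a few lines. This is a toy greedy scheduler, not anything the post prescribes; the task sizes are the ones from the example.

```python
# Toy sketch: elapsed calendar time for a story's tasks when done
# sequentially by one developer vs. spread across the team in parallel.

def elapsed_days(task_days, developers):
    """Greedy schedule: each task goes to the least-loaded developer."""
    loads = [0] * developers
    for t in sorted(task_days, reverse=True):
        i = loads.index(min(loads))  # least-loaded developer picks up the task
        loads[i] += t
    return max(loads)

tasks = [1, 1, 1, 1, 1]  # five one-day development tasks

print(elapsed_days(tasks, developers=1))  # 5 -- one developer takes five days
print(elapsed_days(tasks, developers=5))  # 1 -- five developers finish in one day
```

The same helper also shows why small tasks matter: one five-day task can never finish faster than five days, no matter how many developers are available.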
A few closing comments: obviously, piling everyone on a scrum team onto one story won't always work, especially with very small User Stories. The goal – even with a very small User Story – should be to put two or three engineers on that story while the rest of the team works on an additional story. Keep the number of stories being worked on at the same time to the smallest number possible. This should go a long way toward helping you consistently finish User Stories at the end of every iteration.
We welcome any comments and suggestions you may have -- thanks!
One of the primary reasons that agile teams are “more productive” is that they work together closely to complete small batches of work frequently and continuously. The better the team is at working together, the better their productivity. However, teams struggle to improve their ability to work together, and so this becomes one of the most significant barriers to real agile success.
ScottWill Tags: story-points user-stories velocity whole-teams agile project-tracking project-management
Velocity is a great tool for agile teams but it is easy to "over-engineer" the concept -- or even misuse it. Velocity is a mechanism to understand the pace of a team that uses story points to size their user stories.
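As a minimal illustration of that definition (the numbers and the helper name are mine, not from the post), velocity is just the average number of story points completed per iteration, often taken over a recent window:

```python
# Minimal sketch: velocity as a rolling average of story points
# completed per iteration.

def velocity(completed_points, window=3):
    """Average points completed over the last `window` iterations."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

history = [18, 22, 20, 24, 21]  # points completed in past iterations

print(velocity(history))  # average of the last three iterations
```

A team might use this number to gauge how many points' worth of stories to pull into the next iteration; nothing more elaborate is needed.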
Finally, there are legitimate ways to go about increasing velocity, as well as illegitimate ways. The most prevalent illegitimate way is forcing a team to do overtime – this is not sustainable and will only cause problems down the road. The legitimate ways to increase velocity are to pursue what I call “enabling” practices: increasing test automation, improving builds by adopting continuous integration, adopting test-driven development and pair programming, automating provisioning of test environments, automating deployments, and many others. Yes, putting effort into maturing these practices may mean you slow down the amount of functionality you produce in the short run, but the long-term benefits (such as sustained velocity improvements) will be well worth the initial investment. Velocity is simple – keep it simple. Increasing velocity is where the hard work is.
LeslieEkas Tags: user-stories agile waterfall customer-value coaching book whole-teams breakthrough
Scott and I are thrilled to tell you about our new book called Being Agile: Eleven Breakthrough Techniques to Keep You From Waterfalling Backward. You can click on the link below to get access to the book or use the handy link in the Links section of this page.
Our goal for the book is to help teams that have adopted the standard practices and call themselves agile, but have not gained the real benefits that agile promises. We believe that adopting agile requires a change in thinking, not just adopting a set of practices. When teams don't get immediate results, they often fall back into old habits that will reduce their chances of success. Our book offers eleven breakthrough techniques to help overcome reflexive thinking so that you will start to respond with an agile mindset that prioritizes delivering customer value, ensuring high quality, and continuously improving.
The book consists of eleven chapters that focus on some of the critical topics for agile success. In each chapter we discuss the basic principles that are the foundation for the topic, followed by the corresponding practices. Each chapter closes with a breakthrough technique to help teams achieve the real benefits of the aspect of agile covered in that chapter. It was not our goal for the book to be an agile primer or to cover all of agile thoroughly, as there are much better references for that. Our goal was to focus on the principles critical to agile adoption that teams struggle with and, for each area, give teams the means to succeed by applying an agile mindset.
Scott and I have both had our breakthrough moments, and so have the teams that we have led.
ScottWill Tags: waste agile eliminate-waste project-tracking testing project-management progress coding working-software metrics lean
First of all we would like to wish everyone a Happy New Year! We trust that this year will be full of good challenges and significant achievements for each of you!
Tracking lots and lots of metrics may give the impression of being really thorough and of providing all sorts of deep insights, but if you find yourself in an organization that does this, I would encourage you to take a step back and limit the focus to just what's really important.
Teams create processes to maintain repeatable results. Process is necessary. But when the process needs adjustment, teams often take the quick route and just add more and more to their process to address their new problems. The result is often process bloat, frustration, and project inefficiency. Then the team is back where it started.
ScottWill Tags: normalize project-management multiple-teams story-points burnup tracking velocity burndown sizing user-stories agile scaled
When a project is big enough to have two or more teams working on it, we recommend that each team have its own backlog of User Stories and size its own stories with Story Points. Of course, this inevitably raises the idea of having each team conform to some “story point standard” so that project progress and team progress can be tracked "uniformly." Doing this more or less means normalizing story points across all the teams so that, for example, a 5-point story for one team is the same as a 5-point story for every other team on the project, an 8-point story is the same for every team, and so on.
All I have to say to this idea is: “Ugggghhh… What a colossal waste of time!" I’ve seen teams try to do this, and the effort put into trying to ensure uniformity of sizing across multiple teams is enormous.
I’m happy to report that there’s an easier way that still allows both overall project progress and individual team progress to be easily seen. In essence, instead of normalizing on story points, we suggest “normalizing” on the calendar. Let me explain: assume there are two teams on a project, each with its own backlog sized in story points. Team A has 2000 story points on its backlog and Team B has 500. In this case you would have three burndown charts to track progress: a combined one (showing 2500 total points), a second for just Team A (showing 2000 points), and a third for just Team B (showing 500 points). To determine relative progress for Team A and Team B, you base it on the number of points completed vs. the number of iterations completed. Thus, if you're half-way through the project (e.g., 5 of 10 planned iterations are complete), you would expect Team A to be ~1000 points complete (50% of its total) and Team B to be ~250 points complete (50% of its total).
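The arithmetic in that example can be written out directly; the helper name below is mine, and the figures are the ones from the two-team example.

```python
# Calendar-based expectation: the points a team "should" have completed
# this far into the schedule.

def expected_points(total_points, iterations_done, iterations_planned):
    """Pro-rate a team's backlog by the fraction of the schedule elapsed."""
    return total_points * iterations_done / iterations_planned

# Half-way through a 10-iteration project:
print(expected_points(2000, 5, 10))  # 1000.0 -- Team A's expected progress
print(expected_points(500, 5, 10))   # 250.0  -- Team B's expected progress
```

Comparing each team's actual completed points against this expectation tells you whether it is ahead of or behind schedule, with no cross-team normalization of story points required.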
Where things get to be a little more complicated is when you have numerous teams working on a project (for some enterprise application projects we’ve worked with, we’ve seen over 40 teams working on the same project). With two teams, it’s fairly easy to bounce back and forth between Team A’s release burndown chart and Team B’s release burndown chart. When you have more than that, you can create a very simple chart (using any spreadsheet tool) that shows “percent calendar complete” and “percent story points complete” for each team. Following are two examples that we’ve used (they show the same data, just in different formats):
You’ll notice in the first chart that the schedule is 60% complete (the black bars) and then each team’s relative progress is shown. In the second chart, a stacked bar is used to show overall team progress. In this made-up example, Team 1 is slightly ahead, Team 5 is slightly behind, and the other teams are perhaps struggling...
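Since the charts themselves aren't reproduced here, a rough sketch of the underlying table follows; the team names and point counts are invented for illustration.

```python
# Percent of the calendar elapsed vs. percent of each team's story
# points completed -- the data behind the kind of chart described above.

backlogs = {"Team 1": 800, "Team 2": 600, "Team 3": 500}   # total points
completed = {"Team 1": 520, "Team 2": 330, "Team 3": 240}  # points done
schedule_pct = 60  # e.g., 6 of 10 planned iterations complete

for team, total in backlogs.items():
    points_pct = 100 * completed[team] / total
    flag = "ahead" if points_pct >= schedule_pct else "behind"
    print(f"{team}: schedule {schedule_pct}% | points {points_pct:.0f}% ({flag})")
```

A spreadsheet works just as well; the point is that each team is compared only against its own backlog and the shared calendar.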
I’m sure you can think of other, different ways of showing the data, so do whatever’s most helpful. The key is that this approach is so much easier and simpler than trying to normalize the sizing of story points across multiple teams.
LeslieEkas
Successful agile teams require timely leadership to help the team work together, make decisions, and progress. A team leader may hold a role associated with leadership, like scrum master, or a leader may emerge when the opportunity presents itself. Making leadership work on an agile team requires that you learn both to lead and to follow. Early in my career as a manager, I partnered with a leader from another team to design and build a new product. I was full of energy and excitement, and I wanted my team to accomplish great things. However, I was aggressive a lot of the time: quick to pass out assignments, quick to check on progress, and quick to grow impatient. My new partner had a very different personality, and I noticed that she had a particular trait that I lacked. She was very good at working with teams in real time to solve a problem. I told myself to stop and pay attention. How does she do it?
She came to each discussion prepared. She did her homework in advance so that she had knowledge of the topic and often an opinion. At the beginning of the meeting she set the tone: knowledge sharing, respectful discussion, and decision making. She started by reviewing the problem at hand and made sure that everyone in the meeting was on the same page. And then came the real difference: when others offered their thoughts, she listened. I mean she really listened and paid attention to what they were saying. She would read their thoughts back to make sure that she understood, and to let them know that she understood. She did not allow herself to get distracted; her focus on a topic helped keep the meeting on course. Her natural ability to hear everyone’s input and respond to it let them know their input was valuable. Finally, she was always calm. That, more than anything, made the difference.
After watching her run very successful discussions, I paid attention to my own behavior. I worked harder to make sure that I came prepared. But when I disagreed with an opinion, my heartbeat would increase and my impulse was to jump in fast and offer my own. That kind of rapid response can seem like an interruption and result in sparring. Sometimes sparring leads to successful results, but it often discourages others from participating. Problem solving usually requires thoughtful debate and compromise; pushing ideas back and forth allows teams to think together. But it is important to keep the tone one of debate, not sparring, otherwise good thinkers may shut down and not offer their input. What I learned to do was to fight the impulse to react and instead stay grounded and calm. Now, when I participate in a problem-solving discussion and that impulse to jump in strikes, I try to take a deep breath and wait to speak; I speak only after my pulse drops. This habit has saved me many times, because waiting allowed me to recognize when my thoughts were not well considered. Listening to others and leveraging their input enabled me to work out a more constructive comment.
I learned more from my old partner and many other leaders than they will ever know. Now, as a more mature leader, I carry a set of personas, each of which is good at some aspect of leadership. When I get into trouble, I think, “What would she have done in this situation?” and I try her approach. My point is that some days you have to lead, and some days you should follow and learn. As a leader, this is how you can foster the agile practice of continuous improvement.
ScottWill Tags: release-cycles working-software saas devops automation agile
No doubt you’ve already heard about DevOps. What you may not be aware of is how DevOps builds on Agile principles and practices. For example, Software-as-a-Service (SaaS) projects have adopted DevOps in order to have the ability to frequently update their websites with additional features and capabilities. Some SaaS offerings are updating their production websites multiple times a day – which is a far cry from the once-a-year-release cycle that was rather typical for products built using waterfall.
You might be thinking to yourself, “Multiple times a day…? How does that happen…?” I’ll touch briefly on three essentials of Agile that contribute to this capability. The first is breaking work down into small batches and completing one small batch of work prior to moving to another. You’ll no doubt recognize this as a Lean principle derived from queuing theory. Simply stated, Lean has shown that small batches of work transitioning through a system are far more efficient than an occasional big batch of work. For our purposes, this means that having small amounts of code written, tested, and automated, where any defects have been fixed, and where any necessary user-documentation has been written, is far more efficient and productive than just having a large batch of code written that hasn’t been tested or documented.
The second point is Agile’s emphasis on “working software as the primary measure of progress” (see the 7th Principle of The Agile Manifesto). This ties in directly with the previous point – as small amounts of code are written, the goal is to get that code to production-level status as quickly as possible (“working software” that is “Done!”) so that the new code can be provided to customers with high confidence that it will work as desired, have the right level of quality, and help customers address a particular need.
The last point regards the tremendous emphasis in Agile on automation. With DevOps, the need for automation increases significantly because of the desire to deploy new features and capabilities on such a short timetable. Not only are builds and testing automated, but test environments are automatically provisioned, code that passes the automated test suite is automatically deployed into production and, if a problem occurs anywhere along the way, the process is stopped, the code is automatically rolled back, and appropriate personnel are automatically notified of the problem. The ultimate goal is to make a small code change, “push a button,” and have the new feature or capability automatically rolled out without any further human intervention.
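The deploy-or-roll-back flow just described can be caricatured in a few lines. Every function here is a stand-in; a real pipeline would invoke your actual build, test, deployment, and notification tooling.

```python
# Toy "push a button" pipeline: run the automated tests, deploy on
# success, roll back and notify on failure -- all stand-ins.

def run_tests(build):
    return build["tests_pass"]          # stand-in for the automated suite

def deploy(build):
    print(f"deployed {build['version']}")

def rollback(build):
    print(f"rolled back {build['version']}; team notified")

def pipeline(build):
    if run_tests(build):
        deploy(build)
        return "deployed"
    rollback(build)
    return "rolled back"

print(pipeline({"version": "1.4.2", "tests_pass": True}))
print(pipeline({"version": "1.4.3", "tests_pass": False}))
```

The human's only decision is to push the change; everything after that point, including the failure path, is automated.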
No doubt you can tie additional Agile principles and practices to the DevOps initiative, but I thought I’d touch on just these few to show how DevOps is nothing more than a logical next step after Agile – they are not two, independent approaches to building software. As always, please feel free to share your thoughts! We look forward to hearing from you!
Just a quick plug for the upcoming IBM Rational Innovate Software Engineering Conference next week in Orlando at the Disney Swan & Dolphin Resort. I'll be leading the Agile Track Kickoff Session on Monday, 2 June, at 11am, and following that up with a book signing from 12noon to 1pm at the book table. It promises to be a great conference again this year! Please stop by and say "Hi!" if you'll be attending!
Main conference website: http://www-01.ibm.com/software/rational/innovate/
Agile Track Kickoff Session: https://www-950.ibm.com/events/global/innovate/agenda/preview.html?sessionid=DAG-2459
Podcast (12 minutes) covering the Agile Track: http://public.dhe.ibm.com/software/info/television/swtv/Rational_Software/podcasts/rttu/gist_moore_innovate_scaleagile_enterprise-2014-0520.mp3
Thanks -- and hope to see you there!!
ScottWill Tags: manifesto agile continuous reflection retrospective iteration principles improvement continuous-improvement planning
I forget exactly where I heard the quote; it came up during a webcast I was listening to several years ago, from someone well known in the industry. It caught my attention and helped me start to focus more on why Continuous Improvement is considered a foundational principle of agile. My research led me back to the Agile Manifesto, specifically to its principles, one of which reads: “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”
No doubt you’ve heard of, and hopefully engaged in, regular reflections (sometimes called “retrospectives”). If you haven’t done so, or have tried them and not gotten much value, then I’d like to provide a few ways to help make them valuable and to help you and your organization adopt a Continuous Improvement mindset. Conversely, even if you do regularly hold reflections and find them useful, the hope is that the following is something else you can add to your “bag of tricks” and make them even more valuable and compelling.
First, we recommend that reflections occur at the end of every iteration. These meetings are team meetings and shouldn’t take more than an hour (a half-hour seems to be fairly typical in our circles).
Next, the meetings are not meant to be “gripe sessions.” Yes, sometimes a little bit of griping can be cathartic, but the goal of a reflection meeting is more than just walking away happy that you had the chance to vent your spleen… The goal is to figure out a way to improve between now and the next reflection meeting.
The meeting facilitator simply goes around the table and asks everyone to list the pain points they’re facing. The facilitator keeps a list of every pain point brought up and, after the last person has spoken, shows the list ranked from the pain points with the highest number of mentions down to those with the fewest. The item at the top of the list is what the team focuses on, since it is the one having the biggest negative impact on the team.
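The facilitator's tally is simple enough to sketch in a few lines; the pain points below are invented examples.

```python
# Rank pain points by how many team members mention them.
from collections import Counter

mentions = [
    "flaky automation", "slow builds", "flaky automation",
    "unclear stories", "flaky automation", "slow builds",
]

ranked = Counter(mentions).most_common()  # highest mention count first
print(ranked)  # [('flaky automation', 3), ('slow builds', 2), ('unclear stories', 1)]

top_pain_point = ranked[0][0]  # the one the team tackles next iteration
```

A whiteboard and tick marks accomplish the same thing; what matters is ranking by mentions and committing to only the top item.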
Now that the biggest pain point for the team has been identified, the team brainstorms on some specific actions to take to improve in that one area and includes those actions as part of the iteration plan for the next iteration.
At the end of the next iteration, during the reflection meeting, the team determines whether that pain point has been resolved. If not, the team brainstorms on some additional actions to take in the next iteration. The key here is to NOT start trying to solve another pain point until the first one has been solved to everyone’s satisfaction. Once the biggest pain point has been solved, only then should the team move to the next biggest pain point on the list. Lather, rinse, repeat every iteration – and soon Continuous Improvement will be the norm…
I’ll have some additional suggestions in the next post. In the meantime, please comment on your experiences with reflections, as well as on any helpful practices you’ve discovered. Thanks!
ScottWill Tags: retrospective fixing-problems agile continuous-improvement iteration reflection continuous planning
In my previous post I mentioned how to use end-of-iteration reflection meetings to determine what a team should focus on for improvement in the next iteration. In this post I want to briefly cover two different ways that a team can tackle those improvement actions.
In the first – and simplest – case, let’s say a team needs to fix a problem with their automation framework that is causing some instability. So far, the team has just addressed the immediate symptoms in order to get the automation back up and running, but there’s a sense within the team that the problem runs deeper and needs some focused effort. In this case, the team needs to set aside some dedicated time during the next iteration in order to solve this problem at its root so that it doesn’t keep coming back time and time again, and continually costing the team. Some teams I’ve worked with have simply written a user story and put it on their backlog of user stories to ensure that the dedicated time actually occurs. In this case, the story goes to the very top of the backlog – the team addresses it at the outset of the next iteration (since it’s now #1), fixes their automation problem, and only then starts working on the next story on their backlog. Whether you choose to write a user story or not is up to you – the KEY is that you set aside time to handle smaller issues.
The second case involves larger problems – problems that will take time to solve and that can be quite complex. A simple example of this may be that the automation framework that the team has been using for years may no longer be supported, and now the team has to move to a brand new automation framework. This undertaking isn’t going to be able to be handled in a day or two – even with the whole team focused on helping. Since there’s always pressure to get new features and functionality out the door as quickly as possible, the team most likely isn’t going to be able to hit the “pause button” for four weeks while they switch over to the new framework. What we suggest here is a bit of a compromise: create a small team of automation folks and dedicate them 100% of the time to moving the current automation to the new automation framework. The remaining team members continue to focus on new feature work. The important thing with this suggestion is that you’re getting your new automation framework implemented while still getting new feature work completed (albeit at a somewhat slower pace) – but you’re doing so without having engineers multitask across different sets of responsibilities. This approach honors the lean principle of working on one thing at a time.
Just to recap: if you're tackling a small improvement action, dedicate time. If it's a larger improvement action, dedicate a team.
As always, please feel free to respond with any additional advice, insights, experiences, and/or questions that you may have. Thanks!
LeslieEkas
I have recently spent many hours in user story writing sessions and am reminded of some fundamental best practices. One in particular that I find essential is to make sure that user stories are measurable. My advice to teams is to review their stories and remove words like faster, easier, better, and improved if there is no measurement attached. Instead, consider phrases like a 50% reduction in response time, 2 clicks to resolution, or time to deploy under 1 hour.
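To make this concrete, here is a minimal Python sketch of turning a criterion like "a 50% reduction in response time" into an automated check. The timing helper and the baseline numbers are invented for illustration; they are not from any particular project.

```python
# Sketch: checking a measurable user-story criterion automatically.
# measure_response_time() and the baseline values are hypothetical examples.
import time
import statistics

def measure_response_time(call, samples=5):
    """Time a callable several times and return the median seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def meets_criterion(new_median, baseline_median, target_reduction=0.50):
    """True if the new response time is at least 50% below the baseline."""
    return new_median <= baseline_median * (1 - target_reduction)

# Example: the baseline was 2.0s; the story's goal is a 50% reduction.
assert meets_criterion(0.9, 2.0)       # 0.9s is under the 1.0s target
assert not meets_criterion(1.2, 2.0)   # 1.2s misses the target
```

A check like this can live in the story's acceptance tests, so "measurable" becomes "automatically verified."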
Teams often tell me they cannot include real measurements because they do not know what is achievable before they complete the work. I disagree; teams rarely develop software without a notion of what is technically possible. Let me offer a simple analogy: the first (and only) time I signed up for a marathon, I did not know if I could finish it. I did, however, know that I could complete a half marathon. Adding another 13.1 miles was no small feat for me, but I knew how to train and I had an idea what my legs could do. Similarly, when developing software, teams typically know what technical options are available to leverage and what benefits those options enable.
In addition to being measurable and technically feasible, the goal also has to satisfy the stakeholder. For example, if a sales application cannot present buying options within seconds, or even sub-second, the customer may lose interest and investigate an alternative; targeting a 10-second goal will not satisfy the stakeholder's needs. Teams generally know what measurable boundaries exist to ensure that the user story succeeds.
The idea is to combine the team's best approximation of what is measurably achievable with the available technology and a value that would satisfy the user story's stakeholder. The worst thing that could happen is that the story fails. If the team is practicing evolutionary architecture with thin vertical stories to test early, then they will know quickly what is possible. In fact, what I tend to see is that teams overachieve their goals.
User stories that are measurable often require a team's best educated guess at an achievable goal. The risk is that if teams do not set a target, that is, a measurable goal, they tend not to achieve it. If I had not signed up for a marathon, I am pretty sure I would never have run that far.
ScottWill | Tags: lean, new-feature, agile, flexibility, new-request, project-management, project-tracking, waterfall
Years ago, waterfall projects would typically shun requests for new features or capabilities being added to a project that was underway. After all, project success was typically measured by conformance to “the plan” – and a new request was not part of the plan. These waterfall teams would complete the opening sentence something like this, “When we get a new feature request, we put it on our list of features for consideration in the next release.” Thus, an opportunity to respond to changing conditions, changing customer needs and expectations, or changing competitive situations in a timely manner was lost.
Other waterfall projects have tried to add the new feature in along with the work that was originally committed in “the plan,” but that would often spell disaster since teams were typically already working feverishly just to meet the original commitments. In order to add something else to an already booked plan, there were very few options – mainly overtime and/or cutting corners – neither of which is a good thing. Oh, sure, there was always a planned “buffer” added into “the plan” to account for such contingencies, but when have you ever seen it work out the way it was intended? The skeptic in me would guess rarely, if ever… These teams would complete the opening sentence something like this, “When we get a new feature request, we add it to the plan and start mandating extensive overtime.” Pretty soon, employees are burnt out, frustrated at not having a life, and begin to actively seek employment opportunities elsewhere.
Mature agile teams complete the sentence something like this: “When we get a new feature request, we see it as an opportunity to get ahead of the competition as well as make our customers happy and successful.” Typically new requests surface because of changes in the competitive landscape, changing customer needs and expectations, new market opportunities, or even some combination of these and other factors. And when these new requests arrive at the door of an agile team, there’s no drama at all, no worries about overtime, no cutting corners, no consternation about not meeting “the plan,” etc. Agile teams have great flexibility to handle new requests because they are always working on just one thing at a time – not numerous features in parallel. Waterfall teams typically worked on lots of features in parallel since all the features were committed up-front and since teams typically had to show progress on all the committed features from the very beginning of the project. Thus, if a new request arrived, there was no flexibility to drop a less important feature from the plan since everything was already underway.
Agile teams, however, work on one thing at a time. And the way they do this is by having the features rank-ordered from the most important to the least important. If a new request arrives, the team finishes working on the feature currently in progress – it then has complete freedom to begin working on the brand new request. If a particular ship-date has to be met, then the team drops the least important feature from the list and replaces it with the new request. This way, the team is always working on the most important feature and has incredible flexibility to handle changing circumstances. After all, conformance to “the plan” is not the right measure of success – meeting customer needs and adapting to changing market conditions is a much better measure of success.
To summarize, working on one thing at a time (always the highest-ranked, most important item) and getting it done before starting on something new is the most efficient and effective way to work. It also allows for the greatest flexibility – especially in markets where changes can occur lightning fast.
One thing at a time, one thing at a time, one thing at a time...
ScottWill | Tags: project-tracking, benefit, efficiency, flexibility, agile, project-management, stakeholders, customer-interaction
You might recall that I ended the last blog post with the following:
One thing at a time, one thing at a time, one thing at a time…
Why are we so convinced that this is the best way to work? Hopefully my previous post made it clear that the flexibility teams have to respond to changing circumstances by working on one thing at a time is an excellent benefit. There are additional, significant benefits to working on one thing at a time – two of which I explore in this post.
How do you know that what you’re creating will do well in the marketplace, meet customers’ needs, and generally provide high customer satisfaction? In the old, waterfall days we used to have “beta programs” where customers would be granted access to some pre-release version of the software that they could use in-house and then provide feedback. The biggest problems with beta programs revolve around the lateness of the feedback received from customers. Typically, if customers didn’t like something, or wanted something different, it was too late to make any major changes – the team was too busy fixing the backlog of defects, and the committed ship-date was often very close when the beta program started. Customers were often told, “We’ll get to that in the next release.”
However, by doing one thing at a time, your team has a much greater opportunity to understand, with a fairly high degree of certainty, that what you’re creating will meet customers’ needs. And the reason this is so is that you can involve your customers in your project throughout the entire lifecycle. When the team completes a small feature, or completes even a small part of a larger feature, that feature can be demo’d to customers. It’s common for agile teams to be demo-ing some completed functionality to customers at the end of their very first iteration. And customers love it! It gives the customers the ability to “put their fingerprints” on the product – they get to see what’s being built, provide immediate feedback, and then see the development organization respond quickly to that feedback. In other words, customers are able to provide real-time, on-going feedback so that when the product finally does ship, customers will be very comfortable in knowing that the product provides what is actually needed.
One other benefit is that customers can actually request that the product be made available earlier than planned. Say, for example, a team plans on three major features for its next release. By working on one feature at a time, it’s quite possible that customers could ask that the product be released as soon as the first two features are complete – or even just the first one! If the feature is needed, and customers can see how they can immediately make use of it, and the team has actually completed the feature, then releasing just the very first feature (instead of waiting until all three planned features are complete) may make the most business sense. But the ability to even have such a business discussion is only possible if one feature at a time is being worked on. If all three features are started in parallel, then the team has created inflexibility since nothing can ship until all three features are completed (yes, dark launches address this problem, but the additional overhead of ensuring what’s “dark” doesn’t interfere with what’s been completed must be considered).
So, to sum up, how do you know? You know because your customers are providing you real-time, on-going feedback throughout the project to help ensure that what you deliver is what they need. It’s a win-win situation for everyone!
LeslieEkas
Teams sometimes struggle to write down the first story in user story writing sessions, even if it is not the first time the team has written stories together. Brainstorming often works to get the team headed in the right direction.
I have run many brainstorming sessions before starting to write stories. Along the way I have come up with a few rules that help make the process successful.
Rule 1: Brainstorm. What I mean by this is to allow the session to be a “storm”. Just write down everything you think of without filtering the relevance. By working this way you may discover what does and what does NOT matter. It will help you discover the limits of the stories that you need to write. It will allow you the opportunity to think “out of the box”.
Rule 2: Brainstorm with your whole team and your stakeholders. Writing stories without stakeholders is doable, but I have seen far better results when stakeholders help with the process. Case in point: I was once in a brainstorming and story writing session with a stakeholder. The team started by writing stories assuming they knew what was required. When I asked the stakeholder what problem he was trying to solve, we discovered through the ensuing conversation that the software already provided a solution. Frankly, it was the best user story session I have ever attended!
Rule 3: Take notes. I know we said this in our book and we may have said it in our blog but it is worth repeating. Take notes while you write stories and take notes during the brainstorming session. In fact it does not hurt to have someone take notes verbatim. How many times does someone come up with the perfect phrase but cannot repeat what they just said!?
Rule 4: Avoid getting into the “how” you are going to do it. Avoiding the “how” in user stories may sound like a broken record, so I probably do not need to elaborate further. Sometimes however it may be necessary – but proceed with caution.
Rule 5: Limit your time. Brainstorming can be very useful but don't overdo it. I suggest 30-90 minutes, 2 hours tops if the scope is large. Once you have an idea of what your minimal viable product or in this case, solution is, start by writing a story that takes the first step to achieve that.
Try brainstorming if you are off to a slow start writing stories or are just plain stuck. If it works, use it. If it does not work, skip it.
Extra Credit: Make your first few stories vertical stories. Vertical stories are harder to write because they force teams to think across the architecture and let go of the concern that the story is not shippable. But vertical stories provide so much value that it is worth the effort to think vertically.
LeslieEkas | Tags: learn-fast, agile-feedback-loops, fail-fast, feedback-loops, learn-do-adjust, daily-standup, agile
Short feedback loops are one of the keys to the success of agile because they enable teams to learn fast and adjust. The technique shows up in numerous practices used by agile teams, including the daily standup, sprint demos, code check-in, build and validation cycles, pair programming, and continuous delivery. One of the benefits that convinced me to try agile was its claim to deliver greater value with higher quality. After I had experimented with agile for a while, I started to understand just how critical short feedback loops are to achieving these goals.
Short feedback loops enable you to learn fast so that you can adjust while the costs are low. It is well understood in software development that the cost to fix a defect increases, in many cases dramatically, the longer you wait to fix it. Among the reasons: more code may have been added, making the defect more complex to fix; the new code requires additional testing and fixing; and by the time you get to the fix, you may have lost familiarity with the code in question. To combat this, it is critical to find defects soon after the code is checked in. One short feedback loop here is to automatically run the code through a series of progressive test gates. Additionally, if the code is checked in frequently (another short feedback loop), developers get quick feedback on quality.
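As a rough illustration of the progressive-gate idea (the gate names and the pass/fail stubs below are hypothetical, not a real CI tool's API), the cheapest gates run first and the pipeline stops at the first failure, so feedback arrives as quickly as possible:

```python
# Sketch: progressive test gates run after each check-in.
# Each gate is a (name, suite) pair; suites here are illustrative stubs.
from typing import Callable, List, Tuple

def run_gates(gates: List[Tuple[str, Callable[[], bool]]]) -> str:
    """Run gates cheapest-first; stop at the first failure so the
    developer who checked in gets feedback right away."""
    for name, suite in gates:
        if not suite():
            return f"FAILED at gate: {name}"
    return "all gates passed"

# Example: unit tests run first (seconds), system tests last (minutes).
gates = [
    ("unit", lambda: True),
    ("integration", lambda: False),   # simulate an integration failure
    ("system", lambda: True),
]
print(run_gates(gates))
```

The ordering is the point: a failure in the fast, early gate saves the cost of running the slow, late ones, which is exactly the short-feedback-loop principle.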
Another manifestation of short feedback loops is the daily standup, which enables teams to identify critical blockers and get help fixing them before the next meeting. Someone has a problem; someone jumps in to help fix it. Shorten the time to fix a problem; reduce the cost.
To ensure that agile teams are meeting the needs of their customers, they demo their new work at the end of each sprint to get timely feedback. This version of a short feedback loop is fairly well known to agile teams, but it may be surprising how few teams actually do these demos with their end users. The demos give product users ongoing updates on how the new code is progressing and the opportunity to provide feedback. This kind of feedback is like bug fixing: the sooner it is applied, the more cost effective it is. Going a step further, releasing functionality frequently enables users to get traction and provide feedback before the functionality is complete. Customers have to be willing to experiment and respond, but the win for them is an active voice in shaping the value they require.
As teams continue their agile adoption, it is useful to remember why we adopt various techniques to make sure that we get the value promised. To deliver higher value with better quality we use short feedback loops so that we learn fast and adjust.
ScottWill | Tags: distractions, focus, interruptions, flow, task-switching, mulit-tasking, agile, time
Just like the drip, drip, drip of a leaky faucet, we lose productive time by being regularly distracted. Some distractions can't be eliminated, but many can... I recently wrote an article for InformIT that was adapted from one of the chapters in the book that Leslie and I co-authored entitled Being Agile. The article discusses a pervasive form of regular distractions that many people have taken for granted nowadays. I thought I'd pass along a link here:
I hope you enjoy the article and that it provides some food for thought... As always, comments and questions are welcome. Thanks!
What Customer? What Do Research Scientists, Bell Peppers, and Software Engineers Have to Do With Each Other?
ScottWill | Tags: devops, stakeholders, customer-interaction, agile-feedback-loops, continuous-deployment, agile, continuous-delivery, customer-value, science, customer
I recently read a fascinating article in a farm journal that made me think about software engineering. You are likely thinking that there’s not much that farming and software engineering have in common and, for the most part, you’re right. However, in this particular instance, the story relates directly to typical practices in the software industry today.
The article told the story of a group of research scientists who had spent years working with bell peppers. The scientists embarked on a program to cultivate peppers that were more disease resistant, could flourish under sub-optimal conditions such as poor soil, limited water, and lack of full sun, and would also produce more peppers than the original variety.
After quite a bit of time, effort, and funding they managed to achieve some moderate success. Then someone suggested that they take some of their latest crop of peppers to a local restaurant and get some input from one of the chefs. While the article didn’t go into specifics about the chef’s reaction, it was clear that the reaction was akin to throwing the pepper in the garbage and suggesting that the scientists could have spent their time better elsewhere. The one thing that mattered most to the chef – the taste of the pepper itself – had not even been considered as one of the criteria to be taken into account by the research scientists. The scientists got caught up in their own little world of science and totally forgot that peppers were food and not specimens.
This immediately reminded me of the software industry and how so many teams have forgotten why they’re creating software. It’s not to simply write code, or execute test cases, or try out some new tool, or deploy a capability to a website – the software should ultimately meet customers’ needs. And the best way to do that is to work directly with customers as the capabilities are being created to ensure what’s being developed meets their needs.
Fortunately, the move to Agile and DevOps is making it much easier to work with customers due to the practices of continuous development and continuous deployment, as well as an intense focus on customer engagement and the monitoring of customer usage of the offerings.
The moral of the story…? Don't forget the customer! :)
ScottWill | Tags: agile, testing, defects, continuous-improvement, eliminate-waste, quality, metrics, lean
One of the things that has been extremely popular in software organizations for a long time is to count the number of defects that have been found via testing. I propose that teams no longer do this for two reasons.
First, one line of thinking behind counting defects is that it is somehow a measure of quality. If we were running an assembly line in a manufacturing plant, and the assembly line didn't change from one week to the next, but the number of defects increased, we would know there was a quality problem somewhere; something on the assembly line was likely broken. Despite some similarities, software development is not an assembly line. Unlike an assembly line, every line of code that is written and tested has never been written or tested before, so defect counts simply don't correlate with quality. If a team found 25 defects last week and 50 this week, what does that tell us? Is the code quality really getting worse, or did the team just do a lot more testing? Did the tests executed reflect real-world usage, or were they all edge cases? Did the team deploy the code into a new environment that had never been used before? I'm sure you can come up with many more scenarios… The possible reasons for the higher number of defects this week are almost boundless, and the time spent trying to determine why the variation occurred would be better used to work more with customers to understand their usage patterns and particular needs, create more needed functionality, improve processes further, cross-train team members, adopt a new tool, and so on. All of which leads to my second point…
Agile software development has its foundations in Lean Thinking, and one of the Lean principles is to eliminate waste. Does counting defects, and spending time trying to figure out what causes variations in the numbers of defects from one time period to the next, contribute directly to the success of the project? If not, then the time spent doing so should be viewed as a waste – time that could be better spent doing more important work.
In conclusion I’d like to leave you with two thoughts regarding defects: first, when defects are found, they should be fixed immediately – period. Don’t allow a backlog of defects to accrue. If it does make sense to do some root-cause analysis to determine why a particular defect occurred, then that’s fine – do so when it makes sense to do so, but it likely does NOT make sense to do so for the vast majority of the defects found. And the second point is that a mature, Agile team should be able to have “difficult” conversations when needed: “Hey Scott, I’ve noticed that I’m finding a lot of defects in your code this week – is there anything that’s distracting you, or anything I can help you with?” Instead of taking umbrage at such comments, I should be thankful that my team is willing to raise the issue and have the discussion.
So, the next time you’re asked to track the number of defects found, ask “Why…?” The answer will likely shed light on ways to eliminate waste and help overcome the “That’s the way we’ve always done it!” mentality, as well as foster a relentless, continuous improvement mindset.
As always, thoughts, comments, and questions are most welcome! Thank you!
ScottWill | Tags: continuous-improvement, software, waterfall, development, gruc, webcast, agile
Leslie and I will be hosting a webcast entitled, How Do Agile Teams Keep From "Waterfalling Backward?" as part of the Global Rational User Community (GRUC) webcast series.
"Waterfalling backward" describes the situation where a team that has started down the Agile path reverts to waterfall ways of thinking when difficult situations arise. We'll cover some examples of what "waterfalling backward" looks like as well as several easy-to-adopt techniques that help teams re-align with Agile if they've started "waterfalling backward."
Note that three attendees will win a copy of our book, Being Agile!
Register here: https://attendee.gotowebinar.com/register/113940558500710914
We are looking forward to the session and hope you can join us!
One of the reasons I found agile compelling when I first heard about it was the notion that you are "DONE" with a feature only when your customers are successfully using it in production and are pleased with the capability. I have coached teams to establish success criteria for user stories, but I have not required that those criteria include validation of success in production. That feels like an oversight now. I am discovering scenarios where the desired feature does not deliver the success promised, and there is no automatic mechanism to address the shortfall.
Now that I work in eCommerce, the business pays keen attention to data that validates that the software is successful. Even with this attention our technology teams often lose the connection between getting capabilities into production and verifying the value they do or do not produce.
I now recommend that teams take a new look at success criteria and include a measure of post-production success. Maybe teams add a "post-release validation" criterion to their DONE list. The first challenge is for teams to identify useful data that enables them to track and validate success. The second challenge is to figure out when to assess the data and how to respond to it.
Solving the first problem requires that the software, or a customer feedback mechanism, provide a way to measure success. The process to collect data is usually hardwired into eCommerce platforms, so measurement should not be hard. The software could track how often the capability is used, or how well it works by tracking errors as a percentage of total uses.
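A minimal sketch of what that in-code instrumentation could look like (the class and the feature name are invented for illustration; a real platform would persist these counters rather than hold them in memory):

```python
# Sketch: count uses and errors for a feature so the team can report
# errors as a percentage of total uses. Names are hypothetical.
class FeatureMetrics:
    def __init__(self, name):
        self.name = name
        self.uses = 0
        self.errors = 0

    def record_use(self, succeeded=True):
        """Call once per use of the feature, noting success or failure."""
        self.uses += 1
        if not succeeded:
            self.errors += 1

    def error_rate(self):
        """Errors as a percentage of total uses (0.0 if never used)."""
        return 100.0 * self.errors / self.uses if self.uses else 0.0

metrics = FeatureMetrics("one-click-checkout")
for ok in [True, True, True, False]:
    metrics.record_use(succeeded=ok)
print(metrics.error_rate())  # 3 successes, 1 error out of 4 uses -> 25.0
```

Even this crude counter answers both questions at once: is the capability being used at all, and how often does it fail when it is used.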
Success may be harder to track when no metrics are accessible to the team. In these scenarios teams can measure the speed of uptake of a new release, leverage customer surveys, or hold post-release customer feedback sessions. None of these options provides the validation that hard data offers, but they would provide the team with a tool to ensure post-release success.
It is important to re-establish our focus on post-release success validation. I recommend that each story include success criteria not only for release but also for post-release.
LeslieEkas | Tags: done-criteria, agile-team, are-we-done-yet, monitor-success, log-success, log-failure
Here is an update to my Part 1 blog posting on whether or not teams are really DONE when they release software. Just to recap: I worry that agile teams do not have a reliable mechanism to ensure that new features or capabilities released into production succeed as planned. Without a verification tool in place, teams may not be able to confirm success. And worse, once teams get focused on new work, they may not track what was released; "out of sight, out of mind." Of course most teams respond to issues that get reported, but if nothing is reported, does that mean success or not?
A team experimenting with this decided to add a verification safeguard to their DONE criteria. Each story has to have a “plan to monitor feature success” in production. For this particular team, monitoring typically requires that the software logs success and failure data and the team has a way to track what is logged over time. Their DONE criteria ensures that they add logging abilities when they design the feature and they test that the logged data delivers the information required. When the new feature is in production, the team uses a monitoring tool to report on usage and failures. Once an average logging pattern is obtained, they add alerts to notify if there are problems.
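As a sketch of that "average pattern plus alert" idea (the daily failure rates and the three-standard-deviation threshold below are assumptions for illustration, not the team's actual numbers):

```python
# Sketch: once a baseline failure rate is established from logged data,
# alert when a new period deviates sharply from the historical pattern.
import statistics

def should_alert(history, latest, tolerance=3.0):
    """Alert if the latest failure rate is more than `tolerance`
    standard deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # guard against zero spread
    return (latest - mean) / stdev > tolerance

# Daily failure rates (%) logged since release, then a suspicious spike:
baseline = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9]
print(should_alert(baseline, 1.1))  # within the normal range -> False
print(should_alert(baseline, 5.0))  # well above the baseline -> True
```

Real monitoring tools offer far richer alerting than this, but the principle is the same: the logged data defines "normal," and the alert fires on deviation so the team hears about problems before their users do.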
Not every team will have this type of logging and monitoring capability in place, but adding this DONE criterion can serve as a driver to make that happen. It took the team mentioned above multiple attempts to get the right tools in place, but the effort was very beneficial. They have already used the new methodology to find an issue (and fix it) before their users did. And by looking at success trends, they have been able to verify that uptake of their features was as planned.
If your team does not have a strategy to validate success of new features or capabilities, I recommend that you add this initiative to your backlog. You may have to start from the beginning – get the right design strategy in your code and then add the tools to monitor. Use this new DONE criteria as a mechanism to get started. A story is DONE when there is a defined plan for monitoring the solution in production. Check!
ScottWill | Tags: velocity, user-stories, story-points, agile, userstories, burndown
I have been a strong advocate over the years of using Story Points to track team velocity. In spite of many of the self-inflicted problems teams face when using Story Points (e.g., trying to initially assign some measure of time to a Story Point, or trying to normalize Story Points across teams, or confusing Story Points and task-hours, etc.), when used correctly, Story Points (and the velocity calculations they enable) make project tracking and projection incredibly easy and relatively accurate as well. (As a side note, check out the following article by Jeff Sutherland: link.)
What I’d like to cover in this posting is that using Story Points is not the only way to track a team’s velocity. For teams that have been writing User Stories for a while, and who have gotten good at breaking their work down into small enough chunks such that one or more stories can be completed (“Done!”) in an iteration, I would like to recommend that they forego the effort of sizing User Stories with Story Points. They’ve already shown that they know how to start with an Epic, and break an Epic down into User Stories that can be completed in an iteration. Thus, they’ve demonstrated that they have a relative amount of uniformity in the way that they come up with their backlog of User Stories – their stories are all in the same range regarding the amount of time and effort required to complete each one of them.
With that in mind, such teams could simply measure velocity based on completed User Stories. The principle is the same as with Story Points, but mature teams can forego the effort required to size each User Story with Story Points (e.g., through Planning Poker).
Let me give a couple of examples, the first one using Story Points. Team A has 50 Story Points' worth of User Stories on its backlog, and they've demonstrated over time that they can generally complete about 10 Story Points' worth of stories in an iteration (velocity = 10). To determine how long it will take to complete the remaining stories, divide 50 by 10 to get an estimate of around 5 iterations. If a customer asks how long before the team can get to a feature the customer is interested in, and the User Story associated with that feature sits about 30 points down the rank-ordered backlog, then the team can tell the customer it will be about 3 iterations (30 divided by 10).
In the second example, Team B has 10 stories on its backlog, and they've demonstrated that they typically complete about 2 User Stories every iteration (velocity = 2). To determine how long it will take to complete the remaining stories, divide 10 by 2 to get an estimate of 5 iterations. If a customer asks how long before the team can get to a feature the customer is interested in, and the User Story associated with that feature sits about 6 stories down the rank-ordered backlog, then the team can tell the customer it will be about 3 iterations (6 divided by 2).
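Both projections reduce to the same arithmetic, whether the backlog is measured in Story Points or in completed stories; a quick sketch using the numbers from the two examples above:

```python
# Sketch: velocity-based projection. Works for Story Points (Team A)
# or completed stories per iteration (Team B); the math is identical.
import math

def iterations_remaining(backlog_size, velocity):
    """Estimate iterations to finish: backlog divided by velocity,
    rounded up since partial iterations still take a whole iteration."""
    return math.ceil(backlog_size / velocity)

# Team A: 50 points of backlog, velocity of 10 points per iteration.
print(iterations_remaining(50, 10))  # -> 5
# Team A's customer: the wanted feature sits ~30 points down the backlog.
print(iterations_remaining(30, 10))  # -> 3
# Team B: 10 stories on the backlog, velocity of 2 stories per iteration.
print(iterations_remaining(10, 2))   # -> 5
```

Rounding up matters in practice: a backlog of 11 stories at a velocity of 2 projects to 6 iterations, not 5.5.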
If you’re part of a fairly experienced team, and if your team already has an established velocity, consider trying this approach in lieu of assigning Story Points – it can help save some time by not going through the sizing exercises. However, if your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.
As always, please feel free to comment, provide suggestions and recommendations, or even tell us about your experiences using Story Points and/or other ways of tracking velocity. Thanks!
ScottWill Tags: monitor analyzing recovery operations analysis devops failure metrics monitoring mttf logging measure mttr
A couple of years ago a well-known franchise experienced a significant computer outage that affected hundreds of their stores throughout the U.S. The impact of the outage lasted for nearly a day, and the problem made the news headlines all over the place. Obviously these are the kinds of problems that businesses don’t want to have happen to them. Loss of sales, unhappy customers, and bad press don’t make for a good day…
With that in mind, let’s assume that it had been years since this franchise had experienced any kind of outage. Do you think customers (as well as shareholders) would have been happy to hear the company’s executives say something like, “But it’s been years since we’ve had any kind of outage!” Probably not… Was anyone focused on how long it had been since the last outage, or were folks more interested in the “here and now”? Given the fact that the outage made the headlines, it’s obvious that the lengthy delay in getting back to normal was the primary concern.
Now, let’s look at a hypothetical situation where a company experiences several computer outages every day, but the length of each outage is only one second (or less). Would anyone even notice? Probably not… Because the recovery time was so fast, the only folks who would likely even be aware of any of the outages would be the operations folks, and only then because they were analyzing the logs from the monitoring tools in place.
Hopefully you can see where I’m going with this – a company that experiences a failure only once every couple of years (i.e., having a very good Mean Time to Failure), but that takes a day or more to recover from an outage (i.e., having a very poor Mean Time to Recovery), is likely to suffer more negative results than a company that has a very poor MTTF but an extremely good MTTR.
So, just to be clear, I’m not suggesting that MTTF should be ignored – referring to my second example, I’d really like to know why outages are occurring so frequently, and then work to reduce the number of outages (even if they do only last a second or so). What I am suggesting is that MTTR should be one of your “front and center” metrics. The faster you can recover from an outage, the less noticeable the outage will be and, therefore, the more negligible the impact.
If you haven’t started measuring MTTR for your offering, please allow me to suggest that you need to begin doing so ASAP. And, once you have MTTR measurements in place, then begin working to improve them (no matter how good they may be right now).
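To make “measuring MTTR” concrete, here is a minimal Python sketch that derives MTTR and MTTF from a list of outage start/end timestamps. The records and their format are hypothetical – they stand in for whatever your monitoring tools actually log:

```python
from datetime import datetime, timedelta

# Hypothetical outage records pulled from monitoring logs: (start, end)
outages = [
    (datetime(2023, 3, 1, 9, 0, 0),  datetime(2023, 3, 1, 9, 0, 45)),
    (datetime(2023, 3, 4, 14, 30),   datetime(2023, 3, 4, 14, 32)),
    (datetime(2023, 3, 9, 2, 15, 0), datetime(2023, 3, 9, 2, 15, 30)),
]

def mttr(outages):
    """Mean Time to Recovery: the average outage duration."""
    total = sum((end - start for start, end in outages), timedelta())
    return total / len(outages)

def mttf(outages):
    """Mean Time to Failure: the average uptime between the end of
    one outage and the start of the next."""
    gaps = [outages[i + 1][0] - outages[i][1] for i in range(len(outages) - 1)]
    return sum(gaps, timedelta()) / len(gaps)

print("MTTR:", mttr(outages))  # about a minute to recover
print("MTTF:", mttf(outages))  # days between failures
```

With numbers like these, the “several short outages” scenario from the second example shows up clearly: the MTTF is measured in days while the MTTR is measured in seconds.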
ScottWill Tags: improvement agile risk retrospective reflection small-batches risk-reduction whole-teams lean
A lot has been written about how many of the agile practices contribute to helping teams and organizations become more efficient. And this is no surprise since agile is built on top of many of the Lean principles that helped manufacturing become much more efficient.
In addition to the tremendous impact adopting agile practices can have on efficiency, another benefit (and one that doesn’t get quite as much “press”) is the reduction of risk that accompanies agile.
First I’d like to address how risk is significantly reduced by adopting the agile practice of “small batches.” When teams move to coding/testing/deploying small batches of functionality on a regular basis, they typically find defects much earlier than they would have had they still been using large batches. Many of you will likely recall the old waterfall days when tens of thousands of lines of code were written before the first test against the code was ever executed. Back then, when testing began, typically large numbers of defects were found in a short period of time and it would take a very long time to get the list of defects fixed. With the adoption of small batches, defects are often found within minutes of the code being checked in and the impact of any defect that is found, as well as the impact of the time required to fix the defect, is minimal since the code is fresh in everyone’s mind, and more code hasn’t been written that gets in the way of actually fixing the defect. The risk reduction is significant on just this point alone.
Another risk-reduction benefit of adopting small batches is the ability to get faster feedback from customers due to the fact that the new functionality is made available much faster than it would be if lots of different functions were packaged together into a big batch and released/deployed only once in a while. If a team puts out a small improvement, or a new feature, and gets feedback from customers that it doesn’t meet their needs, then the team has gained some valuable insights very quickly and their current investment is small. With large batches, if customers don’t like something, the team won’t know about it for quite some time because of the length of time from when the feature was built to when it was released/deployed along with all the other features in the big batch. Big batches are very risky due to the delay in getting feedback, as well as the increased costs of making changes (just think of all the additional testing that would have to take place if a change needed to be made and the batch contained ten features vs. just one feature).
The last risk-reduction benefit of adopting small batches concerns the pressure to add features. If only large batches of features are released infrequently, then typically product management wants to cram as many new features as possible into the current plan because, if the additional desired features don’t make it into the current plan, then it could be a very long time until the next plan is completed and the additional desired features are finally made available. Adopting small batches allows for continual re-ranking of the backlog of requested features based on customer feedback, new technologies, and market conditions, thus significantly reducing the pressure to cram tons and tons of stuff into a “big batch plan.”
Another agile practice that results in a significant reduction in risk is the adoption of “whole teams.” By “whole teams” I’m referring to teams that are comprised of cross-discipline and cross-functional responsibilities. While this is certainly nothing new in agile, understanding how adopting a whole team approach can reduce risk is something that doesn’t get discussed much. For example, when team members used to go off into their own little silos for months on end, and rarely interact with other team members, it was very difficult for anyone on the team to have any insight into what was being done by anyone else. And if one of the team members suddenly had to be away for a period of time (e.g., due to an illness or a family emergency), it was almost impossible for anyone else on the team to immediately pick up that person’s work.
With whole teams, not only is there regular communication and sharing of knowledge, but if a situation comes up where someone does have to be away for a lengthy period of time, the impact of the absence will be far less than it would be otherwise because others on the team will have a much better understanding of the work that person was doing and will be able to pick up the work with much less difficulty – thus reducing risk.
In conclusion, I would urge teams to include a focus on risk reduction as part of their regular reflections – it’s part of agile’s focus on continuous improvement.
Please feel free to comment on other areas that you’ve seen where adopting any of the agile practices has led to a significant reduction in risk. Thank you!
First off, let me start by saying that I am a big fan of user stories. In working with teams over the years I have found that user stories are *very* helpful in ensuring that product managers, engineers, executives, and others, all keep the customer in focus when discussing new features and capabilities. It’s just too easy for folks to embellish a feature with their own ideas about what the feature should provide when they lose sight of the customer. User stories also help by focusing everyone (including customers) on the absolute minimum that is necessary in order to meet the stated needs (thus saving everyone time and effort, and allowing more time to focus on additional items instead of trying to “boil the ocean” for just one feature).
However, I believe that there are times when creating user stories to delineate and track work is not needed. Let me provide one quick example. Several years ago I was working with a team that had responsibility for a code compiler that was built to run on one platform. As new operating systems came into existence, the team was asked if they could create a version of the compiler that would run on one of the newer operating systems. Since this team was just beginning their agile adoption journey, they thought they should create a backlog of user stories for their work and estimate the stories with story points, and so they contacted me for help. Since there were no new features being built – it was just that the compiler was, in essence, being ported to another platform – I suggested that they skip creating stories since they already had a list of compiler features from their current offering and knew that the same features would need to be part of this newer version. I told them to simply use their existing list and check off the features as they were completed. This was a bit of a surprise to the team since they thought that they had to use user stories in order to “be agile.” I told them that since they already understood their customers, and completely understood what was required in the way of features, that the effort to create and size user stories would be redundant.
That being said, I did work with them to ensure that they still adopted other agile practices. In this case, they were to work on only one feature at a time, always make sure that each feature was as complete as possible before moving on to the next feature on the list, that the new compiler must actually run on the new operating system at all times as the features were being completed, and that they did not build up any technical debt (for example, by delaying fixing defects, or performance issues, etc.).
My recommendation is that if you already have a well-defined and well-understood list of items that need to be completed – for example, when repeating a list of items that have been done before – then creating a backlog of user stories is likely not needed. In these situations, if you do choose to omit the creation of user stories, please don’t omit the other agile practices such as: rank-ordering the list so that the most important items are at the top; working on – and completing – one thing at a time before moving on to the next item on the rank-ordered list; ensuring that the environment is always operational; and not piling up technical debt.
As always, please feel free to ask questions and share your experiences. We look forward to hearing from you!
Agile teams can move the needle on large software projects once they are cross-skilled and can pull from the same backlog. I have seen this several times and the results are amazing. The overall throughput from backlog to production increases because the value provided by each team member increases and high-priority backlog items don’t get logjammed.
To get the higher throughput that agile promises, teams are encouraged to become cross-skilled so they can do any kind of work in any part of the code base. This enables them to work together better as a team and tackle the most pressing needs that arise versus waiting for a skilled team to be available. Once teams are cross-skilled, they increase their throughput because they are not limited in the backlog items they can do. And it works. But be warned that the team will actually slow down at first because they need time to learn. They may slow down for several months depending on the size of the code base and the team’s ability to absorb new domain knowledge and skills. But they will start to speed up.
It can be a tough sell to move to this type of team organization from a setup where teams had backlogs specific to one part of the code base. That is because stakeholders often have what they consider to be “their own team”. That is, teams often work within a particular part of a larger code base and consequently they have their own backlog. Product owners understandably love this because they can get “their” critical work done. But at the same time, they want “more” work to be completed.
Moving teams to be cross-skilled may be a hard sell to the teams and the stakeholders because it is a very new way to work for both parties. And since the transformation takes time, it will be important to validate the improvements once teams actually are able to accomplish more. Once they are cross-skilled, multiple teams can work from a single backlog because any team can take the next item in the backlog. This enables them to move work around as teams are available. This is very good news. Order the backlog by priority and have highly skilled teams work their way through the backlog.
But… that means that “my” item has to compete with other items in the backlog. That did not happen in the focused team style. Once teams work from a single backlog, stakeholders have to work together to prioritize their items into a single backlog. And they will find that some of their items do not find their way to the top. What does that say about those items? Maybe teams were doing the work “they could do” but it may not have been the most valuable work across the code base.
With cross-skilled teams, you are getting “your stuff” – and fast. And it is “the stuff” that stakeholders agree is the most valuable. Once you get there, you may have to find a way to show your stakeholders that they are getting a better result.
LeslieEkas Tags: agile-estimates agile-productivity throughput agile-sizing agile-metrics agile-swag
Agile offers so many benefits, but one of the most irresistible is the promise of higher productivity. Once you become more productive, though, how do you know that you are, in fact, more productive? Most agile enthusiasts agree that high-performing agile teams move faster than traditional technology teams. There are many aspects of agile that provide improved performance, including the practice of maintaining working software, applying queuing theory, eliminating waste and multi-tasking, whole-team practices, and more. But given all these changes, how do you verify that it really works? I have been asked to “prove” that a team is more productive, and our teams found a way.
During the intake process, each team estimates the size of the incoming request. I will call the request an “intake item” to avoid confusing the meaning of a story or an epic. Intake items can range dramatically from something that takes a team less than a sprint to something that takes a team many, many sprints. Our teams call the estimates, “SWAGs” to reinforce the idea that the estimate is an educated guess but still a guess.
I have seen teams that SWAG by the dollar cost to do the work and teams that SWAG by the approximate length of time to complete the work. Both methods seem to work, but the important point is that the SWAG ranges are broad enough to delineate typical work and reduce estimation error. For instance, one team could delineate SWAG ranges as follows:
Small: less than $100,000
Medium: $100,000 - $500,000
Large: $500,000 - $1,000,000
XLarge: greater than $1,000,000
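Those ranges amount to a trivial lookup. The sketch below mirrors the hypothetical dollar figures above; the boundary handling (strictly less than the upper bound) is my own assumption:

```python
# SWAG buckets: (label, exclusive upper bound in dollars)
SWAG_BUCKETS = [
    ("Small", 100_000),
    ("Medium", 500_000),
    ("Large", 1_000_000),
]

def swag_size(estimated_cost):
    """Map a rough dollar estimate onto a SWAG bucket."""
    for label, upper_bound in SWAG_BUCKETS:
        if estimated_cost < upper_bound:
            return label
    return "XLarge"

print(swag_size(75_000))     # Small
print(swag_size(250_000))    # Medium
print(swag_size(2_000_000))  # XLarge
```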
Similar rules that apply to user story writing also apply to making SWAGs: representatives of the team that will do the work estimate the SWAGs, the sizing is relative to other SWAG sizes, and once the SWAG is in place, the team does not change it. After the intake item is complete and rolled out into production, the item is marked as complete. The following shows a team that has used this mechanism for over a year.
The graph shows how many items the team completed each quarter standardized on time. It is quickly apparent that the team accomplished more each quarter. In other words, if the team size remains the same, then their productivity increased over time. It also shows that this team got better at doing really large items of work.
The next graph is another view of the same information but standardized on the number of items. That is, all items are the same size regardless of their SWAG size. This demonstrates that the majority of items completed are small.
The metrics show the results for one software product regardless of the size of the product or the size of the team building the product. We have tried this same approach with products developed by software teams that consist of one sprint team with fewer than 10 people. And we have tried the same approach with a larger team of over 150 people broken into many sprint teams. In all cases, we get consistent and believable results.
The value of this mechanism is that we show team productivity over time. Lines could be added to the chart to identify individual intake items. To add even more clarity we can associate real product deliverables to each of the items in the chart. This has proved to be a very simple way to validate that our migration to agile is having a positive impact. We typically combine this view with quality metrics to ensure that we maintain high quality as we increase output.
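To make the bookkeeping concrete, here is a minimal sketch of how completed intake items could be tallied to produce the two views described above. The item records and quarter labels are purely illustrative – they are not any team’s actual data:

```python
from collections import Counter

# Hypothetical completed intake items: (quarter completed, SWAG size)
completed = [
    ("2023Q1", "Small"), ("2023Q1", "Small"), ("2023Q1", "Medium"),
    ("2023Q2", "Small"), ("2023Q2", "Small"), ("2023Q2", "Medium"),
    ("2023Q2", "Large"),
]

# View 1: throughput per quarter (items completed, regardless of size)
per_quarter = Counter(quarter for quarter, _ in completed)

# View 2: the mix of SWAG sizes within each quarter
per_quarter_and_size = Counter(completed)

for quarter in sorted(per_quarter):
    print(quarter, per_quarter[quarter], "items completed")
```

If the team size stays constant and the per-quarter totals rise, that is the productivity signal described above; the size breakdown then shows whether the team is also getting better at completing large items.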
ScottWill Tags: convergence story-points planning-poker velocity user-stories agile agile-estimates whole-teams
In one of my blog postings just over a year ago I suggested that mature teams may not need to use Story Points. However, at the end of that post, I wrote:
[If] your team is fairly new to writing and sizing User Stories, or if your team doesn’t yet have an established velocity, I would recommend sticking with Story Points for now because the process of assigning Story Points (during a Planning Poker exercise, for example) is *very* helpful in aligning a team’s thinking, as well as building team synergy, as the team goes through the process of having the discussions that naturally arise when sizing stories.
(You can read the full post here: link)
What I’d like to do with this post is cover a few issues on how to make sure you’re getting the most out of using Planning Poker.
First, keep in mind that Planning Poker is meant to be a very simple and quick process to be used by a team for assigning Story Points to a given story. It’s not meant to provide absolute precision regarding the assignment of Story Points, and the assignment of Story Points itself is meant to be relative to the sizings given to other User Stories on your team’s backlog.
Next, because Planning Poker is an estimation technique, please don’t assume that everyone on the team must be 100% in agreement with the number of Story Points being assigned to a given Story. Let me explain: typically, when a team uses Planning Poker to assign Points to a User Story, there will be a smattering of “votes.” As an example, let’s say that, after the very first vote for a given Story, there are a couple of 3s, a 5, a couple of 8s, and a 13. The scrum master should ask why the folks who voted 3 thought the size was relatively small, and then ask the person who voted 13 why they thought the size was relatively big (i.e., over four times the estimated size of those who voted 3). After a brief discussion, the scrum master should ask the team to vote again. Let’s say this time there’s one 3, several 5s, and a couple of 8s. A good scrum master should say something like, “OK team, let’s go with a 5 for this story and move on to the next story on the backlog.” What the scrum master is looking for is “harmonic convergence.” Notice how, in the second vote, there were fewer 3s and the 13 went away. The bulk of the votes were gravitating towards a 5, and so you can see the team is getting closer (“converging”) on a given number. Good enough! Don’t waste time by voting again, and again, and again (ad infinitum and ad nauseam) until the team members all vote the same – it’s not worth it. Estimation is not meant to be precise, and spending more and more time trying to be precise with estimates is a waste of time.
And here’s the main point that I’d like to make – let’s say you’re one of the team members who voted an 8 for this story even though the number finally assigned was a 5 – should you be upset? Should you start an argument? Should you want to keep voting until everyone agrees with you and assigns the Story an 8? No, no, and no. Even though you may really think the story should have been sized at an 8, you can see the team was converging on a 5, so just let it go and press on. Planning poker is not a contact sport.
Finally, once teams get the hang of Planning Poker, most of them vote no more than twice on any given story, and they usually don’t spend more than just a couple of minutes per story. Here’s the high-level process:
Step 1. If you've not assigned Story Points to any User Stories on your backlog then, as a team, quickly pick out what you all think is the smallest story on the backlog and automatically assign it a 1. All other Stories on the backlog will be sized relative to this Story.
Step 2. Grab the next User Story from the backlog.
Step 3. Have a brief discussion about the Story regarding the anticipated effort relative to the first story (the story itself should be familiar to the team since, if the writing of the Stories was done correctly, the team participated together in the writing of the given Story).
Step 4. Vote.
Step 5. If there appears to be a fairly strong consensus on a given number, go with that number, and then go back to Step 2….
Step 6. Otherwise, if there’s a fair amount of disparity in the votes cast, then have another brief discussion focusing on asking those who voted with the lowest number, and those who voted with the highest number, to give some insights into their respective thinking.
Step 7. Vote again – you’ll likely see some harmonic convergence on a specific number after the second vote, and then just go with that number. Now go back to Step 2 (lather, rinse, repeat)….
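The convergence judgment in Steps 5–7 can even be expressed in a few lines of Python. The 50% majority threshold here is my own illustrative cutoff, not a rule of the technique:

```python
from collections import Counter

def converged(votes, threshold=0.5):
    """Return the most common vote and whether enough of the team
    has gravitated to it to call the round 'converged'."""
    value, count = Counter(votes).most_common(1)[0]
    return value, count / len(votes) >= threshold

# First vote: a couple of 3s, a 5, a couple of 8s, and a 13 -- scattered,
# so discuss the low and high outliers and vote again
print(converged([3, 3, 5, 8, 8, 13]))  # (3, False)

# Second vote: one 3, several 5s, a couple of 8s -- good enough, go with 5
print(converged([3, 5, 5, 5, 8, 8]))   # (5, True)
```

In practice the scrum master’s judgment, not a formula, decides when to stop – the code just makes the “harmonic convergence” idea concrete.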
In sum, Planning Poker is a great technique to use to quickly assign Story Points to your User Stories. Just be sure that the team doesn’t over-engineer its use.
As always, please feel free to ask questions and share your experiences. Leslie and I look forward to hearing from you!
ScottWill
Ever since our days in elementary school, we’ve been taught to fear failure. How many of us ever looked forward to going home on the day we received our report card and showing our parents the “F” we received for one (or more) of our classes? Hopefully none of you ever experienced that – but was it the fear of failing that made you do everything you could to ensure that you passed the classes you were taking?
Now think back to the days of the waterfall methodology in software development. Was there ever much worry amongst developers regarding failure of the project they were working on? Hardly… Most developers just did what they were told to do by the software architect, finished writing their code and threw it over the wall to the testers, and then went on to work on something else. Rarely (if ever) did they hear about how well customers liked (or didn’t like) the final offering since customers didn’t get their hands on the offering until months after the developers had finished writing the code, and there was typically a separate support team that handled any customer issues that came up after the offering was released. Additionally, most developers never interacted with customers (I even know of some organizations where developers were prohibited from talking to customers!).
Now, with the advent of Agile and DevOps, everyone is much more closely in tune with what customers think of your offerings – and this is a good thing! It clearly reflects the very first principle of the Agile Manifesto:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
The key word here is “valuable.” How do you know what you’re delivering to your customers is valuable? After doing some initial investigation of the marketplace, discussing options with current and potential customers, and doing some brainstorming within the team on creative ways to try to satisfy the needs, the best way to determine if what you’re creating is valuable is to deliver it! Put it out there – and see how your customers react. But, what if they don’t like it? Isn’t that just a failure…?
If you’ve adopted some of the main tenets of Agile, you’re likely delivering new features on a regular basis with very short intervals (vs. putting out a big bag of features once a year like in the old waterfall days). This is the “continuous delivery” that the agile principle above is getting at. Thus, even if you find out quickly that your customers don’t like something you’ve just delivered, it’s not a failure! You’ve just gotten some important feedback that you can immediately use to determine what needs to be changed, or perhaps determine that the new feature needs to be removed altogether. Being able to get quick feedback from your customers, and make quick changes to address the feedback you get (even if the feedback is unfavorable), is how you succeed.
In sum, put out small features on a regular basis. This allows you to get immediate feedback from your customers regarding the value of the features that have been delivered, and use your continuous delivery capabilities to make quick changes in response to the feedback. Like I said, even if customers don’t give you positive feedback, the feedback is still important so that you can respond quickly with better value. This is not failure – it’s the road to success! Thus the popularity of the phrase, “Fail fast, succeed sooner!”
As always, please feel free to ask questions and share your experiences. We look forward to hearing from you!