“If you’re not better this month than you were last month, don’t tell me you’re Agile!”
I forget exactly where I heard that quote, but it came from someone well known in the industry during a webcast I was listening to several years ago. It caught my attention and helped me start to focus more on why Continuous Improvement is considered a foundational principle of agile. My research led me back to the Agile Manifesto, specifically to its principles, one of which reads:
“At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” [emphasis added]
No doubt you’ve heard of, and hopefully engaged in, regular reflections (sometimes called “retrospectives”). If you haven’t done so, or have tried them and not gotten much value, then I’d like to provide a few ways to make them valuable and to help you and your organization adopt a Continuous Improvement mindset. And even if you do regularly hold reflections and find them useful, the hope is that the following adds something to your “bag of tricks” and makes them even more valuable and compelling.
First, we recommend that reflections occur at the end of every iteration. These meetings are team meetings and shouldn’t take more than an hour (a half-hour seems to be fairly typical in our circles).
Next, the meetings are not meant to be “gripe sessions.” Yes, sometimes a little bit of griping can be cathartic, but the goal of a reflection meeting is more than just walking away happy that you had the chance to vent your spleen… The goal is to figure out a way to improve between now and the next reflection meeting.
The meeting facilitator should simply go around the table and ask everyone to list the pain points they’re facing. The facilitator records every pain point brought up and, after the last person has spoken, shows the full list ranked from the pain points with the most mentions down to those with the fewest. The item at the top of the list is what the team focuses on, since that is the one having the biggest negative impact on the team.
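The tally-and-rank step is simple enough to sketch in a few lines of Python; the pain-point strings below are invented purely for illustration:

```python
from collections import Counter

# Pain points as voiced around the table (example data only).
pain_points = [
    "flaky builds", "unclear stories", "flaky builds",
    "too many meetings", "flaky builds", "unclear stories",
]

# Rank from most mentions down to fewest.
ranked = Counter(pain_points).most_common()
for pain, mentions in ranked:
    print(f"{mentions}x {pain}")

# The team works only on the top item: ranked[0][0]
```

Running this prints "flaky builds" first with three mentions, so that is the one pain point the team would brainstorm actions for in the next iteration.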
Now that the biggest pain point for the team has been identified, the team brainstorms on some specific actions to take to improve in that one area and includes those actions as part of the iteration plan for the next iteration.
At the end of the next iteration, during the reflection meeting, the team determines whether that pain point has been resolved. If not, the team brainstorms on some additional actions to take in the next iteration. The key here is to NOT start trying to solve another pain point until the first one has been solved to everyone’s satisfaction. Once the biggest pain point has been solved, only then should the team move to the next biggest pain point on the list. Lather, rinse, repeat every iteration – and soon Continuous Improvement will be the norm…
I’ll have some additional suggestions in the next post. In the meantime, please comment on your experiences with reflections, as well as on any helpful practices you’ve discovered. Thanks!
Modified on by ScottWill
Just a quick plug for the upcoming IBM Rational Innovate Software Engineering Conference next week in Orlando at the Disney Swan & Dolphin Resort. I'll be leading the Agile Track Kickoff Session on Monday, 2 June, at 11am, and following that up with a book signing from 12noon to 1pm at the book table. It promises to be a great conference again this year! Please stop by and say "Hi!" if you'll be attending!
Main conference website: http://www-01.ibm.com/software/rational/innovate/
Agile Track Kickoff Session: https://www-950.ibm.com/events/global/innovate/agenda/preview.html?sessionid=DAG-2459
Podcast (12 minutes) covering the Agile Track: http://public.dhe.ibm.com/software/info/television/swtv/Rational_Software/podcasts/rttu/gist_moore_innovate_scaleagile_enterprise-2014-0520.mp3
Thanks -- and hope to see you there!!
No doubt you’ve already heard about DevOps. What you may not be aware of is how DevOps builds on Agile principles and practices. For example, Software-as-a-Service (SaaS) projects have adopted DevOps in order to have the ability to frequently update their websites with additional features and capabilities. Some SaaS offerings are updating their production websites multiple times a day – which is a far cry from the once-a-year-release cycle that was rather typical for products built using waterfall.
You might be thinking to yourself, “Multiple times a day…? How does that happen…?” I’ll touch briefly on three essentials of Agile that contribute to this capability. The first is breaking work down into small batches and completing one small batch of work prior to moving to another. You’ll no doubt recognize this as a Lean principle derived from queuing theory. Simply stated, Lean has shown that small batches of work transitioning through a system are far more efficient than an occasional big batch of work. For our purposes, this means that having small amounts of code written, tested, and automated, where any defects have been fixed, and where any necessary user-documentation has been written, is far more efficient and productive than just having a large batch of code written that hasn’t been tested or documented.
The second point is Agile’s emphasis on “working software as the primary measure of progress” (see the 7th Principle of The Agile Manifesto). This ties in directly with the previous point – as small amounts of code are written, the goal is to get that code to production-level status as quickly as possible (“working software” that is “Done!”) so that the new code can be provided to customers with high confidence that it will work as desired, have the right level of quality, and help customers address a particular need.
The last point regards the tremendous emphasis in Agile on automation. With DevOps, the need for automation increases significantly because of the desire to deploy new features and capabilities on such a short timetable. Not only are builds and testing automated, but test environments are automatically provisioned, code that passes the automated test suite is automatically deployed into production and, if a problem occurs anywhere along the way, the process is stopped, the code is automatically rolled back, and appropriate personnel are automatically notified of the problem. The ultimate goal is to make a small code change, “push a button,” and have the new feature or capability automatically rolled out without any further human intervention.
No doubt you can tie additional Agile principles and practices to the DevOps initiative, but I thought I’d touch on just these few to show how DevOps is nothing more than a logical next step after Agile – they are not two, independent approaches to building software. As always, please feel free to share your thoughts! We look forward to hearing from you!
Modified on by LeslieEkas
Successful agile teams require timely leadership to help the team work together, make decisions, and progress. A team leader may have a job associated with leadership, like a scrum master, or a team leader may emerge when the opportunity presents itself. Making leadership work on an agile team requires that you learn to lead and to follow. Early in my career as a manager, I partnered with a leader from another team to design and build a new product. I was full of energy and excitement and I wanted my team to accomplish great things. However, I was aggressive a lot of the time. I was quick to pass out assignments, check on progress, and grow impatient. My new partner had a very different personality, and I noticed that she had a particular trait that I lacked. She was very good at working with teams in real time to solve a problem. I told myself to stop and pay attention. How does she do it?
She came to each discussion prepared. She did her homework in advance so that she had knowledge of the topic and often an opinion. At the beginning of the meeting she set the tone to one of knowledge sharing, respectful discussion, and decision making. She started by reviewing the problem at hand and made sure that everyone in the meeting was on the same page. And then the real difference came: when others offered their thoughts, she listened. I mean she really listened and paid attention to what they were saying. She would read back the thoughts to make sure that she understood, and to let the team know that she understood. She did not allow herself to get distracted. Her focus on a topic helped to keep the meeting on course. Her natural ability to hear everyone’s input and respond to it let them know their input was valuable. Finally, she was always calm. This, more than anything else, made the difference.
After watching her run very successful discussions, I paid attention to my own behavior. I worked harder to make sure that I came prepared. But when I disagreed with an opinion, my heart rate would increase and my impulse was to jump in fast and offer my opinion. That kind of rapid response can seem like an interruption and result in sparring. Sometimes sparring leads to successful results, but it often discourages others from participating. Problem solving usually requires thoughtful debate and compromise. Pushing ideas back and forth allows teams to think together. But it is important to keep the tone one of debate, not sparring; otherwise good thinkers may shut down and not offer their input. What I learned to do was to fight the impulse to react and instead stay grounded and stay calm. Now when I participate in a problem-solving discussion and that impulse to jump in strikes, I try to take a deep breath and wait to speak. I speak only after my pulse drops. This behavior has saved me many times, because waiting allowed me to realize that my thoughts were not always well considered. Listening to others and leveraging their input enabled me to work out a more constructive comment.
I learned more from my old partner and many other leaders than they will ever know. Now, as a more mature leader, I have a set of personas, each of which is good at some aspect of leadership. When I get into trouble, I think, “What would she have done in this situation?” And I try her approach. My point is that some days you have to lead and some days you should follow and learn. As a leader, this is how you can foster the agile practice of continuous improvement.
When a project is big enough to have two or more teams working on the same project, we recommend that each team have its own backlog of User Stories and each team size its own stories with Story Points. Of course, this inevitably raises the idea of having each team conform to some “story point standard” so that project progress and team progress can be tracked "uniformly." Doing this more or less means that teams have to normalize story points across all the teams so that, for example, a 5-point story for one team is the same as a 5-point story for every other team on the project; an 8-point story is the same for every team, etc., etc., etc.
All I have to say to this idea is: “Ugggghhh… What a colossal waste of time!" I’ve seen teams try to do this, and the effort put into trying to ensure uniformity of sizings across multiple teams is enormous.
I’m happy to report that there’s an easier way that still allows for overall project progress to be easily seen as well as individual team progress. In essence, instead of normalizing based on story points we suggest “normalizing” based on the calendar. Let me explain: let’s assume, for example, there are two teams on a project, each having its own backlog that has been sized with story points. Team A has 2000 story points on its backlog and Team B has 500. In this case you would have three burndown charts to track progress: one that’s combined (showing 2500 total points), a second one for just Team A (showing 2000 total points), and a third one for just Team B (showing just 500 points). In order to determine relative progress for Team A and Team B, you base it on the number of points completed vs. the number of iterations... Thus, if you're half-way through the project (e.g., 5 of 10 planned iterations are complete), you would expect Team A to be ~1000 points complete (50% of its total) and Team B to be ~250 points complete (50% of its total).
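To make the arithmetic concrete, here is a minimal Python sketch of that calendar-based progress check, using the Team A and Team B numbers from the example above (the function name is mine, not an established formula):

```python
def expected_points(total_points, iterations_done, iterations_planned):
    """Points a team should have completed if it is on pace with the calendar."""
    return total_points * iterations_done / iterations_planned

# Half-way through a 10-iteration project:
team_a = expected_points(2000, 5, 10)  # Team A's backlog is 2000 points
team_b = expected_points(500, 5, 10)   # Team B's backlog is 500 points

print(team_a)  # 1000.0
print(team_b)  # 250.0
```

Comparing each team's actual completed points against this expected number gives you the same "ahead or behind" signal for every team, regardless of how differently the teams size their stories.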
Where things get to be a little more complicated is when you have numerous teams working on a project (for some enterprise application projects we’ve worked with, we’ve seen over 40 teams working on the same project). With two teams, it’s fairly easy to bounce back and forth between Team A’s release burndown chart and Team B’s release burndown chart. When you have more than that, you can create a very simple chart (using any spreadsheet tool) that shows “percent calendar complete” and “percent story points complete” for each team. Following are two examples that we’ve used (they show the same data, just in different formats):
You’ll notice in the first chart that the schedule is 60% complete (the black bars) and then each team’s relative progress is shown. In the second chart, a stacked bar is used to show overall team progress. In this made-up example, Team 1 is slightly ahead, Team 5 is slightly behind, and the other teams are perhaps struggling...
I’m sure you can think of other, different ways of showing the data, so do whatever’s most helpful. The key is that this approach is so much easier and simpler than trying to normalize the sizing of story points across multiple teams.
Teams create processes to maintain repeatable results. Process is necessary. But when the process needs adjustment, teams often take the quick route and just add more and more to their process to address their new problems. The result is often process bloat, frustration, and project inefficiency. Then the team is back where it started.
Processes are typically built to get work done consistently and to increase efficiency and reliability. Improving a process, however, can be time consuming and so is often avoided. So teams take shortcuts. Here is an example that might sound familiar for a team that uses a process to check in code:

1. Create unit tests to validate your code.
2. Rebuild the code with the latest source code.
3. Validate the local build with the unit test suite.
4. Check in the new code and the new tests.
The team discovers that some new code does not align with the established design patterns, so they add a rule to review all new code with the design leader. The team gets behind on updating older release versions with new code changes, so they add another rule requiring that they follow the same steps for all the older release lines. Sooner rather than later, the process to check in code becomes so cumbersome that developers avoid it until the last possible moment.
Many of the best-planned processes suffer this fate. But how do you avoid it? And what do you do if you see it coming? Agile has roots in kaizen, the Japanese term for continuous improvement. In Matthew May’s book The Elegant Solution (p. 167), he suggests that you create a standard, follow it, and determine how to make it better, and then repeat that process. The standard provides the starting place, but it should be adjusted to work better and to meet changing demands. So having a process is good. Adjusting the process is good. Continuously improving the process is even better. But continuously adding steps to the process…? Not so good.
Here is one idea that I’ve found helpful: When you create a process, define measurable goals for its success. For example, “We need a consistent process to check in new code that takes no more than 10 minutes and enables 95% build success per iteration." If the team needs to add a code review to the process, how can they manage it as part of the check-in process? Well, one team I know of faced this very situation and decided to spend 30 minutes every day, at a particular time, doing team code reviews. No matter what stage the code was in, the developers could get their code reviewed. While it still required time to do the reviews, that step was removed from their check-in process.
The key idea is to include a measurable goal for your process when you define it. Then stick with that goal when you change the process. Including a metric that defines process success requires that you look for ways to incorporate new requirements and continue to meet the metric goal. It may involve a little more work but it will ensure that the process continues to solve the problem without becoming the problem.
If you have any ideas on how to avoid process bloat please respond to this blog post and share your ideas!
First of all we would like to wish everyone a Happy New Year! We trust that this year will be full of good challenges and significant achievements for each of you!
Regarding metrics: if there's one thing that engineers love, it's numbers... And if there's one thing that managers love, it's status. Put the two together and oftentimes you have teams that track tons of metrics across all facets of their projects.
While tracking lots and lots of metrics may give the impression of being really thorough and providing all sorts of deep insights, if you find yourself in an organization that does this, I would encourage you to take a step back and limit the focus to just what's really important.
One of the principles of the Agile Manifesto reads, "Working software is the primary measure of progress." If you stop and think about it, that principle makes perfect sense. Customers aren't going to buy untested code and they're not going to be too happy if they buy something that falls down broken all the time because major defects haven't been fixed. All other metrics should pale in comparison to the one metric of having working software. Putting focus on this one metric will drive many of the benefits that agile promises -- three of which I'll touch on here.
First, focusing primarily on having working software provides a shared, common goal for the organization. This means that no one on the team gets any credit for his or her individual efforts which, in turn, provides plenty of motivation for helping each other out so that the team can take credit for having working software. A lot of the old "us vs. them" mentality of waterfall days is eliminated when focus is placed on working software as the primary measure of progress.
Next, having the focus on working software means that teams will actually achieve working software much earlier in a project than was typical with waterfall projects. Having working software provides plenty of opportunities to involve customers earlier in the project since customers can actually see demos of the functionality while it is being implemented and can even download it and test-drive it in their own environments. Getting customer feedback early is a huge benefit, especially if customers tell you you're going down the wrong path since you'll be able to make mid-course corrections prior to releasing the product.
Finally, having working software enables teams to move into the DevOps space where automation takes on an even more significant role, where releases are more frequent (up to multiple times a day for some Software-as-a-Service offerings), and where opportunities for doing things like "A/B" testing can be pursued. None of this is possible without having working software.
To conclude, if you agree that focusing on working software as the primary measure of project progress makes sense, but you're not quite sure how to ditch many of the metrics you may be currently tracking, I'll leave you with this analogy that Leslie uses with teams when dealing with this very issue. Think of cleaning out your closet -- you pull everything out and then put back only those things you wish to keep. Everything else is either thrown away or donated to charity. So, start with a "clean slate," put working software at the top of the list, and then add only those additional metrics that are absolutely necessary to ensure project success. Tracking anything else beyond just what's absolutely necessary should be considered a waste.
We would be interested in your thoughts and comments, especially if you've already made the transition to tracking working software as your primary metric. Thank you!
Scott and I are thrilled to tell you about our new book called Being Agile: Eleven Breakthrough Techniques to Keep You From Waterfalling Backward. You can click on the link below to get access to the book or use the handy link in the Links section of this page.
Our goal for the book is to help teams that have adopted the standard practices and call themselves agile, but have not gained the real benefits that agile promises. We believe that adopting agile requires a change in thinking, not just adopting a set of practices. When teams don't get immediate results, they often fall back into old habits that will reduce their chances of success. Our book offers eleven breakthrough techniques to help overcome reflexive thinking so that you will start to respond with an agile mindset that prioritizes delivering customer value, ensuring high quality, and continuously improving.
The book consists of eleven chapters that focus on some of the critical topics for agile success. In each chapter we discuss the basic principles that are the foundation for the topic, followed by the corresponding practices. Each chapter closes with a breakthrough technique to help teams achieve the real benefits of the aspect of agile covered in the chapter. It was not our goal for the book to be an agile primer, or even to cover all of agile thoroughly, as there are much better references for that. Our goal was to focus on principles critical to agile adoption with which teams struggle and, for each area, to give teams the means to succeed by applying an agile mindset.
Scott and I have both had our breakthrough moments, and so have the teams that we have led and coached. We share experiences from those teams to provide a real-world context for the ideas. Most of what we have learned came from working with numerous teams, and it is our hope that we continue the conversation with you. Please share your "A-ha!" moments on our blog so that we can all continue to learn together.
Being Agile (IBM Press)
Velocity is a great tool for agile teams but it is easy to "over-engineer" the concept -- or even misuse it. Velocity is a mechanism to understand the pace of a team that uses story points to size their user stories.
First, a quick comment regarding story points – Leslie and I advocate the usage of “unit-less” sizings of user stories, meaning that story points are not associated with any measure of time. Story points are simply an assignment of size relative to other user stories that a team works on. Thus, a 3-point story should take approximately 3 times the amount of effort for the team as a 1-point story -- irrespective of how long it takes to complete either one. Note that story point sizings are just relative sizings and they work no matter how long it takes to actually complete a given story of a given size.
Back to velocity… Velocity, simply put, is the term used to describe a team’s observed capacity for completing work. Velocity is calculated by taking the average number of story points a team completes each iteration. For example, if a team completes 8 story points’ worth of user stories in its first iteration, 12 in the second, and 10 in its third, then the team’s velocity is 10 (the average of 8, 12, and 10). Note that several iterations had to be completed first in order for the team to determine its velocity (we advocate a minimum of three iterations before starting to use velocity for project planning and tracking).
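As a sketch, the calculation really is just an average; the three-iteration guard below reflects the rule of thumb mentioned above (the function name is mine):

```python
def velocity(completed_points, min_iterations=3):
    """Average story points completed per iteration."""
    if len(completed_points) < min_iterations:
        raise ValueError("complete a few iterations before relying on velocity")
    return sum(completed_points) / len(completed_points)

# The team from the example: 8, 12, and 10 points in its first three iterations.
print(velocity([8, 12, 10]))  # 10.0
```

The team would then plan roughly 10 points' worth of stories per iteration going forward, re-averaging as more iterations complete.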
Even though the concept is simple, I’ve seen teams misuse velocity and, as a consequence, reduce its value. I’ll address three of the most common misuses in the following paragraphs:
First, velocity is incorrectly used as a "report card" each iteration. Assume the example I used above, where the team has a velocity of 10. Let’s say that in their next iteration they complete 6 story points. In organizations where velocity is not understood, the team will likely be called on the carpet and asked to explain what happened (the thought being that their productivity dropped by 40%!). Given that the assignment of story points is an estimation technique, not something measured to 26 digits of precision, there should be an understanding that the number of points a team completes will vary from iteration to iteration. Perhaps in this iteration the team is working on a story that they underestimated. Big deal… Chances are that in a subsequent iteration they’ll work on a story that was overestimated. Over the course of several iterations, the team's velocity will average out. Please do not use the number of story points a team completes in any given iteration as a report card. Use the average and save everyone a lot of headaches from micromanagement.
Secondly, and closely related to the first, is teams erroneously claiming "partial credit" for a story. When teams see the number of story points completed in a given iteration used as a report card, they tend to start claiming partial credit for incomplete stories. In this scenario a team might say, “We got the code done, so that’s worth 2 points out of this 5-point story,” even though the testing, defect fixing, automation, etc., haven’t yet been completed; they're just trying to boost the total number of points claimed in the iteration. Unfortunately, this approach defeats a number of the benefits of adopting agile. First, it breaks down team synergy by reverting back to an us-vs.-them mentality. When the whole team gets “credit” for a user story only when it’s actually complete, the team is motivated to help each other out. If partial credit is allowed, then this can pit parts of the team against other parts. The second problem with this approach is that it violates the agile principle of working software as the measure of progress. Code that has been written, but is untested, is not “working software.” Code that has been written and tested, but where defects haven’t been fixed, is not “working software.” Working software should be the focus for the team… And let me close this section with another simple example: let’s say a team is targeting completion of two 5-point stories this iteration (their velocity has been ~10 points an iteration, so targeting 10 points’ worth of stories makes good sense). Let’s say they complete the first story and are almost complete with the second. If the team tries to take partial credit, they might claim 4 of the 5 points for the second story and, thus, their story point total for this iteration would be 9.
Next iteration, they’d plan to finish off the last bit of the second story and then, most likely, target an additional 10 points' worth of new stories (since they have been averaging 10 story points so far and they almost completed 10 points last iteration). Let’s say they complete all that work in the next iteration, for a total of 11 points. What’s their velocity for the two iterations? It’s 10. Now let’s take the “no partial credit” approach. The team would get 5 story points for its first iteration and 15 for its second. What’s the velocity for the two iterations calculated this way? Right, it’s also 10. This is why focusing on velocity as an average is so liberating for teams – they’re not constantly under the microscope and don’t feel compelled to play games with partial credit.
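You can check the arithmetic in that example directly; either way the two-iteration average comes out the same:

```python
def average_velocity(points_per_iteration):
    """Velocity is the average of points completed across iterations."""
    return sum(points_per_iteration) / len(points_per_iteration)

partial_credit = [9, 11]     # 4 of 5 points claimed early, remainder next iteration
no_partial_credit = [5, 15]  # credit only when a story is fully "Done!"

print(average_velocity(partial_credit))     # 10.0
print(average_velocity(no_partial_credit))  # 10.0
```

The averages are identical, which is the point: partial credit buys the team nothing over the long run except game-playing and lost focus on "Done!".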
The last abuse of story points and velocity that I’ve seen is teams having their velocity "dictated" to them. They’re told to achieve some level of velocity before even knowing what their velocity is. Remember, as mentioned at the opening, velocity reflects the team’s observed capacity to complete work each iteration. Typically, when teams are dictated a velocity, it's in “fixed content/fixed date” projects (which are anathema in agile). Teams wind up working tremendous amounts of overtime to meet their dictated velocity (which is also anathema to agile).
Finally, there are legitimate ways to go about increasing velocity, as well as illegitimate ways. The most prevalent illegitimate way is forcing a team to do overtime – this is not sustainable and will only cause problems down the road. The legitimate way to increase velocity is to pursue what I call “enabling” practices: increasing test automation, improving builds by adopting continuous integration, adopting test-driven development and pair programming, automating provisioning of test environments, automating deployments, and many others. Yes, putting effort into maturing these practices may mean you slow down the amount of functionality you produce in the short run, but the long-term benefits (such as sustained velocity improvements) will be well worth the initial investment. Velocity is simple – keep it simple. Increasing velocity is where the hard work is.
One of the primary reasons that agile teams are “more productive” is that they work together closely to complete small batches of work frequently and continuously. The better the team is at working together, the better their productivity. However, teams struggle to improve their ability to work together, and so this becomes one of the most significant barriers to real agile success.
If you think about how work goes in a typical waterfall project, usually the first thing that happens is that the team decides what they are going to do, and then very quickly they split into separate sub-teams to do the work. Generally the team does most of its coordinated work at the end of the project, when they have to make it all work together. This is the "hockey stick" effect of waterfall. From my days in waterfall I remember long hours, nights, and weekends working through problems with the rest of the team. In fact, I remember reconnecting with people that I had not worked closely with since the last “end of project march.”
Agile breaks this pattern by having the team work closely together from the beginning of the project. The “end of project” style of coordinated team work in waterfall happens throughout every iteration. This is what makes agile so productive. However, “getting there” can be hard, because you naturally want to work where your domain knowledge and skills are strongest and where you know you will be the most productive.
As the software industry has progressed, software projects have become larger and more complex. With size and complexity, specialization typically follows. We have been encouraged to specialize in order to capitalize on our personal strengths and then use those strengths to produce better product capabilities. Specialized skills are critical so that we build the right solutions. However, learning new skills and increasing your technical breadth is also important. To make an agile team more productive, team members need to be willing to broaden their skills so that they can help the rest of the team get to “Done!” in addition to completing their own work.
This does not mean you have to become an expert at everything or even offer your help when it is not needed. When I was in school getting my Computer Science degree, our professors told us that “You are not a <insert favorite software language> developer, you are a software engineer.” We have to remind ourselves that, as engineers, we need to continue to expand our skills and knowledge, and be willing to try any job that needs attention. If everyone on the team assumes this attitude, the team will become more productive.
Here is the analogy that I think of when I am trying to explain this behavior. In the movie Apollo 13, after the initial oxygen explosion happens and disaster ensues, the flight director asks his team leaders to call in everyone on their teams to help figure out how to get the crew safely back to Earth. In the movie, teams that have not worked together are given hard problems to solve fast – and they do solve them. This is the way very productive agile teams work together throughout the project.