Modified by ScottWill
Leslie and I will be hosting a webcast entitled "How Do Agile Teams Keep from 'Waterfalling Backward'?" as part of the Global Rational User Community (GRUC) webcast series.
"Waterfalling backward" describes the situation where a team that has started down the Agile path reverts to waterfall ways of thinking when difficult situations arise. We'll cover some examples of what "waterfalling backward" looks like as well as several easy-to-adopt techniques that help teams re-align with Agile if they've started "waterfalling backward."
Note that three attendees will win a copy of our book, Being Agile!
Register here: https://attendee.gotowebinar.com/register/113940558500710914
We are looking forward to the session and hope you can join us!
Counting the number of defects found via testing has been extremely popular in software organizations for a long time. I propose that teams stop doing this, for two reasons.
First, one of the lines of thinking behind counting defects is that it is somehow a measure of quality. If we were running an assembly line in a manufacturing plant, and the assembly line didn't change from one week to the next, but the number of defects increased, we would know there was a quality problem somewhere – something on the assembly line was likely broken. Despite some similarities, software development is not an assembly line: every line of code that is written and tested has never been written or tested before, so defect counts simply don't correlate with quality.

If a team found 25 defects last week and 50 this week, what does that tell us? Is the code quality really getting worse, or did the team just do a lot more testing? Did the tests executed reflect real-world usage, or were they all edge cases? Did the team deploy the code into a new environment that had never been used before? I'm sure you can come up with many more scenarios. The possible reasons for the higher number of defects this week are almost boundless, and the time spent trying to determine why the variation occurred would be better used to work more with customers to understand their usage patterns and particular needs, create more needed functionality, improve the team's processes further, cross-train team members, adopt a new tool, and so on. All of which leads to my second point…
Agile software development has its foundations in Lean Thinking, and one of the Lean principles is to eliminate waste. Does counting defects, and spending time trying to figure out what causes variations in the numbers from one time period to the next, contribute directly to the success of the project? If not, then that time should be viewed as waste – time that could be better spent doing more important work.
In conclusion, I'd like to leave you with two thoughts regarding defects. First, when defects are found, they should be fixed immediately – period. Don't allow a backlog of defects to accrue. If it makes sense to do some root-cause analysis to determine why a particular defect occurred, that's fine – but do so only when it makes sense, and it likely does NOT make sense for the vast majority of the defects found. Second, a mature Agile team should be able to have "difficult" conversations when needed: "Hey Scott, I've noticed that I'm finding a lot of defects in your code this week – is there anything that's distracting you, or anything I can help you with?" Instead of taking umbrage at such a comment, I should be thankful that my team is willing to raise the issue and have the discussion.
So, the next time you’re asked to track the number of defects found, ask “Why…?” The answer will likely shed light on ways to eliminate waste and help overcome the “That’s the way we’ve always done it!” mentality, as well as foster a relentless, continuous improvement mindset.
As always, thoughts, comments, and questions are most welcome! Thank you!
I recently read a fascinating article in a farm journal that made me think about software engineering. You are likely thinking that there’s not much that farming and software engineering have in common and, for the most part, you’re right. However, in this particular instance, the story relates directly to typical practices in the software industry today.
The article told the story of a group of research scientists who had spent years working with bell peppers. The scientists embarked on a program to cultivate peppers that were more disease resistant, could flourish under sub-optimal conditions such as poor soil, limited water, and lack of full sun, and that would also produce greater numbers of peppers than the original variety.
After quite a bit of time, effort, and funding they managed to achieve some moderate success. Then someone suggested that they take some of their latest crop of peppers to a local restaurant and get some input from one of the chefs. While the article didn’t go into specifics about the chef’s reaction, it was clear that the reaction was akin to throwing the pepper in the garbage and suggesting that the scientists could have spent their time better elsewhere. The one thing that mattered most to the chef – the taste of the pepper itself – had not even been considered as one of the criteria to be taken into account by the research scientists. The scientists got caught up in their own little world of science and totally forgot that peppers were food and not specimens.
This immediately reminded me of the software industry and how so many teams have forgotten why they’re creating software. It’s not to simply write code, or execute test cases, or try out some new tool, or deploy a capability to a website – the software should ultimately meet customers’ needs. And the best way to do that is to work directly with customers as the capabilities are being created to ensure what’s being developed meets their needs.
Fortunately, the move to Agile and DevOps is making it much easier to work with customers due to the practices of continuous development and continuous deployment, as well as an intense focus on customer engagement and the monitoring of customer usage of the offerings.
The moral of the story…? Don't forget the customer!
Just like the drip, drip, drip of a leaky faucet, we lose productive time by being regularly distracted. Some distractions can't be eliminated, but many can... I recently wrote an article for InformIT that was adapted from one of the chapters in the book that Leslie and I co-authored, Being Agile. The article discusses a pervasive form of regular distraction that many people have come to take for granted nowadays. I thought I'd pass along a link here:
I hope you enjoy the article and that it provides some food for thought... As always, comments and questions are welcome. Thanks!
Modified by LeslieEkas
Short feedback loops are one of the keys to the success of agile because they enable teams to learn fast and adjust. This technique is leveraged in numerous agile practices, including the daily standup, sprint demos, code check-in, build and validation cycles, pair programming, and continuous delivery. One of the key benefits that convinced me to try agile was its claim to deliver greater value with higher quality. After I had experimented with agile for a while, I started to understand just how critical short feedback loops are to achieving these goals.
Short feedback loops enable you to learn fast so that you can adjust while the costs are low. It is well understood in software development that the cost to fix a defect increases, in many cases dramatically, the longer you wait to fix it. Some of the reasons: more code may have been added, making the defect more complex to fix; the new code requires additional testing and fixing; and by the time you get to the fix, you may have lost familiarity with the code in question. To combat this, it is critical to find defects soon after the code is checked in. One short feedback loop that helps is automatically running the code through a series of progressive test gates. Additionally, if the code is checked in frequently (another short feedback loop), developers get quick feedback on quality.
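As a rough illustration, progressive test gates can be thought of as an ordered series of checks that stops at the first failure, so the cheapest checks deliver feedback first. This is a minimal sketch, with hypothetical gate names and stand-in checks (a real pipeline would invoke actual test suites):

```python
def run_gates(gates):
    """Run (name, check) pairs in order; stop at the first failing gate."""
    for name, check in gates:
        if not check():
            return f"FAILED at gate: {name}"  # fast feedback to the developer
    return "all gates passed"

# Hypothetical checks standing in for real test suites; ordered cheapest first.
gates = [
    ("unit tests",        lambda: True),   # runs in seconds
    ("integration tests", lambda: True),   # runs in minutes
    ("system tests",      lambda: False),  # runs in hours; only reached if earlier gates pass
]

print(run_gates(gates))
```

The ordering is the point: a developer who breaks a unit test hears about it in seconds, not after an hours-long system run.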
Another manifestation of short feedback loops is the daily standup, which enables teams to identify critical blockers and get help fixing them before the next meeting. Someone has a problem; someone jumps in to help fix it. Shorten the time to fix a problem; reduce the cost.
To ensure that agile teams are meeting the needs of their customers, they demo their new work at the end of each sprint to get timely feedback. This version of a short feedback loop is fairly well known to agile teams, but it may be surprising how few teams actually do these demos with their end users. The demos give product users ongoing updates on how the new code is progressing and the opportunity to provide feedback. This kind of feedback is like bug fixing: the sooner it is applied, the more cost-effective it is. Going a step further, releasing functionality frequently enables users to get traction and provide feedback before the functionality is complete. Customers have to be willing to experiment and respond, but the win for them is that they get an active voice in getting the value they require.
As teams continue their agile adoption, it is useful to remember why we adopt various techniques to make sure that we get the value promised. To deliver higher value with better quality we use short feedback loops so that we learn fast and adjust.
Modified by LeslieEkas
Teams sometimes struggle to write down the first story in a user story session, even if it is not the first time the team has written stories together. Brainstorming often works to get the team headed in the right direction. Some of the following questions might help:
What problem are you trying to solve?
What benefit will solving the problem provide the business?
What are the consequences if you do not solve the problem?
What are the critical edge cases to consider? (establish limits)
What capabilities of the solution need to be considered (e.g., security, performance, error recovery)?
What is the minimal problem set to solve? What is the best way to validate quickly that a solution will solve the problem? Think minimum viable product.
I have run many brainstorming sessions before we start to write stories. In the process I have come up with a few rules to help make the process successful.
Rule 1: Brainstorm. What I mean by this is to allow the session to be a “storm”. Just write down everything you think of without filtering the relevance. By working this way you may discover what does and what does NOT matter. It will help you discover the limits of the stories that you need to write. It will allow you the opportunity to think “out of the box”.
Rule 2: Brainstorm with your whole team and your stakeholders. Writing stories without your stakeholders is doable, but I have seen far better results when stakeholders help with the process. In fact, I was once in a brainstorming/story-writing session with a stakeholder. The team started by writing the stories assuming they knew what was required. When I asked the stakeholder what problem he was trying to solve, we discovered, through the conversation that followed, that the software already provided a solution. Frankly, it was the best user story session I have ever attended!
Rule 3: Take notes. I know we said this in our book, and we may have said it in our blog, but it is worth repeating. Take notes while you write stories and take notes during the brainstorming session. In fact, it does not hurt to have someone take notes verbatim. How many times has someone come up with the perfect phrase but been unable to repeat what they just said!?
Rule 4: Avoid getting into the “how” you are going to do it. Avoiding the “how” in user stories may sound like a broken record, so I probably do not need to elaborate further. Sometimes however it may be necessary – but proceed with caution.
Rule 5: Limit your time. Brainstorming can be very useful, but don't overdo it. I suggest 30-90 minutes, 2 hours tops if the scope is large. Once you have an idea of what your minimum viable product (or, in this case, solution) is, start by writing a story that takes the first step toward achieving it.
Try brainstorming if you are off to a slow start writing stories or are just plain stuck. If it works, use it. If it does not work, skip it.
Extra Credit: Make your first few stories vertical stories. Vertical stories are harder to write because they force teams to think across the architecture and let go of their concern that the story is not shippable. But vertical stories provide so much value that it is worth the effort to try to think vertical.
Modified by ScottWill
You might recall that I ended the last blog post with the following:
One thing at a time, one thing at a time, one thing at a time…
Why are we so convinced that this is the best way to work? Hopefully my previous post made it clear that the flexibility teams have to respond to changing circumstances by working on one thing at a time is an excellent benefit. There are additional, significant benefits to working on one thing at a time – two of which I explore in this post.
How do you know that what you’re creating will do well in the marketplace, meet customers’ needs, and generally provide high customer satisfaction? In the old, waterfall days we used to have “beta programs” where customers would be granted access to some pre-release version of the software that they could use in-house and then provide feedback. The biggest problems with beta programs revolve around the lateness of the feedback received from customers. Typically, if customers didn’t like something, or wanted something different, it was too late to make any major changes – the team was too busy fixing the backlog of defects, and the committed ship-date was often very close when the beta program started. Customers were often told, “We’ll get to that in the next release.”
However, by doing one thing at a time, your team has a much greater opportunity to understand, with a fairly high degree of certainty, that what you’re creating will meet customers’ needs. And the reason this is so is that you can involve your customers in your project throughout the entire lifecycle. When the team completes a small feature, or completes even a small part of a larger feature, that feature can be demo’d to customers. It’s common for agile teams to be demo-ing some completed functionality to customers at the end of their very first iteration. And customers love it! It gives the customers the ability to “put their fingerprints” on the product – they get to see what’s being built, provide immediate feedback, and then see the development organization respond quickly to that feedback. In other words, customers are able to provide real-time, on-going feedback so that when the product finally does ship, customers will be very comfortable in knowing that the product provides what is actually needed.
One other benefit is that customers can actually request that the product be made available earlier than planned. Say, for example, a team plans on three major features for its next release. By working on one feature at a time, it’s quite possible that customers could ask that the product be released as soon as the first two features are complete – or even just the first one! If the feature is needed, and customers can see how they can immediately make use of it, and the team has actually completed the feature, then releasing just the very first feature (instead of waiting until all three planned features are complete) may make the most business sense. But the ability to even have such a business discussion is only possible if one feature at a time is being worked on. If all three features are started in parallel, then the team has created inflexibility since nothing can ship until all three features are completed (yes, dark launches address this problem, but the additional overhead of ensuring what’s “dark” doesn’t interfere with what’s been completed must be considered).
So, to sum up, how do you know? You know because your customers are providing you real-time, on-going feedback throughout the project to help ensure that what you deliver is what they need. It’s a win-win situation for everyone!
Modified by ScottWill
Years ago, waterfall projects would typically shun requests to add new features or capabilities to a project that was already underway. After all, project success was typically measured by conformance to "the plan" – and a new request was not part of the plan. These waterfall teams would complete the opening sentence something like this: "When we get a new feature request, we put it on our list of features for consideration in the next release." Thus, an opportunity to respond to changing conditions, changing customer needs and expectations, or changing competitive situations in a timely manner was lost.
Other waterfall projects have tried to add the new feature in along with the work that was originally committed in “the plan,” but that would often spell disaster since teams were typically already working feverishly just to meet the original commitments. In order to add something else to an already booked plan, there were very few options – mainly overtime and/or cutting corners – neither of which is a good thing. Oh, sure, there was always a planned “buffer” added into “the plan” to account for such contingencies, but when have you ever seen it work out the way it was intended? The skeptic in me would guess rarely, if ever… These teams would complete the opening sentence something like this, “When we get a new feature request, we add it to the plan and start mandating extensive overtime.” Pretty soon, employees are burnt out, frustrated at not having a life, and begin to actively seek employment opportunities elsewhere.
Mature agile teams complete the sentence something like this: “When we get a new feature request, we see it as an opportunity to get ahead of the competition as well as make our customers happy and successful.” Typically new requests surface because of changes in the competitive landscape, changing customer needs and expectations, new market opportunities, or even some combination of these and other factors. And when these new requests arrive at the door of an agile team, there’s no drama at all, no worries about overtime, no cutting corners, no consternation about not meeting “the plan,” etc. Agile teams have great flexibility to handle new requests because they are always working on just one thing at a time – not numerous features in parallel. Waterfall teams typically worked on lots of features in parallel since all the features were committed up-front and since teams typically had to show progress on all the committed features from the very beginning of the project. Thus, if a new request arrived, there was no flexibility to drop a less important feature from the plan since everything was already underway.
Agile teams, however, work on one thing at a time. And the way they do this is by having the features rank-ordered from the most important to the least important. If a new request arrives, the team finishes working on the feature currently in progress – it then has complete freedom to begin working on the brand new request. If a particular ship-date has to be met, then the team drops the least important feature from the list and replaces it with the new request. This way, the team is always working on the most important feature and has incredible flexibility to handle changing circumstances. After all, conformance to “the plan” is not the right measure of success – meeting customer needs and adapting to changing market conditions is a much better measure of success.
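The rank-ordering mechanics described above can be sketched in a few lines. This is purely illustrative; the feature names and the `handle_new_request` helper are hypothetical:

```python
# Hypothetical rank-ordered backlog: most important first.
# "feature A" is the item currently in progress.
backlog = ["feature A", "feature B", "feature C"]

def handle_new_request(backlog, request, fixed_ship_date=True):
    """Slot a new top-priority request in after the in-progress item.
    If the ship date is fixed, drop the least important feature so
    total scope stays constant."""
    updated = list(backlog)
    if fixed_ship_date:
        updated.pop()            # drop the least important feature
    updated.insert(1, request)   # next up, once the current feature finishes
    return updated

print(handle_new_request(backlog, "urgent request"))
# -> ['feature A', 'urgent request', 'feature B']
```

The team finishes "feature A" first (one thing at a time), then picks up the new request; "feature C", the lowest-ranked item, is the one that gives way.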
To summarize, working on one thing at a time (always the highest-ranked, most important item) and getting it done before starting on something new is the most efficient and effective way to work. It also allows for the greatest flexibility – especially in markets where changes can occur lightning fast.
One thing at a time, one thing at a time, one thing at a time...
Modified by LeslieEkas
I have recently spent many hours in user story writing sessions and have been reminded of some fundamental best practices. One in particular that I find essential is to make sure that user stories are measurable. My advice to teams is to review their stories and remove words like faster, easier, better, and improved unless a measurement is attached. Instead, consider phrases like "a 50% reduction in response time," "2 clicks to resolution," or "time to deploy under 1 hour."
Teams often tell me they cannot include real measurements because they do not know what is achievable before they complete the work. I disagree; teams rarely develop software without a notion of what is technically possible. Let me offer a simple analogy: the first (and only) time I signed up for a marathon, I did not know if I could finish it. I did, however, know that I could complete a half marathon. Adding another 13.1 miles was no small feat for me, but I knew how to train and I had an idea what my legs could do. Similarly, with software, teams typically know what technical options are available to leverage and what benefits they enable.
In addition to being technically feasible, a measurable goal also has to satisfy the stakeholder. For example, if a sales application cannot provide buying options within seconds, or with sub-second response time, the customer may lose interest and investigate an alternative; targeting a 10-second goal will not satisfy the stakeholder's needs. Teams generally know what measurable boundaries exist to ensure that the user story succeeds.
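One way to make such a goal concrete is to encode it as an automated acceptance check, so the measurable target in the story is verified on every run. Below is a minimal sketch; the `fetch_buying_options` stand-in and the sub-second target are invented for illustration:

```python
import time

# Hypothetical measurable story target:
# "buying options are returned in under 1 second."
TARGET_SECONDS = 1.0

def fetch_buying_options():
    """Stand-in for the real lookup; sleeps briefly to simulate work."""
    time.sleep(0.05)
    return ["option 1", "option 2"]

start = time.monotonic()
options = fetch_buying_options()
elapsed = time.monotonic() - start

assert options, "story fails: no buying options returned"
assert elapsed < TARGET_SECONDS, f"story fails: {elapsed:.2f}s exceeds target"
print(f"measurable goal met: {elapsed:.2f}s < {TARGET_SECONDS}s")
```

With a vague target like "faster," there is nothing to assert; with a number in the story, the check either passes or the story is not done.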
The idea is to combine the best approximation of what measurable goal the available technology can deliver with a value that would satisfy the story's stakeholder. The worst that can happen is that the story fails. If the team is practicing evolutionary architecture, with thin vertical stories to test early, they will know quickly what is possible. In fact, what I tend to see is that teams overachieve their goal.
User stories that are measurable often require a team's best educated guess at an achievable goal. The risk is that teams who do not set a target, a measurable goal, tend not to achieve one. If I had not signed up for a marathon, I am pretty sure I would never have run that far.
Modified by ScottWill
In my previous post I mentioned how to use end-of-iteration reflection meetings to determine what a team should focus on for improvement in the next iteration. In this post I want to briefly cover two different ways that a team can tackle those improvement actions.
In the first – and simplest – case, let's say a team needs to fix a problem with their automation framework that is causing some instability. So far, the team has just addressed the immediate symptoms in order to get the automation back up and running, but there's a sense within the team that the problem runs deeper and needs some focused effort. In this case, the team needs to set aside some dedicated time during the next iteration to solve the problem at its root so that it doesn't keep coming back time and time again, continually costing the team. Some teams I've worked with have simply written a user story and put it on their backlog to ensure that the dedicated time actually occurs. In this case, the story goes to the very top of the backlog – the team addresses it at the outset of the next iteration (since it's now #1), fixes the automation problem, and only then starts working on the next story on the backlog. Whether you choose to write a user story or not is up to you – the KEY is that you set aside time to handle smaller issues.
The second case involves larger problems – problems that will take time to solve and that can be quite complex. A simple example of this may be that the automation framework that the team has been using for years may no longer be supported, and now the team has to move to a brand new automation framework. This undertaking isn’t going to be able to be handled in a day or two – even with the whole team focused on helping. Since there’s always pressure to get new features and functionality out the door as quickly as possible, the team most likely isn’t going to be able to hit the “pause button” for four weeks while they switch over to the new framework. What we suggest here is a bit of a compromise: create a small team of automation folks and dedicate them 100% of the time to moving the current automation to the new automation framework. The remaining team members continue to focus on new feature work. The important thing with this suggestion is that you’re getting your new automation framework implemented while still getting new feature work completed (albeit at a somewhat slower pace) – but you’re doing so without having engineers multitask across different sets of responsibilities. This approach honors the lean principle of working on one thing at a time.
Just to recap – if you're tackling a small improvement action, dedicate time. If it's a larger improvement action, dedicate a team.
As always, please feel free to respond with any additional advice, insights, experiences, and/or questions that you may have. Thanks!