In 2009 I wrote a white paper entitled The Agile Scaling Model (ASM): Adapting Agile Methods for Complex Environments for IBM Rational. Apparently it's been taken down, which I think is unfortunate as it contains some interesting ideas that your organization may be able to benefit from.
The original white paper addresses several key issues:
- It provides and explains a definition of disciplined agile delivery. A more up-to-date discussion of DAD can be found on the Disciplined Agile Delivery site.
- It describes criteria to determine whether a team is agile. I've explored this issue via several surveys over the years since then. See the January 2013 How Agile Are You? results.
- It describes the ASM, which distinguishes between core agile development techniques, disciplined agile delivery strategies, and agility at scale. The ASM was superseded in early 2013 by the Software Development Context Framework (SDCF). Perhaps this is why the ASM paper was taken down?
- It overviews the eight scaling factors which a delivery team may face, factors that motivate changes to the process you follow and the tools you adopt. The SDCF provides my more recent thoughts regarding scaling factors. I have also run various IT surveys over the years exploring how well organizations fare at scaling agile.
- It describes the implications of the ASM. My blog posting Scaling Agile: Start with a Disciplined Foundation covers this very well.
- It argues that you should strive to be as agile as you need to be, and that will be driven by the situation you face.
The Agile Scaling Model (ASM) is a contextual framework for the effective adoption and tailoring of agile practices to meet the unique challenges faced by a system delivery team of any size. The ASM distinguishes between three scaling categories:
- Core agile development. Core agile methods, such as Scrum and Agile Modeling, are self-governing, have a value-driven system development lifecycle (SDLC), and address only a portion of the development lifecycle. These methods, and their practices, such as daily stand-up meetings and requirements envisioning, are optimized for small, co-located teams developing fairly straightforward systems.
- Disciplined agile delivery. Disciplined agile delivery processes, which include Dynamic System Development Method (DSDM) and Open Unified Process (OpenUP), go further by covering the full software development lifecycle, from project inception to transitioning the system into your production environment (or into the marketplace, as the case may be). Disciplined agile delivery processes are self-organizing within an appropriate governance framework and take both a risk-driven and value-driven approach to the lifecycle. Like the core agile development category, this category is also focused on small, co-located teams delivering fairly straightforward systems. To address the full delivery lifecycle you need to combine practices from several core methods, or adopt a method which has already done so.
- Agility at Scale. This category focuses on disciplined agile delivery where one or more scaling factors are applicable. The eight scaling factors are team size, geographical distribution, regulatory compliance, organizational complexity, technical complexity, organizational distribution, domain complexity, and enterprise discipline. All of these scaling factors are ranges, and not all of them will likely be applicable to any given project, so you need to be flexible when scaling agile approaches to meet the needs of your unique situation. To address these scaling factors you will need to tailor your disciplined agile delivery practices and in some situations adopt a handful of new practices to address the additional risks that you face at scale.
The first step in scaling agile approaches is to move from partial methods to a full-fledged, disciplined agile delivery process. Mainstream agile development processes and practices, of which there are many, have certainly garnered a lot of attention in recent years. They’ve motivated the IT community to pause and consider new ways of working, and many organizations have adopted and been successful with them. However, these mainstream strategies (such as Extreme Programming (XP) or Scrum, which the ASM refers to as core agile development strategies) are never sufficient on their own; as a result organizations must combine and tailor them to address the full delivery life cycle. When doing so the smarter organizations also bring a bit more discipline to the table, even more so than what is required by core agile processes themselves, to address governance and risk.
The second step to scaling agile is to recognize your degree of complexity. A lot of the mainstream agile advice is oriented towards small, co-located teams developing relatively straightforward systems. But once your team grows, or becomes distributed, or you find yourself working on a system that isn't so straightforward, you find that the mainstream agile advice doesn't work quite so well, at least not without sometimes significant modification. Each of the scaling factors introduces its own risks, although when addressed effectively they can actually reduce project risk, so for your project team to succeed you will want to identify the scaling factors applicable to the situation you face and act accordingly. Unfortunately, this is a lot easier said (OK, in this case blogged about) than done.
IBM Rational advocates disciplined agile delivery as the minimum that your organization should consider if it wants to succeed with agile techniques. You may not be there yet, still in the learning stages. But our experience is that you will quickly discover how one or more of the scaling factors is applicable, and as a result need to change the way you work.
At Agile 2009 in August, Sue McKinney, VP of Development Transformation with IBM Software Group, was interviewed by DZone's Nitin Bharti about IBM's experiences adopting agile techniques. There are over 25,000 developers within IBM Software Group alone. Follow the link to the interview to view it online (there is also a text transcript posted there). The interview offers some great insights into the realities of scaling agile in large teams, into distributed agile development, and in particular into how to transform a large organization's development staff.
When I talk to people about scaling agile techniques, or about agile software development in general, I often describe strategies in terms of various risks. I find that this is an effective way for people to understand the trade-offs that they're making when they choose one strategy over another. The challenge with this approach is that you need to understand the risks that you're taking on, and the risks that you're mitigating, with the techniques that you adopt. Therein lies the rub, because the purveyors of the various process religions (oops, I mean methodologies) rarely seem to coherently discuss the risks which people take on (and there's always risk) when following their dogma (oops, I mean sage advice).
For example, consider the risks associated with the various strategies for initially specifying requirements or design. At one extreme we have the traditional strategy of writing initial detailed speculations (more on this term in a minute), and at the other extreme we have the strategy of just banging out code. In between are Agile Modeling (AM) strategies such as requirements envisioning and architecture envisioning (to name a few AM strategies). Traditionalists will often lean towards the former approach, particularly when several agile scaling factors apply, whereas disciplined agile developers will lean towards initial envisioning. There are risks with both approaches.
Let's consider the risks involved with writing detailed speculations (there's that term again):
- You're speculating, not specifying. There is clearly some value in doing some up-front requirements or architecture modeling, although the data regarding the value of modeling is fairly slim (there is a lot of dogma about it though), and that value quickly drops off in practice. The more you write, the greater the chance that you're speculating about what people want (when it comes to requirements) or about how you're going to build it (when it comes to architecture/design). Traditionalists will often underestimate the risks that they're taking on when they write big requirements up front (BRUF), or create big models up front (BMUF) in general, but in the case of BRUF, on average a large percentage of the functionality produced is never used in practice. This is because the detailed requirements "specifications" contained many speculations as to what people wanted, many of which proved to be poor guesses in practice.
- You're effectively committing to decisions earlier than you should. A side effect of writing detailed speculations is that the work you put into documenting, validating, and then updating them makes the decisions they contain firmer and firmer. You're more likely to be willing to change the content of a two-page, high-level overview of your system requirements than you are to change the content of a 200-page requirements speculation that has been laboriously reviewed and accepted by your stakeholders. In effect, the decision of what should be built gets "carved in stone" early in the process. One of the principles of lean software development is to defer decisions as late as possible, only making them when you need to, thereby maximizing your flexibility. By making requirements decisions early in the process through writing detailed speculations, you reduce your ability to deliver functionality which meets the actual needs of your stakeholders, thereby increasing project risk.
- You're increasing communication risk. We've known for decades that, of all the means of communication available to us, sharing documentation with other people is the riskiest and least effective strategy for communicating information (face-to-face communication around a shared sketching environment is the most effective). At scale, particularly when the team is large or geographically distributed, you will need to invest a little more time producing specifications than when the team is co-located, to reduce the inherent risks associated with those scaling factors, but that doesn't give you license to write huge tomes. Agile documentation strategies still apply at scale. Also, if you use more sophisticated tooling you'll find it easier to promote collaboration on agile teams at scale.
- You're traveling heavy. Extreme Programming (XP) popularized the concept of traveling light. The basic idea is that any artifact you create must be maintained throughout the rest of the project (why create a document if you have no intention of keeping it up to date?). The implication is that the more artifacts you create, the slower you work due to the increased maintenance burden.
There are also risks involved with initial envisioning:
- You still need to get the details. Just because you're not documenting the details up front doesn't imply that you don't need to understand them at some point. Agile Modeling includes several strategies for exploring details throughout the agile system development life cycle (SDLC), including iteration modeling performed at the beginning of each iteration as part of your overall iteration planning activities, just-in-time (JIT) model storming throughout the iteration, and test-driven development (TDD) for detailed JIT executable specification (see the sketch after this list).
- You need access to stakeholders. One of the fundamental assumptions of agile approaches is that you'll have active stakeholder participation throughout a project. You need to be able to get information from your stakeholders in a timely manner for the previously listed AM techniques to work effectively. My experience is that this is fairly straightforward to achieve if you educate the business as to the importance of doing so and you stand up and fight for it when you need to. Unfortunately, many people don't insist on access to stakeholders and put their projects at risk as a result.
- You may still need some documented speculations. As noted previously, you may in fact need to invest in some specifications, particularly at scale, although it's important to recognize the associated risks in doing so. For example, in regulatory compliance situations you will find that you need to invest more in documented speculations simply to ensure that you fulfill your regulatory obligations (my advice, as always, is to read the regulations and then address them in a practical manner).
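To make the idea of TDD as a just-in-time executable specification a bit more concrete, here is a minimal sketch in Java with JUnit 4. The ShoppingCart class, its methods, and the discount rule are hypothetical examples of my own, not something taken from the ASM paper; the point is simply that a test written before the production code captures a detailed requirement at the last responsible moment, in a form that can be executed and kept current far more easily than a prose speculation.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical example: the tests below are written before the production code
// and act as a just-in-time, executable specification of a single business rule.
public class ShoppingCartDiscountTest {

    @Test
    public void ordersOfOneHundredDollarsOrMoreReceiveTenPercentDiscount() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("widget", 60.00);
        cart.addItem("gadget", 40.00);

        // The assertion documents the rule far more precisely than a prose
        // statement such as "large orders get a discount".
        assertEquals(90.00, cart.totalAfterDiscount(), 0.001);
    }

    @Test
    public void ordersUnderOneHundredDollarsPayFullPrice() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("widget", 60.00);

        assertEquals(60.00, cart.totalAfterDiscount(), 0.001);
    }
}

// A minimal implementation written after the tests, just enough to make them pass.
class ShoppingCart {
    private double total;

    void addItem(String name, double price) {
        total += price;
    }

    double totalAfterDiscount() {
        return total >= 100.00 ? total * 0.90 : total;
    }
}
```

Because the specification is executable, it is validated every time the build runs, which is exactly the traveling-light alternative to maintaining a detailed requirements document by hand.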
The ways that you approach exploring requirements, and formulating your architecture/design, are important determinants of success regardless of your process religion/methodology. No strategy is risk free, and every strategy makes sense in some situations but not in others. As an IT professional you need to understand the risks involved with the various techniques so that you can make the trade-offs best suited for your situation. One process size does not fit all.
My final advice is to take a look at the Disciplined Agile Delivery (DAD) framework as it provides a robust strategy for addressing the realities of agile software development in enterprise settings.
In the early days of agile, the applications to which agile development was applied were smaller in scope and relatively straightforward. Today the picture has changed significantly, and organizations want to apply agile development to a broader set of projects. Agile therefore needs to adapt to deal with the many business, organizational, and technical complexities that today's software development organizations face. This is what Agility@Scale is all about: explicitly addressing the complexities which disciplined agile delivery teams face in the real world. The agile scaling factors which we've found to be important are:
- Team size. Mainstream agile processes work very well for smaller teams of ten to fifteen people, but what if the team is much larger? What if it's fifty people? One hundred people? One thousand people? Paper-based, face-to-face strategies start to fall apart as the team size grows.
- Geographical distribution. What happens when the team is distributed, perhaps across floors within the same building, different locations within the same city, or even different countries? Suddenly effective collaboration becomes more challenging and disconnects are more likely to occur.
- Compliance requirement. What if regulatory issues, such as Sarbanes-Oxley, ISO 9000, or FDA CFR 21, are applicable? These issues bring requirements of their own that may be imposed from outside your organization, in addition to the customer-driven product requirements.
- Enterprise discipline. Most organizations want to leverage common infrastructure platforms to lower cost, reduce time to market, and to improve consistency. To accomplish this they need effective enterprise architecture, enterprise business modeling, strategic reuse, and portfolio management disciplines. These disciplines must work in concert with, and better yet enhance, your disciplined agile delivery processes.
- Organizational complexity. Your existing organizational structure and culture may reflect traditional values, increasing the complexity of adopting and scaling agile strategies within your organization. To make matters worse, different subgroups within your organization may have different visions as to how they should work. Individually those ways of working can be quite effective, but as a whole they simply don't work together effectively.
- Organizational distribution. Sometimes a project team includes members from different divisions, different partner companies, or external services firms. This lack of organizational cohesion can greatly increase the risk to your project.
- Technical complexity. Some applications are more complex than others. It's fairly straightforward to achieve high levels of quality if you're building a new system from scratch, but not so easy if you're working with existing legacy systems and legacy data sources which are less than perfect. It's straightforward to build a system using a single platform, not so easy if you're building a system running on several platforms or built using several disparate technologies.
- Domain complexity. Sometimes the nature of the problem that your team is trying to address is very complex in its own right.
Each factor has a range of complexities, and each team will face a different combination of them, and will therefore need a process, team structure, and tooling environment tailored to its unique situation.