A few days later someone asked a series of questions that I thought would make an interesting blog posting, so here goes:

How much of IBM's projects (in percentage) are agile at the moment?

I don't have exact numbers, but I believe that 90%+ of our teams in SWG are applying agile techniques in practical ways that make sense for their projects. The primary goal is to be effective – frequent releases, higher quality, and happy customers – not just agile. By the way, there are roughly 30,000 developers in SWG.

Can all of IBM's projects work with an agile methodology?

It's certainly possible, but it may not always make sense. Products that are in maintenance mode with few bugs or feature requirements may not benefit as much from agile practices -- those teams will likely continue to do whatever they have been doing. Having said that, it's still highly desirable to apply agile techniques on maintenance projects. Also, agile methods can be harder to use on some projects than others, for example around hardware development. As a general rule, I believe that the majority of software projects can benefit from agile techniques. The primary determinant of whether a team can adopt agile techniques is culture and skill – not team size, the domain, or the degree of geographic distribution. That notion surprises many people who think that large or geographically distributed teams can't succeed in adopting agile practices.

Are agile projects sub-parts of large waterfall projects?

In some cases that may happen, and I'm sure the reverse is true as well. We see many customers who are migrating from waterfall projects to a more agile way of working, and they often start this migration with smaller sub-projects. At IBM we have tens of thousands of developers worldwide on hundreds of teams, so we have examples of pretty much any combination of agile, iterative, and traditional practices that you can imagine. There's definitely not one size that fits all, which is a key aspect of the Disciplined Agile Delivery (DAD) process framework.

What do you think the impact of these numbers will be on the PM community?

The IBM PM community is embracing agile. The reality is that a majority of development organizations around the world are moving to agile now as well (as much as 80% in some of the recent studies I've seen). I look forward to the increased adoption of agile methods by the PM community in general. The fact that PMI now offers an Agile Certified Practitioner program underscores how widely agile practices are being adopted in the mainstream, which is a great thing to see.
|
I'm happy to announce that a revised version of the Lean Development Governance white paper, which I co-wrote with Per Kroll, is now available. This version of the paper reflects what we've learned over the past few years helping organizations improve their governance strategies. There's a more detailed description of the paper here.
|
For several years now I've written various articles and newsletters on the topics of estimating and funding strategies for software development projects, and in particular for agile software development projects. Whenever I talk to people about agile software development, or coach them in how to succeed at it, some of the very first questions I'm asked, particularly by anyone in a management role, are about how to fund agile projects. Apparently a lot of people think that you can only apply agile strategies on small, straightforward projects where it makes sense to take a time and materials (T&M) approach. In fact you can apply agile strategies in a much greater range of situations, as the various surveys that I and others have run show time and again. My goal with this blog posting is to summarize the various strategies for, and issues surrounding, the funding of agile software development projects. There are three basic strategies for funding projects, although several variations clearly exist. These strategies are:
- "Fixed price". At the beginning of the project develop, and then commit to, an initial estimate based on your up-front requirements and architecture modeling efforts. Hopefully this estimate is given as a range: studies have shown that up-front estimating techniques such as COCOMO II or function points are accurate to within +/- 30% most of the time, although my July 2009 State of the IT Union survey found that on average organizations are shooting for +/- 11% (their actuals come in at +/- 19% on average, but only after doing things such as dropping scope, changing the estimate, or changing the schedule). Fixed-price funding strategies are very risky in practice because they promote poor behavior on the part of development teams as they try to overcome the risks foisted upon them by this poor business decision. It is possible to do agile on a fixed budget, but I really wouldn't recommend it (nor would I recommend it for traditional projects). If you're forced to take a fixed-price approach, and many teams are because the business hopes to reduce its financial risk this way without realizing that it actually increases that risk, then adopt strategies that reduce the risk.
- Stage gate. Estimate and then fund the project for a given period of time. For example, fund the project for a 3-month period, then evaluate its viability, providing funding for another period only to the extent that it makes sense. Note that stages don't have to be based on specific time periods; they could instead be based on goals such as initiating the project, proving the architecture with working code, or building a portion of the system. Disciplined agile methods such as Open Unified Process have built-in stage-gate decision points which enable this sort of strategy. When the estimation technique is pragmatic, the best approaches are to have either the team itself provide an estimate for the next stage or to have an expert provide a good gut-feel estimate (or better yet, have the expert work with the team to develop the estimate). Complex approaches such as COCOMO II or SLIM are often little more than a process facade covering up the fact that software estimating is more of an art than a science, and they prove to be costly and time consuming in practice.
- Time and materials (T&M). With this approach you pay as you go, requiring your management team to actually govern the project effectively. Many organizations believe a T&M strategy to be very risky, and it is when your IT governance strategy isn't very effective. An interesting variation, particularly when a service provider is doing the development, is an approach where a low rate is paid for the provider's time, covering their basic costs; the cost of materials is paid out directly; and delivery bonuses are paid for working software (see the sketch following this list). This spreads the risk between the customer/stakeholder and the service provider: the provider has their costs covered but won't make a profit unless they consistently deliver quality software.
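To make that risk-sharing variation concrete, here is a minimal sketch in Python of the blended payment model. The rates, hours, and bonus amount are illustrative assumptions on my part, not figures from any real engagement.

```python
# A sketch of the risk-sharing T&M variation described above: a low hourly
# rate covers the provider's basic costs, materials are billed directly, and
# a bonus is paid only when working software is delivered. All numbers and
# names are illustrative assumptions.

def invoice(hours: float, cost_rate: float, materials: float,
            delivered_working_software: bool, delivery_bonus: float) -> float:
    """Amount owed for one billing period under the blended model."""
    base = hours * cost_rate + materials  # covers costs; no profit margin here
    bonus = delivery_bonus if delivered_working_software else 0.0
    return base + bonus

# Two iterations: one where the team delivers working software, one where it doesn't.
print(invoice(400, 60.0, 2_000.0, True, 15_000.0))   # 41000.0 -> provider profits
print(invoice(400, 60.0, 2_000.0, False, 15_000.0))  # 26000.0 -> costs covered only
```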
The point is that there are several strategies for funding agile software development projects, just as there are several strategies for funding traditional projects. My experience is that fixed-price funding is an incredibly poor practice which dramatically increases the risk of your software development projects. I recognize how hard it can be to change this desire on the part of our business stakeholders, but I have also had success changing their minds. If you choose to persevere, which is a difficult decision to make, you can help your organization's decision makers adopt more effective strategies. Like you, they want to improve the effectiveness of your IT efforts. Further reading (in recommended order):
- Something's Gotta Give: Argues for a flexible approach to funding, schedule, and/or scope.
- Agile on a Fixed Budget: Describes in detail how to take a fixed-price approach on agile projects.
- The Dire Consequences of Fixed-Price IT Projects: Describes in detail the questionable behavior exhibited by IT teams when forced to take a fixed-price approach.
- Is Fixed-Price Software Development Unethical?: Questions the entire concept of fixed-price IT projects, overviewing some of the overwhelming evidence against this really poor practice.
- Reducing the Risk of Fixed-Price Projects: Describes viable strategies for addressing some of the problems resulting from the decision to fund a project on a fixed-price basis.
- Strategies for Funding Software Development Projects: Describes several variations on the strategies described above.
- Lies, Great Lies, and Software Development Project Plans: Summarizes some results from the July 2009 State of the IT Union survey which explored issues related to project funding (among many).
|
I've been getting a lot of questions lately about applying the acceleration metric in practice. So, here are some answers to frequently asked questions:

1. How do I monetize acceleration? This is fairly straightforward to do. For example, assume your acceleration is 0.007 (0.7%), there are five people on the team, your annual burdened cost per person is $150,000, and you have two-week iterations. All these numbers are made up, but you know how to calculate acceleration now, and IT management had darn well better know the average burdened cost (salary plus overhead) of their staff. So, per iteration the average burdened cost per person must be $150,000/26 = $5,770. Productivity improvement per iteration for this team must be $5,770 * 5 * .007 = $202. If the acceleration stayed constant at 0.7%, the overall productivity improvement for the year would be (1.007)^26 (assuming the team works all 52 weeks of the year), which is 1.198, or 19.8%. This would be a savings of $148,500 (pretty much the equivalent of one new person). Of course a 20% productivity increase over an entire year is a really aggressive improvement, regardless of some of the claims made by the agile snake oil salesmen out there, although a 10-15% increase is a reasonable expectation. What I'd really want to do is calculate the acceleration for the year by comparing the velocity from the beginning of the year to the end of the year (in Western cultures I'd want to avoid comparing iterations near the holidays). So, if the team's velocity the first week of February 2008 was 20 points, and the same team's velocity the first week of February 2009 was 23 points, that's an acceleration of (23-20)/20 = 15% over a one-year period, for a savings of $112,500.

2. Is acceleration really unitless? For the sake of comparison it is. The "units" are % change in points per iteration, or % change in points per time period, depending on the way that you want to look at it. Because it's a percentage I can easily monetize it, as you see above, and use it as a basis of comparison.

3. How do I convince teams to share their data? This can be difficult. Because acceleration is easy to calculate for agile teams, and because it's easy to use to compare teams (my team has .7% acceleration whereas other teams down the hall from mine have accelerations of .3% and -.2%), people are concerned that this metric will be used against them. OK, to be fair, my team might be OK with this. ;-) Seriously though, this is a valid fear that will only be addressed by an effective governance program based on enablement, collaboration, and trust instead of the traditional command-and-control approach. Management's track record regarding how they've used measurements in the past, and how they've governed in general, has a great effect on people's willingness to trust them with new metrics such as acceleration. The implication is that you need to build up trust, something that could take years, if it's possible at all.

4. Why does this work for agile teams? Agile teams are self-organizing, and an implication of that is that they will be held accountable for their estimates. Because of this accountability, and because velocity is a vital input into their planning and estimation efforts, agile teams are motivated to calculate their velocity accurately and to track it over time. Because they're eager to get their velocity right, and because acceleration is based on velocity, there's an exceptionally good chance that it's accurate.
5. What about function points or similar productivity measures? Function points can be calculated for projects being developed via an agile approach, or other approaches for that matter, but it's a very expensive endeavor compared to calculating acceleration (which is essentially free) and will likely be seen as bureaucratic overhead by the development team. My rule of thumb is that if you're not being explicitly paid to count function points (for example, the US DoD will often pay contracting companies to create estimates based on function point counts) then I wouldn't bother with them.

6. What about calculating acceleration for iterative project teams? Iterative project teams, perhaps following the Rational Unified Process (RUP), can choose to calculate and track their velocity and thereby their acceleration. The key is to allow the team to be self-organizing and accountable for their estimates, which in turn motivates them to get their velocity right, just like agile teams (RUP can be as agile as you want to make it, don't let anyone tell you differently).

7. What about calculating acceleration for traditional project teams? In theory this should work; in practice it is incredibly unlikely. Traditional teams don't work in iterations where working software is produced on a regular basis, and they're typically not self-organizing, so there really isn't any motivation to calculate velocity (and even if they do, there is little motivation to get it right). Without knowing the velocity you can't calculate acceleration. If you can't trust the velocity estimate, and I certainly wouldn't trust a traditional team's velocity estimate, then you can't trust your acceleration calculation. So, my fallback position for calculating productivity improvement would be to do something like function point counting (which is expensive and difficult to compare between teams due to the different fudge factors used by different FP counters) and then look at the change in FPs delivered over time.

8. How can I apply this across a department? It is fairly straightforward to roll up the acceleration of project teams into an overall acceleration measure for a portfolio of teams simply by taking a weighted average based on team size, as sketched in the code below. However, this is only applicable to teams that are in a position to report an accurate acceleration (the agile and iterative teams) and, of course, are willing to do so.

9. What does a negative acceleration tell me? If the acceleration is negative then productivity on the team is going down, likely an indicator of quality and/or teamwork problems. However, you don't want to manage by the numbers, so rather than assume, talk to the team to see what's actually going on.

10. What does a zero acceleration tell me? This is an indication that the team's productivity is not increasing, and that perhaps they should consider doing retrospectives at the end of each iteration and then acting on the results. Better yet, they can "dial up" their process improvement efforts by adopting something along the lines of IBM Rational Self Check.
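To make the arithmetic above concrete, here is a minimal sketch in Python of monetizing acceleration and rolling it up across a portfolio. The numbers mirror the made-up example in question 1; the function names and the 26-iterations-per-year assumption are mine, not part of any IBM tooling.

```python
# A sketch of monetizing acceleration (question 1) and rolling it up across
# a portfolio by team-size-weighted average (question 8). All names and the
# 26 two-week iterations per year are illustrative assumptions.

def iteration_cost(annual_burdened_cost, team_size, iterations_per_year=26):
    """Average burdened cost of the whole team for a single iteration."""
    return annual_burdened_cost / iterations_per_year * team_size

def savings_per_iteration(acceleration, annual_burdened_cost, team_size):
    """Productivity improvement, in dollars, for one iteration."""
    return iteration_cost(annual_burdened_cost, team_size) * acceleration

def annual_improvement(acceleration, iterations_per_year=26):
    """Compounded productivity improvement over a full year."""
    return (1 + acceleration) ** iterations_per_year - 1

def portfolio_acceleration(teams):
    """Weighted average acceleration over (team_size, acceleration) pairs."""
    total_people = sum(size for size, _ in teams)
    return sum(size * accel for size, accel in teams) / total_people

if __name__ == "__main__":
    accel, team_size, cost = 0.007, 5, 150_000.0
    print(f"Savings per iteration: ${savings_per_iteration(accel, cost, team_size):,.0f}")  # ~$202
    print(f"Annual improvement: {annual_improvement(accel):.1%}")  # ~19.8-19.9%, worth ~$148,500
    print(f"Portfolio: {portfolio_acceleration([(5, 0.007), (8, 0.003), (4, -0.002)]):.3%}")
```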
|
I'm often asked by customers for case studies of successful agile adoptions or agile projects in general. This is definitely a valid request, and yes, such case studies exist. But I'm often concerned that the people making these requests don't appreciate the implications of what they're asking for. My concerns with case studies are:
- The juicy information is rarely included. The information that you really want, such as what went wrong and why, is rarely discussed. If problems, oops I mean "challenges", are discussed at all they're typically glossed over in favor of focusing on the positives. Although many people want to write up the juicy bits, this information is invariably edited out by the company's vetting process. In short, my advice is to take case studies with a grain of salt.
- Some case studies are more fiction than fact. Although this isn't a problem with IBM case studies, due to the governance efforts of my good friends in IBM's legal department (we love you folks, really), it can be an issue with others.
- The case study may no longer be true today. Stuff happens. Perhaps the case study was mostly true at the time it was written, but now that time has passed, problems have appeared that weren't apparent earlier, and the effort wasn't nearly as successful as it was written up to be. For example, a few years ago I ran into the manager of a team that I had read about in a case study, only to find out that once the study was published the key team members left the company to become consultants in that subject area. Having lost these people, who were all very highly skilled, his system proved to be unmaintainable by the rest of his staff, who weren't so highly skilled, and it had to be rewritten. Over time the success story turned into an abject failure.
- Waiting for case studies puts you in the position of follower. For every case study that gets written, dozens, if not hundreds, of similar efforts don't get written up. Writing case studies is hard, takes time, and the writer seldom gets much benefit from doing so. The lag time between the project completing and the case study being published can be many months, sometimes years. The implication is that by the time you've waited for several case studies similar to your situation, you've pretty much lost all opportunity for competitive advantage and are now merely trying to catch up to the organizations who are clearly ahead of you (the writers of the case studies).
- What has the requester given back to the community? I often hear people lament that there aren't enough case studies, or that there isn't something close enough to their situation. Yet when I ask them how many case studies they've written, the answer is usually none. If you want to get, you also need to give. ;-)
So, next time you think you need a case study before making a decision, recognize that you may be paying a fairly high opportunity cost for information that is questionable at best.
|
The explicit phases of the Unified Process -- Inception, Elaboration, Construction, and Transition -- and their milestones are important strategies for scaling agile software development to meet the real-world needs of modern organizations. Yes, I realize that this is heresy for hard-core agilists who can expound upon the evils of serial development, yet these very same people also take a phased approach to development, although they are loathe to admit it. The issue is that the UP phases are like seasons of a project: although you'll do the same types of activities throughout a project, the extent to which you do them and the way in which you do them change depending on your goals. For example, at the beginning of a development project, if you want to be effective you need to do basic things like identify the scope of the project, identify a viable architecture strategy, start putting together your team, and obtain support for the project. Towards the end of a project your focus is on the activities surrounding the deployment of your system into production, including end-of-lifecycle testing efforts, training, cleaning up documentation, piloting the system with a subset of users, and so on. In between you focus on building the system, including analysis, design, testing, and coding. Your project clearly progresses through different phases, or call them seasons if the term phase doesn't suit you, whether your team is agile or not. The UP defines four phases, each of which addresses a different kind of risk:

1. Inception. This phase focuses on addressing business risk by having you drive to scope concurrence amongst your stakeholders. Most projects have a wide range of stakeholders, and if they don't agree to the scope of the project and recognize that others have conflicting or higher-priority needs, your project risks getting mired in political infighting. In the Eclipse Way this is called the "Warm Up" iteration; in other agile processes it's "Iteration 0".

2. Elaboration. The goal of this phase is to address technical risk by proving the architecture through code. You do this by building an end-to-end skeleton of your system which implements the highest-risk requirements. Some people will say that this approach isn't agile, that your stakeholders should be the only ones to prioritize requirements. Yes, I agree with that, but I also recognize that there is a wide range of stakeholders, including operations people and enterprise architects, who are interested in the technical viability of your approach. I've also noticed that the high-risk requirements are often the high-business-value ones anyway, so you usually need to do very little reorganization of your requirements stack.

3. Construction. This phase focuses on implementation risk, addressing it through the creation of working software each iteration. This phase is where you put the flesh onto the skeleton.

4. Transition. The goal of this phase is to address deployment risk. There is usually a lot more to deploying software than simply copying a few files onto a server, as I indicated above. Deployment is often a complex and difficult task, one which you often need good guidance to succeed at.

Each phase ends with a milestone review, which could be as simple as a short meeting, where you meet with prime stakeholders who will make a "go/no-go" decision regarding your project. They should consider whether the project still makes sense (perhaps the situation has changed) and whether you're addressing the project risks appropriately.
This is important for "agile in the small" but also for "agile in the large" because at scale your risks are often much greater. Your prime stakeholders should also verify that you have in fact met the criteria for exiting the phase. For example, if you don't have an end-to-end working skeleton of your system then you're not ready to enter the Construction phase. Holding these sorts of milestone reviews improves your IT governance efforts by giving senior management valuable visibility at the level that they actually need: when you have dozens or hundreds of projects underway, you can't attend all of the daily stand-up meetings of each team, nor do you even want to read summary status reports. These milestone reviews enable you to lower project risk. Last autumn I ran a survey via Dr. Dobb's Journal (www.ddj.com) which explored how people actually define success for IT projects and how successful we really are. We found that when people define success in their own terms, Agile has a 71% success rate compared with 63% for traditional approaches. Although it's nice to see that Agile appears to be lower risk than traditional approaches, a 71% success rate still implies a 29% failure rate. The point is that it behooves us to actively monitor development projects to determine whether they're on track, and if not, either help them get back on track or cancel them as soon as we possibly can. Hence the importance of occasional milestone reviews where you make go/no-go decisions. If you're interested in the details behind the survey, they can be found at http://www.ambysoft.com/surveys/success2007.html . Done right, phases are critical to your project's success, particularly at scale. Yes, the traditional community seems to have gone overboard with phase-based approaches, but that doesn't mean that we need to make the same mistakes. Let's keep the benefit without the cost of needless bureaucracy.
|
During 2007 Per Kroll and I invested a significant amount of time developing a framework for lean development governance. This effort resulted in a series of three articles published in The Rational Edge and a recently published white paper. The articles go into the various practices in detail whereas the paper provides an overview aimed at executives. I also recently did a webcast which is now available online. The URLs are at the bottom of this blog posting. Development governance isn't a sexy topic, but it is critical to the success of any IT department. I like to compare traditional, command-and-control approaches to governance to herding cats – you do a bunch of busy work which seems like a great idea in theory, but in the end the cats will ignore your efforts and stay in the room. Yet getting cats out of a room is easy to accomplish, as long as you know what motivates cats. Simply wave some fish in front of their noses and you'll find that you can lead them out of the room with no effort at all. Effective governance for lean development isn't about command and control. Instead, the focus is on enabling the right behaviors and practices through collaborative and supportive techniques. It is far more effective to motivate people to do the right thing than it is to try to force them to do so. This framework is based on the philosophical foundation provided by the 7 principles proposed in the book "Lean Software Development" by Mary and Tom Poppendieck. The 7 principles are:

1. Eliminate Waste. The three biggest sources of waste in software development are the addition of extra features, churn, and crossing organizational boundaries. Crossing organizational boundaries can increase costs by 25% or more because it creates buffers which slow down response time and interfere with communication. It is critical that development teams are allowed to organize themselves, and run themselves, in a manner which reflects the work that they're trying to accomplish.

2. Build Quality In. If you routinely find problems with your verification process then your process must be defective. When it comes to governance, if you regularly find that developers are doing things that you don't want them to do, or are not doing things that they should be, then your approach to governance must be at fault. The strategy is not to make governance yet another set of activities that you layer on top of your software process; instead, embed it into your process to make it as easy as possible for developers to do the right thing.

3. Create Knowledge. Planning is useful, but learning is essential.

4. Defer Commitment. You do not need to start software development by defining a complete specification; instead, work iteratively. You can support the business effectively through flexible architectures that are change tolerant and by scheduling irreversible decisions for the last possible moment. This also requires the ability to closely couple end-to-end business scenarios to capabilities developed in potentially several different applications by different projects.

5. Deliver Fast. It is possible to deliver high-quality systems fast and in a timely manner. By limiting the work of a team to its capacity – not trying to force the team to do more than it is capable of, but instead asking it to self-organize and thereby determine what it can accomplish – you can establish a reliable and repeatable flow of work.

6. Respect People. Sustainable advantage is gained from engaged, thinking people.
The implication is that you need a human resources strategy which is specific to IT, and that you need to focus on enabling teams, not on controlling them.

7. Optimize the Whole. If you want to govern your development efforts effectively you must look at the bigger picture, not just individual project teams. You need to understand the high-level business processes which the individual systems support, processes which often cross multiple systems. You need to manage programs of interrelated systems so that you can deliver a complete product to your stakeholders. Measurements should address how well you're delivering business value, because that is the raison d'etre of your IT department.

Based on our experiences, and guided by the 7 principles, Per Kroll and I identified 18 practices of lean development governance. We've organized these practices into 6 categories:

1. The Roles & Responsibilities category:
- Promote Self-Organizing Teams. The best people to plan work are the ones who are going to do it.
- Align Team Structure With Architecture. The organization of your project team should reflect the desired architectural structure of the system you are building, to streamline the activities of the team.

2. The Organization category:
- Align HR Policies With IT Values. Hiring, retaining, and promoting technical staff requires different strategies compared to non-technical staff.
- Align Stakeholder Policies With IT Values. Your stakeholders may not understand the implications of the decisions that they make; for example, requiring an "accurate" estimate at the beginning of a project can dramatically increase project risk instead of decreasing it as intended.

3. The Processes category:
- Adapt the Process. Because teams vary in size, distribution, purpose, criticality, need for oversight, and member skillset, you must tailor the process to meet a team's exact needs.
- Continuous Improvement. You should strive to identify and act on lessons learned throughout the project, not just at the end.
- Embedded Compliance. It is better to build compliance into your day-to-day process than to have a separate compliance process, which often results in unnecessary overhead.
- Iterative Development. An iterative approach to software delivery allows progressive development and disclosure of software components, with a reduction of overall failure risk, and provides the ability to make fine-grained adjustments and corrections with minimal time lost to rework.
- Risk-Based Milestones. You want to mitigate the risks of your project, in particular business and technical risks, early in the lifecycle. You do this by having several milestones throughout your project that teams work toward.

4. The Measures category:
- Simple and Relevant Metrics. You should automate metrics collection as much as possible, minimize the number of metrics collected, and know why you're collecting them.
- Continuous Project Monitoring. Automated metrics gathering enables you to monitor projects and thereby identify potential issues so that you can collaborate closely with the project team to resolve problems early.

5. The Mission & Principles category:
- Business-Driven Project Pipeline. You should invest in the projects that are well aligned to the business direction, return definable value, and match well with the priorities of the enterprise.
- Pragmatic Governance Body. Effective governance bodies focus on enabling development teams in a cost-effective and timely manner.
They typically have a small core staff, with a majority of members being representatives from the governed organizations.
- Staged Program Delivery. Programs, which are collections of related projects, should be rolled out in increments over time. Instead of holding back a release to wait for a subproject, each individual subproject must sign up to a predetermined release date. If a subproject misses the date it skips to the next release, minimizing the impact to the customers of the program.
- Scenario-Driven Development. By taking a scenario-driven approach, you can understand how people will actually use your system, thereby enabling you to build something that meets their actual needs. The whole cannot be defined without understanding the parts, and the parts cannot be defined in detail without understanding the whole.

6. The Policies & Standards category:
- Valued Corporate Assets. Guidance, such as programming guidelines or database design conventions, and reusable assets, such as frameworks and components, will be adopted if they are perceived to add value to developers. You want to make it as easy as possible for developers to comply with, and more importantly take advantage of, your corporate IT infrastructure.
- Flexible Architectures. Architectures that are service-oriented, component-based, or object-oriented and that implement common architectural and design patterns lend themselves to greater levels of consistency, reuse, enhanceability, and adaptability.
- Integrated Lifecycle Environment. Automate as much of the "drudge work", such as metrics gathering and system builds, as possible. Your tools and processes should fit together effectively throughout the lifecycle.

The URLs for the 3 articles:
Principles and Organizations: http://www.ibm.com/developerworks/rational/library/jun07/kroll/
Processes and Measures: http://www.ibm.com/developerworks/rational/library/jul07/kroll_ambler/
Roles and Policies: http://www.ibm.com/developerworks/rational/library/aug07/ambler_kroll/

The URL for the white paper:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-ldg

The URL for the webcast:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=dw-c-wcsdpr&S_PKG=112907C
|
My new paper Scaling Agile: An Executive Guide is now available. As the title suggests, the paper overviews how to scale agile strategies to meet your organization's unique needs. The executive summary:

Agile software development is a highly collaborative, quality-focused approach to software and systems delivery, which emphasizes potentially shippable working solutions produced at regular intervals for review and course correction. Built upon the shoulders of iterative development techniques, and standing in stark contrast to traditional serial or sequential software engineering methods, agile software delivery techniques hold such promise that IBM has begun to adopt agile processes throughout its Software Group, an organization with over 25,000 developers. But how can practices originally designed for small teams (10-12 people) be "scaled up" for significantly larger operations? The answer is what IBM calls "agility@scale." There are two primary aspects of scaling agile techniques that you need to consider. First is scaling agile techniques at the project level to address the unique challenges individual project teams face. This is the focus of the Agile Scaling Model (ASM). Second is scaling your agile strategy across your entire IT department, as appropriate. It is fairly straightforward to apply agile on a handful of projects, but it can be very difficult to evolve your organizational culture and structure to fully adopt the agile way of working. The Agile Scaling Model (ASM) defines a roadmap for effective adoption and tailoring of agile strategies to meet the unique challenges faced by a software and systems delivery team. Teams must first adopt a disciplined delivery lifecycle that scales mainstream agile construction techniques to address the full delivery process, from project initiation to deployment into production. Then teams must determine which agile scaling factors – team size, geographical distribution, regulatory compliance, domain complexity, organizational distribution, technical complexity, organizational complexity, or enterprise discipline, if any – are applicable to them, and tailor their adopted strategies accordingly to address their specific range of complexities. When scaling agile strategies across your entire IT organization you must effectively address five strategic categories – the Five Ps of IT: People, principles, practices, process, and products (i.e., technology and tooling). Depending on your organizational environment, the level of focus on each area will vary. What we are finding within many organizations, including IBM, is that the primary gating factor for scaling agile across your entire organization is your organization's ability to absorb change.
|
A common goal of IT governance is to determine the productivity of various techniques, tools, and people as part of the overall effort to improve said productivity. If you can easily measure productivity you can easily identify what is working for you in given situations, or what is not working for you, and adjust accordingly. A common question that customers ask me is how to measure productivity on agile teams. Although you could use traditional strategies such as function point (FP) counting, or another similar strategy, this can require a lot of effort in practice. Remember that we don't only want to measure productivity, we want to do so easily. Ideally it would be nice to do so using information already being generated by the team, so that we don't add any additional bureaucratic overhead.

A common metric captured by agile teams is their velocity, an agile measure of how much work a team can do during a given iteration. At the beginning of an iteration a team will estimate the work that they're about to do in terms of points. At the beginning of a project the team will formulate a point system, which typically takes a few iterations to stabilize, so that they can consistently estimate the work each iteration. Although the point system is arbitrary -- my team might estimate that a given work item is two points worth of effort whereas your team might think that it's seven points of effort -- the important thing is that it's consistent: if there is another work item requiring similar effort, my team should estimate that it's two points and your team seven points. With a consistent point system in place, each team can accurately estimate the amount of work that they can do in the current iteration by assuming that they can achieve the same amount of work as last iteration (an XP concept called "yesterday's weather"). So, if my team delivered 27 points of functionality last iteration, we would reasonably assume that we can do the same this iteration.

So, is it possible to use velocity as a measure of productivity? Not directly. For example, say we have two teams, A and B, each with 5 people, each working on a web site, and each having two-week iterations. Team A reports a velocity of 17 points for their current iteration and team B a velocity of 51 points. They're both comprised of 5 people, therefore team B must be three times (51/17) as productive as team A. No! You can't compare the velocity of the two teams because they're measuring in different units: team A is reporting in their points and B in theirs. The traditional strategy would be to ask the teams to use the same unit of points, which might be viable with two teams but likely not with twenty agile teams, and particularly not with two hundred. Regardless of the number of teams, it would minimally require some coordination to normalize the units, and perhaps even some training and the development and support of velocity calculation guidelines. That sounds like unnecessary bureaucracy that I would prefer to avoid. Worse yet, so-called "consistent" measurements such as FPs are anything but, because there's always some sort of fudge factor involved in the process which will vary by individual estimator. An easier solution exists: instead of comparing velocities, you calculate the acceleration of each team. For example, consider the reported velocities of each team below.
Team A: 17 18 17 18 19 20 21 22 22 ...
Team B: 51 49 50 47 48 45 44 44 41 ...

Team A's velocity is increasing over time whereas team B's velocity is trending downwards. All things being equal, you can assume that team A's productivity is increasing whereas B's is decreasing. Of course it's not wise to manage simply by the numbers, so instead of assuming what is going on I would rather go and talk with the people on the two teams. Doing so, I might find out that team A has adopted quality-oriented practices such as continuous integration and static code analysis which team B has not, indicating that I might want to help team B adopt these practices and hopefully increase their productivity.

There are several advantages to using acceleration as an indicator of productivity over traditional techniques such as FP counting:

1. It's easy to calculate. For example, the acceleration of team A from iteration 1 to iteration 6 is (20-17)/17 = 0.176 whereas for team B it is (45-51)/51 = -0.118 (see the code sketch at the end of this posting). Of course, you don't need to calculate the acceleration over such a long period of time -- you could do it iteration by iteration -- although I find that doing it over several iterations gives a more accurate value. You'll need to experiment to determine what works for you.

2. It is inexpensive. Acceleration is based on information already being collected by the team, their velocity, so there is no extra work to be done by the team.

3. It is unlikely to be gamed. Teams aren't motivated to fake their velocity because it provides them with important information required to manage themselves effectively.

4. It is easy to automate. For example, Rational Team Concert (RTC) calculates velocity automatically from its work item list (an extension of Scrum's product backlog) and does trend reporting via its web-based project reporting functionality, providing a visual representation of the team's acceleration (or deceleration as the case may be).

5. It offers the opportunity for more effective governance. This approach reflects three of the practices of Lean Development Governance: Simple and Relevant Metrics, Continuous Project Monitoring, and Integrated Lifecycle Environment.

6. You can easily adjust for changing team size. If the size of a team varies over time, and it will, this metric falls apart the way that I've described it. To address this issue you need to normalize it by dividing by the number of people on the team, giving the average acceleration per team member.

7. You can easily monetize this metric. By knowing the acceleration of the project team and knowing how much they're spending each iteration, you can estimate the amount of money you're saving through process improvement. For example, if you're spending $100,000 per iteration and your acceleration is 2%, your cost savings is $2,000 per iteration.

Of course, nothing is perfect, and there are a few potential disadvantages:

1. It is an indirect measure of productivity. Truth be told, velocity really is a productivity measure; it's just that because it's measured in different units it's difficult to compare between teams. Acceleration is merely an indicator of the change in productivity.

2. You actually need to measure what you're interested in. When you step back and think about it, you're not really interested in measuring your productivity, regardless of what the metrics wonks have been telling you the past few decades.
In this case what you really want to know is your change in productivity, because your real goal is to improve your productivity over time, which is what acceleration actually measures.

3. Management must be flexible. For this to be acceptable, senior management must be willing to think outside the "traditional metrics box". Using a non-standard, simple metric to calculate productivity? Preposterous! Directly measuring what you're truly interested in instead of calculating trends over long periods of time? Doubly preposterous!

4. Your existing measurement program may be questioned. Once management learns how easy it can be to obtain metrics which enable them to truly govern software development projects, they may begin to question the investment that they've made in the past in overly complex and expensive metrics schemes. This can be dangerous for the metrics professionals in your organization, particularly if your metrics group doesn't have valid measurements around the value of their own work. Ummmmm....

5. The terminology sounds scientific. Terms such as velocity and acceleration can motivate some of us to start believing that we understand the "laws of IT physics", something which I doubt very highly that as an industry we understand. All it would take is for someone to start throwing around terms like "standard theory" and "unified model" and we'd really be in trouble. Wait a minute..... ;-)

In summary, the acceleration of a development team is an easy-to-collect, straightforward indicator of team productivity. I hope that I've given you some food for thought, and would be eager to hear about your experiences applying this metric in practice.
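As promised above, here is a minimal sketch in Python of the acceleration calculation using the example velocity series; the helper name and the choice of iterations are mine, purely for illustration.

```python
# A sketch of calculating acceleration from a team's velocity history.
# The velocities are the example series for teams A and B given above.

def acceleration(velocities, start=0, end=-1):
    """Relative change in velocity between two iterations (0-indexed)."""
    v_start, v_end = velocities[start], velocities[end]
    return (v_end - v_start) / v_start

team_a = [17, 18, 17, 18, 19, 20, 21, 22, 22]
team_b = [51, 49, 50, 47, 48, 45, 44, 44, 41]

# Iteration 1 to iteration 6, as in advantage 1: (20-17)/17 and (45-51)/51
print(f"Team A: {acceleration(team_a, 0, 5):+.3f}")  # +0.176
print(f"Team B: {acceleration(team_b, 0, 5):+.3f}")  # -0.118

# To adjust for changing team size (advantage 6), divide by head count to
# get the average acceleration per team member.
print(f"Team A, per person (5 people): {acceleration(team_a, 0, 5) / 5:+.4f}")
```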
|
I recently wrote a detailed article about Large Agile Teams, a walkthrough of how to structure agile teams of various sizes. I suspect that this is the most comprehensive online discussion of the topic. The article addresses the following topics:
- Organizing Agile Teams. The article starts with a summary of the results of some industry research that I've done regarding the size of agile teams, showing that agile techniques are in fact being successfully applied by teams of a variety of sizes. It then goes into detail describing the organization structure of agile teams at various sizes, beginning with small agile teams, covering the common rhetoric about how to organize such a team and then making observations about what actually happens in practice. Next it walks through two approaches to organizing medium-sized teams of 15 to 50 people: a structure for a single team and a structure for a team of teams. Finally, it walks through how to organize a large agile program of 50+ people, focusing a fair bit on the need for a leadership team to coordinate the overall activities within the program. This advice is similar to what is seen in the SAFe framework, although it proves to be a bit more flexible and pragmatic in practice.
- Supporting Large Agile Teams. The leadership structure needed to support a large agile team is reasonably straightforward once you understand the issues that such a team faces. In this section the article overviews the need for three important sub-teams within your overall leadership team: the Product Delivery Team, the Product Management Team, and the Architecture Team. It also describes the need for an optional Independent Testing/Integration Team, something misleadingly labeled an integration team in SAFe, which reflects some of the known agile testing and quality practices that I've been writing about for several years.
- Organizing Subteams. The article includes a detailed discussion of how to organize the work addressed by agile subteams within a large agile program. These strategies include feature teams, component teams, and internal open source teams. As you would expect with the Disciplined Agile Delivery (DAD) framework, the article clearly summarizes the advantages and disadvantages of each approach and provides guidance for when (and when not) to apply each one. I suspect you'll find this portion of the article to be one of the most coherent discussions of the feature-team vs. component-team debate.
- Tailoring Agile Practices. The article provides a detailed overview of how the various DAD process goals are tailored to address the issues faced by large teams. This advice includes: do a bit more up-front requirements exploration; do a bit more up-front architectural modelling; do a bit more initial planning; adopt more sophisticated coordination activities; adopt more sophisticated testing strategies; and integrate regularly. My hope is that you find this part of the article illuminating regarding how the DAD framework provides flexible and lightweight advice for tailoring your approach to address the context of the situation that you face.
- Other Resources. The article ends with a collection of links to other resources on this topic.
I welcome any feedback that you may have about Large Agile Teams.
|
The basic idea behind DevOps is that your development strategy and operations strategy should reflect one another, and that you should strive to optimize the whole IT process. This implies that development teams should work closely with your operations staff to deliver new releases smoothly into production, and that your operations staff should work closely with development teams to streamline the handling of critical production issues. DevOps has its roots in agile software development, and it is an explicit aspect of the Disciplined Agile Delivery (DAD) process framework. As a result there is a collection of agile development strategies which enable effective DevOps throughout the agile delivery lifecycle. These strategies include:
- Initial requirements envisioning. Disciplined agile teams invest time at the beginning of the project to identify the high-level scope in a lightweight, collaborative manner. This includes common operations requirements such as the need to back up and restore data sources, to instrument the solution so that it can be monitored in real time by operations staff, or to architect the solution in a modular manner to enable easier deployment.
- Initial architecture envisioning. Disciplined agile teams will also identify a viable architectural strategy which reflects the requirements of their stakeholders and your organization’s overall architectural strategy (hence the need to work closely with your enterprise architects and operations staff). One goal is to ensure that the team is building (or buying) a solution which will work well with the existing operational infrastructure and to begin negotiating any infrastructural changes (such as deploying new technologies) early in the project. Another goal is to ensure that operations-oriented requirements are addressed by the architecture from the very start.
- Initial release planning. As part of release planning the disciplined agile team works closely with their operations group to identify potential release windows to aim for, any release blackout periods to avoid, and the need for operations-oriented milestone reviews later in the lifecycle (if appropriate).
- Active stakeholder participation. Disciplined agile teams work closely with their stakeholders, including both operations and support staff, all the way through the lifecycle to ensure that their evolving needs are understood.
- Continuous integration (CI). This is a common technical agile practice where the solution is built/compiled, regression tested, and maybe even run through code analysis tools. CI promotes greater quality which in turn enables easier releases into production.
- Parallel independent testing. For enterprise-class development or development at scale, particularly when the domain or technology is very complex or in regulatory environments, disciplined agile teams will find that they need to support their whole-team testing efforts with an independent test team running in parallel to the development team. These testing efforts often address the validation of non-functional requirements – such as security, performance, and availability concerns – and production system integration. All of these issues are of clear importance to operations departments.
- Continuous deployment. With this practice you automate the promotion of your working solution between environments (see the sketch after this list). By automating as much of the deployment effort as possible, and by running it often, the development team increases the chance of a successful deployment and thereby reduces the risk to the operations environment. Note that deployment into production is generally not automatic, as this is an important decision to be made by your operations/release manager(s).
- Continuous documentation. With this practice supporting documentation, including operations and support documentation, is evolved throughout the lifecycle in concert with the development of new functionality.
- Production release planning. This is the subset of your release planning efforts which focuses on the activities required to deploy into production.
- Production readiness reviews. There should be at least one review, performed by the person(s) responsible for your operations environment, before the solution is deployed into production. The more critical the system, the more production readiness reviews may be required.
- End-of-lifecycle testing. Minimally you will need to run your full automated regression test suite against your baselined code once construction ends. There may also be manual acceptance reviews or testing to be performed, and any appropriate fixing and retesting required to ensure that the solution is truly ready for production.
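To illustrate the continuous deployment practice above, here is a minimal sketch in Python of an environment promotion pipeline. Everything here -- the environment names, the deploy and test scripts, and the approval check -- is a hypothetical illustration, not part of DAD or any IBM tooling.

```python
# A sketch of automated promotion between environments, with production
# deliberately gated behind a human decision. The environment names and
# the deploy.sh / run-tests.sh scripts are hypothetical stand-ins.
import subprocess

PROMOTION_PATH = ["dev", "test", "staging", "production"]

def regression_suite_passes(env: str) -> bool:
    """Stand-in for the automated regression tests run in each environment."""
    return subprocess.run(["./run-tests.sh", env]).returncode == 0

def deploy(artifact: str, target: str, approved_by: str | None = None) -> None:
    """Deploy an artifact; production requires explicit human approval."""
    if target == "production" and approved_by is None:
        # Deployment into production is not automatic; the operations or
        # release manager makes that call, per the practice above.
        raise PermissionError("production deployment requires explicit approval")
    subprocess.run(["./deploy.sh", artifact, target], check=True)

def promote(artifact: str) -> None:
    """Automatically promote through every pre-production environment."""
    for env in PROMOTION_PATH[:-1]:
        deploy(artifact, env)
        if not regression_suite_passes(env):
            raise RuntimeError(f"regression failure in {env}; promotion stopped")
```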
There's more to it, though, than simply adopting some good practices. Your process must also embrace several supporting philosophies. The Disciplined Agile Delivery (DAD) process framework not only adopts the practices listed above, and more, but also promotes several philosophies which enable DevOps:
- Delivery teams should be enterprise aware, meaning that they work with people such as operations staff and enterprise architects to understand and work towards a common operational infrastructure for your organization.
- Operations and support people should be recognized as key stakeholders of the solution being worked on.
- The delivery team should focus on solutions over software. Software is clearly important, but we will often also provide new or upgraded hardware and supporting documentation (including operations and support procedures), change the business/operational processes that stakeholders follow, and even help change the organizational structure in which our stakeholders work.
- Your process should include an explicit governance strategy. Effective governance strategies motivate and enable development teams to leverage and enhance the existing infrastructure, follow existing organizational conventions, and work closely with enterprise teams – all of which help to streamline operations and support of the delivered solutions.
For more detail about this topic, I think you will find the article I wrote for the December 2011 issue of Cutter IT Journal, entitled "Disciplined Agile Delivery and Collaborative DevOps", to be of value.
|