In Part I of this three-part article series, we introduced lean software development governance, and described the mission and principles of lean governance, along with the organization and stakeholder collaboration required for project-by-project success. Here in Part II, we will focus on the practices surrounding the essential processes and measures to be used in lean software development governance. "Processes" refers to the strategies used for effective lean development -- the basic concepts of which will be familiar to those who use the Rational Unified Process®, or RUP®. "Measures" refers to metrics strategies used to foster informed executive decision-making with supporting targets and incentives.
The practices in this category promote strategies for running a project efficiently and effectively, so project teams and executives get the transparency and oversight required for lean governance, without unnecessary overhead. The specific practices associated with this category are:
- Iterative Development
- Risk-Based Milestones
- Adapt the Process
- Continuous Improvement
- Embedded Compliance
Compared to traditional (often called "waterfall") software development techniques -- in which requirements are locked into place, teams spend months coding to those specifications, testers receive completed modules only at the end of the coding cycle, and requirements mismatches are discovered so late in the cycle that deadlines are missed -- iterative development techniques lead to a successful outcome more often.
An iterative approach divides a project into a sequence of short timeboxes called iterations (which are sometimes referred to as "sprints" or "cycles"). In each iteration you evolve the requirements, analysis, design, implementation, and test assets. Each iteration has a well-defined set of objectives, and produces a partial working implementation of the final system. Each successive iteration builds on the work of previous iterations to evolve and refine the system until the final product is complete.
Benefits of iterative development
By dividing a project into a series of timeboxed iterations, and by delivering executable tested code as part of each iteration, you accomplish a number of things related to effective governance:
- Timeboxing forces fast decision-making and a crisp focus on what matters most. When your deadline is two years out, you feel less urgency to deliver and to make decisions than when your deadline is four weeks out.
- Regular delivery of working software increases feedback opportunities. By frequently delivering tested code, you get invaluable feedback -- from compilation, from integration, from various test tools, and from key stakeholders who can help you assess the working code. This feedback allows you to find problems early, when they are least expensive to correct. Taking corrective action while you still have the time and ability to do so allows your project to deliver higher business value.
- Fact-based governance. No matter how much we may dislike it, most discussions that occur during the first two-thirds of a project are subjective: "I reviewed the architecture and in my opinion, there are some major flaws." With iterative development, we move from these subjective discussions to objective, fact-based discussions: "I looked at the performance test data, and they confirmed my view that the architecture cannot handle the required load."
- Iterative development increases your ability to build systems that meet the true needs of your stakeholders. From a software economics perspective, iterative development allows for a much more favorable progress curve: A tested system is produced more rapidly, because it experiences many relatively small course corrections along the way, as opposed to a system that has gone far off course over many months or even years without testing. Iterative development leads to systems that adhere much better to the strategic mission, versus an original (and flawed) detailed requirements specification.
Figure 1: Based on frequent demonstrations and stakeholder feedback, the small course corrections early in the lifecycle of an iterative ("modern") project lead to more rapid success, compared to a "waterfall" project.
There are a number of trade-offs that need to be considered as you migrate to iterative development:
- Training and mentoring. Iterative development requires an investment in retraining and coaching to ensure successful deployment. Traditional IT professionals will need to be weaned from the false security of trying to think everything through up front. People will also need training in skills outside of their current specialty so that they can more easily shift between project activities. IT management and business stakeholders will need to understand what input and deliverables should be expected at what time. This training and mentoring may be done incrementally.
- Individual skillset requirements change. Iterative development requires a different skill set and different role definitions. Traditional project management required team members who were narrow specialists; iterative development works best when specialists also have a broader skill set. As an example, instead of team members being either designers or implementers, you would expect a developer to do design as well as implementation. Some people may not make the transition to iterative development effectively.
- Project resource changes. Iterative techniques require a redistribution of personnel as different skill sets are needed at different times. For example, testers are needed earlier in the lifecycle, since testing is done throughout the project rather than only at the end. Similarly, architects are also needed later in the lifecycle, because architectural issues that are addressed early are continuously reassessed throughout the project. This will require retraining and rebalancing of the work force.
- Project management requires a higher degree of involvement. Iterative development is harder on the project manager, especially for the first three-quarters of the project. This is because iterative development forces difficult decisions to be made early, and because there are more moving parts to the project. Rather than having everybody do requirements for three months, in each iteration you will do some requirements work, architecture, design, implementation, and testing. The benefit is that you frequently avoid painful disappointments and tough resetting of expectations late in the project. An iterative evangelist's ability to sell project managers and middle managers on changes in how projects are managed is crucial for a successful rollout of iterative development methods.
The following anti-patterns indicate that iterative development is not (properly) applied:
- Detailed speculative planning. Plan the whole lifecycle in detail, tracking variances against the plan and updating the details as the project progresses. (Overly detailed planning can actually contribute to project failure because it distracts the project manager from productive management activities.)
- Documentation-driven tracking. Assess status in the first two-thirds of the project by relying on reviews of specifications and intermediate work products, rather than assessing status of test results and demonstrations of working software.
- Long iterations. Use very few or very long iterations, and avoid stakeholder feedback as long as possible, thus avoiding any corrective actions. The reason you do iterative development is to get feedback and quickly accommodate it. We believe that you lose the main benefits of this practice when you have fewer than four iterations or single iterations lasting longer than two months.
We suggest four-week iterations by default. If you recognize that some other iteration length is more appropriate for your context, change it. Strive for the shortest iteration length possible for your environment. There should be no time between iterations.
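As a rough illustration of timeboxing with no gaps between iterations, the sketch below lays out a calendar of back-to-back four-week iterations from a project start date. The start date and iteration count are hypothetical, chosen only for the example:

```python
from datetime import date, timedelta

def iteration_schedule(start, num_iterations, length_weeks=4):
    """Return (start, end) date pairs for back-to-back timeboxed iterations."""
    schedule = []
    length = timedelta(weeks=length_weeks)
    for i in range(num_iterations):
        iter_start = start + i * length
        iter_end = iter_start + length - timedelta(days=1)  # inclusive end date
        schedule.append((iter_start, iter_end))
    return schedule

# Example: six four-week iterations; each one starts the day after the previous ends
for n, (s, e) in enumerate(iteration_schedule(date(2007, 1, 1), 6), start=1):
    print(f"Iteration {n}: {s} .. {e}")
```

Changing `length_weeks` is all it takes to adopt a shorter timebox once the team finds four weeks too long for its context.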
We have found that iterative development is most effective when you combine it with a deliberate balance of early risk reduction and early value creation. This means that as you prioritize what to focus on in each iteration, you choose to develop those features that represent the biggest business, organizational, programmatic, and technical risks while delivering the most value. However, these two objectives (greatest risk and greatest value) are not always fully aligned, which forces a deliberate choice between maximizing early value creation and early risk reduction.
Driving down risk reduces uncertainty in choice of technology, including commercial off-the-shelf (COTS) selections and architectural stability, as well as our understanding of team effectiveness. This reduction in uncertainty also reduces the variance in estimations, since there is a direct correlation between these factors and the ability to produce sound estimates.
Since early risk reduction and value creation are fundamental for project success (see Figure 2), it is important to have the right control points in place. This is why all four RUP phases end with a management milestone requiring project deliverables to be assessed to demonstrate proper risk reduction and assess value creation. For example, at the end of Elaboration you want to drive out as much technical risk as possible and deliver a stable architecture. The team needs to show that they have an executable architecture, with a few selected scenarios that can be executed, and with a risk list that reflects the mitigation of many key technical and other risks. This risk reduction needs to be balanced with the value of the running code and the value created by forcing hard decisions early on. What is the right balance for one project may be different for the next one.
Figure 2: Risk reduction (pale curve) and value (purple curve) during the project lifecycle.
Benefits of risk-based milestones
Risk-based milestones have the following benefits:
- Stakeholder insight. Milestones expose projects to scrutiny by stakeholders, injecting fresh air into the project. Stakeholder insight provides direction or reaction to events and circumstances not visible to the project team.
- Early value creation. The most value is created by making hard decisions and validating that those are the right decisions. This reflects the nature of systems as manifestations of human creativity. This also means that you have the most innovation early in the project, as shown in Figure 3 below.
- Early loss prevention. Some projects are based on bad assumptions and are doomed to fail from the very start. While these projects are clearly unfortunate, you want to avoid burning up too much money before realizing that the project was a bad idea. Risk-based iterations make bad projects explicit very early in the lifecycle.
- Improved productivity. By driving out technical risk early, the architecture will stabilize more rapidly. This enables more cost-effective execution because fewer technical unknowns will blind-side you later in the project.
- Reduced variance in estimation. Estimates will always have a variance, and the variance is proportional to the unknowns or risk. Early risk mitigation hence means that the variance in your estimates will be reduced more rapidly.
Figure 3: Early value creation means that the most innovation occurs early in the project, while risk gradually decreases over time.
There are a number of trade-offs that need to be considered as you migrate to risk-based milestones:
- New planning paradigm. Risk mitigation works when you change your tactics based on the risk you are facing. Risk-driven milestones therefore require an adaptive planning process, by which plans are adapted to reality. Remember Eisenhower's observation that "Plans are worthless, but planning is everything." If you are not willing to accept evolving plans, risk-based milestones are not for you.
- Adaptive process. Similar to adaptive planning, you need to adapt your process. If your process is overly prescriptive, it does not provide the team with the flexibility they need to succeed, especially in the Innovation stages shown in Figure 3. More process seldom drives more innovation. Later, in the Cost Efficient stage, you want more rigid processes.
The following anti-patterns are associated with risk-based milestones:
- Cookie cutter process. The team is forced to follow a detailed, prescriptive, and "repeatable" process that allows no room to react to discovered risks.
- Document-driven tracking. Risks are identified, added to a risk list, discussed by some form of project steering committee, and if ever addressed by the team, this is not done until a significant amount of time has passed. It's no good to identify a risk if all you can do is lament that you don't have the time or budget to properly address it, or simply determine that you'll put it off until the next release due to scheduling concerns.
- Detailed speculative planning. Detailed plans for the entire project are established early on, and management is focused on tracking-to-plan for the remainder of the project.
We suggest using RUP's Inception, Elaboration, Construction, and Transition milestones, which are well-defined and centered on risk mitigation.
It is critical to adapt the development process to the needs of the project. It is not a question of more process being better or of less process being better. Rather, the amount of ceremony, precision, and control present in a project must be tailored according to a variety of factors -- including the size and distribution of teams, the amount of externally imposed constraints, the phase the project is in, and the need for project auditability and traceability.
First, we believe that more process -- whether usage of more artifacts, production of more detailed documentation, development and maintenance of more models that need to be synchronized, or more formal reviews -- is not necessarily better. Rather, you need to adapt the process to project needs. As a project grows in size, becomes more distributed, uses more complex technology, has more stakeholders, and needs to adhere to more stringent compliance standards, the process needs to become more structured. But for smaller projects with co-located teams and known technology, the process should be more streamlined.
Second, a project should adapt process ceremony to lifecycle phase. The beginning of a project is typically accompanied by considerable uncertainty, and you want to encourage a lot of creativity to develop an application that addresses the business needs. More process typically reduces creativity, so you should use less process at the beginning of a project, when uncertainty is an everyday factor. On the other hand, late in the project you often want to introduce more control, such as feature freeze or change-control boards, to remove undesired creativity and risks associated with late introduction of defects. This translates to using more process late in the project.
Third, an organization should strive to continuously improve the process. Do an assessment at the end of each iteration and at project end to capture lessons learned, and leverage that knowledge to improve the process. Encourage all team members to look continuously for opportunities to improve, and feed those improvements to organizations invested in process improvements, such as your company's Software Engineering and Process Group (SEPG). Funding and time need to be specifically allocated for these activities.
Fourth, you need to balance project plans and associated estimates with the uncertainty of a project. This means that early on in a project, when uncertainty typically runs fairly high, plans and associated estimates need to focus on big-picture planning and outlines rather than aiming to provide five-digit levels of precision where clearly none exist. Early development activities should aim to drive out uncertainty and enable gradual increases in planning precision.
Figure 4: Factors determining the amount of process discipline required. Many factors -- project size, team distributions, complexity of technology, the number of stakeholders, compliance requirements, the particular stage of the project -- determine how structured a process you need.
Adapting the process carries several benefits:
- Improved productivity. A process adapted to the needs of the project can increase, versus hamper, the productivity of the project by providing a unifying force for the team, and by resulting in more work effort devoted to productive, versus overhead, activities. It can also increase productivity of individual team members by providing templates, examples, and guidance.
- Repeatable results. An adaptable process gives the team the support and flexibility they need to meet the tactical needs of the project and accomplish repeatable results. Repeatable results in many cases require some adaptability in the process; "repeatability" here cannot simply mean a 100% identical reproduction of the process, but rather repeatability of process stages, each of which proceeds through an understood range of pacing, outcome, and measures.
- Early value creation and risk reduction. By adapting process ceremony to the uncertainty of the project, and reducing it where appropriate, you enable innovation. More value can be created early in projects, which also allows early risk reduction.
- Sharing lessons learned between teams and projects. By adapting the process based on learning from previous projects, you effectively share knowledge across teams.
There are a number of important trade-offs associated with adapting the process:
- Requires investment. Adapting the process requires an investment in knowledge so the process can be effectively adapted and deployed.
- Requires insight. To adapt the process correctly, the organization needs sufficient software engineering insights to understand the context for whether a practice should be adopted, and the extent to which it needs to be adapted.
- Managing variations. Allowing teams to adapt the process increases the difficulty of governing projects -- particularly, ensuring that suitable best practices are followed across a set of projects. This is because the teams will be producing various work products to differing levels of detail at different points in time.
The following anti-patterns are associated with adapting the process:
- More process is better. Always regard more process, more documentation, and more detailed up-front planning as better, including insistence on early estimates and adherence to those estimates.
- Consistent and repeatable process. No matter what, always use an identical process. (In fact, the goal should be to allow variation to enable each project to succeed. Hence, always using the same process, and the same amount of process, throughout the project is an anti-pattern that will lead to failure. The true goal is consistent and repeatable results.)
- Ad-hoc process. You either make up the process as you go along, or perhaps you adapt the process to such a great extent each time that no process is recognizable from one project to the next. (When no process is defined, there are no predictable results, risks aren't identified and mitigated, and there is no shared learning across teams.)
We recommend leveraging the RUP process framework with customized out-of-the-box delivery processes based on project needs.
Software development is a dynamic endeavor where priorities, requirements, and sometimes even team members are constantly changing. Furthermore, software development is very complex, because it addresses a range of sometimes conflicting issues. Because of the inherent dynamics and complexity, you cannot reasonably predict at the beginning of a project the details of how you are going to work downstream. You should try, but if you want to be truly effective at software development you must be willing to learn as you go and change your tactics as necessary.
The fundamental concept behind this practice is simple: Improve the way you work whenever the opportunity presents itself. The old saw "you should learn something new every day" is a good one. Yet this practice goes one step further and recommends that you act on what you learn and increase your overall effectiveness.
There are several ways that you can identify potential improvements to your software process during the execution of a software project:
- Informal improvement sessions. On a regular basis gather your team, and potentially your project's key stakeholders, and simply ask them to discuss what is going right, what is going wrong, and potential ways to improve how you work together.
- Retrospectives. A retrospective is a facilitated meeting where four questions are asked: What did we do well that, if we don't discuss it, we might forget? What did we learn? What should we do differently next time? What still puzzles us? The goal of retrospectives, which can be held at any time during the lifecycle, is to identify potential areas for improvement.
- Staff suggestion box. Sometimes the easiest way to identify potential improvements is to make it easy for people to provide suggestions at any time in an anonymous manner. "Staff suggestion boxes" can be physical, although more often than not they are now implemented electronically.
- Personal reflection. A good habit to promote within your staff is for them to take the time to reflect occasionally on how well they're doing, how well they're interacting with others, and how well they're meeting their actual goals. This reflection often leads to personal strategies for improvement but can also lead to suggestions for overall process improvement.
- Editable process. Provide teams with a base process, but also provide them with the authority and tools, such as Wikis, to edit that process as they see fit.
There are several benefits to this practice:
- You learn as you go. Teams can take advantage of new insights right away, instead of waiting until the next project to try out improvements. This increases productivity more quickly. It is especially effective when combined with the practice "Develop Iteratively," described above, since lessons learned during one iteration can be directly leveraged in the next iteration.
- The team has clear control over its destiny. This practice effectively empowers teams to do their own process improvement and enables them to self-organize more easily. (Next month, in Part III, the practice "Self-Organizing Teams" will provide more details on this concept.)
There are several trade-offs associated with this practice:
- It requires investment. You must take time out of your project schedule to invest in process improvement activities.
- You need to act. There is no point to identifying opportunities for improvement if you don't actually act on them.
- You need to be honest with yourself. Many of the problems that a team faces will be centered around individuals and the way that they interact with one another. People involved with the project need to feel safe enough to point out problems and the potential solutions, even though a solution might go against how others would like to operate.
- You may need change and configuration management of your process. Some project teams will need to conform to regulations such as ISO 900X or the FDA's 21 CFR Part 11, which require that a team's process be defined and that the team provide proof that the process was followed. The implication is that you may need to track all changes to your process, the reason why you made each change, and when the change was made to conform to these regulations.
The following anti-patterns are related to software process improvement:
- Lessons indicated. Many project teams identify a collection of "lessons learned," but nobody acts on them. Until you actually improve your software process through what you've "learned," you merely have a collection of "lessons indicated."
- Delayed improvement. Potential process improvements are identified, often in a "project post-mortem," at or near the end of the project. At this point it is too late for the project team to act on any of the lessons indicated, and if the team members are about to be split up among other teams on other projects, it is very likely that there will never be any serious action taken.
We recommend that you invest two hours at the end of each iteration in order to hold an informal process improvement meeting or retrospective.
Embedded compliance refers to the concept that compliance to regulations as well as corporate policies and guidance should be automated wherever possible, and when that isn't an option it should be part of your culture and daily activities. The easier compliance is to achieve, the greater the chance IT professionals will actually comply. But if compliance requires significant amounts of extra work, particularly when that work is perceived to be onerous by the people doing it, then chances are greater that your compliance effort will be subverted by development teams.
Much of the "drudge work" of compliance can be automated through tooling. For example, the Sarbanes-Oxley Act (SOX) defines strict guidelines for how financial data should be retained, accessed, and modified. Much of the tracking required by SOX can be automated via IBM® Tivoli®-based products such as Tivoli Access Manager and Tivoli Storage Manager. Similarly, the Food and Drug Administration guidelines, as well as the Capability Maturity Model Integration (CMMI) guidelines, require that traceability from requirements to design, code, and test cases be maintained. A combination of tools such as IBM Rational RequisitePro®, IBM Rational Software Architect, and IBM Rational ClearQuest® automates much of the traceability effort through integration of requirements definition, design modeling, and defect tracking functionality.
There will always be a need for human intervention when it comes to compliance. Compliance can become part of your culture when people understand why it is important, including the underlying principles. Not everyone in your IT department needs to understand the intricacies of SOX, but they should understand that your organization needs to be able to show how financial numbers were generated and that source data shouldn't be changed after the fact. Similarly, not everyone needs to be a usability expert, but they should understand that consistency is important and that they need to conform to your organizational user interface guidelines. When you embed compliance into your corporate culture through education, supporting activities such as compliancy reviews become much easier to conduct, because people will have embedded compliance, of their own volition, into their daily jobs.
There are four benefits to embedded compliance:
- Benefits of compliance. The usual benefits of compliance clearly apply, such as keeping your business running, and the ability to make marketing claims that you're ISO-900X compliant, CMMI compliant, etc. In addition, most compliance efforts will lead to improved operational effectiveness.
- Lower cost. By embedding compliance into your tools and processes you decrease the overhead of reaching compliance. Manual approaches to compliance, such as onerous external documentation and detailed reviews, prove to be very expensive in practice and not very effective because people often subvert complex compliancy efforts.
- Less push-back from project teams. Most IT professionals aren't thrilled by bureaucracy in general, let alone bureaucracy resulting from compliance needs. Embedded compliancy makes it as easy as possible for people to do the right thing.
- Higher level of compliance. When most, if not all, compliance needs are automated there is a much greater chance that project teams will comply and that they will produce the required documentation to prove it. For example, to ensure requirements traceability, if your version control system were to require that the person checking in the work product indicate the requirement or defect identifier they were working on, then requirements traceability would be greatly improved: ten seconds of effort each time something is checked in can save weeks of painful and error-prone work at the end of the project. By contrast, when compliance is performed manually, the effort is typically left to the last minute and performed half-heartedly.
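As a sketch of how such a check-in gate might work, the following Git commit-msg hook rejects commits whose message does not reference a work item. The `REQ-`/`DEF-` identifier format is an assumption for illustration only; a real deployment would use whatever identifier scheme your requirements and defect-tracking tools issue:

```python
#!/usr/bin/env python
# Git commit-msg hook: save as .git/hooks/commit-msg and make it executable.
# Rejects commits that do not reference a requirement or defect identifier.
import re
import sys

# Hypothetical identifier format: e.g. REQ-123 (requirement) or DEF-456 (defect).
WORK_ITEM = re.compile(r"\b(REQ|DEF)-\d+\b")

def has_work_item_id(message):
    """Return True if the commit message references a work item."""
    return WORK_ITEM.search(message) is not None

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path to the commit message file as the first argument.
    with open(sys.argv[1]) as f:
        message = f.read()
    if not has_work_item_id(message):
        sys.stderr.write("Commit rejected: reference a work item, e.g. 'REQ-123: ...'\n")
        sys.exit(1)  # non-zero exit aborts the commit
```

This is the "ten seconds of effort" in practice: the author adds an identifier to the commit message once, and the traceability record is built as a side effect of normal work.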
There are several trade-offs associated with embedded compliance:
- Tool investment. You may need to invest in new tools to automate your compliancy efforts as much as possible. You may also discover that you simply need to start using previously unused features in your existing toolset. Minimally, you will need to do some investigation, training, and tool configuration.
- Cultural investment. You need to invest in building a compliancy culture within your organization. This includes investment in training and education as well as in development of pragmatic guidelines for people to refer to.
- Streamlining the process. You need to look at your current compliance processes, or compliance needs, and determine how to embed a minimal set of compliance-related tasks into your current software development process to enable compliance. This requires knowledge and investment.
The following anti-patterns are related to regulatory compliance:
- Documentation inundation. The need for compliance is given higher priority than the need for your day-to-day business, and your organization begins to drown in unnecessary paperwork. Your goal should be to minimize the work required to remain compliant, and better yet, make compliance a business enabler instead of an overhead cost.
- Fear-driven compliance. Organizations often over-invest in their compliance solution. Most regulations have significant leeway built-in, often in recognition that organizations have different purposes requiring different target levels for compliance. A common mistake is to not include front-line staff in the definition of your compliancy effort, reducing the chance of identifying a pragmatic approach to a new mandate.
Define a minimal solution which is integrated into process and tools based on the intent of the appropriate regulations. To accomplish this, assign the right people to interpret the regulations that you must conform to and to create guidelines for development teams to follow. If you assign bureaucrats to this effort, you will end up with a bureaucratic solution; if you assign "command-and-control" people to the effort you will end up with a "command-and-control" solution; and if you assign pragmatic, knowledgeable people to the effort you will end up with a pragmatic and workable solution.
The practices in this category promote effective metrics strategies to foster informed executive decision-making with supporting targets and incentives. These practices are:
- Simple and relevant metrics
- Continuous project monitoring
Measurements are necessary to understand how you are doing so you can take corrective actions as needed. There are, however, a few important imperatives when using metrics:
- Metrics need to be simple. Most organizations either use no metrics at all, meaning that they are flying blind, or they overdo it. Simple metrics, such as the amount of money being spent or the number of defects currently outstanding, have two key attributes. First, they are easy to collect -- and ideally, metrics collection is automated, since manual collection is frequently slow and error prone. Second, they are easy to understand and interpret, so you can take action when you see that something is wrong.
- Metrics need to be common. Metrics need to be used consistently throughout your organization. If each project has its own set of metrics, and its own interpretations of those metrics, executive oversight becomes impossible. Teams may choose to add some metrics relevant to their own projects, and that is fine as long as the basic, organization-wide measurements are also kept up to date.
- Metrics need to be relevant. Many organizations that collect metrics take few actions based on the results, frequently because the metrics are not sufficiently relevant. For every metric, you should specify what action to take when it reaches certain threshold values. If you cannot specify actions to take, why measure at all? In addition, you should have as few metrics as possible, since you want to focus on the essentials rather than observing and reacting to less essential issues.
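The pairing of a metric with a threshold and a pre-agreed action can be captured quite directly. The following sketch illustrates the idea; the metric name, values, and action are entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A simple, relevant metric: a value paired with a threshold and an action."""
    name: str
    value: float
    threshold: float
    action: str  # the corrective action agreed in advance

    def needs_action(self) -> bool:
        # If the measured value crosses the threshold, the action is triggered.
        return self.value > self.threshold

# Hypothetical example: 120 open defects against an agreed ceiling of 100.
open_defects = Metric("open defects", value=120, threshold=100,
                      action="de-scope features for the next iteration")
if open_defects.needs_action():
    print(f"{open_defects.name}: {open_defects.action}")
```

A metric that cannot be given a threshold and an action in this form is a candidate for dropping.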
Benefits of simple and relevant metrics
When done right, collecting simple and relevant metrics carries many benefits:
- Fact-based governance. Metrics collected automatically from the development and management tools used by the teams provide accurate information about the situation at hand. Automated metrics reduce the introduction of errors through manual and incorrect collection, analysis, or management of data and associated measurements. Automated collection can also be done faster and more frequently than manual collection. All of the above leads to governance being based on more accurate and up-to-date data, or what we call fact-based governance.
- Painless governance. Automated collection also reduces the need for time-consuming and expensive manual collection, analysis, and management. Simple metrics reduce the complexity when you're forced to do manual metrics collection, analysis, and management. This reduces cost and overhead associated with governance, or what we call painless governance.
- Proactive governance. Relevant metrics will give you a heads-up when something is not right at an earlier stage than you would typically have noticed otherwise. By observing a trend of increasing defects, for example, you may conclude that you are on a path to delivering a low-quality build at the end of your next iteration; thus, you can de-scope features for that iteration to ensure that you still deliver a high-quality build. Early detection of problems provides you a freedom of choice versus being forced to do fire-fighting after the fact.
- Process improvement. Metrics allow you to understand what works and what does not work. This provides you with the information you need to have open discussions about what went wrong and how you can avoid problems in the future.
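As a sketch of the proactive-governance idea above, a short function can detect a worsening defect trend across iterations early enough to de-scope. The iteration history and the window size are invented for illustration.

```python
def defect_trend_rising(defects_per_iteration, window=3):
    """Return True if defects have risen in each of the last `window` iterations."""
    recent = defects_per_iteration[-(window + 1):]
    if len(recent) < window + 1:
        return False  # not enough history to judge a trend
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

# Invented history: defects found at the end of each successive iteration.
history = [12, 9, 11, 15, 21]
if defect_trend_rising(history):
    print("Warning: rising defect trend -- consider de-scoping the next iteration")
```

The point is not the arithmetic but the timing: the warning arrives while there is still an iteration left in which to act.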
There are a number of important trade-offs to make as you collect metrics:
- Number of metrics. Once you get started with metrics collection it is tempting to collect many different types. This feels more "rigorous," and who knows, that extra metric may actually come in handy one day. But we strongly recommend you start with the smallest number of metrics you can, then add additional metrics only as needed. Our recommendation is to have a handful of metrics, noting that less is better. As new metrics are adopted, old metrics should be dropped; otherwise, the complexity and cost of collecting the metrics increases with diminishing returns on the value they are providing. If you have not used a metric lately, why are you still collecting it?
- Rocking the boat. Collecting metrics may unveil certain truths people in the organization would rather keep hidden. Without metrics, you are effectively driving blind. Imagine you're driving blind for ten months on a project. It feels comfortable; you believe that everything is on track. But what happens when, at the very end of the project, you see that it is doomed? You are forced to find reasons -- perhaps even imagine them -- for why the project could never have succeeded in the first place. Since you have no metrics to analyze, you make excuses, such as "customers constantly changing their minds," "unforeseeable technical problems," and "technical incompatibilities." With metrics, on the other hand, you at least have an objective view of what went wrong in these cases, which most likely will prove to be a combination of things -- including some problems that need to be addressed in your organization.
- Investments. Metrics do not come for free. If you automate them there is an up-front investment. If you adopt manual metrics there is a smaller up-front investment, but a larger ongoing investment.
- Deviations: A project is given a certain set of parameters within which to operate, such as cost and quality. If a project is moving outside those parameters, or is about to, you want the metric at hand to indicate such deviation.
- Trust. Metrics can be an invaluable tool when you use them for honest discussions, to learn, and to reward people. You should not punish people when metrics turn bad, but you should punish people who are ignoring bad metrics.
- You still need to talk with people. Metrics may occasionally indicate that you have a problem, but they will never provide all of the information that you need to make a good decision. You still need to talk with people on a regular basis to understand root causes.
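The "deviations" trade-off above can be sketched as a tolerance-band check: each governance parameter has an expected value and an allowed band, and the metric flags the project as soon as it drifts outside. All parameter names and figures are hypothetical.

```python
def deviations(actuals, expectations, tolerance=0.10):
    """Return the parameters whose actual values fall outside the allowed
    band around the expected value (band = expected value +/- tolerance)."""
    out_of_band = {}
    for name, expected in expectations.items():
        actual = actuals[name]
        if abs(actual - expected) > tolerance * expected:
            out_of_band[name] = (expected, actual)
    return out_of_band

# Hypothetical parameters set in the business case vs. actuals to date.
expected = {"cost_keur": 500, "defects_open": 40}
actual = {"cost_keur": 580, "defects_open": 42}
print(deviations(actual, expected))  # cost is more than 10% over; defects are within band
```

The tolerance band matters: governance attention should be triggered by meaningful drift, not by normal iteration-to-iteration noise.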
The following anti-patterns are associated with metrics:
- Document-based earned value. In traditional governance approaches, you credit a certain percentage of "earned value," and hence progress, as you complete critical documents such as requirements specifications or detailed design documents. In reality, most specifications are flawed -- and you do not know how flawed until you have implemented and tested according to the specification. Traditional earned value thus gives a false sense of security about progress. The only true measure of progress on a software development project is the regular delivery of working software.
- Metrics without action: Collecting metrics without taking actions as the metrics reach certain thresholds is counter-productive, since you incur the cost for metrics collection but gain no benefits.
We recommend you start with metrics that cover the following areas of your software development projects:
- Value: You need a measure indicating to what extent you are adding value to your organization. For example, the number of use case points delivered each iteration, or some other form of project velocity based on the team's own internal measure of functionality, indicates how much functionality the team is implementing. Note that value should be accounted for in terms of working functionality, not in terms of delivered specifications.
- Quality: You need a measure indicating the quality of your application, such as defect trends.
- Cost: You need a measure of what resources you have consumed, such as work months (effort) or money spent.
Not only are these metrics useful for determining the current status of your project, but they can also be used to determine how far a team has deviated from the initial expectations set for it. Expectations are set at the business-case level for time, spending, and use of resources, and may be updated periodically throughout the project.
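A minimal project scorecard covering the three areas above might look like the following. The per-iteration figures and the use of "use case points" as the velocity measure are illustrative only.

```python
# Hypothetical per-iteration data for one project.
iterations = [
    {"ucp_delivered": 18, "defects_open": 12, "effort_months": 6},
    {"ucp_delivered": 22, "defects_open": 15, "effort_months": 6},
    {"ucp_delivered": 25, "defects_open": 11, "effort_months": 7},
]

# Value: functionality actually delivered, not specifications produced.
total_value = sum(it["ucp_delivered"] for it in iterations)
# Cost: cumulative effort in work months.
total_cost = sum(it["effort_months"] for it in iterations)
# Quality: the open-defect count at the end of the latest iteration.
current_quality = iterations[-1]["defects_open"]

print(f"value={total_value} UCP, cost={total_cost} work months, "
      f"open defects={current_quality}")
```

Comparing these three numbers against the expectations set in the business case is what turns raw measurements into a governance instrument.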
Continuous project monitoring means exactly what the term implies -- you regularly monitor the health of the IT projects within your organization through automated metrics collection, project reviews, and even word of mouth. Many organizations have dozens, and sometimes hundreds, of IT projects underway at any given time. Each project will have its own current status, which will change throughout its lifecycle. You may have projects at different locations and time zones, and you may have several software processes within your organization. Regardless of these challenges, each project must still be monitored if it is to be governed effectively. Furthermore, because you want to react to adverse changes and to emerging opportunities quickly, you want to monitor projects continuously.
There are several ways, which can be combined, to continuously monitor projects:
- Automated measurements. Metrics about a project are captured through automated means from the tools used daily throughout the project. These metrics are processed and displayed using project scorecard software to summarize the quantifiable status of each project.
- Project reviews. A project review, including milestone reviews, is a periodic review held at the end of an iteration or at other management milestones. This ensures that working code has been delivered, that it satisfies current stakeholder needs, and that the project is healthy. As a by-product of the review, areas for improvement should be identified and successes celebrated.
- "Post-mortem" reviews. A "post-mortem" review occurs at the end of the project, either because the system has been successfully delivered into production or the project has been cancelled. When the project has been successful, the goal is to assess whether the vision of the project was achieved, whether the system was on track to achieve its stated benefits, and whether the stakeholders are satisfied with how the project went. When the project was deemed a failure, the goals are to provide team members an opportunity to vent their frustrations and hopefully allow them to move on and ideally identify any areas for improvement so that any mistakes will not be repeated; unfortunately, these goals seem to be rarely achieved in the wake of a failed project.
- Verbal communication. Sometimes the best way to determine the current status of a project is to listen to what people are telling you. Want to find out how a project is coming along? Ask someone working on that team. Your metrics may tell you that something is happening on the project, but until you ask the team you'll never know what is actually happening.
The automated metrics approach enables scaling of project monitoring efforts to determine which projects need attention. Milestone reviews ensure continual delivery of value, and project reviews enable you to learn from your overall experience.
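To show how automated measurements let monitoring scale across a portfolio, a scorecard might surface only the projects whose numbers warrant attention. The project names, status fields, and thresholds below are all made up for illustration.

```python
# Hypothetical portfolio snapshot as collected by automated tooling.
portfolio = {
    "billing-rewrite": {"budget_used_pct": 65, "defects_trend": "falling"},
    "portal-upgrade":  {"budget_used_pct": 95, "defects_trend": "rising"},
    "crm-migration":   {"budget_used_pct": 40, "defects_trend": "flat"},
}

def needs_attention(status):
    # Flag projects that are nearly out of budget or whose defects are rising.
    return status["budget_used_pct"] > 90 or status["defects_trend"] == "rising"

flagged = [name for name, status in portfolio.items() if needs_attention(status)]
print("Projects needing a closer look:", flagged)
```

The scorecard only tells you where to look; as noted below, determining what is actually happening on a flagged project still requires talking with the team.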
There are several benefits to this continuous project monitoring:
- Fact-based governance. By continuously monitoring projects you can base your governance activities on up-to-date, accurate facts. Common trouble indicators include variances from expected values or negative trends such as increasing defects.
- Earlier feedback. Continuous monitoring provides earlier problem detection, which allows you to take corrective actions sooner rather than later. This enables you to get projects on track or, if necessary, to cancel them before your losses mount.
- Effective governance. Monitoring of the right metrics (see the practice "Simple and Relevant Metrics" described above) drives the right behavior on project teams because "you get what you measure" (see below). Continuous monitoring motivates the right behaviors throughout the project, not just at milestone points.
There are several trade-offs associated with this practice:
- You need to identify the right metrics. You will get what you measure -- meaning that teams treat as important anything you require them to track as a metric, and you can be certain they will focus on the things being measured. This psychological fact means you need to ensure that you measure the right things. Measuring investment, functionality delivered, and defect trends is a really good start, because you can leverage these simple metrics to determine the effectiveness of project teams. See the practice "Simple and Relevant Metrics" for more details on this concept.
- You need to be flexible. Metrics and the trends they reveal will change in relevancy throughout the project. For example, during the Elaboration phase of a project your defect trend rate may increase for a while as you learn about how various technologies work together, yet late in Construction and during Transition you would expect the defect trend rate to be going down steadily. By understanding how metrics vary according to project phase, you are better prepared to detect actual anomalies.
- "Warning signs" are only warnings. Each project is different and each project team is working in a dynamic environment. Some warning signs indicate long-term challenges which need to be addressed, and some signs indicate a short term difficulty which doesn't require governance attention. Sometimes "things happen," such as a political change within the project's stakeholder community or the passage of new legislation, which can put a team in temporary trouble.
- You need to invest in automation. In the short term, you will need to invest in tooling to automatically gather the metrics which you're interested in.
- You still need to talk to people. Metrics can only provide warning signs; they can't accurately indicate the type of trouble (or opportunities) faced by a team, nor can they provide the information required to help the team. To effectively govern a project you're going to have to get actively involved with the team and work closely with them.
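The "be flexible" point above can be sketched as phase-aware thresholds: the same rising defect trend is tolerated during Elaboration but flagged during Transition. The phase names follow RUP; the threshold values are invented.

```python
# Hypothetical acceptable defect-trend slopes (new defects per iteration) by RUP phase.
ACCEPTABLE_SLOPE = {
    "Inception": 5,
    "Elaboration": 10,   # rising defects are expected while technologies are integrated
    "Construction": 3,
    "Transition": 0,     # defects should be trending down before release
}

def anomalous(phase, slope):
    """Flag a defect-trend slope only if it exceeds what is normal for the phase."""
    return slope > ACCEPTABLE_SLOPE[phase]

print(anomalous("Elaboration", 8))  # expected integration churn, not an anomaly
print(anomalous("Transition", 8))   # the same trend late in the project is a problem
```

Encoding phase expectations like this keeps early-lifecycle noise from triggering governance attention while still catching genuine late-lifecycle trouble.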
The following anti-patterns are related to project monitoring:
- Management by metrics. Corrective actions are taken to fix symptoms based on reported metrics, without properly understanding root causes. For example, assume that a team's reported defects rose 57% over one iteration. This might indicate that the team is in trouble, or perhaps they just hired a really good investigative tester, or perhaps they've started using their defect tracking software to manage requirements as well as traditional defect reports (thus streamlining their process and improving their overall productivity). Only after discussing the situation with the team will you know whether you should, for example, delay the release or continue as normal.
- Metrics deluge. A common problem with automated metrics collection is that you gather a lot of metrics because it's easy to do. Sure, you have lots of numbers available to you, but what do they really mean and how can you use them effectively? Having a few relevant metrics that provide valuable information is far more important than having dozens of metrics that you're not sure how to use -- it just seemed like a good idea to collect them. Quality over quantity should be your goal.
We suggest that you begin by following the practice "Simple and Relevant Metrics," because that practice will help you automatically capture and display your metrics via project scorecard software. When a project seems to be deviating from its expected course, you should talk with the project team members to determine what the metric(s) mean and how the team can execute better.
Next month, our final Part III installment of this article series will cover roles and responsibilities, as well as policies and standards, relevant to lean software development governance. Stay tuned!
1 See "Agile Best Practice: Initial High-Level Architecture Modeling" at http://www.agilemodeling.com/essays/initialArchitectureModeling.htm
2 In the August 2007 issue of Dr. Dobb's Journal Scott Ambler summarizes a survey which shows that the majority of agile development teams have iteration lengths of between two and four weeks.
3 This section is derived from Agility and Discipline Made Easy -- Practices from OpenUP and RUP, by Kroll and MacIsaac.
4 Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. Dorset House, 2001.
Scott W. Ambler is the Practice Leader Agile Development with IBM Rational and works with IBM customers around the world to improve their software processes. He is the founder of the Agile Modeling (AM), Agile Data (AD), Agile Unified Process (AUP), and Enterprise Unified Process (EUP) methodologies. He is the (co-)author of nineteen books, including Refactoring Databases, Agile Modeling, Agile Database Techniques, The Object Primer 3rd Edition, and The Enterprise Unified Process. He is a senior contributing editor at Dr. Dobb’s Journal. His personal home page is www.ibm.com/rational/bios/ambler.html and his Agile at Scale blog is www.ibm.com/developerworks/blogs/page/ambler.
Per Kroll is Chief Architect for IBM Rational Expertise Development and Innovation, an organization leveraging communities, portals, methods, training and services assets to enable customers and partners in software and systems development. Per is also the project leader on the Eclipse Process Framework Project, an open source project centered on software practices, and he has been one of the driving forces behind RUP over the last 10 years. Per has twenty years of software development experience in supply chain management, telecom, communications, and software product development. He co-authored The Rational Unified Process Made Easy - A Practitioner's Guide, with Philippe Kruchten, and Agility and Discipline Made Easy - Practices from OpenUP and RUP, with Bruce MacIsaac. A frequent speaker at conferences, Per has authored numerous articles on software engineering.