Measuring Project Health -- Part III

Proper measurement during a project's Transition phase helps ensure successful deployment of the developed solution into a production environment. This final installment of a three-part series also discusses managing projects embedded in a larger program. This content is part of The Rational Edge.

Kurt Bittner, Consultant, Iterative Software Development

Kurt Bittner works on product strategy for the Rational software division of the IBM Software Group. He has been trying to figure out better ways to help teams develop software for over twenty-three years. He was a member of the original RUP development team and has led development teams in a variety of industries. When not focusing on software development he likes to relax by climbing frozen waterfalls. Kurt and Ian Spence have co-authored Use Case Modeling (Addison-Wesley 2003) and Managing Iterative Software Development Projects (Addison-Wesley 2006).



15 May 2007

Part I of this series introduced the concept of measurement as a mechanism for assessing project health, and I described specific measurements that can help you assess the health of projects during the Inception phase. Part II of this series considered the use of measurement to assess and guide the work in the Elaboration and Construction phases. In this third and final article in the series, I cover measurements for the Transition phase, which is focused on deploying the developed solution into a "production" environment. As such, measurement in the Transition phase focuses on assessing release readiness and on wrapping up the project as a whole.

This article concludes with a discussion of managing projects embedded in a larger program -- assessing project health and value delivery from a cross-project perspective. Projects that comprise a program are inextricably linked, and problems in one project can threaten the success of the overall program. Spotting problems early enables them to be addressed sooner, potentially saving the program.

The Transition Phase

With the bulk of the development work completed, the goal of the Transition phase is to deploy the solution to its intended user base. This involves a range of different kinds of work:

  • Final verification of all functionality.
  • Defect fixing to resolve any issues that need to be handled prior to deployment.
  • Installation and configuration of the final product in the "production" environment, including any data migration that may be needed. For many systems, data conversion and finally putting the application into "production" are complex problems that require significant attention; in some cases the issues may be complex enough to warrant a separate project just to handle deployment. In such cases, the "development" project deploys its solution to a staged deployment environment managed by the deployment project. In shops that support 24x7 customer access, a parallel "conversion" environment is often established. When the application is ready to be made generally available, users are migrated to it.
  • Training of users. Few systems are so intuitively easy to use that their users do not require some training. Systems accessed directly by customers must provide some way to lead customers through a tutorial the first time they access the new system. Training may also include an introduction to changes to business processes being implemented in concert with the new system.
  • Training of support staff. Just like machinery, systems must be maintained. During the Transition phase, the project might need to integrate the rollout and patching of the new system with the ongoing maintenance of the old system.

It's important to note that by the Transition phase you absolutely should not be implementing new features or scenarios. If you are, you have simply fooled yourself into believing that you have exited the Construction phase. In the Transition phase, the sole focus should be getting the solution ready to deploy and then actually deploying it.

The Transition phase concludes when the solution has been successfully deployed and the maintenance and support responsibilities have been handed over to the team that will support and maintain the solution on an ongoing basis.

Measurement in the Transition phase

The measurements to be made in the Transition phase focus on the suitability of the system for deployment. This means focusing on test coverage and defect levels, including the relationship between defect discovery and defect close rates (a small sketch of tracking that relationship follows the list below). To determine suitability for deployment, you need to consider:

  • Quality measures. Performance, scalability, and supportability measures in addition to traditional defect measures.
  • Adoption measures. Especially measures related to the number of people actually using the solution, with an eye to answering the question "Are enough people using the release candidate to know whether it is meeting quality goals?"
  • User satisfaction. This is usually measured by reported defects, with indications as to whether the reporter of the defect considers the defect to be a "stop ship" defect (in other words, they feel very strongly that the defect needs to be fixed before the solution is ready to deploy).
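
One simple way to watch the relationship between defect discovery and close rates is to track the open backlog week by week and check whether fixes are consistently outpacing new arrivals. The following is a minimal sketch in Python; the weekly counts and the three-week convergence rule are hypothetical, not a complete readiness model.

```python
# A sketch of tracking defect discovery versus close rates to judge whether
# the release candidate is converging. All counts and rules are hypothetical.

weekly_found  = [42, 35, 28, 19, 12, 7]   # new defects reported each week
weekly_closed = [20, 30, 31, 25, 18, 11]  # defects resolved each week

open_backlog = 0
for week, (found, closed) in enumerate(zip(weekly_found, weekly_closed), start=1):
    open_backlog += found - closed
    trend = "converging" if closed >= found else "diverging"
    print(f"week {week}: found={found:3d} closed={closed:3d} "
          f"open={open_backlog:3d} ({trend})")

# A simple readiness signal: the close rate should exceed the discovery rate
# for several consecutive weeks before the release candidate is credible.
recent = list(zip(weekly_found, weekly_closed))[-3:]
if all(closed >= found for found, closed in recent):
    print("Close rate has exceeded discovery rate for 3 weeks: converging.")
else:
    print("Defect arrivals still outpace fixes: not ready to ship.")
```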

What to measure

In order to gather information to assess release readiness, you need to test. Of course, testing of all types should have been going on throughout the project. The difference now is that your focus shifts to testing whether the various qualities of the system are within acceptable thresholds for release. Testing during the Transition phase provides the raw data from which analysis will tell you whether you are ready to release. You need to test, at minimum:

  • All "must have" requirements
  • All requirements related to performance and scalability thresholds
  • All requirements related to usability thresholds
  • All requirements related to supportability -- things that affect the way that the system will be serviced and supported once it is released

In other words, anything that would affect the releasability of the system needs to be tested, and the underlying requirements need to specify acceptable ranges for the measures. All "must have" requirements (requirements that "must" be satisfied by the system in order for it to be minimally acceptable) need to have acceptable ranges defined, and it is a good idea to define acceptable ranges for all "should have" requirements (requirements that "should" be satisfied by the system, but if they are not the system would still meet minimal acceptability criteria) as well. If there are minimum performance or scalability goals that must be met, they need to be specified.
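
To make the idea of acceptable ranges concrete, here is a minimal sketch that checks measured results against the ranges defined for each requirement. The requirement IDs, priorities, ranges, and measured values are invented for illustration; a real project would draw them from its requirements and test-management tools.

```python
# A sketch of verifying measured results against defined acceptable ranges.
# Each entry: requirement ID, priority, description, (low, high) range, value.
# All identifiers and numbers below are hypothetical.

results = [
    ("REQ-101", "must",   "p95 response time (s)",      (0.0, 2.0),          1.4),
    ("REQ-102", "must",   "concurrent users supported", (500, float("inf")), 430),
    ("REQ-203", "should", "backup restore time (min)",  (0, 60),             45),
]

defects = []
for req_id, priority, description, (low, high), value in results:
    if not (low <= value <= high):
        # Out-of-range results become defect records to be tracked to closure.
        defects.append(f"{req_id} ({priority}): {description} measured {value}, "
                       f"acceptable [{low}, {high}]")

for d in defects:
    print("DEFECT:", d)
print("release-blocking ('must') issues:",
      sum(1 for d in defects if "(must)" in d))
```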

When testing scenarios, capturing "pass/fail" measures is usually sufficient, but for requirements related to performance and scalability, actual values are needed. Since trends are usually important, you need to gather a range of data observations; a sketch after this list illustrates the kind of data to collect. For example:

  • The number of concurrent users the system can support, showing how response times vary as users are added. You will need to simulate the transaction mix of the overall user community to provide an accurate picture, and the transactions will need to be randomized to ensure that testing practices themselves do not bias the results.
  • The data scalability of the system, showing how response times vary under an average transaction and user mix, but with varying amounts of data in storage. Many systems are sensitive not only to transaction workload but also to data volumes.
  • In real-time systems, the speed and capacity of data feeds and event timing are similarly important to measure.
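
As an illustration of the kind of data to collect, the sketch below simulates p95 response times at increasing user counts, with the transaction mix randomized so the test order itself does not bias the results. The mix, the cost model, and all numbers are invented stand-ins; in practice a load-testing tool would drive the real system and record actual response times.

```python
# A sketch of load-test data collection: p95 response time as simulated users
# are added, with a randomized transaction mix. The response-time model below
# is a hypothetical stand-in for measurements of a real system under test.

import random

TRANSACTION_MIX = {"browse": 0.6, "search": 0.3, "checkout": 0.1}  # assumed mix
BASE_COST = {"browse": 0.2, "search": 0.5, "checkout": 1.2}        # seconds

def simulated_response_time(txn: str, concurrent_users: int) -> float:
    # Toy model: latency grows with load; replace with real measurements.
    contention = 1.0 + concurrent_users / 400.0
    return BASE_COST[txn] * contention * random.uniform(0.8, 1.2)

random.seed(7)  # fixed seed so the sketch is reproducible
for users in (50, 100, 200, 400, 800):
    txns = random.choices(list(TRANSACTION_MIX),
                          weights=list(TRANSACTION_MIX.values()), k=200)
    times = sorted(simulated_response_time(t, users) for t in txns)
    p95 = times[int(0.95 * len(times))]
    print(f"{users:4d} users: p95 response {p95:.2f}s")
```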

Analyzing trends and assessing test results

It is a good idea to establish measurement goals for requirements, as when minimum performance requirements are specified. In almost all cases where a measurement goal is established, there will be ranges of acceptability, with minimal thresholds that must be met. There should be test cases for each requirement that prove the requirement has been met. When these test cases are executed, the results need to be analyzed to ensure that all actual values fall within the acceptable range. Values falling outside acceptable ranges or failing to meet minimum acceptance criteria should result in defects that must be resolved before the solution can be deployed.

Trends for test data should be analyzed to assess whether the values, while currently within acceptable thresholds, might eventually stray outside. Sample data sets and user loads should be increased to the point where measurement thresholds are exceeded to see how much growth in system workload can be accommodated before performance or response times become unacceptable.
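
One simple way to perform this kind of trend analysis is to fit a straight line to response times observed at increasing data volumes and extrapolate to the acceptance threshold, as in the sketch below. The observations and threshold are hypothetical, and a linear fit is only a first approximation; real scalability curves are often nonlinear, so treat the extrapolation as an early warning rather than a guarantee.

```python
# A sketch of trend analysis: fit a line to p95 response times measured at
# increasing data volumes, then estimate how much growth fits under the
# acceptance threshold. Observations and threshold are hypothetical.

volumes_gb  = [10, 20, 40, 80, 160]        # data volume used in each test run
p95_seconds = [0.6, 0.8, 1.1, 1.5, 2.1]    # measured p95 response times
THRESHOLD = 3.0                            # acceptable p95 ceiling, in seconds

n = len(volumes_gb)
mean_x = sum(volumes_gb) / n
mean_y = sum(p95_seconds) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(volumes_gb, p95_seconds))
         / sum((x - mean_x) ** 2 for x in volumes_gb))
intercept = mean_y - slope * mean_x

# Extrapolate: at what data volume does the fitted line cross the threshold?
breach_volume = (THRESHOLD - intercept) / slope
print(f"fitted: p95 ~ {intercept:.2f} + {slope:.4f} * GB")
print(f"threshold of {THRESHOLD}s reached near {breach_volume:.0f} GB "
      f"({breach_volume / volumes_gb[-1]:.1f}x current data volume)")
```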

Tests producing results not within acceptable tolerances should result in defects being filed, enabling the resulting work to be tracked to completion.

Other considerations affecting releasability

Harder to quantify but still essential is measuring the "supportability" of the system. This is usually done via a qualitative assessment of the discussions you have with the support staff for the solution. Support staff includes the people who will maintain the system as well as IT operations staff who will have to keep the system running.

Things you will need to evaluate include:

  • Status of handover activities: has a "test deployment" (including data conversion if applicable) been performed? Was it successful? What issues were uncovered?
  • What has the user adoption experience been? Is there sufficient confidence that the users will accept the new system?
  • Is the support staff ready to take over support for the system? Do they have the necessary skills to support the solution? Have procedures for backup and recovery, including disaster recovery, been updated and tested to ensure that they are effective?
  • If the solution needs to be deployed to more than one system, is it sufficiently simple to ensure verifiable repeatability?

There is a tendency to overlook these factors when considering issues of releasability. Problems in these areas require technical resolution much earlier in the project, and the underlying needs should have been captured as requirements. The focus here is to ensure that the issues previously identified have been satisfactorily resolved.

Concluding the Transition phase: Retrospective assessment

The conclusion of the Transition phase is also the conclusion of the project. An evolution of the solution is complete: it has been deployed to its users and handed over to production support or to the next project. Once the dust settles, it is a good time to step back and consider what went well, what could have gone better, and what went poorly. This is usually conducted as an open discussion among the project participants, led by an independent facilitator. The goal of the session is not to assign blame but to identify areas of improvement for future projects. In conducting the retrospective assessment, encourage people to reflect on the processes followed, identifying what worked well and what needs improvement. Be as balanced as possible. For example:

  • Focus on both good and bad
  • Be open and honest about what worked and what did not
  • Be non-judgmental; separate "what was done" from "who did what"

We all need to learn from mistakes as well as successes, so don't sweep mistakes under the rug -- being open and honest about what is working and what is not is important if you are to improve project success over time. Don't focus on assigning blame, but on what needs to be done to make the next project better. You should strive to create a learning culture in which experimentation for improvement is rewarded.

In preparation for the next evolution or project, you should develop recommendations about what to change next time; make decisions and put them into action. Learn from experience and communicate this knowledge to others so that other project teams can benefit from your experiences. It seems obvious to say this, but I have observed with uncomfortable frequency that even though iteration, phase, and project reviews may be conducted, little long-term change in behavior results from project to project in the same organization. It is too easy to ascribe project problems to individuals or unique events; my experience has been that most problems are systemic and will repeat until the root causes are dealt with.

This is especially important when projects are part of a larger program. Generally all projects in a program follow the same (or very similar) processes, and if a project's problems stem from the process itself or from the measurements used to monitor progress and risk, it is important to resolve them as early as possible so that other projects in the program do not repeat the same problems.

Measuring projects embedded in programs

Now that we've covered, in three articles, the ways to measure project health during the essential phases of an iterative software development project, let's consider a larger context. Frequently, we must manage projects embedded in a larger program, which requires assessing project health and value delivery from a cross-project perspective. Projects that comprise a program are inextricably linked, and problems in one project can threaten the success of the overall program. Spotting problems early enables them to be addressed sooner, potentially saving the program.

A program is a group of related projects managed in a coordinated way. Programs usually include an element of ongoing work. Examples of different kinds of programs include:

  • Strategic programs, in which projects share vision and objectives
  • Business cycle programs, in which projects share budget or resources but might differ in their vision and objectives
  • Infrastructure programs, in which projects define and deploy supporting technology used by many other projects, resulting in shared standards
  • Research and development programs, in which projects share assessment criteria
  • Partnership programs, in which projects span collaborating organizations

This is just a sampling of the many kinds of programs, and variations often combine aspects of them. The important thing to remember about a program is that it organizes projects to focus attention on delivering a specific set of strategic business results. True programs differ from large projects in that they focus on delivering a step change in an organization's capability, which in turn can affect all aspects of the business, including business processes, financial structures, and information technology.

Examples of different kinds of projects that can benefit from coordination within a program include:

  • The development project that builds the software
  • A project to train the users of the system
  • A project to upgrade servers and infrastructure for the new system
  • A project to train the support personnel

Since the type of work performed on each of these is quite different, it makes sense to have a separate project for each; but since all the projects need to be successful in order for the business value to be delivered, a program is needed to manage the overall effort.

Organizing projects into programs

Organizing projects into programs provides explicit oversight for, and coordination between, the individual projects. Sometimes additional management is needed so that the benefits achieved by executing the projects in a coordinated way are greater than those that would accrue if the projects were executed individually. Within a program, projects often share objectives and have a common process, at least to the degree that they share milestones, measurements, and review criteria.

To distinguish between project iteration milestones and phase-ends, I will introduce the concept of a "stage." In effect, a stage is to a program what an evolution is to a project -- it is used to manage a concerted effort toward some common end. The relationship between program stages and projects is shown in Figure 1.

Figure 1: Relationship between program stages and projects

Each stage delivers an incremental change in organizational capability. The end of a stage provides a major review point at which the program results can be evaluated and assessed against desired outcomes.

Stages can be used to group related projects, as shown in Figure 2.

Figure 2: Using program stages to group projects with shared goals

Just like the evolutions of a software development project, stages can overlap, building upon one another and sharing objectives as they incrementally deliver products as part of a larger, coordinated program. Because most programs are long-lived (some last for decades), a program may have any number of stages.

Figure 3 shows how work can be organized within a stage, with the stage governed by a control project. Notice how the phases for the control project sometimes end a little later than the corresponding phases of the projects it manages. This enables the results from the managed projects to be rolled up to the control project.

Figure 3: Using control projects to govern program stages

How is a stage managed in practice? The control project starts the stage by establishing the goals and desired outcomes for the stage as a whole and by creating or updating the overall architecture to be shared by all projects within the stage. As the architecture stabilizes, additional projects in the stage are initiated and overseen by the control project. Each stage has its own control project for management purposes.

Measuring program stages

Since each stage is independent of the others, stages can and should be managed separately. The control project provides a useful control and measurement vehicle for the stage, and the same project measures discussed in this article and the two earlier installments can be applied to a control project managing a program stage. The control project follows the same lifecycle as a project evolution -- it passes through all four phases of the IBM® Rational® Unified Process (RUP®) lifecycle.

The control project, however, differs from a normal project in that most of the real work of the stage occurs in the sub-projects it coordinates. Measurements of the control project therefore need to consolidate and roll up the measurements from the sub-projects within the stage. For example, the risks and issues from the sub-projects would be consolidated to form the risks and issues for the control project. Other project health measurements can be rolled up in similar ways to form a view of how the stage is progressing.
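
As a minimal sketch of such a roll-up, assuming invented sub-project names and health fields, the control project's view of the stage can be built by consolidating each sub-project's open risks and defect counts:

```python
# A sketch of rolling sub-project measurements up to the stage's control
# project. Project names, risks, and counts below are hypothetical.

sub_projects = {
    "development":    {"open_risks": ["data migration untested"],  "open_defects": 14},
    "user-training":  {"open_risks": [],                           "open_defects": 2},
    "infrastructure": {"open_risks": ["server delivery slipping"], "open_defects": 5},
}

# The control project's view is the consolidation of its sub-projects.
stage_view = {
    "open_risks": [f"{name}: {risk}"
                   for name, project in sub_projects.items()
                   for risk in project["open_risks"]],
    "open_defects": sum(p["open_defects"] for p in sub_projects.values()),
}

print("stage risks:")
for risk in stage_view["open_risks"]:
    print("  -", risk)
print("stage open defects:", stage_view["open_defects"])
```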

Summary

The Transition phase of RUP is focused on final delivery of the solution to its intended users. Measurements of project health for the Transition phase focus on quality and release readiness, including not only the readiness of the code but also the readiness of the user community to accept the release and that of the support organization to support the release. Upon project completion it is important to review the entire project to determine what could be done better next time. The measurements taken throughout the project will help with this review effort, just as they have benefited the project throughout its course.

Sometimes more than one project is needed to deliver the desired business value, and a program can be a useful management mechanism for ensuring successful coordination of the work of two or more projects. Since programs are often long-lived (they can last for years or decades), it is customary to break them into stages, or time boxes focused on a specific delivery of business value. A control project is a useful mechanism for coordinating the projects contributing to a stage; it also enables the results from sub-projects to be rolled up to the program level.

Further reading

This article was drawn from material initially presented in Managing Iterative Software Development Projects, by Kurt Bittner and Ian Spence, and published in 2006 by Addison-Wesley.
