The concept of "value" has a history that predates the software business by several millennia. In fourth-century B.C. Athens, Aristotle (384-322 B.C.) pragmatically argued that the value of an object was based on the need for it. A millennium later, leaders of the Christian church expressed the view that work was inherently good, which led to a "model" that proposed value as a function of the amount of work that had gone into producing something. This forerunner of the "cost plus" value approach held sway well into the second millennium, until William Jevons (1835-1882) and Carl Menger (1840-1921) reincarnated Aristotle's thoughts in their "marginal analysis" theory, developed in 1871. This theory held that "value depends entirely on utility": no matter what costs are incurred in producing something, when the product comes to market its value depends solely on the utility the buyer expects to receive.
The notion of "value as utility" is nonetheless very much influenced by context, as witnessed by the classic argument that the value of a glass of water may in certain circumstances exceed that of a glass full of diamonds. Value is also influenced by marketing practices -- for example, branding -- that attempt to increase the perceived value of a product.
In this article, I will first explore the landscape of accepted valuation practices with three typical and familiar valuation techniques. I will then briefly cover modeling and simulation techniques. Finally, I will suggest how long-standing IBM Rational development practices may be reframed as a "Real Options" approach, leveraging flexibility in software project management to create added value in software products.
Two terms are important in this discussion: product and value. A software product isn't just an executable program; it can also refer to a software service, system, or process. Value is expressed as a price and is measured by the revenues that flow to the producer and consumers of the software over its lifecycle. The scope of this article is limited to software projects that target perceived needs of the marketplace; it does not cover projects performed under contract to fulfill specific customer needs.
IBM Rational1 has always been a leading proponent of the practice of software economics, the main strength of which is in estimating project cost. The following equation, taken from COCOMO II, is well known throughout much of the Rational community:
Effort = (Personnel) × (Environment) × (Quality) × (Size^Process*)2
*the Process exponent reflects process effectiveness
This equation captures the following key factors:
- Effort: person-months required to complete the project
- Personnel: factors considering the abilities of the team
- Environment: factors considering tools and techniques
- Quality: factors considering the required product quality
- Size: number of human-generated source instructions composing the end product
- Process: formula based on the effectiveness of the process used to produce the end product
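As a rough illustration of how these factors combine, the equation can be sketched in a few lines of Python. The multiplier and exponent values below are placeholders chosen for illustration, not calibrated COCOMO II coefficients:

```python
def estimate_effort(personnel, environment, quality, size_ksloc, process_exponent):
    """Effort (person-months) = Personnel * Environment * Quality * Size^Process.

    All multipliers and the exponent are illustrative placeholders,
    not calibrated COCOMO II values.
    """
    return personnel * environment * quality * (size_ksloc ** process_exponent)

# A hypothetical 50 KSLOC project with neutral multipliers (1.0) and a
# process exponent slightly above linear, reflecting diseconomies of scale:
effort = estimate_effort(personnel=1.0, environment=1.0, quality=1.0,
                         size_ksloc=50, process_exponent=1.1)
print(round(effort, 1), "person-months")
```

Note that because Size appears as an exponential term, an ineffective process (a larger exponent) penalizes large projects disproportionately.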
While the "cost side" of software economics has traditionally been Rational's strong point in advocating software economic analysis, the "value side" of the economic question is treated by Barry Boehm and Dan Port in a paper on "Value-Based Software Engineering (VBSE),"3 which they define as follows:
VBSE is involved with software and information system product and process technology and their interaction with human values. It uses risk considerations to balance software discipline and flexibility, and to answer other key "how much is enough?" questions.
Boehm and Port refer to adding value to software design:
The goal of our roadmap is supported by a key intermediate outcome: designers at all levels must make design decisions that are better for value added than those they make today. Design decisions are of the essence in product and process design, the structure and dynamic management of larger programs, the distribution of programs in a portfolio of strategic initiatives, and to national software policy. Better decision-making is the key enabler of greater value added.
Some well-known project valuation techniques -- from simple to complex -- are illustrated in Figure 1:
Figure 1: Value measurement techniques are shown along the black curve from low to high. From "IT Options Analysis" October 2002, ISMC Orlando, Nancy Burchfield, Principal IT Optimization, IBM Global Services.
The project valuation techniques shown in Figure 1 will be briefly explained in the following sections.
The first set of valuation techniques is quite well known:
- Return on Investment (ROI)
- Net Present Value (NPV)
- Internal Rate of Return (IRR)
The second set of techniques is usually applied when a software product is developed or enhanced to fulfill a perceived marketplace need:
- Sensitivity Analysis
- Monte Carlo Simulation
The third technique, Real Options, falls under the general heading of "options analysis"; I will examine it in light of IBM Rational software development principles and practices.
Three familiar valuation techniques
Let's take a look at the first three modes of valuation shown in Figure 1: ROI, NPV, and IRR.
Value can be directly calculated in terms of ROI (return on investment), which determines time-to-payback. ROI is a relatively straightforward investment metric that expresses how much time is required (the payback period) to recover the original investment. It may also be expressed as a percentage figure -- ROI%.
ROI is calculated as:

Payback period = Initial investment / Annual cash inflow

Measured according to this formula, the best investment is the one with the shortest payback period. For example, if a project costs $100,000 and is expected to return $20,000 annually, the payback period would be $100,000 / $20,000, or five years.
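The payback-period arithmetic can be expressed directly; the figures below are the ones used in the example above:

```python
def payback_period(initial_investment, annual_cash_inflow):
    """Years needed for cumulative inflows to recover the investment."""
    return initial_investment / annual_cash_inflow

# The example from the text: a $100,000 project returning $20,000 annually.
print(payback_period(100_000, 20_000))  # -> 5.0 years
```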
There are three main problems with the ROI/payback period method:
- It ignores benefits that may occur after the payback period, and so does not measure longer term profitability.
- It ignores the time value of money -- the discount rate.
- It does not factor in risk, even though the risk of failure in project work of any kind may exceed 50 percent.

For these reasons, other methods of capital budgeting, such as NPV and IRR, are generally preferred.
However, the "simple" ROI should not be discounted. It is widely used and, given the tendency to seek a shorter payback period for software investments, it retains both value and currency. Consider the following survey, from CFOEurope.com, which asked seven leading academics to describe the best ways for chief financial officers (CFOs) to create value:4
When Kees Koedijk set out with his colleagues at Erasmus University to survey CFOs about capital budgeting, "we wanted to find out what was inside their minds," he says. "We already had a pretty good picture of the capital structure concepts; what we were missing was insight into which techniques finance managers were using to base their investment decisions on."
The results of the survey, which polled over 300 CFOs in the UK, the Netherlands, Germany, and France in late 2004, surprised Koedijk. Well ahead of net present value (NPV) and internal rate of return (IRR), the most frequently used capital budgeting technique is the ROI/payback period.
By contrast, Koedijk cites a 2001 study showing that IRR and NPV were far more popular among US firms.
In addition, the "Business Case" template within the Rational Unified Process®, or RUP®, refers to ROI:
For a commercial software product, the Business Case should include a set of assumptions about the project and the order of magnitude return on investment (ROI) if those assumptions are true. For example, the ROI will have an order of magnitude of five if completed in one year, two if completed in two years, and a negative number after that. These assumptions are checked again at the end of the elaboration phase, when the scope and plan are known with more accuracy. The return is based on the cost estimate and the potential revenue estimates.5
IRR (Internal Rate of Return)
Often used in capital budgeting, IRR is the interest rate that makes the net present value of all cash flows from a project equal to zero.
RUP refers to IRR in the Business Case template:
In the case of the internal rate of return calculation, a net present value of zero is assumed, and the internal rate of return needed to produce this is computed. This internal rate of return (IRR) for the project is then compared to a minimum required rate of return for projects of similar risk. If the IRR for the project is greater than the minimum required rate of return, the project has positive net economic benefit for the company.6
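To make that definition concrete, here is a minimal sketch that recovers the IRR by bisection -- searching for the discount rate at which NPV crosses zero. The cash-flow figures are invented for illustration:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[t] occurs at the end of period t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Find the discount rate where NPV crosses zero, by bisection.

    Assumes exactly one sign change of NPV over [lo, hi], which holds
    for a conventional project (one outflow followed by inflows).
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
    return (lo + hi) / 2

# Hypothetical project: 100 invested now, 60 returned in each of two years.
rate = irr([-100, 60, 60])
print(f"IRR = {rate:.1%}")
```

That rate would then be compared against the organization's minimum required rate of return for projects of similar risk, as the RUP passage describes.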
NPV (Net Present Value)
NPV is an approach used in capital budgeting in which the present value of cash outflows is subtracted from the present value of cash inflows. It measures the profitability of a project by comparing the value of a dollar today to the value of that same dollar in the future, taking inflation and returns into account. NPV analysis is sensitive to the reliability of the future cash inflows that an investment or project is expected to yield.
Here is the formula for NPV calculation:7

NPV = Σ (t = 0 to T) Ct / (1 + r)^t

where C is the cash flow, in or out of the project, in period t, and r is the discount (interest) rate per period.
The RUP Business Case template also refers to NPV:
For internal software projects, return is either calculated in terms of the 'Net Present Value' of the project, or in terms of an internal rate of return. With the net present value, the future stream of cash flows accruing to the project are estimated (including negative cash flows related to project development and support) and then discounted back at a required rate of return determined by the organization based on the risk of the project. A net present value of greater than zero indicates that the project has positive net economic benefit to the company.8
If the NPV of a prospective project is positive, it should be accepted. If the NPV is negative, the project should probably be rejected, because the discounted cash inflows will not recover the investment.
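The accept/reject rule can be sketched with a few lines of Python implementing the NPV formula. The cash flows and the 10% required rate of return are illustrative assumptions, not figures from the article:

```python
def npv(rate, cash_flows):
    """NPV = sum of C_t / (1 + r)^t; cash_flows[0] is the up-front outlay."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# Hypothetical project: 100 invested now, 45 returned in each of three years,
# discounted at a 10% required rate of return.
value = npv(0.10, [-100, 45, 45, 45])
print(round(value, 2))  # positive -> accept; negative -> reject
```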
Sensitivity Analysis and Simulation
Before we leap into an explanation of the next two valuation techniques, we should be grounded in a few concepts regarding valuation models. A model is a simplification -- often a mathematical one -- of reality. One such model has already been mentioned in this article: the COCOMO II equation that links effort (cost) to personnel, environment, quality, size, and process. A value model for a software product can be a formula or an algorithm that helps estimate the number of licenses that can be sold, at what price, over the product lifecycle.
When a software product is developed to satisfy a perceived marketplace need, there will always be uncertainty as to:
- The value of the market for the product
- The market share that the product can capture
At the very least, value estimates should be produced based on best, worst, and most likely value scenarios. For this reason, RUP refers to a range of different value scenarios for products:
Value can be characterized for conservative, aggressive, or nominal states (calculations) and should be considered in the analysis. Conservative states usually will include hard savings and highly likely savings areas (seen or predicted), while an aggressive approach will take hard- and soft-savings examples together for the ROI calculations. Nominal cases will take a middle-of-the-road approach and will be consistent with most acceptable/believable ROI case studies. Aggressive estimates are often used for soft-dollar savings, or when baseline data is not verifiable.9
The value model will have inputs such as market size, growth rate, prices, discounts, and lead-in to sales of other products; the model computes how these combined inputs translate to license sales. The model may also include contextual factors, such as macroeconomic growth statistics, competitive and regulatory requirements, pricing (discounts), and so on, that will affect the value.
In all cases, a range of value estimates, even a small one, is better than a single "point" estimate based on the assumption that all "will go well." In other words, a single result is little better than no result, especially where the marketplace is concerned.
Now let's discuss Sensitivity Analysis and Monte Carlo Simulation.
Sensitivity Analysis is a technique that can determine which uncertainties in the inputs to a value model will produce the greatest effects on output -- software product value. If a small change in, say, the market growth rate results in relatively large changes in value, then this particular input must be measured accurately and tracked closely, because the outcome is clearly "sensitive" to that particular input.
Sensitivity Analysis studies how the value model depends upon its inputs, structure, and its underlying assumptions, and it can show how the output of a model can be apportioned, qualitatively or quantitatively, to different sources of variation in inputs.
Sensitivity Analysis can also be used to increase confidence in the model and its predictions by providing an understanding of how the model responds to changes in its inputs, whether in the data used to calibrate it, in the model structure, or in the model's independent variables.
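A one-at-a-time sensitivity check can be sketched as follows. The toy value model (revenue minus a fixed cost) and all of its numbers are invented for illustration; the point is the technique of bumping each input and observing the effect on the output:

```python
def product_value(market_size, market_share, price, fixed_cost=1_000_000):
    """Toy value model: license revenue minus a fixed development cost.

    Both the model and its parameters are illustrative assumptions.
    """
    return market_size * market_share * price - fixed_cost

base = {"market_size": 100_000, "market_share": 0.05, "price": 400.0}
base_value = product_value(**base)

# One-at-a-time sensitivity: bump each input by +10% and record the
# resulting percentage change in the model's output.
for name in base:
    bumped = dict(base, **{name: base[name] * 1.10})
    delta = (product_value(**bumped) - base_value) / base_value
    print(f"{name}: {delta:+.1%} change in value for a +10% change in input")
```

Because of the fixed cost, a 10 percent change in any input here produces a 20 percent change in value: the output is more sensitive than the inputs, which is exactly what this kind of analysis is meant to reveal.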
Monte Carlo Simulation
A Monte Carlo Simulation involves the use of random numbers and probability to find solutions to complex problems. The term was first coined by Stanislaw Ulam and Nicholas Metropolis in reference to games of chance, a popular attraction in Monte Carlo, in the Principality of Monaco.
The PMI (Project Management Institute) defines Monte Carlo Simulation as:
A technique that performs a project simulation many times to calculate a distribution of likely results.
A Monte Carlo Simulation typically uses random-number generators10 to create multiple scenarios of a model by repeatedly sampling values from the probability distributions of the various input variables.
If enough data is available, and the model is realistic, the final result of the simulation is an estimate of project value (often expressed as NPV) along with some measure of variance (one standard deviation of the NPV) that expresses the risk of the project. The estimate of risk is calculated from the distribution (curve) of value (NPV) estimates that are generated by the simulation.
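A minimal Monte Carlo sketch of this idea, with an assumed normal distribution for the uncertain annual cash inflow (all parameters invented for illustration), looks like this:

```python
import random
import statistics

def npv(rate, cash_flows):
    """NPV = sum of C_t / (1 + r)^t over the cash-flow periods."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

def simulate(trials=10_000, seed=42):
    """Sample the uncertain annual inflow from an assumed normal
    distribution and collect the resulting NPV distribution.

    Returns (expected NPV, one standard deviation of NPV) -- the latter
    serving as the risk measure described in the text.
    """
    random.seed(seed)
    results = []
    for _ in range(trials):
        annual = random.gauss(mu=45.0, sigma=10.0)  # uncertain yearly inflow
        results.append(npv(0.10, [-100, annual, annual, annual]))
    return statistics.mean(results), statistics.stdev(results)

mean_npv, risk = simulate()
print(f"expected NPV = {mean_npv:.1f}, risk (1 std dev) = {risk:.1f}")
```

In a realistic study, each uncertain input (market size, share, price) would get its own distribution rather than a single sampled inflow, but the mechanics are the same.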
Software development projects are expensive, risky, and usually present the sponsoring organization with uncertainty in the commercial lifecycle. In general, a manager can benefit by waiting as long as possible before committing funds to a project or before locking in to a set of features. In practical terms, delaying a project commitment has a double benefit:
- It protects the pool of investment capital available for projects.
- It holds that pool of capital until it can be sunk into the best possible design features.
On the other hand, delay may create the risk of a missed opportunity. How do we balance these benefits with the risk?
"Real Options" is a term coined by Stewart Myers in 1977 and refers to the application of options pricing theory to the valuation of non-financial or "real" investments.
In 1972, Parnas introduced information hiding as an approach to devising modular structures for software designs.11 The approach promised to dramatically improve the adaptability of software and is now an accepted principle in object-oriented analysis and design theory.
Baldwin and Clark12 appear to have been the first to observe that the value of modularity in design (of computer systems) can be modeled as options. They make a statistical assumption: that the values of independently developed alternatives to an existing module are normally distributed about its value.
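Baldwin and Clark's statistical assumption can be made concrete with a small simulation: develop one alternative to an existing module, keep it only if it is better, and measure the expected gain. This sketch is only an illustration of the idea, not their actual model:

```python
import random
import statistics

def substitution_option_value(sigma, current=0.0, trials=100_000, seed=1):
    """Expected gain from developing one alternative module and keeping
    it only if it beats the current one: E[max(X - current, 0)], with
    X ~ Normal(current, sigma) as in the Baldwin-Clark assumption.

    All parameters are illustrative; this is a sketch, not their model.
    """
    random.seed(seed)
    gains = [max(random.gauss(current, sigma) - current, 0.0)
             for _ in range(trials)]
    return statistics.mean(gains)

# The option to substitute grows more valuable with uncertainty (sigma):
for sigma in (1.0, 2.0, 4.0):
    print(sigma, round(substitution_option_value(sigma), 2))
```

For a normal distribution centered on the current value, E[max(X - current, 0)] works out to sigma/sqrt(2*pi), roughly 0.40 * sigma, so the option value grows linearly with uncertainty: the more uncertain the outcome of redesigning a module, the more the freedom to discard a worse alternative is worth.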
Today, software engineering processes based on options models are becoming better understood.13 For example, Boehm and Port refer to "design for change" and the "information hiding" approach proposed by Parnas:
The promotion of Parnas' concept of information hiding modules, for example, is based on the following rationale: most of the life-cycle cost of a software system is expended in change [Lientz-Swanson, 1980]. For a system to create value, the cost of an increment should be proportional to the benefits delivered; but if a system has not been designed for change, the cost of change (rewriting code) will be disproportionate to the benefits [Parnas, 1972]. Information hiding modularity is a key to design for change. Design for change is thus promoted as a value-maximizing strategy, provided one could anticipate changes correctly. While this is a powerful heuristic, we lack adequate models of the connections between this technical concept and value creation under given circumstances.14
Furthermore, Dean Johnson et al. suggest that uncertainty about the future directions of the marketplace can be managed by designing modularity (e.g., a portability layer) into the application in the early stages of the project:
The inclusion of such a layer involves an additional up front development cost. However it can save money or create value at a future stage of development by allowing designers to make changes quickly instead of rewriting large chunks of code. Using a real options approach we estimate the value of the flexibility that such portability confers. Sensitivity analysis is applied to examine the relationship between value of portability and changes in factors such as the probability that the application will have to be rewritten, expected application life, and the volatility of future redevelopment costs.15
And on IBM developerWorks, August 2005, the authors of "Service-oriented agility: Methods for successful Service-Oriented Architecture (SOA) development, Part 2: How to mix oil and water,"16 refer to what could be a "flexible," Real-Options approach to successful SOA development in their "Principle 3: Decide as late as possible":
Deciding as late as possible means leaving service interface details open until there is clear evidence of what they should look like, rather than pretending to know everything. This forces the development teams to synchronize with the rest of the company frequently, and it also results in a better services model.
The authors explicitly mention "options-based software economics" in their "economic model of engagement" section:
Traditionally, software development is seen as something that generates cost. Recently, software development is seen as something that generates revenue that can help to exploit economic options. Options-based software economics draws analogies from the financial markets: short iterations that deliver running software are seen as real options. Just like financial options, real options provide the benefit that you can buy the chance to gain from an uncertain future by investing in just a little bit now. But it is not necessary to go that far: It will even benefit a SOA engagement if you create a simple economic model of the engagement and use it to drive development decisions. With the economic model in their hands, the team members are empowered to figure out for themselves what is important for the business: They can all work from the same assumptions. If you consider eliminating features, your marketing department might speculate that they would sell "X" percent fewer units without these features.17
Flexibility is built into the iterative approach described by RUP. In fact, RUP iterations can be visualized as being "mini projects" with a plan, deliverables, and assessment. They may also be seen from the value and business perspectives as "mini investments" that keep the software design in play while plans and estimates firm up and uncertainties decrease. In other words, iterative development brings the benefits of delay while managing its associated risk of missing a market opportunity.
Over the years, RUP has evolved into a process engineering platform that is now called IBM Rational Method Composer. RMC enables teams to define, configure, tailor, and practice a consistent software development process. An RMC Delivery Process provides a complete lifecycle model that has been detailed by sequencing Method Content in work breakdown structures (WBS). Software designers can author and publish their process as a process Web site or export it to IBM Rational Portfolio Manager as a work breakdown structure.
Project managers can then execute and track the projects with a range of earned value metrics, while a portfolio manager can evaluate investment metrics such as NPV, IRR, and gross profit in the Portfolio Dashboard feature of the portfolio management toolset.
Creating sustainable value
The tools and methods provided by IBM for software project valuation cannot in and of themselves create value, but they can help customers understand which projects are most likely to create sustainable value for their businesses. With this insight, project managers and business leaders can approach their projects with more flexibility and adaptability, which adds value to the overall software portfolio.
Software development managers need an approach, method, and language to assess the economic value of projects if they are to allocate resources properly to competing initiatives. Outsiders such as customers, investors, shareholders, and auditors usually like to think that important business decisions are founded on valid and reliable valuation methods.
Practitioners using IBM Rational products can apply techniques such as NPV and IRR to assess software economic value and a COCOMO II-based algorithm to assess effort. Furthermore, they can apply software development principles and practices through Rational Unified Process, Rational Method Composer, and IBM Rational Portfolio Manager to create value for their customers, while simultaneously managing cost, risk, and time-to-market for their own businesses.
1 In particular, see Walker Royce, Software Project Management: A Unified Framework, Addison-Wesley, 1998.
2 B. Boehm and K.J. Sullivan, "Software Economics: A Roadmap," in The Future of Software Engineering, 22nd International Conference on Software Engineering, June 2000.
3 This paper is available as a free download (after registration) from ZDNet UK at http://whitepapers.zdnet.co.uk/0,39025945,60081824p-39000572q,00.htm
5 See the Rational Unified Process, available from IBM. For the latest information on available RUP modules and packaging, see also this month's article by Per Kroll on IBM Rational Method Composer, at http://www.ibm.com/developerworks/rational/library/nov05/kroll/index.html
6 From the Rational Unified Process, op. cit.
7 from Investopedia, at http://www.investopedia.com/terms/n/npv.asp
8 From the Rational Unified Process, op. cit.
9 From the Rational Unified Process, op. cit.
11 D.L. Parnas, "On the criteria to be used in decomposing systems into modules," in Communications of the ACM, pp. 1053--1058, December 1972.
12 C. Baldwin and K. Clark, "Modularity and Real Options," Harvard Business School Working Paper 93-?, 1993. See also University of Virginia Department of Computer Science Technical Report CS-2001-13, submitted for publication to ESEC/FSE 2001.
13 A full range of references to the domain are available at http://www.niwotridge.com/Resources/PM-SWEResources/SWOptions.htm
14 Barry Boehm and Dan Port, University of Southern California and Kevin Sullivan, University of Virginia -- White Paper for Workshop on New Visions for Software Design and Productivity: "Value Based Software Engineering," available as a MS Word document at: http://www.isis.vanderbilt.edu/sdp/Papers/Barry%20Boehm%20(Value%20Based%20Software).doc
15 Dean L. Johnson, Brent J. Lekvin, and James E. Northey, Michigan Technological University, Michigan Technological University, and Jordan & Jordan, respectively -- "Some Evidence Concerning the Economic Value of Software Portability: A Real Options Approach," in Financial Decisions, Spring 2005, available from FinancialDecisionsOnline at: http://www.financialdecisionsonline.org/current/JohnsonLekvinNorthey.pdf
16 Gottfried Luef, Christoph Steindl, and Pal Krogdahl, "Service-oriented agility: Methods for successful Service-Oriented Architecture (SOA) development, Part 2: How to mix oil and water," IBM developerWorks, August 2005, at: http://www-128.ibm.com/developerworks/webservices/library/ws-agile2.html