February 6, 2018 By Holly Cummins 5 min read

Working out how much a project will cost

When customers visit the IBM Cloud Garage to discuss a potential project, one of the most frequently asked questions is “how much is my project going to cost?”.

It’s generally accepted that – without a working time machine – at least one corner of the project management triangle needs to be flexible. Trying to nail down scope, cost, and schedule at the beginning of a project is a notorious anti-pattern; delivering a pre-determined list of features on-schedule within a given budget may give the illusion of success, but quality will almost always suffer in some way. These latent problems, or technical debt, can significantly affect the success of a project in the field, and also the cost of future extensions.

In the Garage, we’re convinced that time-boxing iterations while keeping scope flexible is the way to go. We work at a sustainable pace, instead of rushing to meet a deadline, inadvertently injecting exhaustion-induced defects, and then burning out and collapsing to recover until the next rush. In order to enable this sustainable pace, we can’t commit to a detailed feature list at the beginning of the project.

This flexibility can initially feel uncomfortable for teams who are used to more rigidly scoped projects. However, it makes products far better, for two important reasons. No one is cutting corners – or burning themselves out and making errors – to meet the letter of a contractual obligation. More importantly, the beginning of a project is when everyone involved (including the customer) knows least about what the actual requirements are, so it’s the worst time to make predictions. Requirements will change as the project progresses; if they don’t, that means you’re not learning as you go. Setting the wrong requirements in stone in a contract at the beginning of a project is pretty self-defeating. The end result is that the development team are too busy implementing requirements that no one really wants, just because they’re in the contract, to pivot and work on the things we’ve learned do have value. Over-specifying feature lists at the beginning of a project is a terrible risk, both technically and in terms of the user value of the finished product.

If we don’t nail down a feature list, how do we make a commitment about how much value we’ll deliver? In the Garage, we’re proud of what we do. I know my team is awesome, and they deliver great products. However, if we’re working with a customer we’ve never worked with before, they don’t necessarily already know we’re awesome. There’s no pre-existing trust relationship, and they’re possibly comparing our value proposition to that of another vendor. It’s difficult for a customer to evaluate the value-for-money of the Cloud Garage unless we give some kind of estimate of how much a project will cost (the money), as well as describing how we can provide unique solutions for business problems (the value). Once this estimate is in place, we can then establish a partnership to do something amazing.

There’s another aspect, too. In general businesses will have capped budgets, determined either by how much seed funding they’ve received, or by internal budgeting decisions. A business will need to decide if they can afford to build the minimum viable product that allows them to start testing business hypotheses, before starting that build. Building half a product that never reaches viability, and then throwing it out because the money’s run out, is bad business. In other words, a customer needs enough information to be able to decide whether to go ahead with a project, and by definition that information is needed at the beginning of a project.

Our job in the Garage is to work with customers to make a viability assessment, so that we can then help them build a great product. This is how we blend the real need for estimates and budgets with the flexibility that ensures everyone gets to a great product.

Sizing methodologies

There’s a growing body of academic and industry research about the optimum technique for sizing, including a whole range of models for software cost, from the simple to the super complex. Some are surprisingly old (dating back to the 1980s), and some are impressively mathematical. We aim for continual improvement in the Garage, so we’re following the research closely, and we have experimented with a number of different sizing methodologies. We need something low cost, because experience has taught us that spending more time on estimates doesn’t actually reduce project risk. On the other hand, the estimate needs to have enough predictive value to allow a customer to make a sound go/no-go decision.

These are the ones we recommend:

    • The fastest approach is to estimate projects based on our experience of similar projects. As well as being quick, this is surprisingly effective – when we’ve tried different techniques side by side, we’ve found that ‘previous experience’ estimates line up pretty well with ‘break down and add up’ estimates, and of course they’re faster to produce. The ‘previous experience’ approach falls down when we do a project which is the first of its kind, and since the Garage specialises in innovation, that’s actually fairly often.

    • When a customer is exploring uncharted territory, our starting point is the process described in Kent Beck and Martin Fowler’s Planning Extreme Programming. The basic principle is to break the project down into smaller pieces of work, estimate each piece, and then add the estimates back up.

    • Another way of adding a bit more rigour to the ‘experience’ estimate is to lay out a straw-man architecture with CRC cards. Since we’re thinking at the architectural level, instead of “Class-Responsibility-Collaborator”, we do something more like “Component-Responsibility-Collaborator”. Actually, it’s “Component-or-Actor-Responsibility-Collaborator”, but that acronym gets pretty messy. We use different colour post-its for “things we write”, “things we interact with”, and “actors”. Our aim is to get some of these relationships out on paper and get a sense of the shape of the project, rather than to produce a final architecture. What the rough architecture gives us is a “landscape” that we can then compare to other projects we’ve done in the past, to produce an experience-based effort estimate.

    • Another approach is to go the other way and make the gut-feel estimate deliberately less precise. In other words, the best way to handle the huge uncertainty in cost estimation is just to acknowledge it. What the ‘real’ project ends up costing will land somewhere on a spectrum, and there is a whole range of business and technical factors that can influence where. So rather than trying to guess those factors away, we can simply advise a customer to plan for both a likely-best and likely-worst case: “we think this project will take somewhere between two and four three-week phases.” I call this spectrum-sizing.
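The ‘break down and add up’ and spectrum-sizing ideas above combine naturally: give each piece of work a best-case and worst-case estimate, sum the two columns, and express the totals as a range of phases rather than a single number. The sketch below illustrates the arithmetic; the work items, day counts, and phase length are invented for the example, not Garage figures.

```python
import math

# One three-week phase, assuming five working days per week.
PHASE_DAYS = 15

# (work item, likely-best days, likely-worst days) -- invented examples
work_items = [
    ("user login", 3, 6),
    ("catalogue browsing", 5, 10),
    ("checkout flow", 8, 16),
    ("admin dashboard", 5, 12),
]

# Sum the best-case and worst-case columns separately.
best = sum(low for _, low, _ in work_items)
worst = sum(high for _, _, high in work_items)

# Round each total up to whole phases to express a spectrum estimate.
best_phases = math.ceil(best / PHASE_DAYS)
worst_phases = math.ceil(worst / PHASE_DAYS)

print(f"Estimate: between {best_phases} and {worst_phases} three-week phases")
# -> Estimate: between 2 and 3 three-week phases
```

The point of keeping the two columns separate, rather than averaging them, is that the gap between them is itself useful information: a wide spread tells the customer (honestly) how much uncertainty remains.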

Explore, then size

All of the sizing methodologies I’ve described work better when there’s a good understanding of what should be built. Some of them don’t work at all without that understanding. We recommend all our customers start their projects with a design thinking workshop. These workshops bring users, business stakeholders, and technical experts into the same room to define a minimum viable product which is small in scope, technically feasible, and addresses a *real* user need. The workshops have all sorts of benefits (which is why we recommend them so whole-heartedly), but one benefit is that they move us from the point where we know least to a position of much greater knowledge about what our users actually want us to build. That understanding will continue to shift, but the workshop outputs are a valuable input into an estimation exercise.


As I was writing this blog, Kent Beck, the father of extreme programming, posted a blog on the same subject. I won’t try to reproduce it here (you should just go read it, because it’s great), but I was pleased to see that some of his arguments line up with what I’d already written. Kent points out that in an ideal world one would do everything, but in a world where resources are finite, and doing one thing means not doing another thing, estimates help us make informed choices about where we should put our resources. He summarises his position as “Estimate Sometimes”.

“Estimate Sometimes” isn’t the catchiest strapline, but it’s the right thing to do, for us and our customers. We need to make sure, though, that our estimates are not turned into prescriptions about duration or commitments about detailed feature lists. Experience has taught us that it is unwise to make those sorts of decisions at the point in the project cycle where we know least. The Garage’s final recommendation? Estimate sometimes, and then set that estimate aside and use all the feedback we can get over the course of the project to make sure we deliver the right thing. Estimates are necessary for good business reasons, but the most important deliverable from a Garage project is an engaging product.
