Working out how much a project will cost

When customers visit the IBM Cloud Garage to discuss a potential project, one of the most frequently asked questions is “how much is my project going to cost?”.

It’s generally accepted that – without a working time machine – at least one corner of the project management triangle needs to be flexible. Trying to nail down scope, cost, and schedule at the beginning of a project is a notorious anti-pattern; delivering a pre-determined list of features on-schedule within a given budget may give the illusion of success, but quality will almost always suffer in some way. These latent problems, or technical debt, can significantly affect the success of a project in the field, and also the cost of future extensions.

In the Garage, we’re convinced that time-boxing iterations while keeping scope flexible is the way to go. We work at a sustainable pace, instead of rushing to meet a deadline, inadvertently injecting exhaustion-induced defects, and then burning out and collapsing to recover until the next rush. In order to enable this sustainable pace, we can’t commit to a detailed feature list at the beginning of the project.

This flexibility can initially feel uncomfortable for teams who are used to more rigidly scoped projects. However, it makes products far better, for two important reasons. No one is cutting corners – or burning themselves out and making errors – to meet the letter of a contractual obligation. More importantly, the beginning of a project is when everyone involved (including the customer) knows least about what the actual requirements are, so it’s the worst time to make predictions. Requirements will change as the project progresses; if they don’t, that means you’re not learning as you go. Writing the wrong requirements in stone into a contract at the beginning of a project is pretty self-defeating. The end result is that the development team is too busy implementing requirements that no one really wants, just because they’re in the contract, to pivot and work on the things we’ve learned do have value. Over-specifying feature lists at the beginning of a project is a terrible risk, both technically and in terms of the user value of the finished product.

If we don’t nail down a feature list, how do we make a commitment about how much value we’ll deliver? In the Garage, we’re proud of what we do. I know my team is awesome, and they deliver great products. However, if we’re working with a customer we’ve never worked with before, they don’t necessarily already know we’re awesome. There’s no pre-existing trust relationship, and they’re possibly comparing our value proposition to that of another vendor. It’s difficult for a customer to evaluate value-for-money of the Cloud Garage unless we give some kind of estimate about how much a project will cost (the money), as well as describing how we can provide unique solutions for business problems (the value). Once this estimate is in place, we can then establish a partnership to do something amazing.

There’s another aspect, too. In general, businesses have capped budgets, determined either by how much seed funding they’ve received or by internal budgeting decisions. A business needs to decide whether it can afford to build the minimum viable product that allows it to start testing business hypotheses, before starting that build. Building half a product that never reaches viability, and then throwing it out because the money’s run out, is bad business. In other words, a customer needs enough information to be able to decide whether to go ahead with a project, and by definition that information is needed at the beginning of a project.

Our job in the Garage is to work with customers to make a viability assessment, so that we can then help them build a great product. This is how we blend the need for estimates and budgets with the flexibility that ensures everyone ends up with a great product.

Sizing methodologies

There’s a growing body of academic and industry research about the optimum technique for sizing, including a whole range of models for software cost, from the simple to the super complex. Some are surprisingly old (dating back to the 1980s), and some are impressively mathematical. We aim for continual improvement in the Garage, so we’re following the research closely, and we have experimented with a number of different sizing methodologies. We need something low cost, because experience has taught us that spending more time on estimates doesn’t actually reduce project risk. On the other hand, the estimate needs to have enough predictive value to allow a customer to make a sound go/no-go decision.

These are the ones we recommend:

    • The fastest approach is to estimate projects based on our experience of similar projects. As well as being quick, this is surprisingly effective – when we’ve tried different techniques side by side, we’ve found that ‘previous experience’ estimates line up pretty well with ‘break down and add up’ estimates, and of course they’re faster to produce. The ‘previous experience’ approach falls down when we do a project which is the first of its kind, and since the Garage specialises in innovation, that’s actually fairly often.

    • When a customer is exploring uncharted territory, our starting point is the process described in Kent Beck and Martin Fowler’s Planning Extreme Programming. The basic principle is to break the project down into smaller pieces of work, estimate those, and then add them back up.

    • Another way of adding a bit more rigour to the ‘experience’ estimate is to lay out a straw-man architecture with CRC cards. Since we’re thinking at the architectural level, instead of “Class-Responsibility-Collaborator”, we do something more like “Component-Responsibility-Collaborator”. Actually, it’s “Component-or-Actor-Responsibility-Collaborator”, but that acronym gets pretty messy. We use different colour post-its for “things we write”, “things we interact with”, and “actors”. Our aim is to get some of these relationships out on paper and get a sense of the shape of the project, rather than to produce a final architecture. What the rough architecture gives us is a “landscape” that we can then compare to other projects we’ve done in the past, to produce an experience-based effort estimate.

    • Another approach goes the other way: instead of trying to make the gut feel more rigorous, we acknowledge its uncertainty head-on. What the ‘real’ project ends up costing will land somewhere on a spectrum, and a whole bunch of business and technical factors can influence where. So rather than trying to guess those factors away, we can simply advise a customer to plan for both a likely-best and likely-worst case: “we think this project will take somewhere between two and four three-week phases.” I call this spectrum-sizing.
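The ‘break down and add up’ and spectrum-sizing ideas combine naturally: give each piece of work a best-case and worst-case estimate, and sum each column to get an overall range. A minimal sketch, with entirely hypothetical work items and numbers (estimates here are in ideal days):

```python
def spectrum_estimate(items):
    """Sum per-item (best, worst) estimates into an overall range."""
    best = sum(low for _, low, _ in items)
    worst = sum(high for _, _, high in items)
    return best, worst

# Hypothetical breakdown of an MVP into work items: (name, best, worst).
work_items = [
    ("user login", 2, 4),
    ("catalogue browsing", 3, 6),
    ("checkout flow", 5, 10),
]

low, high = spectrum_estimate(work_items)
print(f"Estimate: between {low} and {high} ideal days")
# → Estimate: between 10 and 20 ideal days
```

The range, not the midpoint, is the deliverable: a customer can then check whether even the likely-worst case fits inside their budget cap before committing.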

Explore, then size

All of the sizing methodologies I’ve described work better when there’s a good understanding of what should be built. Some of them don’t work at all without that understanding. We recommend all our customers start their projects with a design thinking workshop. These workshops bring users, business stakeholders, and technical experts into the same room to define a minimum viable product which is small in scope, technically feasible, and addresses a *real* user need. The workshops have all sorts of benefits (which is why we recommend them so whole-heartedly), but one benefit is that they move us from the position where we know least to one where we know much more about what our users actually want us to build. That understanding will continue to shift, but the workshop outputs are a valuable input into an estimation exercise.


As I was writing this blog, Kent Beck, the father of extreme programming, posted a blog on the same subject. I won’t try to reproduce it here (you should just go read it, because it’s great), but I was pleased to see that some of his arguments line up with what I’d already written. Kent points out that in an ideal world one would do everything, but in a world where resources are finite, and doing one thing means not doing another thing, estimates help us make informed choices about where we should put our resources. He summarises his position as “Estimate Sometimes”. “Estimate Sometimes” isn’t the catchiest strapline, but it’s the right thing to do, for us and our customers. We need to make sure, though, that our estimates are not turned into prescriptions about duration or commitments about detailed feature lists. Experience has taught us that it is unwise to make those sorts of decisions at the point in the project cycle where we know least. The Bluemix Garage’s final recommendation? Estimate sometimes, and then set that estimate aside and use all the feedback we can get over the course of the project to make sure we deliver the right thing. Estimates are necessary for good business reasons, but the most important deliverable from a Garage project is an engaging product.

