October 4, 2019 By Bryan Casey 6 min read

A closer look at the fail-fast philosophy—an iterative, hypothesis-driven approach to developing and launching new ideas.

The fail-fast philosophy and more traditional strategy and planning processes each carry their own intellectual and emotional baggage about “The Correct Way Work Ought to Be Done.”

There are some pretty hot takes on both sides saying fail fast is either a perfect combination of Lean principles with the scientific method or snake oil that will never replace the core work of business leaders, which is analyzing (imperfect) data and taking calculated risks.

The more time I spent thinking about these hot takes, the more ridiculous they both sounded. The reality is that fail-fast and traditional strategy are complementary approaches that can collectively accelerate getting from problem to solution.

Maybe that’s a boring “answer is in the middle” reality check, but it’s almost certainly true.

So, what is failing fast?

Failing fast is a philosophy that takes an iterative, hypothesis-driven approach to developing and launching new ideas. It is heavily related to the concept of a Minimum Viable Product (MVP) and is premised on getting early feedback that can either validate or invalidate an idea.

As the name implies, a central tenet of the fail-fast approach is working through the cycle of hypothesis, MVP, and results analysis as quickly as possible, functionally increasing the velocity at which an organization can learn and adapt to change.

This approach lets good ideas scale more quickly, while ideas that aren’t driving their intended outcomes can be tweaked or retired, reducing the time and cost sunk into programs unlikely to succeed.

The net result is organizations that can test and innovate more cost-effectively and spend more time working on things that do work and less time on the things that don’t.
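As a rough illustration, the hypothesis → MVP → results cycle described above can be sketched as a simple loop. The `Experiment` class, the example hypotheses, and their outcomes below are invented for the sketch; this is not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One pass through the fail-fast cycle (names are illustrative)."""
    hypothesis: str
    mvp: str              # the smallest artifact that can test the hypothesis
    validated: bool = False  # in practice, set by real market/user feedback

def run_cycle(experiments):
    """Sort ideas into 'scale' and 'retire' buckets based on MVP results."""
    scaled, retired = [], []
    for exp in experiments:
        (scaled if exp.validated else retired).append(exp.hypothesis)
    return scaled, retired

ideas = [
    Experiment("Users want one-click checkout", "A/B test on 5% of traffic", validated=True),
    Experiment("Users want a loyalty program", "Landing-page signup test", validated=False),
]
scaled, retired = run_cycle(ideas)
print(scaled, retired)  # validated ideas get scaled, the rest are retired cheaply
```

The point of the sketch is the shape of the loop: a cheap test per hypothesis, then an explicit scale-or-retire decision, repeated as fast as the organization can learn.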

Failing fast works together with thoughtful analysis and big bets

The above description seems, superficially, hard to argue with, and in some ways it is. There’s something objectively methodical and scientific about the approach. But detractors will say that analysis and calculated risk-taking are the lifeblood of any successful organization.

IBM’s own Rob Thomas, the GM of our Analytics and AI business, wrote a few years back that “suggesting a pilot, instead of doing it, is a sign of weakness.”

So, what gives? Does the fail-fast mentality live in conflict with the more traditional approaches to strategic planning and risk-taking?

The answer to this question is where some of the binary responses to fail fast come from. But the truth is that the fail-fast philosophy can not only coexist with the more traditional approaches to strategy, analysis, and risk-taking, it can augment those approaches and actually help make them more successful.

For example, based on all the available information, you might decide that it is essential for your organization to break into a new geographic market. There is enough data to confidently make that decision and begin executing against it with both urgency and commitment. Not only will there be no “testing” of this strategy, more likely the organization will remain committed to it through any initial setbacks and bumps in the road.

But this is where the traditional strategy process and the fail-fast mantra can begin to augment one another even more. Where a fail-fast approach can complement the strategic decision to enter a market is the way it can quickly validate or invalidate the “how.” While the desired end destination might be clear, if the path there is less clear, testing and iteration can help light the way until the organization finds its footing.

In this case, using a fail-fast approach to determine whether or not to enter a given market would be the wrong move. An initial lack of success, paired with fail-fast thinking, might encourage you to abandon a strategic market just because your first pass at expansion failed. Instead, the decision to enter the market is a product of strategic analysis and planning, and failing fast plays the role of getting to the “how” more quickly.

Fail fast, testing, and iteration can be an integrated part of the strategic process

Perhaps the best example of how the fail-fast philosophy can be centrally integrated into an organization’s strategy process is the approach described by A.G. Lafley and Roger Martin in Playing to Win: How Strategy Really Works.

The following represents the strategy process defined in that work:

  • Frame the choice: Turn challenges into mutually exclusive approaches that might address the issue.
  • Generate possibilities: Expand the list of options to be as comprehensive as possible.
  • Specify conditions: Identify conditions that must be true in order for the possibility or approach to be a viable solution.
  • Identify barriers to choice: Determine which of those conditions is least likely to hold.
  • Design tests: Build a test of each hypothesis that the entire strategy team agrees is valid.
  • Conduct tests: Execute the tests and review results.
  • Choose: Based on results, choose the path forward.

On its surface, this approach might not look like what you typically think of when you hear “fail fast,” but it demonstrates that the tradeoff between strategic planning and fail-fast approaches is actually a false choice. Designing thoughtful, creative, and strategic tests can be a critical component of an organization’s core strategy process.

In some ways, this approach takes the fail-fast mentality even further by focusing the initial tests on the elements of a strategy least likely to succeed. The reasoning: if a strategy rests on four assumptions, and you test the easiest three first, and all three succeed, and then the final, most difficult test fails, the time spent on the three successful tests was wasted.

In our desire to see ideas succeed, it would be easy to start with the assumptions most likely to hold. But a desire to get to the right answer quickly should push us to test the most difficult pieces first.
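The arithmetic behind that ordering argument can be made concrete with a small sketch. The probabilities and costs below are hypothetical, chosen only to show how testing the shakiest assumption first reduces the expected cost of reaching a verdict:

```python
# Hypothetical market-entry strategy resting on four assumptions.
# Each has a probability of holding and a cost (in weeks) to test.
assumptions = [
    {"name": "customers will pay",     "p_holds": 0.40, "cost": 2},
    {"name": "channel partners exist", "p_holds": 0.80, "cost": 2},
    {"name": "we can hire locally",    "p_holds": 0.90, "cost": 2},
    {"name": "logistics are feasible", "p_holds": 0.95, "cost": 2},
]

def expected_cost(order):
    """Expected testing cost if you stop at the first failed assumption."""
    total, p_all_prior_hold = 0.0, 1.0
    for a in order:
        # You only run this test if every earlier assumption held.
        total += p_all_prior_hold * a["cost"]
        p_all_prior_hold *= a["p_holds"]
    return total

easiest_first = sorted(assumptions, key=lambda a: -a["p_holds"])
hardest_first = sorted(assumptions, key=lambda a: a["p_holds"])
print(round(expected_cost(easiest_first), 2))  # more expected weeks spent
print(round(expected_cost(hardest_first), 2))  # fewer expected weeks spent
```

With these made-up numbers, starting with the assumption least likely to hold cuts the expected testing time by several weeks, because a fatal flaw surfaces before the easier tests are ever run.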

Best practices for failing fast

The fail-fast mentality can be applied to problems both large and small, but getting results with this approach rests on some important pillars that can determine the success and impact of the program.

I asked my colleague (and general growth wizard) Peter Ikladious what he thinks of failing fast and he said, “I find that fail fast is a great concept that, despite many orgs stating it, it’s hard for them to actually do it. Fail fast needs top leadership buy-in. They have to recognize that there are no magic bullets and that not everything will work as expected. Without leadership support, teams are either too scared to try anything that could fail or do things in private which limits the learning effects of failing fast.”

There are two really essential ideas in Peter’s comments that ring true with my experience and the approaches within Playing to Win.

The first is that leadership needs to genuinely buy into the approach. If teams are given the opportunity to explore new ideas, along with the confidence that their jobs, careers, bonuses, and promotions aren’t at risk if an idea doesn’t succeed, that’s an environment where testing and innovation can thrive.

In my experience, it’s even better to reframe the notion of “failure” altogether. The testing process is about both the speed at which a business can get to the right answer and the learnings along the way.

If the supposed failures are documented and internally communicated in really clear, compelling ways, that experience can help other teams avoid testing the same ideas over and over again. In that sense, the results of any test, regardless of success or failure, represent an increase of an organization’s institutional knowledge if they are documented and shared appropriately. That knowledge has value.

A successful test could be one that doesn’t work but that no one in your organization ever wastes time attempting again.

This thought bridges into the second idea, which is that the real impact of tests can only be achieved when they are paired with transparency. This transparency is essential for generating buy-in, driving scale, and sharing learnings about what doesn’t work. Without the “learning effects,” as Peter described them, much of the value in terms of institutional knowledge is never created.

Summary

Failing fast is not the answer to every business problem that exists in the world, but it is a useful methodology for managing uncertainty and risk in an intelligent, cost-effective way. It doesn’t tell you what problem to solve, but it can help you assess the best way to solve it.

While not rocket science, the keys to any good fail-fast approach rest on the following:

  • Designing valid, strategically impactful tests.
  • Creating an environment that values this approach and redefines failure.
  • Building a culture of transparency and institutional knowledge.

Like any tool, failing fast works best when it’s paired with the right job. Failing fast is not a substitute for the work organizations do on strategic analysis and planning but it can be a useful tool incorporated into that process. It can help quickly validate or invalidate the core assumptions any idea rests on. It can make a company more honest with itself much more quickly. It can also increase an organization’s appetite for risk and innovation by minimizing the time and expense of evaluating new ideas.

The fail-fast philosophy can ultimately be applied to small tests and programs without much effort or thought, but it can take some really creative thinking to apply it to an organization’s larger problems. 

When applied correctly, fail-fast approaches can, ideally, be used to increase the size of the ideas an organization tests while decreasing their cost. Any organization that can get that mix right and scale learnings and outcomes effectively will be able to move faster and with greater impact.

That should be the real, final objective of any embrace of fail-fast philosophy. It’s not about increasing the volume of tests an organization does, it’s about increasing its net impact, with testing as a means to that end.

