Busting the biggest myths about modernization

5 minute read | January 27, 2021

A hybrid model, making efficient use of the strengths of all available systems, is the key to achieving the lowest possible cost per transaction and maintaining SLAs.

There are a number of good reasons for your organization to modernize its applications and architecture. You could be seeking to lower costs across development, maintenance and operations. Perhaps you want to improve user experiences or react more quickly to new business requirements and opportunities. Whatever the reason, you’re likely feeling overwhelmed by the sheer number of factors involved—and uncertain about the best way forward.

You’re not the first to tread this path, and you can learn from what has worked, and what hasn’t, for other businesses. That said, you’ve probably heard a few common misconceptions about modernization. Here are four big ones:

  1. Moving everything off of the IBM Z® platform will reduce costs and save millions of dollars every year.
  2. Moving read-only type applications (those that read data but do not update data) off of the IBM Z platform will always reduce costs.
  3. Modernization means re-writing the entire application.
  4. Application agility is only possible in the cloud.

As the buzz around cloud computing amplified a few years ago, many customers began moving some on-premises workloads to the cloud and creating new ones there. But after migrating the initial, “easy” 20% of their workloads, they stumbled at the highly complex 80% that remained, which carried more stringent requirements for performance, security and data consistency. These operations are best performed on the IBM Z platform. So, what does that mean for a modernization strategy?

Analysts and industry leaders have agreed on the answer: hybrid cloud. This framework has become the new normal across enterprise computing and can be achieved with a well-architected combination of cloud and the IBM Z platform.

So, let’s return to that first misconception. What would modernization look like if you decided to move everything to the cloud?

Breaking down operating costs: Best-case scenario

We’ve seen new executives try to make a splash when joining a company. For example, an executive promises to cut IT costs in half by creating a state-of-the-art reservation system and getting their company off of the IBM Z platform.

By their estimate, all they need is five years and $100 million. But their accountants do the math and realize that it would take at least 15 years to break even and begin realizing a return on the investment.

Let’s break that down. For the sake of this example, let’s say their annual operating cost for the IBM Z architecture is $20 million, including hardware, software, database, networking, and operational expenses. On the other hand, the company expects that a new, state-of-the-art cloud-based system would cost $10 million a year.

As they spend $20 million annually to develop their new system, they will also spend $20 million a year to run their existing system—so, after five years of development, the business has spent $200 million on their reservation systems. Then, after a decade of using the new cloud-based system, costing $10 million annually ($100 million over 10 years), they can expect to have incurred a total of $300 million in expenses.

If instead of pouring resources, money, and time into this new system, they’d remained on the IBM Z platform for those 15 years, their expense would equal $300 million as well.
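The arithmetic above can be sketched as a small cost model. All figures come from the example; this is an illustration of the break-even logic, not a cost forecast:

```python
# Illustrative cost model using the figures from the example above.
# All amounts are in millions of dollars.
DEV_YEARS = 5        # years to build the new cloud system
DEV_COST = 20        # annual development spend ($100M total over 5 years)
Z_RUN_COST = 20      # annual cost to run the existing IBM Z system
CLOUD_RUN_COST = 10  # expected annual cost of the new cloud system

def cumulative_migrate(year):
    """Total spent after `year` years when migrating to the cloud."""
    if year <= DEV_YEARS:
        # Paying for development and the old system in parallel.
        return year * (DEV_COST + Z_RUN_COST)
    return DEV_YEARS * (DEV_COST + Z_RUN_COST) + (year - DEV_YEARS) * CLOUD_RUN_COST

def cumulative_stay(year):
    """Total spent after `year` years when staying on IBM Z."""
    return year * Z_RUN_COST

# Find the break-even year: the first year migrating is no more expensive.
break_even = next(y for y in range(1, 50)
                  if cumulative_migrate(y) <= cumulative_stay(y))
print(break_even)                      # 15
print(cumulative_migrate(break_even))  # 300, i.e. $300M on either path
```

Running the model reproduces the article’s numbers: both paths hit $300 million at year 15, which is exactly the break-even point the accountants found.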

So, while this new executive effectively made a splash, it probably wasn’t the kind of splash they hoped for. Instead of optimizing what already worked, they doubled the expenses for five years, lost the opportunity costs to develop new functionality, and had to re-write and re-platform their applications and data for the cloud-based system. Beyond those initial pains, they now have a larger, more complicated environment to manage and debug. Instead of a single cluster environment on IBM Z, they’re now working with dozens of applications and database clusters.

This is all a best-case scenario. Because, if development is dragged out past the expected five years, or the new platform doesn’t run as cheaply as expected, their break-even point is now decades beyond where they originally expected.
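That sensitivity can be made concrete with a closed-form break-even: development years plus total development cost divided by the annual savings. A quick sketch using the same illustrative figures (the function and its inputs are hypothetical, not from the article):

```python
def break_even_year(dev_years, dev_total, z_run, cloud_run):
    """Year when cumulative migration cost first matches staying on IBM Z.
    Amounts in millions of dollars; returns None if migration never pays back."""
    savings = z_run - cloud_run  # annual savings once migrated
    if savings <= 0:
        return None              # cloud runs at the same cost or more: no payback
    return dev_years + dev_total / savings

print(break_even_year(5, 100, 20, 10))   # 15.0: the best case above
print(break_even_year(8, 150, 20, 10))   # 23.0: development drags and overruns
print(break_even_year(5, 100, 20, 18))   # 55.0: cloud savings are thinner than hoped
print(break_even_year(5, 100, 20, 22))   # None: cloud costs more to run
```

A three-year schedule slip with a 50% budget overrun pushes break-even from 15 to 23 years, and if the new platform saves only $2 million a year, it stretches past half a century.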

Breaking down operating costs: Worst-case scenario

Now, let’s talk about a worst-case scenario, where a newly developed solution actually costs more to run wholly on the cloud than it did on the IBM Z platform.

First, Company A wrote a brand-new cloud-native reservation platform costing upwards of $100 million to develop. After merging with Company B, which used an IBM Z-based reservation system, they had to determine which system to keep.

They did an apples-to-apples comparison of the cost of each bookable item (such as an airline flight, hotel stay or train ticket) on both platforms. The result? Using its new cloud-based reservation system, Company A managed only one-third as many bookable items as Company B, and its operating expenses were 3x higher.

In other words, the cost per bookable item on their newly fashioned architecture was 9x that of the IBM Z platform: 3x the expenses spread across one-third the items. It won’t require any mental gymnastics to figure out which system they decided to keep.
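The 9x figure follows directly from the two ratios. A one-line check, using the normalized ratios from the comparison (variable names are illustrative):

```python
# Cost per bookable item, normalized to Company B's IBM Z system (= 1.0).
cost_ratio = 3.0         # Company A's operating expenses vs. Company B's
items_ratio = 1.0 / 3.0  # Company A's bookable-item count vs. Company B's

# Cost per item scales with expenses and inversely with item count.
a_cost_per_item = cost_ratio / items_ratio
print(round(a_cost_per_item))  # 9
```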

A second example illustrates that cost is not the only factor to consider.

A financial industry customer developed a new cloud-native system for real-time transaction processing, but once deployed into production, the system struggled to handle larger workload volumes, maintain reliability and meet other SLAs. In the end, the cloud-based system was replaced by a z/TPF system to process that workload.

The key factors in both customer examples are the fundamental differences in architecture. The winning solution was z/TPF, with applications and data co-located on an OS specifically designed for high-volume, write-intensive workloads, all leveraging IBM Z hardware. The cloud-based systems, with their workloads split over dozens of server clusters running general-purpose operating systems on commodity hardware, created several issues. In one case, the result was nearly an order of magnitude higher cost; in the other, latency, performance and database contention problems, all factors that impede workload scaling, made it nearly impossible to maintain SLAs.

Developing a hybrid model

The point is not to run all your workloads on the IBM Z platform, just as you shouldn’t run all workloads in the cloud. A hybrid model, making efficient use of the strengths of all available systems, is the key to achieving the lowest possible cost per transaction and maintaining SLAs. Your aim should be to progressively modernize key assets while connecting your components through open standards, leveraging micro and macro services and building out an event-driven hybrid cloud architecture.

In a future post, I’ll discuss the distribution of workloads across the hybrid cloud, and dig into what, where and why for designing an architecture best positioned for many more years of success. I also hope to tackle the remaining misconceptions, explain how to modernize existing z/TPF assets and how to leverage modern DevOps principles in your z/TPF development process.