
The hybrid cloud – from ‘big bang’ to ‘fit for purpose’



The Mars 2020 Rover, which recently landed on Mars, is equipped with two Power processors. The versatile Power platform is also one of the building blocks of an effective hybrid cloud solution, as are, for example, IBM’s FlashCore Module SSDs with hardware-based encryption and compression.

Hybrid cloud has been a buzzword in enterprise IT architecture for some time. The aim is to allow workloads to move freely between the cloud and on-premises installations based on data classifications and other considerations, such as cost. But how do you manage this correctly? In this interview, IBM experts Tonny Bastiaans, Barend Baarssen and Robbin Koolaard discuss the intricacies of hybrid cloud: what it involves and what it breaks down into.


Step towards microservices and DevOps

Under the hybrid cloud banner, IBM provides IT infrastructure solutions (servers and data storage) for critical workloads. But hybrid cloud is more than just an infrastructure play, states Tonny Bastiaans from IBM. “Companies that only modernize their IT environment while the rest of the organization remains the same will not get very far.” The core of the hybrid cloud story is that it represents an organization-wide change. “Historically, that has always been the case. We have slowly moved away from mainframes, which often only ran one application, towards virtual machines. Then we moved to a service-oriented architecture, and agile working became the buzzword in the business. Now, companies are taking the next step with microservices and DevOps.”

Tonny wants to clarify that infrastructure is an enabler. “In the same way that an environment with OpenShift and other tooling enables you to work in a new way.” In his conversations with customers, these aspects come up frequently. “I hear from many organizations that they want to move their IT to the cloud, and I always ask what they really want: to move to the cloud, or to work in a cloud-like way? By the latter, I mean being able to work flexibly, scale resources up or down, and only pay for what you use. When the Dutch suddenly had to work from home during the corona pandemic outbreak last year, it put a lot of pressure on IT departments. The trick was to be flexible. Now you see many organizations embracing the DevOps way of working with cloud-native applications.”


No dependency on cloud service providers

Because of legislation and regulations, and sometimes cost concerns, organizations are moving workloads back to an on-premises environment. “In many cases, it is more attractive to bring compute to the data than to move data to compute environments in the cloud, where there is also the risk of ‘Hotel California’ scenarios: you can check in any time you like, but you can never leave. You don’t want to be locked into a particular supplier or environment. However, if you work on-premises, you want the same way of working as in the cloud, with the same flexibility. That is where the OpenShift layer, together with the capabilities of our servers and storage, helps.”

The IBM Power hardware platform enables flexible working and is therefore an important part of a hybrid cloud solution. Barend Baarssen from IBM: “It allows you to scale up and down quickly and also work very securely on separate machines for DevOps. You build your virtual environments with compute, memory and storage. Automation is easy, with Ansible playbooks that you can roll out to your Power environment, for example. And if something fails, you can restart your virtual machine on the fly without affecting the rest of the Power environment.” Naturally, virtual environments can also be set up on other platforms, Intel-based ones for example. “But,” says Barend, “that is not as protected as a virtual environment on IBM Power. In the latter environment, you will not be bothered by ‘noisy neighbors.’ You can also scale up quickly with additional processors and memory. It is also possible to allocate dedicated resources to workloads.”


Cost-efficient with ISVs

That last capability is particularly interesting for companies working with heavy databases such as Oracle. “If you have an environment that can access everything, an ISV might charge you for all the available cores, whereas Power allows you to fence your compute environment off at one core or even a fraction of one. Of course, hardware like IBM Power comes at a cost, but an ISV can be an even greater expense. If you bring this under control, it will be cheaper in most cases.” In addition, Power is known for its excellent I/O performance. “We develop the processor ourselves. The result is a chip that addresses memory efficiently and can process and store data quickly. A new development is the rise of in-memory databases such as SAP HANA. It is therefore important that you can move data in and out of RAM quickly and that your server has a large internal memory.”
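To make the licensing point concrete, here is a minimal sketch of the arithmetic. The per-core price and core counts are hypothetical, purely for illustration; real ISV terms vary, and whether a capped entitlement actually reduces the licensed core count depends on the vendor’s contract.

```python
# Illustrative only: hypothetical list price and core counts, not real ISV terms.
PRICE_PER_CORE = 47_500  # assumed per-core license price in euros

def license_cost(licensed_cores: float, price_per_core: float = PRICE_PER_CORE) -> float:
    """Return the license cost for the cores the ISV counts."""
    return licensed_cores * price_per_core

# Unrestricted environment: the ISV may count every core the database *could* reach.
unrestricted = license_cost(licensed_cores=32)

# Power micro-partitioning: the workload is capped to a fractional core
# entitlement, so only that entitlement needs licensing (contract permitting).
capped = license_cost(licensed_cores=2.5)

print(f"unrestricted: {unrestricted:,.0f}")
print(f"capped:       {capped:,.0f}")
```

The absolute figures are made up; the point is the ratio: shrinking the licensed footprint from 32 cores to a 2.5-core cap cuts the ISV bill by more than a factor of ten, which can dwarf the hardware cost difference.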

With Power – which is an important component in the Mars 2020 Rover mission mentioned above – IBM is deliberately stepping away from the “rat race” of ever-higher clock speeds with each new generation. “Throughout the hybrid cloud story, it is important to be able to move data quickly. You don’t only need fast compute, but also fast storage,” says Tonny. But just as important is OpenShift as an intermediate layer, he notes. “People sometimes ask us why we chose Red Hat. Aside from the fact that we have contributed to the open-source world for a long time, OpenShift has a lot to do with it. In fact, it is the catch-all for organizations that want to combine on-prem work with the cloud. As long as you make sure that your applications are ready for OpenShift, you can do that in any environment, including Intel and mainframe environments and IBM Cloud, Azure, Google Cloud and AWS. Everything revolves around this layer.”



Along with compute, storage is an important part of hybrid cloud. Robbin Koolaard (IBM) adds to his colleagues’ points: “It’s an inseparable part. With our IBM Spectrum Virtualize software stack, you can manage and operate all our FlashSystem storage solutions – from the smallest to the largest – in the same way, and even include the public cloud. We now support IBM Cloud and AWS. Azure is expected to be added in the fourth quarter.”

“The high-end models, IBM FlashSystem 7200 and FlashSystem 9200, can even take snapshots and retrieve them directly from the cloud,” says Robbin. “The differences can also be found in less obvious things: for example, our systems use our own FlashCore Module SSDs. These offer hardware-based encryption but also compression. This enables you to achieve a compression ratio of up to 5:1 for classic databases. By controlling the hardware on the SSD, the processor of the storage system can focus on delivering even better performance and other important tasks.”
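As a back-of-the-envelope illustration of what a 5:1 ratio means for capacity planning (the ratio itself is workload-dependent, and the helper names here are hypothetical):

```python
# Rough sketch of what a compression ratio means for usable capacity.
# The 5:1 figure for classic databases comes from the article above;
# actual ratios depend entirely on the data being stored.

def effective_capacity_tb(raw_tb: float, ratio: float) -> float:
    """Logical data that fits on raw_tb of flash at the given compression ratio."""
    return raw_tb * ratio

def raw_needed_tb(logical_tb: float, ratio: float) -> float:
    """Physical flash needed to hold logical_tb of data at the given ratio."""
    return logical_tb / ratio

print(effective_capacity_tb(100, 5.0))  # 100 TB raw can hold up to 500 TB logical
print(raw_needed_tb(500, 5.0))          # 500 TB of data needs about 100 TB raw
```

Because the FlashCore Modules compress in hardware, this capacity gain comes without spending storage-controller CPU cycles on compression, which is the point Robbin makes above.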


Fit for purpose

What are the main pitfalls of hybrid cloud initiatives? “I think the biggest mistake is that companies see the cloud as a utopia, where everything has to happen in a big bang. This leads to hasty decisions and sub-optimization. Fortunately, that idea is on its way out. Companies are increasingly recognizing that the cloud – especially the public cloud – is not always the fastest or most cost-effective environment. Many companies are now switching back to a ‘fit for purpose’ approach: for example, running large databases on-premises with the application in the cloud, because it is faster, safer and cheaper. The database load is often easy to predict, so running it in your own data center is often the smarter choice because of the predictable costs. By running the app in the cloud, you benefit from the scalability that comes with it,” states Tonny.

It is best not to move critical workloads back into the on-premises environment in a big bang, either. “This can only be done one piece at a time, because you are often bound by contracts and clauses. What’s more, you have to think carefully about how to run the desired infrastructure efficiently and reliably,” says Barend. The guiding principle is flexibility. “We offer an on-prem solution, but can also connect well with Google and Azure, for example. It certainly doesn’t all have to be IBM. That’s what we want to make clear at the Tech Talk: we want to offer you a way to make your IT flexible, whatever your current situation. And of course, we hope that people will ask questions, so we can discuss the challenges together.”

Visit the webinar to learn more about hybrid cloud, Power, storage and how to apply them.

