Co-locate Applications and Consolidate Infrastructure

Minimize latency, optimize infrastructure efficiency and reduce hardware and software costs.

Overview

Maximize the strengths of the IBM Z and LinuxONE platforms: deliver performance through co-location, and save costs through optimized infrastructure efficiency and consolidation.

Deploy cloud on IBM Z and LinuxONE by adopting Red Hat OpenShift Container Platform. Co-locating cloud-native applications close to mainframe data and applications, for example those running on IBM z/OS, delivers low response times and helps meet enterprise SLAs, while cloud-native development and tooling, together with a growing ISV ecosystem, brings consistency across the enterprise.
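In practice, co-location comes down to scheduling workloads onto the s390x worker nodes that sit next to the data. Below is a minimal sketch using the Kubernetes Python client, assuming a cluster with IBM Z worker nodes and a configured kubeconfig; the application name, namespace and image are hypothetical placeholders.

    from kubernetes import client, config

    config.load_kube_config()  # assumes an oc/kubectl login context is available

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="account-service"),  # hypothetical app
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "account-service"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "account-service"}),
                spec=client.V1PodSpec(
                    # Pin pods to IBM Z / LinuxONE workers so they run
                    # next to the z/OS data and applications they consume.
                    node_selector={"kubernetes.io/arch": "s390x"},
                    containers=[
                        client.V1Container(
                            name="account-service",
                            image="registry.example.com/account-service:1.0",  # hypothetical
                        )
                    ],
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="banking", body=deployment)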
Consolidate infrastructure to optimize hardware and software costs. Data management software is typically licensed by the total number of Linux cores it runs on, and on sprawling x86 server farms those license costs, along with energy use and greenhouse gas emissions, continue to increase rapidly.
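Because license charges scale with core count, the consolidation arithmetic is straightforward. A back-of-the-envelope sketch using the comparable-throughput core counts from footnote 2, with a purely hypothetical per-core license price:

    # Per-core licensing under consolidation. Core counts are the
    # comparable-throughput configurations from footnote 2; the
    # per-core license price is a hypothetical placeholder.
    PRICE_PER_CORE = 10_000   # hypothetical annual license cost per core, USD

    x86_cores = 136           # x86 cluster cores (~14,325 RPS, footnote 2)
    ifl_cores = 8             # LinuxONE IFLs (~15,487 RPS, footnote 2)

    x86_spend = x86_cores * PRICE_PER_CORE
    ifl_spend = ifl_cores * PRICE_PER_CORE

    print(f"x86:      {x86_cores:>3} cores -> ${x86_spend:,}/year")
    print(f"LinuxONE: {ifl_cores:>3} cores -> ${ifl_spend:,}/year")
    print(f"License spend reduced by {1 - ifl_spend / x86_spend:.0%}")  # ~94%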


The IBM Z and LinuxONE platforms can address these challenges while providing the industry’s leading performance, high availability and resiliency.

Achieve

4.7x
lower latency with OpenShift on IBM Z vs a compared x86 platform1
Up to 48%
lower TCO2
Journey

Adopt Red Hat OpenShift container platform

Use Red Hat OpenShift for IBM Z and LinuxONE to accelerate digital transformation, co-locate cloud-native applications with system of record data and reduce costs.
Document

Red Hat OpenShift Container Platform for IBM Z and IBM LinuxONE...

This reference architecture showcases a prescriptive, pre-validated private cloud solution from Red Hat® and IBM that provides IT as a Service (ITaaS) and the rapid provisioning and lifecycle management of containerized apps, virtual machines (VMs), and associated application and infrastructure services for cloud users such as software developers, data scientists and solution architects.

Blog

Accelerate application development with Red Hat and IBM Z

Read how Red Hat OpenShift 4.8 and an ecosystem of software offerings accelerate your modernization journey.

Blog

Meet the Family – Red Hat and IBM software on OpenShift for...

In this community blog, we dig deeper into the rich portfolio of Red Hat and IBM software now available for IBM Z and IBM LinuxONE.

Video

Atruvia AG (formerly Fiducia & GAD) discusses Red Hat OpenShift...

Hear a client explain the value of deploying Red Hat OpenShift on IBM Z.

Footnotes

  • “Run an OLTP workload on OpenShift Container Platform 4.4 with up to 4.7x lower latency when co-located with its database on z15 T01 using a HiperSockets connection, versus a compared x86 platform using a 10 Gb TCP/IP connection to the same database.” This statement is based on an IBM internal study designed to replicate banking OLTP workload usage in the marketplace, deployed on OpenShift Container Platform (OCP) 4.4.12 on z15 T01 using z/VM versus a compared x86 platform using KVM, both accessing the same PostgreSQL 12 database running in a z15 T01 LPAR. Three OLTP workload instances were run in parallel, driven remotely from JMeter 5.2.1 with 16 parallel threads. Results may vary. z15 T01 configuration: the PostgreSQL database ran in an LPAR with 12 dedicated IFLs, 256 GB memory, 1 TB FlashSystem 900 storage and RHEL 7.7 (SMT mode). The OCP master and worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated IFLs, 448 GB memory, DASD storage and a HiperSockets connection to the PostgreSQL LPAR. x86 configuration: the OCP master and worker nodes ran on KVM on RHEL 8.2 on 30 Skylake Intel® Xeon® Gold CPUs @ 2.30 GHz with Hyper-Threading turned on, 448 GB memory, RAID5 local SSD storage and a 10 Gbit Ethernet connection to the PostgreSQL LPAR.
  • This is an IBM internal study designed to replicate banking OLTP workload usage in the marketplace on an IBM LinuxONE III T02 using eight IFLs across two LPARs. Seven IFLs and a total of 640 GB memory were allocated to one LPAR for three OpenShift masters and four worker nodes. One IFL and a total of 128 GB memory were allocated to the second LPAR for the OpenShift load balancer. IBM Storage DS8886 was used to create eight 250 GB DASD minidisks for each of the eight z/VM guests running in the LPARs. The OpenShift cluster version 4.2.20, using Red Hat Enterprise Linux CoreOS (RHCOS) for IBM Z, ran across seven z/VM guests, and the remaining eighth z/VM guest ran the OpenShift load balancer. SMT was enabled across all IFLs. The x86 configuration consisted of six servers running KVM with 15 guests (three masters and twelve workers) for the OpenShift cluster version 4.3.5 with RHCOS, and a seventh server was used for the load balancer on RHEL 7.6. For x86 storage, each guest operating system was configured with 100 GB of virtual disk. Each guest had access to all vCPUs of the KVM server on which it was running. Compared x86 models for the cluster were all 2-socket servers containing a mix of 6-core, 8-core, 12-core and 16-core Haswell, Skylake and Ivy Bridge x86 processors, using a total of 136 cores with a total of 2,304 GB memory. The load balancer was a 2-socket 8-core server with a total of 384 GB memory. Both environments used JMeter to drive maximum throughput against two OLTP workload instances and were sized to deliver comparable results (15,487 responses per second (RPS) with IBM Z and 14,325 RPS with x86). The results were obtained under laboratory conditions, not in an actual customer environment. IBM’s internal workload studies are not benchmark applications. Prices, where applicable, are based on U.S. prices as of 02/12/2020 from our website, and x86 hardware pricing is based on IBM analysis of U.S. prices as of 03/01/2020 from IDC. Price comparison is based on a three-year total cost of ownership including HW, SW, networking, floor space, people, energy/cooling costs and three years of service & support.