The announcement last week of IBM’s new z14 mainframe server describes how the new hardware brings more capacity and flexibility to computing than ever before. The z14 is a technology behemoth with so many features that it takes a huge amount of documentation to detail the specifics.
Being the nerd that I am, I skipped over the glossy announcements and went straight for the details in the IBM z14 Redbook Technical Guide, which can be downloaded here, to get a better understanding of the massive scope and breadth of the new z14 mainframe capabilities. The z14 Technical Guide runs over 520 pages because there are so many new features, capacity improvements and configurable components. In addition to this general technical guide, several other IBM z14 Redbooks further detail all the capabilities, capacities and components of the new IBM z14 mainframe server.
The truth is that going from the z13 to the z14 improves every dimension of computing. The maximum number of processors in a single box rises from 141 in the z13 to 170 in the z14, addressable memory per LPAR grows from 10 TB to 32 TB, system bandwidth goes from 384 interconnects to 832, and the processor capacity index (PCI) of a single engine climbs from 1695 to 1832. The z14 is a nice improvement in every computing performance metric.
One of the first things to notice is how the new IBM z14 design focuses on the digital trust economy, with a dedicated cryptographic co-processor for each processor unit to keep both the data and the processing itself encrypted and safe without any performance compromise. From disk to processor to memory, encryption is applied throughout the computing environment to entire systems, applications and transactions, yielding protection designed to thwart hacking from any attack vector without creating performance issues.
The processor units are the industry’s fastest: a 14 nm microprocessor running at 5.2 GHz, each paired with an encryption co-processor. These advanced processors also extend the single instruction multiple data (SIMD) facility, introduced in the z13, with new z14 SIMD instructions that support decimal operations. These new SIMD capabilities improve all packed decimal processing, especially for COBOL applications, and for other languages such as Java, C++ and PL/I through new compilers optimized for the z14 environment. This could give a huge performance boost to financial company operations worldwide.
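To make the point concrete, here is a minimal Java sketch of the kind of fixed-point decimal arithmetic that the z14 vector decimal instructions are designed to accelerate. The acceleration itself happens transparently in an optimized compiler or JIT; the class and method names below are my own illustration, not an IBM API.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustrative only: typical financial batch arithmetic done in exact
// decimal, the workload category the z14 SIMD decimal facility targets.
public class DecimalInterest {

    // Accrue daily interest over a number of days, rounding to cents
    // each day the way a financial batch job would.
    static BigDecimal accrue(BigDecimal balance, BigDecimal dailyRate, int days) {
        for (int day = 0; day < days; day++) {
            BigDecimal interest = balance.multiply(dailyRate)
                                         .setScale(2, RoundingMode.HALF_EVEN);
            balance = balance.add(interest);
        }
        return balance;
    }

    public static void main(String[] args) {
        BigDecimal dailyRate = new BigDecimal("0.000137"); // roughly 5% APR / 365
        System.out.println("Year-end balance: "
                + accrue(new BigDecimal("100000.00"), dailyRate, 365));
    }
}
```

Nothing in the source changes to exploit the new instructions; recompiling with the z14-optimized compilers (or running on a z14-aware JIT) is what unlocks the packed decimal speedup.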
IBM announced that later this year a LinuxONE LPAR will be able to run container images such as Docker and other proprietary containers alongside the z/OS engines within z14 mainframes. Since most of these containers are not processor intensive, the container images can be processed essentially for free, with improved response time from working inside the mainframe complex. These new Docker container services are perfect for IoT and micro-services, avoiding trips across an outside network and achieving quick performance within the mainframe complex.
IBM has also added 50 new optimized z/OS microcode instructions for Java on the z14. The improved microcode runs Java 1.5 times faster on the z14 than on alternative x86 platforms. The new processors also provide pause-less Java garbage collection, improving one of the most expensive operations within Java JVM applications. IBM says this z14 feature alone may improve Java transaction workloads up to 10 times by reducing pause times within these intense Java application workloads.
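A simple way to see what pause-less collection buys you is to watch cumulative GC time from inside the JVM using the standard GarbageCollectorMXBean API. The sketch below churns short-lived garbage and reports how much time the collectors spent; run it with and without IBM's concurrent scavenge option (I believe the flag is `-Xgc:concurrentScavenge` on IBM Java 8 for z14, but treat that name as an assumption and check IBM's SDK documentation) and compare the numbers.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class GcPauseCheck {

    // Sum the cumulative collection time (milliseconds) reported by all
    // garbage collectors registered with the JVM.
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // returns -1 if undefined
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long before = totalGcMillis();

        // Allocate roughly 100 MB of short-lived objects to force minor GCs.
        List<byte[]> junk = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            junk.add(new byte[10_240]);
            if (junk.size() > 1_000) junk.subList(0, 500).clear();
        }

        System.out.println("GC time spent: " + (totalGcMillis() - before) + " ms");
    }
}
```

On any platform this shows where GC time goes; on a z14 with concurrent scavenge enabled, the stop-the-world portion of that time is what shrinks.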
The new z14 I/O capabilities are further enhanced by the new zHyperLink technology, which optimizes communications with storage, improves throughput and reduces I/O latency. zHyperLink does this by eliminating the cache disruptions, the un-dispatching of the running request, and the queuing delays of resuming I/O requests, providing an improved I/O data link of 8 gigabytes per second (GBps). The new IBM zHyperLink Express connection is a short-distance (150 meters) z14 I/O adapter designed for up to five times lower latency on read requests than previous High Performance FICON. Combined with the z14’s up to three times larger memory (up to 32 terabytes), three times faster input/output, and a significant reduction in SAN latency, all I/O operations are faster in the new z14 environment.
Memory capabilities are also improved in the new z14, with 32 TB of addressable LPAR memory compared to the z13’s 10 TB, and double the total memory available in the box, going from 96 TB to 192 TB. In addition, the new IBM Virtual Flash Memory (VFM) replaces the Flash Express features that were available on the IBM zEC12 and IBM z13. The large memory capabilities also provide 1 MB large pages that become pageable when Virtual Flash Memory is enabled and available.
Mainframe configurations are as flexible as ever, matching your workloads’ main processor capacity requirements as well as their specialty zIIP and IFL processor requirements. In addition, the number of I/O connections and the amount of memory are configurable, letting you match your workloads with the best balance of I/O and CPU processing power.
These powerful configuration options also make mainframe pricing as complex as ever, so pay special attention to capacity planning and workload growth estimates. Help your capacity planners as soon as possible with your DB2 storage and SQL statement processing growth estimates. They just might get you a new z14 for the Christmas year-end processing crunch, if you ask nicely.
Dave Beulke is a system strategist, application architect, and performance expert specializing in Big Data, data warehouses, and high performance internet business solutions. He is an IBM Gold Consultant, Information Champion, President of DAMA-NCR, former President of the International DB2 User Group, and a frequent speaker at national and international conferences. His architectures, designs, and performance tuning techniques help organizations better leverage their information assets, saving millions in processing costs. Follow him on Twitter or connect through LinkedIn.