
Dinosaurs and the cloud: The mainframe’s role


Dinosaurs are not usually something you associate with cloud computing, but bear with me while I use this analogy. Around 20 years ago I had my summer internship in the internal IT department at IBM in Norway, and I can still remember a poster that hung on one of the office doors. It said “The dinosaurs are still alive” and showed a picture of one of the new mainframes. This was around the time when mainframes transitioned to CMOS and later to 64-bit computing, so the poster reflected the rejuvenation of the whole computing platform.

Looking back at that poster, I feel the message is still extremely relevant for mainframes today. Mainframes are not an isolated relic of traditional IT; they fit naturally into a cloud computing mindset. In this article, I will try to explain the reasons behind this statement.

Cloud computing is defined by characteristics that the mainframe and its ecosystem have long embodied: elasticity, broad network access and virtualization.

The mainframe is probably the best platform for virtualization: Virtualization was first introduced on the mainframe as far back as 1972 with VM/370 (or earlier, if you count its time-sharing predecessors). This experience is unique in the IT business and provides very solid, proven ground for virtualization. Currently, the zEnterprise is capable of running a multitude of operating systems inside its ecosystem, ranging from traditional mainframe operating systems such as z/OS to open systems such as Linux. Using the zBX extensions, it can even run x86 versions of Linux and Windows Server 2008. This makes it a very good platform for running your cloud workloads. Because of the maturity of the whole platform and the tight integration between platform, hypervisor and operating systems, the mainframe traditionally achieves much higher resource utilization than other platforms.

The mainframe has broad networking capabilities: The mainframe supports all your traditional networking protocols and technologies, as well as Software Defined Networking (SDN) within the chassis. This makes it a very good platform on which to deploy any network-oriented solution.

The mainframe has elasticity and efficiency: Inside one mainframe you can add multiple mainframe processors (general-purpose or specialized cores), multiple x86 or POWER7 processors using the zBX, or even appliances such as the DataPower XI50z. All these resources can be virtualized into a single system image and managed as one system, providing both horizontal elasticity (adding more processors and blades) and vertical elasticity (growing the capacity of a single image).

The mainframe is very secure: The mainframe platform has multiple security mechanisms integrated into every level of the platform, and it has been protecting sensitive, business-critical data for decades. The mainframe has been certified at Common Criteria Evaluation Assurance Level 5 (EAL5), the highest security rating of any commercially available system.

In addition to the features I have already mentioned above, the mainframe has some other neat features that can be interesting in a cloud scenario.

A large portion of the world’s data resides on or passes through a mainframe on a regular basis. Any cloud service would probably connect to a mainframe or work on data that has been fed from one. As such, the mainframe becomes part of the cloud service either directly or indirectly; the cloud service provider might not even be aware that the data originates from a mainframe.

The mainframe supported Software Defined Environments (SDE) before this became a widespread concept. Inside the mainframe there has been a software-defined environment ever since virtualization came into play: all virtual machines (VMs) were given virtual (or software-defined, if you like) resources such as disks, network, tape drives and even punch-card readers. I still remember working inside a VM on a mainframe environment and requesting that more virtual disks be created and made available to my applications on demand.
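To give a feel for how that on-demand pattern looks in today’s tooling, here is a minimal sketch using the openstacksdk Python library to create a new virtual disk (a block-storage volume) and attach it to a running guest. This is illustrative only: the cloud entry “zvm-cloud” and server name “my-linux-guest” are hypothetical placeholders, and the sketch assumes an OpenStack endpoint is already configured in clouds.yaml.

```python
import openstack

# Connect using credentials from clouds.yaml.
# "zvm-cloud" is a hypothetical cloud entry, not a real endpoint.
conn = openstack.connect(cloud="zvm-cloud")

# Ask the cloud for a new 10 GB virtual disk -- the modern analogue of
# requesting another virtual disk from the hypervisor on demand.
volume = conn.create_volume(size=10, name="scratch-disk", wait=True)

# Attach it to an already-running Linux guest (hypothetical name).
server = conn.get_server("my-linux-guest")
conn.attach_volume(server, volume)
```

The mechanics have changed since my punch-card-reader days, but the idea is the same: resources are defined in software and handed to a running machine without anyone touching hardware.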

dinosaurs and the cloudThe mainframe is open and supports technologies like Linux and OpenStack. When talking about the mainframe being open I am not talking about open in the sense that open source is open (the source code to the mainframe is in no means available to anyone), but open in the sense that it supports more and more open and common standards. More and more open application programming interfaces (APIs) can be used to integrate towards the mainframe, and interfaces to support open cloud technologies like OpenStack are being made available. This is important for any cloud provider that is thinking about deploying a massive Linux installation in their data center. Imagine the power of deploying an IBM Enterprise Linux Server with full integration to an OpenStack environment; that is one serious and powerful cloud platform!
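As a sketch of what such an integration could look like from the operator’s side, the snippet below drives the standard OpenStack compute API through the openstacksdk Python library to boot a Linux guest. The cloud, image, flavor and network names are hypothetical placeholders, and nothing here is specific to IBM’s actual OpenStack enablement; it assumes only a configured OpenStack endpoint fronting the Linux environment.

```python
import openstack

# Hypothetical cloud entry in clouds.yaml pointing at the provider's
# OpenStack endpoint for its Linux-on-the-mainframe environment.
conn = openstack.connect(cloud="enterprise-linux-cloud")

# Look up a (hypothetical) mainframe Linux image, a flavor and a network.
image = conn.compute.find_image("sles-s390x")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("private")

# Boot the guest through the same API any other OpenStack cloud exposes.
server = conn.compute.create_server(
    name="linux-on-z-guest",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once the guest is up
```

The point is less the individual calls than the fact that the same tooling a provider already uses against its x86 clouds could drive Linux guests hosted on a mainframe.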

The mainframe is inherently highly available. The mainframe has traditionally had very strong reliability, availability and serviceability (RAS) features, ranging from traditional RAID for disks and multipathing for network connections all the way to RAIM (redundant memory) and execution-integrity capabilities that detect faulty processors on the fly and shift workloads to spare processors without any impact to the running workloads. All these features allow providers to achieve very high availability.

As the cloud market expands and matures toward enterprise workloads, I sincerely expect the mainframe to become a more and more integrated and central part of many cloud providers’ offerings. The mainframe turns 50 this year, and it will be around for another 50 years or more.
