
How IBM uses the mainframe to bring analytics back to the future – Part 1


To conduct the background research for this blog post, I hopped into my DeLorean time machine (what, you don’t have one?), set the time circuits for 1976 and visited the IT shops of a few major enterprises.

1976

Aside from getting the chance to relive the US Bicentennial celebration, I wanted to check out data centers before companies like Teradata and Oracle came on the scene, and before Unix had made serious inroads into major corporations. As expected, IBM’s System/370 architecture was handling virtually all enterprise processing needs at this time. I saw big mainframes conduct online transactions all day, every day, and produce analytic reports all night. It was a very cohesive, consolidated and controlled environment.

1991

Next, I moved forward 15 years to early 1991. Pausing briefly at a newsstand to check out the March issue of InfoWorld, in which editor Stewart Alsop famously predicted that the last mainframe would be unplugged on March 15, 1996, I reprised my data center tour.

This time I encountered a completely different scene. Mainframes were still handling the bulk of the world’s transactional operations, but now many of them were surrounded by a variety of mid-tier and personal computing systems copying their data for offline analysis. In this era, I could see how one might jump to the conclusion that the mainframe might indeed eventually be overrun by this increasingly invasive species of computers.

2007

Having enough fuel in the flux capacitor for one more jump before returning to present time, I stopped by an IBM customer conference in the fall of 2007—being careful to avoid contact with my 2007 self so I wouldn’t unintentionally erase my own future.

Not only had mainframes survived their predicted 1996 demise, they were thriving despite the fact that the number of real and virtual distributed systems surrounding them had grown by orders of magnitude. IBM was no different from any other large company: it too had surrounded its mainframes with large numbers of distributed systems, and it was facing the same challenges and costs in maintaining them.

I came back to this specific point in 2007 because it was the first time I heard of IBM’s ambitious plan to regain control of its data centers. The session I attended introduced our customers to Project Big Green, a plan to consolidate about 3,900 stand-alone, distributed servers onto roughly 30 System z mainframes running Linux. I remember the session catching my attention because the value proposition to IBM was significant.

If I’d had enough fuel for one more jump before returning, I would have come back to this conference a few years later to relive a very interesting talk by IBMer Larry Yarter, who discussed an outgrowth of Project Big Green called IBM Blue Insight. The goal of Blue Insight was to shift all of IBM’s internal analytics processing from departmental servers to a centralized, software-as-a-service (SaaS) private cloud model.

Present Day

Having returned from my research runs, I phoned Larry to find out how things had progressed over the three-plus years since I heard him talk about Blue Insight at that conference. The results are nothing short of spectacular.

Larry is now an IBM Senior Technical Staff Member and the Chief Architect at what has come to be known as the Business Analytics Center of Competence (BACC). The environment that Larry described to me had the consolidated feel of 1976 that IT organizations loved, but with the freedom and flexibility demanded by business units in 2013.

Back in 2009, when Blue Insight was initiated, IBM was supporting some 175,000 users on stand-alone clients and hundreds of highly underutilized Brio/Hyperion servers. The 2007 acquisition of Hyperion (and with it the Brio software) by Oracle, plus IBM’s own acquisition of Cognos that same year, meant that the company would be undergoing an inevitable and significant software shift. But rather than simply converting everything from Brio to Cognos on the same inefficient server base, IBM decided to also transform its analytics capabilities into a centralized service based on a private cloud model, deployed on Linux on System z.

Now, in 2013, this model has been operational for several years.

Has it been a success? Well, you’re just going to have to stay tuned for part 2, in which I’ll share what I learned from Larry. Trust me, it’s well worth waiting for!
