Executive Corner: Interview with Technical Leader for Cloud, John Easton (Part 2 of 3)

Neil Weightman interviewed John Easton recently. The discussion concentrated on the future of IT departments, how organizations need to change to embrace cloud, the legacy of grid computing, and how customers are currently engaging with cloud.

(Read Part 1 and Part 3 of this interview.)

Neil: How much would you expect an IT department to save with a move to cloud?

John: This is where you have to come back to the cultural challenges that organizations face. If you run services in the cloud the way that you have run your services on your own premises, it will cost you more. For example, if you are buying a cloud service that charges you by the hour and you keep that cloud service turned on, you will get to a point in time where it would have been better and cheaper for you to have bought it yourself.
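John's break-even point can be illustrated with a rough calculation; the prices below are purely hypothetical and only meant to show the shape of the trade-off:

```python
# Rough break-even sketch for John's point: an always-on hourly cloud
# service eventually costs more than buying equivalent hardware outright.
# All figures are hypothetical, for illustration only.

def break_even_hours(purchase_cost, hourly_cloud_rate):
    """Hours of continuous use after which buying is cheaper than renting."""
    return purchase_cost / hourly_cloud_rate

# Hypothetical numbers: a $10,000 server vs. a $0.50/hour cloud instance.
hours = break_even_hours(10_000, 0.50)
print(f"Break-even after {hours:.0f} hours (~{hours / 24 / 365:.1f} years)")
# -> Break-even after 20000 hours (~2.3 years)
```

The sketch ignores power, staff, and depreciation, but it captures why "turn it off when you don't need it" is the lever that makes the hourly model pay off.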

One of the interesting challenges that we need to get clients thinking about, if they want to start realizing some of these cost savings, is the idea of turning it off when you don’t need it. Some systems can’t be turned off, so what those clients get by moving to cloud is less of the administrative headache of managing and running those systems. We also see clients wanting us to run their services for them so they can get access to some of the best-practice experience that we can offer and that they cannot replicate internally for whatever reason. Also, they are doing it in a way that is potentially more accessible to them than a full-scale outsourcing of their IT department; they can do it on a workload-by-workload basis.

We’ve been talking to clients who, for a variety of reasons, have not been able to realize a lot of what the technology is actually capable of today for their existing IT. So, one of the reasons why clients are looking at IBM today to provide them with a cloud service is to allow them to exploit the technology more fully and deliver higher qualities of service, but that’s another organizational and cultural change.

The sorts of marketing figures that we typically put out for offerings like IBM SmartCloud Enterprise+ suggest savings in the 20-30% range for a given level of service.

The way that systems were “traditionally” sold was very much on an “everything’s included” basis. So, if we sold a client a server, some storage, and some software, and that platform was highly available or had disaster recovery capability, then every application that ran on that platform got high availability or disaster recovery by virtue of where they were running, whether they needed those features or not.

Where the interesting shift comes with cloud is that you move to a pick-list of options, where, for each service, you can choose whether you want storage, networking, or backup; and the more options you choose, the more it costs. It’s a consumption-based model, which we’re all familiar with from mobile phones: you get so many texts, so many minutes, and so much data per month, and when you go over that the company charges you extra. We’re happy with that for mobile phones, but we’re much less happy when it comes to IT systems.

There are some interesting opportunities to save considerable amounts of money by, say, only using high availability for those applications that really need it, or backing up on a schedule suited to a workload that doesn’t change very much rather than one that changes all the time. Get it right and you can save a considerable amount of money. Get it wrong and either you tick [select] all the boxes because you’re not sure, in which case you’re potentially paying over the odds [paying more than you need to] for service and capabilities you don’t need, or you don’t tick enough boxes and you don’t choose high availability for a mission-critical application, and, when push comes to shove, it fails and you lose out that way.
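The pick-list model John describes is essentially a sum over selected options. The sketch below illustrates the idea; the option names and prices are invented for illustration, not taken from any IBM price list:

```python
# Sketch of the pick-list, consumption-based pricing John describes:
# each optional capability adds to the monthly cost. Option names and
# prices are invented for illustration.

MONTHLY_OPTION_PRICES = {
    "base_vm": 50.0,
    "extra_storage": 20.0,
    "backup": 15.0,
    "high_availability": 40.0,
    "disaster_recovery": 60.0,
}

def monthly_cost(selected_options):
    """Total monthly cost for the chosen options."""
    return sum(MONTHLY_OPTION_PRICES[opt] for opt in selected_options)

# A mission-critical workload ticks more boxes than a test environment.
critical = monthly_cost(["base_vm", "backup", "high_availability"])
test_env = monthly_cost(["base_vm"])
print(critical, test_env)  # -> 105.0 50.0
```

The point of the model is visible in the two totals: ticking every box "to be safe" roughly doubles the bill, which is exactly the over-paying trap John warns about.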

When clients move to this selection box approach, the question is whether they can cope with the new way in which they’re being asked to consume IT. A lot of clients have not really articulated exactly what their requirements are. Clients will come to us and say “Well you’re never going to move to these pay-as-you-go models because that will destroy your business” (the press says that to us a lot). But, if you turn the tables round and ask – “Well, if we give you pay-as-you-go, can you actually consume this?” – the number of clients that can consume a pay-as-you-go model today is very small. We did work with a service provider who wanted to buy things on a per-virtual-machine basis and their procurement department said “We need a serial number to go into our spreadsheet,” so they didn’t have a procurement process that could buy VMs. Suppliers and clients both have to learn about this new environment at the same time.

Neil: I noticed that grid computing has appeared on your agenda previously.

John: I did grid from about 2001 to 2006 and a lot of what we learned we have to relearn again for cloud, and many of the things that clients wanted to do then are the sorts of things they want to do now with cloud.

Neil: So was grid really a precursor to cloud?

John: Yes and no. I think that a lot of the things that we could do back in, say, 2005 were the sorts of things that clients want to do now, but it was a bit before its time and clients weren’t quite ready for some of the implications that grid would bring – things like automation, which we did a lot of in the grid days. Clients seemed reluctant at that stage to let the computers take control. Now, I think that the economic imperatives under which they have to operate mean that having computers take over a lot of it can make things easier and cheaper for clients to do what they need to do, such as the provisioning of servers. Then we were mostly talking about physical servers rather than virtual servers, but the way that you do that is basically the same.

Neil: When you were involved in grid, were you thinking about what was going to be coming in cloud?

John: Looking back on it, yes; at the time, no. We did some very interesting things in the grid days around helping clients better understand the economics of their environment, understanding where their costs really go, and allowing them to make scheduling decisions or workload placement decisions based on cost metrics, which is a cloudy thing to be doing. We did a lot of work on automation where the grid would determine where a service level was about to be breached and would automatically provision new resources to avoid that. That sort of automation and direct feedback into the business are things that people are taking for granted now that clouds can do; but when we were doing it five or six years ago it was a little bit too much like science fiction for them.
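The grid-era automation John describes follows a simple, recognizable pattern: watch a service-level metric and add capacity before the target is breached. The sketch below is a hypothetical illustration of that pattern; the metric, threshold, and provisioning call are placeholders, not any real grid or cloud API:

```python
# Threshold-based automation in the spirit John describes: watch a
# service-level metric and provision new resources before the target
# is breached. The metric, threshold, and provisioning step are all
# hypothetical placeholders.

def provision_extra_server(servers):
    """Stand-in for a real provisioning call (physical or virtual)."""
    return servers + 1

def autoscale(response_time_ms, servers, sla_target_ms=200, headroom=0.8):
    """Add capacity when response time nears the SLA target."""
    if response_time_ms > sla_target_ms * headroom:  # breach is imminent
        servers = provision_extra_server(servers)
    return servers

print(autoscale(150, servers=4))  # under the 160 ms trigger: stays at 4
print(autoscale(170, servers=4))  # over the trigger: scales to 5
```

Acting on the headroom margin rather than the SLA target itself is what makes the loop preventive, which is the "about to be breached" behavior John describes.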

Some people will tell you there’s nothing new in IT. Some people see cloud as just a bureau service on a mainframe with a web front end – no different from what we were doing in the 1970s. For some clients, virtualization was invented five years ago on x86; they have forgotten or don’t know about the 30-plus years of experience that we have on the mainframe, so for them this stuff is all quite new and interesting. Every so often we seem to go through these cycles where we have to relearn many things that we worked out how to solve a few years before. So cloud is basically reusing a lot of what we learned from grid. Going back further, a lot of the problems we were asked to solve in the grid days had been solved 5-10 years before that through things like the Distributed Computing Environment (DCE); problems like distributed security were solved for DCE, solved again for grid, and are being solved again for cloud. Not that much is new in the world of IT.

Neil: So are we going to go back to another client-based model next, then?

John: I don’t know. There are pros and cons to any of these business models. The key thing, and one of the things that comes out of the centennial this year, is that you have to be able to reinvent yourself as an organization as market demands change and as client demands change.

Neil: Are you thinking now about what’s going to come after cloud?

John: I think that in a few years we won’t be talking about cloud computing because it will just be the way that we do computing. What comes after cloud? I don’t know yet. We’ll wait and see. A lot of it will come down to getting the commercials right – cloud is disruptive, cloud breaks lots of things for lots of people, and it throws some interesting spanners in the works of IT departments. That, coupled with the economic conditions that we’re operating under, means that we don’t yet know how disruptive this is going to be.

There are some interesting scenarios about extending these models out beyond IT and into business processes, into people, into organizations and companies. If we buy IT pay-as-you-go, why don’t we buy people pay-as-you-go? What does that do to the workforce and the company? You can play the same commercial construct through some very interesting spaces. They are uncomfortable spaces, because this is disruptive and it breaks lots of things.

About John Easton
John Easton is an IBM Distinguished Engineer and the Chief Technology Officer for IBM Systems and Technology Group in the UK and Ireland. He is internationally known for his work helping commercial clients exploit large-scale distributed computational infrastructures, particularly those using new and emerging technologies.

John is currently the UK Technical Leader for Cloud Computing, shaping IBM strategy in this key business area and helping clients with their implementation and adoption of cloud technologies.

He has worked with clients in a wide range of industries, predominantly banks and financial-markets firms, but he also has significant experience in the telecommunications and public sectors. Prior to his current role, John was the Chief Infrastructure Architect for a first-of-a-kind core banking infrastructure replacement program.

Earlier in his IBM career, John led IBM initiatives around hybrid systems, computational acceleration technologies, grid computing, and mission-critical systems. John is a member of the IBM Academy of Technology and a Senior Technologist in the IBM Innovation Network.
