My cousin's wife told me recently that they wanted to buy a house, but weren't sure they could justify such a huge investment in such an uncertain economy.
So I told her this: "Buy a few square feet. Take a few weeks, try them out and see what you think. If you like 'em, buy some more square feet. Then a whole room. Then a whole floor. Eventually, maybe, you'll have your dream house."
Of course, this was just a joke. But most of the time I think it's actually very good advice, because it's easy to follow and it fits so many different circumstances.
It certainly applies to physical fitness, where trying to accomplish too much, too soon will just burn you out, or put you in the hospital, rather than making you fitter. It also applies to marriage; getting engaged on the second date is generally not considered a love-life best practice.
Much the same kind of thinking applies rather naturally in IT. It shows up, for instance, in the form of kernel-based operating systems like Linux and all modern versions of Windows. The kernel represents a solid initial foundation that handles core tasks like memory management, to which any number of logical capabilities can be (and are) added to form the complete OS.
And these same sorts of ideas apply on a far larger scale in the context of cloud computing, I think. Because organizations can't know with perfect accuracy in advance how best to develop and utilize the cloud for their own particular circumstances, it's probably wise for them not to think of and develop the cloud as a monolithic entity -- a thing they have to roll out perfectly and completely on day one -- but rather as a foundation to which they can add new capabilities over time.
If I had to guess, in fact, I would say that it was exactly this reasoning that led IBM to give SmartCloud Foundation that name. It's meant as the initial "cloud kernel" on top of which you can subsequently add new layers and new capabilities that match your business requirements, just as Linux services are layered on top of the Linux kernel.
Why manage the cloud when the cloud can manage itself?
As it happens, I prefer certainty to doubt. So rather than just keep guessing about IBM's nomenclatural logic, I decided to ask an expert: Marco Sebastiani, Product Manager for IBM Service Delivery Manager and Cloud Solutions.
Sebastiani not only confirmed my interpretation, but ran with it in what I thought was a pretty cool direction.
"You can think of cloud management software almost as a set of nested Russian dolls," he said. "Practically any cloud is going to need to be able to do things like create virtual servers, and track key assets, automatically. That basic functionality corresponds to the innermost Russian doll. We address that with SmartCloud Foundation's entry cloud solution, which does provisioning and image lifecycle management. But then, once you have that set up, you can easily add more capabilities over time: bigger dolls. Every larger doll, in turn, leverages the capabilities of the smaller ones. And the cloud intelligently and automatically orchestrates all of its capabilities based on business policies."
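To make the doll metaphor concrete, here's a toy sketch, in plain Python, of what layered capabilities look like: each bigger doll wraps a smaller one and reuses its functions. The class names are mine, purely for illustration -- this isn't IBM's code or API.

```python
# A toy sketch of the "nested dolls" idea -- illustrative only,
# not IBM's actual architecture or interfaces.

class ProvisioningLayer:
    """Innermost doll: creates virtual servers from images."""
    def create_server(self, image):
        print(f"provisioning server from image '{image}'")
        return {"image": image, "state": "running"}

class MonitoringLayer:
    """A bigger doll: adds asset tracking on top of provisioning."""
    def __init__(self, inner):
        self.inner = inner            # leverages the smaller doll
        self.servers = []
    def create_server(self, image):
        server = self.inner.create_server(image)
        self.servers.append(server)   # now also tracked
        return server

cloud = MonitoringLayer(ProvisioningLayer())
cloud.create_server("rhel-base")
```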
So, to pursue this analogy, what's the next doll up from SmartCloud Foundation?
The answer, it seems, is IBM Service Delivery Manager -- a set of capabilities, delivered as a pre-integrated software stack, that can help organizations leverage the cloud to do even more, and create more value, in the areas where they need it most.
"The idea of this solution," said Sebastiani, "is to simplify, accelerate and automate service fulfillment. It minimizes the amount of manual work IT has to put into the cloud by making the cloud much more self-governing and self-optimizing. So suppose you're an employee who wants a new service in the cloud. Instead of having to submit a request to IT to create that service, you can just ask the cloud itself to do it. And, by orchestrating key tasks in logical ways, that's just what the cloud will then do. In this way, service management becomes much easier to pursue because services running in the cloud basically manage themselves, cradle-to-grave."
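As a rough illustration of that self-service flow, here's a minimal sketch in Python. The `fulfill` function and the request fields are hypothetical stand-ins, not Service Delivery Manager's actual interface.

```python
# A minimal sketch of self-service fulfillment, assuming a
# hypothetical request/orchestration interface (names invented).

def fulfill(request):
    """Orchestrate the steps IT would otherwise do by hand."""
    print(f"checking '{request['service']}' against business policy...")
    print(f"provisioning {request['instances']} instance(s)...")
    print("registering the service for monitoring and metering...")
    return "running"

# An employee asks the cloud directly, instead of filing a ticket.
status = fulfill({"service": "test-db", "instances": 2})
print(status)
```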
This fits Sebastiani's analogy rather well, too. Return to the idea of Russian dolls for a minute, remembering that the innermost cloud doll does provisioning and monitoring of virtual servers.
What IBM Service Delivery Manager does, in turn, is build bigger dolls on top of that, automatically leveraging those functions over time, in ways that fulfill business requirements, while also adding entirely new capabilities that add entirely new value.
End-to-end optimization of the complete service lifecycle
This solution, for instance, includes an intuitive portal interface available via any standard Web browser. This, in essence, is the front-end needed to create new services that will run in the cloud.
Using it, one can basically instruct the cloud: "This is what I'd like to do, this is when I'm going to need to be able to do it and this is how important it will be to the business."
Then the cloud basically does the rest -- ensuring that new virtual servers are created and eliminated on time and provisioned using the right server images, and that the entire process doesn't conflict with or compromise existing services unacceptably. (If that sounds like "automatic IT governance" to you, you're pretty close to the mark.)
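Here's a hedged sketch of what such a request might carry -- the what, the when and the how-important -- and how the cloud could act on it. Every field name and the `schedule` function are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical "what, when, how important" request; illustrative only.
request = {
    "what": "load-test environment",
    "start": datetime.now() + timedelta(days=1),
    "end": datetime.now() + timedelta(days=3),
    "business_priority": "high",   # drives placement and preemption
    "image": "loadtest-v2",
}

def schedule(req):
    # Create on time, tear down on time, and weight placement by
    # priority so the new service never crowds out something critical.
    print(f"provision '{req['image']}' at {req['start']:%Y-%m-%d %H:%M}")
    print(f"reclaim resources at {req['end']:%Y-%m-%d %H:%M}")
    print(f"placement weighted by priority: {req['business_priority']}")

schedule(request)
```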
To do that, of course, the cloud needs to be able to allocate critical resources fluidly and dynamically -- resources like processing power, memory, storage and even network bandwidth. This capability, too, is provided by Service Delivery Manager. It is continually aware of the available resources, discovers new resources when they are added to the general pool and doles out resources when and where they're needed. Then, when the demand level falls, the cloud pulls those resources back to the pool, or directly assigns them to another service that happens to need them at that point.
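A toy resource pool makes that allocate-and-reclaim cycle easy to picture. Again, this is illustrative Python, not how the product is actually implemented.

```python
# A toy resource pool sketching the allocate/reclaim cycle
# described above (illustrative only).

class ResourcePool:
    def __init__(self, cpus):
        self.free = cpus
    def discover(self, cpus):
        self.free += cpus      # new hardware joins the general pool
    def allocate(self, cpus):
        if cpus > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= cpus
        return cpus
    def release(self, cpus):
        self.free += cpus      # demand fell; return to the pool

pool = ResourcePool(cpus=32)
grant = pool.allocate(8)       # demand rises
pool.release(grant)            # demand falls
pool.discover(16)              # a new node is added
print(pool.free)               # 48
```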
It's also worth noting that all of that reallocation happens far more quickly and efficiently than it would if it were overseen by human talent. Because fewer resources are wasted, fewer are needed in the first place -- a major cost-saving opportunity for the organization, which can now get by on less total processing power, memory, storage and bandwidth than it would have thought possible before the cloud.
Real-time monitoring is another major capability. Service Delivery Manager continually tracks the health and performance of both virtual and physical resources -- a critically important function given how dynamic a cloud can be. So let's imagine that a given node (physical host) fails due to a toasted logic board; Service Delivery Manager will automatically notice and report the issue, leading to a quick and accurate failover of the associated service to a different, healthy node.
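The detect-and-failover step could be sketched like this, with made-up node names and a deliberately simplified health model:

```python
# Sketch of detect-and-failover: if a node's health check fails,
# move its service to a healthy node. Hypothetical names; the real
# product's mechanics are far more involved.

nodes = {"node-a": "failed", "node-b": "healthy"}
placement = {"billing-service": "node-a"}

for service, node in placement.items():
    if nodes[node] != "healthy":
        target = next(n for n, s in nodes.items() if s == "healthy")
        print(f"{node} failed; moving {service} to {target}")
        placement[service] = target
```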
Cost-tracking is yet another major strength of this solution. Given the intensely shared and interconnected nature of a cloud, where so much happens automatically, you might expect it would be difficult to figure out the costs created by different cloud services and systems -- and by the business teams and projects that use the cloud. And normally you'd be right.
"Service Delivery Manager changes all that," said Sebastiani. "It gives you granular insight into exactly how costs are trending in all those different ways -- in as much or as little detail as you need. So if you're using your cloud in a public model, it can tell you exactly how much to charge your customers for their particular cloud utilization, even though all customers are using the same hardware. Or if you have a strictly private cloud, it will tell you how much you should charge back to different groups. This way, it creates the kind of insight that over time can help, or encourage, those divisions to try to keep their costs down."
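And chargeback, at its simplest, is just metered usage times a rate. A toy example, with entirely made-up usage records and prices:

```python
# A toy chargeback calculation: meter each team's share of the
# common hardware and price it per unit (all figures invented).

usage = [                        # (team, cpu_hours, gb_hours)
    ("marketing", 120, 500),
    ("engineering", 900, 2000),
]
RATE_CPU_HOUR, RATE_GB_HOUR = 0.05, 0.01

for team, cpu, gb in usage:
    print(f"{team}: ${cpu * RATE_CPU_HOUR + gb * RATE_GB_HOUR:.2f}")
```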
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.