A reference model for moving your applications to cloud

This article sheds light on an important aspect of cloud computing: migrating enterprise-level workloads to a cloud environment without re-architecting or re-engineering the existing applications. I explore the migration methodology and outline a repeatable framework that can accelerate the move to cloud.

Overview

To start with, moving your business applications to cloud can be approached in three ways:

– Rewrite the application to exploit the cloud features
– Replace the application with a software as a service (SaaS) equivalent
– Relocate the application to the cloud environment

Of these three options, the last is the most prevalent and is the one we take up here.

At the outset, it is very important to remember that there should be clear and compelling business reasons to move to cloud; otherwise, the effort and costs incurred might not deliver the desired results. Not all workloads are candidates for cloud migration, whether technically, from a business standpoint, or both.

The Migration Framework

Moving workloads to a cloud environment follows a multi-step process that, at a high level, does not look very different from a traditional cross-platform application migration. Studying the various migration patterns, a common theme emerges, which we have tried to depict in figure 1 below.

Figure 1: The five-step methodology for cloud migration.

1.  Initial screening and analysis

Various methods, such as manual and automated data collection tools and requirements workshops with stakeholders, are employed to gather the data for analysis. Some of the important questions raised at this stage are:

  • Will the workload run in the target cloud environment? For example, compatible infrastructure, middleware and operating system image.
  • Will the target cloud environment satisfy performance, availability and other non-functional requirements (NFRs)?
  • Will the target cloud environment comply with applicable security, privacy and regulatory requirements?

The outcome of this phase is generally the identification of workloads (web serving, web applications, business intelligence and data warehousing, ERP and SCM, analytics) that are suitable to be hosted in the given cloud environment, along with the costs involved and the migration impact.
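To make the screening concrete, here is a minimal sketch (in Python, with hypothetical workload names and attribute fields) of how the answers to the three questions above might be rolled up into a go/no-go list:

```
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    os_supported_on_target: bool   # compatible infrastructure, middleware, OS image
    meets_nfrs: bool               # performance, availability and other NFRs
    compliant: bool                # security, privacy and regulatory requirements

def screen(workloads):
    """Keep only the workloads that answer 'yes' to all three screening questions."""
    return [w for w in workloads
            if w.os_supported_on_target and w.meets_nfrs and w.compliant]

candidates = screen([
    Workload("web-app-01", True, True, True),
    Workload("legacy-erp", False, True, True),
])
print([w.name for w in candidates])   # ['web-app-01']
```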

2.  Planning and design

This step involves detailed planning and design of the target environment (memory, processor, disk storage). It includes making key architectural decisions and sizing the hardware after studying the software stacks and utilization patterns. Criteria considered include application criticality, availability and downtime tolerance, and business constraints.

Choosing an appropriate target cloud delivery model involves many variables and weighs cost against risk.
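As a rough illustration of the sizing exercise, the sketch below derives target capacity from observed peak utilization plus a headroom factor. The 30 percent headroom and the example figures are assumptions for illustration, not a recommendation:

```
import math

def size_target(peak_cpu_cores, peak_memory_gb, peak_disk_gb, headroom=0.3):
    """Apply a headroom factor to observed peaks and round up to whole units."""
    grow = lambda value: math.ceil(value * (1 + headroom))
    return {"vcpus": grow(peak_cpu_cores),
            "memory_gb": grow(peak_memory_gb),
            "disk_gb": grow(peak_disk_gb)}

# Peak utilization figures for a hypothetical workload, gathered during analysis.
print(size_target(peak_cpu_cores=6.5, peak_memory_gb=20, peak_disk_gb=350))
# -> {'vcpus': 9, 'memory_gb': 26, 'disk_gb': 455}
```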

3. Implementing the migration

Depending upon the type of workload being considered and the type of target cloud environment chosen, there are essentially three scenarios for implementing the actual migration:

• Like-for-like migration:

This is the easiest and most cost-effective way to move workloads from a non-cloud environment to the cloud. It applies to like-for-like environments: migrations involving physical-to-virtual or virtual-to-virtual moves with the same operating system in the source and target. In this technique, everything running on the physical server (or in a virtual machine) is packaged into an image, added to the cloud catalog and instantiated as a virtual machine on the cloud platform. Technologies such as VMware Converter and PlateSpin are used to perform this kind of migration.
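The flow itself, capture, publish to catalog, instantiate, can be sketched as follows. Every function here is a hypothetical placeholder, since the actual capture and provisioning are performed by the migration tool and the cloud platform's own catalog:

```
def capture_image(source_host):
    """Placeholder for a P2V or V2V capture of the whole server into an image."""
    return f"image-of-{source_host}"

def add_to_catalog(catalog, image):
    """Placeholder for publishing the captured image to the cloud image catalog."""
    catalog.append(image)
    return image

def instantiate(image):
    """Placeholder for provisioning a virtual machine from the catalog entry."""
    return f"vm-from-{image}"

catalog = []
vm = instantiate(add_to_catalog(catalog, capture_image("app-server-01")))
print(vm)   # vm-from-image-of-app-server-01
```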

• Cross-platform migration:

A cross-platform (Solaris to AIX, Windows to Linux) migration is usually slow and expensive but enables more business agility than a like-for-like migration. This type of migration is generally complex because it involves changes in hardware type (SPARC to Intel), OS type and software versions when moving from source to target. It thus requires a thorough analysis of the application, its dependencies and the compatibility of the software in the target environment, followed by code remediation, porting and testing.
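Part of that analysis can be automated. The sketch below walks a source tree looking for a few OS-specific markers that would need remediation on the target; the marker list is a small, hypothetical sample rather than real porting tooling:

```
import os
import re

# A deliberately small, hypothetical sample of OS-specific markers.
PLATFORM_MARKERS = {
    r"\bC:\\": "Windows-style path",
    r"/proc/": "Linux-specific procfs access",
    r"\bwin32api\b": "Windows-only Python module",
}

def scan_for_porting_issues(root):
    """Walk a source tree and flag lines that may need remediation on the target OS."""
    findings = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith((".py", ".sh", ".conf")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as handle:
                for lineno, line in enumerate(handle, 1):
                    for pattern, reason in PLATFORM_MARKERS.items():
                        if re.search(pattern, line):
                            findings.append((path, lineno, reason))
    return findings

for path, lineno, reason in scan_for_porting_issues("."):
    print(f"{path}:{lineno}: {reason}")
```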

• Application-only migration:

Newer solutions, such as the Zapp Migration Tool from AppZero, extract only the in-scope application from the source machine and enable it to run on the target without any re-installation, re-engineering or re-deployment. The essential mechanism is to decouple the application from its OS and encapsulate it, along with its dependencies, configuration and registry files, services and runtime libraries, into a 'Virtual Application Appliance'.
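AppZero's appliance format is proprietary, so the sketch below only illustrates the general idea of encapsulating an application directory together with a manifest of its dependencies. The paths, service names and manifest layout are all invented for the example:

```
import io
import json
import tarfile

def build_app_appliance(app_dir, manifest, out_path):
    """Bundle the application files and a dependency manifest into one archive."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(app_dir, arcname="app")
        data = json.dumps(manifest, indent=2).encode()
        info = tarfile.TarInfo("manifest.json")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

build_app_appliance(
    app_dir="/opt/billing-app",        # hypothetical application directory
    manifest={
        "services": ["billingd"],
        "runtime_libraries": ["libssl.so.1.1"],
        "config_files": ["/etc/billing-app/app.conf"],
    },
    out_path="billing-app-appliance.tar.gz",
)
```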

4. Tuning the target environment

Once the customer's existing instances have been migrated to the target cloud platform, they go through an adjustment stage to align them with the target environment's architecture standards. Typical adjustments include applying operating-system-level security patches, enforcing security policy or regulatory requirements, and making IP address changes.
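Some of these adjustments can be verified automatically after the move. The sketch below checks DNS and port reachability against hypothetical target-side standards; real tuning would also cover patch levels and policy compliance:

```
import socket

# Hypothetical target-side standards for one migrated instance.
TARGET_STANDARDS = {
    "hostname": "billing-app.cloud.example.com",
    "expected_ip": "10.20.30.40",
    "required_open_ports": [22, 443],
}

def outstanding_adjustments(standards):
    """Return a list of adjustments that are still outstanding after migration."""
    try:
        resolved = socket.gethostbyname(standards["hostname"])
    except socket.gaierror:
        return ["hostname does not resolve yet"]
    issues = []
    if resolved != standards["expected_ip"]:
        issues.append(f"DNS points at {resolved}, expected {standards['expected_ip']}")
    for port in standards["required_open_ports"]:
        with socket.socket() as sock:
            sock.settimeout(2)
            if sock.connect_ex((standards["expected_ip"], port)) != 0:
                issues.append(f"port {port} is not reachable")
    return issues

print(outstanding_adjustments(TARGET_STANDARDS))
```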

5. Final testing and go-live

In this final phase, it is confirmed that the migrated workload performs as expected, and the cloud platform becomes its production environment. The old source servers and images are usually decommissioned or repurposed after a pre-defined period of monitoring.
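Go-live confirmation usually includes at least a smoke test against the migrated workload. The endpoints below are hypothetical, and a real cut-over would also re-run the functional and non-functional test suites:

```
from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical endpoints exposed by the migrated workload.
ENDPOINTS = [
    "https://billing-app.cloud.example.com/health",
    "https://billing-app.cloud.example.com/login",
]

def smoke_test(urls, timeout=5):
    """Return True only if every endpoint answers with HTTP 200 within the timeout."""
    for url in urls:
        try:
            if urlopen(url, timeout=timeout).status != 200:
                return False
        except URLError:
            return False
    return True

print("go-live OK" if smoke_test(ENDPOINTS) else "hold go-live")
```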

Summary

Figure 2: Summary of the cloud migration process.

Conclusions

As with any modernization initiative, there is really no one-size-fits-all model; it is up to each business to decide how much change is tolerable and how far into the cloud to step. Please do drop a line if you'd like to share your feedback or ideas.
