Do you have a number of existing WebSphere Application Server-based workloads that you are looking to simplify? Are you looking to reduce cost while gaining flexibility at the same time? Do you want to integrate easily with value-add, cloud-based services such as cognitive and analytics capabilities? Well, you probably know where this is going: cloud is your answer! But before this turns into an infomercial, hear me out…
With many traditional workloads moving to cloud-based infrastructures, now is the perfect time to assess your existing Java-based workloads and migrate them to WebSphere Application Server on IBM Bluemix.
More commonly known as “WAS on Cloud”, this service provides a fully configured, turnkey WebSphere-based application environment that supports both traditional WAS and Liberty applications. You use the same tried-and-true wsadmin automation scripts and the same CI/CD techniques; you just point them at Bluemix instead of your own infrastructure! The low-cost, pay-for-what-you-use hourly model is much easier on your mind and your organization’s wallet than maintaining your existing servers in perpetuity. Automated security updates. Pre-configured firewalls that allow no traffic into your Network Deployment cell by default. All out of the box for free!
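To make the reuse claim concrete, here is a minimal sketch of the kind of wsadmin Jython deployment script that carries over unchanged; only the connection target on the wsadmin command line moves to the cloud. The application name, EAR path, server name, and host are hypothetical placeholders, and this runs only inside the wsadmin tool, not as standalone Python:

```python
# deployApp.py -- hypothetical wsadmin Jython sketch. Invoked via, e.g.:
#   wsadmin.sh -lang jython -host <your WAS on Cloud host> -port 8880 -f deployApp.py
# AdminApp and AdminConfig are objects the wsadmin shell provides at runtime.

appName = 'SampleApp'              # placeholder application name
earPath = '/builds/SampleApp.ear'  # placeholder build artifact path

# Update the app in place if it is already installed; otherwise install it.
if appName in AdminApp.list().splitlines():
    AdminApp.update(appName, 'app',
                    ['-operation', 'update', '-contents', earPath])
else:
    AdminApp.install(earPath, ['-appname', appName, '-server', 'server1'])

# Persist the change to the configuration repository.
AdminConfig.save()
```

The point is that nothing in the script itself is cloud-specific: your CI/CD pipeline simply swaps the `-host` value from an on-premises deployment manager to the WAS on Cloud endpoint.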
Migrate your WebSphere apps to the cloud in phases
Once you’ve moved some of your WebSphere-based workloads to the cloud, evolving them into smaller, nimbler autonomous application components is the next logical step. If you’ve followed trends in application development, you know these are now commonly referred to as microservices. Microservices provide easier logical and physical deployment of independent, resilient, and stateless components that are assembled in a distributed fashion to make your application more agile long-term. No longer are you forced to deploy a large application stack and a web of dependencies just to provide simple web frontends.
If you are not familiar with microservices and their benefits, check out the IBM Cloud Garage Method:
Learn more about Microservices architecture
However, you can’t change everything all at once! Moving your WebSphere workloads to the cloud is covered in this post; evolving those workloads to microservices is the topic of the next post in this series. For a brief overview of what’s involved, watch the video below (it’s only 1 minute long).
Now let’s go over the phases in a bit more detail.
Enterprise Application Modernization
“Enterprise Application Modernization” is a fancy title for the strategy we’ll employ while building this reference implementation in phases. These phases, documented below, let teams that own existing applications learn and adopt cloud-native principles incrementally, without sacrificing production efficiency or causing outages to the business. This strategy is covered in depth in a free WebSphere on the Cloud webinar you can watch after you finish up here!
Phase 0: Current State
This phase involves understanding and documenting the as-is state of the existing applications. These applications are often on-premises, with very high-touch deployments. The deployments may be further complicated by disparate ownership and, for better or worse, layers of inefficiencies.
Phase 1: Modernization
This phase focuses on modernizing application components to present-day versions, as well as updating development environments and delivery pipelines to modern practices. Note that it is an intermediate modernization step and not a complete re-architecture.
More details on Application Modernization – Phase 1
The modernization of many enterprise applications is impeded by a complex web of dependencies and binary incompatibilities. It is during this phase that these problems are resolved, removed, or contained, so as to limit their effective “blast radius” when adopting newer, desired (albeit potentially incompatible) platforms and technologies.
Phase 2: Mitigation
This phase focuses on iteratively moving core pieces of compute-based business logic to cloud-based services. This iterative approach reinforces proper cloud-development principles, while not sacrificing production workloads and efficiency.
Clients are frequently stuck in “analysis paralysis” before even getting to this phase. The mitigation phase is critical to hitting the ground running in later phases, when the majority of the heavy lifting will occur. Additional networking and security practices are usually defined during this phase, as legacy enterprise teams are exposed to public cloud provider configuration details for the first time. Again, we want to minimize the number of variables that we are changing at any given time.
Most existing database and messaging capabilities are left intact during this phase; the focus is on the core compute functionality in the new cloud environment. Note that this phase may be a target end state for some enterprise clients.
Phase 3: Production Lift & Shift
A potential target end state for some enterprise clients, this phase moves the “lifted” application and all core critical components to the cloud provider. Additional components may still exist on-premises and, depending on latency requirements, are either connected directly to the cloud provider or cached behind secured, higher-latency direct connections to the on-premises resources.
Newer cloud-based services, such as modern databases and messaging capabilities, are often adopted during this phase as well. This phase is where clients begin to realize the value of cloud, with continuous integration and continuous delivery pipelines managing deployment to production, minimizing hands-on activities with deployed applications, and integrating next-generation service management capabilities.
Next on WebSphere modernization to microservices
The prior sections are just an introduction to what we are planning to show in our WebSphere modernization-to-microservices work. You can peruse much more detail in our working repositories over on GitHub:
Browse reference implementation project
For our reference implementation, we’re finishing up the details of Phase 1 and getting ready to jump into Phase 2. Each of these phases will provide detailed instructions to build the reference application and prescriptive guidance on how to build your own in the same way!
Look for the upcoming posts in this series; we’ll detail the work done with our reference implementation through the phases, how we knew we were “ready” for microservices, and all the good DevOps-y principles behind everything we’re doing.
If you’re eager to get started right away, in addition to the reference architecture above, dig into the peer resources below.