September 5, 2017 By Debasis Das
Vijay Sukthankar
11 min read

WebSphere on the Cloud: Evolving to Microservices

Do you have existing monolithic Java/JEE applications running on WebSphere Application Server? Is your application composed of multiple business functionalities but packaged as a single application? Are there performance bottlenecks that you are not able to resolve because the application does not scale well? Are there failures that can lead to unavailability of the entire application? Do you have to create new versions and roll out the entire application for minor modifications or enhancements? Are you struggling to use the right technology for individual components of your application?

If you answered yes to any of these questions, you are in the right place. Maintaining old monolithic applications is becoming more expensive and challenging. In such a scenario, it is imperative to move to a modern microservices-based architecture that will solve many of the aforementioned challenges. Some of the key benefits of moving to a microservices-based architecture include:

  • Better fault isolation; if one microservice fails, the others will continue to work

  • Appropriate technology can be used to develop individual application components

  • Changes to the application are isolated to individual components, so there is no need for an expensive rollout of the entire application

  • Easy to scale individual components

  • Simplified DevOps processes through the adoption of container and orchestration technologies like Docker, Kubernetes, Istio, etc.

A microservices-based architecture can result in:

  • Faster time to market

  • Infrastructure cost reductions

  • Faster introduction or enhancement of business functionality

For some great insights on microservices, also check out the IBM Cloud Garage Method:

Learn more about Microservices architecture

Microservices done right can deliver the benefits mentioned above. There should be a lot of emphasis on “done right”, because at the end of the day, it is not just a change of technology; it also means adopting a new culture and a new way of working. When you move from a monolithic application to a microservices-based architecture, you must:

  • Move away from a large, single-team structure to smaller, decentralized teams

  • Bring in technology diversity within the organization – Use the most appropriate technology for the job

  • Introduce a DevOps culture that fosters greater collaboration between developers, operations, and everyone else involved in software delivery

Refactor monolithic WebSphere app into microservices-based app

In this post, we will describe how you can refactor your existing monolithic WebSphere application into a microservices-based application. To validate our recommendations, we created two reference implementation projects on GitHub:

  • ibm-cloud-architecture/refarch-jee-customerorder

  • ibm-cloud-architecture/refarch-jee-monolith-to-microservices

We followed the steps in this blog to refactor the original application into a microservices-based application. The above projects include detailed technical steps documenting how the refactoring was achieved.

The reference monolithic application is a simple store-front shopping application. Like many built during the early days of the Web 2.0 movement, users interact directly with a browser-based interface and manage their cart to submit orders. As depicted below, this application is built using the traditional 3-Tier Architecture model comprised of an HTTP server, an application server, and a supporting database.

The shopping app has products grouped into different categories; the user can search through the website, add items to the shopping cart, and later submit an order. The application uses two databases to store application data: the product inventory data and the customer order data. You can find more details about the application architecture and application components in ibm-cloud-architecture/refarch-jee-customerorder on GitHub.

Reminder: If you are not familiar with the IBM Cloud Reference Architectures and their benefits, you can review them in the IBM Cloud Garage Method.

Step 1 – Assessment

To modernize the existing WebSphere application into a microservices-based architecture, we need to analyze the application and assess whether it can be modernized and whether the effort is justified. Occasionally, it’s easier and less expensive to recreate the application from scratch. However, most of the time, given the nature of Java EE applications, it is much easier to convert the application into a microservices-based architecture than to start over.

The first step is to analyze the application and understand the key business functionalities and how they are implemented. A good understanding of the key business functionalities helps us repackage the application into microservices by following the “one microservice per functionality” approach. We should keep the microservices guiding principles in mind while assessing the application. Some of the key guiding principles are:

  1. Microservices should be designed to be small enough to be owned by an agile DevOps team (the so-called “two-pizza rule”)

  2. Each microservice must be responsible for its own data. While assessing the application, assess the database as well to check data dependencies between modules

  3. Microservices must be stateless. A stateless service handles each request using only the information contained in the request itself

  4. Microservices should be designed to do only one thing, but do it right.

Step 2 – Identification of Microservices

A good understanding of the application, from both a functional and a technical perspective, puts us in a better position to refactor it into microservices with minimal code changes while preserving the application’s functionality. Depending on the nature of the application, we can apply any of the standard patterns, such as Backend for Frontend (BFF), the Service pattern, the Adapter pattern, etc. For a complete understanding of the various choices, please see Rapidly developing applications: using development patterns.

While refactoring a big monolithic application into microservices, you usually try to split the front end from the backend. Barring a few technical challenges like Cross-Origin Resource Sharing (CORS), service invocation, or SSO, this is typically a simple process. After the split, the BFF microservice makes remote calls to the downstream microservices.

Let’s pause a moment to review the Backend for Frontend (BFF) Pattern in more detail.

While a single-page application works well for a single-channel user experience, that pattern delivers poor results across user experiences on different channels, sometimes overloading the browser with managing interactions with many asynchronous REST-based backing services.

The Backend for Frontend (BFF) pattern evolved to address this: a backend aggregator service reduces the overall number of calls from the browser and handles most of the communication with the external backing services, returning a single, more easily managed response to the browser. The pattern allows front-end teams to develop and deploy their own backend aggregator service (the BFF) to handle all of the external service calls needed for their specific user experience – often built for a specific browser, mobile platform, or IoT device. The same team builds both the user experience and the BFF, often in the same language, leading to better overall application performance and faster application delivery.

While building a microservices-based application, our customers follow a standard approach:

  • Modern omnichannel UIs should be implemented as Single Page Applications (SPAs) with HTML/CSS/JavaScript

  • Those UIs are loaded from a static web server (either implemented using NGINX or just a simple Node.js web server)

  • The JavaScript in the client can then make REST calls down into a set of BFFs (or directly to business microservices) to fill in data in the SPA components (e.g., the contents of GUI menu items, drop-downs, or other GUI elements).

Microservices do not generate GUI elements in HTML – that is entirely done by the JavaScript and the static HTML/CSS. Instead, microservices only return business-oriented JSON data in response to REST requests.
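
To make this concrete, here is a minimal sketch, in JAX-RS, of what a BFF endpoint for the shopping SPA could look like. The class name, paths, and downstream service URL are illustrative assumptions, not the actual project code; see the reference projects on GitHub for the real implementation.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Hypothetical BFF endpoint called by the shopping SPA's JavaScript
@Path("/shop/catalog")
public class CatalogBffResource {

    // In a real deployment, the downstream URL would come from configuration, not a constant
    private static final String PRODUCT_SEARCH_URL = "http://productsearchservice:9080/rest/products";

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getCatalog() {
        // Aggregate the downstream ProductSearch call and pass plain JSON back to the SPA;
        // the browser only talks to this BFF, never to the business microservices directly.
        Client client = ClientBuilder.newClient();
        try {
            return client.target(PRODUCT_SEARCH_URL)
                         .request(MediaType.APPLICATION_JSON)
                         .get(String.class);
        } finally {
            client.close();
        }
    }
}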

The next step is to identify the application’s microservices. Instead of going “big bang”, it is better to identify chunks of business functionality that can be easily isolated from the main application and converted into microservices. That is, the application should be broken into microservices gradually; until the process is complete, the microservices run alongside the monolithic application.

Over time, the amount of functionality implemented by the monolithic application shrinks until either it disappears entirely or it becomes just another microservice. This approach is also called the Strangler pattern.

For the Shopping Application, we applied the above patterns and came up with three microservices as shown below:

The first and the most obvious one is the ShoppingWebBFF Microservice that will take care of the front end of the application and can be managed independently of the backend application.

The two other microservices, identified based on the Service pattern, represent the key business functionalities of this application. The first is the ProductSearch capability, which provides the functionality to search the product catalog and present the search results. The second is the CustomerOrder capability, which provides the functionality to fulfill customer orders. We need to ensure that each microservice owns its own data. After analyzing the application and the databases, it was established that the ProductSearch service owns the ProductDB database and the CustomerOrder service owns the OrderDB database. In this application, there are no interdependencies between the databases. However, any interdependencies between databases, such as foreign key references, should be removed before creating these microservices.

From a business functionality perspective, while the search service needs to be fast, the CustomerOrder service needs to be reliable, stable, and fault-tolerant. We will continue to have some functionality in the backend EJBs. Over time, these functionalities can be moved or rewritten using the appropriate technology for each identified microservice.
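
To make the data-ownership principle concrete, the following is a rough sketch of what the ProductSearch service boundary could look like: a JAX-RS resource that owns its own data source (ProductDB) and returns JSON to its callers. The class name, URL path, and JNDI name are illustrative assumptions, not the actual project code.

import javax.annotation.Resource;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Hypothetical boundary of the ProductSearch microservice: it exposes a REST interface
// and owns its own database (ProductDB); no other service accesses this data directly.
@Path("/products")
public class ProductSearchResource {

    @Resource(lookup = "jdbc/ProductDB")   // data source name is an assumption
    private DataSource productDb;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String search(@QueryParam("category") String category) {
        // Query ProductDB via productDb and return business-oriented JSON to the caller
        // (typically the BFF); the JDBC query itself is omitted for brevity.
        return "[{\"name\": \"sample product\", \"category\": \"" + category + "\"}]";
    }
}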

Step 3 – Building Microservices

Once we have identified the microservices, the next step is to build them. The following are the major activities:

  • Writing/refactoring the microservices

  • Write/rewrite the communication code for these microservices to talk to each other

  • Address the challenges introduced by breaking up the application, such as Cross-Origin Resource Sharing (CORS), Single Sign-On (SSO), and so on

  • Create the hosting environment to host these microservices

  • Create an Agile DevOps environment for continuous integration/continuous delivery.

Let’s briefly cover each below.

Writing/refactoring the microservices: During this step, we must aim to reuse as much code as possible with minimal change. We should not look to introduce new functionality or move to new technology at this stage. Even though microservices can theoretically be implemented using any technology, container technologies such as Docker and Kubernetes are a natural fit. Owing to the short-lived nature of microservices and the erratic workloads they may experience, containers are a good match because of their small memory footprint, faster startup time (compared to VMs), and application isolation.

Once you have created a microservice from the existing code, package it in a Docker container by creating a Dockerfile. Resolve any dependencies it may have on other parts of the application, such as references to Java classes outside of the module, EJB references, or JNDI lookups.

Write/rewrite the communication code for these microservices to talk to each other: When you repackage/refactor your application, you must take care of the inter-module dependencies. The direct references to classes need to be replaced by REST service calls. The first step is to create REST interfaces for the module and the corresponding REST invocation client logic in the calling module.
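
As an illustration, here is a minimal sketch of the two sides of such a substitution using JAX-RS: a REST facade exposed by the CustomerOrder microservice and the invocation client used by the calling module. Class names, paths, and the service URL are assumptions made for the example, not the actual project code.

// CustomerOrderResource.java - REST facade on the CustomerOrder microservice,
// replacing what used to be a direct EJB/class reference from the web module
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/orders")
public class CustomerOrderResource {

    @GET
    @Path("/{orderId}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getOrder(@PathParam("orderId") String orderId) {
        // Delegate to the existing order logic and return its result as JSON
        return "{\"orderId\": \"" + orderId + "\", \"status\": \"SUBMITTED\"}";
    }
}

// CustomerOrderClient.java - invocation client in the calling module; the direct
// class reference is replaced by a JAX-RS client call (service URL is an assumption)
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class CustomerOrderClient {

    private static final String ORDER_SERVICE_URL = "http://customerorderservice:9080/rest/orders";

    public String fetchOrder(String orderId) {
        Client client = ClientBuilder.newClient();
        try {
            return client.target(ORDER_SERVICE_URL)
                         .path(orderId)
                         .request(MediaType.APPLICATION_JSON)
                         .get(String.class);
        } finally {
            client.close();
        }
    }
}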

Address the challenges that are introduced (CORS, SSO, etc.): If the front-end JavaScript makes direct REST invocations to the backend modules (which, after refactoring, become independent microservices), it can result in CORS issues. This can be resolved by ensuring that all front-end communications are routed through the BFF, as depicted below:
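
If some cross-origin calls cannot be routed through the BFF during the transition, one common workaround (not specific to this reference implementation) is a JAX-RS response filter on the backend service that adds the CORS headers. Here is a minimal sketch, with a hypothetical allowed origin:

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

// Hypothetical filter that allows the SPA's origin to call this service directly;
// adjust the allowed origin, methods, and headers for your deployment
@Provider
public class CorsResponseFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response) {
        response.getHeaders().putSingle("Access-Control-Allow-Origin", "https://shopping-web.example.com");
        response.getHeaders().putSingle("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
        response.getHeaders().putSingle("Access-Control-Allow-Headers", "Content-Type, Authorization");
    }
}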

When you refactor your application into microservices, your traditional application security mechanism may not work. As a sample implementation, we have used OpenLDAP as our directory server and SSO provider, which can federate access to the different microservices and pass on the access context appropriately. For the shopping application, we used Liberty as the implementation runtime. Appropriate changes were made to the server configuration files (server.xml) to enable SSO and use OpenLDAP as the SSO service provider.

Create the hosting environment to host these microservices: The IBM Bluemix Container Service provides an excellent environment for hosting your microservices. It provides a native Kubernetes experience that is secure and easy to use, removes the distractions of managing your clusters, and extends the power of your apps with IBM Watson and other cloud services, binding them with Kubernetes Secrets. It applies pervasive security intelligence to your entire DevOps pipeline by automatically scanning Docker images and live containers for vulnerabilities and malware, leveraging IBM’s X-Force Exchange.

Log in to Bluemix and create an instance of “Kubernetes Cluster”.

To deploy your containers in the cluster and configure it, there are some prerequisites, as mentioned in the “Access” section of your cluster’s dashboard. Follow the instructions to set up the environment on your local machine.

Create an Agile DevOps environment for CI/CD: The next step is to set up a DevOps pipeline that builds and deploys the microservices into the cluster and configures the cluster as per your requirements. The IBM Toolchain service can be leveraged to set up this environment. Create the YAML files that contain the deployment instructions the Kubernetes cluster needs to create your microservice containers.

A sample YAML file will look like this.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: shoppingwebbffservice
spec:
  replicas: 1
  template:
    metadata:
      name: shoppingwebbffservice
      labels:
        run: shoppingwebbffservice
        test: shoppingwebbffservice
    spec:
      containers:
        - name: shoppingwebbffservice
          image: shoppingwebbffservice
          ports:
            - containerPort: 9443
            - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: shoppingwebbffservice
  labels:
    run: shoppingwebbffservice
spec:
  type: NodePort
  selector:
    run: shoppingwebbffservice
  ports:
    - protocol: TCP
      port: 9080
      name: http
    - protocol: TCP
      port: 9443
      name: https

You can trigger an appropriately configured delivery pipeline to build and deploy the microservices.

Step 4 – Building the Backend

WebSphere Application Server (WAS V9.0) on Cloud: In this example, the core business functionalities, Product Search and Customer Order, continue to run as EJBs. This is to make sure that business functionalities are not impacted during the migration process. To deploy the EJBs on WebSphere Application Server (WAS), we used the WAS on Cloud (V9) offering from the IBM Cloud platform.

It is also important to package the EJBs into their own modules (EAR files), based on functionality, which makes it easier to modernize parts of the application. For example, let’s say you observed that the Product Search was slower than desired. In the future, you can change the implementation of the entire Product Search to use new technologies such as Node.js and a NoSQL database, with multiple levels of caching to assist. This would make Product Search both faster to respond and easier to scale.

Backend database: In this example, we have considered moving the entire application and database to the cloud. The database used was DB2, and hence DB2 on Cloud (which is called dashdb-for-transactions) was the preferred choice.

It is not necessary to have the database on the cloud. In cases where customer requirements demand that the data remain on premises, you can still connect your application (microservices as well as EJB components) to an on-premises database using Secure Gateway.

Step 5 – Putting it all together

Here are the final deployment diagrams, after breaking the monolithic application into microservices, using the Strangler pattern:

Option 1: When the database is on the cloud

Option 2: When the database is inside the enterprise network

Step 6 – Extending for Agility

Now that part of the application runs as cloud-native on the Bluemix platform with Kubernetes, it’s easier to add the projects to a DevOps toolchain.

When part of an application runs as microservices, it’s easy to add automated build, deploy, and test jobs. DevOps, microservices, and Docker-based containers are the three main pillars of continuous innovation, scalability, and resiliency.

Step 7 – Testing

Finally, the application needs to be tested thoroughly to ensure that its behavior remains intact. Individual microservices need to be tested to ensure that they work independently in isolation. Subsequently, the overall application must be tested to ensure that it is scalable, highly available, and resilient.
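
As a starting point for isolation testing, the sketch below uses JUnit and the JAX-RS client to verify that a single microservice answers on its own, without the monolith or the BFF running. The endpoint URL, port, and query parameter are assumptions made for the example.

import static org.junit.Assert.assertEquals;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.Test;

// Hypothetical isolation test for the ProductSearch microservice
public class ProductSearchIsolationTest {

    @Test
    public void searchEndpointRespondsOnItsOwn() {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client.target("http://localhost:9080/rest/products")
                                      .queryParam("category", "electronics")
                                      .request("application/json")
                                      .get();
            // The service should answer without the monolith or the BFF being available
            assertEquals(200, response.getStatus());
        } finally {
            client.close();
        }
    }
}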

Next Steps

Are you ready to try out the case study to get real hands-on experience? You can follow the steps mentioned in the ibm-cloud-architecture/refarch-jee-monolith-to-microservices project on GitHub. You can also take your own application and follow a similar process by applying the Strangler and BFF microservices patterns. This post covers the beginning of application modernization. More steps are needed to complete the entire modernization task; we will cover some of them in the next post, including implementing the Circuit Breaker pattern and applying a microservices fabric such as Istio to manage the services.
