Refactoring to microservices, Part 1: What to consider when migrating from a monolith

How to move from traditional middleware architectures to microservices


In both the Cloud and Agile programming communities, it seems like everyone is talking about microservices. Other architectural principles like REST have taken the development world by storm, and now microservices are the newest wave to crest. But what are they? And why would a Java™ developer care about them?

In this series, I'll answer those questions and explain why you'd want to migrate your applications to microservices (Part 1). I'll delve into data refactoring (Part 2), and lay out a step-by-step method to help you migrate to microservices (Part 3).

What are microservices?

In Martin Fowler and James Lewis's classic article on the subject, we have the simplest definition of a microservices architecture: "An architectural approach that consists of building systems from small services, each in their own process, communicating over lightweight protocols." A critical concept to understand is that each microservice represents one unique business function.

Fowler also refers to microservices as "service orientation done right." As Fowler and Lewis state, and many others have attested, the enterprise computing landscape is littered with the remains of large-scale SOA projects gone bad. Microservices may help reverse that trend, but we have to understand where to apply them and, more importantly, recognize that they can be especially effective even in projects that aren't the newest, coolest thing on the block.

Why refactor to microservices?

Despite what the folks in Silicon Valley sometimes like to believe, not every application is a greenfield project. The reality is that enterprises have a lot of existing Java code, and a lot of Java developers. It's simply not economically feasible to throw away all of that Java code and start fresh with all new runtimes and programming languages.

Instead, it's better to find the good parts and reuse those in the right framework. That's why refactoring existing applications into microservices is often the best, most prudent approach to keeping your existing systems running while moving them to a more sustainable development model.

How do you refactor to microservices?

So what do I mean by refactoring? Programming communities define it as "introducing a behavior-preserving code transformation." That boils down to keeping your external APIs the same while changing the way your code operates or is packaged. Refactoring to microservices would thus mean adding microservices into your application without necessarily changing what it does. You wouldn't add new functionality to your application, but you would change how it's packaged and perhaps how the API is expressed.

Refactoring to microservices is not right for every application, and you can't always do it successfully. But refactoring is worth considering when you can't throw everything away. The three basic considerations are:

  • How is your application packaged (and built)?
  • How does your application code function?
  • How does your application interact with back-end data sources when those data sources are structured in different ways?

Step 1. Repackaging the application

The best place to begin is by revisiting your Java application packaging structure and adopting some new packaging practices before you even start to change your code. In the early 2000s we all started building ever-larger EAR files to contain our logical applications. We then deployed those EARs across every WebSphere® Application Server in our farm. The problem is that this tied each piece of code in that application to the same deployment schedules and the same physical servers. Changing anything meant retesting everything, and that made any changes too expensive to consider.

But now, with containers like Docker, PaaS platforms, and lightweight Java servers like WebSphere Liberty, the economics have changed, and you can start reconsidering the packaging. Here are three principles you need to start applying:

  1. Split up the EARs: Instead of packaging all of your related WARs in one EAR, split them up into independent WARs. This may involve some minor changes to code, or more likely to static content, if you change application context roots to be separate.
  2. Apply "Container per service": Next apply the "Container per service" pattern and deploy each WAR in its own Liberty server, preferably in its own container (such as a Docker container or a Bluemix instant runtime). You can then scale containers independently.
  3. Build, deploy, and manage independently: Once they are split, you can then manage each WAR independently through an automated DevOps pipeline (such as Pipeline, which is part of the IBM Bluemix Continuous Delivery service). This is a step toward gaining the advantages of continuous delivery.

You can see the effect of applying these three principles:

Diagram showing before and after EARs are split
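The "Container per service" principle above can be sketched as a Dockerfile. This is a minimal, hypothetical example: the `websphere-liberty` base image is the official one on Docker Hub, but the tag, the `server.xml` path, and the `inventory-service.war` name are placeholders you would replace with your own build artifacts.

```dockerfile
# Sketch of "Container per service": exactly one WAR per Liberty server per container.
# Image tag, config path, and WAR name are illustrative; adjust to your build.
FROM websphere-liberty:webProfile7

# Each service carries its own server configuration...
COPY src/main/liberty/config/server.xml /config/

# ...and exactly one WAR, dropped into this server's dropins directory.
COPY target/inventory-service.war /config/dropins/
```

Because each image contains a single WAR, you can scale, version, and redeploy that service without touching any of its former EAR-mates.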

Step 2. Refactoring the code

Now that your deployment strategy has gotten down to the level of independent WARs, you can start looking for opportunities to refactor the WARs to even more granular levels. Here are three cases where you'll find opportunities to refactor your code to fit an approach that packages each microservice independently.

  • Case 1. Existing REST or JMS services: This is by far the easiest case for refactoring. You may have existing services that are already compatible with a microservices architecture, or that could be made compatible. Start by untangling each REST or simple JMS service from the rest of the WAR, and then deploy each service as its own WAR. At this level, duplication of supporting JAR files is fine; this is still mostly a matter of packaging.
  • Case 2. Existing SOAP or EJB services: If you have existing services, they were probably built following a functional approach (such as the Service Façade pattern). In this case, a functionally based services design can usually be refactored into an asset-based services design. This is because in many cases, the functions in the Service Façade were originally written as CRUD (create, retrieve, update, and delete) operations on a single object. If this is true, the mapping to a RESTful interface is simple: just re-implement the EJB session bean interface or JAX-WS interface as a JAX-RS interface. You may need to convert object representations to JSON in order to do this, but that's usually not very difficult, especially where you were already using JAXB for serialization.

    In cases where it's not a simple set of CRUD operations (for instance, account transfer), then you can apply a number of different approaches for constructing RESTful services (such as building simple functional services like /accounts/transfer) that implement variants of the Command pattern.

  • Case 3. Simple Servlet/JSP interfaces: Many Java programs are really just simple Servlet/JSP front-ends to database tables. They may not have what is referred to as a "Domain Object" layer at all, especially if they follow design patterns like the Active Record pattern. In this case, creating a domain layer that you can then represent as a RESTful service is a good first step. Identifying your domain objects by applying Domain-Driven Design will help you identify your missing domain services layer. Once you've built that (and packaged each new service in its own WAR), then you can either refactor your existing Servlet/JSP app to use the new service or you can build a whole new interface, perhaps with JavaScript, HTML5, and CSS, or maybe as a native mobile application.
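To make Case 2 concrete, here is a minimal, hypothetical sketch of re-expressing a CRUD-style Service Façade as one asset-based resource. The `Customer` and `CustomerResource` names are invented for illustration, and an in-memory map stands in for the real back end; in an actual migration the resource class would carry JAX-RS annotations (`@Path`, `@GET`, `@POST`, `@PUT`, `@DELETE`) and JAXB or JSON-B would handle the JSON mapping. The point is the shape: one resource per asset, one method per CRUD verb.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical asset: in a real service this would be a JAXB/JSON-B annotated class.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// The former Service Façade's CRUD functions, re-expressed as a single resource.
// In JAX-RS, the class would carry @Path("/customers") and each method an HTTP-verb annotation.
class CustomerResource {
    private final Map<String, Customer> store = new HashMap<>();

    // POST /customers           (create)
    Customer create(String id, String name) {
        Customer c = new Customer(id, name);
        store.put(id, c);
        return c;
    }

    // GET /customers/{id}       (retrieve)
    Optional<Customer> retrieve(String id) {
        return Optional.ofNullable(store.get(id));
    }

    // PUT /customers/{id}       (update)
    Customer update(String id, String name) {
        Customer c = new Customer(id, name);
        store.put(id, c);
        return c;
    }

    // DELETE /customers/{id}    (delete)
    boolean delete(String id) {
        return store.remove(id) != null;
    }
}
```

Once the façade's operations line up one-to-one with resource methods like these, the remaining work is mostly wiring: annotations, JSON serialization, and packaging the resource in its own WAR.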

Step 3. Refactoring the data

Once you've built and repackaged the small services defined in the previous three cases, you'll want to turn your attention to what may be your hardest problem in adopting microservices: refactoring the data structures that your applications are built on. We'll examine this challenge more deeply in Part 2 of this series. But there are a few rules you can follow for the simplest cases:

  1. Isolated islands of data: Begin by looking at the database tables that your code uses. If the tables used are either independent of all other tables or come in a small, isolated "island" of a few tables joined by relationships, then you can just split those out from the rest of your data design. Once you have done that, you can consider the right option for your service.

    For instance, do you stay in SQL, but perhaps consider moving from a heavyweight enterprise database like Oracle to a smaller, self-contained database like MySQL? Or do you consider a NoSQL database to replace your SQL database? The answer to that question depends on the kinds of queries you actually perform on your data. If most of the queries you do are simple queries on "primary" keys, then a key-value database or a document database may serve you very well. On the other hand, if you really do have complex joins that vary widely (that is, your queries are unpredictable), then staying with SQL may be your best option.

  2. Batch data updates: If you do have only a few relationships and you decide to move your data into a NoSQL database anyway, consider whether you just need to do a batch update into your existing database. Often when you consider the relationships between tables, the relationships don't take a time factor into consideration; they may not need to be always up to date. A data dump/load approach that runs every few hours may be fine for many cases.
  3. Table denormalization: If you have more than a few relationships to other tables, you may be able to refactor (or in DBA terms, "denormalize") your tables. Now, even discussing this can raise the hackles of many database administrators. However, if you take a step back, your team as a whole should think about why the data was normalized to begin with. Often, highly normalized schemas were designed to reduce duplication and thereby save space, because disk space was expensive. That's simply not true anymore. Instead, query time is now the thing you want to optimize, and denormalization is a straightforward way to achieve that.
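As a toy illustration of the trade-off behind rules 1 and 3, the sketch below contrasts normalized rows (an order plus separate line items joined by `orderId`) with a denormalized "document" that embeds the line items, the way a document database or a denormalized table would store them. The `OrderRow`/`LineItemRow`/`OrderDoc` names are invented for this example; the point is only that when reads are simple key lookups, performing the join once at write time turns every read into a single get.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Normalized shape: an order row plus separate line-item rows, joined by orderId at query time.
record OrderRow(String orderId, String customer) {}
record LineItemRow(String orderId, String sku, int qty) {}

// Denormalized "document" shape: line items are embedded directly in the order.
record OrderDoc(String orderId, String customer, List<LineItemRow> items) {}

class Denormalize {
    // Build the documents once, at write/refactor time, by performing the join up front.
    static Map<String, OrderDoc> toDocuments(List<OrderRow> orders, List<LineItemRow> items) {
        Map<String, OrderDoc> docs = new HashMap<>();
        for (OrderRow o : orders) {
            List<LineItemRow> mine = new ArrayList<>();
            for (LineItemRow li : items) {
                if (li.orderId().equals(o.orderId())) mine.add(li);
            }
            docs.put(o.orderId(), new OrderDoc(o.orderId(), o.customer(), mine));
        }
        return docs;
    }
}
```

A read is then just `docs.get("A-1")` — one key lookup instead of a runtime join — at the cost of duplicating data wherever the same item appears in more than one order.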


Now you have a taste of what refactoring to microservices is about, and what factors to consider in choosing your approach. The good news is that refactoring your code is not as hard as you may think, and in many cases, it's actually pretty simple. If you work your way through your code looking for these (relatively) simple cases, you may find that the more complex code sections are actually few and far between.

In Part 2 of this series, we'll dive deeper into how the structure of your data can affect how you choose to introduce microservices into your applications.
