What are microservices, and how do they compare to traditional monolithic architectures?
Interest in and adoption of cloud-native development using microservices have increased dramatically over the last few years. That said, some businesses are only just beginning to explore this architectural pattern to build new applications or modernize existing ones.
In this video, we provide a high-level overview of microservices and compare them to traditional monolithic architectures by utilizing an example of a company that sells tickets to concerts and sporting events.
The video is intended to be a high-level exploration of microservices and purposely doesn’t dive into API gateways, databases, service discovery, observability, service meshes, etc. We can save those topics for future conversations.
Learn more about microservices and cloud-native development
Hi, I’m Dan Bettinger with the IBM Cloud team, and I’m here today to answer the question “What are microservices?”
What are microservices?
For those who don’t know, a microservice is an architectural pattern in which every application function is its own service. Each service is deployed in its own container, and those containers speak with each other via APIs.
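To make that concrete, here’s a minimal, hedged sketch of a single microservice in Python: a hypothetical inventory service that owns one function and exposes it only through an HTTP/JSON API. The service name, route, and data here are illustrative, not from the video.

```python
# Hedged sketch: a hypothetical "inventory" microservice that owns one
# function and exposes it only through an HTTP/JSON API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory data standing in for the service's own datastore.
INVENTORY = {"concert-123": 42, "match-456": 0}

def tickets_available(event_id: str) -> int:
    """The function this service owns; other services never import it
    directly -- they call the API below instead."""
    return INVENTORY.get(event_id, 0)

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /availability/concert-123
        event_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"event": event_id,
                           "available": tickets_available(event_id)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run the service (each instance would live in its own container):
#   HTTPServer(("", 8080), InventoryHandler).serve_forever()
```

The key point is the boundary: the only way another service reaches this function is over the API, never through a shared library or shared process.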
What is a monolith?
To better understand what a microservice is, let’s compare it to a monolith. A monolith is a server-side system based on a single application. In Java, for example, the application will be deployed in WAR or JAR files, and that’s how it gets put into production.
The thing about a monolith is, initially it’s easy to develop, deploy, and manage. So let’s get a better understanding through an example. In this case, let’s pretend we’re a ticketing platform and we sell tickets to sporting events and concerts.
In a monolithic world, that application might look like this. We’d have a user interface. Another component would be the inventory system. We’d have a component that generates recommendations based on user inputs. We’d have a cart. Some sort of a payment and ordering component. And then we also have a reporting engine as well.
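For contrast, the same components in a monolith share one codebase and one process, so they call each other directly rather than over APIs. Here’s a minimal Python sketch, with hypothetical component names:

```python
# Hedged sketch: the same ticketing app as a monolith. All components
# live in one codebase and one process, calling each other directly.
class TicketingMonolith:
    def __init__(self):
        self.inventory = {"concert-123": 42}   # inventory component's data
        self.carts = {}                        # cart component: user -> events

    # --- inventory component ---
    def available(self, event_id):
        return self.inventory.get(event_id, 0)

    # --- cart component (direct in-process call, no API boundary) ---
    def add_to_cart(self, user_id, event_id):
        if self.available(event_id) > 0:
            self.carts.setdefault(user_id, []).append(event_id)

    # --- payment/ordering component ---
    def checkout(self, user_id):
        items = self.carts.pop(user_id, [])
        for event_id in items:
            self.inventory[event_id] -= 1      # shared state, no isolation
        return len(items)
```

Because every component shares one process and one set of data structures, a bug in any of them (or in a library they all share) can take the whole application down.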
The challenges of using a monolith
The thing to understand about monoliths is that, traditionally, they have a lot of shared libraries, so their components are highly dependent on each other. If you change a library, you need to understand the ramifications of that change; you could effectively take the whole application down with your change.
Language and framework
Another challenge around a monolith is that you’re locked into the framework and the language that the team picked when they built it. So additional componentry, as it gets added, needs to be written in those frameworks and languages, even if better ones are out there. So that can be a problem as well.
Another challenge is growth. The monolith might be great initially, but user feedback comes in and the development team adds additional capability and functionality. In this case, we’ll add in component A, component B, and even component C.
So what happens is, as the application gets larger, it’s less and less likely that people on the team can understand the whole thing. They might know the sections they work on and how those operate, but understanding it holistically is a challenge. And that can lead to a lot of trouble in trying to deploy and maintain the application.
Speaking of deployments, deploying a monolith as it gets larger becomes more of a heroic task, where a change window needs to be implemented—usually on a Friday night—and the ops team would have to go wrestle with this monolith in trying to get it deployed to production and have it stabilized and ready for Monday morning when the load comes back on top of it. So that’s a challenge there unto itself.
Another challenge with the monolith is its ability to scale. In this example with the ticketing company, if there’s high demand for tickets and lots of users, maybe the payment system comes under duress and hits some contention—it needs some help. In that situation, though, the only way to fix it is to deploy the whole application again, and that can get painful.
In this case, we have one instance of the application running right now. When the load comes up, we need to deploy a second instance of the whole thing. That can take time, and by the time it gets deployed and stabilized, the peak might have subsided. In that case, you’ve done nothing for your users because you’ve missed the window—they’re already gone. So that’s one way to look at it. That’s a monolith.
Let’s take a look at the same application deployed as microservices. So in a microservices-based deployment, we’d still have our user interface—that’s a service inside of its own container. We’d have the inventory service. We’d have the recommendation engine deployed in its own container as a service—the cart, for example. We’d have some type of a payment capability as well as the reporting. Now, each one of these talks to each other, where needed, via APIs.
The advantages of microservices
There are benefits you’ll see right off the bat. First, the team responsible for the reporting engine can use the language and framework they want to use. The team that runs the cart, for example, can likewise pick the language and framework that best fit their requirements. That’s a really interesting benefit.
Iterate at will/DevOps pipeline
Number two, you’re able to actually iterate at will. These containers and services are front-ended by a DevOps pipeline. As a developer checks code into the pipeline, it goes through automated testing; once all that passes, the code can be deployed into production immediately. You’re no longer beholden to the speed at which other teams can operate, so you’re able to iterate faster, bringing value to your customers at a faster pace, which is wonderful.
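That per-service pipeline can be sketched, in a hedged way, as a tiny Python function: a check-in builds an image, automated tests gate the deploy, and a passing run ships that one service immediately. The stage names and image-tag format are illustrative.

```python
# Hedged sketch of a per-service DevOps pipeline:
# check-in -> build -> automated tests -> immediate deploy of one service.
def run_pipeline(commit: str, test_suite) -> str:
    image = f"image:{commit[:7]}"   # build a container image for this commit
    if not test_suite():            # automated tests gate the deploy
        return "rejected"
    return f"deployed {image}"      # ships straight to production
```

The important property is scope: only this one service is built, tested, and deployed, so no other team’s schedule is involved.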
Less risk in change
Additionally, if there is a change that breaks part of that service or breaks, in this case, the reporting engine, the whole application doesn’t fall over. It still works. So, effectively, by using this model, you’re reducing your risk, you’re implementing smaller changes, and you’re increasing value over time.
Add new components
Another really cool part is that you can add in new components over time, just like we did with the monolith. So we can add in component A, component B, and component C, and they can all be in different languages and frameworks, which is wonderful. And they just communicate, again, over APIs.
Another benefit of the microservice-based architecture is its ability to independently scale. If there are a lot of people trying to purchase tickets at the same time, and the purchasing or that payment system is under duress or is under some contention, the platform can spin up additional containers to help with the load. And when the load subsides, those containers can go away. So that’s a wonderful way for the application to naturally breathe.
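That scaling decision can be sketched as a toy autoscaler rule: given the current load on just the payment service, compute how many containers of that one service to run. The capacity numbers here are made up for the example.

```python
import math

# Hedged sketch of independent scaling: a toy autoscaler rule for just
# the payment service; the rest of the application is left untouched.
def desired_replicas(requests_per_sec: int,
                     capacity_per_replica: int = 100,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Number of payment-service containers to run for the current load."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

When the peak subsides, the same rule scales the service back down, which is the “breathing” the platform does naturally.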
A review of microservices
So let’s review real quick: A microservices architecture is one where every application function is its own service, deployed in its own container, and the services communicate over APIs. You get independence in the language and framework you choose, you’re able to iterate quickly and at will, and you’re able to scale independently. That’s what makes microservices really interesting.
Thank you for your time today. If you’d like to learn more and see more lightboarding videos, check us out on the IBM Cloud Blog.