This 6-part series on microservices application development provides a context for defining a cloud-based pilot project that best fits current needs and prepares for a longer-term cloud adoption decision.
Gartner analysts recently emphasized the importance of doing cloud native development as soon as possible after any initial migration of existing workloads.
Here in part 2, we lay out the common capabilities of an architecture for rapidly developing applications based on microservices. You’ll need these capabilities whether you’re transforming a monolith or developing a cloud native application.
Here’s a guide to the overall series:
An overview of microservices (part 1), after providing context for the business pressures requiring faster app development and delivery, steps through the process a team went through in transforming a specific monolithic application.
Architecting with microservices (this part) lays out the common capabilities of an architecture for rapidly developing applications based on microservices.
Designing and versioning APIs (part 6) offers best practices for managing the interfaces of microservices in your application.
Common capabilities of microservices
Application components run as microservices in the cloud, communicating with each other through the microservices fabric. Though patterns differ depending on the type of application, all microservices-based applications include some common larger-scale components and capabilities.
Reviewing the diagram from right to left, we note that the backend microservices are exposed via APIs and consumed through an API Gateway. An API Gateway can be as simple as a proxy of endpoints, or more sophisticated, employing security at first contact, monitoring, API versioning, and full API management with a developer portal for third-party consumption.
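The gateway’s simplest role, proxying endpoint paths through to backend services, can be sketched as a routing table. This is a minimal illustration, not any particular gateway product’s API; the service names and path prefixes are hypothetical.

```python
# Minimal sketch of path-based routing at an API gateway.
# Service names and path prefixes are hypothetical placeholders.
ROUTES = {
    "/orders": "http://orders-service:8080",
    "/catalog": "http://catalog-service:8080",
    "/customers": "http://customers-service:8080",
}

def route(path: str) -> str:
    """Return the backend URL for a request path, by longest-known prefix."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no backend registered for {path}")
```

A real gateway layers authentication, rate limiting, and version negotiation on top of this basic dispatch step.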
Microservices success using DevOps
Creating a successful microservices architecture requires a strong automation strategy using DevOps, addressing provisioning and continuous deployment/release.
Provisioning: Each microservice component of your application runs in its own self-sufficient container. The cloud platform instantiates a container with each microservice and all its software dependencies, and orchestrates how the set of containers is assigned underlying infrastructure. For example, running a container infrastructure over an Infrastructure as a Service layer using Docker orchestration engines like Kubernetes or Docker Datacenter automates how those environments provision containerized software over a virtual machine or bare metal server. Usually, a software development and operations team uses a vendor’s cloud platform as a service (PaaS), which manages all the provisioning complexity.
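In Kubernetes, for instance, that orchestration is declared rather than scripted: you describe how many replicas of a containerized microservice should run, and the platform assigns them to infrastructure. The manifest below is a hedged sketch; the service name, image, and replica count are placeholder values.

```yaml
# Hypothetical Kubernetes Deployment for one containerized microservice.
# The name, image reference, and replica count are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                     # the platform keeps three containers running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0
          ports:
            - containerPort: 8080
```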
Continuous deployment and release: To rapidly iterate a microservices application, you must set up an efficient system for building and versioning Docker images. With a multi-cloud strategy, your build and deploy automation needs to abstract away the differences in how you do things like auto-scaling or applying cloud policies. IBM Cloud provides ready-made toolchains for doing so. As part of continuous deployment, your cloud platform must support test-driven development of functional units, environment validation testing, and automated functional and performance testing of the overall application.
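The test-driven development of functional units mentioned above can be sketched at its smallest scale: each unit of a microservice ships with tests that the pipeline runs on every build. The price-calculation function and its rules here are hypothetical examples, not part of any IBM Cloud toolchain.

```python
# Sketch of a unit under test-driven development. The order_total
# function and its rules are hypothetical illustration only.
import unittest

def order_total(unit_price: float, quantity: int) -> float:
    """Total price for a line item; rejects non-positive quantities."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return round(unit_price * quantity, 2)

class OrderTotalTest(unittest.TestCase):
    def test_total(self):
        self.assertEqual(order_total(2.50, 4), 10.0)

    def test_rejects_zero_quantity(self):
        with self.assertRaises(ValueError):
            order_total(2.50, 0)
```

In a continuous deployment pipeline, a stage runs these tests (for example via `python -m unittest`) before any image is built and promoted.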
Compute options for microservices
Depending on what best fits your application and workload, you can choose different compute options on a platform as a service. IBM Cloud offers this range of options for running microservices:
Docker containers provide the most portability across cloud and on-premises environments, and Kubernetes offers an extraordinary range of controls for managing the complexity of microservices applications running at scale.
Cloud Foundry is an open source platform that abstracts the microservices software from all platform operations below the runtime level. This provides a powerful advantage for teams with different levels of polyglot coding skills that don’t want to manage any runtime operations.
Serverless, based on Apache OpenWhisk, is an open source, event-based programming platform as a service that both abstracts operations and supports sequencing of small independent functions that run based on defined events, rules, and triggers. Developers deploy simple event handlers to respond to events that occur in the cloud platform. All virtualization is abstracted.
Virtual machines (VMs) are another option for creating and running microservices-based applications, though they require much more manual provisioning and DevOps effort to succeed. VMs and bare metal servers offer the most flexibility.
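Of the options above, the serverless model is the simplest to illustrate in code. An Apache OpenWhisk Python action is a single function that receives the event parameters as a dictionary and returns a dictionary result; the platform handles everything else. The greeting logic below is a hypothetical example.

```python
# Sketch of an OpenWhisk-style Python action: one function, invoked by
# the platform when a defined event, rule, or trigger fires.
# The greeting logic is a hypothetical example.
def main(params):
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}
```

The developer deploys only this handler; scaling, routing, and the runtime environment are fully abstracted by the platform.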
To build microservices on a secure platform, read this guide for selecting a cloud provider.
Continue to part 3 for a method for implementing your own microservices project.
For developing a cloud native (‘greenfield’) application, view an interactive architecture diagram of a microservices online store application and access a tutorial on deploying the same app in a Kubernetes cluster.
Kyle Brown (IBM Distinguished Engineer/CTO) and Rick Osowski (IBM Distinguished Engineer/CTO) collaborated with Roland on this post.