November 28, 2016 | Written by: Ilene Seelemann and Nick Maynard
Categorized: DevOps | How-tos
In our first article, “Building a DevOps pipeline for API Connect and Microservices Architecture,” we described a scenario with multiple microservices with access governed and managed by API Connect. The deployment of the microservices in Bluemix was tightly interlocked with the availability of APIs published with API Connect. As such, a documented and consistent approach to versioning APIs and microservices was critical. The approach we used involved:
- Open API (aka Swagger) for describing both microservices and APIs (Note, we will use Swagger and Open API interchangeably in this article.)
- Semantic versioning
- Naming conventions for microservices and APIs
In this article, we will describe how we used these techniques to keep APIs and microservices interlocked and stable through our iterative development cycle, while giving API consumers timely access to the latest updates.
Aligning API and microservices
Before we go further, let’s take a closer look at how microservices and APIs interlocked in this project.
Application developers discover and subscribe to APIs through portals provided by API Connect. Each portal contains APIs for a specific catalog. In our project, we used a different catalog for each environment: QA, pre-production, and production. As a microservice was promoted to the next environment in its lifecycle—by being deployed to the corresponding Bluemix space—the API that provided access to that microservice was also updated and promoted to the next catalog if there were any interface changes. In this article, we will describe in detail how this process worked.
In the figure below, you can see how the lifecycle of the microservices interlocked with the lifecycle of the APIs. As microservices are promoted through their lifecycle, the externalized APIs for accessing these microservices are published in lockstep.
The heart of it all: Open API
At the core of our solution is Open API. Open API is a definition language for describing a REST API. An Open API definition is written in either JSON or YAML and includes all aspects of your REST interface from paths, request, and response structure to authentication requirements. It is also extensible so you can add solution or vendor-specific sections to your definition. There is a huge ecosystem of tools for Open API, from front-end editing and viewing UIs to code generation frameworks.
Open API is built into the core of API Connect. Open API definitions can be imported into API Connect. When you pull API definitions from API Connect, these are downloaded in Open API format.
We will show how we used Open API from the microservice development stage through the API publishing stages.
Semantic versioning guides the way
Another piece of the puzzle is versioning. We used versioning to coordinate the release of microservices and APIs. Specifically, we needed to answer the following questions: If we update a microservice, do we need to update the API that provides access to this microservice? How do we minimize impact to the API consumers?
We turned to semantic versioning to guide us to a solution. We used the versioning scheme: MAJOR.MINOR.PATCH
For example, version 1.2.17 represents a MAJOR release level of 1, a MINOR of 2, and a PATCH level of 17. (This is not decimal notation!)
Here’s how the version segments were defined:
- MAJOR: This is a breaking change to the interface. Existing code that uses this interface might break with this update. Examples of breaking changes include changes in request format of an existing endpoint or removal of endpoints.
- MINOR: This is a non-breaking change to the interface. Existing code that uses this interface should not break. A minor update might include completely new endpoints, additional optional parameters to existing endpoints, or simply updates to the interface documentation.
- PATCH: A patch update has absolutely no impact on the interface. This would be used primarily for defects/bugs in the implementation of existing endpoints.
You can think of a MAJOR update as a completely new release stream for your API product. A MAJOR update typically requires replication of the API and microservice stack, including dependencies such as databases. Old and new release streams usually need to co-exist with a deprecation strategy or end-of-life policy for the older stream.
(Learn more about semantic versioning.)
Our adoption of semantic versioning brought a guarantee that minor- and patch-level upgrades to version X of a microservice would be backward-compatible. Consequently, these upgrades (for a new minor or patch version) could be performed in-place with a blue-green deployment—by definition, they would not affect API clients.
If we needed to create a new major version of an already-deployed microservice, we represented it as a brand-new Bluemix application and route. This enabled us to run multiple major versions of a single API within an environment, providing a gentle upgrade path for clients. We could also take advantage of Bluemix’s auto-scaling capabilities to size deployments appropriately according to usage at a fine-grained per-microservice, per-version level.
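The rules above lend themselves to automation. Here is a minimal sketch of how a deployment script might classify a version bump and pick a strategy; the function names and returned strings are our own illustration, not part of any Bluemix or API Connect tooling:

```python
# Hypothetical sketch: classify a semantic-version bump (MAJOR.MINOR.PATCH)
# and choose the deployment strategy described above. All names are ours.

def classify_bump(old: str, new: str) -> str:
    """Return 'MAJOR', 'MINOR', 'PATCH', or 'NONE' for the change old -> new."""
    old_parts = [int(p) for p in old.split(".")]
    new_parts = [int(p) for p in new.split(".")]
    for name, o, n in zip(("MAJOR", "MINOR", "PATCH"), old_parts, new_parts):
        if n != o:
            return name
    return "NONE"

def deployment_strategy(old: str, new: str) -> str:
    """MAJOR bumps get a new app and route; MINOR/PATCH deploy in place."""
    bump = classify_bump(old, new)
    if bump == "MAJOR":
        return "new application and route"
    if bump in ("MINOR", "PATCH"):
        return "in-place blue-green deployment"
    return "no deployment"

print(deployment_strategy("1.2.17", "1.3.0"))  # in-place blue-green deployment
print(deployment_strategy("1.2.17", "2.0.0"))  # new application and route
```

Note that the comparison is segment-by-segment, not decimal: 1.2.17 to 1.3.0 is a MINOR bump even though the PATCH number went down.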
Adopting naming conventions
Another cornerstone to our approach was to adopt a naming convention for Bluemix applications and routes. As we relied heavily on automation as part of our DevOps drive, this was important for reliable provisioning, deployment, and configuration.
This became increasingly critical as the number of components in the solution increased. Our DevOps solution needed to reliably deploy applications, the API Connect layer needed to route traffic to them, our configuration scripts needed to target them, and our monitoring solutions needed to identify them.
We adopted a naming convention that included the following information:
- A shortname for the microservice
- A shortname for the environment
- A version indicator (major version only)
- A namespacing prefix/suffix to avoid route clashes with other Bluemix organizations and projects
For example, in Project X, a v2 of a “Test” microservice in the “prod” environment might have the following application name and route:
Application name: msvc-test-v2-prod-projectx
Route: msvc-test-v2-prod-projectx.mybluemix.net
Alternatively, if using a custom domain, we could use that to simplify the route:
Application name: msvc-test-v2-prod
Note: As the application name need only be unique within the Bluemix space, it is possible to remove the “-prod” section from the name (but not the route). We found that retaining this information improved visual grepping in the Bluemix console, reducing cases of mistaken identity! (To configure a custom domain in Bluemix, see the Bluemix documentation.)
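Because the convention is purely mechanical, our automation could compose names and routes rather than hard-code them. A minimal sketch (our own helper functions; the "msvc" prefix and shared-domain suffix are taken from the examples above):

```python
# Illustrative helpers for the naming convention; function names and the
# default shared domain are assumptions for this sketch.

def app_name(service: str, major_version: int, env: str,
             namespace: str = "", prefix: str = "msvc") -> str:
    """Compose an application name like msvc-test-v2-prod-projectx."""
    parts = [prefix, service, f"v{major_version}", env]
    if namespace:  # can be omitted when a custom domain avoids route clashes
        parts.append(namespace)
    return "-".join(parts)

def route(service: str, major_version: int, env: str,
          namespace: str = "", domain: str = "mybluemix.net") -> str:
    """Derive the route from the application name on the shared domain."""
    return f"{app_name(service, major_version, env, namespace)}.{domain}"

print(app_name("test", 2, "prod", "projectx"))  # msvc-test-v2-prod-projectx
print(route("test", 2, "prod", "projectx"))     # msvc-test-v2-prod-projectx.mybluemix.net
```

Deployment scripts, API Connect configuration, and monitoring can then all derive the same identifiers from the same few inputs.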
The following figure shows three Liberty Cloud Foundry apps running in a QA space in Bluemix. Notice the naming convention in use for the application names and routes:
For the same project, you would find the following apps in the corresponding Bluemix space where you keep your pre-production deployments. In our case, we called this space “preprod” and we used the “preprod” segment in our naming convention.
You would do the same with all environments that your project uses during the development lifecycle: create a Bluemix space for each environment and use a naming convention for all applications (i.e., microservices) deployed in the space.
The microservices are externalized with API Connect. The following figure shows an example of how you can configure a TARGET_HOST property in API Connect for an API that externalizes access to Microservice A. For each catalog in API Connect, we specify the base URL at which the microservice can be reached. When publishing to the QA catalog, the TARGET_HOST property will be set to https://msvc-a-v2-qa-article.mybluemix.net; when publishing to the Preproduction catalog, it will be set to https://msvc-a-v2-preprod-article.mybluemix.net; and so on.
The TARGET_HOST property is used in the configuration of the proxy or Invoke policy for the API. The following figure shows the Assemble tab for an API in the API Manager or API Designer console; note that the Invoke policy is configured to invoke endpoints on the TARGET_HOST.
When the API is published to one of the catalogs, such as the preproduction catalog for consumption by application developers, the TARGET_HOST value for the catalog is substituted during the publish activity. An application developer can discover this API on the preproduction portal for the catalog. When they invoke the preproduction API, it will proxy to the preproduction microservice. In this way, the APIs and microservices are interlocked.
The naming convention allows us to do this mapping reliably and through automation.
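As an illustration of that mapping, the per-catalog TARGET_HOST values can be derived entirely from the convention. This is a sketch with our own helper names; the catalog names and the "article" namespace come from the examples above:

```python
# Illustrative sketch: derive the TARGET_HOST value for each API Connect
# catalog from the naming convention. Helper names are ours.

CATALOGS = ("qa", "preprod", "prod")

def target_host(service: str, major_version: int, catalog: str,
                namespace: str = "article") -> str:
    """Build the base URL of the microservice backing an API in a catalog."""
    return (f"https://msvc-{service}-v{major_version}-"
            f"{catalog}-{namespace}.mybluemix.net")

def catalog_hosts(service: str, major_version: int) -> dict:
    """Map each catalog to the TARGET_HOST substituted at publish time."""
    return {c: target_host(service, major_version, c) for c in CATALOGS}

for catalog, host in catalog_hosts("a", 2).items():
    print(catalog, host)
# qa https://msvc-a-v2-qa-article.mybluemix.net
# preprod https://msvc-a-v2-preprod-article.mybluemix.net
# prod https://msvc-a-v2-prod-article.mybluemix.net
```

A configuration script can feed this map into the catalog-specific properties of the API definition, so publishing to a catalog always targets the matching Bluemix space.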
Pulling it all together
Generating microservice skeletons from Open API specifications
We started with Open API (Swagger) definitions for each of the microservices. Although it’s possible to manually code implementations of Swagger specifications, the process can be time-consuming and unreliable. We opted for a code generation strategy, where a skeleton JAX-RS microservice and implementation classes were generated by a tool.
This gave us the following advantages:
- Guaranteeing implementation of the Swagger interface
- Kickstarting implementation
- Maintainability (important enough to mention twice!)
We used swagger-codegen’s “jaxrs-cxf-cdi” generator to generate a skeleton implementation for our service. This generates Java code: interfaces and models directly derived from the Swagger specification and sample implementations of these interfaces, which are updated by the developer with “real” code. (Explore swagger-codegen’s capabilities.)
The generator creates a Maven WAR project with interfaces and models in src/gen/java, and implementations in src/main/java. This WAR project is directly deployable to the Liberty runtime on Bluemix.
Developers update only the implementations by building them into meaningful services. Subsequent runs of the generator, such as when updating the specification, are automatically limited to update only the interfaces and models. This allows IDEs to assist developers by highlighting areas that require updates in the implementations.
Generating API Configurations from Microservice Open API
The APIs we externalized to application developers were basically proxies to the microservices that backed them. They also had a consistent configuration implementation in API Connect; we applied the same Activity Log policy to all APIs.
If you switch to the “Source View” for an API in API Connect, you will see the Open API (Swagger 2.0) definition for the API. As described in the IBM Knowledge Center, the definition is separated into two parts:
- Standard Open API: Used to describe the API interface
- API Connect extensions: Used primarily to describe the API implementation, including any policies that the API Gateway should apply and what target endpoints it should invoke to orchestrate the response. API Connect extensions are fully documented in the IBM Knowledge Center.
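To make that two-part split concrete, here is a trimmed sketch of what such a definition can look like. The structure follows API Connect v5 conventions, but the title, path, and URL are illustrative, and the exact extension layout should be verified against the IBM Knowledge Center:

```yaml
swagger: '2.0'
info:
  title: Microservice A          # standard Open API: describes the interface
  version: 2.1.0
basePath: /msvc-a
paths:
  /items:
    get:
      responses:
        '200':
          description: List items
x-ibm-configuration:             # API Connect extensions: describe the implementation
  properties:
    TARGET_HOST:                 # substituted with the catalog-specific value at publish time
      value: 'https://msvc-a-v2-qa-article.mybluemix.net'
  assembly:
    execute:
      - activity-log:            # the same Activity Log policy applied to all our APIs
          title: activity-log
          content: activity
      - invoke:                  # proxy the request to the backing microservice
          title: invoke
          target-url: '$(TARGET_HOST)$(request.path)'
```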
The generated API Connect YAML for each API was deployed and published using the API Connect CLI. You can read about the CLI in the IBM Knowledge Center. We will dive into the detailed use in a future article.
End-to-end development flow
It all starts with the creation of a Swagger definition for the microservice. This is part of defining the specification with the customer. It should contain enough documentation for client consumption.
From this, two parallel activities happen:
- The microservice skeleton is generated or updated
- The API .yaml files for API Connect are generated
The microservice code is updated by a developer and the microservice and API are published simultaneously for a MAJOR or MINOR update. Note: For a PATCH level update, a developer updates the microservice implementation. Then the .war is rebuilt and the microservice is refreshed. There is no interface change, so Swagger is not involved and the API is not affected.
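The flow above boils down to a dispatch on the bump type. A hypothetical summary (the step names are our own shorthand, not actual pipeline commands):

```python
# Hypothetical summary of the end-to-end flow: which steps run for each
# kind of semantic-version bump. Step names are illustrative only.

PIPELINE_STEPS = {
    "MAJOR": ["regenerate microservice skeleton", "regenerate API yaml",
              "deploy new app and route", "publish API"],
    "MINOR": ["regenerate microservice skeleton", "regenerate API yaml",
              "blue-green deploy", "publish API"],
    # PATCH: no interface change, so Swagger and the API are untouched
    "PATCH": ["rebuild .war", "blue-green deploy"],
}

def steps_for(bump: str) -> list:
    """Return the pipeline steps for a given bump type."""
    return PIPELINE_STEPS[bump]

print(steps_for("PATCH"))  # ['rebuild .war', 'blue-green deploy']
```

The key property is visible in the table: only MAJOR and MINOR bumps touch the Swagger definition and therefore the API; PATCH releases stop at the microservice.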
Now we’re rolling
The first article outlined the problem space when developing microservices and externalizing access using API Connect. This article has laid out the key areas in establishing conventions on documenting interfaces, versioning, and naming. With these conventions in place, we were already moving much faster.
As we outlined, we had several scripts that developers used locally to generate artifacts (such as Java application templates and API Connect definitions), as well as scripts for deploying these artifacts. These tools reduced our risk and let us move at a faster pace. However, we weren’t fully automated yet.
In the next part of this series, we will describe how we worked with these tools toward continuous integration and deployment (CI/CD).
Read the next part in this series