Let’s talk deployment in enabling DevOps in a microservices architecture

In our previous articles, Building a DevOps pipeline for an API Connect and Microservices architecture and Enabling DevOps in a microservices architecture, we discussed our project and its challenges. We also outlined the structure of the DevOps pipeline we built to help us.

In this article, we’ll further discuss how we manage deployment and configuration of our microservices on the runtime environments that make up our solution.

We won’t discuss:

  1. Provisioning the environments. Our environments are relatively simple, as Bluemix manages runtimes for us. We chose to sidestep this issue by provisioning each environment and its constituent databases manually. Solutions like IBM Cloud Orchestrator could help if the solution grows more complex.
  2. Building the deployment artifacts prior to deployment. For the purposes of this article, we’ll assume that the artifacts have been built and tested as far as possible. We’ll discuss the build process in a later article.

Understanding the deployment pipeline

In a complex solution like ours, deploying an application consists of updating many interrelated systems correctly—we’ve moved beyond simply updating code on the server. We need to ensure clients don’t suffer downtime and that we update everything in the correct order.

Accordingly, our deployment pipeline consists of a number of high-level steps:

  1. Deployment and configuration of a new microservice instance, using blue-green deployment methodologies for zero downtime
  2. Removal of the old microservice instance (only for minor/patch level upgrades)
  3. Updating API Connect’s (APIC) interface definitions so that any underlying function is correctly exposed
  4. Running automated tests to verify the setup is working as expected

Our entire deployment pipeline is coordinated by IBM UrbanCode Deploy (UCD).
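Conceptually, the ordering those steps impose can be sketched in shell. The function names below are ours, standing in for the actual UCD process steps:

```shell
#!/bin/sh
# Illustrative stand-ins for the real UCD process steps.
deploy_new_instance()    { echo "blue-green deploy of new microservice instance"; }
remove_old_instance()    { echo "remove old instance (minor/patch upgrades only)"; }
update_apic_definitions() { echo "stage and publish updated APIC definitions"; }
run_automated_tests()    { echo "run automated verification tests"; }

# Each step runs only if the previous one succeeded; a failure
# anywhere aborts the pipeline (UCD would then trigger rollback).
deploy_new_instance &&
  remove_old_instance &&
  update_apic_definitions &&
  run_automated_tests
```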

Configuring UrbanCode Deploy

Representing microservices as Components

Every major version of every microservice is represented as a Component that:

  1. Uses the standard project-wide naming scheme “msvc-test-vX”
  2. Implements a single, common Component Template named “Microservice”
  3. Is part of a single project-wide Application object
  4. Is tagged with the special Component Tag “microservice”
  5. Has individual configuration variables defined, used to configure the microservice deployment

As you can infer from the naming schema, we have a Component for each major version of a microservice. Consequently, we can deploy multiple major versions of the microservice to a single Environment.

Within a major version, the various minor and patch versions are represented, along with their associated artifacts (WARs, etc.), as Component Versions within the Component. As only a single Component Version can be deployed within an Environment for each Component, each major version can only be deployed once on each Environment.

Each component version needs to have the following artifacts:

  1. WAR file, containing the code
  2. manifest.yml, configuring Bluemix memory allocation, buildpack configuration, etc.
  3. An application configuration script, setting up environment-specific configuration for the application
  4. APIC artifacts, including API and Product YAMLs, as well as a product deployment script that uses the API Connect Command Line Interface (CLI) to stage and publish APIs.
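As an illustration, a minimal manifest.yml for one of these components might look like the following (the name, memory size, and buildpack are example values, not our actual configuration):

```yaml
# Example manifest.yml -- values are illustrative
applications:
- name: msvc-test-v1
  memory: 512M
  instances: 2
  buildpack: liberty-for-java
  path: target/msvc-test-v1.war
```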

We’ll discuss the “Microservice” Component Template in more detail in the next section.

Encapsulating the solution in the Application

The Application ties our microservices and deployment environments together. It has an Environment specified for each of our environments that can be targeted for deployment. In our case, that’s Development, QA, Preproduction, and Production:

[Image: development and production environments]

At the root of our deployment process is the Application’s “Install” Application Process. This is used for deploying new versions of microservices to a target environment, and for rolling back the deployment if any part of it fails:

[Image: the “Install” Application Process]

NOTE: Both the “Install Multiple Components” and “Rollback Multiple Components” steps call the same underlying Component Process, “Deploy”, defined on all our microservices. When rolling back, the previous version of the microservice is deployed.

Even though we’re deploying to Bluemix, we still need a worker, or Agent, to run the process steps. This system runs tasks like “cf push” and automated tests, and is shared among all the environments. Nonetheless, each of the UCD Environments needs an associated Resource, and within it an Agent configured as the target for all its microservice Components via their common “microservice” tag:


Configuring the microservice Components

As part of the deployment process, we configure the microservice with parameters it needs. Examples of these may be the URLs of other microservices, authentication credentials for third-party services, etc. To do this, we use a custom configuration script that is unique to the component. During deployment, placeholders in the configuration script are substituted with values from UCD’s configuration set, and the script is executed.

Within UCD’s configuration set, each microservice Component has a set of configuration variables defined. UCD makes it possible to rapidly set their values, varying them by environment. We can access a top-down configuration screen through the Environment properties:


In this example, we see a common “external_endpoint_url” configuration variable on all Components, and a “test_service_url” configuration variable defined only on Component “msvc-test2-v2.”

Simplification through commonality: the Component Template

Most of our pipeline’s complexity is hidden away in our “Microservice” Component Template. This defines a number of Component Processes, one of which is the main orchestrating process—appropriately named “Deploy.”

Conducting: the “Deploy” process

This is responsible for executing child processes, emailing progress, and correctly aborting deployment in the case of any errors. It orchestrates the following high-level steps:

  1. Deploying and configuring the application on Bluemix
  2. Updating the APIC deployment to reflect the new application interface
  3. Executing automated tests to check the interface behaves as expected

[Image: the “Deploy” process]

NOTE: In the event of an error, the overall Application Process may choose to roll back the changes, but the “Deploy” process does not directly manage this.

Deploying and configuring the Bluemix application

The “BluemixDeploymentProcess” process handles blue-green deployment and configuration of the microservice’s Bluemix application. Broadly, it performs the following steps:

  1. Substitutes placeholders in the microservice’s configuration script with real values from UCD’s configuration set, and ensures there are none missing
  2. Deploys a completely new Bluemix application with a temporary name
  3. Configures the new Bluemix application by executing the substituted configuration script
  4. Checks the new application starts successfully
  5. Moves the active Bluemix routes/URLs from the “old” application onto the “new” application. During this process, there is a short period where both the “old” and “new” application are serving requests
  6. Stops and deletes the “old” application
  7. Renames the “new” application to the standard application naming scheme

We end up with a linear process that hides further execution details, error checking, etc. inside multi-stage scripts:


Once successfully deployed, we need to update the APIC deployment to accurately reflect the newly deployed function.

Updating the external interface

The “ApicDeploymentProcess” process handles updating the APIC deployment to accurately reflect the microservice interface. This process depends on the existence of a custom deployment script, packaged as part of the component version’s artifacts.

In our APIC deployment, we maintain an APIC product per microservice and major version. Because we have adopted semantic versioning, customers can subscribe to a microservice major version knowing that upgrades will be non-breaking. As part of the deployment, we want to upgrade customers’ subscriptions to the product so they use the new version.

This custom deployment script performs the following steps:

  1. Using the API Connect CLI, calls “apic products” to get the full list of products in the specified environment (i.e., the API Connect catalog).
  2. Iterates over the results from step 1 to determine the current version number of the published product that is being updated. For each major version, a product should only be published once. (awk and sed are used to process the CLI output.)
  3. Once the currently published version is determined, if its major and minor versions are the same as those of the microservice, the process terminates. The API product only needs to be updated if the interface has changed, and since we are using semantic versioning, we know that if the major and minor versions are unchanged, there is no interface change.
  4. Stages the new product version, checking success.
  5. Publishes the new Product, replacing the currently published version and auto-migrating existing subscriptions.
  6. Checks the publish worked as expected.

The script is executed as the “StagePublishProduct” step in the “ApicDeploymentProcess” process:


NOTE: In some cases, where the newly deployed version is the same as the current version, no action is required. This case is covered by a return code of “2” from the script.
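The calling process can then branch on the script’s exit status. The mapping below is a sketch; only the meaning of code 2 comes from our script, the rest is illustrative:

```shell
#!/bin/sh
# Interpret the deployment script's exit status:
#   0 -> product staged and published
#   2 -> deployed version already current; nothing to do (not an error)
#   * -> genuine failure: abort so UCD can roll back
handle_publish_result() {
    case "$1" in
        0) echo "product published" ;;
        2) echo "no interface change, publish skipped" ;;
        *) echo "publish failed (code $1)" >&2; return 1 ;;
    esac
}

handle_publish_result 0
handle_publish_result 2
```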

Once the interface has been updated, we can verify the actual behaviour of the runtime, as exposed by the interface, is working as expected.

Verifying the application works as expected

We have a reasonable level of assurance from previous steps that the application has been successfully started and configured, and its interface deployed. We also know that the build system ran unit and FVT tests on the application’s artifacts before they were pushed to UCD. We’ll talk about these build and test steps in a later article.

However, we need to be comfortable that the application is behaving as expected in the actual deployment environment – where real databases, APIC proxies, etc. are deployed. Consequently, after deployment, we run additional tests on the interfaces (the APIC facade) that our microservice’s clients are invoking. Many HTTP test frameworks are available for this. We call these our integration tests.

[Image: integration testing process]

NOTE: The “missing” branch allows for the situation where no tests exist for the microservice at all. Though in an ideal situation integration tests are written as user stories are implemented, in our case these tests were introduced into an existing code base. This allows our process to succeed even when integration tests don’t yet exist for the microservice.


In this article we discussed how we manage deployment and configuration of the microservices that make up our solution. We spoke about the overall structure of our UCD configuration and its responsibilities, and then explored the individual processes in detail, showing how they execute the pipeline’s steps. As explained, a number of artifacts for each microservice’s version are needed to make this pipeline work. In the next article, we’ll talk about how we create these deployable artifacts.

IBM MobileFirst and Bluemix Consultant
