
Let’s talk building artifacts when enabling DevOps in a microservices architecture


Introduction

In our previous articles, Building a DevOps pipeline for an API Connect and Microservices architecture, Enabling DevOps in a microservices architecture, and Let’s talk deployment in enabling DevOps in a microservices architecture, we discussed our project and its challenges. We also outlined the structure of the DevOps pipeline we built to help us, and how our UrbanCode Deploy (UCD) deployment is structured to deploy and configure our microservices.

In this article, we’ll discuss our build pipeline in more detail. We’ll talk about how we get from microservice source artifacts (including code) to deployable binaries, and scripts/metadata for configuring and deploying them.

What does the build need to do?

Our UCD deployment pipeline needs a number of artifacts:

  1. A WAR file, containing the deployable microservice code.
  2. A manifest.yml, configuring Bluemix memory allocation, buildpack configuration, etc.
  3. An application configuration script, setting up environment-specific configuration for the application.
  4. APIC artifacts, including API and Product YAMLs, as well as a product deployment script that uses the API Connect Command Line Interface (CLI) to stage and publish APIs.

At the end of the build run, these all need to be uploaded to UrbanCode Deploy.

The manifest.yml and configuration script can be extracted and uploaded directly from the source repository – they don’t require any build process. However, our WAR file (the deployable microservice binary) and our APIC artifacts must be built from that source, so they do need a build process.
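For reference, a minimal manifest.yml of the kind we extract might look something like the sketch below – the application name, memory figure, and environment entries are illustrative rather than taken from our project:

    # Minimal Cloud Foundry manifest for a Liberty microservice on Bluemix
    # (values are illustrative).
    applications:
    - name: example-microservice
      memory: 512M
      instances: 2
      buildpack: liberty-for-java
      path: target/example-microservice.war
      env:
        LOG_LEVEL: INFO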

Creating deployable artifacts

Now that we know what UCD needs from our builds, we can see that our build process must perform the following actions:

  1. Download the source code.
  2. Compile and test the code, and package the deployable artifacts.
  3. Create a new component version in UCD with the correct artifacts.
  4. (Optionally) Instruct UCD to start deployment of the new component version to our “Development” environment.

Step (2) is handled by a complex Maven build script. Steps (1), (3) and (4) are managed for us by Jenkins, although many products could serve this purpose, including IBM Bluemix Continuous Delivery.

The Maven build script

Standardizing the build and source structure across all our microservices brought numerous benefits, not least the ability for developers to move between codebases with minimal education.

The microservice’s Maven project definition (or POM) defines the following:

  1. Target runtime information, and the implied dependencies provided by the platform. This information is provided by IBM – you can see the documentation here, and you can find out which version of Liberty your Bluemix Liberty buildpack uses by reading the buildpack update notes.
  2. Compilation processes, building Java files as well as API Connect metadata.
  3. Standard JUnit testing instructions.
  4. Packaging instructions, creating a deployable WAR file.
  5. Environment variable references, passed to the FVT process. These can be specified in a file (for local development) or as standard environment variables (for continuous integration) – these operation modes are selected by Maven “profiles”.
  6. FVT testing information. This starts a temporary Liberty server, installs the WAR file, and runs a number of REST invocations against the server. In some of our tests, more advanced behaviour is present, such as starting a temporary Apache Derby server to provide a backing database for the microservice.

We’ll discuss the in-depth structure of this Maven build script in a further article.

In our project, we were able to factor common POM specifications into a single “parent” POM that each microservice’s POM inherited. We’d highly recommend this approach, particularly combined with Maven “profiles”, which can conditionally enable/disable elements of the process.
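To make that concrete, here is a much-simplified sketch of what a microservice POM inheriting such a parent might look like. The group/artifact IDs, property names and profile contents are illustrative, not a description of our actual build:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>

      <!-- Compile, test, packaging and FVT plumbing live in the shared parent POM. -->
      <parent>
        <groupId>com.example.microservices</groupId>
        <artifactId>microservice-parent</artifactId>
        <version>1.0.0</version>
      </parent>

      <artifactId>example-microservice</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <packaging>war</packaging>

      <profiles>
        <!-- "dev": FVT settings are read from a local override file. -->
        <profile>
          <id>dev</id>
          <activation>
            <activeByDefault>true</activeByDefault>
          </activation>
          <properties>
            <fvt.config.source>file</fvt.config.source>
          </properties>
        </profile>
        <!-- "build": FVT settings come from environment variables injected by CI. -->
        <profile>
          <id>build</id>
          <properties>
            <fvt.config.source>env</fvt.config.source>
          </properties>
        </profile>
      </profiles>
    </project>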

Building APIC artifacts

Our Maven build script naturally lends itself to building and testing Java artifacts, but we were also able to integrate our APIC artifact build process.

We used a template-driven approach to generate API Connect .yaml files based on the swagger representation of the microservices.

We standardized on a set of policies that every API used, including:

  1. An activity log policy
  2. A set-variable policy to inject custom headers required by the target microservice
  3. An invoke policy to call the target microservice

The .yaml files that represent APIs contain both the standard swagger format for the interface definition and a custom extension describing the policies that the API Connect Gateway executes when an application calls an endpoint. The format for these policies is documented in the Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSFS6T/com.ibm.apic.toolkit.doc/rapim_ref_ootb_policies.html
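As a rough illustration, the extension section of one of these API yaml files looks something like the sketch below. The policy property names should be checked against the Knowledge Center topic linked above; the titles, header name and target URL are purely illustrative:

    x-ibm-configuration:
      assembly:
        execute:
          # Log each invocation of the API.
          - activity-log:
              title: activity-log
              content: activity
          # Inject the custom headers the target microservice expects.
          - set-variable:
              title: add-custom-headers
              actions:
                - set: message.headers.X-Example-Header
                  value: example-value
          # Call the backing microservice.
          - invoke:
              title: invoke-microservice
              target-url: 'https://example-microservice.mybluemix.net/resource'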

Using a Python script, we merged the swagger interface from the microservice with the IBM extensions defining the runtime behaviour into a single API Connect yaml file. The script took the following inputs:

  1. A config file with any settings specific to the API. For example, the host for the target microservice was captured in this file.
  2. A template for the API yaml that the script generates.
  3. The swagger for the microservice.

The Jenkins build job will upload this yaml file along with other build artifacts.

A Maven repository

To enable consistent use by developers and build systems, we maintained a private Maven repository holding standard, versioned artifacts for:

  1. The parent POM.
  2. Common libraries, reused across multiple microservices.
  3. Liberty server definitions, used in FVT testing for microservices.
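On the consuming side, each microservice’s POM simply points at that repository and declares the shared artifacts as ordinary dependencies. A sketch, with the repository URL and artifact coordinates invented for illustration:

    <repositories>
      <repository>
        <id>internal-releases</id>
        <url>https://maven.example.com/releases</url>
      </repository>
    </repositories>

    <dependencies>
      <dependency>
        <groupId>com.example.microservices</groupId>
        <artifactId>common-utils</artifactId>
        <version>1.2.0</version>
      </dependency>
    </dependencies>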

Using a Jenkins job to manage the build

We used a separate Jenkins job to manage the builds for the major version of each microservice.

We fetch our source from the per-microservice Git repository in our private GitHub instance. Using the Git parameters plugin for Jenkins, we can parameterise this, targeting a major release branch (e.g. “v1”) by default, but allowing us to manually build against a specific commit or tag if required. Builds against this default branch are kicked off automatically by a commit hook – this is injected into the GitHub project by a further Jenkins plugin.

Per-job environment variables are defined, allowing us to inject information required by the build and its FVT phase, such as the location of third-party services to test against.

At this point, the Maven build script discussed earlier is executed, compiling, testing and packaging the artifacts:
[Screenshot: the Jenkins “Invoke top-level Maven targets” build step]

NOTE: The “dev” profile is disabled and the “build” profile is enabled, ensuring that FVT environment variables are provided from Jenkins, rather than read from an override file. If any tests fail, the build will terminate.
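In shell terms, that Maven step is roughly equivalent to the following command; the goal list is illustrative, and our real job configures it through the Jenkins step rather than a shell script:

    # Activate the "build" profile, deactivate "dev", and run the full
    # compile/test/package/FVT lifecycle; any test failure fails the build.
    mvn clean verify -P 'build,!dev'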

If all this succeeds, we’re ready to upload the resulting artifacts to UrbanCode Deploy.

Uploading to UrbanCode Deploy

To upload the artifacts to UCD, we use the UrbanCode Deploy plugin for Jenkins. This provides us with a simple post-build step that we can configure:

[Screenshot: the “Publish artifacts to IBM UrbanCode Deploy” post-build step]

NOTE: Several default-content fields have been removed from this screenshot for simplicity’s sake!

We can optionally enable or disable the Deploy step, and specify which Application, Environment and Application Process to run in UCD.

This step will always attempt to create a new version in UCD, so it’s critical we get the version string correct.

Versioning components on UrbanCode Deploy

We’ve previously discussed the conventions we use on our project – these include semantic versioning and standard microservice naming practices.

When interacting with UrbanCode Deploy, it’s worth noting that every deployable version of a component (microservice) must have a unique version string within that component, so specifying the correct “Version” when uploading artifacts to UCD is critical: every single build must generate a unique version string before uploading its artifacts. It’s also important that those version strings are meaningful and human-readable.

Fortunately, our use of Git and Git “tags” makes this simpler for us. When we cut a new version according to semantic versioning rules, we tag the commit with the appropriate tag, e.g. v1.20.3. We can then use git describe --tags to generate a human-readable description of the commit.

To allow us to manually rebuild and redeploy any version of the code, we append an additional build identifier from the build system. In our system, we use the internal Jenkins build number, but a timestamp would also be effective. Without this additional build identifier, rebuilds of the same code version would have the same version identifier, and would violate UCD’s unique version requirements.

This means our versions all look like this:

${GIT_DESCRIBE}__${BUILD_IDENTIFIER}

Concrete examples look like this:

v1.20.3__63 :: commit precisely aligned with tag v1.20.3, Jenkins build number 63.

v1.20.3-2-gc061361__64 :: commit c061361 (two commits after tag v1.20.3), Jenkins build number 64.

NOTE: UrbanCode Deploy can be configured with “Environment Gates”. We would suggest creating one for the environments beyond “Development” so that untagged versions (e.g. v1.20.3-2-gc061361) cannot be deployed to “QA” and above – perhaps via a “component version property”. This enforces good practice when assigning semantic versions to “production” versions of microservices.

While BUILD_IDENTIFIER is provided by Jenkins, the GIT_DESCRIBE environment variable is not – we need to generate this manually by injecting an extra build step:

[Screenshot: the Jenkins “Execute shell” build step]
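A minimal version of that execute-shell step is sketched below. How the value reaches later build steps (here, a properties file that an environment-injection step could read) is an assumption rather than a description of our exact Jenkins configuration:

    # Generate a human-readable description of the current commit from its tags,
    # e.g. "v1.20.3" or "v1.20.3-2-gc061361".
    GIT_DESCRIBE=$(git describe --tags)

    # Hand the value to later build steps, e.g. via a properties file read by an
    # environment-injection step (this hand-off mechanism is an assumption).
    echo "GIT_DESCRIBE=${GIT_DESCRIBE}" > version.properties

    # The UCD component version then becomes, for example:
    #   ${GIT_DESCRIBE}__${BUILD_NUMBER}   ->   v1.20.3__63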

Structuring multiple Jenkins jobs

Now that we’ve covered how the build process for a specific microservice is broken down, let’s take a look at how we configure Jenkins to build multiple microservices.

As seen with our UCD configuration, our Jenkins instance is configured with a build job per microservice per major version:
[Screenshot: the Jenkins view listing per-microservice build jobs]

You can also see a “TEMPLATE” job. This is disabled and thus can’t be invoked directly, but instead defines common steps used by all the microservices. Microservice projects refer to these steps using the Template Project plugin, and common process steps can be updated in a single place.

Consequently, most microservice build jobs consist of only three things:

  1. Git setup, parameters, etc.
  2. Environment variable specification.
  3. References to standard build steps from the “TEMPLATE” job.

This means that adding a new microservice or major version, or changing the process for all microservices, is a very simple exercise!

Conclusion

In this article we discussed how we build deployable artifacts for each of the microservices that make up our solution. We spoke about the overall structure of our build process, and then explored the individual steps to create and upload these artifacts to UCD.

As explained, a great deal of complexity is buried inside our Maven build script. In our final article of the series, we’ll discuss the detailed processes in this script, how we test the right things at the most appropriate levels, and provide an example skeleton microservice for you to base your projects on.

