With DockerCon 2015 just a few days away, I thought it would be a good time to talk about some of the work we have been doing around continuous delivery of applications and micro-services with Docker and ‘the cloud’. In this post, I’ll lay some groundwork and provide an overview of solutions for continuous delivery and DevOps involving containers.
Just try it…
If you would rather just dive right in, then great.
First off, if you don’t already have one, get an IBM Bluemix account by using your IBM ID. Next make sure you are registered for the Containers beta. You can do this by logging into Bluemix and then trying to create a container in the dashboard.
Puquios (a network of aqueducts, or ‘pipelines’) is a GitHub repository of both Container and Cloud Foundry applications and pipelines. Simply select a project in there and click the ‘Deploy to Bluemix’ button in the readme, or click it here:
The Deploy to Bluemix button forks the project, creates a delivery pipeline, and begins the deployment process for you. Awesome.
DevOps and pipelines
DevOps means different things to different people; that is a blog post, or perhaps a book, unto itself. For the purposes of this post, I’ll just address a part of the automation picture. Automation is a critical practice in continuous delivery and DevOps solutions. The basic idea is to be able to make a code change, then have an automated and reliable process push that change through the delivery process. The delivery process almost always includes automated builds, a test scenario, and a deployment. It often also needs to include other elements such as globalization, security penetration testing, and approvals.
The pipeline is the piece that orchestrates the process through stages and jobs. The purpose of a pipeline is to establish a repeatable process for accepting code changes, testing those changes, and deploying validated changes into production or staging environments.
This process is similar to the idea of “every time I check in a change, I should run unit tests and automatically build the application in my Jenkins jobs”; we just want to take the notion a step further. Almost every software project has some form of a pipeline, and we want to make this simple, powerful, and repeatable.
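The pipeline idea above can be sketched in a few lines: an ordered list of stages, each containing jobs, where a failing job stops the run so later stages never see a bad build. All of the names here are illustrative; this is not the Bluemix pipeline API.

```python
# Minimal sketch of a delivery pipeline: ordered stages, each with jobs.
# A failing job halts the run, so nothing past it is built or deployed.

def run_pipeline(stages):
    """Run stages in order; each stage is (name, [job, ...]).

    A job is a zero-argument callable returning True on success.
    Returns the name of the first failing stage, or None if all pass.
    """
    for stage_name, jobs in stages:
        for job in jobs:
            if not job():
                return stage_name  # stop the run on first failure
    return None

# Example run: build and tests pass, the deploy job fails.
stages = [
    ("Build", [lambda: True]),
    ("Test", [lambda: True, lambda: True]),
    ("Deploy", [lambda: False]),
]
```

Real pipeline services add triggers, logs, and environments on top, but the core contract is exactly this: ordered stages, fail fast.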
The update process varies from application to application. Typically it follows either an upgrade strategy or a rolling-deployment strategy. An upgrade takes an existing application and infrastructure and applies updates to the software running within that same environment. A rolling deployment stands up a new instance of the application, attaches external services and data sources, and then re-routes traffic to the new instance (or not, based upon metrics and goals). Generally speaking, traditional on-premises applications fall into the upgrade scenario, and cloud-centric applications use a rolling deployment. It is important to understand the characteristics of your application and which scenario is best suited to managing it.
‘The Twelve Factor Application’
The Twelve-Factor App does an excellent job of summarizing attributes of a cloud-centric application that is well suited for running in a container, continuous delivery, and the cloud. These attributes are valuable for horizontal scaling, feature flag work, and high availability. They also impact how we manage our application.
A typical process for a rolling deployment might be:
Version n of your application is deployed and has traffic being routed to it.
The application is modified, packaged and tested through a continuous integration process.
A new version of the application is deployed.
Some portion of traffic is routed to the application for Canary Testing, Feature Flags, or red-black deployments.
Based on metrics or testing, traffic to each application version is increased or decreased.
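The last step above can be sketched as a simple control rule: shift more traffic to the new version while its metrics look healthy, and back off when they do not. The threshold and step size here are illustrative assumptions, not values from any IBM service.

```python
# Sketch of metric-driven canary traffic adjustment: the weight is the
# fraction of traffic routed to the new version. Healthy metrics ramp
# it up; a bad error rate rolls traffic back toward version n.

def adjust_canary_weight(weight, error_rate, max_error_rate=0.01, step=0.10):
    """Return the new fraction of traffic for the canary version."""
    if error_rate <= max_error_rate:
        weight = min(1.0, weight + step)   # ramp up toward full rollout
    else:
        weight = max(0.0, weight - step)   # back off toward version n
    return round(weight, 2)
```

Run repeatedly against fresh metrics, this converges on either a full rollout or a full rollback without a human deciding each step.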
Hybrid applications can take many forms. Typically, hybrid applications have a couple of elements:
A legacy application that follows traditional update processes, but has fast-moving new features and functions that are delivered following cloud-centric strategies.
An application with a carefully managed system of record, which must run on premises but is connected to components ‘in the cloud’ that take advantage of cloud services.
Enough already … tell me about Containers and DevOps at DockerCon
IBM Continuous Delivery Pipeline for containers on the cloud
Bluemix DevOps Services provides developer services for IBM Bluemix. By using Bluemix DevOps Services, you can bring your own Git repository or have one hosted for you. Its ‘Tracking and Planning’ feature provides tools for managing work items, sprints, and backlogs, geared toward agile processes.
Bluemix DevOps Services also includes the Delivery Pipeline for the delivery of applications to Bluemix. The pipeline is triggered by changes to the application code, and provides continuous build and delivery stages for the IBM Container Service.
The Delivery Pipeline service is well suited for a small team developing an application or micro-service. It is simple yet powerful and allows a team to quickly automate their delivery process with none of the costs or complexity of traditional solutions.
The Delivery Pipeline can build and deploy containers and container groups, and supports zero-downtime rolling updates with the mapping of URL to your container group.
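The zero-downtime behavior comes from remapping the URL rather than restarting containers in place: the route points at the old container group until the new group is up, then it is repointed in one step. A toy model of that route table, with made-up route and group names:

```python
# Toy model of a zero-downtime rolling update. A route (URL) always maps
# to exactly one container group; the update repoints the route to the
# new group only after it is running, so there is never a gap with no
# healthy backend. Names are made up for illustration.

class Router:
    def __init__(self):
        self.routes = {}   # route URL -> container group name

    def map_route(self, route, group):
        self.routes[route] = group

    def backend(self, route):
        return self.routes.get(route)

router = Router()
router.map_route("myapp.mybluemix.net", "myapp-group-v1")

# Rolling update: stand up the v2 group first, then repoint the route.
router.map_route("myapp.mybluemix.net", "myapp-group-v2")
```

The old group can then be drained and removed once in-flight requests complete.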
Best of all, it is really easy to get started.
To get started, try an example from Puquios: just click the Deploy to Bluemix button here or in the project’s readme.
Powerful DevOps services
The pipeline’s job is to orchestrate a set of jobs and stages and we want that to be no more complicated than necessary. The power of the pipeline however, is in the jobs that it runs. In IBM Bluemix there is a host of powerful services and we are making these available within the Delivery Pipeline service. Here are a few examples:
Build Service. No need to build images locally or trust manual, disconnected processes to push images to your repository. Bluemix has a build service that takes your source, then builds, versions, and pushes a Docker image to your organization’s private image repository on Bluemix. Out of the box, this gives you a repeatable process for building versioned images that map to the versions of your application.
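One way a build service can make images traceable back to application versions is to bake the version and commit into the image tag. The registry host, namespace, and tag scheme below are placeholder assumptions, not guaranteed Bluemix conventions.

```python
# Sketch of deriving a fully qualified, versioned image reference so
# every built image maps back to one application version and commit.
# Registry host and namespace here are placeholders.

def image_name(registry, namespace, app, version, commit):
    """Build an image reference like registry/namespace/app:version-sha."""
    return f"{registry}/{namespace}/{app}:{version}-{commit[:7]}"

tag = image_name("registry.example.com", "myteam", "myapp",
                 "1.4.0", "9fceb02d0ae598e95dc970b74767f19372d61af8")
```

With a scheme like this, the image that failed in staging and the commit that produced it are never in doubt.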
Code scans. By using the IBM Static Security Analyzer, we check for vulnerabilities in your code. So simple to use that security scans may even become something developers enjoy doing.
Globalization Pipeline. This is awesome. The Pipeline can extend the experimental Globalization service in Bluemix, which uses Watson Machine Translation to translate your application. This means you no longer have to wait for manual human processes to deliver changes in multiple languages. Best of all, the Globalization service allows human translators to review and edit the machine translation; if they make a correction, it will not be overridden by the next machine translation. You get the best of both worlds, with a choice of static or dynamic binding for translated strings. You can finally fit translation into your continuous delivery process and be confident in the end results.
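The “human edits win” behavior described above boils down to a merge rule: a fresh machine-translation run only fills in strings that no translator has overridden. The data shapes here are assumptions for illustration, not the service’s actual format.

```python
# Sketch of merging machine translations with human corrections. Keys
# present in human_overrides always win, so a reviewer's fix is never
# clobbered by the next machine-translation run.

def merge_translations(machine, human_overrides):
    """Combine machine output with human corrections; human edits win."""
    merged = dict(machine)
    merged.update(human_overrides)
    return merged

machine = {"greeting": "Bonjour le monde", "farewell": "Au revoir"}
human = {"greeting": "Bonjour tout le monde"}
result = merge_translations(machine, human)
```

Strings the reviewer never touched keep tracking the machine output, so new and updated source strings still flow through automatically.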
I expect the set of services tightly integrated into the Delivery Pipeline to expand steadily. The combination of the Globalization service and other Bluemix services makes building and sharing a powerful pipeline simple and cost effective. For more information, check out this video.
DevOps for hybrid applications and containers
IBM UrbanCode Deploy (UCD), combined with IBM Bluemix, provides the capabilities for continuous delivery and DevOps practices for hybrid applications. We have updated UCD with plugins to support local Docker Swarm, Docker containers, Bluemix containers, and Docker registries, including Docker Hub Enterprise.
You can orchestrate the deployment of applications across environments, and those applications can combine traditional components and containers.
Sanjay Nayak has a good blog post and video on using UCD with the Secure Gateway and Bluemix.
Thanks and feel free to send me any feedback at email@example.com.