I have been selling continuous delivery solutions around Europe for some time now, and in this post I would like to reflect on what I believe is the most important lesson I have learned from the field: successful continuous delivery relies on four pillars, each of which must be present and have clear boundaries of responsibility and scope. Beyond the four pillars, I want to share my thoughts on a recurring pitfall that, in my opinion, has misled many organizations on their way to implementing a successful continuous delivery process.
Since a picture is worth a thousand words, here is an illustration of the four pillars I will talk about:
What are the pillars?
Let’s go through each of these pillars in more detail to understand its role and scope; a minimal end-to-end sketch follows the list.
- Source code management (SCM): This is the repository where all the source code written by developers is stored. Different versions of the code are recorded, and branches are created to support parallel streams of development on a piece of code. This is where the entire continuous delivery process finds its source, where everything starts.
- Build: This process retrieves the sources from the SCM, compiles them, runs tests ranging from unit to integration tests, and packages the compiled files into deployable units. The result of the build process is published in the definitive media library (DML), available for deployments. The best-known tools for this process are Jenkins and Hudson.
- Definitive media library (DML): This repository stores and protects software assets that are ready to be deployed. It is the central point for implementing software asset governance and for linking source code to deployed applications.
- Deploy: This process takes software assets from the DML and deploys them into the appropriate target environments. It also keeps track of the relationships among applications, components, and environment topologies.
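To make the separation concrete, here is a minimal sketch of the four pillars as loosely coupled steps. All names, paths, and commands are hypothetical stand-ins: the `git` and `make` calls stand in for a real SCM and build tool, and plain directories stand in for a real DML and deployment engine.

```python
#!/usr/bin/env python3
"""Sketch of the four pillars as separate, loosely coupled steps.

Hypothetical stand-ins throughout: a git URL for the SCM, `make` for the
build tool, and plain directories for the DML and target environments.
"""
import shutil
import subprocess
from pathlib import Path

SCM_URL = "https://example.com/scm/myapp.git"  # hypothetical repository
DML_DIR = Path("/var/dml/myapp")               # hypothetical artifact store

def build(workspace: Path) -> Path:
    """Build pillar: fetch sources, compile, run tests, package one unit."""
    subprocess.run(["git", "clone", SCM_URL, str(workspace)], check=True)
    subprocess.run(["make", "package"], cwd=workspace, check=True)
    return workspace / "dist" / "myapp-1.0.tar.gz"

def publish(artifact: Path) -> Path:
    """DML pillar: store the versioned, deployable artifact."""
    DML_DIR.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(artifact, DML_DIR))

def deploy(artifact: Path, environment: str) -> None:
    """Deploy pillar: push a known-good DML artifact to an environment."""
    target = Path(f"/srv/{environment}/myapp")  # hypothetical topology
    target.mkdir(parents=True, exist_ok=True)
    shutil.unpack_archive(str(artifact), str(target))

if __name__ == "__main__":
    released = publish(build(Path("/tmp/myapp-build")))
    deploy(released, "staging")  # re-deployable later without rebuilding
```

The point is the shape rather than the code itself: `deploy` takes an artifact that already exists in the DML and never triggers a build.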
One recurring pitfall
If we look at the nature of the four pillars, we see that two are repositories and two are processes. This similarity has motivated many clients I’ve met to use the same tool for the pillars that resemble each other. It is not uncommon to encounter organizations that use Dimensions for storing both source code and executables, and many companies likewise use Jenkins for both the build and deploy processes.
This is a dangerous pitfall in several respects:
- Tools are usually built for a single purpose, not two. The user interface often has to be contorted to support similar (but different) functions, usually with suboptimal results.
- Using the same tool for several functions tends to push people toward consolidating the processes themselves. The best example I can give is an organization I met that had consolidated its build and deploy processes. Since the build process for one of their applications lasted four and a half hours and the deployment took 10 minutes, every time they wanted to deploy they had to wait four hours and 40 minutes (see the sketch after this list for what decoupling buys).
- Each of the four pillars also has roles and responsibilities in other processes across the application lifecycle. Consolidating functions onto the same tool adds complexity and load when those other interactions come into play, and because the functions end up tightly coupled, none of them can be flexibly reused on its own.
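To put numbers on that example, here is a hedged sketch of the decoupled alternative, reusing the hypothetical layout from the earlier snippet: because the artifact already sits in the DML, a redeploy costs only the 10-minute deploy step, never the four-and-a-half-hour build.

```python
import shutil
from pathlib import Path

# Hypothetical paths; the artifact was published by an earlier build run.
dml_artifact = Path("/var/dml/myapp/myapp-1.0.tar.gz")
target = Path("/srv/production/myapp")

target.mkdir(parents=True, exist_ok=True)
# Deploy only: unpack the known-good artifact, no rebuild involved.
shutil.unpack_archive(str(dml_artifact), str(target))
```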
For these reasons, I believe organizations should look at dedicated tools for each of the four pillars and select the most appropriate tool for each job. In my experience, those who have selected the right tool for the right job have greater success: not only do they implement more effective governance, but they also end up with more flexible, adaptive processes.
What tools have you selected for continuous delivery? Are you confident each is the right tool for the right job?
Rational DevOps Europe Lead