The effects of cloud-native architecture on continuous delivery


Modern, “cloud-native” architectures are making a big impact on continuous delivery.

Continuous delivery is a set of practices and tools that enables rapid testing and release of changes. Cloud-native architectures typically have many moving pieces, each intended to stand on its own. Generally, the technologies of choice for these applications are container-based, such as Kubernetes or Cloud Foundry, or functions-as-a-service.

As these architectures take hold, two concerns keep cropping up for our customers and prospects. The first is the belief that the continuous delivery tools currently used in their organizations are too burdensome for cloud-native application teams. The second is a burgeoning need for help with release management.

These concerns seem to be in conflict, but have roots in the same underlying dynamics.

Continuous delivery for existing apps

Most applications in the enterprise are three-tier web applications written predominantly in Java or .Net. Sprinkled in are message queues, service buses and other assorted middleware. Most of this runs in virtual machines. Each element of an application can be built independently, and the practice of continuous integration is mainstream enough that generally every code commit produces a new build.

Unfortunately, it is also pretty common for a change in one component to break something else in the runtime test environment, which is why application release automation tools, such as IBM UrbanCode Deploy, have been thriving for the past 10 years or so.

UrbanCode Deploy picks up builds from continuous integration tools and deploys the larger application to test and production environments. It also tracks the collection of web services, front ends, message queue settings and database schema updates to ensure they fit together properly, and even ensures they are deployed in the right order.

Put simply, a continuous integration (CI) tool builds your stuff and an application release automation tool orchestrates the delivery of lots of related stuff.
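The ordering problem that release automation solves can be sketched as a topological sort over a component dependency map. The components and dependencies below are hypothetical, purely for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map for a three-tier application:
# each key must be deployed after the components it depends on.
dependencies = {
    "web-frontend": {"order-service", "user-service"},
    "order-service": {"db-schema", "message-queue"},
    "user-service": {"db-schema"},
    "db-schema": set(),
    "message-queue": set(),
}

# A release automation tool resolves this into a safe deploy order,
# so that every component lands after the things it depends on.
deploy_order = list(TopologicalSorter(dependencies).static_order())
print(deploy_order)  # dependencies always precede their dependents
```

Real tools layer approvals, environment inventory and rollback on top, but the core promise is this ordering guarantee.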

How continuous delivery is different for cloud-native apps

Cloud-native applications are presumed to be loosely coupled. Recognizing that orchestrating changes and testing large, integrated systems is expensive and slow, architects are pushing for each service to have well-defined APIs and responsibility. The ideal result is that a change to a service can be quickly tested with minimal expected impact on other services. In theory, a typical production deployment should impact only a single service and the deployment itself should be quite easy, often just one or two command-line calls.

These trends appear to cut at some of the core value of application release automation: Keeping track of many interrelated services and orchestrating complex deployments. It’s easy to see why we get questions about release automation tools being overkill for some cloud-native applications. It appears that simpler build pipeline tools such as Jenkins might be a better fit.

The rise in concern about release management is more curious. If services are flying to production independently, shouldn’t release management be getting easier? When we dig into this, we find two practical matters are in the way.

The first is that cloud-native microservices are not as decoupled as when drawn up on the whiteboard. While deployments are still pretty easy for many applications, keeping track of what is in a test lab, and understanding how that lab is different from production, gets harder as the number of services grows.
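That bookkeeping problem grows with the service count even when each individual deployment stays easy. A minimal sketch of the kind of drift report teams end up wanting, using made-up service names and versions:

```python
# Hypothetical inventories mapping service name -> deployed version.
test_lab = {"auth": "2.3.1", "catalog": "1.8.0", "checkout": "4.0.2", "search": "0.9.5"}
production = {"auth": "2.3.1", "catalog": "1.7.4", "checkout": "4.0.2"}

def environment_drift(a, b):
    """Report services whose versions differ or that exist in only one environment."""
    services = sorted(set(a) | set(b))
    return {s: (a.get(s), b.get(s)) for s in services if a.get(s) != b.get(s)}

drift = environment_drift(test_lab, production)
print(drift)  # {'catalog': ('1.8.0', '1.7.4'), 'search': ('0.9.5', None)}
```

With a handful of services this is a spreadsheet; with dozens, it is exactly the inventory tracking that surfaces as a “release management” request.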

The second matter is the temptation to use simpler tools, better suited to cloud-native development, for more traditionally architected applications that need deeper coordination and orchestration. Perhaps a cloud-native team was the DevOps pioneer and their toolset was then brought to the wider organization.

In either case, the need for more coordination than is present in build pipeline tools tends to surface as a demand for “release management” help.

The path forward

Ideally, organizations should examine their applications to determine how loosely coupled their architectures are and then choose pipeline solutions to match the coupling and deployment difficulty. Note that, while an application using newer platforms such as Kubernetes is more likely to be loosely coupled, there are no guarantees.

As observed in Accelerate: The Science of Lean Software and DevOps, “It’s possible to achieve these [architectural] characteristics even with packaged software and ‘legacy’ mainframe systems — and conversely, employing the latest whizzy microservices architecture deployed on containers is no guarantee of higher performance if you ignore these characteristics.”

For significant coupling and complex deployments, use a release automation tool. For users who have tried to decouple, implemented a series of Jenkins jobs, and now need to bring a bit of order and coordination, put lightweight release orchestration such as IBM UrbanCode Velocity over the top. Those who have fully decoupled can use the simplest pipelines. You’ve earned it.

For a deeper look at applying each of these patterns, please check out the webinar “The Future of Continuous Delivery.”
