Enterprise container platforms, Part III: Building applications with microservices in an evolving production environment

Cloud-native applications and continuous deployment can lower your costs and complexity

By | 3 minute read | November 21, 2019

The use of containers as a deployment vehicle for applications is on the rise. In fact, IDC says that by 2022, the acceleration of legacy application modernization and net-new development in Europe will lead to 30 percent of production applications being cloud-native,¹ using microservices, containers and dynamic orchestration.

This means that most containers will still house traditional applications, either packaged directly in a container or refactored into macroservices. Operating, managing and maintaining these applications will not require significant changes to IT operations. Things get more complicated, however, when you run cloud-native applications.

Building cloud-native apps

Cloud-native applications are, by their nature, microservices-based. Each is composed of a set of microservices, which can all be owned by one business line or mixed and merged from different sources. In fact, a cloud-native application can easily consist of 30 or more microservices.

In an enterprise environment, you might have 100+ business applications; rebuilding them as cloud-native applications could result in 3,000 microservices, all deployed on a single container platform at once. Based on my experience with traditional environments, this translates to a replacement rate of somewhere between one and 10 microservices per day, driven by business requirements, bug fixes and other changes.
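The back-of-the-envelope arithmetic above can be sketched as follows (the application and service counts come from the article; treating the daily rate as constant over a year is a simplifying assumption):

```python
# Estimate of microservice churn on a shared container platform.
apps = 100                      # business applications in the enterprise
services_per_app = 30           # microservices per cloud-native application
total_services = apps * services_per_app
print(total_services)           # 3000 microservices on one platform

# With a replacement rate of 1-10 microservices per day, the share of
# the platform that turns over in a year:
for rate in (1, 10):            # replacements per day
    yearly = rate * 365
    print(f"{rate}/day -> {yearly} replacements/year "
          f"({yearly / total_services:.0%} of the estate)")
```

Even at the low end, more than a tenth of the microservice estate is replaced each year; at the high end, the entire estate turns over and then some, which is why the production environment is best treated as perpetually in flux.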

In other words, your production environment will be constantly changing.

As previously mentioned, cloud-native applications on a container platform require a complete redesign of IT operations and management. These changes are driven by:

  • Integrating the continuous deployment of cloud-native applications
  • Tooling for microservices monitoring, management and logging
  • Rebuilding the relationship and interaction model with the business

Dealing with cloud-native apps in production

Integrating the continuous deployment of cloud-native applications requires you to rethink how you handle your test, staging and production environments, as well as the integrated toolchain, monitoring and flow control across all of those stages.

In an environment set up for continuous deployment of cloud-native applications, we need a much closer linkage among the different development stages and a consistent, transparent operational model across them. The development stages must closely resemble the production stage so business lines can test microservices under close-to-production conditions. The same goes for monitoring and logging so business lines can validate development progress and identify possible issues before rolling into production. At the very least, the environment must support rollback if the new microservices cause any problems in the production environment.
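As a minimal illustration of the rollback requirement, here is a sketch of a per-microservice version history that can revert to the last known-good release. All class and method names are hypothetical, not a real platform API:

```python
# Minimal sketch: keep a deployment history per microservice so a
# problematic release can be rolled back in production.
class ServiceDeployment:
    def __init__(self, name, version):
        self.name = name
        self.history = [version]    # newest version is last

    @property
    def live(self):
        """Version currently serving production traffic."""
        return self.history[-1]

    def deploy(self, version):
        self.history.append(version)

    def rollback(self):
        """Revert to the previous version; keep at least one entry."""
        if len(self.history) > 1:
            self.history.pop()
        return self.live

svc = ServiceDeployment("orders", "v1.4.2")
svc.deploy("v1.5.0")       # new release causes problems in production
print(svc.live)            # v1.5.0
print(svc.rollback())      # v1.4.2 -- back to last known-good
```

Real container platforms provide this natively (for example, a Kubernetes Deployment records revisions and can undo a rollout); the point is that the history must exist before the bad release happens.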

Additionally, container platforms have to support CI/CD (continuous integration/continuous delivery) process chains and provide consistent tooling, monitoring and resource access across test, staging and production. The guiding principle for microservices monitoring, management and logging tooling has to be "share everything," reflecting enterprise requirements.
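The "same checks at every stage" idea can be sketched as a simple promotion gate. Stage names, the check function and the build structure are all illustrative assumptions, not a real CI/CD API:

```python
# Sketch of a promotion gate: a build advances through identical stages,
# passing the same shared checks at each one, and stops at the first failure.
STAGES = ["test", "staging", "production"]

def promote(build, checks):
    """Return the list of stages the build successfully reached."""
    reached = []
    for stage in STAGES:
        if not all(check(build, stage) for check in checks):
            break               # failed a gate; do not promote further
        reached.append(stage)
    return reached

# A hypothetical check, applied identically in every stage
# (the "consistent tooling" requirement).
def healthy(build, stage):
    return build["health"][stage]

good = {"health": {"test": True, "staging": True, "production": True}}
print(promote(good, [healthy]))   # ['test', 'staging', 'production']

bad = {"health": {"test": True, "staging": False, "production": True}}
print(promote(bad, [healthy]))    # ['test'] -- never reaches production
```

Because the same `checks` run in every stage, a microservice that misbehaves is caught in test or staging under close-to-production conditions, rather than for the first time in production.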

Syncing with the business

Finally, there’s rebuilding the relationship and interaction model with the business. IT departments are getting much closer to developers and line of business personnel, working together to establish a harmonious microservices architecture and process model across the enterprise. IT takes on more of a consumption orchestrator role, and continuous deployment allows business lines to deploy new microservices and new cloud-native applications without any IT involvement.

Application agility requires us to rethink how we build, operate and manage IT environments. Ultimately, cloud-native applications drive IT disruption because the operational models of the VM-based world don’t fit.

In my first article, we took a look at microservices and containers and how they work together to save time and money in enterprise IT environments. In my second article, we explored how container platforms challenge IT leaders to rethink how they compose IT infrastructure and deliver resources to lines of business. Stay tuned for my next article in the series, where I’ll go over the implementation of container platforms, and feel free to reach out to me if you have any questions.

¹ IDC, IDC FutureScape: Worldwide Developer and DevOps 2019 Predictions – European Implications, EMEA44680919, January 2019