Opportunities to increase the efficiency of containers and further improve resource utilization.
Getting software to run in multiple environments used to be a major hassle. You’d have to write different versions to support each operating system. Even then, you could run into trouble when moving an app if the software environments, network topologies, security policies, or storage weren’t identical.
Containers eliminate these complexities. They pack up an entire runtime environment—the application, plus all the dependencies, libraries, binaries, and configuration files—making your applications portable enough to run anywhere. Because containers are very lightweight, you can run lots of them on a single machine, making them highly scalable. And they’re easy to provision and de-provision, so they’re highly elastic.
When it comes to migrating applications, containers are giving older virtual machines (VMs) a run for their money. Like VMs, containers let you run applications that are separated from the underlying hardware by a layer of abstraction. However, with VMs, that layer of abstraction includes not only the app and its dependencies, but also the entire OS. In contrast, containers running on a server all share the host OS. Because each container doesn’t need its own OS, it can be far smaller and more resource-efficient than a VM.
Yet, despite the many benefits containers provide over earlier technologies, there are still opportunities to make them even more resource-efficient.
How do we increase container efficiency?
Organizations typically use specialized software, such as Kubernetes, to automate the deployment, scaling, and management of containerized applications. Developers often use a template or manifest tool, such as HashiCorp Terraform, to tell the Kubernetes scheduler how many CPU and memory resources a container is expected to need for a particular workload. As part of this process, they specify a resource request (the minimum CPU and memory reserved for the container) and set an upper limit on what it is allowed to consume.
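What this looks like in practice depends on your tooling. As a rough illustration, here is a minimal sketch using the official Kubernetes Python client; the container name, image, and the specific CPU and memory values are placeholders chosen for the example, not recommendations:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig for the cluster you have targeted.
config.load_kube_config()

# The request is the minimum CPU/memory the scheduler reserves for the container;
# the limit is the ceiling it may consume. These values are placeholders.
resources = client.V1ResourceRequirements(
    requests={"cpu": "250m", "memory": "256Mi"},
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(
    name="web",                      # hypothetical container name
    image="us.icr.io/demo/web:1.0",  # hypothetical image reference
    resources=resources,
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Create the Deployment; the scheduler will use the requests above for placement.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```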
The problem is that developers using these tools don’t have visibility into production workloads to determine the actual container utilization for each one. They have to guess at the levels they should enter into the resource manifest.
As a result, it is very common for developers to allocate far more CPU and memory resources to containers than they actually end up using, incurring considerable costs for these unused resources.
Conversely, if they don’t allocate enough to accommodate potential peak usage, they face operational risk, where too few resources are available to address workload demands.
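To make that trade-off concrete, here is a small illustrative sketch with entirely made-up container names and utilization figures, comparing the CPU requested in a manifest against the peak actually observed in production:

```python
# Illustrative only: the containers and utilization numbers below are made up.
# "requested" is the CPU requested in the manifest (millicores);
# "peak" is the highest CPU actually observed in production.
observed = {
    "frontend":  {"requested": 1000, "peak": 180},
    "api":       {"requested": 2000, "peak": 450},
    "batch-job": {"requested": 500,  "peak": 620},  # under-provisioned
}

for name, usage in observed.items():
    slack = usage["requested"] - usage["peak"]
    if slack < 0:
        print(f"{name}: requests {usage['requested']}m but peaks at "
              f"{usage['peak']}m -> operational risk (throttling under load)")
    else:
        print(f"{name}: {slack}m of requested CPU is never used -> wasted spend")
```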
The need for machine intelligence
Densify provides an intelligent tool (delivered as a service) that monitors real-world, granular utilization data and includes a learning engine that continuously analyzes that data to learn actual patterns of activity. The Densify solution, already deployed to over 75 IBM accounts worldwide, then lets you apply sophisticated policies to generate safe sizing recommendations, so that containers are neither too large nor too small for a particular workload.
Of course, a large organization might run thousands of containers. Not only does each container need accurate resource specification (sizing) information, but the right size also has to be applied to every one of them, and that is not achievable at scale with manual effort. Densify addresses this requirement through automation. By replacing hardcoded parameters in the upstream DevOps process with a single line of code that points to the analytics engine, you can continually readjust values and optimize containers on the fly.
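The snippet below is not Densify’s actual API; it is purely a hypothetical sketch of the pattern, fetching recommended sizes from a stand-in recommendations endpoint and patching them into a running Deployment with the Kubernetes Python client instead of hardcoding them:

```python
import requests
from kubernetes import client, config

# Hypothetical endpoint and response shape, used only to illustrate pulling
# recommended sizes from an analytics engine rather than hardcoding them.
RECOMMENDATIONS_URL = "https://analytics.example.com/recommendations/web"

config.load_kube_config()
rec = requests.get(RECOMMENDATIONS_URL, timeout=10).json()
# Assume the response looks like:
# {"cpu_request": "300m", "cpu_limit": "600m",
#  "memory_request": "384Mi", "memory_limit": "768Mi"}

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "web",  # must match the container name in the Deployment
                    "resources": {
                        "requests": {"cpu": rec["cpu_request"], "memory": rec["memory_request"]},
                        "limits":   {"cpu": rec["cpu_limit"],   "memory": rec["memory_limit"]},
                    },
                }]
            }
        }
    }
}

# Apply the recommended sizes to the running Deployment as a merge patch.
client.AppsV1Api().patch_namespaced_deployment(name="web", namespace="default", body=patch)
```

Run on a schedule or triggered from a CI/CD pipeline, a step like this keeps container sizing in step with observed utilization instead of with whatever values were guessed at deployment time.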
The benefits of optimizing container resources
By using Densify’s solution with the IBM Cloud Kubernetes Service, your organization can do the following:
- Guarantee the right resources are allocated to improve app performance
- Increase utilization and resource efficiency to ensure you never spend more than you need to
- Incorporate and integrate continuous automation into your DevOps processes
- Size containers without worry, so you can focus on app development
Containers are already resource efficient, but that doesn’t mean you can’t further improve efficiency. With Densify, you can specify memory and CPU values based on actual resource utilization, ensuring that you have all the resources you need for your particular application without any unnecessary costs for overprovisioning.
A note regarding Densify: our software is hosted on the IBM Cloud, and we have used the IBM Cloud Kubernetes Service to optimize our own applications.
Engage and join the discussion
Engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.