Moving data and applications from traditional on-premises data centers to cloud infrastructure offers companies the potential for significant cost savings, along with accelerated innovation, a sharper competitive edge and better interactions with customers and employees. What’s more, with most public cloud providers, IT infrastructure becomes a pay-as-you-go operational expense. You can scale your cloud resources up or down to meet demand, and costs will follow. However, cloud services costs can run higher than anticipated, so monitoring and optimizing your cloud spend is critical.

Cloud cost optimization combines strategies, techniques, best practices and tools to help reduce cloud costs, find the most cost-effective way to run your applications in the cloud environment, and maximize business value.

It can be hard to monitor metrics and compare data when using multiple cloud vendors with different dashboards, and it is easy to overspend. Whether you use IBM Cloud, Amazon Web Services (AWS), Google Cloud, Microsoft Azure or some combination of platforms, it’s essential to understand, evaluate and optimize what you spend on cloud operations.

Why do you need cloud cost optimization?

Organizations waste about 32% of their spending on cloud services—a significant sum whether you’re a small business or one that spends six or seven figures on the cloud annually. Cloud optimization helps reduce waste and avoid overspending by identifying unused resources and neglected tools.

It’s not only about getting costs down. It’s also about making sure your costs align with your business goals. In other words, paying more may make sense if you earn more revenue or see more productive activities and profitability from a particular cloud service.

Cloud cost optimization means knowing what your cloud operations cost and making intelligent adjustments so you can control cloud costs without compromising performance.

Questions to ask yourself about optimizing cloud costs     

With some preparation, you can manage your cloud costs and avoid unanticipated overspending. Your IT team should consider these questions before, during and after your cloud implementation:

  • How can we evaluate our cloud costs at all company levels and manage the allocation of costs at the organization and team levels?
  • How will we provision our cloud resources and monitor and control spending over time?
  • How do we prevent overprovisioning and overspending?
  • What metrics will we track? Beyond your cloud bill, this may include the cost of services, capacity, utilization, performance and availability.
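As a sketch of what tracking such metrics can look like in practice, the snippet below computes utilization and effective unit cost for a single resource. The resource sizes and dollar figures are hypothetical, purely for illustration:

```python
# Sketch: basic cloud cost metrics from hypothetical billing data.
# The capacities and monthly cost below are illustrative, not from a real bill.

def utilization(used_capacity, provisioned_capacity):
    """Fraction of provisioned capacity actually in use."""
    return used_capacity / provisioned_capacity

def cost_per_used_unit(monthly_cost, used_capacity):
    """Effective cost of each unit of capacity you actually consume."""
    return monthly_cost / used_capacity

# Example: a VM with 16 vCPUs provisioned, averaging 4 in use, at $600/month.
util = utilization(4, 16)               # 0.25 -> 75% of capacity sits idle
unit_cost = cost_per_used_unit(600, 4)  # $150 per actively used vCPU

print(f"Utilization: {util:.0%}, cost per used vCPU: ${unit_cost:.0f}")
```

Numbers like these make overprovisioning visible long before the monthly bill does.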

Tools for cloud cost optimization

Available cloud cost management tools can help you track bills, features and other configurations, enabling you to optimize costs. Cloud providers offer some tools, including Azure cost management, Google Cloud cost management and AWS cloud financial management tools.

There are also cloud cost tools from independent companies that work across multiple vendors. For example, IBM® Turbonomic® automates critical actions in real time, without human oversight, to help you use compute, storage and network resources most efficiently. These tools can work across multiple clouds and create reports that combine multicloud data. Some compare your cloud costs with what it would cost to build your own server room.

Understand and leverage cloud pricing models

Cloud providers offer a range of different pricing models and service levels that you can use to help match resources and costs with application needs, availability requirements and business value. Navigating these can be confusing. Here are some general strategies to use:

  • Take advantage of reserved instances (RIs). These are prepaid compute instances that offer significant discounts (often up to 75%) in exchange for a commitment over a defined period.
  • Use savings plan pricing, which offers low prices based on one- or three-year commitments.
  • Take advantage of spot instances (spare capacity auctioned at steep discounts) for flexible, interruption-tolerant workloads when possible. Use cases for spot instances can include processing big data/machine learning workloads, managing distributed databases and running CI/CD operations.
  • Limit data transfer fees by avoiding unnecessary data transfers.

Consider FinOps for cloud cost optimization

FinOps, a portmanteau of finance and DevOps, is a cloud financial management practice that helps organizations maximize business value in their hybrid and multicloud environments. Many organizations approach cloud cost optimization strategy and implementation by employing a cross-functional FinOps team—one with members from IT, finance and engineering—to bring financial accountability to the cloud.

FinOps practices rely on reporting and automation to increase ROI by continuously identifying opportunities for efficiency and taking action on cloud optimization in real time. By automating their dynamic resourcing, organizations can also ensure their cloud environment’s underlying infrastructure always meets service-level objectives.

According to the FinOps Foundation, a mature FinOps practice allocates more than 90% of cloud spend, leaving little difference between the forecasted and actual spend.

Three phases of the FinOps journey: Inform, Optimize and Operate

A company may be in multiple phases of the FinOps journey—inform, optimize and operate—at the same time because different units, teams or applications will be on their own journeys.

  1. Inform: Organizations need accurate and up-to-date visibility to make intelligent decisions on allocation, benchmarking, budgeting and forecasting. Having correct, detailed allocation information of your cloud spending also enables correct chargeback and showback. FinOps teams need to know whether they are staying within budget, making accurate forecasts and achieving ROI targets.
  2. Optimize: The second phase is about optimizing the cloud footprint. There are multiple ways to optimize. On-demand capacity is the most expensive. Cloud providers offer discounts for advanced reservation planning and increased commitments. Teams can also optimize the cloud environment by using automation to rightsize environments and turn off unused resources.
  3. Operate: Organizations enter the third phase when they can continuously measure metrics—such as speed, quality and cost—against business objectives. The FinOps Foundation says, “Any organizational success is only possible if the organization builds a culture of FinOps, which involves a Cloud Cost Center of Excellence built around business, financial and operational stakeholders who also define the appropriate governance policies and models.”
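Rightsizing automation of the kind used in the Optimize phase often boils down to simple utilization thresholds. The rule below is a hypothetical sketch—the thresholds and the host names are invented for illustration, not taken from any real tool:

```python
# Hypothetical rightsizing rule for the Optimize phase: downsize instances
# with low sustained CPU utilization, flag idle ones to turn off.
# All thresholds and host names are illustrative assumptions.

def rightsize_action(avg_cpu_utilization):
    """Suggest an action from a 30-day average CPU utilization (0.0-1.0)."""
    if avg_cpu_utilization < 0.05:
        return "turn off"   # effectively idle
    if avg_cpu_utilization < 0.30:
        return "downsize"   # heavily over-provisioned
    if avg_cpu_utilization > 0.80:
        return "upsize"     # at risk of saturation
    return "keep"

fleet = {"web-1": 0.62, "batch-7": 0.03, "db-2": 0.18}
for name, util in fleet.items():
    print(name, "->", rightsize_action(util))
```

Real platforms weigh memory, I/O and workload patterns as well, but the principle—act on measured utilization rather than guesses—is the same.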

The FinOps Maturity Model

The FinOps Foundation describes maturity levels as “crawl, walk, run,” representing organizations that take action at a small, limited scale up to those at a much higher level.

  • Crawl: An organization at the crawl level does minimal reporting and tooling, puts basic KPIs in place, and has plans to address only the “low-hanging fruit.” They allocate at least 50% of their cloud spend, and their forecast-to-spend accuracy variance is 20%.
  • Walk: Walk means the organization understands and follows cloud optimization capabilities. They identify difficult edge cases but do not address them. They set medium to high goals and KPIs. They allocate about 80% of their cloud spend, and the difference between their forecast and actual cloud spend is 15%.
  • Run: Organizations at the run level have teams that fully understand cloud optimization capabilities and execute them in cloud operations. They address difficult edge cases, set very high goals and KPIs, and prefer automation. They allocate more than 90% of their cloud spend, and their forecast-to-spend variance is about 12%.
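The crawl/walk/run descriptions above can be read as a simple classifier over those two figures. The sketch below encodes them that way—the thresholds follow this article’s description of the model, not an official specification:

```python
# Classify FinOps maturity from allocated-spend share and forecast variance,
# using the crawl/walk/run thresholds described in this article.

def finops_maturity(allocation_rate, variance):
    """Return the maturity level for the given allocation rate and variance."""
    if allocation_rate > 0.90 and variance <= 0.12:
        return "run"
    if allocation_rate >= 0.80 and variance <= 0.15:
        return "walk"
    if allocation_rate >= 0.50 and variance <= 0.20:
        return "crawl"
    return "pre-crawl"

print(finops_maturity(0.55, 0.20))  # crawl
print(finops_maturity(0.80, 0.15))  # walk
print(finops_maturity(0.95, 0.10))  # run
```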

Cloud cost optimization and IBM

The complex applications used by many businesses run IT teams ragged as they try to stay ahead of dynamic demand. When application performance drops, these teams often react at human speed after the fact. To avoid disruption, they might provision more resources for their cloud environment than needed, resulting in a bloated cloud bill and a disappointing ROI. IBM encourages clients to contain spend with hybrid cloud cost optimization.

IBM® Turbonomic® is a hybrid cloud cost optimization platform that enables IT teams to eliminate the guesswork that results in over- or under-provisioning application resources—saving time and optimizing costs. Teams can continuously automate real-time critical actions that proactively deliver the most efficient use of compute, storage and network resources to apps at every layer of the stack.

Let’s rethink cloud operations. If you were to design your cloud operations for a new company, what would you automate to ensure application performance at the lowest cost? Watch the video.

Let’s optimize your cloud. Request a live demo.
