December 17, 2019 By Chris Rosen
Ofer Idan
5 min read

Carbon Relay is working with the IBM Cloud Kubernetes Service team to tackle the Kubernetes complexity challenge.

There’s an excitement generated by a glimpse of the future. Then, that vision quickly fades, and frustration sets in with the realization that the hoped-for future state remains out of reach due to stubborn challenges and problems.

That sums up how many DevOps leaders, network architects, and other IT professionals are feeling these days about their experience with Kubernetes. Initial enthusiasm among these teams about what they could do with this powerful container orchestration system has been tamped down by its complexity. Now widely recognized in the industry, Kubernetes' complexity makes many tasks more complicated and difficult to achieve.

Carbon Relay is working with the IBM Cloud Kubernetes Service team to tackle this complexity challenge head-on. This collaboration is focused on providing enterprises with new and effective ways to leverage Kubernetes to achieve their business goals more reliably, efficiently, and flexibly.

The following is a brief description of these efforts, but first, a little background.

Navigating in choppy waters

Some teams are new converts to Kubernetes from Docker Swarm or Mesos. Others began their containerization journeys from the start with Kubernetes. Either way, for most teams, it doesn’t take long to hit the infamous Kubernetes complexity wall. Kubernetes is a uniquely complex system, and for engineers lacking direct experience, there’s always a steep learning curve.

Getting apps up and running is tough enough, but optimizing their performance in Kubernetes presents teams with entirely new levels of complexity. With the limited options DevOps and IT teams have had to date, many turn to over-provisioning to ensure they get the app performance they need. But by doing so, they can send their cloud costs through the roof.

Untenable complexity. Sticker shock with cloud costs. That’s not the rosy future that DevOps and IT pros envisioned when they set sail with Kubernetes. Now they need a way to keep themselves off the rocks and reefs.

Carbon Relay’s Red Sky Ops solves the Kubernetes complexity conundrum

Red Sky Ops is the solution that weary DevOps teams need—an AIOps platform with advanced machine learning (ML)-based capabilities. Using a unique process, it automatically determines the optimal configuration for apps running in Kubernetes. By automating this highly complex task, it eliminates the need for manual optimization—which is almost always ineffective.

The technologies built into the Red Sky Ops platform leverage established methods in data science, enabling DevOps teams to automate the process of parameter tuning. Using ML-powered experimentation, Red Sky Ops unlocks efficient exploration of the application parameter space, resulting in configurations that are guaranteed to both deploy reliably and perform optimally. Last, but certainly not least, is Red Sky Ops’ ML-driven ability to learn over time, which plays a crucial role in the platform’s scalability and efficiency.

With Red Sky Ops' advanced technologies working for them, teams can rest assured that the applications they have running in Kubernetes will deploy and run properly from the start, scale naturally, and be intelligently and automatically optimized over time.

That’s how Red Sky Ops turns that glimpse of the future into a dynamic and possibility-filled future and delivers it today.

Following is just one example of the many ways in which Red Sky Ops helps to unlock all the value and promise of Kubernetes and AIOps.

Getting started with Red Sky Ops

Red Sky Ops is based on the concepts of trials and experiments. An experiment, as the name suggests, is a process in which a single application or component is evaluated to determine its optimal configuration. Each trial within an experiment tests a particular configuration of parameters.

To start, users must first create an experiment definition, either from scratch or using one of Red Sky Ops’ examples and templates. The experiment definition includes the metrics to be measured (in order to determine application performance) and the parameters to be tuned during each trial.
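As a rough sketch, an experiment definition might look something like the following. The field names, API version, and values here are illustrative assumptions based on the description above, not a verbatim Red Sky Ops schema—consult the project's examples for the actual format.

```yaml
# Illustrative sketch only: an experiment tuning two resource parameters
# against two metrics. Field names and values are assumptions.
apiVersion: redskyops.dev/v1alpha1
kind: Experiment
metadata:
  name: sample-webapp
spec:
  parameters:            # the knobs tuned on each trial
    - name: cpu
      min: 100           # millicores
      max: 2000
    - name: memory
      min: 128           # MiB
      max: 4096
  metrics:               # the measurements used to score a trial
    - name: throughput
      minimize: false    # higher throughput is better
    - name: cost
      minimize: true     # lower cost is better
```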

Running an experiment is simple:

  1. Grab the latest release.
  2. Create an experiment.yaml file for your specific application.
  3. Initialize the Red Sky controller in your cluster using redskyctl init.
  4. Apply the manifests.
  5. Watch the results come in.

As the experiment progresses, our machine learning engine will learn your application and test configurations that get you closer to the optimal results. By using our internal UI or connecting to one of our open integrations, users can view the status of their experiment and pull their preferred configuration for deployment.

Figure 1: Results of a Red Sky Ops experiment

Figure 1 shows the typical results of a Red Sky Ops experiment. In this case, a sample web app was optimized for both throughput and resource costs. Each dot represents a successfully deployed trial, and orange dots are the optimal configurations that trade off highest throughput and lowest cost.

The results of the experiment speak for themselves. Each dot in Figure 1 represents a trial, that is, a specific application configuration. Failed trials (configurations in which the application failed to deploy) are not shown and comprise less than 10% of all trials, compared with roughly 50% failed trials under random exploration.

The orange dots are the ones the machine learning algorithm deemed "best": no other configuration beats them on both throughput and cost at the same time. This set, sometimes referred to as the Pareto front, is the collection of trials from which users can select their preferred configurations.
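The Pareto-front idea can be illustrated with a short sketch: given trial results measuring throughput (higher is better) and cost (lower is better), keep only the trials that no other trial beats on both metrics. The trial data below is made up for illustration and does not come from a real experiment.

```python
def pareto_front(trials):
    """Return the trials not dominated on (throughput, cost).

    A trial is dominated if some other trial has throughput at least as
    high AND cost at least as low, with a strict improvement on one metric.
    """
    front = []
    for t in trials:
        dominated = any(
            o["throughput"] >= t["throughput"]
            and o["cost"] <= t["cost"]
            and (o["throughput"] > t["throughput"] or o["cost"] < t["cost"])
            for o in trials
        )
        if not dominated:
            front.append(t)
    return front

# Hypothetical trial results (throughput in req/s, cost in $/day)
trials = [
    {"name": "a", "throughput": 900, "cost": 40},
    {"name": "b", "throughput": 700, "cost": 20},
    {"name": "c", "throughput": 650, "cost": 25},  # dominated by b
    {"name": "d", "throughput": 950, "cost": 60},
]

print([t["name"] for t in pareto_front(trials)])  # → ['a', 'b', 'd']
```

Trial "c" is dropped because "b" delivers more throughput at lower cost; the remaining three are the incomparable trade-offs a user would choose among.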

In addition, this map of trials shows just how far novice users may veer with suboptimal configurations. We have found that for common open source components, the Helm chart default configuration is far from optimal on metrics such as cost, throughput, latency, and more.

Exporting the optimal configuration and deploying it is a simple process. And, with multiple experiments running under various load scenarios, users can collect a library of optimal deployment configurations that adapt to different situations and scale to their needs as necessary.

As applications evolve over time, it may be useful to re-run optimization experiments to ensure the evolved version remains optimal. Luckily, Red Sky Ops' machine learning algorithms learn and retain information about each application's performance and can perform future experiments in a fraction of the time the original experiment took. In addition, as our platform evolves, it will incorporate learnings from widespread open source components (such as PostgreSQL, Redis, ELK, and others) and speed up optimization for applications utilizing these tools.

Using the results of its experiments, Red Sky Ops enables DevOps teams to automate the process of optimizing the performance of their applications running on Kubernetes. Once teams turn that corner with Red Sky Ops, they start experiencing faster and easier application deployments. They see the portability and scalability of their apps increase significantly—without any additional work required. And, they start to drive big savings, especially in reductions of cloud costs.

IBM and Carbon Relay: Complementary solutions

Many enterprises want to move ahead quickly with Kubernetes initiatives. As mentioned above, however, the technology’s inherent complexity and other factors (like a shortage of engineers with relevant experience) are slowing down or stalling these efforts. For companies in this position, IBM offers a smart and effective way forward with its IBM Cloud Kubernetes Service.

Offered as a complete managed container service, IBM Cloud Kubernetes Service makes it easy for enterprises to not only develop and deliver applications rapidly, but also bind them to advanced services like IBM Watson and blockchain.

As a certified Kubernetes provider, the IBM Cloud Kubernetes Service packages include intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, and automated rollouts and rollbacks. The IBM Cloud Kubernetes Service also has advanced capabilities around simplified cluster management, container security and isolation policies, the ability to design your own cluster, and integrated operational tools for consistency in deployment.

When deployed in conjunction with Red Sky Ops and its automated optimization of the configuration of applications running in Kubernetes environments, customers get fast-tracked to Kubernetes success.

In short, instead of struggling to overcome Kubernetes' complexity, Carbon Relay and IBM Cloud Kubernetes Service customers are quickly turning their AIOps-driven goals into reality. Delivering that future for customers today is what the collaboration between the IBM Cloud Kubernetes Service team and Carbon Relay is all about.

Join the discussion and learn more

For more information about these exciting Kubernetes-driven AIOps offerings, visit IBM's Kubernetes Service site or check out the Carbon Relay website and GitHub repo.

You can also engage the IBM team via Slack. Please register here and join the discussion in the #questions channel on https://ibm-container-service.slack.com.

