Virtual machine (container platform node)
In container platform environments, a node is a virtual or physical machine that contains the services necessary to run pods. Turbonomic represents nodes as Virtual Machine entities in the supply chain.
Turbonomic can discover node roles and Master Nodes. It creates policies to keep nodes of the same role on unique host or Availability Zone providers, and policies to disable suspension of Master Nodes. Turbonomic also discovers and displays Node Pools and Red Hat OpenShift Machine Sets.
Synopsis

| Synopsis | |
|---|---|
| Provides | Resources to pods |
| Consumes | Resources from container platform clusters |
| Discovered through | Kubeturbo agent that you deployed to your cluster |
Monitored resources
Turbonomic monitors the following resources:
- vMem: the virtual memory currently used by all containers on the node. The capacity for this resource is the node physical capacity.
- vCPU: the virtual CPU currently used by all containers on the node. The capacity for this resource is the node physical capacity.
- Memory request allocation: the memory available to the node to support the `ResourceQuota` request parameter for a given Kubernetes namespace or Red Hat OpenShift project.
- CPU request allocation: the CPU available to the node to support the `ResourceQuota` request parameter for a given Kubernetes namespace or Red Hat OpenShift project.
- Virtual memory request: the memory currently guaranteed by all containers on the node with a memory request. The capacity for this resource is the node allocatable capacity, which is the amount of resources available for pods and can be less than the physical capacity.
- Virtual CPU request: the CPU currently guaranteed by all containers on the node with a CPU request. The capacity for this resource is the node allocatable capacity, which is the amount of resources available for pods and can be less than the physical capacity.
- Memory allocation: the memory `ResourceQuota` limit parameter for a given Kubernetes namespace or Red Hat OpenShift project.
- CPU allocation: the CPU `ResourceQuota` limit parameter for a given Kubernetes namespace or Red Hat OpenShift project.
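The request and limit parameters described above come from a standard Kubernetes `ResourceQuota` object. The following sketch shows how such a quota might look; the object name, namespace, and values are illustrative, not taken from this product's documentation:

```yaml
# Hypothetical example: a ResourceQuota on a namespace. The "requests" values
# back the memory/CPU request allocation resources; the "limits" values back
# the memory/CPU allocation resources described above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # illustrative name
  namespace: my-project      # illustrative namespace
spec:
  hard:
    requests.cpu: "4"        # backs CPU request allocation
    requests.memory: 8Gi     # backs memory request allocation
    limits.cpu: "8"          # backs CPU allocation
    limits.memory: 16Gi      # backs memory allocation
```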
Actions
Turbonomic supports the following actions:
- Provision: provision nodes to address workload congestion or meet application demand.
- Suspend: suspend nodes after you have consolidated pods or defragmented node resources to improve infrastructure efficiency.
- Reconfigure: reconfigure nodes that are currently in the `NotReady` state.
Node provision and suspension actions
For both node provision and suspension actions, review the following guidelines:
- For nodes in the public cloud, Turbonomic reports the cost savings or investments attached to node provision and suspension actions. For example, you can see the additional costs you would incur if you provision nodes and then scale their volumes, or the savings you would realize if you suspend nodes. Note that performance and efficiency are the drivers of these actions, not cost. Cost information is included to help you track your cloud spend. For this reason, you will not see cost-optimization actions, including recommendations to re-allocate discounts or delete unattached volumes.
  To view cost information, set the scope to a node and see the Necessary Investments and Potential Savings charts. You can also set the scope to a container platform cluster or the global cloud environment to view aggregated cost information.
- For nodes that make up an Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE) cluster, Turbonomic can recommend node provision actions to increase the node count, and node suspension actions to reduce the node count. To manually or automatically execute these actions in Turbonomic, make sure that Turbonomic can connect to your cluster through the following targets:
  - EKS cluster: a container platform target (Kubernetes or Red Hat OpenShift) and an AWS target
  - AKS cluster: a container platform target (Kubernetes or Red Hat OpenShift) and an Azure service principal target
  - GKE cluster: a container platform target (Kubernetes or Red Hat OpenShift) and a Google Cloud target
Node pools and machine sets are ways to deploy and scale compute resources for container platform services hosted in the public cloud or Red Hat OpenShift on any infrastructure.
For the public cloud, Turbonomic uses default labels with the following patterns to discover the node pool types within each cluster:
| Node pool | Pattern |
|---|---|
| AKS | agentpool |
| EKS | alpha.eksctl.io/nodegroup-name, eks.amazonaws.com/nodegroup |
| GKE | cloud.google.com/gke-nodepool |
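As a point of reference, a node in a GKE node pool typically carries the default label that Turbonomic matches. The node name and pool name in this sketch are hypothetical:

```yaml
# Illustrative node metadata showing a default label that Turbonomic
# matches to discover node pools (the names are hypothetical).
apiVersion: v1
kind: Node
metadata:
  name: gke-cluster-1-default-pool-abc123         # hypothetical node name
  labels:
    cloud.google.com/gke-nodepool: default-pool   # GKE node pool label
```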
For Red Hat OpenShift, Turbonomic creates node pools based on machine sets.
For both discovered and auto-created node pools, Turbonomic aggregates and visualizes actions for all the nodes in a pool to help you identify performance issues and optimization opportunities at the node pool level. Use the Top Node Pools chart to see actions and detailed information. By default, this chart displays when you set the scope to your global environment and then click the Container Platform Cluster entity in the supply chain.
The chart shows the number of nodes and aggregated actions for each node pool. For node pools in the public cloud, the chart also shows the costs you would incur if you provision nodes and then scale their volumes, or the savings you would realize if you suspend nodes. To view individual actions, click the button under the Actions column. To see more details, including the full list of nodes for each pool, click the node pool name.
You can automate the execution of these actions through Turbonomic with Red Hat OpenShift Machine API Operator . You can also manually execute node actions for AKS, EKS, or GKE through the cloud provider.
Note: Policies for node pools will be introduced in a future release.
Pre-execution check for node provision and suspension actions
Before executing a node provision or suspension action for a MachineSet in a Red Hat OpenShift cluster, Kubeturbo checks the node count range specified in your ConfigMap. If the pre-execution check determines that the node count will fall outside the range after action execution, the action execution fails, and the failure is logged (for example, in the Executed Actions chart in the Turbonomic user interface). This mechanism ensures the overall stability and performance of the Red Hat OpenShift cluster.
The pre-execution check is not currently supported for AKS, EKS, or GKE node pools.
By default, the minimum node count is 1 and the maximum is 1000. You can customize these values by updating the `nodePoolSize` parameter in your Kubeturbo ConfigMap. This update does not require a restart of the Kubeturbo pod and takes effect after approximately one minute.
For a sample ConfigMap with the `nodePoolSize` parameter, see the Kubeturbo GitHub repository.
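A minimal sketch of what such a ConfigMap entry might look like follows. The object name, namespace, data key, and JSON layout here are assumptions for illustration only; treat the sample in the Kubeturbo GitHub repository as the authoritative format:

```yaml
# Hypothetical sketch of a Kubeturbo ConfigMap fragment that sets the
# nodePoolSize range; the key names and structure shown here are assumed,
# so consult the Kubeturbo GitHub repository sample before using them.
apiVersion: v1
kind: ConfigMap
metadata:
  name: turbo-config          # illustrative name
  namespace: turbo            # illustrative namespace
data:
  turbo-autoreload.config: |
    {
      "nodePoolSize": {
        "min": 1,
        "max": 1000
      }
    }
```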
Node provision actions
When recommending node provision actions, Turbonomic also recommends pod provision actions that reflect the projected demand from required DaemonSet pods, and respects the maximum number of pods allowed for a node. This ensures that any application workload can be placed on the new node and stay within the desired range of vMem/vCPU usage, vMem/vCPU request, and number of consumers.
The action details for a node provision action show the related DaemonSet pods that are required for the node to run. Click a pod name to set it as your scope.

Turbonomic treats static pods as DaemonSets for the purpose of provisioning nodes. Because a static pod provides a node with a specific capability, it is controlled by the node and is not accessible through the API server. If a node to be provisioned requires a static pod, Turbonomic generates actions to provision the node and the corresponding static pod.
Node suspension actions
When recommending node suspension actions, Turbonomic also recommends suspending the DaemonSet pods that are no longer required on the suspended nodes.
The action details for a node suspension action show the related DaemonSet pods that are no longer needed on the suspended nodes. Click a pod name to set it as your scope.

Turbonomic treats static pods as DaemonSets for the purpose of suspending nodes. Because a static pod provides a node with a specific capability, it is controlled by the node and is not accessible through the API server. If the only workload type on a node is a static pod, Turbonomic generates actions to suspend the node and the corresponding static pod.
Nodes for Pods with Topology Spread Constraints
Currently, Turbonomic does not support moving pods with topology spread constraints. Until support is available, suspension actions for the nodes that the affected pods run on are disabled. For more information, see this topic.
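For reference, a pod with a topology spread constraint looks like the following; nodes running such pods currently have suspension actions disabled. The pod name, labels, and image are hypothetical:

```yaml
# Illustrative pod spec with a topology spread constraint. Turbonomic
# currently disables suspension for nodes that run pods like this one.
apiVersion: v1
kind: Pod
metadata:
  name: spread-example          # hypothetical name
  labels:
    app: web                    # hypothetical label
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname   # spread across nodes
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx              # illustrative image
```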
Node reconfigure actions
Turbonomic generates node reconfigure actions to notify you of nodes that are currently in the `NotReady` state.
A reconfigure action is read-only in Turbonomic and must be executed directly in the container platform cluster because the action might require restarting the node or the kubelet on the node. When Turbonomic discovers that a node's state is `Ready`, it removes the reconfigure action automatically and begins to monitor the health of the node and the associated container pods.
Turbonomic treats a node as a VM under certain circumstances. For example, it treats a node in vCenter as a VM that can move to a different host if the current host is congested. This means that for a `NotReady` node in vCenter, it is possible to see a VM move action along with the expected node reconfigure action. Both actions are valid and safe to execute because they achieve two different and non-conflicting results.
For each container platform cluster, Turbonomic creates an auto-generated group of `NotReady` nodes. To view all the auto-generated groups, go to Search, select Groups, and then type `notready` as your search keyword. Click a group to view the individual nodes and the pending reconfigure actions.

When you examine a pending reconfigure action, you can click the link in the Entities Impacted by this Node section to view a list of impacted pods.

These pods are in the `Unknown` state and are not controllable. In the supply chain and in the list of container pods, these pods display in gray to help you differentiate them from other pods.