Kubernetes installation general requirements

To install Turbonomic on an existing cluster, your environment must meet the following requirements.

Note:

Additional requirements apply to specific environments. For example, installation on AWS EKS requires a CSI driver. These requirements are listed in the respective installation topics.

Supported platforms

You can install Turbonomic on any supported version of the following platforms:

Resources

The amount of resources that you need to run Turbonomic depends on the complexity and type of entities that it manages. Entities include objects such as virtual machines, containers, pods, or cloud accounts. Turbonomic takes advantage of containers to scale the platform accordingly.

For more information, see this topic.

Managed targets

Turbonomic connects to your environment through targets.

Targets are not enabled by default. They can be enabled or disabled through probe settings in the Custom Resource YAML at any time during or after installation.
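
For example, probes can be turned on or off under the spec section of the Custom Resource. The following is a minimal sketch; the probe keys shown (vcenter, kubeturbo) are illustrative assumptions, so confirm the exact names for your release against the list of supported targets.

    spec:
      vcenter:
        enabled: true     # enable the vCenter probe
      kubeturbo:
        enabled: false    # leave the Kubeturbo probe disabled

After you apply the updated Custom Resource, the operator reconciles the change and deploys or removes the corresponding probe components.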

For a list of supported targets, see this topic.

Cluster access

Be sure that the following items are available before the installation:

  • Command line tool

    • oc for Red Hat OpenShift deployments

    • kubectl for Kubernetes-based deployments

  • cluster-admin role in the given cluster

    Note: cluster-admin is required if you want to use Kubeturbo and Prometurbo, and might be required for other components. A sample role binding follows this list.
  • MariaDB or MySQL client for remote database server setup
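
The following is a minimal sketch of a ClusterRoleBinding that grants the cluster-admin role to the user who performs the installation; the binding name and user are placeholders.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: turbonomic-installer-admin     # placeholder name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: installer@example.com          # placeholder user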

Network

By default, the operator creates an NGINX LoadBalancer service, which is exposed through an IP address or a load balancing service (external or internal), depending on the service annotations.

In the public cloud, a public IP address with elastic load balancing is the default. However, it is recommended that you use one of the load balancers that your cloud provider offers, such as a Network Load Balancer (NLB) or an Application Load Balancer (ALB), and then configure the service annotations that are available for the chosen load balancer in the Turbonomic Custom Resource.

Since NGINX is deployed as a proxy service, you must edit the Custom Resource YAML to include NGINX attributes. Additionally, you can choose to have Turbonomic create a route in front of NGINX or you can manually create the route.
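
For example, annotations for a cloud provider load balancer can be supplied through the NGINX section of the Custom Resource. The following is a sketch only; the nginx and service field names are assumptions, and the annotation shown is the standard AWS NLB annotation. See the topics below for the exact schema.

    spec:
      nginx:
        service:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"   # example AWS annotation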

For more information, see NGINX Service Configuration Options and Platform Provided Ingress and Routes.

Container image repository

Turbonomic supports public and private container image repositories. By default, the Turbonomic installation uses the public IBM Container Registry (icr.io).

To connect to icr.io, your environment must have direct access to the Internet or indirect access through a proxy server. If your environment is behind a firewall, make sure that you have full access to the IBM Container Registry services that deliver the Turbonomic components. You need to add the https://*.icr.io host address to your allowed firewall rules.

You can configure the installation to pull from a private repository and use pull secrets. For more information, see this topic.
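
As an illustration, a private repository and pull secret might be referenced in the Custom Resource as follows. The repository and imagePullSecret field names are assumptions; verify them against the topic referenced above.

    spec:
      global:
        repository: registry.example.com/turbonomic   # hypothetical private registry
        imagePullSecret: turbo-pull-secret             # name of a pre-created registry pull secret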

For Red Hat OpenShift installations, if you are air-gapped or running a disconnected cluster and still want to use OperatorHub, see additional information in the Red Hat OpenShift documentation.

Turbonomic version

The Turbonomic version that you install must be specified in the Custom Resource (CR) YAML during installation. For more information about CR YAML, see Reference: Working with YAML files.
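
For example, the version might be pinned under the global section of the Custom Resource. This is a sketch; the field path is an assumption, so confirm it against the referenced topic.

    spec:
      global:
        tag: "8.x.x"   # replace with the Turbonomic version that you are installing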

Remote database

A remote database with backup and high availability capabilities is required for production environments.

For its historical database, Turbonomic supports the long-term maintenance stable series of MariaDB (the minimum supported version is 10.6) and MySQL 8.0.x. This support includes comprehensive testing and quality control for Turbonomic usage.

The database server should be configured with a minimum of 8 GB memory, 4 cores/CPUs, and 1 TB disk/file system space for tables. For more information, see this topic.

Note:

Using a containerized database is not supported as it can lead to irrecoverable data corruption. Additionally, a containerized database's performance is directly affected by network performance and cannot be guaranteed. Use a containerized database only for demo, Proof of Concept (POC), or testing purposes.

Security context

Components need read/write access to persistent volumes (PVs) and may use a range of fsGroup IDs. Where required, the security context that determines which UIDs a pod uses to write to its PV is configured with the project's SCC UID. Obtain this value from the project's properties and supply it to the Turbonomic installation through the Custom Resource YAML.
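
As an illustration, on Red Hat OpenShift the UID range is typically exposed through annotations on the project (for example, openshift.io/sa.scc.uid-range), and a value from that range can be supplied in the Custom Resource. The securityContext field path below is an assumption; confirm it against your installation documentation.

    spec:
      global:
        securityContext:
          fsGroup: 1000650000   # example value taken from the project's SCC UID range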

Storage

The Turbonomic platform leverages persistent volumes (PVs) to store logs, certificates, and tokens. The platform uses Persistent Volume Claims (PVCs) to create PVs from the cluster's default storage class or from a storage class that is specified in the Turbonomic Custom Resource.

Recommendations and best practices:

  • Use block storage for the PVs used by Turbonomic.

  • Volume backup strategy is important. The PVs associated with the api, api-certs, auth, topology-processor, and consul-data PVCs should be backed up to ensure recovery in case of server loss. If you are running a containerized database for Turbonomic, back up db-data and data-timescaledb-0.

  • The following Storage Class properties are required (a sample StorageClass follows this list):
    • To support pods with multiple PVs, especially where your nodes are spread across multiple availability zones or regions, set volumeBindingMode to WaitForFirstConsumer.

    • Ensure recovery by setting reclaimPolicy to Retain.

    • Ideally, also use a storage provisioner that allows volume expansion (allowVolumeExpansion).

    For PVC details and total storage required, see this topic.
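
The following sample StorageClass illustrates these properties. The provisioner shown is an example only; substitute the block storage CSI driver for your platform.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: turbo-block-storage        # hypothetical name
    provisioner: ebs.csi.aws.com       # example only; use your provider's block storage driver
    reclaimPolicy: Retain
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true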