Configuration Requirements

For this release of Turbonomic, you must satisfy the following configuration requirements.

Compute and Storage Requirements

The requirements for running a Turbonomic instance depend on the deployment method and the size of the environment you are managing.

Virtual machine image installations of Turbonomic version 8.9.1 or later require at least 1.5 TB of disk storage (thin or thick provisioned), and the /var partition must be at least 340 GB. For more information, see Installing on a Virtual Machine Image.

Note:

Before updating Turbonomic, you must extend the /var partition to at least 340 GB. If this requirement is not met, your upgrade may fail, new container images may not load properly, and some images can enter an ImagePullBackOff state. If you need to free up disk space before increasing the size of the partition, you can delete old images that are no longer in use. For details, see Increasing Available Disk Space.
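Before you start, you can check the current size and usage of the partition from the VM shell, for example:

```shell
# Show the total size, used space, and free space of the /var partition
df -h /var
```

If the reported size is below 340 GB, extend the partition before you update.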

For more information, see Minimum Requirements.

Turbonomic Updates and Operator Version

Turbonomic deploys as a cloud-native application on a Kubernetes cluster. This cluster can be pre-configured on a VM that you deploy, or you can deploy Turbonomic to a Kubernetes cluster in your environment. In either case, Turbonomic uses an Operator to manage the application deployment.

For different versions of Turbonomic, the version of Operator you use changes as follows:

Product Version     Operator Version
8.12.5              42.58
8.12.4              42.57
8.12.3              42.56
8.12.2              42.55
8.12.1              42.54
8.12.0              42.53
8.11.6              42.52
8.11.5              42.51
8.11.4              42.50
8.11.3              42.49
8.11.2              42.48
8.11.1              42.47
8.10.7 - 8.11.0     42.46
8.10.6              42.45
8.10.5              42.44
8.10.4              42.43
8.10.3              42.42
8.10.2              42.41
8.10.1              42.40
8.10.0              42.39
8.9.7               42.38
8.9.6               42.37
8.9.5               42.36
8.9.4               42.35
8.9.3               42.34
8.9.2               42.33
8.9.1               42.32
8.9.0               42.31
8.8.6               42.30
8.8.5               42.29
8.8.4               42.28
8.8.3               42.27
8.8.2               42.25
8.8.1               42.24
8.7.6 - 8.8.0       42.23
8.7.5               42.22
8.7.4               42.21
8.7.3               42.20
8.7.2               42.19
8.7.1               42.18
8.6.6 - 8.7.0       42.17
8.6.4 - 8.6.5       42.16
8.6.3               42.15
8.6.2               42.14
8.6.1               42.13
8.6.0               42.12

When you update Turbonomic, always include the matching version of Operator in the update. Online or offline updates that were completed according to the latest installation instructions automatically include the latest Operator.

If you installed Turbonomic on a Kubernetes cluster, you might need to manually update the Operator version.

Red Hat OpenShift Installations of Turbonomic

If you manage your installation with the Red Hat OpenShift OperatorHub, see Installing using the Red Hat OpenShift OperatorHub.

Other Kubernetes Installations of Turbonomic

Note:

Starting with Turbonomic version 8.7.5, IBM Container Registry is used for all Turbonomic images for online upgrades and installations. Ensure that you have access to https://*.icr.io before continuing.
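One way to confirm reachability is to request the registry endpoint directly; this is a sketch, and any HTTP status line in the response (even 401 Unauthorized) indicates the host can be reached:

```shell
# Check that the IBM Container Registry endpoint is reachable.
# An HTTP status line means the host is reachable; a timeout
# suggests the traffic is blocked by a firewall or proxy.
curl -sI https://icr.io/v2/ | head -n 1
```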

For installations on other supported Kubernetes platforms, you can update the Operator version in either of two ways:

  • Directly edit the running deployment of Turbonomic:

    1. Open the running t8c-operator deployment for editing.

      kubectl edit deployment t8c-operator -n {YourNamespace}
    2. Edit the Operator version.

      image: icr.io/cpopen/t8c-operator:{Operator version}
    3. Edit the registry, if not already set to use IBM Container Registry.

      registry: icr.io/cpopen/turbonomic
    4. Verify that the Operator pod is ready.

      kubectl get pods -n {YourNamespace}
  • Edit the Turbonomic deployment YAML file:

    1. Open the Operator deployment file for editing.

      In the location where you store your manifests, open the file operator.yaml, which is the file that you use to deploy the t8c-operator pod.

    2. Edit the Operator version.

      image: icr.io/cpopen/t8c-operator:{Operator version}
    3. Edit the registry, if not already set to use IBM Container Registry.

      registry: icr.io/cpopen/turbonomic
    4. Apply the change to the operator.

      kubectl apply -f operator.yaml
    5. Verify that the Operator pod is ready.

      kubectl get pods -n {YourNamespace}

After you update the Operator version, and you verify that the pod is running and ready, edit your Custom Resource declaration to update Turbonomic to the version that matches your Operator version.
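As a sketch, the Custom Resource edit can look like the following. This assumes the typical default CR kind (xl) and instance name (xl-release); verify the names in your own deployment before running the command:

```shell
# Open the Turbonomic Custom Resource for editing.
# "xl" and "xl-release" are common defaults; confirm with
# "kubectl get xl -n {YourNamespace}" if your deployment differs.
kubectl edit xl xl-release -n {YourNamespace}

# In the editor, set the product version tag to match the new
# Operator version, for example:
#   spec:
#     global:
#       tag: "8.12.5"
```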

Supported MariaDB Version for OVA and VHD Installations

For its default historical database on Open Virtual Appliance (OVA) and Virtual Hard Disk (VHD) installations, Turbonomic currently supports MariaDB version 10.5.23. This support includes comprehensive testing and quality control for Turbonomic usage of the historical database.

Important:

Because of a known issue, you must never use MariaDB versions 10.5.14, 10.5.15, or 10.6.7.

If you are running Turbonomic installed as an OVA or VHD image, and you use the database that is included in that installation, then you must use version 10.5.23. If you installed Turbonomic as an OVA or VHD before version 8.7.6, you must update to MariaDB version 10.5.23 if you have not already done so.

For more information, see Verifying Your MariaDB Version.
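To confirm the version in use, you can query the server directly from the VM shell; this is a sketch, and the connection options depend on how your database is secured:

```shell
# Show the client version string, which includes the server distribution
mysql -V

# Query the running server for its exact version
mysql -u root -p -e "SELECT VERSION();"
```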

Required Database Capacities

For Turbonomic versions 8.0.6 or later, your historical database must provide certain storage and memory size capacities. For MariaDB or MySQL installations, your database must provide the necessary memory, messaging, and logging capacity.

Turbonomic supports MariaDB version 10.5.23 or MySQL 8.0.x for the historical database. This support includes comprehensive testing and quality control for Turbonomic usage.

For more information, see Configuring a Remote Database.

If you installed Turbonomic as an OVA, and use the included MariaDB for the historic database, you can set the correct capacities by updating your Turbonomic deployment to the latest version. For more information, see Increasing Your Database Capacities.

SQL Server Modes for External Databases

If you deploy Turbonomic to work with an external database instead of the included historical database, then you must specify the correct SQL Server modes for the database.

For more information, see Configuring a Remote Database.

External Databases and Turbonomic Updates

If you deployed Turbonomic with an external database server, for some updates you might need to manually create a new database and user for that deployment.

Note:

If your external database server is multi-tenant, or if your database server does not grant administrative privileges to Turbonomic, then you must continue with this configuration requirement.

Azure database services are multi-tenant. If you deployed an external database on Azure, this configuration requirement applies to you.

If you deployed your database server in a way that grants Turbonomic privileges to create new databases and new users, then the update automatically creates the necessary database. This configuration requirement does not apply to you.

For some Turbonomic updates, the updated version includes new databases on the server. If you are updating to one of these versions, then you must first create the new database and a user account with privileges to access that database. Then, you can update to the latest version of Turbonomic.

For more information, see External DBs and Updates.
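The manual preparation can be sketched as follows. The database name, user name, and password here are placeholders only; consult External DBs and Updates for the actual names your target version requires:

```shell
# Hypothetical example: create the new database and a user with
# privileges on it before starting the update. All identifiers
# below are placeholders, not the real names used by Turbonomic.
mysql -u admin -p -e "
  CREATE DATABASE IF NOT EXISTS new_component_db;
  CREATE USER IF NOT EXISTS 'new_component_user'@'%' IDENTIFIED BY '<password>';
  GRANT ALL PRIVILEGES ON new_component_db.* TO 'new_component_user'@'%';
  FLUSH PRIVILEGES;"
```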

Transport Layer Security Requirements

By default, Turbonomic requires Transport Layer Security (TLS) version 1.2 to establish secure communications with targets. Most targets have TLS 1.2 enabled; however, some targets do not enable TLS, or they enable only an earlier version. In that case, you see handshake errors when Turbonomic tries to connect with the target service. When you go to the Target Configuration view, you see a Validation Failed status for such targets.

NetApp filers often do not enable TLS, and the highest version they support is TLS 1.0, which causes the NetApp target to fail validation.

If target validation fails because of TLS support, you see validation errors with the following strings:

  • No appropriate protocol

    Ensure that you enable the latest version of TLS that your target technology supports.

  • Certificates do not conform to algorithm constraints

    Refer to the documentation for your target technology (such as the NetApp documentation) for instructions to generate a certificate key with a length of 1024 bits or greater on your target server.
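To check which TLS versions a target accepts before adding it, you can probe it with openssl; the host and port below are placeholders for your target's address:

```shell
# Attempt a TLS 1.2 handshake with the target endpoint.
# A successful handshake prints certificate and session details;
# a handshake failure or "no protocols available" error indicates
# that the target does not accept TLS 1.2.
openssl s_client -connect target.example.com:443 -tls1_2 </dev/null
```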