Deploying Kubeturbo through a Helm chart in a Helm chart repository

Helm manages the charts that package all resources associated with an application. For more information, see the Helm documentation.

You can deploy Kubeturbo using a Helm chart that is hosted in a Helm chart repository in IBM GitHub Pages. This chart delivery mechanism supports versioning, which enables seamless access to Kubeturbo updates and allows you to roll back Kubeturbo to a previous version if needed. It also simplifies your deployment process by eliminating the need to clone the Kubeturbo repository.

Deployment requirements

  • General deployment requirements

    Before deploying Kubeturbo, be sure to:

    • Review the general requirements.

    • Set up and record the credentials for your Turbonomic instance. You will specify these credentials when you deploy Kubeturbo.

  • Helm

    For instructions on installing Helm, see the Helm documentation.

    This topic assumes that you are familiar with Helm usage and chart repositories.

Task overview

To deploy Kubeturbo, perform the following tasks:

  1. Add the Helm chart repository for Kubeturbo.

  2. Create the YAML resource for the Kubeturbo namespace and credentials.

  3. Deploy Kubeturbo.

    Deployment creates the following resources in the cluster:

    • Service account and binding to the Kubeturbo cluster role

    • Updated configMap containing the required information for Kubeturbo to connect to Turbonomic

    • Deployment of Kubeturbo

  4. Validate the deployment.

Note:

Turbonomic gathers information from your clusters through the Kubeturbo container images. By default, these images are pulled from IBM Container Registry (icr.io). If you prefer to pull these images from your private repository, configure that repository before deploying Kubeturbo. For more information about private repositories, see this topic.

Adding the Helm chart repository for Kubeturbo

Run the following command to add the Helm chart repository for Kubeturbo to your local Helm registry.

helm repo add turbonomic https://ibm.github.io/turbonomic-container-platform/

You can run the following commands for additional options.

  • Retrieve the latest versions of the packages.

    helm repo update
  • See the available charts and their version numbers.

    helm search repo turbonomic
Note:

If you need to uninstall the chart later, run helm uninstall kubeturbo --namespace turbo.

Creating the YAML resource for the Kubeturbo namespace and credentials

This YAML resource specifies the Kubeturbo namespace and Turbonomic instance credentials. Be sure to set up and record the credentials before performing this task.

  1. Download the YAML resource.

    curl -O https://raw.githubusercontent.com/turbonomic/kubeturbo-deploy/master/deploy/kubeturbo_yamls/kubeturbo_namespace_turbo_credentials_secret.yaml
  2. Update the Kubeturbo credentials in the YAML resource that you downloaded. For example, you can update the YAML resource in VS Code or vi.

    Important:

    The YAML resource that you downloaded specifies the following default values:

    • Namespace – turbo

    • Secret – turbonomic-credentials

    It is recommended that you leave these values unchanged because the same values are specified in other Kubeturbo resources. Changing the values in the YAML resource but not in the other Kubeturbo resources could result in deployment issues.

    Choose from the following options:

    • (Recommended) OAuth 2.0 clientId and clientSecret

      Use the Turbonomic API to create and manage OAuth 2.0 client credentials. These credentials are more secure than the local account credentials created from the Turbonomic user interface.

      To create the OAuth 2.0 client credentials, perform the following steps:

      1. Log in to your Turbonomic instance.

      2. In your browser's address bar, change the URL to https://{your_instance_address}/swagger/#/Authorization/createClient.

        For example, change the URL to https://my-instance.com/swagger/#/Authorization/createClient.

      3. Click Try it out.

      4. In the body section, replace the sample request with the following request. This request has all the required parameters for generating OAuth 2.0 credentials for Kubeturbo.

        {
          "clientName": "kubeturbo",
          "grantTypes": [
            "client_credentials"
          ],
          "clientAuthenticationMethods": [
            "client_secret_post"
          ],
          "scopes": [
            "role:PROBE_ADMIN"
          ],
          "tokenSettings": {
            "accessToken": {
              "ttlSeconds": 600
            }
          }
        }
      5. Click Execute and then scroll to the Server response section. If the credentials were generated successfully, a response with status code 200 is displayed.

      6. In the response, find and record the clientID and clientSecret credentials. These credentials cannot be retrieved after you close the API, so it is important that you record them now.

      The credentials that you created for Kubeturbo must be converted to Base64.

      In Linux, you can run the following command to convert each credential to Base64. The -n flag prevents a trailing newline from being included in the encoded value.

      echo -n {credential} | base64
      Note:

      If your credential fails to convert to Base64, it might have invalid or unsupported characters and must be changed before it can be converted to Base64 successfully.

      After the conversion, specify the Base64 clientId and clientSecret in the YAML file that you are updating, as shown in the following example.

      data:
        clientid: {Base64_client_ID}
        clientsecret: {Base64_client_secret}
    • (Not recommended) Local user account username and password (in Base64 in a secret)

      Note:

      Support for these credentials will be discontinued in a future release.

      Comment out the clientid and clientsecret parameters, and then specify the username and password in Base64.

      data:
        username: {Base64_username}
        password: {Base64_password}
  3. Deploy the YAML resource.

    1. Log in to your cluster through the command line.

    2. Deploy the YAML resource.

      kubectl apply -f kubeturbo_namespace_turbo_credentials_secret.yaml
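
The Base64 conversion in step 2 can be checked with a quick round trip. The following is a minimal sketch using a hypothetical placeholder value (not a real credential); printf avoids the trailing newline that a bare echo would otherwise include in the encoded value.

```shell
# Encode a credential to Base64 without a trailing newline.
# "my-client-secret" is a hypothetical placeholder value.
encoded=$(printf '%s' 'my-client-secret' | base64)
echo "$encoded"
# → bXktY2xpZW50LXNlY3JldA==

# Decode it again to confirm the round trip is lossless.
printf '%s' "$encoded" | base64 -d
# → my-client-secret
```

If the decoded output does not exactly match the original credential, fix the encoding before deploying the secret.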

Deploying Kubeturbo

  1. (Optional) Perform a dry run to test your deployment.

    helm install --dry-run --debug \
    kubeturbo turbonomic/kubeturbo \
    --namespace turbo \
    --set serverMeta.turboServer={your_instance_address} \
    --set targetConfig.targetName={your_cluster_name}

    Update the following parameters:

    • --set serverMeta.turboServer={your_instance_address}

      Specify the address of your Turbonomic instance, such as https://10.1.1.1 or https://myinstance.com.

    • --set targetConfig.targetName={your_cluster_name}

      Specify the cluster name that will display in the Turbonomic user interface. Spaces are not allowed.

    • If you want to enable move actions for pods with persistent volumes, add the following parameter.

      --set args.failVolumePodMoves=false

    Be sure to resolve any errors before proceeding to the next step.

  2. Deploy Kubeturbo with a specific role by choosing one of the following options. The role that you choose for Kubeturbo determines its level of access to your cluster.

    • Option 1: Default role

      By default, Kubeturbo deploys to your cluster with the cluster-admin role. This role has full control over every resource in the cluster.

      If you want to use the default role, run the following command. This command does not explicitly specify a role parameter, which means that the default role will be used.

      helm install kubeturbo turbonomic/kubeturbo \
      --namespace turbo \
      --set serverMeta.turboServer={your_instance_address} \
      --set targetConfig.targetName={your_cluster_name}

      For details about the parameters that you need to configure, see the previous step.

    • Option 2: turbo-cluster-admin custom role

      This custom role specifies the minimum permissions that Kubeturbo needs to monitor your workloads and execute the actions that Turbonomic generates to optimize those workloads.

      If you want to use this role, specify the role parameter, as shown in the last line of the following command.

      helm install kubeturbo turbonomic/kubeturbo \
      --namespace turbo \
      --set serverMeta.turboServer={your_instance_address} \
      --set targetConfig.targetName={your_cluster_name} \
      --set roleName=turbo-cluster-admin

      For details about the other parameters that you need to configure, see the previous step.

    • Option 3: turbo-cluster-reader custom role

      This custom role is the least privileged role. It specifies the minimum permissions that Kubeturbo needs to monitor your workloads. Actions that Turbonomic generates to optimize these workloads can be executed only outside of Turbonomic (for example, directly in your cluster).

      If you want to use this role, specify the role parameter, as shown in the last line of the following command.

      helm install kubeturbo turbonomic/kubeturbo \
      --namespace turbo \
      --set serverMeta.turboServer={your_instance_address} \
      --set targetConfig.targetName={your_cluster_name} \
      --set roleName=turbo-cluster-reader

      For details about the other parameters that you need to configure, see the previous step.

  3. (Optional) If you need to adjust or specify other parameters, see 'Reference: Kubeturbo custom resource values' at the end of this topic.

The deployment starts after you run the command.
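
Each --set flag in the commands above maps to a value in the chart's values file, so the same configuration can be kept in a file and passed with -f instead. The following is a minimal sketch with hypothetical values, using parameters drawn from the reference table at the end of this topic.

```yaml
# my-values.yaml - hypothetical override file for the Kubeturbo chart
serverMeta:
  turboServer: https://myinstance.com   # address of your Turbonomic instance
targetConfig:
  targetName: my-cluster                # cluster name shown in the Turbonomic UI (no spaces)
roleName: turbo-cluster-reader          # least-privileged custom role
args:
  logginglevel: 2                       # higher values increase logging
```

You would then install with a command such as helm install kubeturbo turbonomic/kubeturbo --namespace turbo -f my-values.yaml, and apply later changes to the file with helm upgrade.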

Validating the deployment

  1. Review the output in Helm. The following example indicates that the deployment was successful.

    NAME: kubeturbo
    LAST DEPLOYED: Thu Aug 10 15:42:16 2023
    NAMESPACE: turbo
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  2. Verify the status of the Kubeturbo pod.

    kubectl get pods -n turbo

    The following example result indicates that the pod was deployed and is currently running.

    NAME                    READY  STATUS   RESTARTS  AGE
    kubeturbo-asdf1234asd3  1/1    Running  0         37m
  3. If you use OAuth 2.0 credentials for Kubeturbo, verify that the credentials are configured correctly. In the Kubeturbo logs, a message like the following example indicates a correct configuration.

    I0723 20:21:46.659559       1 tap_service.go:114] Secure credentials is provided, target Kubernetes-ocp-414 could be auto-registered if the server is running in secure mode and secure connection is established
  4. Open the Turbonomic user interface and navigate to Settings > Target Configuration. If the deployment was successful, a new container platform target appears in the list.

    If you do not see the target, there might be a deployment issue that you need to resolve.

    • To start troubleshooting an issue, review the Kubeturbo logs. You can retrieve logs by running the following command.

      kubectl logs kubeturbo-{kubeturbo_pod_ID} -n turbo

      For example:

      kubectl logs kubeturbo-asdf1234asd3 -n turbo
    • If you need further assistance, contact your Turbonomic representative.

Reference: Kubeturbo custom resource values

The following table describes the values that you can specify in the Kubeturbo custom resource to configure your deployment.

Refer to the Kubeturbo CRD for the schema.

Parameter | Default value | Changes to default value | Parameter type and description
--- | --- | --- | ---
args.cleanupSccImpersonationResources | true | Optional | Boolean. Cleans up the resources for SCC impersonation by default. For details about SCC impersonation, see this topic.
args.discovery-interval-sec | None | Optional | Number, in seconds
args.discovery-sample-interval | None | Optional | Number, in seconds
args.discovery-samples | None | Optional | Number
args.discovery-timeout-sec | None | Optional | Number, in seconds
args.discovery-workers | None | Optional | Number
args.garbage-collection-interval | None | Optional | Number, in seconds
args.kubelethttps | true | Optional; change to false if using Kubernetes 1.10 or older | Boolean
args.kubeletport | 10250 | Optional; change to 10255 if using Kubernetes 1.10 or older | Number
args.logginglevel | 2 | Optional | Number. A higher value increases logging.
args.pre16k8sVersion | false | Optional | Boolean. If the Kubernetes version is older than 1.6, add another arg for move/resize actions.
args.sccsupport | None | Optional | Enabled by default in clusters that use SCC.
args.stitchuuid | true | Optional; change to false if the IaaS is VMM or Hyper-V | Boolean
daemonPodDetectors.daemonPodNamespaces1 and daemonPodNamespaces2 | daemonSet kinds are allowed for node suspension by default; adding this parameter changes the default | Optional, but required to identify pods in the namespace to be ignored for cluster consolidation | Regular expression; values are quoted and comma-separated, such as "kube-system","kube-service-catalog","openshift-.*"
daemonPodDetectors.daemonPodNamePatterns | daemonSet kinds are allowed for node suspension by default; adding this parameter changes the default | Optional, but required to identify pods matching this pattern to be ignored for cluster consolidation | Regular expression, such as .*ignorepod.*
HANodeConfig.nodeRoles | Any value for the label key node-role.kubernetes.io/ for master and others | Optional. Used to automate policies that keep nodes of the same role limited to one instance per ESX host or availability zone | Values in values.yaml or cr.yaml use escapes or quotes, or are comma-separated. Master nodes are included by default; other roles are populated as nodeRoles: "\"foo\",\"bar\""
image.busyboxRepository | icr.io/cpopen/turbonomic/cpufreqgetter:1.0 | Optional | Full path to the repository, image, and tag
image.cpufreqgetterRepository | icr.io/cpopen/turbonomic/cpufreqgetter | Optional | Repository used to get the node CPU frequency
image.imagePullSecret | None | Optional | Secret used to authenticate to the container image registry
image.pullPolicy | IfNotPresent | Optional |
image.repository | icr.io/cpopen/turbonomic/kubeturbo | Optional | Path to the repository; must be used with image.tag
image.tag | Depends on product version | Optional | Kubeturbo tag; must be used with image.repository
kubeturboPodScheduling.affinity | None | Optional | Affinity pod scheduling constraint in the cluster. For examples, see the Kubernetes documentation.
kubeturboPodScheduling.nodeSelector | None | Optional | nodeSelector pod scheduling constraint in the cluster. For examples, see the Kubernetes documentation.
kubeturboPodScheduling.tolerations | None | Optional | Tolerations pod scheduling constraint in the cluster. For examples, see the Kubernetes documentation.
masterNodeDetectors.nodeLabels | Any value for the label key node-role.kubernetes.io/master | Deprecated. Previously used to avoid suspending masters identified by a node label key-value pair; the value is ignored if there is no match | Regular expression; specify the key as masterNodeDetectors.nodeLabelsKey (such as node-role.kubernetes.io/master) and the value as masterNodeDetectors.nodeLabelsValue (such as .*)
masterNodeDetectors.nodeNamePatterns | Node name includes .*master.* | Deprecated. Previously used to avoid suspending masters identified by node name; the value is ignored if there is no match | String; regular expression, such as .*master.*
restAPIConfig.opsManagerPassword | None | Optional | Administrator password
restAPIConfig.opsManagerUserName | None | Optional | Local or Active Directory user with the site administrator role
restAPIConfig.turbonomicCredentialsSecretName | turbonomic-credentials | Required only if you use a secret and do not use the default secret name | Secret that contains the Turbonomic site administrator username and password in Base64
roleBinding | turbo-all-binding | Optional | Name of the cluster role binding
roleName | cluster-admin | Optional | Custom turbo-cluster-reader or turbo-cluster-admin role instead of the default cluster-admin role
serverMeta.proxy | None | Optional | Format of http://username:password@proxyserver:proxyport or http://proxyserver:proxyport
serverMeta.turboServer | None | Required | HTTPS URL to log in to Turbonomic
serverMeta.version | None | Optional | Number, x.y
serviceAccountName | turbo-user | Optional | Name of the service account
targetConfig.targetName | {Your_cluster} | Optional, but required for multiple clusters | String that indicates how you want to identify your cluster