Kubernetes common configuration parameters

You can edit the custom resource YAML to apply your desired configuration during or after deployment.

Custom Resource values

The following table describes the values that you can specify in the Kubernetes custom resource to configure your deployment. For a complete list of options, refer to the custom resource definition for the Turbonomic Operator in the CRD YAML file.

Note:

All parameters are in the global section of the custom resource unless otherwise specified.

Some of these parameters might be required for your Kubernetes cluster.

Each of the following entries describes the configuration, the parameter to modify, how to modify it, and the default value.
Disabling the default ingress nginx

You can configure your own ingress to route requests to the Turbonomic server. To use your own ingress, disable nginx. For example:

    nginx:
      nginxIsPrimaryIngress: false
      httpsRedirect: false

For more information, see Platform Provided Ingress and Red Hat OpenShift Routes.

Note:

(Red Hat OpenShift requirement) Because nginx should be deployed as a proxy service rather than as the primary ingress, you must edit the custom resource to include the nginx attributes. Additionally, you can choose to have Turbonomic create a route in front of nginx, or you can create the route manually.
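
For example, to have Turbonomic create the route in front of nginx, you can enable the ingress sections that also appear later in this topic. This is a minimal sketch, assuming the default spec layout:

spec:
  nginx:
    nginxIsPrimaryIngress: false
  # enable both sections below so that Turbonomic creates a single route that points to the nginx service
  openshiftingress:
    enabled: true
  nginxingress:
    enabled: true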

Enabling targets

<targetname>: enabled

All enabled target probes use the following format: <targetname>: enabled: true, such as vcenter: enabled: true.
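
For example, the following minimal sketch enables the vCenter target probe, assuming the default spec layout where each target section sits alongside the other component sections:

spec:
  # enable the vCenter target probe; other targets follow the same <targetname>: enabled pattern
  vcenter:
    enabled: true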

For a list of supported target probes and the parameters that are required to enable the target, see this sample CR.

Default: None enabled
Specifying private repository and image pull credentials

repository and registry

After the images are pulled into your private registry, update the registry and repository location of the Turbonomic server container images in the custom resource YAML. Edit the custom resource to include the repository and registry attributes. For example:

 repository: <yourRegistry>/<yourRepository>
 # uncomment the line below if using the Red Hat Container Catalog, and specify `registry.connect.redhat.com/turbonomic` as the `repository`
 # when deploying from the OCP Operator Hub, the `repository` and `customImageNames` will be preconfigured for you
 # customImageNames: false

 # for pull credentials, the registry parameter is required and the value can be the same as repository
 # uncomment what you need below if you need to specify pull credentials. Note that these are used for all images.
 # registry: <yourRegistry>/<yourRepository>
 # imageUsername: turbouser
 # imagePassword: turbopassword
 # imagePullSecret: <yourSecret>

For more information, see Working with a Private Repository and Image Pull Secrets.

Default: *.icr.io
Specifying non-default storage class

storageClassName

The Turbonomic deployment uses the cluster's default storage class if one is defined. If you want to use a different storage class, specify the name of the storage class in storageClassName: <yourStorageClass>. For example:

  repository: icr.io/cpopen/turbonomic 
  tag: 8.14.3
  storageClassName: <yourStorageClass>
  externalIP: 10.97.96.3
  pullPolicy: Always

For more information, see Storage Class Requirements.

Default: Use the cluster's default storage class
Enabling an external database

externalDBName

You can install Turbonomic directly to a Red Hat OpenShift or Kubernetes cluster (instead of installing the VM image), and you can provide your own historical database by setting the externalDBName parameter. For example:

    externalDBName: <yourDB.yourURL.com>
  properties:
    global:
      enableSecureDBConnection: true
      dbPort: 6033
      dbRootPassword: vmturbo
      dbRootUsername: turboadmin
      # additional properties may be required for AWS RDS and Azure DB Services

For more information, see Configuring a Remote Database.

Default: None; local containerized DB and PV
Specifying the group ID for all pods

securityContext

Components need read/write access to their Persistent Volumes (PVs) and may use a range of fsGroup IDs. Where required, the UID that a pod uses to write to its PV is configured with the project's SCC UID, which you obtain from the project's properties and supply to the Turbonomic deployment through the custom resource YAML. Specify this information in the securityContext: fsGroup parameter. For example:

    externalDbIP: 10.0.2.15
    repository: icr.io/cpopen/turbonomic
    securityContext:
      fsGroup: 1000680000
Note:

Security Context is a requirement for Red Hat OpenShift. For more information, see Determine the Security Context Constraints (SCC) UID range and Set the Security Context Constraints.

For all other Kubernetes deployments, see the information related to storage classes.

 
Adding IAM role support for AWS Mediation

mediation-aws: serviceAccountName

mediation-awscloudbilling: serviceAccountName

mediation-awscost: serviceAccountName

Connection to AWS through an IAM role is supported when a Turbonomic instance is deployed to Kubernetes in AWS through Red Hat OpenShift Service on AWS (ROSA) or Amazon EKS. For these deployments, cluster configurations must support an OIDC provider and webhooks.

(Best Practice) Manually create a separate service account for the AWS Mediation pods to use. You must then modify the Custom Resource YAML to specify this service account to the AWS Mediation components. For example:

spec:
  mediation-aws:
    serviceAccountName: t8c-iam-role
  mediation-awscloudbilling:
    serviceAccountName: t8c-iam-role
  mediation-awscost:
    serviceAccountName: t8c-iam-role

For more information, see this topic.

Default: IAM user
Enabling NGINX service annotations

ingress: annotations

To modify the nginx LoadBalancer type service to use an internal IP address on your load balancer, use the annotation required for your Kubernetes platform and version. Use an annotation that is applicable to your environment, such as ingress: annotations: service.kubernetes.io/{annotation}: "{value}". For example:

    ingress:
      annotations:
        #provide the correct annotations based on the LoadBalancer you want and the properties you want.
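
As an illustration, the following sketch requests an internal load balancer on AWS; the annotation key is an AWS-specific assumption, and other platforms use different keys and values:

    ingress:
      annotations:
        # assumption: AWS environment; request an internal (non-public) load balancer
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"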

For more information, see NGINX Service Configuration Options and Using Platform Provided Ingress.

Default: None
Using a self-signed certificate for UI HTTPS

ingress: annotations

Relevant only for the provided nginx ingress.

You can use AWS Certificate Manager (ACM) to provide a certificate to the AWS LoadBalancer created for the Turbonomic nginx Service by adding the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation with the ACM ARN to the Service.

If you are running in AWS and have ACM, see Self-Signed Certificates and AWS Certificate Manager for more information.

Default: Unsigned certificate
Enabling secure LDAP integration

auth-secret

If your company policy requires secure access, you can use a certificate with your LDAP service to set up secure access for your users. For example, you can configure Active Directory (AD) accounts to manage external authentication for users or user groups. The user interface to enable AD includes a Secure option, which enforces certificate-based security.

Configure LDAP first in the UI, then update the Turbonomic configuration after deployment to apply the certificate.

For more information, see Enforcing Secure Access via LDAP.

Default: None; configure after deployment
Enabling SSO integration

samlEnabled or openIdEnabled

If your company policy supports Single Sign-On (SSO) authentication, you can configure Turbonomic to support SSO authentication via either Security Assertion Markup Language (SAML) 2.0 or OpenID Connect 1.0.

For more information, see Single Sign-On Authentication.

Default: Not enabled; configure after deployment
Enabling self-monitoring for Kubeturbo

kubeturbo: enabled

You can enable Kubeturbo using the following format: kubeturbo: enabled: true.
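
For example, a minimal sketch assuming the default spec layout:

spec:
  kubeturbo:
    enabled: true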

Default: Not enabled
Enabling monitoring of other Kubernetes clusters  

You can deploy Kubeturbo through YAML, Helm, or an operator.

For more information, see Connecting to Kubernetes Clusters.

Enabling Turbonomic on Turbonomic APM

prometheus, exporters, prometurbo, kubeturbo

Start with the sample custom resource YAML to enable prometheus, exporters, prometurbo, and kubeturbo.
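
As a rough sketch, these sections follow the same <name>: enabled pattern as other components; confirm the exact section names, particularly for the exporters, against the sample custom resource YAML:

spec:
  prometheus:
    enabled: true
  # the individual exporter sections are listed in the sample custom resource YAML
  prometurbo:
    enabled: true
  kubeturbo:
    enabled: true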

Default: Not enabled
Enabling SaaS Reporting  

The Turbonomic platform includes an optional SaaS-based reporting component that you can choose to enable. You can use SaaS reporting to understand trends in application resource management and to share insights with stakeholders through reports and Liveboards.

SaaS reporting for on-prem environments is not currently available. For more information, see Enabling SaaS Reporting.

Default: Not enabled
Enabling embedded reporting

grafana: enabled

The Turbonomic platform includes an embedded reporting component that you can choose to enable when you install the platform. Embedded reporting stores a history of your managed environment and then presents selective snapshots of this history with a set of standard dashboards and reports.

To enable embedded reporting, find the grafana: section in the YAML resource and uncomment the enabled: true line.
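
For example, after uncommenting, the section looks like this:

spec:
  grafana:
    enabled: true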

For more information, see Enabling embedded reporting.

Default: Not enabled
Enabling Data Exporter

extractor: enabled

You can enable just the Data Exporter feature, which provides data export without Postgres or Grafana. To enable the Data Exporter, find the extractor: section in the YAML resource and uncomment the enabled: true line. You also need to set enableDataExtraction: true under spec: properties: extractor:. For example:

spec:
  extractor:
    enabled: true
  properties:
    extractor:
      enableDataExtraction: true
Note:

If you already have properties defined in your custom resource, you must merge the properties: extractor parameters into what is already in the CR.

 
Enabling audit log forwarding

rsyslog

Organizations can redirect the audit log to their centralized logging systems for analysis and tamper resistance. Turbonomic supports the syslog protocol, which enables it to easily integrate logging with many third-party logging solutions, such as Splunk.

For more information, see Redirecting Audit and Container Logs.

Default: All logging is centralized to the rsyslog pod log.

NGINX service configuration options

The default Turbonomic configuration sets up an nginx deployment in the Turbonomic namespace, creates the service “nginx” (type: LoadBalancer, externalTrafficPolicy: Local), and attempts to obtain a public external IP. You have several options for keeping the routing logic defined in the nginx service while retaining the flexibility to define your own ingress or route to Turbonomic, or to annotate the load balancer configuration.

Option 1: NGINX as proxy and bring your own ingress or route

  • You can configure nginx as a ClusterIP type service, allowing you to use your own ingress or route, and still maintain the Turbonomic internal routing rules and leverage nginx as a proxy.

  • This is required to leverage embedded reporting on your Turbonomic instance with your own ingress or route. You must set the nginxIsPrimaryIngress parameter to false.

Note:

You do not need this configuration if you are running embedded reporting with the Turbonomic provided nginx service as the ingress.

  kind: XL
  metadata:
    name: xl-release
  spec:
    nginx:
      nginxIsPrimaryIngress: false
    #use openshiftingress and nginxingress if you would like Turbonomic to create a single route that will point to the nginx service
    #openshiftingress:
    #  enabled: true
    #nginxingress:
    #  enabled: true

To create your own ingress, see Using Platform Provided Ingress and the Ingress minimum requirements.

Option 2: NGINX as a LoadBalancer service type and customize annotations

Turbonomic does not deploy an ingress controller; it deploys nginx as a Kubernetes service, which can create a cloud provider LoadBalancer (external type) or be used as an internal ClusterIP service behind a customer (platform) provided ingress.

Things to consider:

  • The types of LoadBalancers that are available based on your cloud or infrastructure provider.

  • Understand the available service annotations and which of them are required for your environment.

  • Turbonomic's default nginx service configuration is opinionated (type: LoadBalancer with externalTrafficPolicy: Local), so review whether that default suits your environment.

To modify the nginx LoadBalancer type service to use an internal IP address on your LoadBalancer, use the annotation required for your Kubernetes platform, version, and environment.

The Turbonomic platform provides a place in the XL custom resource where you can add the annotations that you have determined are required for your LoadBalancer; they are applied to the nginx service:
  global: 
    ingress:
      annotations:
        #provide the correct annotations based on the LoadBalancer and properties you want.

Option 3: NGINX external traffic policy

In some environments where you are using the nginx service as your ingress (as a LoadBalancer type), you might want to change the externalTrafficPolicy from the default of Local to a value such as Cluster. Add this parameter under nginx and combine it with any other parameters you have already defined there:

 spec:
   global:
     ingress:
       annotations:
   nginx:
     externalTrafficPolicy: Cluster

Self-signed certificates and AWS Certificate Manager (ACM)

You can deploy the Turbonomic Server on a Kubernetes cluster running in AWS, using the Turbonomic provided nginx as a LoadBalancer type service.

You can use AWS Certificate Manager (ACM) to provide a certificate to the AWS LoadBalancer created for the Turbonomic nginx service by adding the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation with the ACM Amazon Resource Name (ARN) to the service.

Note:

If you want to modify a deployed CR, you must apply the CR change then delete the existing nginx Service. The Operator recreates the nginx Service with the certificate annotation.

  1. Create a certificate in AWS Certificate Manager (ACM) and note its ARN.

  2. In the Turbonomic custom resource (CR), under ingress: annotations, specify the aws-load-balancer-ssl-cert annotation with the ACM ARN. The LoadBalancer created for the nginx Service uses this certificate for TLS termination; combine this annotation with any other service annotations required for the LoadBalancer type that you have chosen (see the sketch after these steps).

    Refer to the AWS documentation for the exact syntax for the required annotation.

  3. Apply the CR.

    Turbonomic creates the nginx Service that references the certificate to be used for TLS termination on the LoadBalancer.
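
As a sketch of step 2, the annotation is added where the other load balancer annotations go; the ARN shown is a placeholder that you replace with your own certificate's ARN, and you might need to combine it with other annotations required by your chosen LoadBalancer type:

  global:
    ingress:
      annotations:
        # placeholder value; replace with your ACM certificate ARN
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>"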