Configuring IBM Cloud Pak foundational services by using the CommonService custom resource
The IBM Cloud Pak foundational services operator creates the CommonService custom resource (CR) in the ibm-common-services namespace.
The custom resource contains the services, cluster configurations, and hardware profile that you can set before or after you install foundational services.
You can access the CommonService custom resource by using the OpenShift Container Platform console or by using the command-line interface (CLI).
- To use the console, complete these steps:
  - From the navigation pane, click Operators > Installed Operators.
  - From the Project drop-down list, select ibm-common-services.
  - Click IBM Cloud Pak foundational services.
  - Select the CommonService tab. You can see the common-service custom resource instance.
  - Click the common-service resource.
  - Select the YAML tab.
- To use the CLI, run the following command:
oc edit CommonService common-service -n ibm-common-services
Updating the custom resource
Set the following parameters by adding them to the CommonService custom resource. You can update these parameters before you create an OperandRequest instance or after you install the services. Add or modify the parameters and values in the spec section.
- Approval strategy
- Bring your own CA Certificate
- License
- Hardware profile
- Storage class
- Crossplane service
- Services configuration
- General settings
- MongoDB settings
- Authentication settings
  - Changing the default admin username
  - Delegating authentication to OpenShift (ibm-iam-operator)
  - Assigning the Cloud Pak administrator privileges to an OpenShift user
  - Adding custom OIDC claims
  - Disabling nonce
  - Adding description of the login options (ibm-commonui-operator)
  - Setting the preferred login options (ibm-iam-operator)
  - Changing the OIDC issuer URL (ibm-iam-operator)
  - Changing the provider issuer URL (ibm-iam-operator)
  - Changing the cluster name (ibm-iam-operator)
  - HA configuration
  - Sample ibm-iam-operator operator section in the custom resource
  - Configuring resources for IAM (ibm-iam-operator)
- Logging service settings Note: The deprecated Logging service is removed in IBM Cloud Pak® foundational services version 3.7.x.
- Log storage
- Audit logging settings
- License Service settings
- Platform API settings
- Catalog UI settings
- Helm API settings
- Helm Repo settings
- Monitoring settings
  - PrometheusExt settings
  - Configure resources for Monitoring PrometheusExt (ibm-monitoring-prometheusext-operator)
  - Exporters settings
  - Configure resources for Monitoring Exporters (ibm-monitoring-exporters-operator)
  - Grafana settings
  - Configure resources for Monitoring Grafana (ibm-monitoring-grafana-operator)
- Certificate management service settings
- Common Web UI settings
- Serviceability settings
- Management Ingress settings
- Nginx Ingress settings
- Metering settings Note: The deprecated Metering service is removed in IBM Cloud Pak® foundational services version 3.7.x.
- Common UI settings
Approval strategy
The approval strategy defines whether approval is needed to install or upgrade IBM Cloud Pak foundational services. By default, the approval strategy is set to Automatic; however, you can change this setting during installation. You can also set the approval strategy after the installation by adding or changing the installPlanApproval parameter in the custom resource. If the approval strategy is set to Automatic, the operator is automatically installed or upgraded when a new version is available. If you set the installPlanApproval parameter to Manual, the operator is not automatically installed or upgraded. Instead, you get an install plan that must be manually approved before an upgrade.
Notes:
- When you set installPlanApproval for the IBM Cloud Pak foundational services operator, it applies to all foundational services that are installed with this operator.
- If you install the IBM Cloud Pak foundational services operator in a namespace where another operator already has installPlanApproval: Manual set in its subscription, the IBM Cloud Pak foundational services operator inherits this setting. The approval plan is set to Manual and the IBM Cloud Pak foundational services operator cannot be automatically installed or upgraded.
Changing approval strategy from Automatic to Manual
To change the approval strategy from Automatic to Manual, change the installPlanApproval parameter value to Manual in the spec section of the CommonService CR.
Note: If the installPlanApproval parameter is not in the CR, add the following line to the spec section: installPlanApproval: Manual.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  installPlanApproval: Manual
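With the Manual strategy in place, each pending install plan must be approved before OLM installs or upgrades the operator. A minimal CLI sketch, assuming the default ibm-common-services namespace; install-xxxxx is a placeholder for the actual name that the first command returns:

```shell
# List install plans in the namespace; the APPROVED column shows which are pending
oc get installplan -n ibm-common-services

# Approve a pending install plan by setting spec.approved to true
oc patch installplan install-xxxxx -n ibm-common-services \
  --type merge -p '{"spec":{"approved":true}}'
```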
Changing approval strategy from Manual to Automatic
Note: Before you begin, make sure that the operators in the namespace that are not a part of IBM Cloud Pak foundational services have the installPlanApproval parameter set to Automatic in their subscriptions. Otherwise, even if you change the approval strategy from Manual to Automatic for the IBM Cloud Pak foundational services operator, this setting might be overwritten.
To change the approval strategy from Manual to Automatic, complete the following steps:
- Change the installPlanApproval parameter value to Automatic in the spec section of the CommonService CR. Note: If the installPlanApproval parameter is not in the CR, add the following line to the spec section: installPlanApproval: Automatic.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  installPlanApproval: Automatic
- Change the installPlanApproval parameter value to Automatic in the IBM Cloud Pak foundational services subscription. If you have more than one IBM Cloud Pak foundational services subscription, change the value of the installPlanApproval parameter in all subscriptions.
Namespace permissions
Manually authorize foundational services to watch a namespace by adding manualManagement: true in the spec section of the CommonService CR. For more information about namespace permissions, see IBM NamespaceScope Operator.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  size: medium
  manualManagement: true
Bring your own CA Certificate
You can replace the foundational services self-signed certificate authority (CA) certificate with your own CA certificate. To do so, first set BYOCACertificate: true in the spec section of the CommonService CR.
Note: If the BYOCACertificate specification is not in the CR, add it in the spec section as shown in the following example:
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  BYOCACertificate: true
After you update the CommonService CR, complete the following steps to replace the foundational services self-signed CA certificate with your own CA certificate:
- Prepare and have your Transport Layer Security (TLS) certificate, TLS private key, and CA certificate ready.
- Create a backup of the foundational services self-signed CA certificate. By default, the certificate is created in the ibm-common-services namespace.
oc get certificate cs-ca-certificate -n ibm-common-services -o yaml > cs-ca-certificate.yaml
- Delete the foundational services self-signed CA certificate resource so that the cert-manager service does not re-create the updated secret.
oc delete certificate cs-ca-certificate -n ibm-common-services
- Delete the foundational services self-signed CA certificate secret.
oc delete secret cs-ca-certificate-secret -n ibm-common-services
- Re-create the cs-ca-certificate-secret secret with your CA certificate (ca.crt), TLS certificate (tls.crt), and private key (tls.key).
oc -n ibm-common-services create secret generic cs-ca-certificate-secret --from-file=ca.crt=<your path>/ca.crt --from-file=tls.crt=<your path>/tls.crt --from-file=tls.key=<your path>/tls.key
- Refresh the leaf certificates that are created by individual foundational services. For more information and steps to refresh the leaf certificates, see Refreshing foundational services internal certificates.
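If you want to rehearse this replacement flow before using real certificates, you can generate a throwaway CA and a TLS pair signed by it with openssl. The subject names here are illustrative only; production clusters should use certificates issued by your own CA:

```shell
# Create a throwaway CA (ca.crt / ca.key), valid for 30 days
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=example-ca" -keyout ca.key -out ca.crt

# Create a TLS private key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=cs-ca-certificate" -keyout tls.key -out tls.csr

# Sign the CSR with the throwaway CA to produce tls.crt
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 30 -out tls.crt

# Confirm that the chain validates
openssl verify -CAfile ca.crt tls.crt
```

The three resulting files (ca.crt, tls.crt, tls.key) are the inputs that the oc create secret generic command in step 5 expects.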
License
Accept the license to use foundational services. To do so, add spec.license.accept: true in the spec section. See the following sample code:
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  license:
    accept: true
  size: as-is
Hardware profile
Set the hardware requirements profile based on the workloads in your cluster. For more information about the profiles, see Hardware requirements and recommendations for foundational services.
You can use the templates to update the hardware requirements of the services that you are installing. You can also use the templates to set the configuration parameters of the services.
The default profile is starterset. You can change the profile to starter, small, medium, production, or large, if required. If you are upgrading your cluster from a previous release, the default profile setting is as-is, which means that the hardware requirements setting from the previous release is retained.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  size: as-is
Storage class
Instead of specifying the storage class for each service in the spec.<service-name> section, you can specify a storage class for all services in the spec section. Any service that needs a storage class then uses this specified storage class.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  size: as-is
  storageClass: <storage-class-name>
When you specify a storage class in the spec section, the storage class of a service is updated to reflect this value only if the service is not yet deployed. For example, MongoDB and the Monitoring service both use a storage class. If IBM Cloud Pak A specifies rook-cephfs as the storage class in the spec section and requests only MongoDB, then MongoDB is deployed with rook-cephfs as the storage class. Although IBM Cloud Pak A did not request the Monitoring service, internally the storage class for the Monitoring service is set to rook-cephfs. In the same cluster, if IBM Cloud Pak B then specifies rook-ceph-block as the storage class in the spec section and requests the Monitoring service, the service is deployed with rook-ceph-block as the storage class. The storage class for MongoDB is not changed; it continues to use rook-cephfs.
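A global storage class can also be combined with a per-service setting in a single CR. The following sketch reuses the storage class names from the example above; it assumes (this is not stated above) that a storage class set under spec.services takes precedence for that service:

```yaml
# Sketch: global storage class plus a per-service override for MongoDB.
# Assumption: a value under spec.services wins over spec.storageClass for that service.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  storageClass: rook-cephfs              # used by services with no explicit setting
  services:
    - name: ibm-mongodb-operator
      spec:
        mongoDB:
          storageClass: rook-ceph-block  # applies to MongoDB only
```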
Crossplane service
Crossplane is an open source Kubernetes add-on that extends any cluster with the ability to provision and manage cloud infrastructure, services, and applications by using kubectl, GitOps, or any tool that works with the Kubernetes API.
The IBM Crossplane operators are automatically installed during foundational services installation. However, you must manually enable the service if you want to use it in your cluster.
Enable the service by adding the following piece of code in the CommonService CR:
spec:
  features:
    bedrockshim:
      enabled: true
From foundational services version 3.19.10 onwards, if you want to uninstall the IBM Crossplane Provider operators, you can add the crossplaneProviderRemoval: true configuration to the CommonService CR. This configuration uninstalls only the IBM Crossplane Provider service and leaves the Crossplane service untouched. See the following sample:
spec:
  features:
    bedrockshim:
      enabled: true
      crossplaneProviderRemoval: true
When you add crossplaneProviderRemoval: true, foundational services deletes the CSV and subscription of the following operators:
- ibm-crossplane-provider-ibm-cloud-operator
- ibm-crossplane-provider-kubernetes-operator
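The fragments above sit under spec.features in a complete CommonService CR. A minimal sketch, assuming the default resource name and namespace:

```yaml
# Sketch: enabling Crossplane and removing the Provider operators in one CR.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  features:
    bedrockshim:
      enabled: true
      crossplaneProviderRemoval: true
```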
Services configuration
Add any service-related configuration in the spec.services section. First, add services: under spec:. You need to add the services: section only once. You can add all service-related configuration under the same section.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  services:
Then, add the service configuration. For example, if you are adding a storage class for MongoDB, the configuration would be as shown in the following example:
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  services:
    - name: ibm-mongodb-operator
      spec:
        mongoDB:
          storageClass: cephfs
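Because the services: section is declared once, configurations for several operators live side by side as list items. A sketch that combines the MongoDB storage class above with the IAM cluster-name setting described later on this page:

```yaml
# Sketch: one services: section holding configuration for two operators.
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  services:
    - name: ibm-mongodb-operator
      spec:
        mongoDB:
          storageClass: cephfs
    - name: ibm-iam-operator
      spec:
        authentication:
          config:
            clusterName: mycluster
```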
MongoDB settings
Storage class (ibm-mongodb-operator)
Specify a storage class name for the persistent volume. If you do not add the storage class configuration, the default storage class that is available in the cluster is used for MongoDB.
- name: ibm-mongodb-operator
  spec:
    mongoDB:
      storageClass: <storage_class_name>
Note: If you change the resource settings after an upgrade, the default settings are restored.
If you are installing your IBM Cloud Pak on IBM Cloud®, use the ibmc-file-gold-gid or ibmc-block-gold storage class.
Changing the default values of the MongoDB username and password (ibm-mongodb-operator)
The ibm-mongodb-operator generates a default username and password, which are random strings. You can change these parameter values before you install the IBM Cloud Pak foundational services.
- Get the base64-encoded values of the new username and password.
echo -n <new-username or new-password> | base64
- Define a YAML file of kind Secret. Use the ibm-common-services namespace and the base64-encoded username and password values in this YAML file. Following is a sample file:
apiVersion: v1
kind: Secret
metadata:
  name: icp-mongodb-admin
  namespace: ibm-common-services
type: Opaque
data:
  password: SFV6a2NYMkdKa2tBZA==
  user: dGpOcDR5Unc=
- Create the secret by running the following command:
kubectl apply -f <YAML-file-name>.yaml
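The encoding step can be scripted. Note that printf '%s' (like echo -n) emits no trailing newline, so none ends up in the encoded credential; the username and password values here are examples only:

```shell
# Base64-encode example credentials for the icp-mongodb-admin secret.
# printf '%s' emits no trailing newline, so none is encoded into the value.
printf '%s' 'mongoadmin' | base64       # -> bW9uZ29hZG1pbg==
printf '%s' 'S3curePassw0rd' | base64
```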
Configuring resources for MongoDB (ibm-mongodb-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for MongoDB.
replicas: <number-of-replicas>
resources:
  requests:
    cpu: <cpu-request>
    memory: <memory-request>
  limits:
    cpu: <cpu-limit>
    memory: <memory-limit>
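As a worked example with the placeholders filled in (the numbers are illustrative, not recommendations):

```yaml
# Illustrative MongoDB resource settings; size to your own workload.
- name: ibm-mongodb-operator
  spec:
    mongoDB:
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: 1000m
          memory: 1Gi
```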
Authentication settings
- Changing the default admin username
- Delegating authentication to OpenShift
- Adding custom OIDC claims
- Disabling nonce
- Adding description of the login options
- Setting the preferred login options (ibm-iam-operator)
- Changing the OIDC issuer URL
- Changing the provider issuer URL
- Changing the cluster name
- HA configuration
- Sample ibm-iam-operator operator section in the custom resource
- Configuring resources for IAM (ibm-iam-operator)
Changing the default admin username
The IBM Cloud Pak foundational services installation creates a default admin user, who is a cluster administrator. You can customize the username by adding the defaultAdminUser parameter.
Note: If you already have a user named admin in your cluster, you must set the defaultAdminUser parameter to a custom name that is not admin before you install foundational services. This prevents your admin user from being removed if you uninstall foundational services later.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        defaultAdminUser: <custom-username>
Delegating authentication to OpenShift (ibm-iam-operator)
Authentication with Red Hat OpenShift is enabled by default. The ibm-iam-operator has the following default configuration. If you do not update these parameters, authentication with OpenShift is enabled with no prefix.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        roksEnabled: true
        roksURL: <your-endpoint-URL>
        roksUserPrefix: ""
You can disable the authentication, or update the parameters as required. Following are the parameter descriptions:
- roksEnabled: Set to false to disable authentication with Red Hat OpenShift.
- roksURL: The public service endpoint URL of your public cloud cluster. For more information about how to get the URL, see Updating OpenShift authentication.
- roksUserPrefix: Prefix to be used with the username. When you access your cluster console or CLI, you use the prefix along with the username to authenticate with OpenShift. If you are using IAM with Red Hat OpenShift Kubernetes Service in IBM Cloud, you must set the prefix to IAM#.
Assigning the Cloud Pak administrator privileges to an OpenShift user
If you are delegating authentication to OpenShift, you can assign the Cloud Pak administrator privileges to an existing OpenShift user. Add the user in the bootstrapUserId parameter.
To add the bootstrapUserId parameter before IAM service installation, see the following configuration:
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        bootstrapUserId: "<custom-username>"
To add the bootstrapUserId parameter after IAM service installation, see Changing the default cluster administrator.
Adding custom OIDC claims
The IAM service uses the default scopes and claims that WebSphere® Application Server Liberty provides. Based on your OIDC authentication requirements, you can customize the OIDC claims that are returned by the UserInfo endpoint. For more information about custom claims, see Adding custom OIDC claims.
To add the custom claims before you install the IAM service, add the following configuration:
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        claimsSupported: "<list-of-claims>"
        claimsMap: "<list-of-claims-map>"
        scopeClaim: "profile=<list-of-claims-in-the-profile>"
See the following example:
apiVersion: operator.ibm.com/v3
kind: CommonService
metadata:
  name: common-service
  namespace: ibm-common-services
spec:
  size: medium
  services:
    - name: ibm-iam-operator
      spec:
        authentication:
          config:
            claimsSupported: "name,family_name,display_name,given_name,preferred_username"
            claimsMap: "name=\"givenName\" family_name=\"givenName\" given_name=\"givenName\" preferred_username=\"givenName\" display_name=\"displayName\""
            scopeClaim: "profile=\"name,family_name,display_name,given_name,preferred_username\""
Disabling nonce
To improve security, nonce is enabled by default. A nonce associates a client session with an ID token and is used in authentication to ensure that attackers do not reuse old sessions.
To disable nonce, add the following configuration:
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        nonceEnabled: false
Adding description of the login options (ibm-commonui-operator)
You can configure multiple authentication types in your cluster. For more information, see Authentication types.
The authentication types that you configure in your cluster are displayed on the console login page. You can provide a short description of the authentication types in the ibm-commonui-operator by configuring the following parameters. The description is then displayed on the console login page.
- name: ibm-commonui-operator
  spec:
    globalUIConfig:
      enterpriseLDAP: <Provide a short description about the enterprise LDAP authentication type.>
      defaultAuth: <Provide a short description about the default authentication type.>
      osAuth: <Provide a short description about the OpenShift authentication type.>
      enterpriseSAML: <Provide a short description about the enterprise SAML authentication type.>
Setting the preferred login options (ibm-iam-operator)
If you configured multiple authentication types in your cluster, but you want the console login page to display only one or a selected set of the configured login options, set the preferredLogin parameter. For example, preferredLogin: SAML,LDAP.
Use the following parameter values:
- For default authentication, use DEFAULT
- For enterprise LDAP, use LDAP
- For enterprise SAML, use SAML
- For OpenShift authentication, use ROKS
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        preferredLogin: <option1>,<option2>,<optionN>
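For example, to show only the SAML and OpenShift tiles on the login page, using the values from the list above:

```yaml
# Display only the SAML and OpenShift login options on the console login page.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        preferredLogin: SAML,ROKS
```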
Changing the OIDC issuer URL (ibm-iam-operator)
The default configuration of OpenID Connect (OIDC) in IBM Cloud Pak for Integration uses <cluster_address>:443 in the authentication endpoints, which are used to authenticate users to Kubernetes. However, the oidcIssuerURL uses the local host IP address; its default value is https://127.0.0.1:443/idauth/oidc/endpoint/OP. If required, you can change the oidcIssuerURL endpoint to use a hostname for Kubernetes authentication.
If you want to use the .well-known/openid-configuration endpoint, you must update the oidcIssuerURL with the hostname that you want to use.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        oidcIssuerURL: <https://<hostname>:443/oidc/endpoint/OP>
Changing the provider issuer URL (ibm-iam-operator)
If you want to use the .well-known/openid-configuration endpoint, you must update the providerIssuerURL with the hostname that you want to use. You can use the endpoint to get the OIDC configuration information of the provider. The default setting is providerIssuerURL: ''.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        providerIssuerURL: <https://<hostname>:443/idprovider/v1/auth/.well-known/openid-configuration>
Changing the cluster name (ibm-iam-operator)
The default cluster name is mycluster. You can change the cluster name by using the clusterName parameter. The cluster name is used in Cloud Resource Names (CRNs).
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        clusterName: <custom-name>
HA configuration
If you are configuring high availability (HA) in your cluster, add the following configuration for the IAM operator:
- name: ibm-iam-operator
  spec:
    authentication:
      replicas: 3
    pap:
      replicas: 3
    policycontroller: {}
    policydecision:
      replicas: 3
    secretwatcher: {}
    securityonboarding: {}
Sample ibm-iam-operator operator section in the custom resource
Following is a sample configuration of the ibm-iam-operator operator in the custom resource. The sample includes the authentication with Red Hat OpenShift and OIDC URL parameters.
- name: ibm-iam-operator
  spec:
    authentication:
      config:
        bootstrapUserId: clusteradmin
        roksEnabled: true
        roksURL: 'https://c100-e.eu-de.containers.cloud.ibm.com:32301'
        roksUserPrefix: 'IAM#'
        oidcIssuerURL: 'https://c100-e.eu-de.containers.cloud.ibm.com:443/oidc/endpoint/OP'
        preferredLogin: ROKS
    oidcclientwatcher: {}
    pap: {}
    policycontroller: {}
    policydecision: {}
    secretwatcher: {}
    securityonboarding: {}
Configuring resources for IAM (ibm-iam-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for IAM.
- name: ibm-iam-operator
  spec:
    authentication:
      replicas: 1
      auditService:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      authService:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      clientRegistration:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      identityManager:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      identityProvider:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
    oidcclientwatcher:
      replicas: 1
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
    pap:
      auditService:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      papService:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      replicas: 1
    policycontroller:
      replicas: 1
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
    policydecision:
      auditService:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
      replicas: 1
    secretwatcher:
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
      replicas: 1
    securityonboarding:
      replicas: 1
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
      iamOnboarding:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
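You do not have to fill in every block of the template above; a fragment that adjusts a single component is enough, assuming components you omit keep their defaults as described at the start of this section. An illustrative sketch that raises only the identity provider resources (the values are examples, not recommendations):

```yaml
# Illustrative fragment: adjust resources for the identity provider only.
- name: ibm-iam-operator
  spec:
    authentication:
      replicas: 1
      identityProvider:
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1Gi
```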
Logging service settings
Note: The deprecated Logging service is removed in IBM Cloud Pak® foundational services version 3.7.x.
- Table 1. General Logging service settings
- Table 2. Filebeat settings
- Table 3. Logstash settings
- Table 4. Kibana settings
- Table 5. Elasticsearch settings
- Table 6. Security settings
- Table 7. Curator settings
General
The following parameters can be specified in the custom resource to override the default values. You do not need to add parameters to the custom resource if you want to use the default values.
| Parameter | Description | Default |
| --- | --- | --- |
| image.pullPolicy | The policy used by Kubernetes for images. | IfNotPresent |
| image.pullSecret.enabled | If set to true, adds an imagePullSecret annotation to all deployments. This setting enables the use of private image repositories that require authentication. | false |
| image.pullSecret.name | The name of the image pull secret to specify. The pull secret is a resource that is created by an authorized user. | regcred |
| general.environment | Describes the target Kubernetes environment to enable the chart to meet specific vendor requirements. Valid values are IBMCloudPrivate, Openshift, and Generic. | IBMCloudPrivate |
| general.clusterDomain | The value that was used during installation of IBM Cloud Private. The chart default corresponds to the IBM Cloud Private default. | cluster.local |
| general.ingressPort | The secure port number used to access services deployed within the IBM Cloud Private cluster. | 8443 |
Filebeat
| Parameter | Description | Default |
| --- | --- | --- |
| filebeat.name | The internal name of the Filebeat pod. | filebeat-ds |
| filebeat.image.repository | Full repository and path to image. | quay.io/opencloudio/icp-filebeat-oss |
| filebeat.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
| filebeat.resources.requests.memory | The minimum memory required per pod. | 64Mi |
| filebeat.image.tag | The version of Filebeat to deploy. | 6.6.1-build.1 |
| filebeat.scope.nodes | One or more label key/value pairs that refine node selection for Filebeat pods. | empty (nil) |
| filebeat.scope.namespaces | List of log namespaces to monitor. Logs from all namespaces are collected if the value is set to empty. | empty (nil) |
| filebeat.tolerations | Kubernetes tolerations that can allow the pod to run on certain nodes. | empty (nil) |
| filebeat.registryHostPath | Location to store the Filebeat registry on the host node. | /var/lib/icp/logging/filebeat-registry/<helm-release-name> |
Logstash
| Parameter | Description | Default |
| --- | --- | --- |
| logstash.name | The internal name of the Logstash cluster. | logstash |
| logstash.image.repository | Full repository and path to image. | quay.io/opencloudio/icp-logstash-oss |
| logstash.image.tag | The version of Logstash to deploy. | 6.6.1-build.1 |
| logstash.replicas | The initial pod cluster size. | 1 |
| logstash.heapSize | The JVM heap size to allocate to Logstash. | 512m |
| logstash.memoryLimit | The maximum allowable memory for Logstash. This setting includes both JVM heap and file system cache. | 1024Mi |
| logstash.port | The port on which Logstash listens for beats. | 5000 |
| logstash.probe.enabled | Enables the liveness probe for Logstash. | false |
| logstash.probe.periodSeconds | Seconds the probe waits before it calls the Logstash endpoint for status again. | 60 |
| logstash.probe.minEventsPerPeriod | A Logstash instance is considered healthy if the number of log events that are processed is greater than logstash.probe.minEventsPerPeriod within logstash.probe.periodSeconds. | 1 |
| logstash.probe.maxUnavailablePeriod | A Logstash instance is considered unhealthy after the API endpoint is unavailable for logstash.probe.periodSeconds * logstash.probe.maxUnavailablePeriod seconds. | 5 |
| logstash.probe.image.repository | Full repository and path to image. | quay.io/opencloudio/logstash-liveness-probe |
| logstash.probe.image.tag | Image version. | 1.0.2-build.2 |
| logstash.probe.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
| logstash.probe.resources.requests.memory | The minimum memory required per pod. | 64Mi |
| logstash.tolerations | Kubernetes tolerations that can allow the pod to run on certain nodes. | empty (nil) |
| logstash.nodeSelector | Kubernetes selector that can restrict the pod to run on certain nodes. | empty (nil) |
Kibana
| Parameter | Description | Default |
| --- | --- | --- |
| kibana.name | The internal name of the Kibana cluster. | kibana |
| kibana.image.repository | Full repository and path to image. | quay.io/opencloudio/icp-kibana-oss |
| kibana.image.tag | The version of Kibana to deploy. | 6.6.1-build.1 |
| kibana.replicas | The initial pod cluster size. | 1 |
| kibana.internal | The port for Kubernetes-internal networking. | 5601 |
| kibana.external | The port used by external users. | 31601 |
| kibana.maxOldSpaceSize | Maximum old space size (in MB) of the V8 JavaScript engine. | 1536 |
| kibana.memoryLimit | The maximum allowable memory for Kibana. | 2048Mi |
| kibana.initImage.repository | Full repository and path to initialization image. | quay.io/opencloudio/curl |
| kibana.initImage.tag | The version of the initialization image to deploy. | 4.2.0-build.3 |
| kibana.routerImage.repository | Full repository and path to the image used as a secure proxy (only used when kibana.access is ingress). | quay.io/opencloudio/icp-management-ingress |
| kibana.routerImage.tag | The version of the secure proxy image to deploy. | 2.5.1 |
| kibana.init.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
| kibana.init.resources.requests.memory | The minimum memory required per pod. | 64Mi |
| kibana.routerImage.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
| kibana.routerImage.resources.requests.memory | The minimum memory required per pod. | 64Mi |
| kibana.tolerations | Kubernetes tolerations that can allow the pod to run on certain nodes. | empty (nil) |
| kibana.nodeSelector | Kubernetes selector that can restrict the pod to run on certain nodes. | empty (nil) |
| kibana.access | How access to Kibana is achieved, either loadBalancer or ingress. | loadBalancer |
| kibana.ingress.path | Path used when access is ingress. | /tenantA/kibana |
| kibana.ingress.labels.inmenu | Determines whether a link is added to the UI navigation menu. | true |
| kibana.ingress.labels.target | If provided, the UI navigation link starts in a new window. | logging-tenantA |
| kibana.ingress.annotations.name | The UI navigation link display name. | Logging - Tenant A |
| kibana.ingress.annotations.id | The parent navigation menu item. | add-ons |
| kibana.ingress.annotations.roles | The roles able to see the UI navigation link. | ClusterAdministrator,CloudPakAdministrator,Administrator,Operator,Viewer |
| kibana.ingress.annotations.ui.icp.ibm.com/tenant | The teams able to see the UI navigation link. | tenantAdev,tenantAsupport (tenantA examples) |
| kibana.security.authc.enabled | Determines whether IBM Cloud Private login is required before access is allowed. | false |
| kibana.security.authz.enabled | Determines whether namespace access is required before access is allowed (requires authc.enabled: true). | false |
| kibana.security.authz.icp.authorizedNamespaces | List of namespaces that allow access. | (tenantA examples) |
Elasticsearch
Parameter | Description | Default |
---|---|---|
elasticsearch.name | A name to uniquely identify this Elasticsearch deployment. | elasticsearch |
elasticsearch.image.repository | Full repository and path to the Elasticsearch image. | quay.io/opencloudio/icp-elasticsearch-oss |
elasticsearch.image.tag | The version of Elasticsearch to deploy. | 6.6.1-build.1 |
elasticsearch.initImage.repository | Full repository and path to the image that is used during startup. | quay.io/opencloudio/icp-initcontainer |
elasticsearch.initImage.tag | The version of the init-container image to use. | 1.0.0-build.3 |
elasticsearch.pkiInitImage.repository | Full repository and path to the image for public key infrastructure (PKI) initialization. | quay.io/opencloudio/logging-pki-init |
elasticsearch.pkiInitImage.tag | The version of the image for PKI initialization. | 2.3.0-build.2 |
elasticsearch.pkiInitImage.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
elasticsearch.pkiInitImage.resources.requests.memory | The minimum memory required per pod. | 64Mi |
elasticsearch.routerImage.repository | Full repository and path to the image that provides proxy support for role-based access control (RBAC). | quay.io/opencloudio/icp-management-ingress |
elasticsearch.routerImage.tag | The version of the image that provides RBAC support. | 2.5.1 |
elasticsearch.routerImage.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
elasticsearch.routerImage.resources.requests.memory | The minimum memory required per pod. | 64Mi |
elasticsearch.internalPort | The port on which the full Elasticsearch cluster communicates. | 9300 |
elasticsearch.security.authc.enabled | Determines whether mutual certificate-based authentication is required before access is allowed. | true |
elasticsearch.security.authc.provider | Elastic Stack plug-in that provides TLS. The only accepted value is icp. | icp |
elasticsearch.security.authz.enabled | Determines whether authenticated user query results are filtered by namespace access. | false |
elasticsearch.data.name | The internal name of the data node cluster. | data |
elasticsearch.data.replicas | The number of initial pods in the data cluster. | 2 |
elasticsearch.data.heapSize | The JVM heap size to allocate to each Elasticsearch data pod. | 4000m |
elasticsearch.data.memoryLimit | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch data pod. | 7000Mi |
elasticsearch.data.antiAffinity | Whether Kubernetes may (soft) or must not (hard) deploy data pods onto the same node. | hard |
elasticsearch.data.storage.size | The minimum size of the persistent volume. | 10Gi |
elasticsearch.data.storage.accessModes | See the official Kubernetes documentation. | ReadWriteOnce |
elasticsearch.data.storage.storageClass | See the official Kubernetes documentation. | "" |
elasticsearch.data.storage.persistent | Set to false for a non-production or trial-only deployment. | true |
elasticsearch.data.storage.useDynamicProvisioning | Set to true to use GlusterFS or another dynamic storage provisioner. | false |
elasticsearch.data.storage.selector.label | A label associated with the target persistent volume (ignored if you use dynamic provisioning). | "" |
elasticsearch.data.storage.selector.value | The value of the label associated with the target persistent volume (ignored if you use dynamic provisioning). | "" |
elasticsearch.data.tolerations | Kubernetes tolerations that can allow the pod to run on certain nodes. | empty (nil) |
elasticsearch.data.nodeSelector | Kubernetes selector that can restrict the pod to run on certain nodes. | empty (nil) |
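The data-node parameters above work together: heapSize must stay comfortably below memoryLimit because the limit also covers the file system cache. A minimal sketch with illustrative values (not tuning recommendations):

```yaml
# Sketch only: a small data tier using the defaults from the table above.
elasticsearch:
  data:
    replicas: 2           # initial pods in the data cluster
    heapSize: 4000m       # JVM heap per data pod
    memoryLimit: 7000Mi   # heap plus file system cache; keep well above heapSize
    antiAffinity: hard    # never schedule two data pods on the same node
    storage:
      size: 10Gi
      accessModes: ReadWriteOnce
      persistent: true    # set to false only for trial deployments
```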
Security
Parameter | Description | Default |
---|---|---|
security.ca.keystore.password | Keystore password for the certificate authority (CA). | changeme |
security.ca.truststore.password | Truststore password for the CA. | changeme |
security.ca.origin | Specifies which CA to use for generating certificates. There are two accepted values: internal and external. | internal |
security.ca.external.secretName | Name of the Kubernetes secret that stores the external CA. The secret must be in the same namespace as the Helm release. | cluster-ca-cert |
security.ca.external.certFieldName | Field name (key) within the specified Kubernetes secret that stores the CA certificate. If a signing certificate is used, the complete trust chain (root CA and signing CA) must be included in this file. | tls.crt |
security.ca.external.keyFieldName | Field name (key) within the specified Kubernetes secret that stores the CA private key. | tls.key |
security.app.keystore.password | Keystore password for logging service components (such as Elasticsearch and Kibana). | changeme |
security.tls.version | The version of TLS required; always TLSv1.2. | TLSv1.2 |
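To bring your own CA instead of the internally generated one, the parameters above combine as in this sketch; the secret name is an example and must exist in the same namespace as the Helm release:

```yaml
# Sketch: use an existing CA that is stored in a Kubernetes secret.
security:
  ca:
    origin: external              # switch from the internally generated CA
    external:
      secretName: cluster-ca-cert # example secret name
      certFieldName: tls.crt      # must hold the full trust chain if a signing cert is used
      keyFieldName: tls.key
```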
Curator
The curator is a tool to clean out old log indices from Elasticsearch. For more information, see Elastic's official documentation.
Parameter | Description | Default |
---|---|---|
curator.name | A name to uniquely identify this curator deployment. | curator |
curator.image.repository | Full repository and path to the image. | quay.io/opencloudio/indices-cleaner |
curator.image.tag | The version of the curator image to deploy. | 1.2.0-build.2 |
curator.schedule | A Linux® cron schedule that identifies when the curator process starts. The default schedule runs daily at 23:59. | 59 23 * * * |
curator.log.unit | The age unit type for retaining application logs. | days |
curator.log.count | The number of curator.log.unit units for which application logs are retained. | 1 |
curator.va.unit | The age unit type for retaining Vulnerability Advisor logs. This setting applies only to the instance of Logging installed with IBM Cloud Private and used by Vulnerability Advisor. | days |
curator.va.count | The number of curator.va.unit units for which Vulnerability Advisor logs are retained. This setting applies only to the instance of Logging installed with IBM Cloud Private and used by Vulnerability Advisor. | 90 |
curator.auditLog.unit | The age unit type for retaining audit logs. This setting applies only to the instance of Logging installed with IBM Cloud Private and used by Audit Logging. | days |
curator.auditLog.count | The number of curator.auditLog.unit units for which audit logs are retained. This setting applies only to the instance of Logging installed with IBM Cloud Private and used by Audit Logging. | 1 |
curator.tolerations | Kubernetes tolerations that can allow the pod to run on certain nodes. | empty (nil) |
curator.nodeSelector | Kubernetes selector that can restrict the pod to run on certain nodes. | empty (nil) |
curator.resources.limits.memory | The maximum memory allowed per pod. | 256Mi |
curator.resources.requests.memory | The minimum memory required per pod. | 64Mi |
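Retention is the product of the unit and count parameters. A minimal sketch that keeps application logs for three days instead of the one-day default (values are illustrative):

```yaml
# Sketch: run the index cleaner at 23:59 daily (the default schedule)
# and retain application log indices for 3 days.
curator:
  schedule: "59 23 * * *"  # cron fields: minute hour day-of-month month day-of-week
  log:
    unit: days
    count: 3               # delete application log indices older than 3 days
```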
Sample CR
Following is a sample CR:
apiVersion: elasticstack.ibm.com/v1alpha1
kind: ElasticStack
metadata:
name: logging
spec:
nameOverride: elk
general:
environment: OpenShift
ingressPort: "443"
security:
ca:
# set to `external` to use existing CA stored in Kubernetes secret to generate certs
origin: external
external:
# the secret need to be in the same namespace as the chart release
secretName: cs-ca-certificate-secret
# the Kubernetes field name (key) within the specified secret that stores CA cert
certFieldName: tls.crt
# the Kubernetes field name (key) within the specified secret that stores CA private key
keyFieldName: tls.key
tls:
version: TLSv1.2
logstash:
replicas: 3
port: 5044
nodeSelector:
management: "true"
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
kibana:
replicas: 3
nodeSelector:
management: "true"
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
# accepted values:
# ingress or loadBalancer, defaults to loadBalancer
access: ingress
ingress:
# "/kibana" for managed service logging instance
# sample value for custom ingress: "/tenantA/kibana"
# no trailing /
path: "/kibana"
# additional labels to facilitate link rendering in icp console
labels:
# Nav link currently explicitly created by platform UI when Logging is present. If this is changed, inmenu should be set to "true".
inmenu: "false"
# if provided, the link will open in a new tab with the target value in the <a> tag
target: "platform-logging"
service:
# "/kibana" for managed service logging instance
# sample value for custom ingress: "/tenantA/kibana"
# no trailing /
path: "/kibana"
# additional labels to facilitate link rendering in icp console
labels:
# Nav link currently explicitly created by platform UI when Logging is present. If this is changed, inmenu should be set to "true".
inmenu: "false"
# if provided, the link will open in a new tab with the target value in the <a> tag
target: "platform-logging"
# additional annotations to facilitate link rendering in icp console
annotations:
# display name that will show in the menu
name: "Logging"
# provided by icp console
id: "add-ons"
# list of roles to be able to view TA in the menu
# for managed logging instance deployed
# without other multi-tenant logging
# instances, use the following:
roles: "ClusterAdministrator,Administrator,Operator,Viewer,Auditor,Editor"
# for managed logging instance deployed
# with other multi-tenant logging
# instances, use the following:
# roles: "ClusterAdministrator"
# show link if user is in any of the teams
# ui.icp.ibm.com/tenant: "tenantA,tenantB"
security:
authc:
enabled: true
# accepted values: icp
# what it does: redirects to icp login page first
provider: icp
authz:
enabled: false
# accepted values: icp
# what it does: only allow request to pass if user
# have access to the required namespaces
# that the current user has access to
# requires authc.enabled = true and authc.provider = icp
provider: icp
icp:
# 1. user is allowed to access the kibana ingress
# if namespaces granted to user are in the following list
# 2. when the following list is empty, only cluster admin
# can access this kibana ingress
authorizedNamespaces:
# - tenantadev
# - tenantatest
# - tenantaprod
filebeat:
scope:
nodes:
# for managed logging instance deployed
# with other multi-tenant logging
# instances, it might be a good
# idea to have managed logging instance
# only collect log from limited workloads
# (rather than workloads from all tenants).
# set desired node selector here
namespaces:
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
elasticsearch:
security:
authz:
enabled: true
# accepted values: icp
# what it does: filter log content by the namespace
# that the current user has access to
provider: icp
client:
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
nodeSelector:
management: "true"
master:
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
nodeSelector:
management: "true"
data:
replicas: 1
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
nodeSelector:
management: "true"
storage:
size: "20Gi"
storageClass: "gp2"
useDynamicProvisioning: true
curator:
nodeSelector:
management: "true"
tolerations:
- key: "dedicated"
operator: "Exists"
effect: "NoSchedule"
Log Storage
Static Persistent Volumes
For each Elasticsearch pod, a persistent volume (PV) is required if no dynamic provisioning is set up. For more information, see the official Kubernetes documentation.
Create a PV for logging:

1. Create a logging service data directory and change the permissions. On the node, grant at least 0755 permission to the logging service data directory. The default location of this customizable path is /var/lib/icp/logging/elk-data.
2. Create the PV.
   a. Create a PV YAML file, pv-logging-datanode-aaa.bbb.ccc.ddd.yaml. Modify the following sample YAML file to replace aaa.bbb.ccc.ddd with the IP address of the node on which logging runs.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        # replace `aaa.bbb.ccc.ddd` with the IP address of the node on which logging runs
        name: logging-datanode-aaa.bbb.ccc.ddd
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 30Gi # adjust to your need
        local:
          path: /var/lib/icp/logging/elk-data # adjust to your need
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - <worker_node_name> # add the node name to this list; you can find it by using kubectl get nodes
        persistentVolumeReclaimPolicy: Retain
        storageClassName: logging-storage-datanode # adjust to your need

   b. Run the following command to create the PV:

      kubectl apply -f pv-logging-datanode-aaa.bbb.ccc.ddd.yaml

   c. Repeat steps a and b for each node on which logging runs.
Dynamically provisioned persistent volumes
See the documentation for the underlying container platform, such as Red Hat OpenShift. If elasticsearch.data.storage.storageClass is not specified, the default storage provisioner that is configured for the cluster is used.
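With a dynamic provisioner available, you can skip creating static PVs and let the cluster provision the volumes. A sketch based on the sample CR earlier in this topic; gp2 is an example storage class name and must exist in your cluster:

```yaml
# Sketch: request dynamically provisioned storage for the data nodes
# instead of pre-created static PVs.
elasticsearch:
  data:
    storage:
      size: 20Gi
      storageClass: "gp2"          # example class; omit to use the cluster default
      useDynamicProvisioning: true
      persistent: true
```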
Audit logging settings
Add policy controller settings for Audit logging 3.8.0.
Parameter | Description | Default value | Syntax |
---|---|---|---|
policyController.enabled | Set this parameter to true to deploy the audit policy controller. | true | Boolean |
policyController.verbosity | Sets the level of log output. | 0 | String |
policyController.frequency | Sets the status update frequency (in seconds) of an audit policy. | 10 | String |
policyController.pullPolicy | Sets the pullPolicy for the audit policy controller image. | IfNotPresent | String |
policyController.imageRegistry | Sets the registry to pull the audit policy controller image from. | quay.io/opencloudio/ | String |
To configure the policy controller for Audit logging version 3.7.0, see Configuring the policy controller for Audit logging version 3.7.0.
Parameter | Description | Default value | Syntax |
---|---|---|---|
fluentd.enabled | Set this parameter to true to enable forwarding of audit logs. | false | Boolean |
fluentd.imageRegistry | Sets the registry to pull the Fluentd image from. | quay.io/opencloudio/ | String |
fluentd.pullPolicy | Sets the pullPolicy for the Fluentd image. | IfNotPresent | "IfNotPresent", "PullNever", or "Always" |
fluentd.journalPath | Sets the default path to store audit log data. | /run/log/journal | Path with no trailing / |
fluentd.issuer | A central authority from which certificates are obtained. | cs-ca-issuer | String |
fluentd.resources.limits.cpu | Sets the CPU limit for the Fluentd container. | 300m | Kubernetes CPU units (String) |
fluentd.resources.limits.memory | Sets the memory limit for the Fluentd container. | 400Mi | Bytes (String) |
fluentd.resources.requests.cpu | Sets the CPU request for the Fluentd container. | 25m | Kubernetes CPU units (String) |
fluentd.resources.requests.memory | Sets the memory request for the Fluentd container. | 100Mi | Bytes (String) |
Note: Fluentd runs as a DaemonSet, so resource values apply per node.
You can update these parameters by adding them in the custom resource:
**Note:** Default values are used when fields are empty.
- name: ibm-auditlogging-operator
spec:
fluentd:
enabled: <true or false>
imageRegistry: <fluentd image registry>
pullPolicy: <fluentd pull policy>
journalPath: <path-to-the-journal>
issuer: <certificate-authority-name>
resources:
requests:
cpu: <cpu-request>
memory: <memory-request>
limits:
cpu: <cpu-limit>
memory: <memory-limit>
Following is an example configuration with the default values:
- name: ibm-auditlogging-operator
spec:
fluentd:
enabled: false
imageRegistry: quay.io/opencloudio/
pullPolicy: IfNotPresent
journalPath: /run/log/journal
issuer: cs-ca-issuer
resources:
requests:
cpu: 25m
memory: 100Mi
limits:
cpu: 300m
memory: 400Mi
Parameter | Description | Default value | Syntax |
---|---|---|---|
fluentd.enabled | Set this parameter to true to enable forwarding of audit logs. | false | Boolean |
fluentd.imageRegistry | Sets the registry to pull the Fluentd image from. | quay.io/opencloudio/ | String |
fluentd.pullPolicy | Sets the pullPolicy for the Fluentd image. | IfNotPresent | "IfNotPresent", "PullNever", or "Always" |
fluentd.journalPath | Sets the default path to store audit log data. | /run/log/journal | Path with no trailing / |
fluentd.issuer | A central authority from which certificates are obtained. | cs-ca-issuer | String |
policyController.imageRegistry | Sets the registry to pull the audit policy controller image from. | quay.io/opencloudio/ | String |
policyController.pullPolicy | Sets the pullPolicy for the audit policy controller image. | IfNotPresent | "IfNotPresent", "PullNever", or "Always" |
policyController.verbosity | Sets the level of log output to debug-level (0-4) or trace-level (5-10). | 0 | String |
policyController.frequency | The status update frequency (in seconds) of a mutation policy. | 10 | String |
You can update these parameters by adding them in the custom resource:
**Note:** Default values are used when fields are empty.
- name: ibm-auditlogging-operator
spec:
fluentd:
enabled: <true or false>
imageRegistry: <fluentd image registry>
pullPolicy: <fluentd pull policy>
journalPath: <path-to-the-journal>
issuer: <certificate-authority-name>
policyController:
imageRegistry: <policy controller image registry>
pullPolicy: <policy controller pull policy>
verbosity: <"log-output-level">
frequency: <"frequency-of-status-update">
Following is an example configuration with the default values:
- name: ibm-auditlogging-operator
spec:
fluentd:
enabled: false
imageRegistry: quay.io/opencloudio/
pullPolicy: IfNotPresent
journalPath: /run/log/journal
issuer: cs-ca-issuer
policyController:
imageRegistry: quay.io/opencloudio/
pullPolicy: IfNotPresent
verbosity: "0"
frequency: "10"
License Service settings
Enable additional information in logs for troubleshooting purposes
By default, the License Service instance pod logs contain only basic information about the service. You can enable additional information in the logs for troubleshooting purposes by adding the logLevel parameter to the spec section.
The available logLevel options:
- DEBUG - This option enables all debug information in logs.
- VERBOSE - This option extends the logs with information about license calculation and API calls.
Following is an example configuration:
- name: ibm-licensing-operator
spec:
IBMLicensing:
logLevel: DEBUG
Configure resources for License Service (ibm-licensing-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for License Service.
- name: ibm-licensing-operator
spec:
IBMLicensing:
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
IBMLicenseServiceReporter:
databaseContainer:
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
receiverContainer:
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
Note: If you do not specify a setting, the default resource settings that are available in the cluster are used for License Service. The following are the default resource settings.
- name: ibm-licensing-operator
spec:
IBMLicensing:
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
IBMLicenseServiceReporter:
databaseContainer:
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 300m
memory: 300Mi
receiverContainer:
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 300m
memory: 384Mi
Platform API settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
auditService.config.enabled | Set this parameter to true to enable Platform API audit logs. | true | Boolean |
auditService.resources.limits.cpu | Sets the CPU limit for Audit Service container. | 200m | Kubernetes CPU units (String) |
auditService.resources.limits.memory | Sets the memory limit for Audit Service container. | 250Mi | Bytes (String) |
auditService.resources.requests.cpu | Sets the CPU request for Audit Service container. | 200m | Kubernetes CPU units (String) |
auditService.resources.requests.memory | Sets the memory request for Audit Service container. | 250Mi | Bytes (String) |
platformApi.resources.limits.cpu | Sets the CPU limit for Platform API container. | 100m | Kubernetes CPU units (String) |
platformApi.resources.limits.memory | Sets the memory limit for Platform API container. | 128Mi | Bytes (String) |
platformApi.resources.requests.cpu | Sets the CPU request for Platform API container. | 100m | Kubernetes CPU units (String) |
platformApi.resources.requests.memory | Sets the memory request for Platform API container. | 512Mi | Bytes (String) |
replicas (use instead of replicaCount) | Sets the number of Platform API replicas. | 1 | Integer |
replicaCount (deprecated) | Sets the number of Platform API replicas. | 1 | Integer |
Example configuration with default values:
- name: ibm-platform-api-operator
spec:
platformApi:
auditService:
config:
enabled: true
resources:
limits:
cpu: 200m
memory: 250Mi
requests:
cpu: 200m
memory: 250Mi
platformApi:
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 500m
memory: 512Mi
replicas: 1
Catalog UI settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
catalogui.resources.limits.cpu | Sets the CPU limit for Catalog UI container. | 300m | Kubernetes CPU units (String) |
catalogui.resources.limits.memory | Sets the memory limit for Catalog UI container. | 300Mi | Bytes (String) |
catalogui.resources.requests.cpu | Sets the CPU request for Catalog UI container. | 300m | Kubernetes CPU units (String) |
catalogui.resources.requests.memory | Sets the memory request for Catalog UI container. | 300Mi | Bytes (String) |
replicaCount | Sets the number of Catalog UI replicas. | 1 | Integer |
Example configuration with default values:
- name: ibm-catalog-ui-operator
spec:
catalogui:
resources:
limits:
cpu: 300m
memory: 300Mi
requests:
cpu: 300m
memory: 300Mi
replicaCount: 1
Helm API settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
helmapi.env.HTTP_PROXY | Sets the HTTP_PROXY environment variable for Helm API container | "" | String |
helmapi.env.HTTPS_PROXY | Sets the HTTPS_PROXY environment variable for Helm API container | "" | String |
helmapi.env.NO_PROXY | Sets the NO_PROXY environment variable for Helm API container | mycluster.icp,mongodb,platform-identity-provider,platform-identity-management,icp-management-ingress,iam-pap,localhost,127.0.0.1 | String |
helmapi.resources.limits.cpu | Sets the CPU limit for Helm API container. | 300m | Kubernetes CPU units (String) |
helmapi.resources.limits.memory | Sets the memory limit for Helm API container. | 400Mi | Bytes (String) |
helmapi.resources.requests.cpu | Sets the CPU request for Helm API container. | 300m | Kubernetes CPU units (String) |
helmapi.resources.requests.memory | Sets the memory request for Helm API container. | 400Mi | Bytes (String) |
replicaCount | Sets the number of Helm API replicas. | 1 | Integer |
rudder.resources.limits.cpu | Sets the CPU limit for Rudder container. | 150m | Kubernetes CPU units (String) |
rudder.resources.limits.memory | Sets the memory limit for Rudder container. | 256Mi | Bytes (String) |
rudder.resources.requests.cpu | Sets the CPU request for Rudder container. | 150m | Kubernetes CPU units (String) |
rudder.resources.requests.memory | Sets the memory request for Rudder container. | 256Mi | Bytes (String) |
tiller.nodePort | Sets the node port for the Tiller service | 31514 | Integer |
tiller.resources.limits.cpu | Sets the CPU limit for Tiller container. | 100m | Kubernetes CPU units (String) |
tiller.resources.limits.memory | Sets the memory limit for Tiller container. | 256Mi | Bytes (String) |
tiller.resources.requests.cpu | Sets the CPU request for Tiller container. | 100m | Kubernetes CPU units (String) |
tiller.resources.requests.memory | Sets the memory request for Tiller container. | 256Mi | Bytes (String) |
Example configuration with default values:
- name: ibm-helm-api-operator
spec:
helmapi:
env:
HTTP_PROXY: ""
HTTPS_PROXY: ""
NO_PROXY: mycluster.icp,mongodb,platform-identity-provider,platform-identity-management,icp-management-ingress,iam-pap,localhost,127.0.0.1
resources:
limits:
cpu: 300m
memory: 400Mi
requests:
cpu: 300m
memory: 400Mi
replicaCount: 1
rudder:
resources:
limits:
cpu: 150m
memory: 128Mi
requests:
cpu: 150m
memory: 128Mi
tiller:
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
Helm Repo settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
helmrepo.resources.limits.cpu | Sets the CPU limit for Helm Repo container. | 100m | Kubernetes CPU units (String) |
helmrepo.resources.limits.memory | Sets the memory limit for Helm Repo container. | 512Mi | Bytes (String) |
helmrepo.resources.requests.cpu | Sets the CPU request for Helm Repo container. | 100m | Kubernetes CPU units (String) |
helmrepo.resources.requests.memory | Sets the memory request for Helm Repo container. | 512Mi | Bytes (String) |
replicaCount | Sets the number of Helm Repo replicas. | 1 | Integer |
Example configuration with default values:
- name: ibm-helm-repo-operator
spec:
helmrepo:
resources:
limits:
cpu: 100m
memory: 512Mi
requests:
cpu: 100m
memory: 512Mi
replicaCount: 1
Cluster Monitoring settings
PrometheusExt settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
prometheusConfig.retention | Retention time of Prometheus data. | 24h | String |
prometheusConfig.scrapeInterval | Sets the frequency to scrape metrics in Prometheus. | 1m | String |
prometheusConfig.logLevel | Prometheus log level. | info | String |
prometheusConfig.pvSize | Persistent volume size for Prometheus data. | 10Gi | String |
prometheusConfig.nodeSelector | Node selector for Prometheus pods to be scheduled. | | |
alertmanager.logLevel | Alertmanager log level. | info | String |
alertmanager.pvSize | Persistent volume size for alert data. | 10Gi | String |
storageClassName | Name of the storage class. | "" | String |
nodeSelector | Node selector for operand pods to be scheduled. | null | Object |
The resource sizes are controlled by the IBM Common Service Operator. The following sample configuration uses the medium resource values as an example.
- name: ibm-monitoring-prometheusExt
spec:
prometheusConfig:
retention: 24h
scrapeInterval: 1m
logLevel: info
pvSize: 10Gi
nodeSelector:
key1: value1
key2: value2
routerResource:
requests:
cpu: 10m
memory: 50Mi
limits:
cpu: 75m
memory: 50Mi
resource:
requests:
cpu: 150m
memory: 10200Mi
limits:
cpu: 230m
memory: 13500Mi
alertmanager:
logLevel: info
pvSize: 10Gi
resource:
requests:
cpu: 30m
memory: 50Mi
limits:
cpu: 30m
memory: 50Mi
storageClassName: myStorageClass
Configure resources for Monitoring PrometheusExt (ibm-monitoring-prometheusext-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for the service.
- name: ibm-monitoring-prometheusext-operator
spec:
prometheusExt:
prometheusConfig:
routerResource:
requests:
cpu:
memory:
limits:
cpu:
memory:
resource:
requests:
cpu:
memory:
limits:
cpu:
memory:
alertManagerConfig:
resource:
requests:
cpu:
memory:
limits:
cpu:
memory:
Exporter settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
collectd.enable | Specifies whether to enable the collectd exporter. | true | Boolean |
kubeStateMetrics.enable | Specifies whether to enable kube-state-metrics. | true | Boolean |
nodeExporter.enable | Specifies whether to enable the node exporter. | true | Boolean |
Example configuration with default values:
- name: ibm-monitoring-exporters-operator
spec:
exporter:
collectd:
enable: true
resource:
requests:
cpu: 30m
memory: 50Mi
limits:
cpu: 30m
memory: 50Mi
routerResource:
limits:
cpu: 25m
memory: 50Mi
requests:
cpu: 20m
memory: 50Mi
nodeExporter:
enable: true
resource:
requests:
cpu: 5m
memory: 50Mi
limits:
cpu: 20m
memory: 50Mi
routerResource:
requests:
cpu: 50m
memory: 128Mi
limits:
cpu: 100m
memory: 256Mi
kubeStateMetrics:
enable: true
resource:
requests:
cpu: 500m
memory: 180Mi
limits:
cpu: 540m
memory: 220Mi
routerResource:
limits:
cpu: 25m
memory: 50Mi
requests:
cpu: 20m
memory: 50Mi
Configure resources for Monitoring Exporters (ibm-monitoring-exporters-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for the service.
- name: ibm-monitoring-exporters-operator
spec:
nodeSelector:
key1: value1
key2: value2
exporter:
collectd:
resource:
requests:
cpu:
memory:
limits:
cpu:
memory:
routerResource:
limits:
cpu:
memory:
requests:
cpu:
memory:
nodeExporter:
resource:
requests:
cpu:
memory:
limits:
cpu:
memory:
routerResource:
requests:
cpu:
memory:
limits:
cpu:
memory:
kubeStateMetrics:
resource:
requests:
cpu:
memory:
limits:
cpu:
memory:
routerResource:
limits:
cpu:
memory:
requests:
cpu:
memory:
Grafana settings
Example configuration with default values:
spec:
nodeSelector:
key1: value1
key2: value2
grafanaConfig:
resources:
requests:
cpu: 25m
memory: 200Mi
limits:
cpu: 150m
memory: 230Mi
dashboardConfig:
resources:
requests:
cpu: 25m
memory: 140Mi
limits:
cpu: 70m
memory: 170Mi
routerConfig:
resources:
requests:
cpu: 25m
memory: 65Mi
limits:
cpu: 70m
memory: 80Mi
isHub: false
Using the datasourceConfig parameter with OpenShift Container Platform monitoring
By default, IBM Cloud Pak foundational services Prometheus is installed and configured as the data source for Grafana. If you want to use OpenShift Container Platform monitoring Prometheus as your data source, you must add the following parameters to your configuration.
- name: ibm-monitoring-grafana-operator
spec:
grafana:
grafanaConfig:
datasourceConfig:
type: "openshift"
Configure resources for Monitoring Grafana (ibm-monitoring-grafana-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for the service.
- name: ibm-monitoring-grafana-operator
spec:
grafana:
grafanaConfig:
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
dashboardConfig:
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
routerConfig:
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
Certificate management service settings
- Configuring resources for Certificate manager (ibm-cert-manager-operator)
- certManagerCAInjector
- certManagerController
- certManagerWebhook
- Leaf certificate refresh parameters
Configuring resources for Certificate manager (ibm-cert-manager-operator)
Specify resource settings. If you do not specify a setting, the default resource settings are used.
- name: ibm-cert-manager-operator
spec:
certManager:
certManagerCAInjector:
resources:
limits:
cpu: <cpu-limit (cores)>
memory: <memory-limit (GB)>
requests:
cpu: <cpu-limit (cores)>
memory: <memory-limit (GB)>
certManagerController:
resources:
limits:
cpu: <cpu-limit (cores)>
memory: <memory-limit (GB)>
requests:
cpu: <cpu-limit (cores)>
memory: <memory-limit (GB)>
certManagerWebhook:
resources:
limits:
cpu: <cpu-limit (cores)>
memory: <memory-limit (GB)>
requests:
cpu: <cpu-limit (cores)>
memory: <memory-limit (GB)>
certManagerCAInjector
Parameter | Description | Default value | Syntax |
---|---|---|---|
certManagerCAInjector.resources.limits.cpu | Sets the CPU limit for CAInjector container. | 100m | Kubernetes CPU units (String) |
certManagerCAInjector.resources.limits.memory | Sets the memory limit for CAInjector container. | 520Mi | Bytes (String) |
certManagerCAInjector.resources.requests.cpu | Sets the CPU request for CAInjector container. | 20m | Kubernetes CPU units (String) |
certManagerCAInjector.resources.requests.memory | Sets the memory request for CAInjector container. | 410Mi | Bytes (String) |
{: caption="Table 17a. Settings for certManagerCAInjector" caption-side="top"}
certManagerController
Parameter | Description | Default value | Syntax |
---|---|---|---|
certManagerController.resources.limits.cpu | Sets the CPU limit for Controller container. | 80m | Kubernetes CPU units (String) |
certManagerController.resources.limits.memory | Sets the memory limit for Controller container. | 530Mi | Bytes (String) |
certManagerController.resources.requests.cpu | Sets the CPU request for Controller container. | 20m | Kubernetes CPU units (String) |
certManagerController.resources.requests.memory | Sets the memory request for Controller container. | 230Mi | Bytes (String) |
{: caption="Table 17b. Settings for certManagerController" caption-side="top"}
certManagerWebhook
Parameter | Description | Default value | Syntax |
---|---|---|---|
certManagerWebhook.resources.limits.cpu | Sets the CPU limit for webhook container. | 60m | Kubernetes CPU units (String) |
certManagerWebhook.resources.limits.memory | Sets the memory limit for webhook container. | 100Mi | Bytes (String) |
certManagerWebhook.resources.requests.cpu | Sets the CPU request for webhook container. | 30m | Kubernetes CPU units (String) |
certManagerWebhook.resources.requests.memory | Sets the memory request for webhook container. | 40Mi | Bytes (String) |
{: caption="Table 17c. Settings for certManagerWebhook" caption-side="top"}
Leaf certificate refresh parameters
Parameter | Description | Default value | Syntax |
---|---|---|---|
`enableCertRefresh` | Flag that enables or disables the refresh of leaf certificates feature. | True | Boolean |
`refreshCertsBasedOnCA` | List of CA certificate names. Leaf certificates that are created from a listed CA are refreshed when that CA is refreshed. | None | List |
`refreshCertsBasedOnCA.certName` | Name of the CA certificate that is used to create leaf certificates. | None | String |
`refreshCertsBasedOnCA.namespace` | Namespace of the CA certificate. | None | String |
Example configuration with a CA certificate and namespace added for leaf refresh. Each `certName` requires a namespace. Leaf certificates that are based on the CA certificate are refreshed when cert-manager refreshes the CA certificate.
- name: ibm-cert-manager-operator
  spec:
    refreshCertsBasedOnCA:
    - certName: sample-common-ca-cert
      namespace: sample-namespace
Common Web UI settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
`commonWebUI.resources.limits.cpu` | Sets the CPU limit for the commonWebUI container. | 300m | Kubernetes CPU units (String) |
`commonWebUI.resources.limits.memory` | Sets the memory limit for the commonWebUI container. | 256Mi | Bytes (String) |
`commonWebUI.resources.requests.cpu` | Sets the CPU request for the commonWebUI container. | 300m | Kubernetes CPU units (String) |
`commonWebUI.resources.requests.memory` | Sets the memory request for the commonWebUI container. | 256Mi | Bytes (String) |
`replicas` (use instead of `replicaCount`) | Sets the number of commonWebUI replicas. | 1 | Integer |
Example configuration with default values:
- name: ibm-commonui-operator
  spec:
    commonWebUI:
      replicas: 1
      resources:
        requests:
          memory: 256Mi
          cpu: 300m
        limits:
          memory: 256Mi
          cpu: 300m
Serviceability settings
MustGather configuration settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
`gatherConfig` | Sets the MustGather configuration parameters, such as data modules, namespaces, and labels. | "" | String |
Example configuration with default values:
apiVersion: operator.ibm.com/v1alpha1
kind: MustGatherConfig
metadata:
  name: default
spec:
  gatherConfig: |-
    modules="overview,system,failure,ocp,cloudpak"
    namespaces="common-service,ibm-common-services"
    labels=""
MustGather job settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
`serviceAccountName` | Sets the name of the service account for the MustGather job. | default | String |
`mustgatherConfigName` | Sets the name of the MustGather configuration. | default | String |
Example configuration with default values:
apiVersion: operator.ibm.com/v1alpha1
kind: MustGatherJob
metadata:
  name: example-mustgatherjob
spec:
  serviceAccountName: default
  mustgatherConfigName: default
MustGather service settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
`persistentVolumeClaim.name` | Sets the name of the MustGatherService persistent volume claim. | default | String |
`persistentVolumeClaim.storageClassName` | Sets the name of the storage class that is used to dynamically create a PV. The storage permissions use the OpenShift default UID, GID, and supplemental groups. Note: If you install an IBM Cloud Pak on IBM Cloud, use the `ibmc-file-gold-gid` storage class. | The cluster chooses one storage class in the list. | String |
`mustgather.name` | Sets the name of the MustGather service deployment. | default | String |
`mustgather.serviceAccountName` | Sets the service account name for the MustGather service deployment. | default | String |
`mustgather.nodeSelector` | Sets the node selector for the MustGather service deployment. | none | String |
`mustgather.tolerations` | Sets the MustGather service deployment tolerations. | none | String |
`mustgather.securityContext` | Sets the MustGather service deployment securityContext. | none | String |
`mustgather.replicas` | Sets the number of MustGather service deployment replicas. | 0 | int32 |
Note: The System Healthcheck service provides the live status of the service, which is regenerated every 10 minutes, so no backup or recovery is needed. The MustGather support information is used for troubleshooting purposes and also needs no backup or restore.
Example configuration with default values:
apiVersion: operator.ibm.com/v1alpha1
kind: MustGatherService
metadata:
  name: must-gather-service
spec:
  # Add fields here
  persistentVolumeClaim:
    name: must-gather-pvc
    storageClassName: ""
    resources:
      requests:
        storage: 5Gi
  mustGather:
    name: must-gather-service
    replicas: 1
    serviceAccountName: default
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Exists
    - key: CriticalAddonsOnly
      operator: Exists
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "512Mi"
        cpu: "500m"
System Healthcheck service settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
`healthService.name` | Healthcheck service deployment name. | none | string |
`healthService.image.pullPolicy` | Healthcheck service image `pullPolicy`. | IfNotPresent | string |
`healthService.configmapName` | Configmap that contains the health service configuration files. | none | string |
`healthService.cloudpakNameSetting` | Label or annotation name that is used to get the pod `cloudpakname`. | none | string |
`healthService.serviceNameSetting` | Label or annotation name that is used to get the pod `servicename`. | none | string |
`healthService.dependsSetting` | Label or annotation name that is used to get the pod dependencies. | none | string |
`healthService.replicaCount` | Healthcheck service deployment replicas. | 0 | int32 |
`healthService.serviceAccountName` | Healthcheck service deployment `ServiceAccountName`. | default | string |
`healthService.nodeSelector` | Healthcheck service deployment node selector. | none | string |
`healthService.tolerations` | Healthcheck service deployment tolerations. | none | string |
`healthService.securityContext` | Healthcheck service deployment `securityContext`. | none | string |
`healthService.hostNetwork` | Healthcheck service deployment `hostNetwork`. | false | bool |
`memcached.name` | Memcached deployment name. | none | string |
`memcached.image.pullPolicy` | Memcached image `pullPolicy`. | IfNotPresent | string |
`memcached.replicaCount` | Memcached deployment replicas. | 0 | int32 |
`memcached.serviceAccountName` | Memcached deployment `ServiceAccountName`. | default | string |
`memcached.nodeSelector` | Memcached deployment node selector. | none | string |
`memcached.tolerations` | Memcached deployment tolerations. | none | string |
`memcached.securityContext` | Memcached deployment `securityContext`. | none | string |
`memcached.command` | Memcached startup command. | "memcached -m 64 -o modern -v" | string |
Following is a sample CR:
apiVersion: operator.ibm.com/v1alpha1
kind: HealthService
metadata:
  name: system-healthcheck-service
spec:
  memcached:
    name: icp-memcached
    image:
      repository: quay.io/opencloudio/icp-memcached
      tag: 3.5.0
    replicaCount: 1
    serviceAccountName: ibm-healthcheck-operator-cluster
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Exists
    - key: CriticalAddonsOnly
      operator: Exists
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
    command:
    - memcached
    - -m 64
    - -o
    - modern
    - -v
  healthService:
    name: system-healthcheck-service
    configmapName: system-healthcheck-service-config
    image:
      repository: quay.io/opencloudio/system-healthcheck-service
      tag: 3.5.0
    replicaCount: 1
    serviceAccountName: ibm-healthcheck-operator-cluster
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Exists
    - key: CriticalAddonsOnly
      operator: Exists
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
    hostNetwork: false
    serviceNameSetting: Annotations:productName
System Healthcheck resources settings (ibm-healthcheck-operator)
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for the System Healthcheck service.
- name: ibm-healthcheck-operator
  spec:
    healthService:
      memcached:
        replicas: 1
        resources:
          requests:
            memory:
            cpu:
          limits:
            memory:
            cpu:
      healthService:
        replicas: 1
        resources:
          requests:
            memory:
            cpu:
          limits:
            memory:
            cpu:
Management Ingress settings
Parameter | Description | Default value | Syntax |
---|---|---|---|
`managementIngress.replicas` | Sets the number of management ingress replicas. | 1 | Integer |
`managementIngress.resources.limits.cpu` | Sets the CPU limit for the management ingress container. | 200m | Kubernetes CPU units (String) |
`managementIngress.resources.limits.memory` | Sets the memory limit for the management ingress container. | 240Mi | Bytes (String) |
`managementIngress.resources.requests.cpu` | Sets the CPU request for the management ingress container. | 1800m | Kubernetes CPU units (String) |
`managementIngress.resources.requests.memory` | Sets the memory request for the management ingress container. | 195Mi | Bytes (String) |
Example configuration with default values:
- name: ibm-management-ingress-operator
  spec:
    managementIngress:
      replicas: 1
      resources:
        limits:
          cpu: 200m
          memory: 240Mi
        requests:
          cpu: 1800m
          memory: 195Mi
Nginx Ingress settings
Note: Nginx ingress is deprecated with installer version 3.10.0 and might be removed in a future release.
Parameter | Description | Default value | Syntax |
---|---|---|---|
`nginxIngress.defaultBackend.replicas` | Sets the number of default backend replicas. | 1 | Integer |
`nginxIngress.defaultBackend.resources.limits.cpu` | Sets the CPU limit for the default backend container. | 20m | Kubernetes CPU units (String) |
`nginxIngress.defaultBackend.resources.limits.memory` | Sets the memory limit for the default backend container. | 50Mi | Bytes (String) |
`nginxIngress.defaultBackend.resources.requests.cpu` | Sets the CPU request for the default backend container. | 20m | Kubernetes CPU units (String) |
`nginxIngress.defaultBackend.resources.requests.memory` | Sets the memory request for the default backend container. | 50Mi | Bytes (String) |
`nginxIngress.ingress.replicas` | Sets the number of ingress replicas. | 1 | Integer |
`nginxIngress.ingress.resources.limits.cpu` | Sets the CPU limit for the ingress container. | 100m | Kubernetes CPU units (String) |
`nginxIngress.ingress.resources.limits.memory` | Sets the memory limit for the ingress container. | 225Mi | Bytes (String) |
`nginxIngress.ingress.resources.requests.cpu` | Sets the CPU request for the ingress container. | 100m | Kubernetes CPU units (String) |
`nginxIngress.ingress.resources.requests.memory` | Sets the memory request for the ingress container. | 140Mi | Bytes (String) |
`nginxIngress.kubectl.resources.limits.cpu` | Sets the CPU limit for the kubectl container. | 30m | Kubernetes CPU units (String) |
`nginxIngress.kubectl.resources.limits.memory` | Sets the memory limit for the kubectl container. | 150Mi | Bytes (String) |
`nginxIngress.kubectl.resources.requests.cpu` | Sets the CPU request for the kubectl container. | 30m | Kubernetes CPU units (String) |
`nginxIngress.kubectl.resources.requests.memory` | Sets the memory request for the kubectl container. | 150Mi | Bytes (String) |
Example configuration with default values:
- name: ibm-ingress-nginx-operator
  spec:
    nginxIngress:
      defaultBackend:
        replicas: 1
        resources:
          limits:
            cpu: 20m
            memory: 50Mi
          requests:
            cpu: 20m
            memory: 50Mi
      ingress:
        replicas: 1
        resources:
          limits:
            cpu: 100m
            memory: 225Mi
          requests:
            cpu: 100m
            memory: 140Mi
      kubectl:
        resources:
          limits:
            cpu: 30m
            memory: 150Mi
          requests:
            cpu: 30m
            memory: 150Mi
Metering settings
Note: The deprecated Metering service is removed in IBM Cloud Pak® foundational services version 3.7.x.
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for the metering service.
- name: ibm-metering-operator
  spec:
    metering:
      dataManager:
        dm:
          resources:
            limits:
              cpu:
              memory:
            requests:
              cpu:
              memory:
      reader:
        rdr:
          resources:
            limits:
              cpu:
              memory:
            requests:
              cpu:
              memory:
    meteringReportServer:
      reportServer:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
    meteringUI:
      replicas: 1
      ui:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
Common UI settings
Specify resource settings. If you do not specify a setting, the default resource settings that are available in the cluster are used for the common UI service.
- name: ibm-commonui-operator
  spec:
    commonWebUI:
      replicas: 1
      resources:
        requests:
          memory:
          cpu:
        limits:
          memory:
          cpu: