(Optional) Configuring huge pages and resource requests or limits postinstallation
Learn about custom ConfigMaps for enabling huge pages and configuring resource requests or limits.
Before you begin
For more information about huge pages, see What huge pages do and how they are consumed by applications in the Red Hat® OpenShift® Container Platform documentation.
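For context, the hugepages-2Mi entries in the ConfigMaps later in this procedure correspond to the standard Kubernetes huge-page resource type. A minimal sketch of a pod that consumes 2 Mi huge pages follows; the pod name, container name, and image are hypothetical, and note that Kubernetes requires huge-page requests to equal their limits:

```yaml
# Illustrative only; names and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi   # huge-page requests must equal limits
        memory: 100Mi
      requests:
        hugepages-2Mi: 100Mi
        memory: 100Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```

Huge pages must also be pre-allocated on the worker nodes before pods can request them; see the OpenShift documentation referenced above.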
After you install Netcool® Operations Insight® on Red Hat OpenShift, complete the following steps to deploy custom ConfigMaps and configure huge pages settings and CPU or memory request or limit settings:
Procedure
- Set the INSTALL_NAME parameter for cloud native analytics:

  INSTALL_NAME=noi

  Note: The installation name is assumed to be noi. If you plan to install with a different name, change this parameter.

- Create a custom ConfigMap for cloud native analytics:
cat <<EOF | oc apply -f -
apiVersion: v1
metadata:
  name: ${INSTALL_NAME}-sizing
kind: ConfigMap
data:
  noi: |
    alert-action-service-alertactionservice:
      specs:
        replicas: 2
      containers:
        alert-action-service-alertactionservice:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
              hugepages-2Mi: 4000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
    alert-trigger-service-alerttriggerservice:
      specs:
        replicas: 2
      containers:
        alert-trigger-service-alerttriggerservice:
          resources:
            limits:
              cpu: 1200m
              memory: 3000Mi
              hugepages-2Mi: 3000Mi
            requests:
              cpu: 700m
              memory: 2000Mi
        waitforkafka:
          resources:
            limits:
              cpu: 1200m
              memory: 3000Mi
            requests:
              cpu: 700m
              memory: 2000Mi
    asm-operator:
      specs:
        replicas: 1
      containers:
        asm-operator:
          resources:
            limits:
              cpu: 200m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
    cassandra:
      specs:
        replicas: 3
      containers:
        cassandra:
          resources:
            limits:
              cpu: '8'
              memory: 16Gi
            requests:
              cpu: '6'
              memory: 16Gi
    cem-operator:
      specs:
        replicas: 1
      containers:
        cem-operator:
          resources:
            limits:
              cpu: 200m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
    common-dash-auth-im-repo-dashauth:
      specs:
        replicas: 1
      containers:
        nginx:
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
    couchdb:
      specs:
        replicas: 3
      containers:
        couchdb:
          resources:
            limits:
              cpu: 500m
              memory: 800Mi
            requests:
              cpu: 200m
              memory: 600Mi
    curator-pattern-metrics:
      specs: {}
      containers:
        common-elastic-curator:
          resources:
            limits:
              cpu: 1500m
              memory: 4000Mi
            requests:
              cpu: '1'
              memory: 2400Mi
    ea-noi-layer-eanoiactionservice:
      specs:
        replicas: 1
      containers:
        ea-noi-layer-eanoiactionservice:
          resources:
            limits:
              cpu: 300m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 512Mi
        waitforobjserv:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
    ea-noi-layer-eanoigateway:
      specs:
        replicas: 1
      containers:
        ea-noi-layer-eanoigateway:
          resources:
            limits:
              cpu: 300m
              memory: 4Gi
            requests:
              cpu: 100m
              memory: 2Gi
        waitforobjserv:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
    ea-noi-layer-easetupomnibus:
      specs: {}
      containers:
        registernoilayer:
          resources:
            limits:
              cpu: 300m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 512Mi
        setupnoiautomations:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
        sleep:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
        waitforaggb:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
        waitforaggp:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
    elasticsearch:
      specs:
        replicas: 3
      containers:
        elasticsearch:
          resources:
            limits:
              cpu: 1500m
              memory: 4000Mi
              hugepages-2Mi: 4000Mi
            requests:
              cpu: '1'
              memory: 2400Mi
    grafana:
      specs:
        replicas: 1
      containers:
        grafana:
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 700m
              memory: 1Gi
    healthcron:
      specs: {}
      containers:
        healthchecker:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
        servicemonitor:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
    ibm-ea-asm-mime-eaasmmime:
      specs:
        replicas: 1
      containers:
        probablecause:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
              hugepages-2Mi: 4000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
    ibm-ea-asm-normalizer-normalizerstreams:
      specs:
        replicas: 2
      containers:
        normalizer:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforinputtopicevents:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforinputtopicstatus:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
    ibm-ea-mime-classification-eaasmmimecls:
      specs:
        replicas: 1
      containers:
        serviceconnector:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: 300m
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
    ibm-ea-ui-api-graphql:
      specs:
        replicas: 2
      containers:
        graphql:
          resources:
            limits:
              cpu: 300m
              memory: 800Mi
            requests:
              cpu: 200m
              memory: 600Mi
    ibm-hdm-analytics-dev-archivingservice:
      specs:
        replicas: 2
      containers:
        archiving-checkdb:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        ibm-hdm-analytics-dev-archivingservice:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitforingestionservice:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
    ibm-hdm-analytics-dev-collater-aggregationservice:
      specs:
        replicas: 1
      containers:
        ibm-hdm-analytics-dev-collater-aggregationservice:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 500Mi
    ibm-hdm-analytics-dev-dedup-aggregationservice:
      specs:
        replicas: 2
      containers:
        ibm-hdm-analytics-dev-dedup-aggregationservice:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
    ibm-hdm-analytics-dev-eventsqueryservice:
      specs:
        replicas: 2
      containers:
        eventsquery-checkforschema:
          resources:
            limits:
              cpu: 300m
              memory: 2400Mi
            requests:
              cpu: 100m
              memory: 500Mi
        ibm-hdm-analytics-dev-eventsqueryservice:
          resources:
            limits:
              cpu: 300m
              memory: 2400Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforarchiving:
          resources:
            limits:
              cpu: 300m
              memory: 2400Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforcassandra:
          resources:
            limits:
              cpu: 300m
              memory: 2400Mi
            requests:
              cpu: 100m
              memory: 500Mi
    ibm-hdm-analytics-dev-inferenceservice:
      specs:
        replicas: 2
      containers:
        ibm-hdm-analytics-dev-inferenceservice:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
    ibm-hdm-analytics-dev-ingestionservice:
      specs:
        replicas: 2
      containers:
        ibm-hdm-analytics-dev-ingestionservice:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
    ibm-hdm-analytics-dev-normalizer-aggregationservice:
      specs:
        replicas: 1
      containers:
        ibm-hdm-analytics-dev-normalizer-aggregationservice:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 500Mi
    ibm-hdm-analytics-dev-policyregistryservice:
      specs:
        replicas: 2
      containers:
        ibm-hdm-analytics-dev-policyregistryservice:
          resources:
            limits:
              cpu: 500m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        policyregistry-checkdb:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
    ibm-hdm-analytics-dev-servicemonitorservice:
      specs:
        replicas: 1
      containers:
        ibm-hdm-analytics-dev-servicemonitorservice:
          resources:
            limits:
              cpu: 300m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 512Mi
    ibm-hdm-analytics-dev-setup:
      specs: {}
      containers:
        setup-init-training-config:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
        setup-policies:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        setupdb-events:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        setupdb-policies:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 1000Mi
        waitfortrainer:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
    ibm-hdm-analytics-dev-trainer:
      specs:
        replicas: 1
      containers:
        trainer:
          resources:
            limits:
              cpu: 500m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
        waitforregisteredsparkworker:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
        waitforsparkmaster:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
        waitforsparkslave:
          resources:
            limits:
              cpu: 300m
              memory: 2000Mi
            requests:
              cpu: 300m
              memory: 1000Mi
    ibm-hdm-common-ui-uiserver:
      specs:
        replicas: 3
      containers:
        create-secrets:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
        ui-content:
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 512Mi
        ui-server:
          resources:
            limits:
              cpu: 500m
              memory: 800Mi
            requests:
              cpu: 200m
              memory: 600Mi
    ibm-noi-alert-details-service:
      specs:
        replicas: 2
      containers:
        alertdetails:
          resources:
            limits:
              cpu: 500m
              memory: 2000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
    ibm-noi-alert-details-setup:
      specs: {}
      containers:
        setupdb-alert-details:
          resources:
            limits:
              cpu: '1'
              memory: 2000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: '1'
              memory: 2000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
    ibm-redis-server:
      specs:
        replicas: 3
      containers:
        config-init:
          resources:
            limits:
              cpu: 500m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
        redis:
          resources:
            limits:
              cpu: 500m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
        sentinel:
          resources:
            limits:
              cpu: 200m
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 25Mi
    impactgui:
      specs:
        replicas: 1
      containers:
        configuration-share:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
        impactgui:
          resources:
            limits:
              cpu: '2'
              memory: 8Gi
            requests:
              cpu: 500m
              memory: 1Gi
        wait4webgui:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
    kafka:
      specs:
        replicas: 6
      containers:
        init-config:
          resources:
            limits:
              cpu: 50m
              memory: 32Mi
            requests:
              cpu: 50m
              memory: 32Mi
        kafka:
          resources:
            limits:
              cpu: 1500m
              memory: 2600Mi
            requests:
              cpu: 950m
              memory: 1600Mi
        kafka-rest:
          resources:
            limits:
              cpu: 750m
              memory: 600Mi
            requests:
              cpu: 50m
              memory: 350Mi
        waitforzookeeper:
          resources:
            limits:
              cpu: 50m
              memory: 32Mi
            requests:
              cpu: 50m
              memory: 32Mi
    logstash:
      specs:
        replicas: 3
      containers:
        logstash:
          resources:
            limits:
              cpu: 1500m
              memory: 4000Mi
            requests:
              cpu: '1'
              memory: 2400Mi
    metric-action-service-metricactionservice:
      specs:
        replicas: 2
      containers:
        metric-action-service-metricactionservice:
          resources:
            limits:
              cpu: '1'
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
        waitforkafka:
          resources:
            limits:
              cpu: '1'
              memory: 4000Mi
            requests:
              cpu: 200m
              memory: 1000Mi
    metric-api-service-metricapiservice:
      specs:
        replicas: 2
      containers:
        metric-api-service-metricapiservice:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforkafka:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
    metric-spark-service-metricsparkservice:
      specs:
        replicas: 1
      containers:
        metric-spark-service-metricsparkservice:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforcassandra:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforkafka:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
        waitforregisteredsparkworker:
          resources:
            limits:
              cpu: '1'
              memory: 3000Mi
            requests:
              cpu: 200m
              memory: 2000Mi
    nciserver:
      specs:
        replicas: 1
      containers:
        configuration-share:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
        nciserver:
          resources:
            limits:
              cpu: 2000m
              memory: 3Gi
            requests:
              cpu: 200m
              memory: 2Gi
        wait4webgui:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
    ncobackup:
      specs:
        replicas: 1
      containers:
        ncobackup-agg-b:
          resources:
            limits:
              cpu: '2'
              memory: 5Gi
            requests:
              cpu: '2'
              memory: 2Gi
        ncobackup-agg-gate:
          resources:
            limits:
              cpu: '1'
              memory: 4Gi
            requests:
              cpu: '1'
              memory: 2Gi
    ncodatalayer-agg-ir:
      specs:
        replicas: 1
      containers:
        ncodatalayer:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        osschemaupgrade:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforagg:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforkafka:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
    ncodatalayer-agg-irf:
      specs:
        replicas: 1
      containers:
        ncodatalayer:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforagg:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforkafka:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
    ncodatalayer-agg-setup:
      specs: {}
      containers:
        ncodatalayer-setup-automations:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforagg:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
    ncodatalayer-agg-std:
      specs:
        replicas: 2
      containers:
        ncodatalayer:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforagg:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
              hugepages-2Mi: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
        waitforkafka:
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 1Gi
    ncoprimary:
      specs:
        replicas: 1
      containers:
        ncoprimary:
          resources:
            limits:
              cpu: '2'
              memory: 5Gi
            requests:
              cpu: '2'
              memory: 2Gi
    openldap:
      specs:
        replicas: 1
      containers:
        openldap:
          resources:
            limits:
              cpu: 300m
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 1Gi
    proxy:
      specs:
        replicas: 1
      containers:
        proxy:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 300m
              memory: 128Mi
    register-cnea-mgmt-artifact:
      specs: {}
      containers:
        register-normalizer-management-artifact:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 300m
              memory: 300Mi
    spark-master:
      specs:
        replicas: 1
      containers:
        spark-master:
          resources:
            limits:
              cpu: 750m
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 500m
              memory: 1000Mi
    spark-slave:
      specs:
        replicas: 2
      containers:
        spark-slave:
          resources:
            limits:
              cpu: '4'
              memory: 9000Mi
              hugepages-2Mi: 9000Mi
            requests:
              cpu: '2'
              memory: 5000Mi
        sparkslave-waitformaster:
          resources:
            limits:
              cpu: '4'
              memory: 8000Mi
            requests:
              cpu: '2'
              memory: 4000Mi
    verifysecrets:
      specs: {}
      containers:
        pre-install-job:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
    webgui:
      specs:
        replicas: 1
      containers:
        configuration-share:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
        wait4ldap:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
        wait4obj:
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 512Mi
        webgui:
          resources:
            limits:
              cpu: '3'
              memory: 3Gi
            requests:
              cpu: '2'
              memory: 2Gi
    zookeeper:
      specs:
        replicas: 3
      containers:
        init-config:
          resources:
            limits:
              cpu: 50m
              memory: 32Mi
            requests:
              cpu: 50m
              memory: 32Mi
        zookeeper:
          resources:
            limits:
              cpu: 500m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
  cem: |
    ibm-cem-akora-app-cem:
      specs:
        replicas: 1
      containers:
        akora-app-cem:
          resources:
            limits:
              cpu: 450m
              memory: 450Mi
            requests:
              cpu: 300m
              memory: 350Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-brokers:
      specs:
        replicas: 1
      containers:
        brokers:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-cem-users:
      specs:
        replicas: 1
      containers:
        cem-users:
          resources:
            limits:
              cpu: 450m
              memory: 800Mi
            requests:
              cpu: 200m
              memory: 600Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-channelservices:
      specs:
        replicas: 1
      containers:
        channelservices:
          resources:
            limits:
              cpu: 300m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
    ibm-cem-event-analytics-ui:
      specs:
        replicas: 1
      containers:
        event-analytics-ui:
          resources:
            limits:
              cpu: 450m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-eventpreprocessor:
      specs:
        replicas: 1
      containers:
        eventpreprocessor:
          resources:
            limits:
              cpu: 750m
              memory: 450Mi
            requests:
              cpu: 500m
              memory: 350Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-incidentprocessor:
      specs:
        replicas: 1
      containers:
        incidentprocessor:
          resources:
            limits:
              cpu: 750m
              memory: 450Mi
            requests:
              cpu: 500m
              memory: 350Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-integration-controller:
      specs:
        replicas: 1
      containers:
        integration-controller:
          resources:
            limits:
              cpu: 750m
              memory: 450Mi
            requests:
              cpu: 500m
              memory: 350Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-normalizer:
      specs:
        replicas: 1
      containers:
        normalizer:
          resources:
            limits:
              cpu: 450m
              memory: 450Mi
            requests:
              cpu: 300m
              memory: 350Mi
        waitforredis:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-notificationprocessor:
      specs:
        replicas: 1
      containers:
        notificationprocessor:
          resources:
            limits:
              cpu: 300m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-rba-as:
      specs:
        replicas: 1
      containers:
        rba-as:
          resources:
            limits:
              cpu: 600m
              memory: 1Gi
            requests:
              cpu: 400m
              memory: 100Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 600m
              memory: 1Gi
            requests:
              cpu: 400m
              memory: 100Mi
    ibm-cem-rba-rbs:
      specs:
        replicas: 1
      containers:
        rba-rbs:
          resources:
            limits:
              cpu: 600m
              memory: 1536Mi
            requests:
              cpu: 400m
              memory: 100Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
    ibm-cem-scheduling-ui:
      specs:
        replicas: 1
      containers:
        scheduling-ui:
          resources:
            limits:
              cpu: 300m
              memory: 450Mi
            requests:
              cpu: 200m
              memory: 350Mi
        waitforkafka:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
        waitforcouchdb:
          resources:
            limits:
              cpu: 300m
              memory: 300Mi
            requests:
              cpu: 200m
              memory: 200Mi
EOF
Note: The request and limit values are shown as examples only and represent the default settings for a production deployment of the 1.6.8 release. If your requirements differ, change the values before you deploy the custom ConfigMap.

- Create a custom ConfigMap for topology analytics:
cat <<EOF | oc apply -f -
apiVersion: v1
metadata:
  name: ${INSTALL_NAME}-topology-sizing
kind: ConfigMap
data:
  asm: |
    file-observer:
      specs:
        replicas: 1
      containers:
        file-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    elasticsearch:
      specs:
        replicas: 3
      containers:
        elasticsearch:
          resources:
            limits:
              cpu: 2500m
              memory: 4000Mi
              hugepages-2Mi: 4000Mi
            requests:
              cpu: 1000m
              memory: 2400Mi
    kubernetes-observer:
      specs:
        replicas: 1
      containers:
        kubernetes-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 501m
              memory: 350Mi
    layout:
      specs:
        replicas: 2
      containers:
        layout:
          resources:
            limits:
              cpu: 1500m
              memory: 2500Mi
              hugepages-2Mi: 2500Mi
            requests:
              cpu: 1
              memory: 700Mi
    merge:
      specs:
        replicas: 1
      containers:
        merge:
          resources:
            limits:
              cpu: 3
              memory: 1500Mi
              hugepages-2Mi: 1500Mi
            requests:
              cpu: 2
              memory: 1000Mi
    noi-gateway:
      specs:
        replicas: 1
      containers:
        nco-g-xml:
          resources:
            limits:
              cpu: 750m
              memory: 1000Mi
              hugepages-2Mi: 1000Mi
            requests:
              cpu: 500m
              memory: 800Mi
    noi-probe:
      specs:
        replicas: 1
      containers:
        nco-p-messagebus:
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
              hugepages-2Mi: 512Mi
            requests:
              cpu: 250m
              memory: 256Mi
    observer-service:
      specs:
        replicas: 1
      containers:
        observer-service:
          resources:
            limits:
              cpu: 750m
              memory: 1500Mi
              hugepages-2Mi: 1500Mi
            requests:
              cpu: 500m
              memory: 350Mi
    rest-observer:
      specs:
        replicas: 1
      containers:
        rest-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    search:
      specs:
        replicas: 2
      containers:
        search:
          resources:
            limits:
              cpu: 750m
              memory: 800Mi
              hugepages-2Mi: 800Mi
            requests:
              cpu: 500m
              memory: 600Mi
    servicenow-observer:
      specs:
        replicas: 1
      containers:
        servicenow-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    openstack-observer:
      specs:
        replicas: 1
      containers:
        openstack-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    docker-observer:
      specs:
        replicas: 1
      containers:
        docker-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    bigcloudfabric-observer:
      specs:
        replicas: 1
      containers:
        bigcloudfabric-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    hpnfvd-observer:
      specs:
        replicas: 1
      containers:
        hpnfvd-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    vmvcenter-observer:
      specs:
        replicas: 1
      containers:
        vmvcenter-observer:
          resources:
            limits:
              cpu: 750m
              memory: 750Mi
              hugepages-2Mi: 750Mi
            requests:
              cpu: 500m
              memory: 350Mi
    status:
      specs:
        replicas: 1
      containers:
        status:
          resources:
            limits:
              cpu: 3
              memory: 2000Mi
              hugepages-2Mi: 2000Mi
            requests:
              cpu: 2
              memory: 1000Mi
    topology:
      specs:
        replicas: 2
      containers:
        topology:
          resources:
            limits:
              cpu: 3
              memory: 3600Mi
              hugepages-2Mi: 3600Mi
            requests:
              cpu: 2
              memory: 1200Mi
    ui-api:
      specs:
        replicas: 2
      containers:
        ui-api:
          resources:
            limits:
              cpu: 600m
              memory: 750Mi
            requests:
              cpu: 400m
              memory: 350Mi
EOF
Note: The request and limit values that are presented are examples only and represent the default settings for a production deployment of the 1.6.8 release. If your requirements differ, change the values before you deploy the custom ConfigMap.

- Verify that the two ConfigMaps were created successfully:
oc get cm $INSTALL_NAME-sizing $INSTALL_NAME-topology-sizing
If the ConfigMaps are created successfully, output similar to the following is displayed:

NAME                         DATA   AGE
evtmanager-sizing            1      14m
evtmanager-topology-sizing   1      12m
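If you also want to confirm that the sizing data itself was stored, you can print the start of each ConfigMap's data key. This is an optional check (it requires cluster access and assumes the INSTALL_NAME variable is still set in your shell):

```shell
# Print the first lines of the stored sizing data for each ConfigMap.
oc get cm ${INSTALL_NAME}-sizing -o jsonpath='{.data.noi}' | head
oc get cm ${INSTALL_NAME}-topology-sizing -o jsonpath='{.data.asm}' | head
```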
- Restart the noi-operator and asm-operator operators:

  oc delete po -l name=noi-operator
  oc delete po -l name=asm-operator

  When the operators restart, a rolling update of the affected pods begins. The changes are applied when all pods are in a Running state.
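To confirm that the rolling update finished, you can list any pods that are not yet Running; an empty result means all pods have reached the Running state. This optional check assumes your current project is the Netcool Operations Insight namespace:

```shell
# List pods that are not yet in the Running phase.
oc get pods --field-selector=status.phase!=Running
```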