You can use the cluster monitoring dashboard to monitor the status of your cluster and applications.
The monitoring dashboard uses Grafana and Prometheus to present detailed data about your cluster nodes and containers. For more information about Grafana, see the Grafana documentation. For more information about Prometheus, see the Prometheus documentation.
Log in to the Common services console.
Note: When you log in to the console, you have administrative access to Grafana. Do not create more users within the Grafana dashboard, and do not modify the existing users or organizations.
To access the Grafana dashboard, click Menu > Monitor Health > Monitoring.
Alternatively, you can open https://<IP_address>:<port>/grafana, where <IP_address> is the DNS or IP address that is used to access the console, and <port> is the port that is used to access the console.
Note: If you are logged in as a Cluster Administrator, you can also access the Monitoring dashboard from the Administration Hub dashboard, which provides an overview of the cluster. It includes key metrics for various services and components, and links to other dashboards, pages, and consoles that you can use to administer them. To open the Administration Hub, click Home in the main navigation menu; only Cluster Administrators can access the Administration Hub dashboard. From the Administration Hub, click the Monitoring link on the Welcome widget to open the Grafana dashboard.
To access the Alertmanager dashboard, open https://<IP_address>:<port>/alertmanager.
To access the Prometheus dashboard, open https://<IP_address>:<port>/prometheus.
From the Grafana dashboard, open one of the following default dashboards.
Common Services Namespaces Performance IBM Provided 2.5
Provides information about namespace performance and status metrics.
Common Services Performance IBM Provided 2.5
Provides TCP system performance information about Nodes, Memory, and Containers.
Helm Release Metrics
Provides information about system metrics such as CPU and Memory for each Helm release that is filtered by pods.
Kubernetes Cluster Monitoring
Monitors Kubernetes clusters that use Prometheus. Provides information about cluster CPU, Memory, and Filesystem usage. The dashboard also provides statistics for individual
pods, containers, and systemd services.
Kubernetes POD Overview
Monitors pod metrics such as CPU, Memory, Network, pod status, and restarts.
NGINX Ingress controller
Provides information about NGINX Ingress controller metrics that can be sorted by namespace, controller class, controller, and ingress.
Node Performance Summary
Provides information about system performance metrics such as CPU, Memory, Disk, and Network for all nodes in the cluster.
Prometheus Stats
Dashboard for monitoring Prometheus v2.x.x.
Note: If you configure pods that use host level resources such as host network, the dashboards display the metrics of the host but not the pod itself.
If you want to view other data, you can create new dashboards or import dashboards from JSON definition files for Grafana.
Some exporters are provided to collect metrics. The exporters expose their metrics endpoints as Kubernetes services.
node-exporter
Provides the node-level metrics, including metrics for CPU, memory, disk, network, and other components.
kube-state-metrics
Provides the metrics for Kubernetes objects, including metrics for pod, deployment, statefulset, daemonset, replicaset, configmap, service,
job, and other objects.
collectd-exporter
Provides metrics that are sent from the collectd network plug-in.
In addition, some Kubernetes pods and components provide metrics endpoints for Prometheus, and Prometheus includes preconfigured scrape targets for them:
cAdvisor
Provides container metrics that include CPU, memory, network, and other components.
Prometheus
Provides metrics for the Prometheus server that include metrics for request handle, alert rule evaluation, TSDB status, and other components.
kubernetes-apiservers
Provides metrics for the Kubernetes API servers.
Prometheus displays scrape targets in its user interface as links. These addresses are typically not accessible from a user's browser as they are on the Kubernetes cluster internal network. Only the Prometheus server needs to be able to access the addresses.
A user with the role ClusterAdministrator, Administrator, or Operator can access the monitoring service. A user with the role ClusterAdministrator or Administrator can perform write operations in the monitoring service, including deleting Prometheus metrics data and updating Grafana configurations.
Starting with version 1.2.0, the ibm-icpmonitoring Helm chart offers a module that provides role-based access control (RBAC) for access to the Prometheus metrics data.
The RBAC module is effectively a proxy that sits in front of the Prometheus client pod. It examines requests for authorization headers and enforces role-based controls. In general, the RBAC rules are as follows:
A user with the ClusterAdministrator role can access any resource. A user with any other role can access data in only the namespaces for which that user is authorized.
If metrics data includes the kubernetes_namespace label, it is recognized as belonging to the namespace that is the value of that label. If metrics data has no such label, it is recognized as system-level metrics. Only users with the ClusterAdministrator role can access system-level metrics.
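For example, a user who is authorized for a namespace that is named test could query namespace-scoped metrics through the RBAC proxy by filtering on the kubernetes_namespace label. The following sketch assumes the container_cpu_usage_seconds_total metric and the API access pattern that is described in Accessing monitoring service APIs; the namespace and metric names are illustrative only:
curl -k -s -G -H "Authorization:Bearer $ACCESS_TOKEN" \
  --data-urlencode 'query=container_cpu_usage_seconds_total{kubernetes_namespace="test"}' \
  "https://<Cluster Master Host>:<Cluster Master API Port>/prometheus/api/v1/query"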
In an IBM Multicloud Manager hub cluster environment, users can access metrics from managed clusters. A user with the ClusterAdministrator role can access data from all managed clusters. A user with any other role can access data from only the managed clusters whose related namespaces that user is authorized to access.
Starting with version 1.5.0, the ibm-icpmonitoring Helm chart offers a new module that provides role-based access controls (RBAC) for access to the monitoring dashboards in Grafana.
In Grafana, users can belong to one or more organizations. Each organization contains its own settings for resources such as data sources and dashboards. For the Grafana running in IBM Cloud Pak for Multicloud Management, each namespace in IBM Cloud
Pak for Multicloud Management has a corresponding organization with the same name. For example, if you create a new namespace that is named test in IBM Cloud Pak for Multicloud Management, an organization that is named test is generated
in Grafana. If you delete the test namespace, the test organization is also removed. The only exception is the kube-system namespace. The corresponding organization for kube-system is the Grafana default
of Main Org.
Each Grafana organization includes a default data source that is named prometheus, which points to the Prometheus in the monitoring service, and a set of default dashboards.
All of the out-of-the-box monitoring dashboards that are mentioned in Accessing the monitoring dashboard are imported into the Main Org organization.
When you log in to IBM Cloud Pak for Multicloud Management, you can access a Grafana organization only if you are authorized to access the corresponding namespace. If you have access to more than one Grafana organization, use the Grafana console to
switch to a different organization. The message UNAUTHORIZED appears when you do not have access to a Grafana organization.
Different users access Grafana organizations by using different organization roles. In the corresponding namespace, if you are assigned the role of ClusterAdministrator or Administrator, you have Admin access to
the Grafana organization. Otherwise, you have Viewer access to the Grafana organization.
When you access Grafana as a user of IBM Cloud Pak for Multicloud Management, a user with the same name is created in Grafana. If the user in IBM Cloud Pak for Multicloud Management is deleted, the corresponding user is not deleted from Grafana. The user account becomes stale. Run the following command to request the removal of stale users:
curl -k -s -X POST -H "Authorization:$ACCESS_TOKEN" https://<Cluster Master Host>:<Cluster Master API Port>/grafana/check_stale_users
For information about Grafana APIs, see Accessing monitoring service APIs.
Note: Monitoring service does not provide RBAC support for Prometheus and Alertmanager alerts.
For information, see Installing IBM Cloud Platform Common Services in your OpenShift Container Platform cluster.
You can deploy the monitoring service with customized configurations from the Catalog in IBM Cloud Pak for Multicloud Management console.
From the Catalog, select the ibm-icpmonitoring Helm chart to configure and install it. Provide required values for the following parameters:
Helm release name: monitoring
Target namespace: kube-system
Mode: Managed
Alternatively, you can install the monitoring service from the CLI.
Install the Kubernetes command line (kubectl). For information about the kubectl CLI, see Installing the Kubernetes CLI (kubectl).
Install the Helm command-line interface (CLI). For more information, see Installing the Helm CLI (helm).
Install the ibm-icpmonitoring Helm chart. Run the following command.
helm install -n monitoring --namespace kube-system --set mode=managed --set clusterAddress=<IP_address> --set clusterPort=<port> ibm-icpmonitoring-1.4.0.tgz
<IP_address> is the DNS or IP address that is used to access IBM Cloud Pak for Multicloud Management console.
<port> is the port that is used to access IBM Cloud Pak for Multicloud Management console.
By default, user data in the monitoring service components such as Prometheus, Grafana, or Alertmanager, is not stored in persistent volumes. The user data is lost if the monitoring service component crashes. To store user data in persistent volumes, you must configure related parameters when you install the monitoring service. Use one of the following options to enable persistent volumes:
Use volumes that are dynamically provisioned. You must use a storage provider that supports dynamic provisioning. For example, you can configure GlusterFS to dynamically create persistent volumes. During configuration, select the checkbox for Persistent volume,
and provide values for the following parameters:
In the following example, the value of Field to select the volume is component. The value of Value of the field to select the volume is prometheus:
apiVersion: v1
kind: PersistentVolume
metadata:
name: monitoring-prometheus-pv
labels:
component: prometheus
.......
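For reference, a PersistentVolumeClaim that selects the preceding volume by its component: prometheus label might resemble the following sketch. The claim name, namespace, access mode, and storage size are assumptions for illustration only:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-prometheus-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      component: prometheus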
Use existing PersistentVolumeClaims. You must manually create the persistent volumes and persistent volume claims. During configuration, select the checkbox for Persistent volume, and provide a value for the Name of existing persistentVolumeClaim parameter.
You can configure the following Prometheus server parameters during preinstallation or postinstallation.
scrape_Interval
The parameter for the frequency to scrape targets. The default value is 1 minute (1m).
evaluation_Interval
The parameter for the frequency to evaluate rules. The default value is 1 minute (1m).
retention
The parameter for how long metrics data is retained before it is removed. The default value is 24 hours (24h).
resources.limits.memory
The parameter for the memory limit for the Prometheus container. The default value is 4096Mi. The Prometheus container crashes if the memory limit is too low. If that happens, you must increase the value of this parameter so that the Prometheus container can work correctly.
For monitoring service installation and IBM Cloud Pak for Multicloud Management, you can configure the parameters in the config.yaml before installation. For example, your config.yaml file might resemble the following content:
monitoring:
prometheus:
scrape_Interval: 1m
evaluation_Interval: 1m
retention: 24h
resources:
limits:
memory: 4096Mi
If you choose to install the monitoring service from the Catalog, you can configure the parameters in related console fields.
You can also update configuration parameters after you install the monitoring service by editing the Prometheus resource, monitoring-prometheus.
kubectl edit prometheus monitoring-prometheus -n kube-system
You can update values for spec.scrapeInterval, spec.evaluationInterval, spec.retention, and spec.resources.limits.memory in the monitoring-prometheus resource.
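For example, after you edit the resource, the relevant portion of its spec might resemble the following sketch; the values shown are illustrative only:
spec:
  scrapeInterval: 30s
  evaluationInterval: 30s
  retention: 48h
  resources:
    limits:
      memory: 8192Mi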
If you update the retention or resources.limits.memory values, the active Prometheus pod is deleted, and a new Prometheus pod is started.
The capability to install default alerts is available in version 1.3.0 of the ibm-icpmonitoring chart. Some alerts provide customizable parameters to control the alert frequency. You can configure the following alerts during installation; a CLI example that sets these values follows the list.
Node memory usage
Default alert that is triggered when node memory usage exceeds the threshold of 85%. The threshold is configurable, and the alert is installed by default. If you use the CLI, the following values control this alert:
| Field | Default Value |
|---|---|
| prometheus.alerts.nodeMemoryUsage.enabled | true |
| prometheus.alerts.nodeMemoryUsage.nodeMemoryUsageThreshold | 85 |
High CPU Usage
Default alert that is triggered when CPU usage exceeds the threshold of 85%. The threshold is configurable, and the alert is installed by default. If you use the CLI, the following values control this alert:
| Field | Default Value |
|---|---|
| prometheus.alerts.highCPUUsage.enabled | true |
| prometheus.alerts.highCPUUsage.highCPUUsageThreshold | 85 |
Failed jobs
Default alert that is triggered if a job did not complete successfully. This alert is installed by default. If you use the CLI, the following values control this alert:
| Field | Default Value |
|---|---|
| prometheus.alerts.failedJobs | true |
Pods terminated
Default alert that is triggered if a pod was terminated and did not complete successfully. This alert is installed by default. If you use the CLI, the following values control this alert:
| Field | Default Value |
|---|---|
| prometheus.alerts.podsTerminated | true |
Pods restarting
Default alert that is triggered if a pod restarts more than five times in 10 minutes. If you use the CLI, the following values control this alert:
| Field | Default Value |
|---|---|
| prometheus.alerts.podsRestarting | true |
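If you install from the Helm CLI, you can set the values in the preceding tables with --set flags. The following sketch reuses the helm install command from earlier in this topic and raises both thresholds to 90; the threshold values are illustrative only:
helm install -n monitoring --namespace kube-system --set mode=managed \
  --set clusterAddress=<IP_address> --set clusterPort=<port> \
  --set prometheus.alerts.nodeMemoryUsage.nodeMemoryUsageThreshold=90 \
  --set prometheus.alerts.highCPUUsage.highCPUUsageThreshold=90 \
  ibm-icpmonitoring-1.4.0.tgz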
You can use the Kubernetes custom resource, PrometheusRule, to manage alert rules in IBM Cloud Pak for Multicloud Management.
The following sample-rule.yaml file is an example of a PrometheusRule resource definition:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
labels:
component: icp-prometheus
name: sample-rule
spec:
groups:
- name: a.rules
rules:
- alert: NodeMemoryUsage
expr: ((node_memory_MemTotal_bytes - (node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes))/ node_memory_MemTotal_bytes) * 100 > 5
annotations:
DESCRIPTION: '{{ $labels.instance }}: Memory usage is greater than the 15% threshold. The current value is: {{ $value }}.'
SUMMARY: '{{ $labels.instance }}: High memory usage detected'
You must provide the following parameter values:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata.labels.component: icp-prometheus
spec: Contains the content of the alert rule. For more information, see Recording Rules.
You can migrate your existing monitoring AlertRule to the PrometheusRule.
You must change the format of any existing AlertRule that is not defined by the monitoring component. The following differences exist in the format of the .yaml file.
- The enabled flag is no longer supported. If the rule is created, it is active.
- The spec no longer includes data: |-. This change removes the big string rule format.
- The apiVersion is changed from monitoringcontroller.cloud.ibm.com/v1 to monitoring.coreos.com/v1.
- The kind parameter is changed from AlertRule to PrometheusRule.
- metadata.labels.component: icp-prometheus is mandatory.
Following is an example of the AlertRule.
apiVersion: monitoringcontroller.cloud.ibm.com/v1
kind: AlertRule
metadata:
name: failed-jobs
spec:
enabled: true
data: |-
groups:
- name: failedJobs
rules:
- alert: failedJobs
expr: kube_job_failed != 0
annotations:
description: 'Job {{ "{{ " }} $labels.exported_job {{ " }}" }} in namespace {{ "{{ " }} $labels.namespace {{ " }}" }} failed for some reason.'
summary: Failed job
After you migrate to PrometheusRule, your .yaml resembles the following example.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
labels:
component: icp-prometheus
name: failed-jobs
spec:
groups:
- name: failedJobs
rules:
- alert: failedJobs
expr: kube_job_failed != 0
annotations:
description: 'Job {{ "{{ " }} $labels.exported_job {{ " }}" }} in namespace {{ "{{ " }} $labels.namespace {{ " }}" }} failed for some reason.'
summary: Failed job
After you change your .yaml file, run the following command to load your new PrometheusRule and activate it on Prometheus.
kubectl create -f {file}
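To verify that the rule was loaded, you can query the custom resource. The following sketch assumes that you created the sample rule in the kube-system namespace:
kubectl get prometheusrule sample-rule -n kube-system -o yaml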
Edit the Kubernetes secret, alertmanager-monitoring-prometheus-alertmanager, to configure Prometheus Alertmanager to integrate external alert service receivers, such as Slack or PagerDuty, for IBM Cloud Pak for Multicloud Management.
kubectl edit secret alertmanager-monitoring-prometheus-alertmanager -n kube-system
Following is an example of the default secret configuration.
apiVersion: v1
data:
alertmanager.yaml: CiAgZ2xvYmFsOgogIHJlY2VpdmVyczoKICAgIC0gbmFtZTogZGVmYXVsdC1yZWNlaXZlcgogIHJvdXRlOgogICAgZ3JvdXBfd2FpdDogMTBzCiAgICBncm91cF9pbnRlcnZhbDogNW0KICAgIHJlY2VpdmVyOiBkZWZhdWx0LXJlY2VpdmVyCiAgICByZXBlYXRfaW50ZXJ2YWw6IDNo
kind: Secret
metadata:
name: alertmanager-monitoring-prometheus-alertmanager
type: Opaque
The content of alertmanager.yaml is base64 encoded. To update alertmanager.yaml, you must first decode it. Next, update alertmanager.yaml, and encode the updated content. Finally, replace the content in the secret
and save the change.
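A minimal sketch of this decode, edit, and encode workflow, assuming the default secret name in the kube-system namespace and GNU base64, might look like the following commands:
# Decode the current Alertmanager configuration to a local file.
kubectl get secret alertmanager-monitoring-prometheus-alertmanager -n kube-system \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 --decode > alertmanager.yaml
# Edit alertmanager.yaml to add your receivers, then encode the updated file.
# The -w0 option disables line wrapping so that the output is a single string.
base64 -w0 alertmanager.yaml
# Replace the alertmanager.yaml value in the secret with the new encoded string.
kubectl edit secret alertmanager-monitoring-prometheus-alertmanager -n kube-system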
Important: Secret changes are lost when you upgrade, roll back, or update the monitoring release. In addition, the secret format can change between releases.
Allow several minutes for the updates to take effect. Open the AlertManager dashboard at https://<Cluster Master Host>:<Cluster Master API Port>/alertmanager. <Cluster Master Host>:<Cluster Master API Port> is defined in the Master endpoints.
You can manage Grafana dashboards by operating on a Kubernetes custom resource MonitoringDashboard in IBM Cloud Pak for Multicloud Management. The following sample-dashboard.yaml file is an example of a MonitoringDashboard resource definition.
apiVersion: monitoringcontroller.cloud.ibm.com/v1
kind: MonitoringDashboard
metadata:
name: sample-dashboard
spec:
enabled: true
data: |-
{
"uid": null,
"title": "Marco Test Dashboard",
"tags": [ "test" ],
"timezone": "browser",
"schemaVersion": 16,
"version": 1
}
You must provide the following parameter values:
apiVersion: monitoringcontroller.cloud.ibm.com/v1
kind: MonitoringDashboard
spec.data: Contains the content of the Grafana dashboard definition file. For more information about dashboard files, see Dashboard JSON.
spec.enabled: Set the flag to specify whether the dashboard is enabled or disabled.
You can use kubectl to manage the dashboard. Use the -n option to specify the namespace in which this MonitoringDashboard is to be created. The dashboard is imported to the corresponding organization in Grafana.
Note: Do not set the id field in the dashboard .json file.
Create a dashboard resource in the default namespace by using the sample-dashboard.yaml file. The dashboard is imported into the default organization in Grafana.
kubectl apply -f sample-dashboard.yaml -n default
Edit the sample dashboard.
kubectl edit monitoringdashboards/sample-dashboard -n default
Delete the sample dashboard.
kubectl delete monitoringdashboards/sample-dashboard -n default
Modify the application to expose the metrics.
For applications that have a metrics endpoint, you must define the metrics endpoint as a Kubernetes service by using the annotation prometheus.io/scrape: 'true'. The service definition resembles the following code:
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: 'true'
labels:
app: liberty
name: liberty
spec:
ports:
- name: metrics
targetPort: 5556
port: 5556
protocol: TCP
selector:
app: liberty
type: ClusterIP
Note: For more information about configuring the metrics endpoint for Prometheus, see CLIENT LIBRARIES in the Prometheus documentation.
Applications can have more than one port defined in the service definition. You might not want to expose monitoring metrics on some ports, or have those ports discovered by Prometheus. You can add the annotation filter.by.port.name: 'true' so that ports whose names do not start with metrics are ignored by Prometheus. In the following service definition, Prometheus collects metrics from the metrics port and ignores the collector port.
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: 'true'
filter.by.port.name: 'true'
labels:
app: liberty
name: liberty
spec:
ports:
- name: metrics
targetPort: 5556
port: 5556
protocol: TCP
- name: collector
targetPort: 8443
port: 8443
protocol: TCP
selector:
app: liberty
type: ClusterIP
For applications that have a metrics endpoint with TLS enabled, you must use IBM Cloud Pak for Multicloud Management cert-manager to generate a secret and use it to configure the metrics endpoint.
Use cert-manager to create a certificate resource for a workload.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ .Release.Name }}-foo-certs
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ .Release.Name }}-foo-certs
issuerRef:
name: icp-ca-issuer
kind: ClusterIssuer
commonName: "foo"
dnsNames:
- "*.{{ .Release.Namespace }}.pod.cluster.local"
Mount the secret to your pod. You can retrieve the certificate and key from the mounted path. The tls.crt and tls.key files are under the mounted path. tls.crt includes a workload certificate and a CA certificate that you must use to configure the application metrics endpoint.
containers:
- image: foo-image:latest
name: foo
volumeMounts:
- mountPath: "/foo/certs"
name: certs
volumes:
- name: certs
secret:
# secretName should be the same as the one defined in step 1.
secretName: {{ .Release.Name }}-foo-certs
Define annotations on the workload service, prometheus.io/scrape: 'true' and prometheus.io/scheme: 'https', to allow Prometheus to use TLS to scrape metrics.
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: 'true'
prometheus.io/scheme: 'https'
For applications that use collectd and depend on collectd-exporter to expose metrics, you must update the collectd configuration file within the application container. In this configuration file, you must add the network plug-in and point it to the collectd exporter. Add the following text to the configuration file:
LoadPlugin network
<Plugin network>
Server "monitoring-prometheus-collectdexporter.kube-system" "25826"
</Plugin>
You can modify the time period for metric retention by updating the storage.tsdb.retention parameter in the config.yaml file. By default this value is set at 24h, which means that the metrics are kept for 24 hours
and then purged.
However, if you need to manually remove this data from the system, you can use the REST API that is provided by the Prometheus component.
The target URL must have the format:
https://<IP_address>:<Port>/prometheus
<IP_address> is the IP address that is used to access the console. <Port> is the port that is used to access the console.
The request to delete metrics data resembles the following URL.
https://<IP_address>:<Port>/prometheus/api/v1/admin/tsdb/delete_series?*******
The request to remove the deleted data and clean up the disk resembles the following URL.
https://<IP_address>:<Port>/prometheus/api/v1/admin/tsdb/clean_tombstones
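For example, requests that delete all series for a hypothetical metric and then clean up the disk might resemble the following sketch. The metric selector is illustrative only, and the authentication header follows the pattern that is described in Accessing monitoring service APIs:
# Delete all series that match the selector.
curl -k -s -X POST -H "Authorization:Bearer $ACCESS_TOKEN" \
  "https://<IP_address>:<Port>/prometheus/api/v1/admin/tsdb/delete_series?match[]=container_cpu_usage_seconds_total"
# Remove the deleted data from disk.
curl -k -s -X POST -H "Authorization:Bearer $ACCESS_TOKEN" \
  "https://<IP_address>:<Port>/prometheus/api/v1/admin/tsdb/clean_tombstones"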
You can access monitoring service APIs such as Prometheus and Grafana APIs. Before you can access the APIs, you must obtain authentication tokens to specify in your request headers. For information about obtaining authentication tokens, see Preparing to run component or management API commands.
After you obtain the authentication tokens, complete the following steps to access the Prometheus and Grafana APIs.
Access the Prometheus API at the URL https://<Cluster Master Host>:<Cluster Master API Port>/prometheus/*. For example, run the following command to get the boot times of all nodes. $ACCESS_TOKEN is the variable that stores the authentication token for your cluster. <Cluster Master Host> and <Cluster Master API Port> are defined in Master endpoints.
curl -k -s -X GET -H "Authorization:Bearer $ACCESS_TOKEN" https://<Cluster Master Host>:<Cluster Master API Port>/prometheus/api/v1/query?query=node_boot_time_seconds
For more information, see Prometheus HTTP API.
Access the Grafana API at the URL https://<Cluster Master Host>:<Cluster Master API Port>/grafana/*. For example, run the following command to obtain the sample dashboard. $ACCESS_TOKEN is the variable that stores the authentication token for your cluster. <Cluster Master Host> and <Cluster Master API Port> are defined in Master endpoints.
curl -k -s -X GET -H "Authorization: Bearer $ACCESS_TOKEN" "https://<Cluster Master Host>:<Cluster Master API Port>/grafana/api/dashboards/db/sample"
For more information, see Grafana HTTP API Reference.
You can customize the cluster access URL. For more information, see Customizing the cluster access URL. After you complete the customization, you must manually edit the Prometheus and Alertmanager resources and verify that all external links are correct.
Use kubectl to edit the monitoring-prometheus resource. For example,
kubectl edit prometheus monitoring-prometheus -n kube-system
In the monitoring-prometheus resource, change externalUrl:* to the following value:
externalUrl: https://<custom_host>:<custom_port>/prometheus
<custom_host> and <custom_port> are the customized host name and port that you defined in the custom cluster access URL.
Use kubectl to edit the monitoring-prometheus-alertmanager resource. For example,
kubectl edit alertmanager monitoring-prometheus-alertmanager -n kube-system
In the monitoring-prometheus-alertmanager resource, change externalUrl:* to the following value:
externalUrl: https://<custom_host>:<custom_port>/alertmanager
<custom_host> and <custom_port> are the customized host name and port that you defined in the custom cluster access URL.