Installing and configuring Grafana

In Red Hat® OpenShift® Container Platform, install and configure the Grafana operator into the openshift-user-workload-monitoring project by using the OperatorHub.

About this task

Starting in Maximo® Application Suite versions 8.10.10 and 8.11.7, you can follow these steps to install and configure Grafana. Instead of following the manual steps, you can run a playbook to install and set up Grafana. For more information, see cluster_monitoring.

Procedure

  1. Create a project to install the operator.
    1. In the Red Hat OpenShift Container Platform console, click Home > Projects.
    2. On the Projects page, click Create Project.
    3. In the Name field, enter openshift-user-workload-monitoring.
    4. Click Create.
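    Alternatively, if you prefer the command line, you can create the project with the oc CLI; this assumes that you are logged in to the cluster with sufficient privileges:
      oc new-project openshift-user-workload-monitoring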
  2. Install the Grafana Operator.
    1. In the Red Hat OpenShift Container Platform console, click Operators > OperatorHub.
    2. Select the openshift-user-workload-monitoring project.
    3. Search for the Grafana operator.
    4. Click the Grafana Operator tile.
    5. Select v5 as the update channel to install.
    6. Click Install.
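    Optionally, verify that the operator installed successfully by checking its ClusterServiceVersion; the exact CSV name varies by operator version:
      oc get csv -n openshift-user-workload-monitoring
    The Grafana Operator entry shows the Succeeded phase when the installation is complete.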
  3. Install the Grafana instance.
    1. In the Red Hat OpenShift Container Platform console, click Operators > Installed Operators.
    2. Click the Grafana Operator.
    3. Click the Grafana tab.
    4. To configure the Grafana instance, select the YAML tab and enter the following text:
      ---
      apiVersion: grafana.integreatly.org/v1beta1
      kind: Grafana
      metadata:
        name: mas-grafana
        namespace: openshift-user-workload-monitoring
        labels:
          dashboards: "grafanav5"
      spec:
        config:
          auth:
            disable_login_form: "false"
            disable_signout_menu: "true"
          log:
            level: warn
            mode: console
        dataStorage:
          accessModes:
            - ReadWriteOnce
          class: ocs-storagecluster-cephfs
          size: 10Gi
        deployment:
          strategy:
            type: Recreate
        route:
          spec: {}
      A route is created, and the public URL is included in that route definition in the openshift-user-workload-monitoring project.
  4. Click Create.
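    Optionally, verify the instance and retrieve the generated route from the command line; the route name is generated by the operator:
      oc get grafana mas-grafana -n openshift-user-workload-monitoring
      oc get route -n openshift-user-workload-monitoring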
  5. Update the subscription.yaml file and add the following configuration so that Grafana scans for dashboards across the whole cluster:

    config:
      env:
        - name: "DASHBOARD_NAMESPACES_ALL"
          value: "true"
        - name: "WATCH_NAMESPACE"
          value: ""
    After you save the subscription.yaml file, it includes the following content:
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: grafana-operator
      namespace: openshift-user-workload-monitoring
      labels:
        operators.coreos.com/grafana-operator.{{ grafana_namespace }}: ''
    spec:
      channel: v5
      installPlanApproval: Automatic
      name: grafana-operator
      source: community-operators
      sourceNamespace: openshift-marketplace
      config:
        env:
          - name: "WATCH_NAMESPACE"
            value: ""
          - name: "DASHBOARD_NAMESPACES_ALL"
            value: "true"
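    Instead of editing the file, you can apply the same environment variables to the existing subscription with a merge patch; this is a sketch that assumes the subscription is named grafana-operator:
      oc patch subscription.operators.coreos.com grafana-operator \
        -n openshift-user-workload-monitoring \
        --type merge \
        -p '{"spec":{"config":{"env":[{"name":"DASHBOARD_NAMESPACES_ALL","value":"true"},{"name":"WATCH_NAMESPACE","value":""}]}}}'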
  6. Run the following command to grant the Grafana service account the cluster-monitoring-view cluster role:
    oc adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-serviceaccount -n openshift-user-workload-monitoring
  7. Run the following command to obtain the bearer token (BEARER_TOKEN):
    oc serviceaccounts get-token grafana-serviceaccount -n openshift-user-workload-monitoring
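    On OpenShift Container Platform 4.11 and later, where the oc serviceaccounts get-token command is removed, you can create a token instead; note that tokens created this way are time-limited and might need to be refreshed:
      BEARER_TOKEN=$(oc create token grafana-serviceaccount -n openshift-user-workload-monitoring)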
  8. Create the GrafanaDataSource, which points to the Prometheus instance that you installed earlier.
    1. Click Operators > Installed Operators and select Grafana Datasource.
    2. Enter the following text and replace the ${BEARER_TOKEN} value with the value that you obtained in a previous step.
      ---
      apiVersion: grafana.integreatly.org/v1beta1
      kind: GrafanaDatasource
      metadata:
        name: mas-prom-grafanadatasource
        namespace: openshift-user-workload-monitoring
      spec:
        instanceSelector:
          matchLabels:
            dashboards: "grafanav5"
        datasource:
          name: prometheus
          type: prometheus
          access: proxy
          url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
          isDefault: true
          editable: true
          jsonData:
            httpHeaderName1: Authorization
            timeInterval: 5s
            tlsSkipVerify: true
          secureJsonData:
            httpHeaderValue1: 'Bearer ${BEARER_TOKEN}'
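  After the data source is created, you can verify connectivity to the Thanos Querier endpoint from a pod inside the cluster; this sketch assumes that curl is available in the pod and that the BEARER_TOKEN environment variable holds the token from the earlier step:
    curl -k -H "Authorization: Bearer ${BEARER_TOKEN}" \
      "https://thanos-querier.openshift-monitoring.svc.cluster.local:9091/api/v1/query?query=up"
  A JSON response that contains "status":"success" indicates that the data source URL and token are valid.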