Updating logging service data collection filters

By default, container logs from all workloads are collected. You can apply additional filtering to the log collection process.

Two types of filters are supported: node filters, which restrict log collection to hosts whose labels match a selector, and namespace filters, which restrict log collection to a list of namespaces.

Complete the following steps if more host labels are needed for node filtering.

  1. Install the kubectl command line interface. See Installing the Kubernetes CLI (kubectl).

  2. Get a list of nodes by running the following command:

    kubectl get nodes --show-labels
    

    The command output resembles the following text:

    NAME      STATUS    AGE       VERSION                    LABELS
    9.1.1.1   Ready     5h        v1.7.3-11+f747daa02c9ffb   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu/nvidia=NA,kubernetes.io/hostname=9.1.1.1,role=master
    9.1.1.2   Ready     4h        v1.7.3-11+f747daa02c9ffb   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu/nvidia=NA,kubernetes.io/hostname=9.1.1.2
    9.1.1.3   Ready     4h        v1.7.3-11+f747daa02c9ffb   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,gpu/nvidia=NA,kubernetes.io/hostname=9.1.1.3,management=true
    
  3. Label the nodes on which to run Filebeat by running the following command. Replace <node_name> with the name of a node that should run Filebeat; myfilebeat=true is a label that can later be used to match that node for the Filebeat deployment. Any label that conforms to Kubernetes standards will work.

    kubectl label node <node_name> myfilebeat=true
    
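    To verify that the label was applied, list the nodes that match it. For example:

    kubectl get nodes -l myfilebeat=true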

Complete the following steps to update data collection filters.

  1. Extract existing logging chart parameters.

    • Run the following command to extract Helm parameters:

      helm get values logging --tls > values-old.yaml
      
    • Optionally, reapply prior adjustments. Any Kubernetes resource manifest adjustments that are made by using the kubectl command, such as replica counts, JVM heap sizes, or container memory limits, are overridden by the values that are defined in the chart parameters. If prior Kubernetes resource manifests were adjusted, make sure that you apply the same adjustments to values-old.yaml, as illustrated in the sketch that follows.
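
      For example, if a container memory limit was previously raised by editing a manifest with kubectl, carry the equivalent setting into values-old.yaml. The parameter shown here is illustrative only; actual parameter names depend on your chart version:

      elasticsearch:
        client:
          # hypothetical example of a previously applied adjustment
          memoryLimit: 2048M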

  2. Prepare chart parameters.

    • Create a values-override.yaml file to include the following settings.

      filebeat:
        scope:
          # logs are only collected from hosts with all matching label key/value pairs
          # no filtering is applied if empty
          nodes:
            planet: jupiter
            system: solar
          # logs are only collected from listed namespaces
          # no filtering is applied if empty
          namespaces:
            - europa
            - ganymede
      

      See the following notes:

      • filebeat.scope.nodes uses the Kubernetes node selector format.
      • filebeat.scope.nodes and filebeat.scope.namespaces can be used separately. If both values are set, only logs that meet both criteria are collected.
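
      For the node selector in the example above to match, a node must carry both labels. You can apply them with a single kubectl command:

      kubectl label node <node_name> planet=jupiter system=solar
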
  3. Download the chart.

    • Identify the chart version.

      Logging chart versions vary based on the installed version of your product. You can use the console to find chart versions in the service catalog. The logging chart is identified by the name ibm-icplogging under the mgmt-repo repository. You can also select SOURCE & TAR FILES in the console to find a local link to the chart.

    • Download the chart .tar file. Run the following command, using the local link that you found in the previous substep.

       curl -k https://<Cluster Master Host>:<Cluster Master API Port>/mgmt-repo/requiredAssets/ibm-icplogging-x.y.z.tgz > ibm-icplogging-x.y.z.tgz
      

      For more information, see Accessing your cluster by using the console.
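
      Optionally, inspect the downloaded archive to confirm the chart name and version before you upgrade:

       helm inspect chart ibm-icplogging-x.y.z.tgz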

  4. Upgrade the Helm chart.

    Run the following command. Replace x.y.z with the version that you found in Step 3:

    helm upgrade logging ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --recreate-pods --force --timeout 600 --tls
    
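    Because the --recreate-pods flag is set, the logging pods are restarted during the upgrade. You can watch them come back up; this assumes the logging service runs in the kube-system namespace, which is the default for this chart:

    kubectl get pods --namespace kube-system | grep logging
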
  5. The logging service becomes available in approximately 5 to 10 minutes. You can also check the Helm upgrade status by using the following command:

     helm history --tls logging
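
     To confirm that the node filter took effect, check the node selector on the Filebeat daemonset. The daemonset name used here, logging-elk-filebeat-ds, is typical for this chart but might differ in your release:

     kubectl get daemonset logging-elk-filebeat-ds --namespace kube-system -o jsonpath='{.spec.template.spec.nodeSelector}'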