Generating GPFS trace reports

Some issues might require low-level system detail accessible only through the IBM Storage Scale daemon and the IBM Storage Scale Linux kernel trace facilities.

In such instances, the IBM Support Center might request GPFS trace reports to facilitate rapid problem determination of failures.

The level of detail that is gathered by the trace facility is controlled by setting the trace levels using the mmtracectl command. For more information, see mmtracectl command in IBM Storage Scale documentation.

The following steps must be performed under the direction of the IBM Support Center.

  1. Enter the following command to access a running ibm-spectrum-scale-core pod:

     oc rsh -n ibm-spectrum-scale <ibm-spectrum-scale-core-pod>
    

    The pod must be in the Running state for the connection to succeed. It is best to pick a pod that is running on a node that is not exhibiting issues.
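
    For example, you can list the core pods, their status, and the nodes they are scheduled on with a command such as the following:

     oc get pods -n ibm-spectrum-scale -o wide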

    Complete the remaining steps from this shell, which runs inside the gpfs container of the core pod.

  2. Enter the mmchconfig command to set the dataStructureDump attribute to /var/adm/ras. This changes the default location where trace data is stored to a directory that persists on the host machine:

     mmchconfig dataStructureDump=/var/adm/ras/
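
    You can optionally verify that the change took effect with a command such as the following, which displays the current value of the attribute:

     mmlsconfig dataStructureDump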
    
  3. Set desired trace classes and levels.

    This part of the process is identical to a classic IBM Storage Scale installation. For more information, see Generating GPFS trace reports in IBM Storage Scale documentation.

     mmtracectl --set --trace={io | all | def | "Class Level [Class Level ...]"}
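
    For example, to trace only I/O operations, you might enter a command such as the following. The IBM Support Center typically specifies the exact trace classes and levels to set for the problem being investigated:

     mmtracectl --set --trace=io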
    
  4. Start the trace facility on all nodes by entering the following command:

     mmtracectl --start
    
  5. Re-create the problem.

  6. As soon as the problem to be captured occurs, stop the trace generation by entering the following command:

     mmtracectl --stop
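
    When tracing stops, the formatted trace reports are typically written to the directory that is set in dataStructureDump (/var/adm/ras in this procedure), usually with a trcrpt prefix. You can list them with a command such as the following:

     ls -l /var/adm/ras/trcrpt*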
    
  7. Turn off trace generation by entering the following command:

     mmtracectl --off