Resolving the mounting, unmounting, or remounting issues in IBM Spectrum Scale file systems

The following issues have been observed when an IBM Spectrum Scale file system is not properly mounted, remounted, or unmounted.

  • The mmunmount command does not report an error even though I/O operations continue inside a container after the user has unmounted the file system.

    Note: In this case, a container might still be accessing an IBM Spectrum Scale file system through a link, although the mmunmount command has reported that the file system has been unmounted.
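    One way to check for this situation is to verify whether the mount point is still listed in /proc/mounts and whether any processes hold open files under it. The following is a minimal sketch; the mount point /gpfs is only an example, and the lsof call assumes that utility is installed on the node:

    ```shell
    #!/bin/sh
    # Sketch: check whether a mount point is still present and busy.
    # /gpfs is an assumed example path; substitute your mount point.

    is_mounted() {
        # Return 0 if the given path appears as a mount point in /proc/mounts.
        grep -qs " $1 " /proc/mounts
    }

    if is_mounted /gpfs; then
        echo "/gpfs is still mounted"
        # List processes with open files on the file system (requires lsof).
        lsof +f -- /gpfs 2>/dev/null || true
    else
        echo "/gpfs is not mounted"
    fi
    ```

    If lsof shows container processes with open files under the mount point, the unmount reported by mmunmount did not actually release the file system.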
  • In some cases, remounting the file system fails, and the system might display EBUSY errors similar to the following:
    mmmount: Mounting file systems ...
    
    mount: gpfs is already mounted or /gpfs busy
    
    mmmount: Command failed. Examine previous error messages to determine cause.

    These errors might occur when an IBM Spectrum Scale file system was unmounted while a pod was still accessing it. Follow these steps to check for such issues and resolve them.

    1. Find the ID of the pod that is in the Terminating state by using the following command:
      kubectl get pod --no-headers -o custom-columns=wwn:metadata.uid <pod name>
    2. Find the node on which the pod was scheduled by using the following command:
      kubectl get pod --no-headers -o custom-columns=wwn:spec.nodeName <pod name>
    3. Check whether a broken soft link exists on the node. If the soft link is broken, delete it.
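    Step 3 can be sketched as a small shell helper that scans a directory for soft links whose targets no longer exist. The directory that holds the per-pod soft links varies by deployment, so the default path below is only a placeholder, not the actual location used by the plugin:

    ```shell
    #!/bin/sh
    # Sketch: find and remove broken soft links under a directory.
    # LINK_DIR is a placeholder default; on a real node, use the directory
    # where the pod's soft link into the file system was created.
    LINK_DIR="${1:-/var/lib/kubelet/pods}"

    find "$LINK_DIR" -maxdepth 2 -type l 2>/dev/null | while read -r link; do
        # A symlink whose target is gone fails the -e test.
        if [ ! -e "$link" ]; then
            echo "Removing broken soft link: $link"
            rm -f "$link"
        fi
    done
    ```

    The helper accepts the directory as its first argument, so it can be pointed at whichever path the broken link was found under in step 3.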

    You can now remount the IBM Spectrum Scale file system by using the mmmount command. For more information about the mmmount and mmunmount commands, see the IBM Spectrum Scale: Command and Programming Reference guide in the IBM Spectrum Scale Knowledge Center.

    Note: You might need to restart Docker on the node with the terminated pod. Use the systemctl restart docker command to restart Docker.