Managing storage volumes

You can create and manage connections to storage volumes on your existing IBM® Cloud Pak for Data storage devices.

Many enterprise applications use a mounted file system to work with data sets. For example, many Spark jobs process CSV, Parquet, and Avro files that are stored on a POSIX-compliant shared file system that all of the executors can access. You might also need to store source code or extra packages for your Spark jobs on a mountable, shared file system. You can use a volume instance to store these files by creating a connection to a storage volume.

Requirements

You must have the Create service instances permission in Cloud Pak for Data. You can check which permissions you have from your profile.

To use persistent volume claims, a cluster administrator must complete at least one of the following tasks:

  • Set up dynamic storage on the cluster.
  • Create persistent volume claims that point to the storage that you want to use.

For more information, see Storage considerations.
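If your cluster administrator pre-creates persistent volume claims (the second option above), a claim manifest might look like the following sketch. The claim name, storage class, and size are assumptions; substitute values that exist on your cluster. The manifest is shown as a Python dict so that it could be submitted with a Kubernetes client library.

```python
# Sketch of a PersistentVolumeClaim that a cluster administrator might
# pre-create for use as a storage volume. All names and sizes here are
# assumptions; substitute values that exist on your cluster.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "cpd-shared-volume"},  # hypothetical claim name
    "spec": {
        # ReadWriteMany lets the claim back a shared, mountable file system.
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "managed-nfs-storage",  # hypothetical class name
        "resources": {"requests": {"storage": "20Gi"}},
    },
}
```

With the official Kubernetes Python client, for example, `CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc_manifest)` would submit a claim of this shape.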

Create a storage volume connection

You can create a storage volume connection on the Platform connections page. For more information, see Connecting to data sources.

Establish access to a volume

Note: The NFS server must be accessible from the OpenShift® worker nodes through a low-latency network. The NFS server should also be resilient and highly available.
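As a quick sanity check of the reachability requirement in this note, you can test whether the NFS port answers from a worker node. This is a minimal sketch: the hostname is hypothetical, and a successful TCP connection shows network reachability only, not export permissions or latency.

```python
import socket

def nfs_reachable(host: str, port: int = 2049, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the NFS port succeeds.

    Port 2049 is the standard NFS port. A successful connection proves
    only that the server is reachable over the network, not that the
    export is mountable or that latency is acceptable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical NFS server hostname for illustration only.
# nfs_reachable("nfs.example.com")
```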

You can establish access to a volume on an external NFS storage server or a persistent volume claim (PVC).

  1. From the navigation menu, select Administration > Storage volumes.
  2. Click New volume.
  3. Enter the required information about the volume and select the Volume type.
    • Name: Enter the name of the volume. Do not include special characters or blanks in the name of the volume.
    • Description: Optionally, enter a description of the volume.
    • Volume type: Select one of the following options:
      External NFS
      • NFS server: Specify the IP address or the fully qualified hostname of the NFS server.
      • Exported path: Specify the exported directory path that is configured on the NFS server. For example, /shared/data.
      • Mount path: Specify the directory path from which users access the contents of this volume. You can reuse the exported path, for example, /shared/data.
      Existing PVC
      • Existing PVC: Select the existing persistent volume claim that you want to give users access to.
      • Mount path: Specify the directory path from which users access the contents of this volume. For example, /shared/data.
      New PVC
      • Storage class: Specify a storage class. A cluster administrator can create storage classes to define different types of storage. Work with your cluster administrator to determine which storage class to use.
      • Size in GB: Enter the amount of storage to allocate to this volume. The size is constrained by either the total amount of storage on the storage device or the storage class configuration.
      • Mount path: Specify the directory path from which users access the contents of this volume. For example, /shared/data.
  4. Click Add.

    After a volume instance is created, you can mount the volume in the appropriate pods in your Cloud Pak for Data deployment. The mount path to the storage volume is prefixed with /mnts/, and you can specify a path within this directory.
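To illustrate the /mnts/ prefix, the following sketch shows how a volume's mount path maps to the path that a pod sees. The joining logic is a simplified assumption of the behavior described above, and the example path is hypothetical.

```python
from pathlib import PurePosixPath

def resolved_mount_path(mount_path: str) -> str:
    """Return the in-pod path for a storage volume mount path.

    Storage volume mount paths are exposed under the /mnts/ prefix.
    This joining logic is a simplified sketch of that behavior.
    """
    # Strip any leading slash so the path joins cleanly under /mnts/.
    return str(PurePosixPath("/mnts") / mount_path.lstrip("/"))

print(resolved_mount_path("/shared/data"))  # /mnts/shared/data
```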

Manage access to a storage volume connection

Note: You cannot manage access to a volume if it is not running. Check the status of the volume on the Storage volumes page.

You can specify which users can access the storage volume so that only authorized users have access.

  1. On the Storage volumes page, click the Options icon for a volume, and then click Manage access.
  2. Click Add users to add the users who need access to the storage volume.
  3. Select the users and assign each user the Editor, Viewer, or Admin role.
  4. Click Add.

As the creator of a volume instance, you can remove access to a storage volume.

  1. On the Access management page, click the Remove icon for a user, or select multiple users and then click the Remove icon in the toolbar.

    The removed users can no longer access the volume.

View details about a storage volume connection

You can view a list of the available storage volumes, the number of users with access to each volume, and the status of each volume.

  1. On the Storage volumes page, click a volume name to see details.
  2. To generate or revoke an API key for this volume, click Instance API key.
  3. To copy the access token for this volume, click the Copy icon.
  4. To regenerate the access token for this volume, click the Regenerate token icon.
  5. To manage access to the storage volume or delete the storage volume, click the Options icon.

You can use the endpoint and the access token of the storage volume in calls to the Volumes API.
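For example, a script can pass the access token as a bearer token when calling the Volumes API. In this sketch the endpoint URL and token are hypothetical placeholders; copy the real endpoint and access token from the volume's details page.

```python
import urllib.request

def volumes_api_request(endpoint: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for a Volumes API endpoint.

    The bearer-token header uses the volume's access token. The example
    endpoint path below is an assumption for illustration; copy the real
    endpoint from the volume's details page.
    """
    return urllib.request.Request(
        endpoint,
        headers={"Authorization": f"Bearer {token}"},  # volume access token
    )

# Hypothetical endpoint and truncated token for illustration only.
req = volumes_api_request(
    "https://cpd.example.com/zen-volumes/my-volume/v1/volumes/directories/",
    "eyJhbGciOi...",
)
```

The request could then be sent with `urllib.request.urlopen(req)`, or the same header shape can be reused with any HTTP client.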

Delete a storage volume connection

You can delete a connection to a storage volume.

  1. From the Storage volumes page, click the Options icon for a volume.
  2. Click Delete. The connection is deleted and all users' access to the volume is removed. Users and applications can no longer connect to this volume.
Important: The data inside of the volume is not deleted and services can continue to use the volume until a Red Hat® OpenShift project administrator removes the persistent volume claim to reclaim the storage. The reclaim policy that is specified in the storage class determines what happens when the persistent volume claim is deleted.

Browse the storage volume and upload content

You can view the content (files and directories) in a storage volume, add or delete files and directories, upload or download files, and extract the contents of files and directories. For more information, see Managing persistent volume instances with the Volumes API.
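As a sketch of uploading a file through the Volumes API, the following builds a PUT request against an assumed upload path. The endpoint, token, path segments, and request encoding are all assumptions for illustration; consult the Volumes API reference for the real upload endpoint and its required format.

```python
import urllib.request

def upload_file_request(endpoint: str, token: str,
                        volume_path: str, data: bytes) -> urllib.request.Request:
    """Build a PUT request that uploads `data` to `volume_path` in a volume.

    The URL pattern below is an assumption sketched for illustration; the
    real Volumes API upload endpoint and encoding may differ, so check the
    API reference before using this shape.
    """
    url = f"{endpoint.rstrip('/')}/files/{volume_path.lstrip('/')}"
    return urllib.request.Request(
        url,
        data=data,
        method="PUT",
        headers={"Authorization": f"Bearer {token}"},  # volume access token
    )

# Hypothetical endpoint, token, and file path for illustration only.
req = upload_file_request(
    "https://cpd.example.com/zen-volumes/my-volume/v1/volumes",
    "eyJhbGciOi...",
    "jobs/wordcount.py",
    b"print('hello')",
)
```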