
Cloud object storage offload configuration: IBM Spectrum Protect™ Plus V10.1.3

Preventive Service Planning


Abstract

This document details the cloud object storage offload configuration recommendations for IBM® Spectrum Protect™ Plus V10.1.3.

Content

This document is divided into linked sections for ease of navigation. Use the links below to jump to the section of the document you need.
 

General
Default cache area
Sizing
Creating the cache area
Expanding the cache area if it already exists
 


General

For all functionality related to offload to or restore from cloud object storage, each vSnap server requires a disk cache area. This document provides recommendations for sizing the cache area and instructions for creating and expanding it.

The cache area performs the following functions:

  • During offload operations, it serves as a temporary staging area for objects that are pending upload to the cloud object storage endpoint.
  • During restore operations, it caches downloaded objects and stores any temporary data that may be written into the restore volume.

Most of the cache space is freed at the end of each offload or restore operation, but a small amount may remain in use to cache metadata that speeds up subsequent operations.

The cache area must be configured in the form of an XFS filesystem mounted at /opt/vsnap-data on the vSnap server. If this mount point is not configured, offload or restore jobs will fail with the error: "Cloud functionality disabled: Data disk /opt/vsnap-data is not configured."

Note: Do not unmount or manipulate files under /opt/vsnap-data while any offload or restore jobs are active. Once you have ensured that no jobs are active, it is safe to perform any maintenance activities such as unmounting and reconfiguring the cache area.

The data stored under /opt/vsnap-data is also safe to delete as long as no offload or restore jobs are active. However, deleting this data may force vSnap to re-download data from the cloud object storage endpoint during the next offload or restore operation, which may introduce a small delay in the job.
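
One way to confirm that no process is still reading or writing data under the cache area before performing maintenance is to check for open files with lsof. This is a general Linux check offered as a suggestion, not a product requirement; also verify in IBM Spectrum Protect Plus that no offload or restore jobs are running:

    $ sudo lsof +D /opt/vsnap-data

If the command produces no output, no process currently holds files open under /opt/vsnap-data.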


For more information about offloading, see the IBM Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSNQFQ_10.1.3/spp/c_spp_offload.html

Default cache area

Depending on the installation method and version of IBM Spectrum Protect Plus that was initially deployed, a default cache may or may not already be present on the system.

For new installations starting at version 10.1.3:

  • When the vSnap server is deployed as a virtual appliance, the cache area is already present as a pre-configured 128 GB data disk mounted at /opt/vsnap-data.
  • When the vSnap server is installed on a custom server, the cache area must be configured manually.

For systems upgraded from version 10.1.2 to version 10.1.3:

  • A default, pre-configured cache area of 128 GB may already be present and mounted at /opt/vsnap-data if the system was previously deployed as a virtual appliance starting with version 10.1.2. If the system was previously upgraded from version 10.1.1, the cache area will not be present.

Use the "df" command on the vSnap server to confirm the presence of the mount point /opt/vsnap-data. If the mount point is not present, it must be configured manually.


Sizing

Although the cache area is sized at 128 GB as a starting point, it must be expanded based on the size of the vSnap pool on that system. The table below shows some general recommendations for sizing of the cache area.

Size of vSnap pool      Size of cache area
1 TB                    128 GB
10 TB                   500 GB
25 TB                   750 GB
50 TB                   1 TB
100 TB                  1.5 TB
200 TB or above         1.5 TB
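
To determine the current size of the vSnap pool, list the pool from the vSnap console. The command below is the vSnap CLI pool listing; if your installed version uses different options, consult the CLI help:

    $ vsnap pool show

Use the reported pool size to select the corresponding cache area size from the table above.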


Creating the cache area

Note: The sample commands below assume they are being run as the user "serveradmin". If running as "root", the "sudo" prefix can be omitted.

  •  Attach one or more disks to the vSnap system. The cumulative size of the disk(s) should be based on the sizing guidelines described above.

  • On the vSnap console, rescan to discover newly attached disks, then list them and identify their names, for example /dev/sdx and /dev/sdy.
    $ vsnap disk rescan
    $ vsnap disk show
    OR
    $ sudo lsblk

  • Create a Physical Volume on each disk, then create a Volume Group named "vsnapdata" that spans all the disks, and create a Logical Volume named "vsnapdatalv".
    $ sudo pvcreate /dev/sdx
    $ sudo pvcreate /dev/sdy
    $ sudo vgcreate vsnapdata /dev/sdx /dev/sdy
    $ sudo lvcreate -l 100%VG -n vsnapdatalv vsnapdata
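    Optionally, verify that the Volume Group and Logical Volume were created as expected by using the standard LVM reporting commands (/dev/sdx and /dev/sdy above are placeholders for your actual disk names):
    $ sudo pvs
    $ sudo vgs vsnapdata
    $ sudo lvs vsnapdata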

  • Create an XFS filesystem on the Logical Volume, create the mount point directory, and mount the volume.
    $ sudo mkfs.xfs /dev/mapper/vsnapdata-vsnapdatalv
    $ sudo mkdir -p /opt/vsnap-data
    $ sudo mount /dev/mapper/vsnapdata-vsnapdatalv /opt/vsnap-data

  • To ensure that the volume is remounted on reboot, edit the file /etc/fstab and append the following line to the end of the file:
    /dev/mapper/vsnapdata-vsnapdatalv  /opt/vsnap-data  xfs  defaults  0  0
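    To verify the new entry without rebooting, you can remount the volume from /etc/fstab and confirm the mount point. These are standard Linux checks, not steps required by the product, and assume no jobs are using the cache area yet:
    $ sudo umount /opt/vsnap-data
    $ sudo mount -a
    $ findmnt /opt/vsnap-data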

  • Run "df -h" and verify that the volume /opt/vsnap-data is mounted and has the desired size.
     


Expanding the cache area if it already exists

Note: The sample commands below assume they are being run as the user "serveradmin". If running as "root", the "sudo" prefix can be omitted.

  • Attach one or more disks to the vSnap system. The cumulative size of the disk(s) should be based on the amount of space you want to add to the existing cache area. Use the "df -h" command to view the existing size of the /opt/vsnap-data mount point.

  • On the vSnap console, rescan to discover newly attached disks, then list them and identify their names, for example /dev/sdx and /dev/sdy.
    $ vsnap disk rescan
    $ vsnap disk show
    OR
    $ sudo lsblk    

  • Create a Physical Volume on each disk, then add them to the existing Volume Group named "vsnapdata" to expand it, and then expand the existing Logical Volume named "vsnapdatalv".
    $ sudo pvcreate /dev/sdx
    $ sudo pvcreate /dev/sdy
    $ sudo vgextend vsnapdata /dev/sdx /dev/sdy
    $ sudo lvextend -l 100%VG /dev/mapper/vsnapdata-vsnapdatalv
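    Optionally, confirm that the Volume Group now includes the new disks and that the Logical Volume has grown to the expected size before growing the filesystem:
    $ sudo vgs -o vg_name,vg_size,vg_free vsnapdata
    $ sudo lvs -o lv_name,lv_size vsnapdata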

  • Extend the XFS filesystem to fully occupy the newly expanded Logical Volume.
    $ sudo xfs_growfs /dev/mapper/vsnapdata-vsnapdatalv

  • Run "df -h" and verify that the volume /opt/vsnap-data is mounted and has the desired new size.

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSNQFQ","label":"IBM Spectrum Protect Plus"},"Component":"Not Applicable","Platform":[{"code":"PF016","label":"Linux"}],"Version":"10.1.3","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
20 February 2019

UID

ibm10869560