After the IBM installation team installs the connector nodes, the customer can decide at any time to
begin using the Fibre Channel backup functionality.
Before you begin
Important:
- The following procedure applies to Cloud Pak for Data System
versions 1.x.
- The following procedure is only for the filesystem connector method. Alternatively, you can use a
third-party LAN-free backup product such as TSM/Spectrum Protect, as described in Setting up Tivoli Storage Manager 8.1.10 LAN free. Ignore the
following procedure if you plan to use TSM/Spectrum Protect.
NPS must be on version 11.2.1.x to use this functionality.
Customers using connector nodes must supply 4 FC ports per connector node. Link speed is up to the
customer; the feature was tested at 16 Gb, and speeds up to and including 32 Gb are supported.
The number of LUNs is up to the customer. All LUNs that recognize the WWPNs of the connector nodes
(that is, all LUNs that the connector nodes can discover) are used in parallel.
Fibre Channel over Ethernet (FCoE) is not supported on connector nodes.
Cloud Pak for Data System (CPDS) does not support adding
non-CPDS nodes (such as an external TSM server or other servers) to the same GPFS cluster running on the
System control or connector nodes.
Procedure
- Connect the FC cables to the connector nodes. Verify the link on both ends:
  - On the customer data center end, use the storage appliance (array) GUI or CLI to confirm the setup.
  - On Cloud Pak for Data System, check /sys/class/fc_host/host*/speed and the other parameters in that
    directory on the connector nodes to confirm link speed and status, as in the example after this step.
    Also check the link lights.
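A minimal check from a connector node shell might look like the following sketch. The host numbers and reported values vary by system; port_state and speed are the standard Linux fc_host sysfs attributes rather than anything specific to Cloud Pak for Data System.

    # Show link state and negotiated speed for every FC host port on this node
    for h in /sys/class/fc_host/host*; do
        echo "== $h =="
        cat "$h/port_state"   # expect Online when the link is up
        cat "$h/speed"        # for example, 16 Gbit at the tested speed
    done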
- Run ap apps disable vdb and monitor ap apps output to make sure that VDB ends up in the DISABLED
  state, as in the example after this step. Note: This is when the NPS outage starts.
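For example (the exact ap apps output format can vary by release):

    # Stop the NPS (VDB) application; this is the start of the NPS outage
    ap apps disable vdb

    # Re-run until VDB is reported as DISABLED
    ap apps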
- Run the storage_setup script to auto-generate the GPFS filesystem called ext_mnt, which will be
  backed by the customer's FC LUNs:
  - ssh to the first connector node of the set. The hostname depends on the size of the system. To
    identify the connector nodes, see ap node output. Take the first ordinal number, such as
    enclosureX.nodeY; the hostname is then eXnY. ssh to this hostname.
  - Run /opt/ibm/appliance/platform/xcat/scripts/storage/storage_setup -e san. This script is
    interactive and guides you through the process of setting up the storage. It prompts for
    confirmation before any LUNs are overwritten. See the sketch after this step.
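Assuming, for illustration only, that the first connector node maps to hostname e1n1 (the real enclosure and node ordinals come from your ap node output), the sequence is roughly:

    # Identify the connector nodes and note the first ordinal (enclosureX.nodeY)
    ap node

    # Log in to the first connector node (e1n1 is a placeholder hostname)
    ssh e1n1

    # Run the interactive storage setup for SAN-backed external storage;
    # it asks for confirmation before any discovered LUNs are overwritten
    /opt/ibm/appliance/platform/xcat/scripts/storage/storage_setup -e san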
  - In some cases, the LUNs are not discovered when they are hot-plugged. If that is the case, a
    reboot of the connector nodes is required to make them discover the LUNs. If that does not help,
    it is likely that the customer needs to add the FC HBA WWPNs to their data center storage admin
    settings to grant access to them. The FC HBA WWPNs can be read from
    /sys/class/fc_host/host*/port_name, as in the example after this step.
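To collect the WWPNs to hand to the storage administrator, read the standard Linux sysfs attribute on each connector node, for example:

    # Print the WWPN of every FC host port on this connector node
    cat /sys/class/fc_host/host*/port_name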
- The LUNs are not RAIDed by Cloud Pak for Data System. It is up to the customer to configure RAID on
  their end if they want RAID backing their LUNs. The GPFS filesystem in this case does not use
  replication; it stripes data across all customer-supplied LUNs for maximum performance. It is
  assumed that the customer may already be using a RAIDed storage appliance behind their LUNs. If
  JBOD is used, there is no replication or parity.
- The filesystem is mounted at /opt/ibm/appliance/storage/external_mount/SAN. The NPS host container
  on the connector nodes already defines this mount in its dockerfile, so when NPS is started again,
  the mount is available inside the container.
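A quick way to confirm the new filesystem from a connector node is sketched below; the mmlsmount path assumes the standard GPFS installation directory, and ext_mnt is the filesystem that storage_setup created.

    # Confirm the SAN-backed filesystem is mounted on the connector node
    df -h /opt/ibm/appliance/storage/external_mount/SAN

    # Optional GPFS view: list the nodes on which ext_mnt is mounted
    /usr/lpp/mmfs/bin/mmlsmount ext_mnt -L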
- To start NPS again and end the outage, run ap apps enable vdb and monitor ap apps output until VDB
  is ENABLED. Then docker exec into the container as usual and run nzstart as usual, as in the
  example after this step.
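A sketch of this step; the container name passed to docker exec is a placeholder, so check docker ps on the connector node for the actual NPS host container name.

    # Re-enable the NPS (VDB) application and watch until it shows ENABLED
    ap apps enable vdb
    ap apps

    # Enter the NPS host container (ipshost1 is a placeholder; see docker ps)
    docker exec -it ipshost1 /bin/bash

    # Inside the container, start NPS as usual
    nzstart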
- From there, nzbackup can be run to take backups as usual, but this time with the filesystem
  connector pointing to the FC storage mount that is local to the container. Run df -H inside the
  container to see the /external_mount/SAN mount, create directories under it, and use those
  directories for the nzbackup and nzrestore commands, as in the sketch after this step.
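A minimal sketch, run inside the NPS container. The database name MYDB and the backups directory are placeholders; -db and -dir are the standard nzbackup and nzrestore options for a filesystem target.

    # Confirm the SAN-backed mount is visible inside the container
    df -H /external_mount/SAN

    # Create a directory on the FC-backed filesystem (name is illustrative)
    mkdir -p /external_mount/SAN/backups

    # Back up a database to the FC storage
    nzbackup -db MYDB -dir /external_mount/SAN/backups

    # Restore that database from the same location
    nzrestore -db MYDB -dir /external_mount/SAN/backups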