Mounting PureData System for Analytics storage to connector nodes
Learn how to mount the SAN storage from PureData System for Analytics on Cloud Pak for Data System 1.0.X by using the connector node so that you can restore your data from PureData System for Analytics to Netezza Performance Server.
Procedure
- SSH to e1n1 on Cloud Pak for Data System and identify the connector node.
Example: ap node
[root@e1n1 ~]# ap node
+------------------+-----------+---------------+-----------+-----------+
| Node             | State     | Personality   | Monitored | Is Master |
+------------------+-----------+---------------+-----------+-----------+
| enclosure1.node1 | ENABLED   | CONTROL       | YES       | YES       |
| enclosure1.node2 | ENABLED   | CONTROL       | YES       | NO        |
| enclosure1.node3 | ENABLED   | CONTROL       | YES       | NO        |
| enclosure1.node4 | ENABLED   | WORKER        | YES       | NO        |
| enclosure2.node1 | ENABLED   | WORKER        | YES       | NO        |
| enclosure2.node2 | ENABLED   | WORKER        | YES       | NO        |
| enclosure2.node3 | UNMANAGED | VDB[IPS1NODE] | NO        | NO        |
| enclosure2.node4 | UNMANAGED | VDB[IPS1NODE] | NO        | NO        |
| enclosure3.node1 | ENABLED   | CN,VDB_HOST   | YES       | NO        |
+------------------+-----------+---------------+-----------+-----------+
The connector node is marked CN in the Personality column. In this example, the connector node is enclosure3.node1 (e3n1). You might have more than one connector node on the system.
- SSH to the first connector node.
Example: ssh FIRST_CONNECTOR_NODE
[root@e1n1 ~]# ssh e3n1
Last login: Mon Aug 16 07:43:31 2021 from e1n1
- On the connector node, add a device section for your specific storage device by editing /etc/multipath.conf.
The multipath.conf file is a standard file with a device structure built in. Some devices are already covered in the file. Do not change the defaults or blacklist sections. Add an extra device section only if you need to. Usually, you can obtain the device information from the storage vendor. If you are already using that device on PureData System for Analytics, you can get the information from /etc/multipath.conf on that system.
Following is an example of the multipath.conf file on Cloud Pak for Data System.
[root@e3n1 ~]# cat /etc/multipath.conf
defaults {
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    prio const
    path_checker readsector0
    rr_min_io 100
    max_fds 8192
    rr_weight priorities
    failback immediate
    no_path_retry fail
    user_friendly_names no
    find_multipaths yes
    polling_interval 5     # Added from RH
    fast_io_fail_tmo 5     # Added from RH
    dev_loss_tmo infinity  # Added from RH
    checker_timeout 15     # Added from RH
}
devices {
    device {
        vendor "IBM"
        product "FlashSystem-9840"
        path_selector "service-time 0"
        path_grouping_policy multibus
        path_checker tur
        rr_min_io_rq 4
        rr_weight uniform
        no_path_retry fail
        failback immediate
    }
    device {
        vendor "IBM"
        product "2145"
        path_grouping_policy "group_by_prio"
        path_selector "service-time 0"
        prio "alua"
        path_checker "tur"
        failback "immediate"
        no_path_retry fail
        rr_weight uniform
        rr_min_io_rq "1"
    }
}
blacklist {
    device {
        vendor "IBM"
        product "IPR-10 761E9100"
    }
    device {
        vendor "ATA"
        product "ThinkSystem M.2"
    }
}
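If your array is not already covered, a new device subsection inside the existing devices section might look like the following sketch. The vendor and product strings and the attribute values here are placeholders only; use the settings that your storage vendor recommends, or copy the matching section from /etc/multipath.conf on your PureData System for Analytics host. After you save the file, reload the multipath service.
device {
    # Placeholder values; replace with the multipath settings that your storage vendor recommends.
    vendor  "EXAMPLEVENDOR"
    product "ExampleArray"
    path_grouping_policy multibus
    path_checker tur
    no_path_retry fail
    failback immediate
}
systemctl reload multipathd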
- Confirm that your storage devices are seen on the connector node by running the lsblk or lsscsi command. If the storage devices are not seen, do the following steps.
  - Verify with your storage administrator that the WWPNs for the connector nodes' FC HBAs are added to the storage box hosts list to allow access. To retrieve these port names, run the following command on each connector node on your system.
Example: cat /sys/class/fc_host/host*/port_name
[root@e3n1 ~]# cat /sys/class/fc_host/host*/port_name 0x100000109b953a15 0x100000109b953a16 0x100000109b9539f9 0x100000109b9539fa
If you have two connector nodes, there are eight port names in total. You can add as many of the available port names as you want; the more you add, the better.
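To collect the port names from all connector nodes in one pass, a small helper such as the following can be run from e1n1. This is only a sketch: e3n1 is the connector node from this example, e4n1 is a hypothetical second connector node; replace the list with your own node names.
for node in e3n1 e4n1; do
    echo "== WWPNs on $node =="
    ssh "$node" 'cat /sys/class/fc_host/host*/port_name'
done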
  - Check whether your FC connections have a link. Check on your storage end and on the FC switch end, and run the following command on the connector node.
Example: cat /sys/class/fc_host/host*/port_state
[root@e3n1 ~]# cat /sys/class/fc_host/host*/port_state Online Online Online Online [root@e3n1 ~]# cat /sys/class/fc_host/host*/speed 16 Gbit 16 Gbit 16 Gbit 16 Gbit
A 16 Gbit link speed is not required; the ports support speeds up to 32 Gbit.
  - If the ports are added and the links are up, but you still do not see your storage devices on the connector nodes, run the following commands.
Example: echo 1 > /sys/class/fc_host/host18/issue_lip
$ echo 1 > /sys/class/fc_host/host18/issue_lip
$ echo 1 > /sys/class/fc_host/host19/issue_lip
$ echo 1 > /sys/class/fc_host/host20/issue_lip
$ echo 1 > /sys/class/fc_host/host21/issue_lip
$ /usr/bin/rescan-scsi-bus.sh --hosts={18,19,20,21}
$ systemctl reload multipathd
$ multipath -v3
  - Run lsblk or lsscsi again. If your storage devices are still not shown when you run the commands, restart with the following commands (a consolidated sketch of the sequence follows this list).
    - For each connector node, from e1n1, run the command.
      ap apps disable vdb
      To verify that the VDB state is DISABLED, run the command.
      ap apps
    - Disable each connector node that you have.
      ap node disable NODE_NAME
      To verify that the state is DISABLED, run the command.
      ap node NODE_NAME
    - Restart each connector node.
      The storage devices are visible now.
    - For each connector node, from e1n1, run the command.
      ap node enable CONNECTOR_NODE_NAME
      To verify that the nodes are ENABLED, run the commands.
      ap apps enable vdb
      ap apps
    - From e1n1, verify that the first connector node has the CN,VDB_HOST role.
      ap node
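For reference, when run from e1n1 with e3n1 as the only connector node, the sequence reads like the following sketch. The node name is taken from this example; adjust it to your system, and restart the node itself by whatever method you normally use.
ap apps disable vdb       # then run 'ap apps' until VDB shows DISABLED
ap node disable e3n1      # then run 'ap node e3n1' until it shows DISABLED
# restart the connector node by your usual method, then continue:
ap node enable e3n1
ap apps enable vdb        # then run 'ap apps' until VDB shows ENABLED
ap node                   # confirm that e3n1 has the CN,VDB_HOST role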
- As root, verify the results.
Example: lsblk
[root@e3n1 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 447.1G 0 disk ├─sda1 8:1 0 1000M 0 part /boot ├─sda2 8:2 0 97.7G 0 part /var/lib/docker ├─sda3 8:3 0 48.8G 0 part [SWAP] ├─sda4 8:4 0 1K 0 part ├─sda5 8:5 0 39.1G 0 part /var ├─sda6 8:6 0 29.3G 0 part /home ├─sda7 8:7 0 29.3G 0 part /tmp ├─sda8 8:8 0 4.9G 0 part /var/log/audit └─sda9 8:9 0 197.1G 0 part / sdb 8:16 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdc 8:32 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdd 8:48 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sde 8:64 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdf 8:80 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdg 8:96 0 5T 0 disk ├─sdg1 8:97 0 5T 0 part └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdh 8:112 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdi 8:128 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdj 8:144 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdk 8:160 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdl 8:176 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdm 8:192 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdn 8:208 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdo 8:224 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdp 8:240 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdq 65:0 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdr 65:16 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sds 65:32 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdt 65:48 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdu 65:64 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdv 65:80 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdw 65:96 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdx 65:112 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdy 65:128 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath 
└─360050768818134588000000001000023p1 253:4 0 5T 0 part sdz 65:144 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdaa 65:160 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdab 65:176 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part sdac 65:192 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdad 65:208 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdae 65:224 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdaf 65:240 0 5T 0 disk └─360050764e9b5d884e80000000100000b 253:1 0 5T 0 mpath └─360050764e9b5d884e80000000100000b1 253:2 0 5T 0 part sdag 66:0 0 5T 0 disk └─360050768818134588000000001000023 253:3 0 5T 0 mpath └─360050768818134588000000001000023p1 253:4 0 5T 0 part nvme0n1 259:0 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme1n1 259:2 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme2n1 259:4 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme3n1 259:7 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme4n1 259:1 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme5n1 259:5 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme6n1 259:6 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch nvme7n1 259:3 0 3.5T 0 disk └─md0 9:0 0 21T 0 raid6 └─raid6_nzscratch_vg-raid6_nzscratch_local_lv 253:0 0 21T 0 lvm /opt/ibm/appliance/storage/nzscratch
Example: multipath -ll
[root@e3n1 ~]# multipath -ll 360050768818134588000000001000023 dm-3 IBM ,FlashSystem-9840 size=5.0T features='0' hwhandler='0' wp=rw `-+- policy='service-time 0' prio=1 status=active |- 19:0:0:0 sdb 8:16 active ready running |- 19:0:5:0 sdg 8:96 active ready running |- 19:0:6:0 sdh 8:112 active ready running |- 19:0:7:0 sdi 8:128 active ready running |- 18:0:0:0 sdj 8:144 active ready running |- 18:0:1:0 sdk 8:160 active ready running |- 18:0:2:0 sdl 8:176 active ready running |- 18:0:5:0 sdo 8:224 active ready running |- 21:0:0:0 sdr 65:16 active ready running |- 21:0:1:0 sds 65:32 active ready running |- 21:0:2:0 sdt 65:48 active ready running |- 21:0:7:0 sdy 65:128 active ready running |- 20:0:1:0 sdaa 65:160 active ready running |- 20:0:0:0 sdz 65:144 active ready running |- 20:0:2:0 sdab 65:176 active ready running `- 20:0:7:0 sdag 66:0 active ready running 360050764e9b5d884e80000000100000b dm-1 IBM ,FlashSystem-9840 size=5.0T features='0' hwhandler='0' wp=rw `-+- policy='service-time 0' prio=1 status=active |- 19:0:1:0 sdc 8:32 active ready running |- 19:0:2:0 sdd 8:48 active ready running |- 19:0:3:0 sde 8:64 active ready running |- 19:0:4:0 sdf 8:80 active ready running |- 18:0:4:0 sdn 8:208 active ready running |- 18:0:3:0 sdm 8:192 active ready running |- 18:0:7:0 sdq 65:0 active ready running |- 18:0:6:0 sdp 8:240 active ready running |- 21:0:4:0 sdv 65:80 active ready running |- 21:0:3:0 sdu 65:64 active ready running |- 21:0:5:0 sdw 65:96 active ready running |- 21:0:6:0 sdx 65:112 active ready running |- 20:0:3:0 sdac 65:192 active ready running |- 20:0:4:0 sdad 65:208 active ready running |- 20:0:5:0 sdae 65:224 active ready running `- 20:0:6:0 sdaf 65:240 active ready running
Example: ll /dev/mapper/
[root@e3n1 ~]# ll /dev/mapper/ total 0 lrwxrwxrwx. 1 root root 7 Aug 13 12:08 360050764e9b5d884e80000000100000b -> ../dm-1 lrwxrwxrwx. 1 root root 7 Aug 13 12:07 360050764e9b5d884e80000000100000b1 -> ../dm-2 lrwxrwxrwx. 1 root root 7 Aug 13 12:08 360050768818134588000000001000023 -> ../dm-3 lrwxrwxrwx. 1 root root 7 Aug 13 12:07 360050768818134588000000001000023p1 -> ../dm-4 crw-------. 1 root root 10, 236 Aug 12 17:29 control lrwxrwxrwx. 1 root root 7 Aug 13 09:32 raid6_nzscratch_vg-raid6_nzscratch_local_lv -> ../dm-0
- As root, list the PVs on the system.
  - pvscan --cache
  - pvscan
  - pvs
[root@e3n1 ~]# pvscan --cache [root@e3n1 ~]# pvscan PV /dev/md0 VG raid6_nzscratch_vg lvm2 [20.96 TiB / 0 free] Total: 1 [20.96 TiB] / in use: 1 [20.96 TiB] / in no VG: 0 [0 ] [root@e3n1 ~]# pvs PV VG Fmt Attr PSize PFree /dev/md0 raid6_nzscratch_vg lvm2 a-- 20.96t 0
Example: df -h
[root@e3n1 ~]# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 95G 0 95G 0% /dev tmpfs 95G 24K 95G 1% /dev/shm tmpfs 95G 7.0M 95G 1% /run tmpfs 95G 0 95G 0% /sys/fs/cgroup /dev/sda9 194G 14G 171G 8% / /dev/sda1 969M 121M 782M 14% /boot /dev/sda6 29G 286M 27G 2% /home /dev/sda5 38G 7.8G 29G 22% /var /dev/sda8 4.7G 1.1G 3.5G 23% /var/log/audit /dev/sda7 29G 834M 27G 3% /tmp platform 3.0T 386G 2.6T 13% /opt/ibm/appliance/storage/platform ips 5.9T 56G 5.9T 1% /opt/ibm/appliance/storage/ips /dev/sda2 98G 2.0G 96G 3% /var/lib/docker /dev/mapper/raid6_nzscratch_vg-raid6_nzscratch_local_lv 21T 34M 21T 1% /opt/ibm/appliance/storage/nzscratch overlay 98G 2.0G 96G 3% /var/lib/docker/overlay2/29664a900b8c96c105e1e616f66693ddede981f42e9944f6f2001f6208d71a9e/merged shm 127G 0 127G 0% /var/lib/docker/containers/44aceb70059daf24a59de5f5847d366d9068c9c6403ca578423a63832ad2c467/shm tmpfs 19G 0 19G 0% /run/user/0
Example: ap apps
[root@e3n1 ~]# ap apps Management service is not running on node localhost. Generated: 2021-08-16 09:28:22
- Edit the LVM configuration file on the connector nodes to allow LVM discovery.
Note: This step is needed only if you followed https://www.ibm.com/support/pages/adding-san-storage-puredata-system-analytics on your PureData System for Analytics and ran Step 3, Configuring LVM.
  - As root, run the following command.
    vi /etc/lvm/lvm.conf
  - Search for filter = and comment out anything that is not already commented out.
  - Set filter.
    filter = [ "a|/dev/sda|", "a|/dev/mapper/*|" , "r/block/", "r/disk/", "r/sd.*/", "a/.*/" ]
  - Search for global_filter = and comment out anything that is not already commented out.
  - Set global_filter.
    global_filter = [ "a|.*/|" ]
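After you save the file, you can confirm which filter settings are active with lvmconfig (a quick check, assuming the lvm2 tools on the connector node include the lvmconfig command):
lvmconfig devices/filter devices/global_filter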
- Scan for LVM and activate it.
  - pvscan
  - pvdisplay
[root@e3n1 ~]# pvscan PV /dev/md0 VG raid6_nzscratch_vg lvm2 [20.96 TiB / 0 free] PV /dev/mapper/360050764e9b5d884e80000000100000b1 VG vg0 lvm2 [5.00 TiB / 0 free] PV /dev/mapper/360050768818134588000000001000023p1 lvm2 [5.00 TiB] Total: 3 [30.96 TiB] / in use: 2 [25.96 TiB] / in no VG: 1 [5.00 TiB] [root@e3n1 ~]# pvdisplay --- Physical volume --- PV Name /dev/md0 VG Name raid6_nzscratch_vg PV Size 20.96 TiB / not usable 3.88 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 5494051 Free PE 0 Allocated PE 5494051 PV UUID rrsIP8-45gD-rvR6-XGoL-5gQK-cCDo-hegTnk --- Physical volume --- PV Name /dev/mapper/360050764e9b5d884e80000000100000b1 VG Name vg0 PV Size 5.00 TiB / not usable 511.95 MiB Allocatable yes (but full) PE Size 512.00 MiB Total PE 10239 Free PE 0 Allocated PE 10239 PV UUID PScH2q-hE7A-TShG-LJTd-J46c-T6yn-BhoBov "/dev/mapper/360050768818134588000000001000023p1" is a new physical volume of "5.00 TiB" --- NEW Physical volume --- PV Name /dev/mapper/360050768818134588000000001000023p1 VG Name PV Size 5.00 TiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID mcJkJ2-Xpff-9qqH-4Jb0-s8Gj-oboj-n65vdV
Example: vgscan
[root@e3n1 ~]# vgscan Reading volume groups from cache. Found volume group "raid6_nzscratch_vg" using metadata type lvm2 Found volume group "vg0" using metadata type lvm2
Example: lvscan
[root@e3n1 ~]# lvscan ACTIVE '/dev/raid6_nzscratch_vg/raid6_nzscratch_local_lv' [20.96 TiB] inherit inactive '/dev/vg0/lvol0' [5.00 TiB] inherit
Example: lvs
[root@e3n1 ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert raid6_nzscratch_local_lv raid6_nzscratch_vg -wi-ao---- 20.96t lvol0 vg0 -wi------- 5.00t
Example: vgs
[root@e3n1 ~]# vgs VG #PV #LV #SN Attr VSize VFree raid6_nzscratch_vg 1 1 0 wz--n- 20.96t 0 vg0 1 1 0 wz--n- 5.00t 0
Example: vgchange -ay
[root@e3n1 ~]# vgchange -ay 1 logical volume(s) in volume group "raid6_nzscratch_vg" now active 1 logical volume(s) in volume group "vg0" now active [root@e3n1 ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert raid6_nzscratch_local_lv raid6_nzscratch_vg -wi-ao---- 20.96t lvol0 vg0 -wi-a----- 5.00t
Example: lvdisplay
[root@e3n1 ~]# lvdisplay --- Logical volume --- LV Path /dev/raid6_nzscratch_vg/raid6_nzscratch_local_lv LV Name raid6_nzscratch_local_lv VG Name raid6_nzscratch_vg LV UUID LCSTTd-xpYP-e26n-SgT5-VJdW-g2bH-9X5rRg LV Write Access read/write LV Creation host, time e3n1, 2021-08-13 09:32:45 -0400 LV Status available # open 1 LV Size 20.96 TiB Current LE 5494051 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 8192 Block device 253:0 --- Logical volume --- LV Path /dev/vg0/lvol0 LV Name lvol0 VG Name vg0 LV UUID H2Dyjk-L8cz-XY6I-Bqpe-89ty-4Scs-fbCHW8 LV Write Access read/write LV Creation host, time Q100M-6-H1, 2021-08-15 23:09:36 -0400 LV Status available # open 0 LV Size 5.00 TiB Current LE 10239 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 8192 Block device 253:5
- Verify that your LVM device from your remote storage is present as /dev/mapper/vg0-lvol0.
Example: ll /dev/mapper/
[root@e3n1 ~]# ll /dev/mapper/ total 0 lrwxrwxrwx. 1 root root 7 Aug 13 12:08 360050764e9b5d884e80000000100000b -> ../dm-1 lrwxrwxrwx. 1 root root 7 Aug 13 12:07 360050764e9b5d884e80000000100000b1 -> ../dm-2 lrwxrwxrwx. 1 root root 7 Aug 13 12:08 360050768818134588000000001000023 -> ../dm-3 lrwxrwxrwx. 1 root root 7 Aug 13 12:07 360050768818134588000000001000023p1 -> ../dm-4 crw-------. 1 root root 10, 236 Aug 12 17:29 control lrwxrwxrwx. 1 root root 7 Aug 13 09:32 raid6_nzscratch_vg-raid6_nzscratch_local_lv -> ../dm-0 lrwxrwxrwx. 1 root root 7 Aug 16 09:32 vg0-lvol0 -> ../dm-5
- As root, mount the discovered LVM device to this exact mount point: /opt/ibm/appliance/storage/external_mnt/SAN/. Do not use any other mount points.
Only one connector node can mount the ext4 or xfs file system at a time. Make sure that this mount goes to your first connector node, where "first" means first in alphanumeric order of the node names.
  - mount -t ext4 /dev/mapper/vg0-lvol0 /opt/ibm/appliance/storage/external_mnt/SAN/
Example: df -h
[root@e3n1 ~]# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 95G 0 95G 0% /dev tmpfs 95G 24K 95G 1% /dev/shm tmpfs 95G 7.0M 95G 1% /run tmpfs 95G 0 95G 0% /sys/fs/cgroup /dev/sda9 194G 14G 171G 8% / /dev/sda1 969M 121M 782M 14% /boot /dev/sda6 29G 286M 27G 2% /home /dev/sda5 38G 7.8G 29G 22% /var /dev/sda8 4.7G 1.1G 3.5G 24% /var/log/audit /dev/sda7 29G 834M 27G 3% /tmp platform 3.0T 386G 2.6T 13% /opt/ibm/appliance/storage/platform ips 5.9T 56G 5.9T 1% /opt/ibm/appliance/storage/ips /dev/sda2 98G 2.0G 96G 3% /var/lib/docker /dev/mapper/raid6_nzscratch_vg-raid6_nzscratch_local_lv 21T 34M 21T 1% /opt/ibm/appliance/storage/nzscratch overlay 98G 2.0G 96G 3% /var/lib/docker/overlay2/29664a900b8c96c105e1e616f66693ddede981f42e9944f6f2001f6208d71a9e/merged shm 127G 0 127G 0% /var/lib/docker/containers/44aceb70059daf24a59de5f5847d366d9068c9c6403ca578423a63832ad2c467/shm tmpfs 19G 0 19G 0% /run/user/0 /dev/mapper/vg0-lvol0 5.0T 4.2T 483G 90% /opt/ibm/appliance/storage/external_mnt/SAN
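Note: If the mount command fails because the mount point directory does not exist, create it first and then mount again. This assumes the directory is not already provided by the platform:
mkdir -p /opt/ibm/appliance/storage/external_mnt/SAN/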
- Edit /etc/fstab to make sure that this mount comes up on restart.
Example: vi /etc/fstab
[root@e3n1 ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Wed Aug 11 22:31:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=4f743dad-f155-4eed-adcb-2a0d7c3d7e6d /              ext4 defaults 1 1
UUID=16832cce-ff3e-42be-b88a-eb509a638f4b /boot          ext4 defaults 1 2
UUID=5cf1745c-38c6-4d55-a78f-9e7e1e02bb75 /home          ext4 defaults 1 2
UUID=8e68b58f-617b-47c4-aa09-ec957bbcb285 /tmp           ext4 defaults 1 2
UUID=24be1b62-92b5-4742-8806-1a9b602eef29 /var           ext4 defaults 1 2
UUID=a5ad00fa-a6c4-4eef-bd97-7486cb8f34f6 /var/log/audit ext4 defaults 1 2
UUID=c568aaf5-c3af-459c-bc2e-1b8b5cb546c7 none           ext4 defaults 1 2
UUID=cef3974d-1dba-4bf1-a3ce-755ca380d057 swap           swap defaults 0 0
ips /opt/ibm/appliance/storage/ips gpfs rw,nomtime,relatime,dev=ips,noauto 0 0
platform /opt/ibm/appliance/storage/platform gpfs rw,nomtime,relatime,dev=platform,noauto 0 0
/dev/sda2 /var/lib/docker xfs nofail 0 0
/dev/raid6_nzscratch_vg/raid6_nzscratch_local_lv /opt/ibm/appliance/storage/nzscratch xfs nofail 0 0
/dev/mapper/vg0-lvol0 /opt/ibm/appliance/storage/external_mnt/SAN ext4 nofail 0 0
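The last entry is the line added for the SAN mount; the entries above it are the pre-existing system mounts.
/dev/mapper/vg0-lvol0 /opt/ibm/appliance/storage/external_mnt/SAN ext4 nofail 0 0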
- From e1n1, restart the container so that it picks up the mount that you just made.
Example: ap apps
[root@e1n1 ~]# ap apps +----------+------------------+ | Name | Management State | +----------+------------------+ | CallHome | ENABLED | | ICP4D | ENABLED | | VDB | ENABLED | +----------+------------------+
Example: ap apps disable vdb
[root@e1n1 ~]# ap apps disable vdb VDB will be disabled. Are you sure you want to proceed? [y/N]: y State change request sent successfully [root@e1n1 ~]# ap apps +----------+------------------+ | Name | Management State | +----------+------------------+ | CallHome | ENABLED | | ICP4D | ENABLED | | VDB | ENABLED | +----------+------------------+ Generated: 2021-08-16 09:35:28
Example: ap apps
[root@e1n1 ~]# ap apps +----------+------------------+ | Name | Management State | +----------+------------------+ | CallHome | ENABLED | | ICP4D | ENABLED | | VDB | DISABLING | +----------+------------------+ Generated: 2021-08-16 09:35:31
Example: ap apps
[root@e1n1 ~]# ap apps +----------+------------------+ | Name | Management State | +----------+------------------+ | CallHome | ENABLED | | ICP4D | ENABLED | | VDB | DISABLED | +----------+------------------+ Generated: 2021-08-16 09:36:25
Example: ap apps enable vdb
[root@e1n1 ~]# ap apps enable vdb VDB will be enabled. Are you sure you want to proceed? [y/N]: y State change request sent successfully [root@e1n1 ~]# ap apps +----------+------------------+ | Name | Management State | +----------+------------------+ | CallHome | ENABLED | | ICP4D | ENABLED | | VDB | ENABLING | +----------+------------------+ Generated: 2021-08-16 09:36:34
Example: ap apps
[root@e1n1 ~]# ap apps +----------+------------------+ | Name | Management State | +----------+------------------+ | CallHome | ENABLED | | ICP4D | ENABLED | | VDB | ENABLED | +----------+------------------+ Generated: 2021-08-16 09:38:23
- SSH to the first connector node, exec into the container, and check whether the storage is mounted and shows usage in /external_mount/SAN. Your backup is in /external_mount/SAN.
Example: ssh e3n1
[root@e1n1 ~]# ssh e3n1 Last login: Mon Aug 16 09:34:52 2021 from e1n1 [root@e3n1 ~]# docker exec -it ipshost1 bash [root@e3n1-npshost /]# df -h Filesystem Size Used Avail Use% Mounted on overlay 98G 2.0G 96G 3% / tmpfs 95G 0 95G 0% /dev ips 5.9T 56G 5.9T 1% /nz /dev/mapper/raid6_nzscratch_vg-raid6_nzscratch_local_lv 21T 35M 21T 1% /nzscratch /dev/sda2 98G 2.0G 96G 3% /etc/hosts /dev/sda9 194G 14G 171G 8% /etc/resolv.conf.upstream tmpfs 95G 7.0M 95G 1% /host/run tmpfs 19G 0 19G 0% /host/run/user/0 /dev/mapper/vg0-lvol0 5.0T 4.2T 483G 90% /external_mount/SAN shm 127G 42M 127G 1% /dev/shm /dev/sda5 38G 7.8G 29G 22% /var/lib/sedsupport tmpfs 64M 13M 52M 19% /run tmpfs 64M 0 64M 0% /run/lock tmpfs 64M 0 64M 0% /var/log/journal tmpfs 71G 4.0K 71G 1% /tmp [root@e3n1-npshost /]# ll /external_mount/SAN/ total 20 drwxrwxr-x. 3 nz nz 4096 Aug 16 01:54 DB_BACKUP drwx------. 2 nz root 16384 Aug 15 23:17 lost+found
- Optional: Start the system and begin the restore.
  - nzstart
  - nzrestore
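The exact restore invocation depends on how the backup was taken on PureData System for Analytics; see the Netezza Performance Server backup and restore documentation for the options that apply to your backup. As a rough sketch only, assuming a hypothetical database named MYDB was written with nzbackup into the DB_BACKUP directory shown in the earlier listing:
nzstart
# MYDB is a placeholder; point -dir at the directory that contains your backup set.
nzrestore -db MYDB -dir /external_mount/SAN/DB_BACKUP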