Failback with a migrated file on the old primary and no change from the acting primary
This use case describes failing back to the old primary when the file was updated and migrated on the primary after the latest RPO snapshot and is not changed on the acting primary after failover.
Fileset: primary
State of the file in the latest RPO at the primary: resident
Action on the file at the primary after the latest RPO: update the data and migrate the file
Sequence of events: primary disaster, failover to the secondary, the old primary comes back up, failback start is run on the old primary
State of the file in the old primary after failback start: resident with blocks from the last RPO
Current state of the file on the acting primary: resident with no mtime change
State of the file in the old primary after applyUpdates: resident with no mtime change; the old blocks remain
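The overall flow exercised in the transcript below can be summarized in the following sketch. The node names, device names (fs2 at the primary, fs1 at the secondary), fileset names (drHSM-DRP, drHSM-DRS), and file path are the ones used in this example and will differ in other environments.
# At the primary: create the file, take an RPO snapshot, update the file, and migrate it with HSM
dd if=/dev/urandom of=/gpfs/fs2/drHSM-DRP/file1 bs=1M count=1
mmpsnap fs2 create -j drHSM-DRP --rpo
dd if=/dev/urandom of=/gpfs/fs2/drHSM-DRP/file1 bs=1M seek=1 count=1
dsmmigrate /gpfs/fs2/drHSM-DRP/file1
# After the primary disaster: promote the secondary to acting primary
mmafmctl fs1 failoverToSecondary -j drHSM-DRS --restore
# After the old primary comes back: start failback, then apply the updates from the acting primary
mmafmctl fs2 failbackToPrimary -j drHSM-DRP --start
mmafmctl fs2 applyUpdates -j drHSM-DRP
# At each step, the HSM state of the file is checked with dsmls
dsmls /gpfs/fs2/drHSM-DRP/file1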
# Create a new file at the primary and flush it to the secondary
0 ;) hs22n19; Thu Sep 24 07:28:05; tests#
dd if=/dev/urandom of=/gpfs/fs2/drHSM-DRP/file1 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0819051 s, 12.8 MB/s
0 ;) hs22n19; Thu Sep 24 07:28:15; tests#
mmafmctl fs2 getstate -j drHSM-DRP
Fileset Name    Fileset Target                      Cache State    Gateway Node    Queue Length    Queue numExec
------------    --------------                      -----------    ------------    ------------    -------------
drHSM-DRP       nfs://hs21n30/gpfs/fs1/drHSM-DRS    Active         hs22n19         0               83
0 ;) hs22n19; Thu Sep 24 07:29:44; tests#
# Create a new RPO snapshot for the primary fileset
hs22n19; Thu Sep 24 07:29:46; tests#
mmpsnap fs2 create -j drHSM-DRP --rpo
Flushing dirty data for snapshot drHSM-DRP::psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00...
Quiescing all file system operations.
Snapshot drHSM-DRP::psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00 created with id 9017.
mmpsnap: The peer snapshot psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00 is created successfully.
0 ;) hs22n19; Thu Sep 24 07:30:02; tests#
# Update the file at the primary and flush it to the secondary
hs22n19; Thu Sep 24 07:30:03; tests#
dd if=/dev/urandom of=/gpfs/fs2/drHSM-DRP/file1 bs=1M seek=1 count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0970652 s, 10.8 MB/s
0 ;) hs22n19; Thu Sep 24 07:31:34; tests#
mmafmctl fs2 getstate -j drHSM-DRP
Fileset Name    Fileset Target                      Cache State    Gateway Node    Queue Length    Queue numExec
------------    --------------                      -----------    ------------    ------------    -------------
drHSM-DRP       nfs://hs21n30/gpfs/fs1/drHSM-DRS    Active         hs22n19         0               102
0 ;) hs22n19; Thu Sep 24 07:31:35; tests#
# Migrate the file on the primary live file system
hs22n21; Thu Sep 24 07:31:48; scripts#
dsmmigrate /gpfs/fs2/drHSM-DRP/file1
ANS0102W Unable to open the message repository /opt/tivoli/tsm/client/ba/bin/EN_IN/dsmclientV3.cat. The American English repository will be used instead.
Tivoli Storage Manager
Command Line Space Management Client Interface
Client Version 7, Release 1, Level 3.0
Client date/time: 09/24/2015 07:32:04 EDT
(c) Copyright by IBM Corporation and other(s) 1990, 2015. All Rights Reserved.
0 ;) hs22n21; Thu Sep 24 07:32:12; scripts#
dsmls /gpfs/fs2/drHSM-DRP/file1 | grep file1
2097152 0 0 m file1
# Run failoverToSecondary at the secondary
hs21n30; Thu Sep 24 07:32:45; fs1#
mmafmctl fs1 failoverToSecondary -j drHSM-DRS --restore
mmafmctl: failoverToSecondary restoring from psnap psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00
[2015-09-24 07:32:48] Restoring fileset "drHSM-DRS" from snapshot "psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00" of filesystem "/dev/fs1"
[2015-09-24 07:32:55] Scanning inodes, phase 1 ...
[2015-09-24 07:32:57] 5777216 inodes have been scanned, 50% of total.
[2015-09-24 07:33:01] 11554432 inodes have been scanned, 100% of total.
[2015-09-24 07:33:01] Constructing operation list, phase 2 ...
[2015-09-24 07:33:01] 0 operations have been added to list.
[2015-09-24 07:33:01] 18 operations have been added to list.
[2015-09-24 07:33:01] Deleting the newly created files, phase 3 ...
[2015-09-24 07:33:01] Deleting the newly created hard links, phase 4 ...
[2015-09-24 07:33:02] Splitting clone files, phase 5 ...
[2015-09-24 07:33:03] Deleting the newly created clone files, phase 6 ...
[2015-09-24 07:33:03] Moving files, phase 7 ...
[2015-09-24 07:33:04] Reconstructing directory tree, phase 8 ...
[2015-09-24 07:33:05] Moving files back to their correct positions, phase 9 ...
[2015-09-24 07:33:05] Re-creating the deleted files, phase 10 ...
[2015-09-24 07:33:06] Re-creating the deleted clone parent files, phase 11 ...
[2015-09-24 07:33:07] Re-creating the deleted clone child files, phase 12 ...
[2015-09-24 07:33:08] Re-creating the deleted hard links, phase 13 ...
[2015-09-24 07:33:08] Restoring the deltas of changed files, phase 14 ...
[2015-09-24 07:33:09] Restoring the attributes of files, phase 15 ...
[2015-09-24 07:33:10] Restore completed successfully.
[2015-09-24 07:33:10] Clean up.
Primary Id (afmPrimaryID) 12646758592946767367-C0A874195583B621-19
Fileset drHSM-DRS changed.
Promoted fileset drHSM-DRS to Primary
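The transcript does not show a check on the acting primary, but the expected state listed above (resident with no mtime change) could be verified there with commands along these lines. This is a sketch only; the path /gpfs/fs1/drHSM-DRS/file1 is inferred from the fileset target shown in the getstate output, and dsmls applies only if the acting primary file system is also HSM-managed.
# On the acting primary (hs21n30): the file restored from the RPO snapshot should be
# resident and should retain the mtime it had at the last RPO
ls -lashi /gpfs/fs1/drHSM-DRS/file1
dsmls /gpfs/fs1/drHSM-DRS/file1 | grep file1    # expected state flag: r (resident), if HSM-managed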
# Start failbackToPrimary at the old primary
hs22n19; Thu Sep 24 07:31:35; tests#
mmafmctl fs2 failbackToPrimary -j drHSM-DRP --start
Fileset drHSM-DRP changed.
mmafmctl: failbackToPrimary restoring from psnap psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00
[2015-09-24 07:33:58] Restoring fileset "drHSM-DRP" from snapshot "psnap-rpo-C0A874465450C18C-46-15-09-24-07-30-00" of filesystem "/dev/fs2"
[2015-09-24 07:34:00] Scanning inodes, phase 1 ...
[2015-09-24 07:34:03] 9447232 inodes have been scanned, 50% of total.
[2015-09-24 07:34:06] 18894464 inodes have been scanned, 100% of total.
[2015-09-24 07:34:06] Constructing operation list, phase 2 ...
[2015-09-24 07:34:06] 0 operations have been added to list.
[2015-09-24 07:34:07] 18 operations have been added to list.
[2015-09-24 07:34:07] Deleting the newly created files, phase 3 ...
[2015-09-24 07:34:08] Deleting the newly created hard links, phase 4 ...
[2015-09-24 07:34:08] Splitting clone files, phase 5 ...
[2015-09-24 07:34:09] Deleting the newly created clone files, phase 6 ...
[2015-09-24 07:34:09] Moving files, phase 7 ...
[2015-09-24 07:34:10] Reconstructing directory tree, phase 8 ...
[2015-09-24 07:34:11] Moving files back to their correct positions, phase 9 ...
[2015-09-24 07:34:12] Re-creating the deleted files, phase 10 ...
[2015-09-24 07:34:20] Re-creating the deleted clone parent files, phase 11 ...
[2015-09-24 07:34:20] Re-creating the deleted clone child files, phase 12 ...
[2015-09-24 07:34:21] Re-creating the deleted hard links, phase 13 ...
[2015-09-24 07:34:22] Restoring the deltas of changed files, phase 14 ...
[2015-09-24 07:34:23] Restoring the attributes of files, phase 15 ...
[2015-09-24 07:34:24] Restore completed successfully.
[2015-09-24 07:34:24] Clean up.
0 ;) hs22n19; Thu Sep 24 07:34:25; tests#
ls -lashi /gpfs/fs2/drHSM-DRP/file1
9439104 1.0M -rw-r--r-- 1 root root 1.0M Sep 24 07:28 /gpfs/fs2/drHSM-DRP/file1
# State of the file in the old primary after failback start
hs22n21; Thu Sep 24 07:32:26; scripts#
dsmls /gpfs/fs2/drHSM-DRP/file1 | grep file1
1048576 1048576 1024 r file1
# Run applyUpdates on the old primary
hs22n19; Thu Sep 24 07:36:30; tests#
mmafmctl fs2 applyUpdates -j drHSM-DRP
[2015-09-24 07:37:04] Getting the list of updates from the acting Primary...
[2015-09-24 07:37:19] Applying the 19 updates...
[2015-09-24 07:37:20] 19 updates have been applied, 100% of total.
mmafmctl: Creating the failback psnap locally. failback-psnap-rpo-C0A874465450C18C-46-15-09-24-07-37-03
Flushing dirty data for snapshot drHSM-DRP::failback-psnap-rpo-C0A874465450C18C-46-15-09-24-07-37-03...
Quiescing all file system operations.
Snapshot drHSM-DRP::failback-psnap-rpo-C0A874465450C18C-46-15-09-24-07-37-03 created with id 9018.
0 ;) hs22n19; Thu Sep 24 07:37:21; tests#
ls -lashi /gpfs/fs2/drHSM-DRP/file1
9439104 1.0M -rw-r--r-- 1 root root 1.0M Sep 24 07:28 /gpfs/fs2/drHSM-DRP/file1
# State of the file in the old primary after applyUpdates
hs22n21; Thu Sep 24 07:32:26; scripts#
dsmls /gpfs/fs2/drHSM-DRP/file1 | grep file1
1048576 1048576 1024 r file1
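To confirm the end state described at the top of this use case (resident, no mtime change, old blocks still allocated), one further check on the old primary could look like the following sketch. stat and du are standard tools and are not part of the transcript above.
# On the old primary (hs22n19): the mtime should still be the pre-disaster value
# (Sep 24 07:28 in this run) and the data blocks should still be allocated on disk
stat -c 'mtime: %y  allocated 512-byte blocks: %b' /gpfs/fs2/drHSM-DRP/file1
du -h /gpfs/fs2/drHSM-DRP/file1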