We're currently investigating the replication of a linux GPFS 3.4 filesystem, which is to be space-managed (Tivoli Storage Manager 6.3 for Space Management), to a remote site (another GPFS filesystem in another cluster, possibly also space-managed independently) using rsync.
I have some questions to which I haven't found conclusive answers:
1) For rsync, will a stub file left behind by space management have the exact same attributes as the original file?
That is, will rsync's default algorithm (the man page says it compares file size and modification timestamp attributes) avoid re-syncing a file (and thus avoid causing a recall) when comparing a stub file at site A against the file or stub file at site B, provided the file was previously rsynced and didn't change at site A in the meantime?
(I know that a patched rsync exists that will copy GPFS ACLs in addition).
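For illustration, rsync's default quick-check can be mimicked on ordinary files; a minimal sketch in plain Python (this stands in for the stub-vs-file comparison and does not touch GPFS or DMAPI; the assumption being tested is that a stub reports the original file's size and mtime):

```python
import os
import shutil
import tempfile

def quick_check_differs(src, dst):
    """Mimic rsync's default quick-check: flag a file for transfer only
    when size or modification time differ (no file data is read)."""
    s, d = os.stat(src), os.stat(dst)
    return s.st_size != d.st_size or int(s.st_mtime) != int(d.st_mtime)

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a")
dst = os.path.join(tmp, "b")
with open(src, "w") as f:
    f.write("data")

shutil.copy2(src, dst)                 # copy2 preserves size and mtime
print(quick_check_differs(src, dst))   # False: identical metadata, no transfer
os.utime(dst, (0, 0))                  # perturb the mtime
print(quick_check_differs(src, dst))   # True: would trigger a transfer (and a recall)
```

So as long as the stub really does expose the original size and mtime through stat(), the default quick-check should leave it alone; this is exactly the property I'd like confirmed.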
How does the HSM system (using DMAPI) actually differentiate between stub and non-stub files? Are there xattrs on the file indicating it is a stub file (with the risk of triggering too many rsyncs)?
Or is every file opened matched against the TSM database?
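For completeness, here is how I'd inspect a file's HSM state from the shell (a sketch only: the path /gpfs/fs0/bigfile is hypothetical, dsmls comes with the TSM HSM client, mmlsattr with GPFS, and the exact output columns vary by version):

```
$ dsmls /gpfs/fs0/bigfile        # TSM HSM client: lists the file's state
                                 # (resident, premigrated, or migrated)
$ mmlsattr -L /gpfs/fs0/bigfile  # GPFS view: flags, storage pool, fileset
```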
2) I came across the following statement on the page
"IBM Tivoli Storage Manager for Space Management V6.3 HSM Client known problems"
which, in the rightmost panel, mentions Operating system(s): AIX, Linux/x86
"HSM Linux problems and limitations"
"Filesets are not supported"
A recent thread on HSM in this forum mentioned a GPFS filesystem using filesets, but this limitation was not raised there.
As I am on Linux x86_64, I'm left with the question:
Can I have a GPFS 3.4 or 3.5 filesystem with RedHat 6 Linux x86_64 NSD servers, using multiple filesets, and have it space-managed with TSM for Space Management 6.3, or are space management and GPFS filesets mutually exclusive on this platform? The intent would be to have file migration triggered by GPFS ILM, i.e. a "hsminstall=scoutfree" HSM setup.
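For context on the scout-free approach: migration would then be driven by a GPFS policy rather than the HSM scout daemon. A minimal policy sketch (pool names, thresholds, and the exec-script path are illustrative assumptions; GPFS ships a sample HSM interface script under /usr/lpp/mmfs/samples/ilm/):

```
/* Define TSM HSM as an external pool, serviced by an interface script */
RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/mmpolicyExec-hsm.sample' OPTS '-v'

/* Migrate the least-recently-accessed files out of the system pool once
   it passes 90% full, down to 80%; thresholds are illustrative only */
RULE 'toHSM' MIGRATE FROM POOL 'system'
     THRESHOLD(90,80)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
     TO POOL 'hsm'
```

Such a policy would then be installed with mmchpolicy and/or run via mmapplypolicy.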
3) Our client currently uses GPFS snapshotting. What is the behaviour of HSM stub files in a GPFS snapshot?
I came across the "SONAS Architecture, Planning and Implementation Basics" redbook (I have the December 2010 version) and it mentions
6.3.1 Snapshot considerations
As snapshots are not copies of the entire file system, they must not be used as protection
against media failure.
A snapshot file is independent from the original file as it only contains the user data and user
attributes of the original file. For Data Management API (DMAPI) managed file systems the
snapshot will not be DMAPI managed, regardless of the DMAPI attributes of the original file
because the DMAPI attributes are not inherited by the snapshot. For example, consider a
base file that is a stub file because the file contents have been migrated by Tivoli Storage
Manager HSM to offline media, the snapshot copy of the file will not be managed by DMAPI
as it has not inherited any DMAPI attributes and consequently referencing a snapshot copy of
a Tivoli Storage Manager HSM managed file will not cause Tivoli Storage Manager to initiate
a file recall.
Having been told that there is some relation between SONAS and GPFS: what does all of the above mean for the following scenario?
a) I put a file on a space-managed GPFS filesystem
b) the file gets migrated to tape and is replaced by a stub file
c) I create a snapshot of the GPFS filesystem
d) I try to access said file in the GPFS snapshot directory
-> will this return an error, trigger a recall, or actually make it possible to access the original file's contents from the snapshot?
(it doesn't sound to me as if you'll be able to access the original file from the snapshot; it's also not clear to me whether the file/stub in the snapshot is seen as a regular file or a stub, or whether it will appear there at all)
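The scenario above, expressed as commands (a sketch only; the filesystem name fs0, mount point /gpfs/fs0, and the default .snapshots directory are assumptions):

```
$ dd if=/dev/zero of=/gpfs/fs0/testfile bs=1M count=100  # a) create a file
$ dsmmigrate /gpfs/fs0/testfile                          # b) TSM HSM selective migration; leaves a stub
$ mmcrsnapshot fs0 snap1                                 # c) snapshot the filesystem
$ cat /gpfs/fs0/.snapshots/snap1/testfile                # d) what happens here?
```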