I have a large collection of older RAID arrays which support a feature the vendor calls AutoMAID, i.e., disks spin down when not in use. We have many files which are infrequently accessed, and I'd like to place them in storage pools built from the disks that have this feature enabled. It will look like this:
- a filesystem named FS is created using a large-ish number of SSD LUNs for the system storage pool, marked metadataOnly
- each raid array's LUNs are added to its own storage pool as dataOnly
- some of these arrays will be set to spin down when not used
- a policy rule will be written to migrate unused data to these spin-down pools
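For concreteness, the per-array dataOnly pools would be defined in the NSD stanzas, roughly like this (device, NSD, and server names here are invented for illustration):

```
# Hypothetical stanza for one LUN of the spin-down array's pool
%nsd: device=/dev/mapper/raid3_lun0
      nsd=raid3_nsd0
      servers=nsdserver1,nsdserver2
      usage=dataOnly
      pool=raid3_automaid
```

with the stanza file fed to mmcrnsd and then mmadddisk (or mmcrfs for the initial disks) to put each array's NSDs into its own pool.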
I think this is all pretty standard. What I'm not clear about is how the filesystem changes when a migration occurs. For instance, let's say the filesystem tree looks like:
each of raid1, raid2 and raid3_automaid is a fileset linked (mmlinkfileset) into the tree, backed by a storage pool whose NSDs all come from the same raid array.
The reason I want each fileset to correspond to a separate storage pool is to limit the impact on the filesystem as a whole if any one storage array is lost. This data will not be replicated or backed up. The LUNs will be RAID 6, and I trust that the controllers won't corrupt the data and that we won't lose more than 2 drives per array at once. Yes, that's a lot of trust, but by definition these data, though inconvenient to replace, could be reacquired.
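The fileset-to-pool pairing described above isn't automatic in GPFS; a fileset is just a subtree, and a placement rule is what ties new files in it to a pool. A sketch of the glue, with names taken from this example:

```
# Create and link one fileset per array
mmcrfileset fs raid3_automaid
mmlinkfileset fs raid3_automaid -J /gpfs/fs/raid3_automaid

# Placement rule so new files in the fileset land in its pool
RULE 'place_raid3' SET POOL 'raid3_automaid' FOR FILESET ('raid3_automaid')
```

The same pattern would be repeated for raid1 and raid2 with their own pools.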
So, I run my policy rule, which traverses raid1 and raid2 looking for files that haven't been accessed in more than 3 months; when found, they are migrated to raid3_automaid. Let's say before the run I had a file:
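The migration rule I have in mind would look something like this in the policy language (a sketch; the 90-day threshold and pool names are from this example):

```
/* Move files untouched for > 3 months into the spin-down pool */
RULE 'cold_to_automaid' MIGRATE
  FROM POOL 'raid1'
  TO POOL 'raid3_automaid'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
```

run periodically via mmapplypolicy against the filesystem, with a second copy of the rule for raid2.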
which gets migrated. Will an 'ls /gpfs/fs/raid1/not_used_file.dat' still find the file? Or will it disappear from there and only be found at /gpfs/fs/raid3_automaid?
The answer I want to hear is that its name (directory entry/inode) will stay in raid1 but the actual data blocks will move to raid3*. (Perhaps it would be a good idea not to link the raid3* fileset at all, so that users never traverse into it?)
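If it helps to check this after a test migration: my understanding is that the path stays put and only the file's pool assignment changes, which mmlsattr -L should show (output format here is from memory and may vary by release):

```
$ ls -l /gpfs/fs/raid1/not_used_file.dat        # name should still resolve here
$ mmlsattr -L /gpfs/fs/raid1/not_used_file.dat | grep -i 'storage pool'
storage pool name:    raid3_automaid
```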