So we're upgrading an existing DS5300, and would like to copy the original data over to two DS3500s, then change the disk striping on the DS5300 and add the trays, then copy it back. What do you all think is the fastest way to do this? I have used cpio in the past, and it's always worked like a champ, but I'm looking for something that can do the copy at the GPFS (mmfs) level rather than through the OS kernel... After the migration back to the DS5300, the two DS3500s will be used for backups on the clusters...
2 gpfs filesystems, now...3 after the migration.
/gpfs2 (this is the one getting migrated to the ds3500's)
/gpfs3 (contains both ds3500's)
GPFS ver 3.3
OS is RHEL5U6
Pinned topic: fastest way to copy 2 gpfs filesystems
Updated on 2013-01-28T23:33:00Z by SystemAdmin
Re: fastest way to copy 2 gpfs filesystems (2013-01-28T19:55:35Z)
I don't have a pat answer at this time, but...
With GPFS, you should be thinking PARALLEL. In fact, even if it were not GPFS, and even on a single node, parallelism could help,
because you want to drive all the disks, disk arms, I/O channels, etc. at 100% -- assuming you have a lot of data to move and you are in a hurry ;-)
As a simple example, suppose your file system's directory tree divides the files nicely near the top into roughly equally sized sets.
Start multiple processes, each working on a different section of the tree...
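A minimal sketch of that idea in shell (the function name and the assumption that the tree splits evenly at the top level are mine, not from the post): one backgrounded `cp` per top-level directory, then wait for all of them.

```shell
# parallel_copy SRC DEST -- copy each top-level directory of SRC in its own
# background process, so multiple disks/LUNs are driven at once.
# Assumes SRC has at least one subdirectory; tune the fan-out to your hardware.
parallel_copy() {
  local src=$1 dest=$2 d
  mkdir -p "$dest"
  for d in "$src"/*/; do
    cp -a "$d" "$dest/$(basename "$d")" &   # one worker per top-level dir
  done
  wait                                      # block until every copy finishes
}
```

In practice you'd cap the number of simultaneous workers at something like the number of NSD servers or LUNs, rather than one per directory.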
Re: fastest way to copy 2 gpfs filesystems (2013-01-28T20:08:23Z)
To compound on what Marc said, I'm doing something similar in a few weeks (moving 28TB of data to a new filesystem, then rebuilding an old one), and I actually have an rsync utility I wrote that runs across multiple nodes. It basically takes a list of sources and destinations, then uses python + MPI + rsync to distribute tasks and move data in parallel, without the rsyncs all working on the same directory at once, etc.
I was able to move 28TB (many directories and files of different sizes) in ~6 hours using 8 NSD servers in my limited amount of testing, averaging 1.3GB/s aggregate during the move. I've only tested this within the same filesystem, so there was a ton of disk seeking since the rsync reads and writes were both hitting the same LUNs. I'd expect similar results over the network, since during the actual move the filesystems will be cross-mounted over QDR IB.
If you are interested, I can post it here...
Re: fastest way to copy 2 gpfs filesystems (2013-01-28T21:05:10Z)
- SystemAdmin 110000D4XK
You'll need a script that copies directories (hint: `mkdir -p`) and one that copies files (hint: `cp -p`)...
And a policy something like this:
rule 'd1' external list 'd' exec '/ghome/makaplan/policies/a-policy-style-script-to-process-directories.sh' opts 'TARGET-PATH'
rule 'f1' external list 'f' exec '/ghome/makaplan/policies/a-policy-style-script-to-process-files-symlinks-etc.sh' opts 'TARGET-PATH'
rule 'd2' list 'd' directories_plus weight(length(name)-length(path_name)) where mode like 'd%' /* all directories */
rule 'f2' list 'f' directories_plus weight(length(name)-length(path_name)) where mode not like 'd%' /* everything but the directories */
You might use the -I prepare option to prepare the file lists without jumping immediately to execution.
Then process with the -r option to make sure that you replicate the directory skeleton completely
before populating the skeleton with files.
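A hedged sketch of what the directory-pass helper might look like (the function name and the exact filelist line format are my assumptions; check the mmapplypolicy external-list documentation for your release -- mmapplypolicy hands the script a file whose lines end in " -- <pathname>"):

```shell
# make_dirs <filelist> <target> -- replay each listed directory path under
# <target>, recreating the directory skeleton before any files are copied.
# A sketch only; a real external-list script also handles the TEST/LIST
# operation argument that mmapplypolicy passes in.
make_dirs() {
  local filelist=$1 target=$2 line path
  while IFS= read -r line; do
    path=${line#* -- }              # keep only the pathname after " -- "
    mkdir -p "$target/${path#/}"    # hint from the post: mkdir -p
  done < "$filelist"
}
```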
Good luck, and let us know how it works out!
Re: fastest way to copy 2 gpfs filesystems (2013-01-28T21:56:58Z)
- SystemAdmin 110000D4XK
More or less, you'll need mpi4py + MPI (I used MPICH) installed on all hosts that you want to participate in the parallel rsync. I did this by installing all of the dependencies in a central area on the GPFS filesystem. Obviously you'll also need the LD_LIBRARY_PATH environment variable set up on the hosts as well (I put it in root's .bashrc temporarily), since when you launch the script, it uses MPI to communicate between the ranks. I just created a python virtualenv with all the dependencies I needed.
The input file you give it is simple:
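(The original post's sample input file didn't survive; going only by the description above -- a list of sources and destinations -- a hypothetical one-pair-per-line layout might look like this, with entirely made-up paths:)

```
/gpfs/gpfs2/projects  /gpfs/gpfs3/migrate/projects
/gpfs/gpfs2/home      /gpfs/gpfs3/migrate/home
```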
I launch it like this (use prsync --help for options):
/gpfs/gpfs0/software/mpich/mpich2-1.4.1p1/bin/mpirun -f mpihostsfile.txt -n 16 /gpfs/gpfs0/software/python/bin/python ./prsync.py -f input.txt -l /gpfs/gpfs0/rsynclogs -v
Sorry if it's a little kludgy, but it certainly works.
Hope this helps, and let me know if there are any problems!
Re: fastest way to copy 2 gpfs filesystems (2013-01-28T23:33:00Z)
I would do this as follows:
- Free up a couple of LUNs on /gpfs3
- Add these DS3500 LUNs to /gpfs2
- Remove the DS5300 LUNs from /gpfs2
- Reformat the DS5300 LUNs
- Add the new DS5300 LUNs back to /gpfs2
- Delete the DS3500 LUNs from /gpfs2
This will probably be the best-performing solution. Even better: all of it can be done online, with no offline file copy at all. Your users will be happy!
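Those steps map onto the standard GPFS disk-management commands; a hedged sketch only -- the NSD names are hypothetical and the disk-descriptor syntax varies by release (GPFS 3.3-style shown), so check the Administration Guide before running anything:

```
# 1+2. Add the freed DS3500 LUNs to /gpfs2 (device name gpfs2)
mmadddisk gpfs2 "ds3500nsd1:::dataAndMetadata:2;ds3500nsd2:::dataAndMetadata:2"

# 3. Remove the DS5300 LUNs -- mmdeldisk migrates their data off, online
mmdeldisk gpfs2 "ds5300nsd1;ds5300nsd2"

# 4. Re-stripe/rebuild the DS5300 arrays in the storage controller,
#    add the new trays, and recreate the NSDs

# 5. Add the rebuilt DS5300 LUNs back
mmadddisk gpfs2 "ds5300new1:::dataAndMetadata:1;ds5300new2:::dataAndMetadata:1"

# 6. Delete the borrowed DS3500 LUNs, draining data back onto the DS5300
mmdeldisk gpfs2 "ds3500nsd1;ds3500nsd2"

# Optionally rebalance across the new disks when done
mmrestripefs gpfs2 -b
```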