
Pinned topic Migrate Space Data

‏2013-07-30T07:47:33Z | gpfs hsm linux tsm


I have a GPFS filesystem running on SLES 11 SP2 x64.
This is an archive system holding 65 million files, and the server isn't particularly fast. IBM now requires that we separate our production environment from our archive system onto two different TSM servers, so we have started an EXPORT NODE of all our data from TSM-A to TSM-B.
The export has now been running for 24 hours, and TSM has managed to export about 1 million files so far.
At that rate the export will take a minimum of 64 days to finish. I don't have enough free space in my GPFS filesystem to wait 64 days before the next HSM migration or MMBACKUP can run.
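The 64-day figure follows directly from the observed export rate; a quick back-of-the-envelope check (numbers taken from the post above):

```python
# Estimate remaining export time from the observed throughput.
total_files = 65_000_000      # files in the archive filesystem
exported = 1_000_000          # files exported in the first 24 hours
rate_per_day = exported / 1   # ~1M files/day observed
remaining_days = (total_files - exported) / rate_per_day
print(round(remaining_days))  # → 64
```

So even if the rate holds perfectly steady, the filesystem has to survive more than two months without an HSM migration window.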

Does anyone have any ideas how to solve this?
Maybe I should recall all 65 million files and back the data up again? But I don't have enough space for that either, unless I add a new SAN disk to my GPFS cluster.
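A full recall of all 65 million files won't fit, but recalling and re-migrating in batches sized to the free GPFS space might. Below is a hedged sketch of that batching logic; `dsmrecall` and `dsmmigrate` are the TSM for Space Management commands, but the exact invocation, the file lists, and the sizes here are all illustrative assumptions — verify against your HSM client version before running anything.

```python
import subprocess

def make_batches(files, sizes, free_bytes):
    """Group files so each batch's total size stays under free_bytes,
    so a recalled batch never overfills the GPFS filesystem."""
    batches, batch, used = [], [], 0
    for f, s in zip(files, sizes):
        if batch and used + s > free_bytes:
            batches.append(batch)
            batch, used = [], 0
        batch.append(f)
        used += s
    if batch:
        batches.append(batch)
    return batches

def process(batches, dry_run=True):
    """Recall a batch from the old server, re-migrate it to the new one.
    Command names are real TSM HSM tools, but flags/targets are assumptions."""
    for batch in batches:
        if dry_run:
            print(f"would recall and re-migrate {len(batch)} files")
            continue
        subprocess.run(["dsmrecall"] + batch, check=True)   # recall (from TSM-A)
        subprocess.run(["dsmmigrate"] + batch, check=True)  # migrate (to TSM-B)

# Example with toy sizes: 10 bytes free, three 5-byte files → two batches.
batches = make_batches(["a", "b", "c"], [5, 5, 5], free_bytes=10)
process(batches)  # dry run only
```

The batch size would in practice come from `df`/`mmdf` output for the filesystem, and the file lists from a GPFS policy scan, but the principle is the same: never recall more than you can hold.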

Or can I set up a policy that recalls the data from one TSM server and migrates it to my other TSM server, so that all new data is also backed up and migrated to the new TSM server?
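For what it's worth, the HSM client configuration does distinguish between servers via stanzas in dsm.sys, and there is a MIGRATESERVER option for space management. A sketch of what that might look like follows — all server names and addresses are placeholders, and whether old stubs migrated to TSM-A can still be recalled after the migration server is switched to TSM-B is exactly the kind of thing to verify with IBM support first:

```
* dsm.sys sketch (names and addresses are placeholders)
SErvername  TSM_A               * old server: existing HSM stubs live here
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsm-a.example.com

SErvername  TSM_B               * new server: backups and new migrations
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsm-b.example.com

* HSM option: direct new migrations at TSM_B
DEFAULTServer   TSM_B
MIGRATESERVER   TSM_B
```

If recalls against TSM-A stop working once MIGRATESERVER points at TSM-B, this approach would have to be combined with a staged recall before switching over.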

Any ideas will help. Thanks,
Christian Svensson