A common problem is running out of disk space for your archive logs - what can you do?
If you are archiving your MQ logs, you can quickly accumulate a lot of archive log data sets. If you are not careful you can run out of space because the storage pool is full. What can you do? Your best bet is to go and talk to your Storage Administrator - taking this blog along may help.
If you back up your page sets every day, you need to keep the archive logs for at least one day - which in practice means two days. If you create a backup at 03:00 in the morning, and the archive logs have an expiry of one day, I think they can get deleted at midnight the following day. If you then have a problem at 01:00, you may find you do not have the archives. So you want them to last past the time of the second backup.
Also remember that if you have a power-down weekend, the archives may get deleted at 23:59 on Sunday, so you may not be able to recover from the backup you took on Friday night.
So if you have 50 archives a day, and each active log is 3 GB, you might have 100 archives over two days - and if you have dual archiving, this is 200 data sets, all of size 3 GB.
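The arithmetic above works out as:

```
50 archives/day x 2 days          = 100 data sets
100 data sets x 2 (dual archive)  = 200 data sets
200 data sets x 3 GB              = 600 GB of DASD
```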
One option is to be nice to your Storage Administrator and ask for some more DASD. If they can provide it, end of problem. However, it is more likely that they can give you some more, but not enough, and you are stuck with being unable to archive during your peak time! Another option is described below.
What are realistic options?
I spoke to Chris Taylor in the IBM Software Migration Project Office (which performs conversion and implementation services), who said:
A lot of our accounts have changed the archive log offload process (MQ or DB2) to go straight to a virtual tape system, if they have access to one.
They then let the tape management system determine when the log data should expire. The upside of this approach is that you do not have to worry about HSM managing the data via migration and thus using CPU cycles to migrate this data. The downside is that if tape activity is unavailable for any reason, MQ will eventually grind to a halt, as it cannot offload the data quick enough.
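Where MQ writes its archive logs is controlled by the CSQ6ARVP archive system parameters, so pointing the offload at a virtual tape system is done there. A minimal sketch - the esoteric unit name VTS, the data set prefixes and the retention value are all assumptions; use your installation's values:

```
* Sketch only: unit name VTS, prefixes MQM1.A1/MQM1.A2 and
* ARCRETN=3 are assumptions - check your own CSQ6ARVP values.
         CSQ6ARVP UNIT=VTS,                                            X
               UNIT2=VTS,                                              X
               ARCPFX1=MQM1.A1,                                        X
               ARCPFX2=MQM1.A2,                                        X
               ARCRETN=3
```

With both UNIT and UNIT2 pointing at tape, neither copy of a dual archive sits on DASD, and expiry is left to the tape management system as described above.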
The second approach is for the archive offloads to a storage group, which can then be setup for Interval Migration in HSM. By default, this checks once an hour, on the hour and if a threshold point has been reached, the logs will be migrated to HSM. I actually would suggest going straight to tape in this case, as the size of the logs could well end up filling the ML1 volumes and causing other problems; ML1 is generally used for data that requires a quick recall and I would hope that there is little likelihood of the MQ logs actually being used for a forward recovery. If they want to avoid the tape outage problem, they could migrate to ML1 but move to ML2 after one day. Interval Migration can be CPU-intensive so I suggest limiting it only to those storage groups that really need it and MQ logs would certainly fit into this category.
So by clever use of thresholds you could keep archive logs on DASD for two hours, after which they get migrated.
If the customer has non-SMS volumes with the AUTOMIGRATION parameter set in the ADDVOL command, then Interval Migration will be performed against those volumes when SETSYS INTERVALMIGRATION is specified. For this reason, I prefer to use SETSYS NOINTERVALMIGRATION and set Auto Migrate to I in the storage group - see below
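A sketch of the relevant ARCCMDxx statements - the volume serial MQARC1 is an assumption for illustration:

```
/* Sketch only: volume serial MQARC1 is an assumption.       */
/* Turn off system-wide interval migration, and drive it     */
/* instead from Auto Migrate=I on the storage group.         */
SETSYS NOINTERVALMIGRATION
/* On a non-SMS volume, AUTOMIGRATION here is what would     */
/* make it eligible when interval migration is on:           */
ADDVOL MQARC1 UNIT(3390) PRIMARY(AUTOMIGRATION)
```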
So how do you do it ?
I spoke to the SMS administrator at IBM Hursley ( Keith Wickens) who told me what a Storage Administrator needs to know.
You can control the migration characteristics on a storage group level.
In the management class, make sure days on primary is set to 0, which will migrate at the earliest opportunity within the given policy. If the migration runs overnight, the data sets created that day will most probably be migrated.
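In ISMF panel terms this is the Migration Attributes section of the management class - a sketch, with the one-day move to ML2 mentioned earlier included; the exact values are assumptions to experiment with:

```
Migration Attributes:
  Primary Days Non-usage . . : 0      (eligible immediately)
  Level 1 Days Non-usage . . : 1      (move ML1 -> ML2 after a day)
```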
You can then choose Interval migration over standard migration, which will check every hour whether there is something to be migrated. Refer to the following webpage: Interval migration explained
In a Nutshell:
SETSYS INTERVALMIGRATION in SYS1.PARMLIB(ARCCMDxx)
and within ISMF, option 6 (the storage group application selection panel), alter the specific storage group and set Auto Migrate to I (for Interval).
With that value altered, press F8 to go to the second page, where you may also want to lower the "Allocation/migration Threshold" values.
At Hursley these are typically 70 High & 50 Low, but for DB2 arclogs I go quite extreme and use 50 High & 1 Low. This means that space management will target the storage group if it reaches 50% used and migrate until it is all gone. Best for the customer to experiment.
Once that is done you will need to revalidate and activate SMS. Then restart HSM.
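Pulling the steps above together, a sketch of what the settings might look like - the storage group name MQARCH is an assumption; substitute your own names and thresholds:

```
/* SYS1.PARMLIB(ARCCMDxx) */
SETSYS INTERVALMIGRATION

ISMF, option 6, ALTER storage group MQARCH:
  Auto Migrate . . . . . . . . . : I     (Interval)
  (F8 to the second page)
  Allocation/migration Threshold:
    High . . . . . . . . . . . . : 50    (%)
    Low  . . . . . . . . . . . . : 1     (%)
```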
I never alter the track managed value but the above seems to work for me.
In Hursley we have also moved to continuous migration, where HSM is supposed to attempt to keep storage groups at those levels continuously as it allocates. However, if it decides a data set is not eligible for migration, it won't re-attempt it for 24 hours, even if the data set becomes eligible within the next hour. For rapidly filling storage groups like arclogs, that can be counterproductive!