Method 3: History Processing
This method allows you to copy existing sequential data sets (referred to as "history data sets") to the cloud. History processing supports sequential data sets on both DASD and tape.
To copy existing sequential data sets to the cloud, you must create a History Include/Exclude list. To create the list, select option 4 (Backup History Datasets) from the Main menu.
History processing is the only method that copies existing data sets to the cloud; Methods 1 and 2 capture data as it is being written. If a data set has already been created and you want it copied to the cloud, it must be specified in the History Include/Exclude list.
The History Include/Exclude list is processed when the auto-backup repository interval expires. This interval is based on the amount of time specified for the "Auto Bkup Repository Min" parameter, which you can access from the Main menu by selecting Option 1 (Cloud Connector Settings - Parmlib Options) and then Option 1 (General Options). The list is also processed 15 seconds after the Cloud Tape Connector started task starts.
The History Include/Exclude list is processed to schedule data sets to be copied to the cloud. Because the list supports "wildcard" characters in data set specifications, it can pick up newly created data sets as well as data sets that have already been processed. Using the options in ISPF panel CUZ$INEX, you can choose to schedule data sets that have an edited cloud copy, data sets created after the last processing run, or data sets that are not edited. If the History Include/Exclude list is replaced, the updates take effect during the next interval.
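The include/exclude selection described above can be sketched in Python. This is an illustration only, not the product's actual matching logic: the mask names and the use of simple `*` wildcards via `fnmatch` are assumptions, and Cloud Tape Connector's real mask syntax may differ.

```python
from fnmatch import fnmatchcase

# Hypothetical masks for illustration; real History Include/Exclude
# entries are maintained through the ISPF panels.
INCLUDE = ["PROD.BACKUP.*", "PROD.HIST.*"]
EXCLUDE = ["PROD.BACKUP.TEMP*"]

def selected(dsn: str) -> bool:
    """Return True if the data set name matches an include mask
    and does not match any exclude mask."""
    if any(fnmatchcase(dsn, mask) for mask in EXCLUDE):
        return False
    return any(fnmatchcase(dsn, mask) for mask in INCLUDE)

names = ["PROD.BACKUP.D241001", "PROD.BACKUP.TEMP1", "PROD.OTHER.D241001"]
print([n for n in names if selected(n)])  # → ['PROD.BACKUP.D241001']
```

Because the masks are wildcards rather than fixed names, a data set created after the list was built (such as a new daily backup) still matches on the next interval, which is how newly created data sets are picked up automatically.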
If you want to copy new backups to the cloud without staging the data or writing directly to the cloud, use this history processing method. The only drawback is that the backup tape is mounted twice: once to create the tape backup and a second time to read the tape and copy the data set to the cloud. This method is useful if you have a very short production batch window or do not have the DASD storage required to stage the data.
Another consideration is that history processing retains only one generation of a data set in the cloud: when the same data set is copied again, the new copy replaces the existing cloud copy. Filtering (Methods 1 and 2) allows up to 10 generations.
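The generation-retention difference can be sketched as follows. This is a conceptual model only: the store structures and function names are invented for illustration and do not reflect the product's internal implementation.

```python
from collections import defaultdict, deque

MAX_FILTER_GENERATIONS = 10  # limit for filtering (Methods 1 and 2)

# History processing keeps one generation: a new copy replaces the old.
history_store = {}

def history_copy(store, dsn, copy_id):
    store[dsn] = copy_id  # overwrites any previous cloud copy

# Filtering retains up to 10 generations per data set.
filter_store = defaultdict(lambda: deque(maxlen=MAX_FILTER_GENERATIONS))

def filter_copy(store, dsn, copy_id):
    store[dsn].append(copy_id)  # the oldest generation rolls off at 10

for gen in range(12):
    history_copy(history_store, "PROD.BACKUP.A", gen)
    filter_copy(filter_store, "PROD.BACKUP.A", gen)

print(history_store["PROD.BACKUP.A"])      # → 11 (only the latest copy)
print(len(filter_store["PROD.BACKUP.A"]))  # → 10 (capped at 10 generations)
```

If you need to restore anything older than the most recent copy of a data set, this one-generation behavior is the deciding factor for choosing filtering over history processing.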