How To
Summary
This document describes the steps needed to migrate the DASD devices of Db2 Analytics Accelerator for z/OS when the accelerator is defined for GDPS failover support.
For Accelerator maintenance levels up to 7.5.12.3, please see the following two sections:
- Section A describes the steps when using 'GDPS Metro Global - GM 4-site'.
- Section B describes the steps when using 'GDPS Metro and Metro Global - GM 3-site'.
For Accelerator maintenance level 7.5.13 or later, please see section C.
Objective
Migrating Accelerator DASD devices with GDPS failover support (up to maintenance level 7.5.12.3)
Note:
Your GDPS GEOPARM should never have the proxy tag specified for both the old and the new volumes at the same time. Both can be part of the GEOPARM at the same time if needed, but only the currently used ones should have the proxy tag specified.
If your Accelerator maintenance level is 7.5.13 or later, skip to Section C.
A. GDPS Metro Global - GM 4-site
To migrate the accelerator DASD devices, follow the steps described below:
1. Put the accelerator in maintenance mode in GDPS (Kp primary system in the production region) using action 'mm' in the xDR status panel, as shown in Figure 1.
Figure 1. VPCPSTDL, xDR status panel
2. Edit your current JSON configuration file and disable GDPS support by removing the entire block with GDPS attributes: "gdps_mode": "true", gdps_configuration, and gdps_servers.
Note: You will re-add this block later, so store it somewhere safe. A sketch of the block follows below.
3. Edit your current JSON configuration file and add a second storage environment for the new devices that you want to migrate to by extending the indicated block, as shown in Figure 2. The example uses the following storage device names:
Figure 2. JSON configuration file with second storage environment added
- Boot disk: 0.0.ce00
- Data pool: 0.0.ce01, 0.0.ce02, 0.0.cf01, 0.0.cf02
- Runtime devices: 0.0.cf00
Note: The device names containing the letter b are the old devices that you are migrating from, while those containing the letter c are the ones you are migrating to. A sketch of the extended block follows below.
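The following is a minimal sketch of how the extended block might look with both storage environments defined. The attribute names and the old b* device names are illustrative assumptions; take the exact structure from your existing configuration file and from Figure 2:

    "storage_environments": [
        {
            "boot_device": "0.0.be00",
            "runtime_devices": [ "0.0.bf00" ],
            "data_devices": [ "0.0.be01", "0.0.be02", "0.0.bf01", "0.0.bf02" ]
        },
        {
            "boot_device": "0.0.ce00",
            "runtime_devices": [ "0.0.cf00" ],
            "data_devices": [ "0.0.ce01", "0.0.ce02", "0.0.cf01", "0.0.cf02" ]
        }
    ]

The first environment describes the old devices that you are migrating from, the second one the new devices listed above.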
4. Open the Admin UI of the accelerator.
5. On the home page, under Configuration, click "Upload, validate and apply an updated configuration file for the Accelerator". The Update page opens. Upload your new JSON configuration file, which now contains definitions for two storage environments and has GDPS support disabled.
6. Click Update.
7. Click Download to export your current configuration.
8. The export file is encrypted. In addition to the accelerator configuration, it contains the encryption keys of the storage pools. The storage pools are always encrypted and use encryption keys that are stored in an encrypted part of the boot volume. Without these keys, you can no longer access the accelerator. Therefore, keep the export file in a safe place.
9. Open the Hardware Management Console (HMC) of your SSC LPAR.
10. On the HMC, deactivate the LPAR that uses the boot disk on the old storage box:
- In the navigation tree on the left, under Systems Management, select the IBM Z system that the SSC LPAR belongs to. All LPARs on that system are listed in the Partitions view (tab) on the main page.
- In the Partitions view, right-click the name of the LPAR you want to deactivate and select Daily → Deactivate from the context menu.
11. Make sure the new DASDs are identical binary copies of the old DASDs.
12. Bring the SSC LPAR into the installer mode:
- On the HMC, select the Customize/Delete Activation Profiles task.
- In the navigation tree on the left, navigate to and select the SSC link.
- Make sure that Secure Service Container installer is selected on the main page in the center.
13. Activate the SSC LPAR using Daily → Activate from the context menu.
14. Log on to the Admin UI again. That is, enter the IP address of your SSC LPAR in a web browser and provide the Master user ID and Master password. You see the Install Software Appliance window.
15. In the Install Software Appliance window, select Attach existing disk.
16. Select the new boot volume (0.0.ce00 in the example).
17. Click Apply.
18. Update your GDPS GEOPARM to include the new volumes and to add the proxy tag for them. Be aware that you should never have the proxy tag specified for both the old and the new volumes at the same time. Both can be part of the GEOPARM at the same time, but only the currently used volumes should have the proxy tag specified.
Example: <PROXY>gdpsssc1</PROXY>
A schematic sketch follows below.
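The following schematic shows the intent of this step; everything in parentheses is a placeholder for your installation's actual GEOPARM volume definitions, and only the PROXY tag is literal syntax taken from the example above:

    (volume definitions for the old b* devices - no PROXY tag specified)
    (volume definitions for the new c* devices, e.g. ce00 through cf02)
    <PROXY>gdpsssc1</PROXY>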
19. Run a DASD config on your primary GDPS Kp system.
20. When your accelerator has completed startup and reached the status 'ready', edit your current JSON configuration file and enable GDPS support by re-adding the entire GDPS block ("gdps_mode": "true", gdps_configuration, and gdps_servers) as saved in step 2.
21. Edit your current JSON configuration file and specify an IP address for the Kp systems in your remote region (server3 and server4) that is valid but not reachable from the accelerator for the time being. This step is only needed if you have IP connectivity to your remote region at this time and if xDR is started on the Kp systems in the remote region.
Example:
    "gdps_servers": {
        "server1": { "ipv4": "10.20.76.45", "port": "5529" },
        "server2": { "ipv4": "10.20.76.46", "port": "5529" },
        "server3": { "ipv4": "11.20.76.44", "port": "5529" },
        "server4": { "ipv4": "11.20.76.45", "port": "5529" }
    },
22. Open the Admin UI of the accelerator.
23. On the home page, under Configuration, click "Upload, validate and apply an updated configuration file for the Accelerator". The Update page opens. Upload your new JSON configuration file, which now contains definitions for two storage environments and has GDPS support enabled.
24. Click Update.
25. After a few seconds, the GDPS Kp primary system in your local region should connect to the accelerator, and you should get an SDF entry indicating that MTCONFIG failed with RSN 43, as shown in Figure 3.
Figure 3. SDF panel showing XDR MTCONFIG error message
26. If you performed step 21, edit your current JSON configuration file and specify the correct IP addresses for the Kp systems in your remote region (server3 and server4).
27. If you performed step 21, repeat steps 22 through 24 to apply the correct IP addresses for server3 and server4.
28. Click 'GDPS client' in the Admin UI of the accelerator. The GDPS Agent Information page opens.
29. Press the 'Restart GDPS Agent' button, as shown in Figure 4.
Figure 4. Detail of the Restart GDPS Agent button
30. Within a couple of seconds, the GDPS client should show a green status in the accelerator Admin UI, as shown in Figure 5.
Figure 5. GDPS client status indicator
31. Check that the accelerator is showing a green status of LNX-A in the GDPS Standard Actions panel, as shown in Figure 6.
Figure 6. Accelerator status indication on GDPS Standard Actions panel
32. Update the GDPS load table with the new load addresses for the accelerator, reflecting the new devices. Do this using the modify action on the GDPS Standard Actions panel for the accelerator, as shown in Figure 7. Afterwards, update the load addresses appropriately.
Figure 7. VPCPFLFZ, Modify System panel
B. GDPS Metro and Metro Global - GM 3-site
To migrate the accelerator DASD devices, follow the steps described below:
1. Put the accelerator in maintenance mode in GDPS (Kp primary system in the production region) using action 'mm' in the xDR status panel, as shown in Figure 8.
Figure 8. VPCPSTDL, xDR status panel
2. Edit your current JSON configuration file and disable GDPS support by removing the entire block with GDPS attributes: "gdps_mode": "true", gdps_configuration, and gdps_servers (see the sketch in Section A, step 2).
Note: You will re-add this block later, so store it somewhere safe.
3. Edit your current JSON configuration file and add a second storage environment for the new devices that you want to migrate to by extending the indicated block, as shown in Figure 9 (see also the sketch in Section A, step 3). The example uses the following storage device names:
Figure 9. JSON configuration file with second storage environment added
- Boot disk: 0.0.ce00
- Data pool: 0.0.ce01, 0.0.ce02, 0.0.cf01, 0.0.cf02
- Runtime devices: 0.0.cf00
Note: The device names containing the letter b are the old devices that you are migrating from, while those containing the letter c are the ones you are migrating to.
4. Open the Admin UI of the accelerator.
5. On the home page, under Configuration, click "Upload, validate and apply an updated configuration file for the Accelerator". The Update page opens. Upload your new JSON configuration file, which now contains definitions for two storage environments and has GDPS support disabled.
6. Click Update.
7. Click Download to export your current configuration.
8. The export file is encrypted. In addition to the accelerator configuration, it contains the encryption keys of the storage pools. The storage pools are always encrypted and use encryption keys that are stored in an encrypted part of the boot volume. Without these keys, you can no longer access the accelerator. Therefore, keep the export file in a safe place.
9. Open the Hardware Management Console (HMC) of your SSC LPAR.
10. On the HMC, deactivate the LPAR that uses the boot disk on the old storage box:
- In the navigation tree on the left, under Systems Management, select the IBM Z system that the SSC LPAR belongs to. All LPARs on that system are listed in the Partitions view (tab) on the main page.
- In the Partitions view, right-click the name of the LPAR you want to deactivate and select Daily → Deactivate from the context menu.
11. Make sure the new DASDs are identical binary copies of the old DASDs.
12. Bring the SSC LPAR into the installer mode:
- On the HMC, select the Customize/Delete Activation Profiles task.
- In the navigation tree on the left, navigate to and select the SSC link.
- Make sure that Secure Service Container installer is selected on the main page in the center.
13. Activate the SSC LPAR using Daily → Activate from the context menu.
14. Log on to the Admin UI again. That is, enter the IP address of your SSC LPAR in a web browser and provide the Master user ID and Master password. You see the Install Software Appliance window.
15. In the Install Software Appliance window, select Attach existing disk.
16. Select the new boot volume (0.0.ce00 in the example).
17. Click Apply.
18. Update your GDPS GEOPARM to include the new volumes and to add the proxy tag for them. Be aware that you should never have the proxy tag specified for both the old and the new volumes at the same time. Both can be part of the GEOPARM at the same time, but only the currently used volumes should have the proxy tag specified (see the schematic sketch in Section A, step 18).
Example: <PROXY>gdpsssc1</PROXY>
19. Run a DASD config on your primary GDPS Kp system.
20. When your accelerator has completed startup and reached the status 'ready', edit your current JSON configuration file and enable GDPS support by re-adding the entire GDPS block ("gdps_mode": "true", gdps_configuration, and gdps_servers) as saved in step 2.
21. Open the Admin UI of the accelerator.
22. On the home page, under Configuration, click "Upload, validate and apply an updated configuration file for the Accelerator". The Update page opens. Upload your new JSON configuration file, which now contains definitions for two storage environments and has GDPS support enabled.
23. Click Update.
24. After a few seconds, the GDPS Kp primary system in your local region should connect to the accelerator, and you should get an SDF entry indicating that MTCONFIG failed with RSN 43, as shown in Figure 10.
Figure 10. SDF panel showing XDR MTCONFIG error message
25. Click 'GDPS client' in the Admin UI of the accelerator. The GDPS Agent Information page opens.
26. Press the 'Restart GDPS Agent' button, as shown in Figure 11.
Figure 11. Detail of the Restart GDPS Agent button
27. Within a couple of seconds, the GDPS client should show a green status in the accelerator Admin UI, as shown in Figure 12.
Figure 12. GDPS client status indicator
28. Check that the accelerator is showing a green status of LNX-A in the GDPS Standard Actions panel, as shown in Figure 13.
Figure 13. Accelerator status indication on GDPS Standard Actions panel
29. Update the GDPS load table with the new load addresses for the accelerator, reflecting the new devices. Do this using the modify action on the GDPS Standard Actions panel for the accelerator, as shown in Figure 14. Afterwards, update the load addresses appropriately.
Figure 14. VPCPFLFZ, Modify System panel
Migrating Accelerator DASD devices with GDPS failover support (maintenance level 7.5.13 or later)
Note:
Your GDPS GEOPARM should never have the proxy tag specified for both the old and the new volumes at the same time. Both can be part of the GEOPARM at the same time if needed, but only the currently used ones should have the proxy tag specified.
C. GDPS Metro and GDPS Metro Global - GM 3/4-site
1. Stop the accelerator system gracefully while it is running on the existing volumes in Region A from GDPS (Kp primary system): use action 'S – Stop' on the GDPS Standard Actions panel (panel ID VPCPSTD1), as shown in Figure 15.
Figure 15. VPCPSTD1, GDPS Standard Actions panel - Action Stop
Region A corresponds to GDPS servers 1 and 2 in the accelerator's JSON configuration file; see the sketch below.
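For reference, the sketch below shows this mapping in the gdps_servers block of the JSON configuration file. The IP addresses and port are illustrative only (they follow the example in Section A, step 21), and the region annotations in parentheses are explanatory, not part of the JSON:

    "gdps_servers": {
        "server1": { "ipv4": "10.20.76.45", "port": "5529" },   (Region A, local)
        "server2": { "ipv4": "10.20.76.46", "port": "5529" },   (Region A, local)
        "server3": { "ipv4": "11.20.76.44", "port": "5529" },   (Region B, remote)
        "server4": { "ipv4": "11.20.76.45", "port": "5529" }    (Region B, remote)
    }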
2. Wait until the accelerator system is stopped. This is indicated by the value RESET in the 'Status' column of the GDPS Standard Actions panel (panel ID VPCPSTD1), as shown in Figure 16.
Figure 16. VPCPSTD1, GDPS Standard Actions panel - Status Reset
3. Make sure the new DASDs are identical binary copies of the existing DASDs.
4. Update your GDPS GEOPARM to include the new volumes and to add the proxy tag for them. Be aware that you should never have the proxy tag specified for both the old and the new volumes at the same time. Both can be part of the GEOPARM at the same time, but only the currently used volumes should have the proxy tag specified (see the schematic sketch in Section A, step 18).
Example: <PROXY>gdpsssc1</PROXY>
5. Run a DASD config on your primary GDPS Kp system.
6. If you are not using GDPS Metro Global GM 4-site, omit this step and skip to step 7. Otherwise, stop xDR in your remote Region B to ensure that the accelerator system does not communicate with the Kp systems in that region when it is re-activated for the first time: use the command XDRSTOP on the xDR status panel (panel ID VPCPSTDL) of the primary Kp in Region B.
7. Update the GDPS load table with the new load addresses for the accelerator system, reflecting the new devices. Proceed as follows:
- Use action 'M – Modify' on the GDPS Standard Actions panel (panel ID VPCPSTD1) for the accelerator system, which takes you to the "Modify System" panel (panel ID VPCPFLFZ), as shown in Figure 17.
Figure 17. VPCPFLFZ, Modify System panel
- Make the applicable changes to the load addresses.
- Use action 'S – Select' to make the new entry the current primary boot volume of the accelerator system.
8. Activate the accelerator system from GDPS (Kp primary system in your local Region A): use action 'A – Activate' on the GDPS Standard Actions panel (panel ID VPCPSTD1), as shown in Figure 18.
Figure 18. VPCPSTD1, GDPS Standard Actions panel - Action Activate
9. When the accelerator system is up and running, open the Admin UI of the accelerator. Proceed as follows:
- Click 'GDPS client' to open the GDPS Agent Information window.
- Press the 'Restart GDPS Agent' button, as shown in Figure 19.
Figure 19. Restart GDPS Agent
Within a couple of seconds, the status of the GDPS client shown in the Admin UI should turn green, as shown in Figure 20.
Figure 20. Accelerator Admin UI - GDPS client status indicator
10. Switch to the GDPS Standard Actions panel (panel ID VPCPSTD1) and check that the accelerator system is showing a green status of LNX-A, as shown in Figure 21.
Figure 21. VPCPSTD1, GDPS Standard Actions panel - Accelerator system status indication
11. If you are not using GDPS Metro Global GM 4-site, you are done. Otherwise, you performed step 6 and stopped xDR in your remote Region B; you can now restart xDR using the XDRSTART command on the xDR status panel (panel ID VPCPSTDL) of the primary Kp in Region B.
Document Location
Worldwide
[{"Type":"MASTER","Line of Business":{"code":"LOB10","label":"Data and AI"},"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"SS4LQ8","label":"Db2 Analytics Accelerator for z\/OS"},"ARM Category":[{"code":"a8m0z0000000741AAA","label":"Administration"},{"code":"a8m0z0000000775AAA","label":"Db2 related products and functions-\u003EDb2 Analytics Accelerator for z\/OS"}],"ARM Case Number":"","Platform":[{"code":"PF035","label":"z\/OS"}],"Version":"7.5.0"}]
Product Synonym
IDAA
Document Information
Modified date:
15 August 2024
UID
ibm17149074