Installing and running the z/OS components manually
You can manually install the z/OS® components for IBM® Data Replication VSAM for z/OS Remote Source.
Before you begin
- You must have VSAM Remote Source running in a Docker container to proceed with this installation.
- If you run this process multiple times, some z/OS Unix System Services (USS) files and data sets will be deleted and re-allocated. This process fully replaces your z/OS components for VSAM Remote Source. Use caution if you performed any z/OS component customization from a previous installation.
Procedure
-
To manually install the z/OS components, you must perform the following tasks:
- Execute the ClassicInstallAndMaintenanceMenu.sh script to generate the jobs that are needed to install and run the z/OS log reader components.
- Transfer a series of z/OS support files to your USS staging area (/u/USER001).
- Transfer the customization jobs that have been created to an existing partitioned data set.
- Submit and successfully run the transferred customization jobs to complete the installation of the z/OS log reader components.
-
To create the sample members, start a bash session in the container by running the following command:
docker exec -it --workdir /classic/usr/scripts/ ClassicCDCVSAM bash
-
Generate the jobs that are used to allocate the data sets on z/OS by running the bash script with the following command:
ClassicInstallAndMaintenanceMenu.sh
After running the script, you should receive the following output:
*---------------------------------------------------------------------*
*
* IBM Data Replication VSAM for z/OS Remote Source, V11.4
* PID: 5737-C30
*
* Menu options:
* 1. Set the job card
* 2. Set the z/OS High Level Qualifier (HLQ)
* 3. Set the z/OS Unix System Services (USS) path
* 4. Install or replace z/OS libraries with this container's maintenance level
* 5. Start a new z/OS log reader
* 6. Stop the existing z/OS log reader
* 7. Get diagnostic log for the existing z/OS log reader
* 8. Configure a z/OS VSAM cluster for the container environment
* 9. Configure VSAM IVP file
* 10. Update container scripts from the current image
* 11. Generate diagnostics tar
* 99. Exit this script
*
*---------------------------------------------------------------------*
-
Enter "1" to select option 1.
When prompted, enter the first line of the job card. For example:
//JOB0000 JOB (JOBNAME),'CLASSIC_CDC',
After you enter this line, you are asked whether you want to enter another; you can enter up to three lines in total. Respond yes or no and proceed accordingly. When complete, you are returned to the menu for additional steps.
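If you choose to supply additional lines, a complete job card might look like the following sketch. The class, message class, and other JOB statement parameters shown here are illustrative assumptions only; use the accounting information and parameters that your site requires:
//JOB0000 JOB (JOBNAME),'CLASSIC_CDC',CLASS=A,MSGCLASS=X,
//         NOTIFY=&SYSUID,REGION=0M,
//         TIME=NOLIMIT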
-
Enter "2" to select option 2. When prompted, enter the High Level Qualifier (HLQ) of your APF-authorized load library. This value is prepended to .SCACLOAD to create the full data set name; for example, entering the following HLQ results in a load library named CEC.V114.REMOTE.SCACLOAD:
CEC.V114.REMOTE
-
Enter "3" to select option 3. When prompted, enter the USS path to use during installation to temporarily stage files for z/OS. For example:
/u/USER001
-
Enter "4" to select option 4:
Select an option: 4
*---------------------------------------------------------------------*
*
* Classic CDC for Remote Log Reader Installation and Maintenance
* Install scripting for Classic CDC remote log reader
*
*---------------------------------------------------------------------*
When prompted, indicate whether the z/OS libraries under the HLQ are SMS-managed. Your libraries must be SMS-managed to use the automated installation process.
Are your USER001.V114.REMOTE.* target libraries SMS-managed? (Y or N)y
You must have the required z/OS authority to APF authorize a library for the automation job to run successfully. If you do not, you can continue, but you must have the SCACLOAD library APF authorized before starting the z/OS components. The generated JCL uses a JES2 command to APF authorize SCACLOAD, so your site must use JES2 to have the installation process complete the APF authorization for you.
Should this installation process attempt to APF authorize SCACLOAD? (Y or N)y
Does your z/OS installation use JES2? (Y or N)y
-
You are then asked whether you plan to run multiple remote log readers on the z/OS system where you will install the software. The installation process creates two DASD-only log streams, and each remote log reader needs at least its own diagnostic log stream.
Will there be multiple z/OS remote log readers running on the same image? (Y or N)
If you respond Y, you are prompted for the log stream high level qualifier. This value is prepended to .CECDIAG for the diagnostic log stream and .CECREPL for a replication log stream that is (optionally) used for verification purposes.
The default log stream high level qualifier is CEC.RMTLRS. You can reply Y to this prompt even if you plan to deploy only a single remote log reader, for example when the default name violates site naming requirements.
When prompted, enter your z/OS user ID. The script then summarizes the remaining installation steps and asks whether it should perform the z/OS installation by using SFTP and SSH. Because this procedure completes the installation manually, reply n:
Enter the z/OS user ID : user001
Installation is completed through multiple steps. For SMS-managed environments, it is your choice whether to complete installation with this script or provide JCL to your z/OS team to complete the installation. The steps follow along with the generated JCL:
Note: Paths are relative from current working directory: /classic/usr/scripts
1. Allocate z/OS data sets with
   ../output/InstallRemoteLRS.Alloc.jcl.2020-03-09_11:06:13
2. Copy terse format files from zFS to z/OS data sets with
   ../output/InstallRemoteLRS.Copy.jcl.2020-03-09_11:06:13
3. Unterse and receive into final libraries with
   ../output/InstallRemoteLRS.Untrs.jcl.2020-03-09_11:06:13
4. APF authorize the SCACLOAD load library with
   ../output/APFAuthorize.jcl.2020-03-09_11:06:13
   OR by having your z/OS Systems Programmer run command:
   SETPROG APF,ADD,DSNAME=USER001.V114.REMOTE.SCACLOAD,SMS
   If not SMS-managed substitute SMS with VOL=<volser>
5. Update the default configuration to use a custom diagnostic log stream or z/OS log reader listen port with
   ../output/UpdateRemoteLRSConfig.jcl.2020-03-09_11:06:13
6. Submit an IXCMIAPU job to define z/OS System Logger log streams with
   ../output/CreateLogs.jcl.2020-03-09_11:06:13
   This creates two z/OS System Logger DASDONLY log streams:
   - Diagnostic log: CEC.RMTLRS.CECDIAG
   - IVP replication log: CEC.RMTLRS.CECREPL
   If you do not have authority to create log streams you will need to find someone that does to submit the generated JCL file.
This installation process can attempt to complete the z/OS installation using SFTP and SSH.
Would you like to perform the z/OS installation now? (Y or N)n
-
Enter "5" to select option 5.
The following information is displayed:
Select an option: 5
*---------------------------------------------------------------------*
*
* Classic CDC for Remote Log Reader Installation and Maintenance
* Start z/OS address space for Classic CDC remote log reader
*
*---------------------------------------------------------------------*
ZOS_HLQ is USER001.V114.REMOTE
The Classic CDC for Remote Log Reader address space should run as a started task on z/OS that starts during the final stages of IPL. The goal is to have the address space available for the life of the z/OS LPAR and automatically restart when z/OS is restarted.
In some environments you can choose to run the server as a job under your authorization with /classic/usr/output/StartServerJob.jcl.2020-03-19_13:14:04
This process can attempt to start the z/OS address space using SFTP and SSH.
Enter the z/OS user ID:
Would you like to start the z/OS address space now? (Y or N) N
When prompted, provide your TSO user ID, which is used to tailor the JOB card in the generated JCL, and then enter N so that the address space is not started now.
Exit the script and proceed to the next task.
-
In addition to the jobs produced when executing the ClassicInstallAndMaintenanceMenu.sh shell script, you will need to upload the following z/OS support files from the /classic/shell directory using binary FTP:
- zOS.LOADR.TRS
- zOS.MSGS.TRS
- zOS.RMTLRS.SCACSAMP.TRS
- zOS.RMTLRSV.ALLTYP01.TRS
- zOS.RMTLRSV.CACCFGD.TRS
- zOS.RMTLRSV.CACCFGX.TRS
The tailored JCL is set up to access these files under USS with the same names. You should not change the file names during the transfer process. These files must also be transferred in binary format.
Use the following docker volume inspect command to identify the physical location of these files:
docker volume inspect classiccdc
The output should look similar to the following example:
docker volume inspect classiccdc
[
    {
        "CreatedAt": "2020-08-11T19:18:08-07:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/classiccdc/_data",
        "Name": "classiccdc",
        "Options": {
            "device": "/home/cecuser/classic/docker/volumes/classiccdc",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Exit the Docker shell and navigate to the Options > device location, where these files are physically located.
To transfer the files to USS, initiate an FTP session with the z/OS host system using the IP address (or host name) specified in the CLASSIC_REMOTE_HOSTNAME variable that was used to initially set up the container environment.
After you are connected, change to the USS staging directory, enable binary transfer mode, and issue a put command for each of the file names that are listed above. The following FTP commands need to be issued:
cd staging directory name
bin
put zOS.LOADR.TRS
put zOS.MSGS.TRS
put zOS.RMTLRS.SCACSAMP.TRS
put zOS.RMTLRSV.ALLTYP01.TRS
put zOS.RMTLRSV.CACCFGD.TRS
put zOS.RMTLRSV.CACCFGX.TRS
The following example shows the commands that need to be issued and the responses that you can expect:
ftp> cd USER001
250 HFS directory /u/USER001 is the current working directory
ftp> bin
200 Representation type is Image
ftp> put zOS.LOADR.TRS
local: zOS.LOADR.TRS remote: zOS.LOADR.TRS
125 Storing data set /u/USER001/zOS.LOADR.TRS
250 Transfer completed successfully.
2515968 bytes sent in 0.0294 secs (85548.04 Kbytes/sec)
ftp> put zOS.MSGS.TRS
local: zOS.MSGS.TRS remote: zOS.MSGS.TRS
125 Storing data set /u/USER001/zOS.MSGS.TRS
250 Transfer completed successfully.
391168 bytes sent in 0.00602 secs (64956.49 Kbytes/sec)
ftp> put zOS.RMTLRS.SCACSAMP.TRS
local: zOS.RMTLRS.SCACSAMP.TRS remote: zOS.RMTLRS.SCACSAMP.TRS
125 Storing data set /u/USER001/zOS.RMTLRS.SCACSAMP.TRS
250 Transfer completed successfully.
3072 bytes sent in 6.1e-05 secs (50360.66 Kbytes/sec)
ftp> put zOS.RMTLRSV.ALLTYP01.TRS
local: zOS.RMTLRSV.ALLTYP01.TRS remote: zOS.RMTLRSV.ALLTYP01.TRS
125 Storing data set /u/USER001/zOS.RMTLRSV.ALLTYP01.TRS
250 Transfer completed successfully.
29696 bytes sent in 0.000193 secs (153865.28 Kbytes/sec)
ftp> put zOS.RMTLRSV.CACCFGD.TRS
local: zOS.RMTLRSV.CACCFGD.TRS remote: zOS.RMTLRSV.CACCFGD.TRS
125 Storing data set /u/USER001/zOS.RMTLRSV.CACCFGD.TRS
250 Transfer completed successfully.
2048 bytes sent in 8.6e-05 secs (23813.95 Kbytes/sec)
ftp> put zOS.RMTLRSV.CACCFGX.TRS
local: zOS.RMTLRSV.CACCFGX.TRS remote: zOS.RMTLRSV.CACCFGX.TRS
125 Storing data set /u/USER001/zOS.RMTLRSV.CACCFGX.TRS
250 Transfer completed successfully.
1024 bytes sent in 0.000101 secs (10138.61 Kbytes/sec)
ftp> bye
221 Quit command received. Goodbye.
-
Just before asking whether the z/OS installation should be performed automatically, the shell script displays information about the jobs that were generated and the steps that must be performed to install the z/OS components.
The date and time when you executed the script are appended to the end of the file name.
Use the following docker volume inspect command to identify the physical location of these files. The output produced will look similar to the following example:
docker volume inspect classiccdcoutput
[
    {
        "CreatedAt": "2020-08-11T19:18:08-07:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/classiccdc/_data",
        "Name": "classiccdc",
        "Options": {
            "device": "/home/cecuser/classic/docker/volumes/classiccdc",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Exit the Docker shell and navigate to the Options > device location, and then upload the files that are shown in Table 1 below to your z/OS system by using FTP. These are plain-text files and must not be transferred in binary format. The files should be uploaded to an existing partitioned data set that you can use for job submission. Because the generated file names are too long to be used as member names, you must provide a shorter member name on each put command.
Table 1. Recommended member names

Generated "base" file name (1)    Recommended member name (2)
InstallRemoteLRS.Alloc.jcl        RVSJOB1
InstallRemoteLRS.Copy.jcl         RVSJOB2
InstallRemoteLRS.Untrs.jcl        RVSJOB3
APFAuthorize.jcl                  RVSJOB4
UpdateRemoteLRSConfig.jcl         RVSJOB5
CreateLogs.jcl                    RVSJOB6
StartServerJob.jcl                RVSJOB7

Notes:
1. The date and time that these files were created are appended after the base file name.
2. The names specified in the table are arbitrary. You can use any naming convention you want, provided that these jobs are run in the sequence that is listed in the table.
Issue the following FTP commands:
cd z/OS CNTL PDS name
put InstallRemoteLRS.Alloc.jcl.date-time rvsjob1
put InstallRemoteLRS.Copy.jcl.date-time rvsjob2
put InstallRemoteLRS.Untrs.jcl.date-time rvsjob3
put APFAuthorize.jcl.date-time rvsjob4
put UpdateRemoteLRSConfig.jcl.date-time rvsjob5
put CreateLogs.jcl.date-time rvsjob6
put StartServerJob.jcl.date-time rvsjob7
The following output shows the commands to transfer a set of these files that were created on 2020-03-11_14:43:43, along with the expected responses:
ftp> cd 'USER001.CNTL'
250 The working directory "USER001.CNTL" is a partitioned data set
ftp> put InstallRemoteLRS.Alloc.jcl.2020-03-11_14:43:43 rvsjob1
local: InstallRemoteLRS.Alloc.jcl.2020-03-11_14:43:43 remote: rvsjob1
125 Storing data set USER001.CNTL(RVSJOB1)
250 Transfer completed successfully.
3326 bytes sent in 0.000148 secs (22472.97 Kbytes/sec)
ftp> put InstallRemoteLRS.Copy.jcl.2020-03-11_14:43:43 rvsjob2
local: InstallRemoteLRS.Copy.jcl.2020-03-11_14:43:43 remote: rvsjob2
125 Storing data set USER001.CNTL(RVSJOB2)
250 Transfer completed successfully.
2221 bytes sent in 0.000101 secs (21990.10 Kbytes/sec)
ftp> put InstallRemoteLRS.Untrs.jcl.2020-03-11_14:43:43 rvsjob3
local: InstallRemoteLRS.Untrs.jcl.2020-03-11_14:43:43 remote: rvsjob3
125 Storing data set USER001.CNTL(RVSJOB3)
250 Transfer completed successfully.
3126 bytes sent in 0.000118 secs (26491.52 Kbytes/sec)
ftp> put APFAuthorize.jcl.2020-03-11_14:43:43 rvsjob4
local: APFAuthorize.jcl.2020-03-11_14:43:43 remote: rvsjob4
125 Storing data set USER001.CNTL(RVSJOB4)
250 Transfer completed successfully.
2624 bytes sent in 0.000146 secs (17972.60 Kbytes/sec)
ftp> put UpdateRemoteLRSConfig.jcl.2020-03-11_14:43:43 rvsjob5
local: UpdateRemoteLRSConfig.jcl.2020-03-11_14:43:43 remote: rvsjob5
125 Storing data set USER001.CNTL(RVSJOB5)
250 Transfer completed successfully.
700 bytes sent in 0.000163 secs (4294.48 Kbytes/sec)
ftp> put CreateLogs.jcl.2020-03-11_14:43:43 rvsjob6
local: CreateLogs.jcl.2020-03-11_14:43:43 remote: rvsjob6
125 Storing data set USER001.CNTL(RVSJOB6)
250 Transfer completed successfully.
1792 bytes sent in 0.000163 secs (4294.48 Kbytes/sec)
ftp> put StartServerJob.jcl.2020-03-13_12:21:59 rvsjob7
local: StartServerJob.jcl.2020-03-13_12:21:59 remote: rvsjob7
125 Storing data set UPTONG.CNTL(RVSJOB7)
250 Transfer completed successfully.
667 bytes sent in 0.000109 secs (6119.27 Kbytes/sec)
ftp> bye
221 Quit command received. Goodbye.
-
Running the installation and customization jobs
When you are done transferring the z/OS support files and customization jobs to your z/OS system, run these jobs in the order that is shown below.
Table 2. Expected completion codes

Member name   Expected completion code

RVSJOB1       All steps should end with a completion code of zero.
RVSJOB2       All steps should end with a completion code of zero.
RVSJOB3       All steps should end with a completion code of zero.
RVSJOB4       You might receive a non-zero completion code if:
              - You do not have authority to issue the SETPROG command.
              - Not all of the libraries that are referenced in the STEPLIB DD
                statement of the TSO logon procedure are APF authorized.
              If you receive a non-zero return code for the APF authorization
              job, review the output. If the D PROG,APF command identifies the
              SCACLOAD library as DYNAMIC, the SETPROG command worked. If not,
              you must find an individual with the required authority to run
              this job, issue the required command, or add the library to the
              system authorization member (see the sketches after this table).
              Note: It is recommended that the SCACLOAD library be permanently
              authorized, because the method that is used in the APF
              authorization job lasts only until the next IPL.
RVSJOB5       All steps should end with a completion code of zero.
RVSJOB6       The first time that you run this job, all steps should end with
              a completion code of zero. If you run the job a second time, you
              receive a return code of 12 because the log streams already
              exist. Normally, you do not want to create a new set of log
              streams when applying maintenance. However, the sample JCL
              contains commented-out commands to delete an existing log
              stream. If desired, you can remove the /* and */ around the
              DELETE command to delete the existing contents of the diagnostic
              log or the verification replication log (see the sketch after
              this table).
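As noted for RVSJOB4, if you decide to authorize the SCACLOAD library permanently, the typical approach is to add an APF entry to the active PROGxx member of SYS1.PARMLIB. The following is a minimal sketch only, reusing the example data set name from this procedure; your systems programmer might manage APF entries differently:
APF ADD DSNAME(USER001.V114.REMOTE.SCACLOAD) SMS
If the library is not SMS-managed, replace SMS with VOLUME(volser).
As noted for RVSJOB6, if you prefer to delete the existing log streams with a separate job rather than uncomment the DELETE statements in the generated CreateLogs JCL, an IXCMIAPU job similar to the following sketch can be used. The job card and log stream names repeat the examples used earlier, and running it requires the same System Logger authority as the CreateLogs job; adjust the sketch for your environment:
//DELLOGS JOB (JOBNAME),'CLASSIC_CDC',CLASS=A,MSGCLASS=X
//* Delete the existing diagnostic and IVP replication log streams
//DELSTEP  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
  DELETE LOGSTREAM NAME(CEC.RMTLRS.CECDIAG)
  DELETE LOGSTREAM NAME(CEC.RMTLRS.CECREPL)
/*
-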
To start the z/OS log reader, submit job RVSJOB7. Unlike the previous jobs, the z/OS log reader runs continuously until a STOP,ALL or STOP,ALL,IMMEDIATE operator command is issued.
You should inspect the JESMSGLG output to confirm that the server was able to successfully access the diagnostic log stream and establish a listen session for connection requests from the container on the CLASSIC_REMOTE_PORT number you provided when setting up the container environment. The output will look like the following:
CACA0120I SERVER RUNNING WITH MODE: <0>
CACA001I SERVER TOKEN: <CACSRVR_00000034>
CAC00105I Memory requested 67108864, Memory obtained 67107032
CACB002I CONNECT IN PROGRESS FOR LOG STREAM: CEC.RMTLRS.CECDIAG
CACB003I CONNECT COMPLETED FOR LOG STREAM: CEC.RMTLRS.CECDIAG
CACB0105E EVENTLOG not specified. Event messages will not be captured.
CAC00100I CONTROLLER: LOGGING STARTED
CAC00105I LOG VNXT 04222015: STARTED
CAC00105I CECLRS VNXT 04222015: STARTED
CAC00102I CONTROLLER: STARTED CECLRS
CAC00103I DATA SERVER: VNXT 04222015 READY
CECL0001I The log reader service is now active.
Note: The server start-up JCL that is provided runs the server as a job with an in-stream procedure. It is highly recommended that you convert this JCL to a started task that is started during the system IPL process, as sketched below.
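One common approach, shown here only as a sketch, is to copy the EXEC and DD statements from the generated StartServerJob JCL into a member of your procedure library and have your automation (or a COMMNDxx parmlib entry) issue the START command during IPL. The member name CECRLRS used below is an assumption, not a product default, and the MODIFY (F) form shown for passing the STOP commands to the running address space is also an assumption about how the operator commands are entered; adjust both to match your site's conventions:
S CECRLRS                       (start the z/OS log reader started task)
F CECRLRS,STOP,ALL              (request an orderly shutdown)
F CECRLRS,STOP,ALL,IMMEDIATE    (request an immediate shutdown)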