Installing the pax edition

To install the pax edition, you must meet the system prerequisites, download and verify the .pax file, and then complete the installation steps.

Meeting system prerequisites

In addition to the prerequisites documented in System prerequisites, installing the pax edition has the following size requirements:

  • At least 450 MB of free space in the zFS file system
  • At least 55 cylinders (CYL) of volume space for the MVS datasets
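
You can check the free space in the target zFS file system before you install. This is an example only; the mount point /u/userid is a placeholder for the path where you plan to extract the product:
    df -kP /u/userid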

Downloading and verifying the .pax file

Download the .pax file from the Open Enterprise SDK for Apache Kafka product page.

To verify the integrity of the downloaded .pax file, run the sha256sum checksum tool with the following command and confirm that the output matches the checksum value provided in the README file:
sha256sum zkafka_for_zOS.pax.Z
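
The following is a minimal sketch of automating the comparison; copy the checksum published in the README into the EXPECTED variable (the placeholder below is not a real value):
    # Paste the checksum value from the README between the quotation marks
    EXPECTED="<checksum value from README>"
    ACTUAL=$(sha256sum zkafka_for_zOS.pax.Z | awk '{print $1}')
    if [ "$ACTUAL" = "$EXPECTED" ]; then
        echo "Checksum verified"
    else
        echo "Checksum mismatch - download the file again"
    fi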

Performing installation steps

  1. Copy the .pax file zkafka_for_zOS.pax.Z to the z/OS system in binary mode. Transferring the file through SFTP is recommended (see the example after this list).
  2. Unpack the .pax file in any directory. For example, to unpax the file in your home directory, issue the following commands:
    cd ~/
    pax -rzvf zkafka_for_zOS.pax.Z
  3. After extracting the files, configure the file permissions and tags by running the environment setup script:
    export KAFKA_HOME=<Product installation path>/ixy/v1r1m0/zkafka  
    . $KAFKA_HOME/bin/.env

    These commands set the file permissions and encoding tags for the USS files automatically.

  4. Run the pax2mvs.sh shell script to automate the transfer of the .XMIT files from your local directory to MVS:
    ./pax2mvs.sh
    The script performs the following tasks:
    1. Cleans up any existing PDS datasets from a previous RECEIVE operation to avoid conflicts during the new RECEIVE step.
    2. Copies each .XMIT file to a corresponding sequential dataset (FB 80) on MVS using dynamic space allocation.
    3. Generates a JCL script (receive.jcl) that can be submitted to the system to unpack each .XMIT dataset into its respective PDS.
    Note: The script deletes any existing PDS datasets that match the expected output names (derived from the .XMIT filenames) before performing the RECEIVE operation. Review the dataset names carefully before running the script.
  5. Modify the "JOBCARD" on the first line of receive.jcl and, if needed, the HLQ (high-level qualifier) in receive.jcl; the HLQ is selected automatically, so change it only when necessary. Submit the job to receive the datasets (see the sketch after this list). A successful job ends with a return code of 0. The HLQ is used in the following step, Verifying the installation of Open Enterprise SDK for Apache Kafka.
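
For step 1, the following is a minimal sketch of transferring the file through an interactive SFTP session; the user ID userid and the host name zoshost are placeholders for your own values, and SFTP transfers the file in binary mode by default:
    sftp userid@zoshost
    sftp> put zkafka_for_zOS.pax.Z
    sftp> exit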

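For step 5, the following is a minimal sketch of substituting the HLQ and submitting the job from the z/OS UNIX shell; the qualifiers OLD.HLQ and MYUSER.KAFKA are placeholders, and the z/OS UNIX submit command is assumed to be available (you can also submit receive.jcl from TSO or ISPF):
    # Replace the automatically selected HLQ (OLD.HLQ here) with your own qualifier
    sed 's/OLD\.HLQ/MYUSER.KAFKA/g' receive.jcl > receive.local.jcl
    # Update the job card in receive.local.jcl if needed, then submit the job
    submit receive.local.jcl
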
Next step

To verify that the installation was successful, follow the steps in Verifying the installation of Open Enterprise SDK for Apache Kafka.