Hardware and software requirements
Learn about hardware and software requirements for ZD&T Enterprise Edition.
For a complete list of hardware and software requirements, you can generate the report from Software Product Compatibility Reports. Hardware and software requirements are also documented in the zPDT® Guide and Reference.
- Storage server requirements
- Source environments
- Target environments
Storage server requirements
To install and run ZD&T Enterprise Edition, you must set up a storage server to host the Enterprise Edition artifacts, such as z system volumes, data sets, and Enterprise Edition metadata. To transfer volume image files to or from the storage server, you must use SFTP as the transfer method.
- Disk space
- Sufficient space to hold the numerous, potentially large files for extracted IBM® Z volumes.
- Sufficient space to hold multiple Extended ADCD z/OS® distributions.
- Software requirements
- A running SFTP server.
- An open firewall port for SFTP connections (see the example below).
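For example, if the storage server runs RHEL with firewalld, the SSH service port that SFTP uses can be opened as follows. This is a sketch that assumes a firewalld-based setup; your storage server's firewall tooling may differ.
# Illustrative: open the SSH service port (used by SFTP) with firewalld
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload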
z/OS system requirements
- Supported z/OS versions: V2.4 and V2.5.
- You must request the fix for APAR OA62239 and apply it to your source z/OS environment before extraction. For more information, see IBM Fix Category Values and Descriptions and Checklist for applying an APAR or PTF.
- If you want to extract volumes from z/OS systems, the following requirements must be met.
Required
- An SSH server must be running and accessible by the system that runs Enterprise Edition.
- The SFTP client must be able to connect to the Enterprise Edition storage server.
- To use SFTP, Java™ 1.6 or later must be installed, and the PATH that is specified in $HOME/.profile must point to the bin directory of the Java installation (see the example at the end of this section).
- Make sure to grant access to each volume or data set that is extracted. For more information, see Creating components from IBM Z® mainframe volumes and Creating components from IBM Z mainframe data sets.
- Make sure to grant READ access to the DFSMSdss program ADRDSSU.
Optional
- Configure zEnterprise® Data Compression (zEDC) if it is available. Grant READ access to the resource FPZ.ACCELERATOR.COMPRESSION in SAF class FACILITY to the user ID that is used by Enterprise Edition.
- Grant READ access to resource STGADMIN.ADR.DUMP.CNCURRNT in SAF class FACILITY.
- If you want to extract volumes from an existing z/OS instance, Java must be installed, and the user who creates volume components must have access to Java after login.
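The following sketch shows what the required $HOME/.profile entry on the source z/OS system might look like. The Java installation directory below is an assumption and varies by system.
# Illustrative $HOME/.profile entry; the Java path below is an assumption
export JAVA_HOME=/usr/lpp/java/J8.0_64
export PATH=$JAVA_HOME/bin:$PATH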
Db2 extraction requirements
- Db2 REXX Language Support (DSNREXX).
- Stored Procedure DSNWZP for using the Db2 Admin Tool.
- Stored Procedure DSNUTILU for running Db2 online utilities.
- DSNTIJTM
- This job can be used to bind DSNREXX.
- DSNTIJRT
- This job can be used to define the Db2 routines DSNUTILU and DSNWZP.
- DSNTIJRW
- This job can be used to define and optionally activate Workload Manager application environments that are needed for DSNUTILU, DSNWZP, and other Db2 WLM environments.
- User Access
To use the user ID that is specified on the source system to extract Db2 data, you must ensure that the user ID has the following access.
- Read access to the Db2 catalog tables.
- Read access to the tables that are selected for an extraction.
- Unload access to the tables that are selected for an extraction.
- Authority to stop Db2 UNLOAD utilities.
- If you need to use the Db2 Admin Tool, the user ID that runs the extraction must have Db2 or RACF® access to run the DDL Generation Plan, for example, ADB2GEN.
- System Libraries
REXX.SEAGALT or REXX.SEAGLPA must be in the system search order, that is, Linklist or LPA.
- Db2 Admin Tool
To obtain the source database DDL, the Db2 Admin Tool must be installed and available. If the Db2 Admin Tool is not available, you must supply and verify all DDL source. The database DDL that is created on the target system must be compatible with the Db2-supplied sample program DSNTEP2.
ADCD z/OS V2R5 December Edition of 2021 is distributed with Db2 V12 at Function Level 506 (V12R1M506). Beginning with Function Level 504, applications that are bound at that function level or higher no longer support segmented (non-UTS) and partitioned (non-UTS) table spaces. If Db2 table extraction is performed from a source system where table spaces are of deprecated types, where the Db2 Admin Tool is used to create DDL for the component, and where Db2 and the Db2 Admin Tool are not at levels that support Function Level 504 or higher, the provisioning of the Db2 component might fail when the Db2 objects are created.
Linux target environment requirements
- If you choose to install the required Linux packages during the provisioning, the software repository needs to be available and accessible by the target environments.
- A Red Hat® software repository for 'yum' needs to be available and accessible by the target environments.
- An Ubuntu software repository for 'apt-get' needs to be available and accessible by the target environments.
- An SSH server must be running on the target environments and accessible by the system that runs Enterprise Edition.
- Root permission is needed for the users who are responsible for provisioning.
- An extra 100 MB of disk space is needed for the /root folder in the target environment, because the loadparm.txt file that is generated for a script to modify z/OS parameters might otherwise cause a space problem.
- Starting with V14.0.1, sha256sum must be installed. It can be installed with the coreutils package (see the example below).
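For example, the package can be installed with one of the following commands, depending on the operating system of the target environment.
# RHEL
yum -y install coreutils
# Ubuntu
apt-get -y install coreutils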
- Accessing the software repository to run YUM or apt-get commands
- Users and group settings
- Sudo access configuration
- Network configuration
Accessing the software repository to run YUM or apt-get commands
Make sure that you have access to a software repository so that you can run YUM commands on RHEL machines or apt-get commands on Ubuntu machines. The ZD&T installer installs all required packages. However, if you do not want the ZD&T installer to install the packages that are listed below, you need to install them before you start the ZD&T installer.
YUM commands on the RHEL operating system
yum -y install iptables
yum -y install libstdc++.i686
yum -y install perl
yum -y install zip
yum -y install unzip
yum -y install gzip
yum -y install bc
Additional YUM commands on the RHEL 8 operating system
yum -y install ncurses-libs
yum -y install libnsl
apt-get commands on the Ubuntu operating system
apt-get -y install iptables
dpkg --add-architecture i386
apt-get -y update
apt-get -y install libc6:i386 libncurses5:i386 libstdc++6:i386 lib32z1 lib32stdc++6
apt-get -y install perl
apt-get -y install zip
apt-get -y install unzip
apt-get -y install gzip
apt-get -y install bc
apt-get -y install libasound2
apt-get -f install
Note: The 'nc' command is not available by default on RHEL 7.4 and 7.5. Because the 'nc' command is required to pass the validation before you start a provisioning to the target environment, the missing command might cause the connection to fail. To install the command, run the following command.
yum -y install nc
Users and group settings
Before you provision instances from created images, make sure to create a new group 'zpdt' in the target environment if the group does not exist (see the sketch after this list).
- If you use the root user ID to provision instances, create a user ID 'ibmsys1' if the user ID does not exist, and assign the user ID 'ibmsys1' to the group 'zpdt'.
- If you use a non-root user ID to provision instances, assign the user ID to the group 'zpdt'.
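A minimal sketch of these settings, run as root on the target environment. The commands assume that the group and user do not yet exist; 'myuser' is a placeholder for a non-root user ID.
# Create the zpdt group if it does not exist
groupadd zpdt
# For root-user provisioning: create ibmsys1 and assign it to zpdt
useradd -m -g zpdt ibmsys1
# For non-root provisioning: add the provisioning user to zpdt
usermod -aG zpdt myuser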
- Sudo access configuration
The term sudo stands for super user do. The sudoers file contains the configuration for the operating system's sudo settings. This file is typically at /etc/sudoers. For more information about the sudoers file format, see the Sudoers Manual.
The following code shows an example of a sudoers entry.
ibmsys1 ALL = (root) NOPASSWD: ALL
In this example, user ibmsys1 can run any command on any host as the root user without being prompted for a password.
During the provisioning, Enterprise Edition runs several scripts that require root access. For security reasons, Enterprise Edition also changes the ownership of the scripts to the root user ID. The user ID that is used for the provisioning needs permission to run the scripts and to change their ownership. The scripts are listed below.
[deployment directory]/zdt/zdtInstall/zdt_install_product_byRoot.sh
[deployment directory]/zdt/zdtInstall/zdt_modify_files_byRoot.sh
[deployment directory]/zdt/zdtInstall/zdt_install_dependencies_byRoot.sh (optional)
[deployment directory]/zdt/zdtInstall/zdt_config_user_byRoot.sh (optional)
[deployment directory]/zdt/zdtInstall/zdt_config_network_byRoot.sh (optional)
[deployment directory]/zdt/zdtInstall/zdt_cleanup_byRoot.sh (optional)
The deployment directory is an optional input value that can be specified from the web user interface or REST API. By default, the deployment directory is /home/ibmsys1 if you log in as the root user, and /home/[userid] if you log in as a non-root user.
The following code shows an example of the sudoers entry. The user ID that is used is ibmsys1, and the deployment directory is /home/ibmsys1.
ibmsys1 ALL=(root) NOPASSWD: /bin/chown root /home/ibmsys1/zdt/zdtInstall/zdt_modify_files_byRoot.sh, \
  /home/ibmsys1/zdt/zdtInstall/zdt_modify_files_byRoot.sh, \
  /home/ibmsys1/zdt/zdtInstall/zdt_install_dependencies_byRoot.sh, \
  /home/ibmsys1/zdt/zdtInstall/zdt_config_user_byRoot.sh, \
  /home/ibmsys1/zdt/zdtInstall/zdt_install_product_byRoot.sh, \
  /home/ibmsys1/zdt/zdtInstall/zdt_config_network_byRoot.sh, \
  /home/ibmsys1/zdt/zdtInstall/zdt_cleanup_byRoot.sh
If you use a privilege management tool other than sudo, you need to complete the equivalent configuration for that tool.
- Network configuration
To make other systems communicate with your emulated z/OS, you need to configure the emulated environment so that it can be accessed. The only requirement is to route a port number to port 22 on the emulated z/OS. The port number to be routed is the one that you will specify when you configure the source system on the ZD&T web server.
To configure the network, complete the following steps:
- Back up the current iptables rules (see the sketch after the note below).
- Run the following commands. For example,
iptables --table nat --append POSTROUTING --out-interface eth1 -j MASQUERADE
iptables --table filter --append FORWARD --in-interface tap0 -j ACCEPT
iptables --table filter -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A PREROUTING --table nat -i eth1 -p tcp --dport 2022 -j DNAT --to 10.1.1.2:22
iptables -A FORWARD -p tcp -d 10.1.1.2 --dport 2022 -j ACCEPT
- Run the command echo 1 > /proc/sys/net/ipv4/ip_forward.
Note:
- eth1 is an example of the network interface name. To find available network interfaces, run a command such as ifconfig or ip -o address show.
- 2022 is the port number that will be routed to port 22.
- 10.1.1.2 is the IP address of network interface tap0, which can be found by running the find_io command.
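For the backup in the first step, the standard iptables-save and iptables-restore utilities can be used. This is a minimal sketch; the backup file path is arbitrary.
# Save the current rules before making changes (path is illustrative)
iptables-save > /root/iptables.rules.bak
# Restore the saved rules if the new configuration causes problems
iptables-restore < /root/iptables.rules.bak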
Docker target environment requirements
In a normal Docker setup, a container image is pushed to a remotely accessible Docker registry. Then, the container image can be pulled by instances that need to run the image by using Docker command-line utilities such as docker pull or docker run. Enterprise Edition does not require a remote registry. Instead, Enterprise Edition does the following:
- Use HTTPS that is protected by using the TLS cryptographic protocol to communicate to the Docker daemon.
- Load the image directly to the local Docker image registry of the system that is running the container.
- Create the container from the image in privileged mode with a small set of port bindings and a Docker volume that contains all the persistence data, such as the volumes that run on the emulator (see the sketch after this list).
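As an illustration of the last item, starting such a container might look like the following sketch. The image name, volume name, and host ports are placeholders, not the actual names that Enterprise Edition uses; the host ports follow the mapping scheme that is described later in this topic.
# Illustrative only: image, volume, and host ports are placeholders
docker run -d --privileged \
  -p 40000:3270 -p 40021:21 -p 40022:22 -p 40023:23 -p 40099:8443 \
  -v zdt-data:/home/ibmsys1 \
  zdt-emulator-image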
To complete the task, the following initial setup needs to be done before you create a provisioning.
- Configuring the Docker daemon for HTTPS communication
To configure the Docker daemon for HTTPS communication, refer to the instructions in Protect the Docker daemon socket.
After the Docker daemon is configured for HTTPS communication by using the TLS cryptographic protocol, save the files for the CA certificate (ca.pem), client certificate (cert.pem), and client key (key.pem).
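As a sketch, a daemon that is protected in this way is typically started with TLS options like the following, which follow the Protect the Docker daemon socket instructions; the server certificate and key file names are examples and might differ in your setup.
# Start the Docker daemon with TLS verification (file names are examples)
dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H=0.0.0.0:2376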
- Planning the port mapping for Docker containers
Each Docker environment supports a maximum of five containers. In a Docker environment, each container runs its own emulated z/OS instance and is allocated 100 ports for clients to access services on that instance. The entire set of ports for all the containers (up to a maximum of five) must be contiguous and specified in intervals of one hundred. The ephemeral port range is recommended.
For each container, the following port mapping is in place:
- xxy00 maps to port 3270 in the container
- xxy21 maps to port 21 (FTP) in the container
- xxy22 maps to port 22 (SSH) in the container
- xxy23 maps to port 23 (Telnet) in the container
- xxy99 maps to port 8443 (ZD&T Instance controller) in the container
Here, xx represents the thousands digits of the port number, and y represents the hundreds digit.
For example, if you plan to use 40000 as the start port and provision two Docker containers, the first container that is provisioned will use ports 40000 - 40099 and the second container will use ports 40100 - 40199 (see the helper sketch after the port mapping lists below).
The first container that is provisioned has the ports 40000 - 40099 from the hosting system allocated to it and has the following port mappings:
- 40000 → 3270
- 40021 → 21
- 40022 → 22
- 40023 → 23
- 40099 → 8443
The second container will have the following port mappings:
- 40100 → 3270
- 40121 → 21
- 40122 → 22
- 40123 → 23
- 40199 → 8443
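The port block arithmetic can be summarized with a small helper script. This is an illustrative sketch only, not part of the product.
#!/bin/bash
# Print the 100-port block for each container, given a start port
START=40000   # start port; must begin an interval of one hundred
COUNT=2       # number of containers, up to a maximum of five
for i in $(seq 0 $((COUNT - 1))); do
  low=$((START + 100 * i))
  echo "container $((i + 1)): host ports $low-$((low + 99)) (3270 -> $low, SSH -> $((low + 22)), controller -> $((low + 99)))"
done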
Red Hat OpenShift requirements
- Red Hat OpenShift®
Sandbox is supported in customer-managed clusters on x86_64 architecture for OpenShift 4.8 to 4.11.
On IBM Cloud® managed Classic or VPC OpenShift, Sandbox is supported only for x86_64 architecture clusters that run OpenShift 4.11 on RHEL8 nodes. For OpenShift 4.8 to 4.10, use Sandbox 2.2.
Note: Maintenance support for OpenShift 4.8 will end on January 27, 2023, and Sandbox will remove support for 4.8 and 4.9 in an upcoming release, so OpenShift 4.10 is recommended.
- SecurityContextConstraints
The Sandbox Operator and wazi-sandbox container use a privileged security context constraint (SCC).
Figure 1. Custom SecurityContextConstraints definition
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- '*'
allowedUnsafeSysctls:
- '*'
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: 'wazi-sandbox-operator allows access to all privileged and host features and the ability to run as any user, any group, any fsGroup, and with any SELinux context.'
  name: wazi-sandbox-operator
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities: null
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- '*'
supplementalGroups:
  type: RunAsAny
users:
- wazi-sandbox-operator
volumes:
- '*'
The wazi-sandbox-volume-copy container uses the anyuid SCC or a modified definition.
Figure 2. Custom SecurityContextConstraints definition
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  ranges:
  - max: 2500
    min: 2105
  type: MustRunAs
groups: []
kind: SecurityContextConstraints
metadata:
  name: wazi-sandbox-volume-copy
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
runAsUser:
  type: MustRunAsRange
  uidRangeMax: 2500
  uidRangeMin: 2105
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  ranges:
  - max: 2500
    min: 2105
  type: MustRunAs
users:
- wazi-sandbox-volume-copy
volumes:
- '*'
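If the SCCs need to be created manually in your cluster, standard oc commands can be used, as in the following sketch; the YAML file names are placeholders.
# Create the SCCs from saved YAML definitions (file names are placeholders)
oc apply -f wazi-sandbox-operator-scc.yaml
oc apply -f wazi-sandbox-volume-copy-scc.yaml
# Verify that the SCCs exist
oc get scc wazi-sandbox-operator wazi-sandbox-volume-copy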
- Required resources
The required cluster resources for Sandbox depend on the following factors:
- The cluster storage
- The number of expected sandbox instances
- The resource requirements from the emulator machine characteristics and device map file (devmap) for the sandbox instances
- The size of the z/OS volume files for the sandbox instance
Because the compute resources for the cluster storage depend on the specific storage driver and configuration, those compute resources are excluded from the calculations here. The cluster must have sufficient extra compute resources to manage the required storage. Contact your cluster administrator or cloud provider if you need help with these requirements.
Overall, the cluster sizing includes the following requirements:
- The base resource requirements for control nodes
- Variable requirements for worker nodes that satisfy the scheduling capacity for the expected number of instances, including storage and the compute requirements for that storage
Note: When you plan for persistent volume storage, if you intend to expand z/OS storage later, for example by adding extra volumes, account for that as well.
The following tables assume that the devmap specifies P processors, including both CP and zIIP, and M GiB of memory, and that the z/OS volume files are V GiB in total.
Table 1. Cluster base resources
                 Count   Memory (GiB)
Control nodes    3       16/node

Table 2. Scheduling capacity for worker nodes
Software                    Memory (GiB)   CPU (cores)   Ephemeral storage (GiB)   Persistent storage (GiB)
1 single sandbox instance   M + 2          P + 1         2                         V * 1.125

A minimal Starter profile with a single instance that uses the included Extended ADCD image needs M = 8 GiB, P = 3, and V = 270 GiB. So the cluster requires at least one worker node with a capacity of at least 10 GiB of memory, 4 cores, 2 GiB of ephemeral storage, and approximately 304 GiB of persistent storage.
To scale up from the Starter profile, scale up worker nodes accordingly. For example, to run 5 instances with Extended ADCD, the cluster requires worker nodes with a total capacity of at least 50 GiB of memory, 20 cores, 480 GiB of ephemeral storage, 1520 GiB of persistent storage, and also any extra compute capacity to support that storage. This might be 5 nodes, each with a capacity for a single sandbox instance, or 2 nodes with a capacity for 3 instances each, or a single large node with a capacity for all 5.
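Restating Table 2 for capacity planning: for $N$ sandbox instances, the worker nodes need at least the following aggregate scheduling capacity (per-instance ephemeral storage is given in Table 2).
\[
\text{Memory} = N\,(M + 2)\ \text{GiB}, \qquad
\text{CPU} = N\,(P + 1)\ \text{cores}, \qquad
\text{Persistent storage} = 1.125\,N\,V\ \text{GiB}
\]
With the Starter profile values ($M = 8$, $P = 3$, $V = 270$) and $N = 5$, this gives 50 GiB of memory, 20 cores, and approximately 1520 GiB of persistent storage, which matches the example above.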
Sandbox requires the IBM License Service. To enable this, Sandbox installs the IBM Cloud Pak® foundational services when it is installed, and creates the IBM License Service automatically if it is not already in the cluster.