IBM Cloud Private for Linux® on IBM® Z and LinuxONE technology preview

With IBM Cloud Private for Linux® on IBM® Z and LinuxONE, you can install an IBM Cloud Private cluster on IBM® Z and LinuxONE.

Architecture

The IBM® Cloud Private cluster on IBM® Z and LinuxONE has four main classes of nodes: boot, master, worker, and proxy. You can also add an optional management node. Vulnerability Advisor (VA) and dedicated etcd nodes are not supported in this technology preview release.

For the details of each type of cluster node, see IBM® Cloud Private architecture.

The IBM® Cloud Private cluster on IBM® Z and LinuxONE resembles the following architecture diagram:

IBM Cloud Private for Linux® on IBM® Z and LinuxONE architecture

If you use a management node in your cluster, the architecture resembles the following diagram:

IBM Cloud Private for Linux® on IBM® Z and LinuxONE architecture with a management node

Note: For a complete list of the supported features in IBM Cloud Private for Linux® on IBM® Z and LinuxONE, see the Supported features section.

Hardware requirements

Ensure that you review the following table and verify that you meet the minimum hardware requirements. Note that master and management nodes require more memory than proxy and worker nodes.

Table 1. Minimum hardware requirements for a multi-node cluster on IBM® Z and LinuxONE
Requirement | Boot node | Master node | Proxy node | Worker node | Management node
Number of hosts | 1 | 1 | 1 or more | 1 or more | 1 or more
Cores (IFLs) | 1 | 2 | 1 or more | 1 or more | 1 or more
CPU | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz
RAM | >= 4 GB | >= 16 GB | >= 4 GB | >= 4 GB | >= 16 GB
Free disk space to install | >= 100 GB | >= 200 GB | >= 150 GB | >= 150 GB | >= 200 GB

For the disk size requirements, see Disk space requirements.
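
You can spot-check these minimums on each node with standard Linux commands. The following is a quick sketch, not a complete capacity check; adjust the file system path to the location where you plan to install:

  lscpu | grep '^CPU(s):'   # number of cores (IFLs) available to Linux
  free -g                   # total and available memory in GB
  df -h /opt                # free disk space on the installation file system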

Supported operating systems and platforms

Table 2. Supported operating systems
Platform | Operating system
Linux® on IBM® Z and LinuxONE | Red Hat Enterprise Linux (RHEL) 7.3, 7.4, and 7.5; Ubuntu 18.04 LTS and 16.04 LTS; SUSE Linux Enterprise Server (SLES) 12 SP3

Note:

IBM Cloud Private components are distributed as a set of Docker images that incorporate their own operating system dependencies. It is recommended that you use one of the certified operating systems that are listed in the preceding table. However, IBM Cloud Private can run on any Linux operating system that supports Docker 1.12 or later.
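
For example, you can confirm the operating system and architecture of a node before you begin; the os-release fields vary slightly by distribution:

  cat /etc/os-release   # distribution name and version
  uname -m              # prints s390x on IBM Z and LinuxONE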

Supported node types

Table 3. Supported node type by platforms
Node type | IBM® Z and LinuxONE (s390x)
Boot | Y
Master | Y
Management | Y
Proxy | Y
Worker | Y
VA | N

Supported features

Table 4. Supported features by platform
Feature | Linux® on IBM® Z and LinuxONE | Notes
Cloud Foundry | N |
Cloud Automation Manager | N* | *IBM Cloud Automation Manager can manage IBM z/VM 6.4 virtual machines by using the z/VM Cloud Manager Appliance.
Installation | Y | Installation is supported on master nodes or dedicated boot nodes only.
Management console | Y | The management console runs on master nodes only.
ELK | Y* | *ELK runs on master or management nodes only; data from worker nodes is collected by using Filebeat.
Monitoring (Prometheus and Grafana) | Y* | *Prometheus and Grafana run on master or management nodes only; data from worker nodes is collected by using the node exporter.
Security and RBAC | Y |
FIPS mode | N |
Vulnerability Advisor | N |
IPsec | N |
IPVS | N |
Networking: Calico | Y |
Networking: NSX-T | N |
Storage: GlusterFS | N |
Storage: VMware | N |
Storage: Minio | N |
Volume encryption | N |
Metering | Y |
Helm repo or API | Y |
Nvidia GPU support | N |
Containerd | N |
External load balancer | N |
HPA | N |
Multicloud Manager | N |
IBM Cloud Private-CE (Community Edition) | N |

Note: IBM® Cloud Private supports new versions of the supported operating systems, Kubernetes, Docker, and other dependent infrastructure after they are released and fully tested by the IBM® Cloud Private team.

Installation

Installation of IBM Cloud Private for Linux® on IBM® Z and LinuxONE consists of six main steps:

  1. Install Docker for your boot node only
  2. Set up the installation environment
  3. (Optional) Customize your cluster
  4. Set up Docker for your cluster nodes
  5. Deploy the environment
  6. Verify the installation

When the installation completes, access your cluster and complete the post-installation tasks.

Note: If you encounter errors during installation, see Troubleshooting install.

Step 1: Install Docker for your boot node only

The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.

You need a version of Docker that is supported by IBM Cloud Private installed on your boot node. See Supported Docker versions.

To install Docker, see Manually installing Docker.
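
After you install Docker, you can confirm that the daemon is running and check its version against the supported list. A minimal check:

  sudo systemctl is-active docker                      # prints active when the daemon is running
  sudo docker version --format '{{.Server.Version}}'   # compare with Supported Docker versions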

Step 2: Set up the installation environment

  1. Log in to the boot node as a user with root permissions.
  2. Download the installation files from the IBM Early Program website.

    • For an IBM Cloud Private for Linux® on IBM® Z and LinuxONE cluster, download the ibm-cloud-private-s390x-3.1.1.tar.gz file.
  3. Extract the images and load them into Docker. Extracting the images might take a few minutes.

    tar xf ibm-cloud-private-s390x-3.1.1.tar.gz -O | sudo docker load
    
  4. Create an installation directory to store the IBM Cloud Private configuration files and change to that directory. For example, to store the configuration files in /opt/ibm-cloud-private-3.1.1, run the following commands:

    sudo mkdir /opt/ibm-cloud-private-3.1.1;
    cd /opt/ibm-cloud-private-3.1.1
    
  5. Extract the configuration files from the installer image.

    sudo docker run -v $(pwd):/data -e LICENSE=accept \
    ibmcom/icp-inception-s390x:3.1.1-ee \
    cp -r cluster /data
    

    A cluster directory is created inside your installation directory. For example, if your installation directory is /opt/ibm-cloud-private-3.1.1, the /opt/ibm-cloud-private-3.1.1/cluster folder is created. For an overview of the cluster directory structure, see Cluster directory structure.

  6. (Optional) You can view the license file for IBM Cloud Private, where $LANG is a supported language format.

    sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-s390x:3.1.1-ee
    

     For example, to view the license in Simplified Chinese, run the following command:

    sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-s390x:3.1.1-ee
    

    For a list of supported language formats, see Supported languages.

  7. Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following processes: share SSH keys among the cluster nodes or configure password authentication for the cluster nodes. An example of generating and sharing an SSH key follows.
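
     The following sketch assumes the default key location and that root login is enabled on the cluster nodes; node_ip is a placeholder for the IP address of each node:

    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""   # generate a key pair without a passphrase
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@node_ip      # repeat for every other node in the cluster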

  8. Add the IP address of each node in the cluster to the /<installation_directory>/cluster/hosts file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups. A sketch of a hosts file follows.
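
     For illustration, a minimal hosts file for one master node that also serves as the proxy, plus two worker nodes, might resemble the following sketch. The IP addresses are placeholders, and an optional [management] section follows the same pattern:

    [master]
    10.0.0.1

    [worker]
    10.0.0.2
    10.0.0.3

    [proxy]
    10.0.0.1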

  9. If you use SSH keys to secure your cluster, in the /<installation_directory>/cluster folder, replace the ssh_key file with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command:

    sudo cp ~/.ssh/id_rsa ./cluster/ssh_key
    

    In this example, ~/.ssh/id_rsa is the location and name of the private key file.

  10. Move the image files for your cluster to the /<installation_directory>/cluster/images folder.

    1. Create an images directory:

      mkdir -p cluster/images;
      
    2. If your cluster contains s390x nodes, place the s390x package in the images directory:

      sudo mv /<path_to_installation_file>/ibm-cloud-private-s390x-3.1.1.tar.gz  cluster/images/
      

    In the command, path_to_installation_file is the path to the images file.

Step 3: Customize your cluster

  1. You can set a variety of optional cluster customizations that are available in the /<installation_directory>/cluster/config.yaml file. See Customizing the cluster with the config.yaml file. For additional customizations, you can also review Customizing your installation.
  2. In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the config.yaml file:

    • For IBM Cloud Private:
      cluster_lb_address: <external address>
      proxy_lb_address: <external address>
      

    The <external address> value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the proxy_lb_address parameter is required only for proxy HA environments. A sketch of such a config.yaml fragment follows.
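
    For reference, a small config.yaml fragment that combines these settings with another common customization might resemble the following sketch. The values are placeholders; verify the parameter names against Customizing the cluster with the config.yaml file:

      # Placeholder values; replace them with addresses from your environment
      cluster_lb_address: 198.51.100.10     # external address for the management services
      proxy_lb_address: 198.51.100.11       # required only for proxy HA environments
      default_admin_password: <password>    # replace with a strong administrator password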

Step 4: Set up Docker for your cluster nodes

Cluster nodes are the master, worker, proxy, and management nodes. See Architecture.

You need a version of Docker that is supported by IBM Cloud Private installed on your cluster node. See Supported Docker versions.

If you do not have a supported version of Docker installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.

To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
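
Before you deploy, you can check which cluster nodes already have Docker installed by querying each node over SSH. The following sketch assumes the SSH setup from Step 2, is run from your installation directory, and extracts the node IP addresses from the hosts file:

  for ip in $(grep -Eo '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' cluster/hosts | sort -u); do
    echo -n "$ip: "
    ssh -o ConnectTimeout=5 root@$ip docker --version 2>/dev/null || echo "Docker not installed"
  done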

Step 5: Deploy the environment

  1. Change to the cluster folder in your installation directory.

     cd ./cluster
    
  2. Deploy your environment. Depending on your options, you might need to add more parameters to the deployment command.

    • If you specified the offline_pkg_copy_path parameter in the config.yaml file, add the -e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path> option to the deployment command, where <offline_pkg_copy_path> is the value that you set in the config.yaml file.

    • By default, the command to deploy your environment deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to be deployed at a time by using the argument -f <number of nodes to deploy> with the command, as shown in the example after the deployment command.

      To deploy your environment:

      • For IBM Cloud Private for Linux® on IBM® Z and LinuxONE, run this command:
          sudo docker run --net=host -t -e LICENSE=accept \
          -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.1.1-ee install
        

      Note: If you encounter errors during deployment, run the deployment command again with the -v option to collect more detailed error messages. If you continue to receive errors during the rerun, run the following command to check the health of the cluster and collect the log files:

      sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.1.1-ee healthcheck
      

      The log files are located in the cluster/logs directory.

Step 6: Verify the status of your installation

If the installation succeeded, the access information for your cluster is displayed. The master_ip is the IP address of the master node for your cluster.

  UI URL is https://master_ip:8443 , default username/password is admin/admin

For IBM Cloud Private: If you specified a cluster_lb_address value in your config.yaml file, the IP address in this message is the cluster_lb_address address. If you did not specify that value, in HA clusters the IP address is the cluster_vip address that you specified, and in standard clusters it is the IP address of the master node.

Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster. Before you install IBM Cloud Private, set the cluster_lb_address parameter in the config.yaml file to that public IP address.

Access your cluster

Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.
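
As a quick check from a terminal, you can confirm that the management console is responding before you open it in a browser. In this sketch, master_ip is a placeholder for your cluster address, and -k skips certificate validation because the console typically presents a self-signed certificate:

  curl -k -s -o /dev/null -w "%{http_code}\n" https://master_ip:8443   # a 2xx or 3xx status indicates the console is reachable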

Post-installation tasks

  1. Restart your firewall. A sample command for RHEL follows this list.
  2. Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
  3. Back up the boot node. Copy your /<installation_directory>/cluster directory to a secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
  4. Install other software from your bundle. See Installing IBM software onto IBM Cloud Private.
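
For example, on a RHEL boot node that uses firewalld, restarting the firewall and backing up the cluster directory might look like the following sketch; the backup destination is a placeholder:

  sudo systemctl restart firewalld                                     # restart the firewall on RHEL
  sudo cp -a /opt/ibm-cloud-private-3.1.1/cluster /backup/icp-cluster  # copy the cluster directory to a secure location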

Known issues and limitations

IBM Cloud Private on Linux® on IBM® Z and LinuxONE is a technology preview release and has the following limitations: