Installing IBM® Cloud Private (Cloud Native or Enterprise)
You can install a standard or high availability (HA) cluster for either your IBM Cloud Private Cloud Native or Enterprise bundle.
- Before you install IBM Cloud Private, prepare your cluster. See Configuring your cluster.
- If your master or proxy node uses a SUSE Linux Enterprise Server (SLES) operating system, you must disable all firewalls in your cluster during installation.
Installing IBM Cloud Private involves five main steps, followed by verification:
- Install Docker for your boot node only
- Set up the installation environment
- (Optional) Customize your cluster
- Set up Docker for your cluster nodes
- Deploy the environment
- Verify the installation
When the installation completes, access your cluster and complete postinstallation tasks.
Note: If you encounter errors during installation, see Troubleshooting install.
Step 1: Install Docker for your boot node only
The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.
You need a version of Docker that is supported by IBM Cloud Private installed on your boot node. See Supported Docker versions.
To install Docker, see Manually installing Docker.
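Before proceeding, you can confirm the Docker version on the boot node and compare it against the supported list. The following is a minimal sketch of such a check; the minimum major version used here is an assumption, so substitute the values from Supported Docker versions:

```shell
# Sketch of a version gate; MIN_MAJOR is an assumed placeholder, not an
# official requirement -- check "Supported Docker versions" for real values.
MIN_MAJOR=17

docker_version_ok() {
  # $1 is a Docker version string such as "18.03.1-ce"
  major="${1%%.*}"
  [ "$major" -ge "$MIN_MAJOR" ]
}

# In practice you would feed in the live version, for example:
#   docker_version_ok "$(docker version --format '{{.Server.Version}}')"
if docker_version_ok "18.03.1-ce"; then
  echo "supported"
fi
```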
Step 2: Set up the installation environment
- Log in to the boot node as a user with root permissions.
- Download the installation files for IBM Cloud Private. You must download the correct file or files for the type of nodes in your cluster. You can obtain these files from the IBM Passport Advantage® website.
  - For a Linux® x86_64 cluster, download the `ibm-cloud-private-x86_64-3.1.0.tar.gz` file.
  - For a Linux® on Power® (ppc64le) cluster, download the `ibm-cloud-private-ppc64le-3.1.0.tar.gz` file.
  - For a cluster that uses IBM® Z worker and proxy nodes, download the `ibm-cloud-private-s390x-3.1.0.tar.gz` file.
- Extract the images and load them into Docker. Extracting the images might take a few minutes.
  - For Linux® x86_64, run this command:
    tar xf ibm-cloud-private-x86_64-3.1.0.tar.gz -O | sudo docker load
  - For Linux® on Power® (ppc64le), run this command:
    tar xf ibm-cloud-private-ppc64le-3.1.0.tar.gz -O | sudo docker load
- Create an installation directory to store the IBM Cloud Private configuration files, and change to that directory. Note: The installation directory must have at least 50 GB of available disk space for the installation files. For example, to store the configuration files in `/opt/ibm-cloud-private-3.1.0`, run the following commands:
    sudo mkdir /opt/ibm-cloud-private-3.1.0; cd /opt/ibm-cloud-private-3.1.0
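You can check the 50 GB requirement before creating the directory. This is a rough sketch that assumes POSIX `df` and `awk`; it checks `/` so that it runs anywhere, but you should point it at the filesystem that will hold your installation directory:

```shell
# Check that the target filesystem has at least 50 GB free.
required_kb=$((50 * 1024 * 1024))   # 50 GB expressed in 1K blocks
avail_kb=$(df -Pk / | awk 'NR==2 {print $4}')

if [ "$avail_kb" -ge "$required_kb" ]; then
  echo "enough space"
else
  echo "need at least 50 GB free"
fi
```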
- Extract the configuration files from the installer image.
  - For Linux® x86_64, run this command:
    sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-amd64:3.1.0-ee cp -r cluster /data
  - For Linux® on Power® (ppc64le), run this command:
    sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-ppc64le:3.1.0-ee cp -r cluster /data
  A `cluster` directory is created inside your installation directory. For example, if your installation directory is `/opt/ibm-cloud-private-3.1.0`, the `/opt/ibm-cloud-private-3.1.0/cluster` folder is created. For an overview of the cluster directory structure, see Cluster directory structure.
- (Optional) View the license file for IBM Cloud Private.
  - For Linux® x86_64, run this command:
    sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-amd64:3.1.0-ee
  - For Linux® on Power® (ppc64le), run this command:
    sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-ppc64le:3.1.0-ee
  Where `$LANG` is a supported language format. For example, to view the license in Simplified Chinese on Linux® x86_64, run the following command:
    sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-amd64:3.1.0-ee
  For a list of supported language formats, see Supported languages.
- Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following processes:
  - Set up SSH in your cluster. See Sharing SSH keys among cluster nodes.
  - Set up password authentication in your cluster. See Configuring password authentication for cluster nodes.
- Add the IP address of each node in the cluster to the `/<installation_directory>/cluster/hosts` file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups. Note: Worker nodes can support mixed architectures. You can add worker nodes that run on Linux® x86_64, Linux® on Power® (ppc64le), and IBM® Z platforms into a single cluster.
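For illustration, the `hosts` file groups node IP addresses by role in INI-style sections. A minimal sketch for a small cluster follows; the addresses are placeholders, and the exact section names and options are defined in Setting the node roles in the hosts file:

```
[master]
10.0.0.1

[proxy]
10.0.0.1

[worker]
10.0.0.2
10.0.0.3

[management]
10.0.0.4
```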
- If you use SSH keys to secure your cluster, in the `/<installation_directory>/cluster` folder, replace the `ssh_key` file with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command:
    sudo cp ~/.ssh/id_rsa ./cluster/ssh_key
  In this example, `~/.ssh/id_rsa` is the location and name of the private key file.
- For IBM Cloud Private only: Move the image files for your cluster to the `/<installation_directory>/cluster/images` folder.
  - Create an images directory:
    sudo mkdir -p cluster/images
  - If your cluster contains x86_64 nodes, place the x86_64 package in the images directory:
    sudo mv /<path_to_installation_file>/ibm-cloud-private-x86_64-3.1.0.tar.gz cluster/images/
  - If your cluster contains ppc64le nodes, place the ppc64le package in the images directory:
    sudo mv /<path_to_installation_file>/ibm-cloud-private-ppc64le-3.1.0.tar.gz cluster/images/
  - If your cluster contains s390x nodes, place the s390x package in the images directory:
    sudo mv /<path_to_installation_file>/ibm-cloud-private-s390x-3.1.0.tar.gz cluster/images/
  In these commands, `path_to_installation_file` is the path to the image file.
- (Optional) If you plan to add IBM® Z worker and proxy nodes to your cluster, run this command:
    sudo mv /<path_to_installation_file>/ibm-cloud-private-s390x-3.1.0.tar.gz cluster/images/
Step 3: Customize your cluster
- Set up resource limits for proxy nodes. See Configuring process resource limit on proxy nodes.
- You can also set a variety of optional cluster customizations that are available in the `/<installation_directory>/cluster/config.yaml` file. See Customizing the cluster with the config.yaml file. For additional customizations, you can also review Customizing your installation.
- In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following lines to the `config.yaml` file:
    cluster_lb_address: <external address>
    proxy_lb_address: <external address>
  The `<external address>` value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the `proxy_lb_address` parameter is required for proxy HA environments only.
- For HA environments, see HA settings.
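As an illustration of HA-related customization, an HA cluster's `config.yaml` typically also defines virtual IP parameters. This is a sketch only; the parameter names below are assumptions to verify against HA settings, and the interface name and addresses are placeholders:

```yaml
# Illustrative HA fragment -- verify names and values against "HA settings"
vip_iface: eth0
cluster_vip: 10.0.0.100
proxy_vip_iface: eth0
proxy_vip: 10.0.0.101
```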
Step 4: Set up Docker for your cluster nodes
Cluster nodes are the master, worker, proxy, and management nodes. See Architecture.
You need a version of Docker that is supported by IBM Cloud Private installed on your cluster node. See Supported Docker versions.
If a supported version of Docker is not installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.
To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
Step 5: Deploy the environment
- Change to the `cluster` folder in your installation directory:
    cd ./cluster
- Deploy your environment. Depending on your options, you might need to add more parameters to the deployment command.
  - If you specified the `offline_pkg_copy_path` parameter in the `config.yaml` file, add the `-e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path>` option to the deployment command, where `<offline_pkg_copy_path>` is the value that you set for the `offline_pkg_copy_path` parameter in the `config.yaml` file.
  - By default, the deployment command deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, add the `-f <number_of_nodes_to_deploy>` argument to the command to deploy more nodes at a time.

  To deploy your environment:
  - For Linux® x86_64, run the following command:
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.0-ee install
  - For Linux® on Power® (ppc64le), run the following command:
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.1.0-ee install

  Note: If you encounter errors during deployment, rerun the deployment command with `-v` to collect more detailed error messages. If you continue to receive errors during the rerun, run the following command to collect the log files:
  - For Linux® x86_64:
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.0-ee healthcheck
  - For Linux® on Power® (ppc64le):
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.1.0-ee healthcheck

  The log files are located in the `cluster/logs` directory.
Verify the status of your installation
If the installation succeeded, the access information for your cluster is displayed.
The URL can be either `https://<ip_address>:8443` or `https://<master_ip>:8443`, where `<master_ip>` is the IP address of the master node for your cluster.
If you specified a `cluster_lb_address` value in your `config.yaml` file, the `<ip_address>` is the `cluster_lb_address` address. If you did not specify that value, in HA clusters the `<ip_address>` in this message is the `cluster_vip` address that you specified, and in standard clusters it is the IP address of the master node.
Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster.
Access your cluster
From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.
- For more information about accessing your cluster by using the IBM Cloud Private management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
- For more information about accessing your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM Cloud Private cluster by using the kubectl CLI.
Note: If you’re unable to log in immediately after the installation completes, it might be because the management services are not ready. Wait for a few minutes and try again.
Note: You might see a `502 Bad Gateway` message when you open a page in the management console shortly after installation. If you do, the NGINX service has not yet started all components. The pages load after all components start.
Postinstallation tasks
- Restart your firewall.
- Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
- Back up the boot node. Copy your `/<installation_directory>/cluster` directory to a secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
- Install other software from your bundle. See Installing bundled products.