Installing IBM Cloud Private Cloud Native and Enterprise editions
Use the following steps to install either the IBM Cloud Private Cloud Native or Enterprise edition.
For either IBM Cloud Private Cloud Native or Enterprise editions, you can install a standard or high availability (HA) cluster.
You can have an IBM Cloud Private cluster that supports Linux® x86_64, Linux® on Power® (ppc64le), and Linux on IBM Z and LinuxONE systems.
Before you install:
- Plan your cluster. For example, a basic cluster has one master (and proxy) node, one management node, and one worker node. For more information, see Planning your cluster.
- You must prepare your cluster. See Configuring your cluster.
- If you want to enable IBM Multicloud Manager, see Preparing for IBM Multicloud Manager installation.
- If your master or proxy node uses the SUSE Linux Enterprise Server (SLES) operating system, you must disable all firewalls in your cluster during installation.
- If you want to install IBM Edge Computing for Servers, follow these instructions to install IBM Cloud Private and enable IBM Edge Computing for Servers to set up the hub cluster. Then, install IBM Cloud Private at each of your remote edge servers, following the same instructions, optionally by using the edge computing profile. For more information, see Installing IBM Edge Computing for Servers.
Installation can be completed in six main steps:
- Install Docker for your boot node only
- Set up the installation environment
- Customize your cluster
- Set up Docker for your cluster nodes
- Deploy the environment
- Verify the installation
When the installation completes, you can access your cluster and complete post installation tasks.
If you encounter errors during installation, see Troubleshooting install.
Step 1: Install Docker for your boot node only
The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.
Your boot node needs a version of Docker that is supported by IBM Cloud Private. See Supported Docker versions. To install Docker, see Manually installing Docker.
If you are installing the 3.2.1.2203 fix pack, you can upgrade the Docker version from 18.09.7 to 19.03.11 after the installation is complete. For more information, see Upgrading Docker.
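As a quick optional pre-check, you can confirm that Docker is installed and running on the boot node before you continue; the versions that are supported are listed in Supported Docker versions.

```
# Print the Docker server version and confirm that the Docker service is active.
sudo docker version --format '{{.Server.Version}}'
sudo systemctl is-active docker
```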
Step 2: Set up the installation environment
- Log in to the boot node as a user with full root permissions.
- Download the installation file or image.
  - For IBM Cloud Private Cloud Native or Enterprise: Download the correct file or files for the type of nodes in your cluster from the IBM Passport Advantage® website.
    - For a Linux x86_64 cluster, download the `ibm-cloud-private-x86_64-3.2.1.tar.gz` file.
    - For a Linux on Power (ppc64le) cluster, download the `ibm-cloud-private-ppc64le-3.2.1.tar.gz` file.
    - For an IBM® Z cluster, download the `ibm-cloud-private-s390x-3.2.1.tar.gz` file.
  - For IBM Cloud Private fix pack: Download the correct file or files for the type of nodes in your cluster from the IBM® Fix Central website.
    Two fix pack streams are currently available: 3.2.1.x fix packs and 3.2.2.x fix packs. The 3.2.1.x fix packs are intended for environments that include Kubernetes version 1.13.12. The 3.2.2.x fix packs include fixes that upgrade the supported version of Kubernetes; each 3.2.2.x fix pack contains all of the fixes in the equivalent 3.2.1.x fix pack, except for Kubernetes-specific fixes. If you apply a 3.2.2.x fix pack, do not apply the equivalent 3.2.1.x fix pack. The latest 3.2.1.x fix pack is 3.2.1.2203. The latest 3.2.2.x fix pack is 3.2.2.2203, which upgrades Kubernetes to version 1.19.3.
    To download the 3.2.2.2203 fix pack:
    - For a Linux® x86_64 cluster, download the `ibm-cloud-private-x86_64-3.2.2.2203.tar.gz` file.
    - For a Linux® on Power® (ppc64le) cluster, download the `ibm-cloud-private-ppc64le-3.2.2.2203.tar.gz` file.
    - For an IBM® Z cluster, download the `ibm-cloud-private-s390x-3.2.2.2203.tar.gz` file.
    To download the 3.2.1.2203 fix pack:
    - For a Linux® x86_64 cluster, download the `ibm-cloud-private-x86_64-3.2.1.2203.tar.gz` file.
    - For a Linux® on Power® (ppc64le) cluster, download the `ibm-cloud-private-ppc64le-3.2.1.2203.tar.gz` file.
    - For an IBM® Z cluster, download the `ibm-cloud-private-s390x-3.2.1.2203.tar.gz` file.
    Important: If you are installing IBM Cloud Private with a fix pack, use the same instructions as an IBM Cloud Private Cloud Native or Enterprise installation.
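  Before you continue, it can help to confirm that the archives downloaded completely. The following commands are only a quick, optional check; the file name shown assumes the Linux x86_64 GA package, so adjust it for your architecture or fix pack level.

  ```
  # List the downloaded installation files and their sizes.
  ls -lh ibm-cloud-private-*.tar.gz
  # Spot-check that the archive is readable by listing its first few entries.
  tar -tzf ibm-cloud-private-x86_64-3.2.1.tar.gz | head -n 5
  ```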
- For IBM Cloud Private Cloud Native or Enterprise: Extract the images and load them into Docker. Extracting the images can take a few minutes, during which time no output is displayed.
  Note: If you are installing IBM Cloud Private with a fix pack, replace the file name in the following commands with the file name of the fix pack installation file that you downloaded.
  Run the command for your architecture:
  - For Linux x86_64, use the command: `tar xf ibm-cloud-private-x86_64-3.2.1.tar.gz -O | sudo docker load`
  - For Linux on Power (ppc64le), use the command: `tar xf ibm-cloud-private-ppc64le-3.2.1.tar.gz -O | sudo docker load`
  - For Linux on IBM Z and LinuxONE, use the command: `tar xf ibm-cloud-private-s390x-3.2.1.tar.gz -O | sudo docker load`
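  To confirm that the load completed, you can list the installer images that are now available locally. This is an optional check; the exact image list varies by edition and architecture.

  ```
  # The inception installer image should appear for your architecture (amd64, ppc64le, or s390x).
  sudo docker images | grep icp-inception
  ```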
- Create an installation directory (`/<installation_directory>/`) to store the IBM Cloud Private configuration files in, and change to that directory.
  Note: The installation directory must have at least 50 GB of available disk space for the installation and install files.
  For example, to store the IBM Cloud Private configuration files in `/opt/ibm-cloud-private-3.2.1`, run the following commands: `sudo mkdir /opt/ibm-cloud-private-3.2.1; cd /opt/ibm-cloud-private-3.2.1`
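  Because the installation directory must have at least 50 GB of available disk space, a quick check of the backing file system before you continue can save a failed installation later. The path below assumes the example directory from this step.

  ```
  # Show the available space on the file system that holds the installation directory.
  df -h /opt/ibm-cloud-private-3.2.1
  ```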
- Extract the configuration files from the installer image.
  - For Linux x86_64, run this command: `sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-amd64:3.2.1-ee cp -r cluster /data`
  - For Linux on Power (ppc64le), run this command: `sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-ppc64le:3.2.1-ee cp -r cluster /data`
  - For Linux on IBM Z and LinuxONE, run this command: `sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-s390x:3.2.1-ee cp -r cluster /data`
  A `cluster` directory is created inside your installation directory. For example, if your installation directory is `/opt/ibm-cloud-private-3.2.1`, the `/opt/ibm-cloud-private-3.2.1/cluster` folder is created. For an overview of the cluster directory structure, see Cluster directory structure.
  Note: By default, the cluster directory is owned by root. If you require the directory to be owned by a different user, run `chown -R` on the directory.
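  For example, to give the directory to a non-root user, you might run a command like the following. The user and group name `icpadmin` and the path are placeholders for illustration only.

  ```
  # Recursively change ownership of the extracted cluster directory to a hypothetical non-root user.
  sudo chown -R icpadmin:icpadmin /opt/ibm-cloud-private-3.2.1/cluster
  ```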
- Optional: You can view the license file for IBM Cloud Private. For a list of supported language formats, see Supported languages.
  - For Linux x86_64, run this command: `sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-amd64:3.2.1-ee`
  - For Linux on Power (ppc64le), run this command: `sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-ppc64le:3.2.1-ee`
  - For Linux on IBM Z and LinuxONE, run this command: `sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-s390x:3.2.1-ee`
  Note: The `$LANG` value must be a supported language. For example, to view the license in Simplified Chinese by using Linux x86_64 for an IBM Cloud Private Cloud Native or Enterprise installation, run the following command: `sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-amd64:3.2.1-ee`
- Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following setups:
- Set up SSH in your cluster. See Sharing SSH keys among cluster nodes.
- Set up password authentication in your cluster. See Configuring password authentication for cluster nodes.
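  If you choose the SSH option, the linked topic has the authoritative steps; the following is only a minimal sketch of the usual flow, with `<node_ip>` and the installation directory as placeholders.

  ```
  # Generate a key pair on the boot node (no passphrase, so the installer can use it unattended).
  ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
  # Copy the public key to every node in the cluster, including the boot node itself.
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_ip>
  # Make the private key available to the installer as the cluster ssh_key file.
  sudo cp ~/.ssh/id_rsa /opt/ibm-cloud-private-3.2.1/cluster/ssh_key
  ```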
- Add the IP address of each node in the cluster to the `/<installation_directory>/cluster/hosts` file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups.
  Note: Worker nodes can support mixed architectures. You can add worker nodes that run on Linux x86_64, Linux on Power (ppc64le), and IBM Z platforms into a single cluster. Non-worker nodes support only one type of architecture.
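  As a point of reference, a hosts file for a small standard cluster typically groups node IP addresses under role sections. The addresses and path below are placeholders; the authoritative format is described in Setting the node roles in the hosts file.

  ```
  # Illustrative hosts file for a standard (non-HA) cluster; replace the example addresses with your node IPs.
  sudo tee /opt/ibm-cloud-private-3.2.1/cluster/hosts >/dev/null <<'EOF'
  [master]
  10.0.0.1

  [proxy]
  10.0.0.1

  [management]
  10.0.0.2

  [worker]
  10.0.0.3
  10.0.0.4
  EOF
  ```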
- Move the image files for your cluster to the `/<installation_directory>/cluster/images` folder.
  - Create an images directory by running the following command: `sudo mkdir -p /<installation_directory>/cluster/images`
  - If your cluster contains any Linux x86_64 nodes, place the x86 package in the images directory. Add the path to your installation image file to the following command and then run the command: `sudo mv <installation_image_directory>/ibm-cloud-private-x86_64-3.2.1.tar.gz cluster/images/`
  - If your cluster contains any Linux on Power (ppc64le) nodes, place the ppc64le package in the images directory. Add the path to your installation image file to the following command and then run the command: `sudo mv <installation_image_directory>/ibm-cloud-private-ppc64le-3.2.1.tar.gz cluster/images/`
  - If your cluster contains any Linux on IBM Z and LinuxONE nodes, place the s390x package in the images directory. Add the path to your installation image file to the following command and then run the command: `sudo mv <installation_image_directory>/ibm-cloud-private-s390x-3.2.1.tar.gz cluster/images/`
Step 3: Customize your cluster
The `config.yaml` file, which is located in the `/<installation_directory>/cluster/` directory, contains all of the configuration settings that are needed to deploy your cluster.
- Optional: Replace the `config.yaml` file for an IBM Power environment or a Linux® on IBM® Z and LinuxONE environment.
  - For an IBM Power environment only: Replace the `config.yaml` file with the `power.config.yaml` file. If you are deploying your cluster into an IBM Power environment, you must use the settings in the `power.config.yaml` file. Complete the following steps to replace the file:
    - Enter the following command to rename the existing `config.yaml` file to `config.yaml.orig`: `sudo mv /<installation_directory>/cluster/config.yaml /<installation_directory>/cluster/config.yaml.orig`. Replace installation_directory with the path to your installation directory.
    - Enter the following command to copy the `power.config.yaml` file to `config.yaml`: `sudo cp /<installation_directory>/cluster/power.config.yaml /<installation_directory>/cluster/config.yaml`. Replace installation_directory with the path to your installation directory.
  - For a Linux® on IBM® Z and LinuxONE environment only: Replace the `config.yaml` file with the `z.config.yaml` file. If you are deploying your cluster into an IBM Z environment, you must use the settings in the `z.config.yaml` file. Complete the following steps to replace the file:
    - Run the following command to rename the existing `config.yaml` file to `config.yaml.orig`: `sudo mv /<installation_directory>/cluster/config.yaml /<installation_directory>/cluster/config.yaml.orig`. Replace installation_directory with the path to your installation directory.
    - Enter the following command to copy the `z.config.yaml` file to `config.yaml`: `sudo cp /<installation_directory>/cluster/z.config.yaml /<installation_directory>/cluster/config.yaml`. Replace installation_directory with the path to your installation directory.
- Set up a default password in the `config.yaml` file that meets the default password enforcement rule `'^([a-zA-Z0-9\-]{32,})$'`. This rule specifies that a password must meet the following conditions:
  - The password must have a minimum length of 32 characters.
  - The password can include lowercase letters, uppercase letters, numbers, and hyphens.
  To define a custom set of password rules:
  - Open the `/<installation_directory>/cluster/config.yaml` file, and set the `default_admin_password`. The password must satisfy all regular expressions that are specified in `password_rules`.
  - Optional: You can define one or more rules as regular expressions in an array list that the password must pass. For example, a rule can state that the password must be longer than a specified number of characters or contain at least one special character, or both. The rules are written as regular expressions that are supported by the Go programming language.
    To define your custom set of password rules, add the `password_rules` parameter and rule values to the `config.yaml` file:
    password_rules:
    - '<rule value>'
    - '<rule value>'
    For example, the following settings define two password rules:
    password_rules:
    - '^.{10,}'           # The password must have a minimum length of 10 characters.
    - '.*[!@#\$%\^&\*].*' # The password must include at least one of the listed special characters.
    To disable the password rules, add the `(.*)` rule:
    password_rules:
    - '(.*)'
  Note: The `default_admin_password` must match all rules that are defined. If `password_rules` is not defined, the `default_admin_password` must meet the default password enforcement rule `'^([a-zA-Z0-9\-]{32,})$'`.
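  If you keep the default rule, one way to produce a compliant value is to generate a random 32-character string that uses only letters, numbers, and hyphens. This is only a convenience sketch; any value that matches the configured rules works.

  ```
  # Generate a random 32-character password that satisfies '^([a-zA-Z0-9\-]{32,})$'.
  openssl rand -base64 48 | tr -dc 'a-zA-Z0-9-' | head -c 32; echo
  ```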
- Optional: Customize your cluster. To review the full list of parameters that are available to customize, see Customizing the cluster with the config.yaml file. For other types of customizations that must be configured during installation, such as configuring the monitoring service or GlusterFS, review Customizing your installation.
- Enable python3 and configure your cluster name and cluster certificate authority (CA) domain within your `config.yaml` file. Define these settings as the values for the following parameters:
  - General settings
    - `ansible_python_interpreter` - Set the value to `/usr/bin/python3` if you use python3 on your cluster nodes.
  - Cluster access settings
    - `cluster_name` - The name of your cluster.
    - `cluster_CA_domain` - The certificate authority (CA) domain to use in your cluster.
  For more information about these configuration settings and other settings that you can configure for your cluster, see Customizing the cluster with the config.yaml file.
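  A quick way to confirm how these parameters ended up in your file is to grep for them. The values shown in the comments are examples only.

  ```
  # Print the current values of the parameters discussed above.
  grep -E '^(ansible_python_interpreter|cluster_name|cluster_CA_domain):' \
    /opt/ibm-cloud-private-3.2.1/cluster/config.yaml
  # Example output:
  #   ansible_python_interpreter: /usr/bin/python3
  #   cluster_name: mycluster
  #   cluster_CA_domain: mycluster.icp
  ```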
- Optional: Enable IBM Multicloud Manager from your `config.yaml` file. By default, the `multicluster-hub` option is `enabled` and the `single_cluster_mode` option is `true`, which means that IBM Multicloud Manager is not configured. You cannot use IBM Multicloud Manager with the default `single_cluster_mode: true` setting. For more information and other configuration scenarios for IBM Multicloud Manager, see Configuration options for IBM Multicloud Manager with IBM Cloud Private installation.
- In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the `config.yaml` file:
  cluster_lb_address: <external address>
  proxy_lb_address: <external address>
  The `<external address>` value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the `proxy_lb_address` parameter is required for proxy HA environments only.
- For HA environments, there are several HA installation options. See HA settings.
Step 4: Set up Docker for your cluster nodes
Cluster nodes are the master, worker, proxy, and management nodes. To learn more, see Architecture.
Your cluster nodes need a version of Docker that is supported by IBM Cloud Private. See Supported Docker versions. If a supported version of Docker is not installed on your cluster nodes, IBM Cloud Private can install Docker on them automatically during the installation.
To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
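As an optional pre-check, assuming SSH access from the boot node is already set up (see Step 2), you can report the Docker version on each node. The IP addresses below are placeholders.

```
# Report the Docker version on each cluster node, or note that Docker is absent.
for node in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
  echo -n "$node: "
  ssh root@"$node" 'docker version --format "{{.Server.Version}}" 2>/dev/null || echo "Docker not installed"'
done
```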
Step 5: Deploy the environment
- Change to the `cluster` folder in your installation directory by running the following command: `cd /<installation_directory>/cluster`
- Optional: Depending on your options, you might need to add more parameters to the deployment command. If you specified the `offline_pkg_copy_path` parameter in the `config.yaml` file, add the `-e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path>` option to the deployment command, where `<offline_pkg_copy_path>` is the value of the `offline_pkg_copy_path` parameter that you set in the `config.yaml` file.
  Note: By default, the command to deploy your environment deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. If you want to speed up the deployment, you can specify a higher number of nodes to be deployed at a time. Use the argument `-f <number of nodes to deploy>` with the command (see the example after this procedure).
- Deploy your environment:
  - For Linux x86_64, run this command: `sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee install`
  - For Linux on Power (ppc64le), run this command: `sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.2.1-ee install`
  - For Linux on IBM Z and LinuxONE, run this command: `sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.2.1-ee install`
- Optional: If you encounter errors during deployment, rerun the deployment command with `-v` to collect additional error messages. If you continue to receive errors during the rerun, run the following command to collect the log files:
  - For Linux x86_64, run this command: `sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee healthcheck`
  - For Linux on Power (ppc64le), run this command: `sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.2.1-ee healthcheck`
  - For Linux on IBM Z and LinuxONE, run this command: `sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.2.1-ee healthcheck`
  The log files are located in the `cluster/logs` directory.
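As referenced in the notes above, the following sketch shows how the optional flags might be combined for a Linux x86_64 deployment; the parallelism value and paths are examples only.

```
cd /opt/ibm-cloud-private-3.2.1/cluster
# Deploy with verbose logging (-v) and up to 30 nodes deployed in parallel (-f 30).
sudo docker run --net=host -t -e LICENSE=accept \
  -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee install -v -f 30
```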
Step 6: Verify the status of your installation
If the installation succeeded, the access information for your cluster is displayed. The URL is https://<Cluster Master Host>:<Cluster Master API Port>, where <Cluster Master Host>:<Cluster Master API Port> is defined in Master endpoint.
Access your cluster
Now you can access your cluster. From a web browser, browse to the URL of your cluster. For a list of supported browsers, see Supported browsers.
- To learn how to access your cluster by using the IBM Cloud Private management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
- To learn how to access your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM Cloud Private cluster by using the kubectl CLI.
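After you can log in with the kubectl CLI (see the topic linked above), a quick sanity check of the cluster might look like the following sketch.

```
# List the cluster nodes and flag any kube-system pods that are not yet Running.
kubectl get nodes
kubectl -n kube-system get pods --no-headers | grep -v Running || true
```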
Notes:
- If you’re unable to log in immediately after the installation completes, it might be because the management services are not ready. Wait for a few minutes and try again.
- You might see a `502 Bad Gateway` message when you open a page in the management console shortly after installation. If you do, the NGINX service has not yet started all components. The pages load after all components start.
- If you installed fix pack version 3.2.1.2203, add the root CA certificate to your trust store. With this fix pack, users on macOS 10.15 or newer cannot access the management console until the root CA certificate is added to the trust store. For more information, see:
Post installation tasks
- Restart your firewall.
- Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
- Back up the boot node. Copy your `/<installation_directory>/cluster` directory to a secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
- Maintain proper boot node access. The boot node contains authentication information that is used for the initial day 1 deployment of IBM Cloud Private and for day 2 updates to IBM Cloud Private. Access to the boot node must be limited to only those users with an actual business need, and it must be governed by using enterprise identity management tools for approval, periodic recertification, and access revocation on employee termination or job role changes. The only users who have access to the boot node should be users who have the `clusteradmin` role for IBM Cloud Private.
- Install other software from your bundle. See Installing IBM software onto IBM Cloud Private.
- Optional: Review the International Program License Agreement (IPLA) for IBM Cloud Private and IBM Multicloud Manager:
  - Open the following link: https://www-03.ibm.com/software/sla/sladb.nsf/search?OpenForm
  - Search for one of the following License Information numbers:
    - L-TKAO-BA3Q8F - IBM Cloud Private 3.2.1
    - L-TKAO-BA3Q3J - IBM Cloud Private Foundation 3.2.1
    - L-ECUN-BALP9Z - IBM Multicloud Manager Enterprise Edition 3.2.1
- Optional: Review the Notice file and non-IBM license file for IBM Cloud Private and IBM Multicloud Manager:
  - Go to the `<installation_directory>/cfc/license` directory.
  - Open the `Stacked_License_for_ICP_ICP_Foundation_MCMEE_3.2.1.zip` file.
  - Go to the `RTF` directory and open the `notices.rtf` and `non_ibm_license.rtf` files to review the notices and non-IBM license information.