Installing IBM Cloud Private Cloud Native, Enterprise, and Community editions
Use the following steps to install the IBM Cloud Private Cloud Native or Enterprise editions, or IBM® Cloud Private-CE (Community Edition).
For either IBM Cloud Private Cloud Native or Enterprise editions, you can install a standard or high availability (HA) cluster. For IBM Cloud Private-CE, you can set up a master, worker, proxy, and optional management nodes in your cluster.
You can have an IBM Cloud Private cluster that supports Linux® x86_64, Linux® on Power® (ppc64le), and Linux on IBM Z and LinuxONE systems.
Before you install:
- Plan your cluster. For instance, your cluster needs one master (and proxy) node, one management node, and one worker node. For more information, see Planning your cluster.
- You must prepare your cluster. See Configuring your cluster.
- If you want to enable IBM Multicloud Manager, see Preparing for IBM Multicloud Manager installation.
- If your master or proxy node uses the SUSE Linux Enterprise Server (SLES) operating system, you must disable all firewalls in your cluster during installation.
- If you want to install IBM Edge Computing for Servers, follow these instructions to install IBM Cloud Private and enable IBM Edge Computing for Servers to set up the hub cluster. Then, install IBM Cloud Private at each of your remote edge servers, following the same instructions, optionally by using the edge computing profile. For more information, see Installing IBM Edge Computing for Servers.
Installation can be completed in six main steps:
- Install Docker for your boot node only
- Set up the installation environment
- Customize your cluster
- Set up Docker for your cluster nodes
- Deploy the environment
- Verify the installation
When the installation completes, you can access your cluster and complete post installation tasks.
If you encounter errors during installation, see Troubleshooting install.
Step 1: Install Docker for your boot node only
The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.
You need a version of Docker that is supported by IBM Cloud Private installed on your boot node. See Supported Docker versions. To install Docker, see Manually installing Docker.
If you are installing the 3.2.0.2003 fix pack, you can upgrade the Docker version from 18.06.2 to 19.03.11 after the installation is complete. For more information, see Upgrading Docker.
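As an optional check (not part of the documented procedure), you can confirm that a supported Docker version is running on the boot node before you continue. A minimal sketch:

```
# Report the Docker server version so that you can compare it against the
# Supported Docker versions list; also confirm that the Docker service is active
# (the systemctl check assumes a systemd-based distribution).
sudo docker version --format 'Docker server version: {{.Server.Version}}'
sudo systemctl is-active docker
```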
Step 2: Set up the installation environment
- Log in to the boot node as a user with full root permissions.
- Download the installation file or image.
  - For IBM Cloud Private Native or Enterprise installation: Download the correct file or files for the type of nodes in your cluster from the IBM Passport Advantage® website.
    - For a Linux x86_64 cluster, download the `ibm-cloud-private-x86_64-3.2.0.tar.gz` file.
    - For a Linux on Power (ppc64le) cluster, download the `ibm-cloud-private-ppc64le-3.2.0.tar.gz` file.
    - For an IBM® Z cluster, download the `ibm-cloud-private-s390x-3.2.0.tar.gz` file.
  - For IBM Cloud Private-CE installation: Download the CE image from Docker Hub by running the following command:
    ```
    docker pull ibmcom/icp-inception:3.2.0
    ```
    Note: This installer image supports Linux® on x86_64 systems, Linux on Power 64-bit LE systems, and Linux on IBM Z and LinuxONE systems.
  - For IBM Cloud Private fix pack version 3.2.0.2003: Download the correct file or files for the type of nodes in your cluster from the IBM® Fix Central website.
    - For a Linux® x86_64 cluster, download the `ibm-cloud-private-x86_64-3.2.0.2003.tar.gz` file.
    - For a Linux® on Power® (ppc64le) cluster, download the `ibm-cloud-private-ppc64le-3.2.0.2003.tar.gz` file.
    - For an IBM® Z cluster, download the `ibm-cloud-private-s390x-3.2.0.2003.tar.gz` file.
    Important: If you are installing IBM Cloud Private with a fix pack, use the same instructions as an IBM Cloud Private Native or Enterprise installation.
- For IBM Cloud Private Native or Enterprise installation: Extract the images and load them into Docker. Extracting the images can take a few minutes, during which time output is not displayed.
  Note: If you are installing IBM Cloud Private with a fix pack, replace the file name in the following commands with the file name for the fix pack installation file that you downloaded.
  - For Linux x86_64, use the command:
    ```
    tar xf ibm-cloud-private-x86_64-3.2.0.tar.gz -O | sudo docker load
    ```
  - For Linux on Power (ppc64le), use the command:
    ```
    tar xf ibm-cloud-private-ppc64le-3.2.0.tar.gz -O | sudo docker load
    ```
  - For Linux on IBM Z and LinuxONE, use the command:
    ```
    tar xf ibm-cloud-private-s390x-3.2.0.tar.gz -O | sudo docker load
    ```
- Create an installation directory (`/<installation_directory>/`) to store the IBM Cloud Private or IBM Cloud Private-CE configuration files in, and change to that directory.
  Note: The installation directory must have at least 50 GB of available disk space for the installation and install files.
  For example, to store the IBM Cloud Private configuration files in `/opt/ibm-cloud-private-3.2.0`, run the following commands:
  ```
  sudo mkdir /opt/ibm-cloud-private-3.2.0; cd /opt/ibm-cloud-private-3.2.0
  ```
- Extract the configuration files from the installer image.
  - For IBM Cloud Private Native or Enterprise installation:
    - For Linux x86_64, run this command:
      ```
      sudo docker run -v $(pwd):/data -e LICENSE=accept \
        ibmcom/icp-inception-amd64:3.2.0-ee \
        cp -r cluster /data
      ```
    - For Linux on Power (ppc64le), run this command:
      ```
      sudo docker run -v $(pwd):/data -e LICENSE=accept \
        ibmcom/icp-inception-ppc64le:3.2.0-ee \
        cp -r cluster /data
      ```
    - For Linux on IBM Z and LinuxONE, run this command:
      ```
      sudo docker run -v $(pwd):/data -e LICENSE=accept \
        ibmcom/icp-inception-s390x:3.2.0-ee \
        cp -r cluster /data
      ```
  - For IBM Cloud Private-CE installation:
    ```
    sudo docker run -e LICENSE=accept \
      -v "$(pwd)":/data ibmcom/icp-inception:3.2.0 cp -r cluster /data
    ```
  A `cluster` directory is created inside your installation directory. For example, if your installation directory is `/opt/ibm-cloud-private-3.2.0`, the `/opt/ibm-cloud-private-3.2.0/cluster` folder is created. For an overview of the cluster directory structure, see Cluster directory structure.
  Note: By default, the cluster directory is owned by `root`. If you require the directory to be owned by a different user, run `chown -R` on the directory.
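  For example, assuming the `/opt/ibm-cloud-private-3.2.0` installation directory that is used earlier in this procedure, an ownership change might look like the following sketch. The `icpadmin` user and group are placeholders; substitute the account that you actually use:
  ```
  # Hypothetical example: give a non-root user ownership of the cluster directory.
  sudo chown -R icpadmin:icpadmin /opt/ibm-cloud-private-3.2.0/cluster
  ```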
- Optional: You can view the license file for IBM Cloud Private. For a list of supported language formats, see Supported languages.
  - For IBM Cloud Private Native or Enterprise installation:
    - For Linux x86_64, run this command:
      ```
      sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-amd64:3.2.0-ee
      ```
    - For Linux on Power (ppc64le), run this command:
      ```
      sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-ppc64le:3.2.0-ee
      ```
    - For Linux on IBM Z and LinuxONE, run this command:
      ```
      sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-s390x:3.2.0-ee
      ```
  - For IBM Cloud Private-CE installation:
    ```
    sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception:3.2.0
    ```
  Note: The `$LANG` value must be a supported language. For example, to view the license in Simplified Chinese by using Linux x86_64 for an IBM Cloud Private Cloud Native or Enterprise installation, run the following command:
  ```
  sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-amd64:3.2.0-ee
  ```
- Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following setups:
  - Set up SSH in your cluster. See Sharing SSH keys among cluster nodes.
  - Set up password authentication in your cluster. See Configuring password authentication for cluster nodes.
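  If you choose the SSH option, a minimal sketch of the key setup follows. It is an illustrative outline only; the key file name, node address, and cluster directory are placeholders, and the authoritative steps are in Sharing SSH keys among cluster nodes:
  ```
  # Generate a key pair on the boot node (no passphrase in this sketch).
  ssh-keygen -b 4096 -t rsa -f ~/.ssh/icp_rsa -N ""

  # Copy the public key to every node in the cluster (repeat per node).
  ssh-copy-id -i ~/.ssh/icp_rsa.pub root@<node_IP_address>

  # Make the private key available to the installer as the cluster ssh_key file
  # (the expected location is described in Sharing SSH keys among cluster nodes).
  sudo cp ~/.ssh/icp_rsa /<installation_directory>/cluster/ssh_key
  ```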
- Add the IP address of each node in the cluster to the `/<installation_directory>/cluster/hosts` file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups.
  Note: For IBM Cloud Private Native or Enterprise installation: Worker nodes can support mixed architectures. You can add worker nodes that run on Linux x86_64, Linux on Power (ppc64le), and IBM Z platforms into a single cluster. Non-worker nodes support only one type of architecture.
  Note: For IBM Cloud Private-CE installation: Worker and proxy nodes can support mixed architectures. You do not need to download or pull any platform-specific packages to set up a mixed architecture worker or proxy environment for IBM Cloud Private-CE. To add worker or proxy nodes into a cluster that contains Linux x86_64, Linux on Power (ppc64le), and Linux on IBM Z and LinuxONE platforms, you need to add only the IP address of these nodes to the `/<installation_directory>/cluster/hosts` file.
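  As an illustration only, a minimal `hosts` file might look like the following sketch. The group names reflect the node roles that are described in this topic, and the RFC 5737 addresses are placeholders for your real node IP addresses:
  ```
  [master]
  192.0.2.10

  [proxy]
  192.0.2.10

  [worker]
  192.0.2.11
  192.0.2.12

  [management]
  192.0.2.13
  ```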
- For IBM Cloud Private Native or Enterprise installation only: Move the image files for your cluster to the `/<installation_directory>/cluster/images` folder.
  - Create an images directory by running the following command:
    ```
    sudo mkdir -p /<installation_directory>/cluster/images
    ```
  - If your cluster contains any Linux x86_64 nodes, place the x86_64 package in the images directory. Add the path to your installation image file to the following command, and then run the command:
    ```
    sudo mv <installation_image_directory>/ibm-cloud-private-x86_64-3.2.0.tar.gz cluster/images/
    ```
  - If your cluster contains any Linux on Power (ppc64le) nodes, place the ppc64le package in the images directory. Add the path to your installation image file to the following command, and then run the command:
    ```
    sudo mv <installation_image_directory>/ibm-cloud-private-ppc64le-3.2.0.tar.gz cluster/images/
    ```
  - If your cluster contains any Linux on IBM Z and LinuxONE nodes, place the s390x package in the images directory. Add the path to your installation image file to the following command, and then run the command:
    ```
    sudo mv <installation_image_directory>/ibm-cloud-private-s390x-3.2.0.tar.gz cluster/images/
    ```
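  Optionally, you can confirm that the packages are in place and that the installation directory still has enough free disk space (at least 50 GB, as noted earlier). A quick check:
  ```
  ls -lh /<installation_directory>/cluster/images/
  df -h /<installation_directory>
  ```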
Step 3: Customize your cluster
The `config.yaml` file, which is located in the `/<installation_directory>/cluster/` directory, contains all of the configuration settings that are needed to deploy your cluster.
- For IBM Power environment only: Replace the `config.yaml` file with the `power.config.yaml` file. If you are deploying your cluster into an IBM Power environment, you must use the settings in the `power.config.yaml` file. Complete the following steps to replace the file:
  - Enter the following command to rename the existing `config.yaml` file to `config.yaml.orig`. Replace `<installation_directory>` with the path to your installation directory:
    ```
    sudo mv /<installation_directory>/cluster/config.yaml /<installation_directory>/cluster/config.yaml.orig
    ```
  - Enter the following command to copy the `power.config.yaml` file to `config.yaml`. Replace `<installation_directory>` with the path to your installation directory:
    ```
    sudo cp /<installation_directory>/cluster/power.config.yaml /<installation_directory>/cluster/config.yaml
    ```
- Set up a default password in the `config.yaml` file that meets the default password enforcement rule `'^([a-zA-Z0-9\-]{32,})$'`. This rule specifies that a password must meet the following conditions:
  - The password must have a minimum length of 32 characters.
  - The password can include lowercase letters, uppercase letters, numbers, and hyphens.
  To define a custom set of password rules:
  - Open the `/<installation_directory>/cluster/config.yaml` file, and set the `default_admin_password`. The password must satisfy all regular expressions that are specified in `password_rules`.
  - Optional: You can define one or more rules as regular expressions in an array list that the password must pass. For example, a rule can state that the password must be longer than a specified number of characters, or contain at least one special character, or both. The rules are written as regular expressions that are supported by the Go programming language.
    To define your custom set of password rules, add the `password_rules` parameter and rule values to the `config.yaml` file:
    ```
    password_rules:
    - '<rule value>'
    - '<rule value>'
    ```
    For example, the following settings define two password rules:
    ```
    password_rules:
    - '^.{10,}'            # The password must have a minimum length of 10 characters.
    - '.*[!@#\$%\^&\*].*'  # The password must include at least one of the listed special characters.
    ```
    To disable the password rules, add `(.*)` as the only rule:
    ```
    password_rules:
    - '(.*)'
    ```
  Note: The `default_admin_password` must match all rules that are defined. If `password_rules` is not defined, the `default_admin_password` must meet the default password enforcement rule `'^([a-zA-Z0-9\-]{32,})$'`.
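  As an illustration only, the following sketch generates a 32-character value that satisfies the default rule and then checks it with an approximately equivalent POSIX extended regular expression. The rules in `config.yaml` use Go regular expression syntax, so treat this as a rough local test rather than an exact validation:
  ```
  # Generate a 32-character candidate password; hex output contains only
  # lowercase letters and digits, which the default rule allows.
  candidate=$(openssl rand -hex 16)
  echo "Candidate default_admin_password: $candidate"

  # Rough local check against the default enforcement rule.
  if printf '%s' "$candidate" | grep -Eq '^[a-zA-Z0-9-]{32,}$'; then
    echo "Candidate matches the default password rule."
  fi
  ```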
- Enable python3 and configure your cluster name and cluster certificate authority (CA) domain within your `config.yaml` file. Define these settings as the values for the following parameters:
  - General settings
    - `ansible_python_interpreter`: Set the value to `/usr/bin/python3` if you use python3 in your cluster nodes.
  - Cluster access settings
    - `cluster_name`: The name of your cluster.
    - `cluster_CA_domain`: The certificate authority (CA) domain to use in your cluster.
  For more information about these configuration settings and other settings that you can configure for your cluster, see Customizing the cluster with the config.yaml file.
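  For example, the relevant lines in `config.yaml` might look like the following excerpt after you edit them; the cluster name and CA domain values shown here are placeholders:
  ```
  ansible_python_interpreter: /usr/bin/python3
  cluster_name: mycluster
  cluster_CA_domain: mycluster.icp
  ```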
- Optional: Enable IBM Multicloud Manager from your `config.yaml` file. By default, the `multicluster-hub` option is `enabled` and the `single_cluster_mode` option is `true`, which means IBM Multicloud Manager is not configured. You cannot use IBM Multicloud Manager with the `single_cluster_mode` default `true` setting.
  For more information and other configuration scenarios for IBM Multicloud Manager, see Configuration options for IBM Multicloud Manager with IBM Cloud Private installation.
- Optional: Customize your cluster. To review the full list of parameters that are available to customize, see Customizing the cluster with the config.yaml file.
  For other types of customizations that must be configured during installation, such as configuring the monitoring service or GlusterFS, review Customizing your installation.
- In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the `config.yaml` file:
  - For IBM Cloud Private Native or Enterprise installation:
    ```
    cluster_lb_address: <external address>
    proxy_lb_address: <external address>
    ```
  - For IBM Cloud Private-CE installation:
    ```
    cluster_lb_address: <external IP address>
    ```
  The `<external address>` value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the `proxy_lb_address` parameter is required for proxy HA environments only.
- For HA environments, there are several HA installation options. See HA settings.
Step 4: Set up Docker for your cluster nodes
Cluster nodes are the master, worker, proxy, and management nodes. To learn more, see Architecture.
You need a version of Docker that is supported by IBM Cloud Private installed on your cluster nodes. See Supported Docker versions. If you do not have a supported version of Docker installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.
To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
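As an optional check before you continue, you can report the Docker version (or its absence) on each cluster node. The following sketch assumes the SSH access that you configured in Step 2 and uses placeholder IP addresses:

```
# Replace the placeholder addresses with the node IP addresses from your hosts file.
for node in 192.0.2.10 192.0.2.11 192.0.2.12; do
  echo "--- $node ---"
  ssh root@"$node" 'docker version --format "{{.Server.Version}}" 2>/dev/null || echo "Docker is not installed"'
done
```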
Step 5: Deploy the environment
- Change to the `cluster` folder in your installation directory by running the following command:
  ```
  cd /<installation_directory>/cluster
  ```
- Optional for IBM Cloud Private Native or Enterprise installation only: Depending on your options, you might need to add more parameters to the deployment command. If you specified the `offline_pkg_copy_path` parameter in the `config.yaml` file, add the `-e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path>` option to the deployment command, where `<offline_pkg_copy_path>` is the value of the `offline_pkg_copy_path` parameter that you set in the `config.yaml` file.
  Note: By default, the command to deploy your environment deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to be deployed at a time by using the argument `-f <number of nodes to deploy>` with the command.
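  For illustration only, if you use both of these options, the deployment command in the next step might look like the following sketch for a Linux x86_64 Enterprise installation. The `-f` value is a placeholder, and the options are shown appended to the standard command:
  ```
  sudo docker run --net=host -t -e LICENSE=accept \
    -e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path> \
    -v "$(pwd)":/installer/cluster \
    ibmcom/icp-inception-amd64:3.2.0-ee install -f 30
  ```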
- Deploy your environment:
  - For IBM Cloud Private Native or Enterprise installation:
    - For Linux x86_64, run this command:
      ```
      sudo docker run --net=host -t -e LICENSE=accept \
        -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.0-ee install
      ```
    - For Linux on Power (ppc64le), run this command:
      ```
      sudo docker run --net=host -t -e LICENSE=accept \
        -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.2.0-ee install
      ```
    - For Linux on IBM Z and LinuxONE, run this command:
      ```
      sudo docker run --net=host -t -e LICENSE=accept \
        -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.2.0-ee install
      ```
  - For IBM Cloud Private-CE installation:
    ```
    sudo docker run --net=host -t -e LICENSE=accept \
      -v "$(pwd)":/installer/cluster ibmcom/icp-inception:3.2.0 install
    ```
- Optional for IBM Cloud Private Native or Enterprise installation only: If you encounter errors during deployment, rerun the deployment command with `-v` to collect additional error messages. If you continue to receive errors during the rerun, run the following command to collect the log files:
  - For Linux x86_64, run this command:
    ```
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.0-ee healthcheck
    ```
  - For Linux on Power (ppc64le), run this command:
    ```
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.2.0-ee healthcheck
    ```
  - For Linux on IBM Z and LinuxONE, run this command:
    ```
    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.2.0-ee healthcheck
    ```
  The log files are located in the `cluster/logs` directory.
Step 6: Verify the status of your installation
If the installation succeeded, the access information for your cluster is displayed. The URL is `https://<Cluster Master Host>:<Cluster Master API Port>`, where `<Cluster Master Host>:<Cluster Master API Port>` is defined in Master endpoint.
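As an optional check, you can confirm from the boot node that the management console endpoint responds before you open it in a browser. A minimal sketch; the host and port are placeholders for your master endpoint values (8443 is a commonly used default console port, but use the values that were displayed for your cluster):

```
# Expect an HTTP status code such as 200 or 302 once all services are up.
# -k is used because the cluster certificate is typically self-signed at this point.
curl -k -s -o /dev/null -w '%{http_code}\n' https://<Cluster_Master_Host>:<Cluster_Master_API_Port>/
```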
Access your cluster
Now you can access your cluster. From a web browser, browse to the URL of your cluster. For a list of supported browsers, see Supported browsers.
- To learn how to access your cluster by using the IBM Cloud Private management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
- To learn how to access your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM Cloud Private cluster by using the kubectl CLI.
Notes:
- If you’re unable to log in immediately after the installation completes, it might be because the management services are not ready. Wait for a few minutes and try again.
- You might see a `502 Bad Gateway` message when you open a page in the management console shortly after installation. If you do, not all of the components behind the NGINX service are started yet. The pages load after all components start.
- If you installed fix pack version 3.2.0.2003, add the root CA certificate to your trust store. With this fix pack, users on macOS 10.15 or newer cannot access the management console until the root CA certificate is added to the trust store. For more information, see:
Post installation tasks
- Restart your firewall.
- Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
- Back up the boot node. Copy your `/<installation_directory>/cluster` directory to a secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
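  For example, a simple archive-based backup might look like the following sketch; the destination path is a placeholder:
  ```
  # Archive the cluster directory (config.yaml, hosts, SSH key, and certificate files)
  # and then move the archive to a secure location of your choice.
  sudo tar czf /tmp/icp-cluster-backup-$(date +%Y%m%d).tar.gz -C /<installation_directory> cluster
  ```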
Maintain proper boot node access. The boot node contains authentication information that is used for initial day 1 deployment of IBM Cloud Private and day 2 updates to IBM Cloud Private. Access to the boot node must be limited to only the users with an actual business need. This access must be governed by using enterprise identity management tools for approval, periodic recertification, access revocation on employee termination, or job role changes. The only users who have access to the boot node should be users who have the
clusteradmin
role for IBM Cloud Private. -
Install other software from your bundle. See Installing IBM software onto IBM Cloud Private.
- Optional: Review the International Program License Agreement (IPLA) for IBM Cloud Private and IBM Multicloud Manager:
- Open the following link: https://www-03.ibm.com/software/sla/sladb.nsf/search?OpenForm
- Search for one of the following License Information numbers:
- L-TKAO-BA3Q8F - IBM Cloud Private 3.2.0
- L-TKAO-BA3Q3J - IBM Cloud Private Foundation 3.2.0
- L-ECUN-BALP9Z - IBM Multicloud Manager Enterprise Edition 3.2.0
- Optional: Review the Notice file and non-IBM license file for IBM Cloud Private and IBM Multicloud Manager:
  - Go to the `<installation_directory>/cfc/license` directory.
  - Open the `Stacked_License_for_ICP_ICP_Foundation_MCMEE_3.2.0.zip` file.
  - Go to the `RTF` directory and open the `notices.rtf` and `non_ibm_license.rtf` files to review the notices and non-IBM license information.