Installing IBM® Cloud Private-CE
Set up IBM® Cloud Private-CE (Community Edition) master, worker, proxy, and optional management nodes in your cluster.
- Before you install IBM Cloud Private-CE, prepare your cluster. See Configuring your cluster.
- If your master or proxy node uses a SUSE Linux Enterprise Server (SLES) operating system, you must disable all firewalls in your cluster during installation.
Installing IBM Cloud Private-CE involves the following main steps:
- Install Docker for your boot node only
- Set up the installation environment
- (Optional) Customize your cluster
- Set up Docker for your cluster nodes
- Deploy the environment
- Verify the installation
When the installation completes, access your cluster and complete the post-installation tasks.
Note: If you encounter errors during installation, see Troubleshooting.
Step 1: Install Docker for your boot node only
The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.
You need a version of Docker that is supported by IBM Cloud Private installed on your boot node. See Supported Docker versions.
To install Docker, see Manually installing Docker.
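To confirm the installed version on the boot node, you can run the following quick check and compare the output against Supported Docker versions:
sudo docker version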
Step 2: Set up the installation environment
- Log in to the boot node as a user with root permissions.
- Pull the IBM Cloud Private-CE installer image from Docker Hub. Run the following command:
sudo docker pull ibmcom/icp-inception:3.1.0
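To confirm that the image is available locally, you can optionally list it:
sudo docker images ibmcom/icp-inception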
- Create an installation directory to store the IBM Cloud Private configuration files in, and change to that directory. For example, to store the configuration files in /opt/ibm-cloud-private-ce-3.1.0, run the following commands:
sudo mkdir /opt/ibm-cloud-private-ce-3.1.0; \
cd /opt/ibm-cloud-private-ce-3.1.0
- Extract the configuration files.
sudo docker run -e LICENSE=accept \
-v "$(pwd)":/data ibmcom/icp-inception:3.1.0 cp -r cluster /data
A cluster directory is created inside your installation directory. For example, if your installation directory is /opt/ibm-cloud-private-ce-<icp-version>, the /opt/ibm-cloud-private-ce-<icp-version>/cluster folder is created. For an overview of the cluster directory structure, see Cluster directory structure.
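To see what was extracted, you can list the new directory, which contains the config.yaml, hosts, and ssh_key files that later steps reference:
ls ./cluster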
- (Optional) You can view the license file for IBM Cloud Private by running the following command:
sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception:3.1.0
Where $LANG is a supported language format. For example, to view the license in Simplified Chinese, run the following command:
sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception:3.1.0
For a list of supported language formats, see Supported languages.
- Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following processes; a sketch of the SSH option follows this list:
- Set up SSH in your cluster. See Sharing SSH keys among cluster nodes.
- Set up password authentication in your cluster. See Configuring password authentication for cluster nodes.
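For the SSH option, the following commands are a minimal sketch; <node_ip> is a placeholder, and the full procedure is in Sharing SSH keys among cluster nodes:
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_ip>
Where the first command generates a passphrase-less key pair on the boot node, and the second copies the public key to a cluster node. Repeat the second command for each node in your cluster.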
- Add the IP address of each node in the cluster to the /<installation_directory>/cluster/hosts file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups. An example hosts file follows this item.
Note: Worker and proxy nodes can support mixed architectures. You do not need to download or pull any platform-specific packages to set up a mixed-architecture worker or proxy environment for IBM Cloud Private-CE. To add worker or proxy nodes into a cluster that contains Linux® x86_64, Linux® on Power® (ppc64le), and IBM® Z platforms, you need to add the IP addresses of these nodes to the /<installation_directory>/cluster/hosts file only.
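For illustration, a hosts file for a small cluster might look like the following example, where the IP addresses are placeholders and the group names are described in Setting the node roles in the hosts file:
[master]
172.16.0.11
[worker]
172.16.0.21
172.16.0.22
[proxy]
172.16.0.11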
- If you use SSH keys to secure your cluster, in the /<installation_directory>/cluster folder, replace the ssh_key file with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command:
sudo cp ~/.ssh/id_rsa ./cluster/ssh_key
In this example, ~/.ssh/id_rsa is the location and name of the private key file.
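To confirm that the boot node can reach a cluster node with this key, you can run an illustrative check such as the following command, where <node_ip> is a placeholder for a cluster node address:
sudo ssh -i ./cluster/ssh_key root@<node_ip> hostname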
Step 3: Customize your cluster
- Set up resource limits for proxy nodes. See Configuring process resource limit on proxy nodes.
- You can also set a variety of optional cluster customizations that are available in the /<installation_directory>/cluster/config.yaml file. See Customizing the cluster with the config.yaml file. For additional customizations, you can also review Customizing your installation.
- In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the config.yaml file:
cluster_lb_address: <external IP address>
Where <external IP address> is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services.
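For example, if the external address is 203.0.113.10 (a placeholder documentation address, not a value from your environment), the config.yaml entry looks like this:
cluster_lb_address: 203.0.113.10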
Step 4: Set up Docker for your cluster nodes
Cluster nodes are the master, worker, proxy, and management nodes. See Architecture.
You need a version of Docker that is supported by IBM Cloud Private installed on each cluster node. See Supported Docker versions.
If you do not have a supported version of Docker installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.
To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
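To check which cluster nodes already have Docker, you can run an illustrative loop such as the following command from the boot node, where the IP addresses are placeholders and root SSH access is assumed:
for ip in <node_ip_1> <node_ip_2>; do ssh root@$ip 'docker --version'; done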
Step 5: Deploy the environment
- Change to the cluster folder in your installation directory.
cd ./cluster
- Deploy your environment. Depending on your options, you might need to add more parameters to the deployment command.
- By default, the deployment command deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to deploy at a time by adding the -f <number of nodes to deploy> argument to the command; an example follows the install command.
To deploy your environment, run the following command:
sudo docker run --net=host -t -e LICENSE=accept \
-v "$(pwd)":/installer/cluster ibmcom/icp-inception:3.1.0 install
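For example, to deploy up to 30 nodes at a time (an illustrative value), append the -f argument to the install command:
sudo docker run --net=host -t -e LICENSE=accept \
-v "$(pwd)":/installer/cluster ibmcom/icp-inception:3.1.0 install -f 30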
Verify the status of your installation
If the installation succeeded, the access information for your cluster is displayed.
The URL is https://master_ip:8443, where master_ip is the IP address of the master node for your cluster.
Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster.
Access your cluster
Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.
- For more information about accessing your cluster by using the IBM Cloud Private-CE management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
- For more information about accessing your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM Cloud Private cluster by using the kubectl CLI.
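After you configure kubectl as described in the linked topic, you can confirm that the cluster is responding; the following commands are a minimal check, assuming a working kubectl context for your cluster:
kubectl get nodes
kubectl get pods --namespace=kube-system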
Note: If you are unable to log in immediately after the installation completes, it might be because the management services are not ready. Wait for a few minutes and try again.
Note: You might see a 502 Bad Gateway message when you open a page in the management console shortly after installation. If you do, the NGINX service has not yet started all components. The pages load after all components start.
Post installation tasks
- If you disabled firewalls during installation, restart them now.
- Ensure that all the IBM Cloud Private-CE default ports are open. For more information about the default IBM Cloud Private-CE ports, see Default ports.
- Back up the boot node. Copy your /<installation_directory>/cluster directory to a more secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
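For example, where /backup is a placeholder for a destination that fits your security policy:
sudo cp -r /<installation_directory>/cluster /backup/cluster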