# Upgrading IBM Cloud Private-CE

You can upgrade IBM Cloud Private-CE from specific previous versions.
## Supported upgrade paths

Only the following upgrade paths are supported:

- IBM Cloud Private-CE version 3.1.1 to 3.1.2
- IBM Cloud Private-CE version 3.1.0 to 3.1.2

If you use an earlier version of IBM Cloud Private-CE, you must upgrade to version 3.1.0 first.

You can upgrade only from one version of IBM Cloud Private-CE to another version of IBM Cloud Private-CE. You cannot upgrade IBM Cloud Private-CE to the IBM Cloud Private Cloud Native or Enterprise editions.

During the upgrade process, you cannot access the IBM Cloud Private management console. You also cannot set cloud provider options, such as configuring a vSphere Cloud Provider, or choose to use NSX-T.
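The supported-path rule can be sketched as a small shell helper; `can_upgrade_to_312` is a hypothetical function name for illustration, not part of the product tooling:

```shell
# Hypothetical helper: succeeds only for versions that have a direct
# upgrade path to 3.1.2 (that is, 3.1.0 and 3.1.1).
can_upgrade_to_312() {
  case "$1" in
    3.1.0|3.1.1) return 0 ;;
    *) return 1 ;;
  esac
}
```

Any other version (for example, 2.1.0) must first be upgraded to 3.1.0 before this path applies.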
## Upgrading
- Log in to the boot node as a user with root permissions. The boot node is usually your master node. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type.
- Pull the IBM Cloud Private-CE installer image from Docker Hub.

  ```
  sudo docker pull ibmcom/icp-inception:3.1.2
  ```
- Create an installation directory and copy the `cluster` directory from the previous installation directory to the new IBM Cloud Private installation folder. Use a different installation directory than you used for the previous version. For example, to store the configuration files in `/opt/ibm-cloud-private-3.1.2`, run the following commands:

  ```
  mkdir -p /opt/ibm-cloud-private-3.1.2
  cd /opt/ibm-cloud-private-3.1.2
  cp -r /<installation_directory>/cluster .
  sudo rm -rf .upgrade upgrade_version
  ```

  Note: `/<installation_directory>` is the full path to your version 3.1.1 installation directory, and `/<new_installation_directory>` is the full path to your version 3.1.2 installation directory.
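Before you continue, it can help to confirm that the copy brought over the expected files. A minimal sketch, assuming the standard `config.yaml`, `hosts`, and `ssh_key` files from the previous installation; `check_cluster_dir` is an illustrative helper, not part of the installer:

```shell
# Print the name of any expected cluster file that is missing from
# the given copied cluster directory.
check_cluster_dir() {
  for f in config.yaml hosts ssh_key; do
    [ -e "$1/$f" ] || echo "missing: $f"
  done
}
```

For example, `check_cluster_dir /opt/ibm-cloud-private-3.1.2/cluster` prints nothing when all three files are present.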
- Check the `calico_ipip_enabled` parameter value in the version that you are upgrading from.
  - If the parameter was set as `calico_ipip_enabled: true`, replace the parameter in the `/<new_installation_directory>/cluster/config.yaml` file with `calico_ipip_mode: Always`.
  - If the parameter was set as `calico_ipip_enabled: false`, replace the parameter in the `/<new_installation_directory>/cluster/config.yaml` file with `calico_ipip_mode: Never`.
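The rename can also be done mechanically. A sketch using `sed`, assuming the parameter sits on its own line in `config.yaml` exactly as shown; the in-place flag is deliberately left off so you can review the output before redirecting it to a file:

```shell
# Print the given config file with calico_ipip_enabled translated to
# the new calico_ipip_mode parameter; all other lines pass through.
convert_calico_setting() {
  sed -e 's/^calico_ipip_enabled: true$/calico_ipip_mode: Always/' \
      -e 's/^calico_ipip_enabled: false$/calico_ipip_mode: Never/' "$1"
}
```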
- Deploy your environment by completing the following steps:
  - Change to the `cluster` folder in your installation directory.

    ```
    cd /<new_installation_directory>/cluster
    ```

  - Prepare the cluster for upgrade.

    ```
    sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:3.1.2 upgrade-prepare
    ```

    If the cluster preparation fails, review the error message and resolve any issues. Then, remove the `cluster/.install.lock` file and run the `upgrade-prepare` command again.

  - Upgrade Kubernetes.

    ```
    sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:3.1.2 upgrade-k8s
    ```

    If the Kubernetes upgrade fails, review the error message and resolve any issues. Then, roll back the Kubernetes services and run the `upgrade-k8s` command again.

  - Upgrade the charts.

    ```
    sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:3.1.2 upgrade-chart
    ```

    If the chart upgrade fails, review the error message and resolve any issues. Then, run the `upgrade-chart` command again.
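The three phases above must run in order, and a later phase should not start if an earlier one fails. One way to sketch that as a wrapper; `run_phases` is illustrative, and its argument is the command that executes each phase (in a real upgrade, the `sudo docker run ... ibmcom/icp-inception:3.1.2` invocation shown above):

```shell
# Run the upgrade phases in order, stopping at the first failure.
run_phases() {
  runner="$1"
  for phase in upgrade-prepare upgrade-k8s upgrade-chart; do
    $runner "$phase" || { echo "failed at $phase" >&2; return 1; }
  done
}
```

Passing `echo` as the runner is a harmless dry run that just prints the phase names in order.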
- If GlusterFS is installed in your cluster, you must upgrade the GlusterFS client to version 4.1.5.
- Verify the status of your upgrade.
  - If the upgrade succeeded, the access information for your cluster is displayed:

    ```
    UI URL is https://<Cluster Master Host>:<Cluster Master API Port>
    ```

    where `<Cluster Master Host>:<Cluster Master API Port>` is defined in Master endpoint.
  - If you encounter errors, see Troubleshooting.
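If you captured the installer output to a file, the console URL can be pulled back out of it later. A sketch, assuming the output was saved to a log file whose name you pass in; `extract_console_url` is an illustrative helper:

```shell
# Print the first https:// URL found in the given log file.
extract_console_url() {
  grep -o 'https://[^[:space:]]*' "$1" | head -n 1
}
```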
- Clear your browser cache.
- If you have either applications that use GPU resources or a resource quota for GPU resources, you need to manually update the application or resource quota with the new GPU resource name `nvidia.com/gpu`.
  - For applications that use GPU resources, follow the steps in Creating a deployment with attached GPU resources to run a sample GPU application. For your own GPU application, update the application to use the new GPU resource name `nvidia.com/gpu`. For example, to update the deployment properties, you can use either the management console (see Modifying a deployment) or the `kubectl` CLI.
  - To update the resource quota for GPU resources, follow the steps in Setting resource quota to set a resource quota for your namespace. For upgrading, update the resource quota to use the GPU resource name `nvidia.com/gpu`. For example, you can set the GPU quota to `requests.nvidia.com/gpu: "2"`.
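As an illustration of the quota change, a `ResourceQuota` manifest using the renamed resource might look like the following. The quota name and namespace are illustrative, not from the product documentation:

```yaml
# Illustrative ResourceQuota using the renamed GPU resource name.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota        # hypothetical name
  namespace: my-gpu-apps # hypothetical namespace
spec:
  hard:
    requests.nvidia.com/gpu: "2"
```

Applying a manifest like this with `kubectl apply -f` caps the namespace at two requested GPUs.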
- Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.
  - For more information about accessing your cluster by using the IBM Cloud Private management console from a web browser, see Accessing your IBM Cloud Private cluster by using the management console.
  - For more information about accessing your cluster by using the Kubernetes command line (`kubectl`), see Accessing your IBM Cloud Private cluster by using the kubectl CLI.

  Note: After your upgrade, the Pod Security Policy for your clusters is automatically enabled but set to the least restrictive setting to avoid access problems. See Pod security for information about how to manage the Pod Security Policy settings.
- Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
- Back up the boot node. Copy your `/<new_installation_directory>/cluster` directory to a secure location.
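A minimal backup sketch, assuming `tar` is available and a secure destination directory is mounted; `backup_cluster_dir` and both example paths are illustrative:

```shell
# Archive the source cluster directory into a timestamped tarball
# inside the destination directory.
backup_cluster_dir() {
  src="$1"; dest="$2"
  tar -czf "$dest/cluster-backup-$(date +%Y%m%d%H%M%S).tar.gz" \
    -C "$(dirname "$src")" "$(basename "$src")"
}
```

For example: `backup_cluster_dir /opt/ibm-cloud-private-3.1.2/cluster /mnt/secure-backups`.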