Learn how to migrate your worker pools to a new operating system like Ubuntu 20.
In the following example scenarios, you will learn how to use Terraform to migrate your worker nodes to a new Ubuntu version (e.g., from Ubuntu 18 to Ubuntu 20) and change your default worker pool to use different worker nodes.
Migrating to a new Ubuntu version with Terraform
To migrate your worker nodes to a new Ubuntu version, you must first provision a worker pool that uses a newer Ubuntu version. Then, you can add worker nodes to the new pool and finally remove the original worker pool.
- We begin with the following example cluster configuration. This cluster contains an Ubuntu 18 worker pool called `oldpool`:
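  The original configuration is not reproduced here; the following is a minimal sketch of what it might look like. The cluster name, flavor, zone, Kubernetes version, and the `ibm_is_vpc.vpc`/`ibm_is_subnet.subnet` references are placeholders, and the `operating_system` values assume a provider version that supports that argument:

  ```hcl
  resource "ibm_container_vpc_cluster" "cluster" {
    name             = "mycluster"        # placeholder
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = 1
    kube_version     = "1.24"             # illustrative version
    operating_system = "UBUNTU_18_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }

  # Ubuntu 18 worker pool that is replaced during the migration
  resource "ibm_container_vpc_worker_pool" "oldpool" {
    cluster          = ibm_container_vpc_cluster.cluster.id
    worker_pool_name = "oldpool"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = 2
    operating_system = "UBUNTU_18_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }
  ```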
- Next, add a worker pool resource for your Ubuntu 20 workers. In the following example, a temporary `new_worker_count` variable is introduced to control the migration:
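  The original example is not shown here; the sketch below is one possible way to wire the variable, assuming `oldpool` starts with two workers. The `count` expressions are an assumption about how the migration is staged: they keep the new pool from existing while `new_worker_count` is still 0 and remove the old pool once all workers have moved, so the `oldpool` resource from the previous step is updated accordingly:

  ```hcl
  # Temporary variable that controls the migration
  variable "new_worker_count" {
    type    = number
    default = 0
  }

  locals {
    old_worker_count = 2   # oldpool's size before the migration
  }

  # New Ubuntu 20 pool; created once new_worker_count is raised above 0
  resource "ibm_container_vpc_worker_pool" "newpool" {
    count            = var.new_worker_count > 0 ? 1 : 0
    cluster          = ibm_container_vpc_cluster.cluster.id
    worker_pool_name = "newpool"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = var.new_worker_count
    operating_system = "UBUNTU_20_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }

  # Old pool shrinks as workers move and is deleted when its count reaches 0
  resource "ibm_container_vpc_worker_pool" "oldpool" {
    count            = var.new_worker_count < local.old_worker_count ? 1 : 0
    cluster          = ibm_container_vpc_cluster.cluster.id
    worker_pool_name = "oldpool"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = local.old_worker_count - var.new_worker_count
    operating_system = "UBUNTU_18_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }
  ```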
- Start the migration by gradually increasing the `new_worker_count` variable. In the following example, `new_worker_count` is set to 1:
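  Continuing the sketch above, the first increase could be applied by changing the variable's default (or by passing `-var` on the command line):

  ```hcl
  variable "new_worker_count" {
    type    = number
    default = 1   # was 0: creates newpool with one worker and scales oldpool down to one
  }
  ```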
- Apply the change and review the actions that are performed when you change the worker count: the new worker pool and its first worker are created, and the old worker pool is scaled down.
- Verify that the new worker pool and the new worker(s) have been created and the old worker pool is scaled down.
- Finish the migration by setting the new worker pool’s worker count to the same value that the old pool had before the migration. As a best practice, always review your changes using the `terraform plan` command:
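  Continuing the sketch, setting the variable to the old pool's original size (two in this example) moves the remaining worker to the new pool and removes `oldpool`:

  ```hcl
  variable "new_worker_count" {
    type    = number
    default = 2   # same as oldpool's worker count before the migration
  }
  ```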
- Verify that the old worker pool has been deleted.
- Remove the old worker pool resource and the temporary changes from the Terraform script:
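  After the cleanup, the remaining configuration might look like the following sketch: the temporary variable, the `count` expressions, and the `oldpool` resource are gone, and the new pool's worker count is hardcoded:

  ```hcl
  resource "ibm_container_vpc_worker_pool" "newpool" {
    cluster          = ibm_container_vpc_cluster.cluster.id
    worker_pool_name = "newpool"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = 2
    operating_system = "UBUNTU_20_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }
  ```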
Changing the default worker pool
Begin by defining the worker pool as its own resource.
While you are changing the default worker pool, a backup worker pool is required if the change includes a `ForceNew` operation. If you update the default worker pool without having a separate worker pool with existing workers already added, your cluster will stop working until the worker replacement is finished.
- Create the resource similar to the following example:
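  The values below are placeholders; before importing, they should match the default worker pool that already exists in your cluster (the default pool is named `default`):

  ```hcl
  # The cluster's existing default worker pool, managed as its own resource
  resource "ibm_container_vpc_worker_pool" "default" {
    cluster          = ibm_container_vpc_cluster.cluster.id
    worker_pool_name = "default"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = 1
    operating_system = "UBUNTU_18_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }
  ```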
- Import the worker pool:
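  For example, assuming the provider's `<cluster name or ID>/<worker pool ID>` import ID format; check the provider documentation and run `terraform plan` afterwards to confirm that the state matches your configuration:

  ```sh
  terraform import ibm_container_vpc_worker_pool.default <cluster_name_or_id>/<worker_pool_id>
  ```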
- Add the following lifecycle options to `ibm_container_vpc_cluster.cluster` so that changes made by the `ibm_container_vpc_worker_pool.default` resource won’t trigger new updates and won’t trigger `ForceNew`. Note that the events that trigger `ForceNew` might change. Always run `terraform plan` and review the changes before applying them:
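  A sketch of the cluster resource with the lifecycle block added. The exact attributes to ignore depend on which default pool settings you change and on your provider version, so treat this list as a starting point and confirm it with `terraform plan`:

  ```hcl
  resource "ibm_container_vpc_cluster" "cluster" {
    name             = "mycluster"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = 1
    kube_version     = "1.24"
    operating_system = "UBUNTU_18_64"

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }

    # Ignore drift caused by changes made through the separate worker pool resource
    lifecycle {
      ignore_changes = [
        worker_count,
        operating_system,
      ]
    }
  }
  ```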
- In this example, we modify the operating system of the default worker pool and set the worker count to two. Note that updating the worker count would normally resize the worker pool, but since we changed the operating system, a new worker pool is created. Making this change on a cluster resource would trigger the `ForceNew` option on the cluster itself and would result in a new cluster being created. However, since we defined the worker pool resource separately, new workers are created instead:
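  Continuing the sketch, the default worker pool resource with the operating system changed to Ubuntu 20 and the worker count set to two:

  ```hcl
  resource "ibm_container_vpc_worker_pool" "default" {
    cluster          = ibm_container_vpc_cluster.cluster.id
    worker_pool_name = "default"
    vpc_id           = ibm_is_vpc.vpc.id
    flavor           = "bx2.4x16"
    worker_count     = 2                 # scaled up from one worker
    operating_system = "UBUNTU_20_64"    # changed from UBUNTU_18_64; replaces the workers

    zones {
      name      = "us-south-1"
      subnet_id = ibm_is_subnet.subnet.id
    }
  }
  ```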
- Run `terraform plan` to review your changes.
- Apply your changes to replace your Ubuntu 18 worker nodes with Ubuntu 20 worker nodes:
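  For example (the plan output is omitted here; review it before applying):

  ```sh
  terraform plan    # review the planned changes to the default worker pool
  terraform apply   # replace the Ubuntu 18 workers with Ubuntu 20 workers
  ```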
- Verify your changes by listing your worker nodes:
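  For example, using the IBM Cloud CLI; replace the placeholder with your cluster name or ID:

  ```sh
  ibmcloud ks worker ls --cluster <cluster_name_or_id>
  ```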
- After updating the default worker pool, pull your changes into the current state and remove the lifecycle options that you added earlier:
- Then, remove the `ibm_container_vpc_worker_pool.default` resource so that it is no longer managed (see the command after this list).
- Remove the lifecycle options that you added earlier from the cluster resource.
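A sketch of the state cleanup: `terraform state rm` removes the resource from the Terraform state without deleting the worker pool in IBM Cloud. Also delete the corresponding resource block from your configuration files:

```sh
terraform state rm ibm_container_vpc_worker_pool.default
```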
Conclusion
In the previous examples, you learned how to do the following:
- Migrate your worker pools to a new operating system, such as Ubuntu 20.
- Make changes to the default worker pool while using a backup pool to prevent downtime.
For more information about the IBM Cloud provider plug-in for Terraform, see the Terraform registry documentation.
For more information about IBM Cloud Kubernetes Service, see the docs.
For more information about Red Hat OpenShift on IBM Cloud, see the docs.