In the following example scenarios, you will learn how to use Terraform to migrate your worker nodes to a new Ubuntu version (e.g., from Ubuntu 18 to Ubuntu 20) and change your default worker pool to use different worker nodes.
To migrate your worker nodes to a new Ubuntu version, first provision a worker pool that uses the newer version, then add worker nodes to the new pool, and finally remove the original worker pool.
resource "ibm_container_vpc_cluster" "cluster" { ... } resource "ibm_container_vpc_worker_pool" "oldpool" { cluster = ibm_container_vpc_cluster.cluster.id worker_pool_name = "ubuntu18pool" flavor = var.flavor vpc_id = data.ibm_is_vpc.vpc.id worker_count = var.worker_count ... operating_system = "UBUNTU_18_64" }
resource "ibm_container_vpc_worker_pool" "oldpool" { count = var.worker_count - var.new_worker_count == 0 ? 0 : 1 cluster = ibm_container_vpc_cluster.cluster.id worker_pool_name = "ubuntu18pool" flavor = var.flavor vpc_id = data.ibm_is_vpc.vpc.id worker_count = var.worker_count - var.new_worker_count ... operating_system = "UBUNTU_18_64" } resource "ibm_container_vpc_worker_pool" "newpool" { count = var.new_worker_count == 0 ? 0 : 1 cluster = ibm_container_vpc_cluster.cluster.id worker_pool_name = "ubuntu20pool" flavor = var.flavor vpc_id = data.ibm_is_vpc.vpc.id worker_count = var.new_worker_count ... operating_system = "UBUNTU_20_64" }
To begin the migration, plan and apply with one worker moved to the new pool:

terraform plan -var new_worker_count=1
terraform apply -var new_worker_count=1
  # ibm_container_vpc_worker_pool.newpool[0] will be created
  + resource "ibm_container_vpc_worker_pool" "newpool" {
      + cluster                 = "<clusterid>"
      + flavor                  = "bx2.4x16"
      + id                      = (known after apply)
      + labels                  = (known after apply)
      + operating_system        = "UBUNTU_20_64"
      + resource_controller_url = (known after apply)
      + resource_group_id       = (known after apply)
      + secondary_storage       = (known after apply)
      + vpc_id                  = "<vpcid>"
      + worker_count            = 1
      + worker_pool_id          = (known after apply)
      + worker_pool_name        = "ubuntu20pool"

      + zones {
          + name      = "<zone_name>"
          + subnet_id = "<subnet_id>"
        }
    }

  # ibm_container_vpc_worker_pool.oldpool[0] will be updated in-place
  ~ resource "ibm_container_vpc_worker_pool" "oldpool" {
        id           = "<oldpoolid>"
      ~ worker_count = 3 -> 2
        # (9 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Plan: 1 to add, 1 to change, 0 to destroy.
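Because both pools are sized from new_worker_count, you can repeat the plan and apply cycle to move one worker at a time. For example, the intermediate step that brings the new pool to two workers:

terraform apply -var new_worker_count=2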
To finish the migration, set new_worker_count to the full worker count. The old pool's count expression then evaluates to 0, so Terraform removes the Ubuntu 18 pool:

terraform plan -var new_worker_count=3
terraform apply -var new_worker_count=3
...

Terraform will perform the following actions:

  # ibm_container_vpc_worker_pool.newpool[0] will be updated in-place
  ~ resource "ibm_container_vpc_worker_pool" "newpool" {
        id           = "<newpoolid>"
      ~ worker_count = 2 -> 3
        # (9 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # ibm_container_vpc_worker_pool.oldpool[0] will be destroyed
  - resource "ibm_container_vpc_worker_pool" "oldpool" {
      - cluster = "<clusterid>" -> null
        ...
    }

Plan: 0 to add, 1 to change, 1 to destroy.
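Before cleaning up the configuration, you can optionally confirm that the replacement workers are deployed, for example with the same CLI command that is used in the next scenario:

ibmcloud ks worker ls -c <cluster_id>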
resource "ibm_container_vpc_cluster" "cluster" { ... } resource "ibm_container_vpc_worker_pool" "newpool" { cluster = ibm_container_vpc_cluster.cluster.id worker_pool_name = "ubuntu20pool" flavor = var.flavor vpc_id = data.ibm_is_vpc.vpc.id worker_count = var.worker_count ... operating_system = "UBUNTU_20_64" }
The second example scenario changes the default worker pool. Begin by defining the default worker pool as its own resource.
While you are changing the default worker pool, a backup worker pool is required if the change includes a `ForceNew` operation. If you update the default worker pool without having a separate worker pool that already has workers added, your cluster stops working until the worker replacement is finished.
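For illustration only, a minimal sketch of such a backup worker pool is shown below; the pool name, worker count, operating system, and zone placeholders are assumptions and not part of the original configuration.

# Hypothetical backup pool that keeps workers available while the default
# worker pool is replaced. Name, counts, and placeholder values are examples only.
resource "ibm_container_vpc_worker_pool" "backup" {
  cluster          = ibm_container_vpc_cluster.cluster.id
  worker_pool_name = "backuppool"
  flavor           = <flavor>
  vpc_id           = <vpc_id>
  worker_count     = 1
  operating_system = "UBUNTU_20_64"

  zones {
    name      = "<zone_name>"
    subnet_id = "<subnet_id>"
  }
}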
resource "ibm_container_vpc_cluster" "cluster" { ... } resource "ibm_container_vpc_worker_pool" "default" { cluster = ibm_container_vpc_cluster.cluster.id flavor = <flavor> vpc_id = <vpc_id> worker_count = 1 worker_pool_name = "default" operating_system = "UBUNTU_18_64" ... }
Because the default worker pool was created together with the cluster, import it into the Terraform state:

terraform import ibm_container_vpc_worker_pool.default <cluster_id>/<worker_pool_id>
resource "ibm_container_vpc_cluster" "cluster" { ... lifecycle { ignore_changes = [ flavor, operating_system, host_pool_id, secondary_storage, worker_count ] } }
resource "ibm_container_vpc_worker_pool" "default" { cluster = ibm_container_vpc_cluster.cluster.id flavor = <flavor> vpc_id = <vpc_id> worker_count = 2 worker_pool_name = "default" operating_system = "UBUNTU_20_64" ... }
You can watch the worker replacement progress with the CLI:

ibmcloud ks worker ls -c <cluster_id>
You can inspect the cluster resource in the Terraform state and, when needed, remove the default worker pool resource from the state:

terraform state show ibm_container_vpc_cluster.cluster
terraform state rm ibm_container_vpc_worker_pool.default
In the previous examples, you learned how to migrate your worker nodes to a new Ubuntu version and how to change your default worker pool by using Terraform.
For more information about the IBM Cloud provider plug-in for Terraform, see the Terraform registry documentation.
For more information about IBM Cloud Kubernetes Service, see the docs.
For more information about Red Hat OpenShift on IBM Cloud, see the docs.