This blog is part of a series on the IBM Cloud Kubernetes Service’s release of support for the Kubernetes Ingress controller.
For more information about this announcement, check out “Announcing Managed Support of the Kubernetes Ingress Controller for the IBM Cloud Kubernetes Service.”
Introduction
As of 24 August 2020, IBM Cloud Kubernetes Service now supports the Kubernetes Ingress controller for the default application load balancer (ALB) in clusters. As part of the beta release, a migration tool is provided that you can use to migrate Ingress resources from the previous format for the custom IBM Ingress controller to the Kubernetes Ingress format.
The goal of the migration tool is to give users a starting point for how the Kubernetes Ingress resources should look. For example, you can use the migration tool to migrate IBM Ingress controller ingress.bluemix.net annotations to the corresponding Kubernetes Ingress controller nginx.ingress.kubernetes.io annotations. Additionally, ConfigMap keys that are used by the IBM Ingress controller can be migrated to the new ConfigMap and ConfigMap keys that are used by the Kubernetes Ingress controller. Note that the migration tool migrates all the Ingress resources in all namespaces at once; there is no way to run the migration in only one namespace.
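As a simplified illustration of an annotation migration (the tool's exact output is shown in the example flow later in this post), the IBM Ingress controller annotation for HTTPS redirection roughly corresponds to the community controller's ssl-redirect annotation:
# IBM Ingress controller resource (before migration)
ingress.bluemix.net/redirect-to-https: "True"
# Kubernetes Ingress controller resource (after migration)
nginx.ingress.kubernetes.io/ssl-redirect: "true"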
Operation
The migration tool has two different modes of operation:
- Running in test mode
- Running in production mode
Test mode
In “test” mode, the migration tool generates a test environment on the public network. Within test mode, there are two additional options: you can choose to migrate only public Ingress resources (--type test) or both public and private Ingress resources (--type test-with-private).
Using the --type test-with-private parameter generates Kubernetes Ingress resources for the IBM Ingress resources that were targeted to private ALBs by using the ingress.bluemix.net/ALB-ID annotation. However, the host name that is defined in your private Ingress resources is then exposed on the public network.
Because you might have multiple host names across your Ingress resources, a unique test host name is generated for each: every ‘.’ in the original host name is replaced with ‘-’, a random suffix is appended, and the result is prepended to the wildcard test subdomain.
Examples of how test host names are generated for various host names that are defined in Ingress resources:
- Example 1
  - Original Ingress host name: example.com
  - Kubernetes test Ingress host name: example-com-<random{6}>.myCluster-<unique_hash>-m000.containers.appdomain.cloud
- Example 2
  - Original Ingress host name: rand.example.com
  - Kubernetes test Ingress host name: rand-example-com-<random{6}>.myCluster-<unique_hash>-m000.containers.appdomain.cloud
- Example 3
  - Original Ingress host name: test1.myCluster-<unique_hash>-0000.containers.appdomain.cloud
  - Kubernetes test Ingress host name: test1-<random{6}>.myCluster-<unique_hash>-m000.containers.appdomain.cloud
Production mode
In production mode, the Kubernetes Ingress resources are generated with the same host names that were defined in resources for the IBM Ingress controller; the host names are not regenerated for the new Ingress resources. Ingress resources used by the public Kubernetes Ingress controller have the kubernetes.io/ingress.class: public-iks-k8s-nginx annotation. Ingress resources used by the private Kubernetes Ingress controller (previously indicated by the ingress.bluemix.net/ALB-ID annotation in IBM Ingress resources) have the kubernetes.io/ingress.class: private-iks-k8s-nginx annotation.
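For example, the metadata of a generated resource for the public Kubernetes Ingress controller looks like this:
metadata:
  annotations:
    kubernetes.io/ingress.class: public-iks-k8s-nginx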
Currently, when the migration tool is run in production mode, any changes that you made to the test Kubernetes Ingress resources during test mode are not incorporated. After you run the migration tool in production mode, you must re-verify the generated Kubernetes Ingress resources and make the same adjustments that you made in test mode.
Prerequisites
General prerequisites to help ensure a successful migration:
- Network policies are set correctly for incoming traffic.
- One unique hostname is defined per IBM Ingress resource.
Specific requirements to run the migration tool:
- Install the Kubernetes Service plug-in to run ibmcloud ks commands.
- Ensure that IP addresses are available for the additional load balancer that is created during test mode operation.
- Create additional ALBs that run the Kubernetes Ingress controller image, or disable an existing ALB and re-enable it with the Kubernetes Ingress controller image.
- Copy any TLS secrets into the same namespace as the Ingress resources. For the Kubernetes Ingress controller implementation, secrets must exist in the same namespace as the Ingress resource (see the sketch below).
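For example, a minimal way to copy a TLS secret named example-com from the default namespace into a hypothetical my-app namespace (you might first need to strip read-only fields such as resourceVersion and uid from the output):
$ kubectl get secret example-com -n default -o yaml | sed 's/namespace: default/namespace: my-app/' | kubectl apply -f -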
Results
After you run the migration tool, you can see the results by running ibmcloud ks ingress alb migrate status --cluster <cluster name/id>. The command output provides the following information about the migration status:
- Which Ingress resources had an error migrating an annotation.
- Which ConfigMap parameter migrations had errors.
- Which Ingress resources have been successfully migrated.
- The test host names in the generated Kubernetes Ingress resources.
- The mode (test/test-with-private/production) in which the migration tool ran.
- The status of the migration.
Cleanup
After you complete a migration and are happy with the results, you can run ibmcloud ks ingress alb migrate clean --iks-ingresses to remove the original IBM Ingress resources and ensure that the Kubernetes Ingress controller can’t read them. Note that no copy is made of the original IBM Ingress resources, so after they are deleted, there is no way to recover them.
Additionally, if you want to start over, you can run ibmcloud ks ingress alb migrate clean and answer the prompts to remove either the generated test or Kubernetes Ingress resources and reset the ConfigMap used by the Kubernetes Ingress controller.
Limitations
The migration tool has limitations. The IBM Ingress controller has some features that the Kubernetes Ingress controller either does not support or approaches differently, and those features cannot be migrated automatically.
For example, the Kubernetes Ingress controller does not automatically update the load balancer port mappings based on its configuration. If you change the listening ports of the Ingress controller, you must manually update the load balancer service that exposes the Ingress controller, as in the sketch below. Error handling, rate limiting, and some upstream proxying configurations also differ between the two Ingress controllers.
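A minimal sketch of such a manual update, assuming the ALB service lives in the kube-system namespace and uses the public ALB name shown later in this example (adjust both to your cluster):
$ kubectl edit service -n kube-system public-crbspdck020295l7kd67og-alb1
# In the editor, update the entries under spec.ports so that they match
# the listening ports that you configured for the Ingress controller.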
Details about mapping each IBM Ingress controller annotation to the equivalent Kubernetes Ingress controller annotation and the limitations can be found in the documentation.
Example flow
Example environment setup (before migration)
The cluster used in this example runs the NGINX “cafe” example applications:
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
coffee 2/2 2 2 4h44m
tea 3/3 3 3 4h44m
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coffee-svc ClusterIP 172.21.179.18 <none> 80/TCP 4h45m
tea-svc ClusterIP 172.21.110.201 <none> 80/TCP 4h45m
The tea and coffee applications are exposed with the IBM Ingress controller using the Ingress resource below:
$ kubectl get ingress cafe -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.bluemix.net/redirect-to-https: "True"
    ingress.bluemix.net/rewrite-path: serviceName=tea-svc rewrite=/drinks/tea; serviceName=coffee-svc rewrite=/
    ingress.bluemix.net/sticky-cookie-services: serviceName=tea-svc name=sticky-tea expires=1h path=/ hash=sha1
  name: cafe
  namespace: default
spec:
  rules:
  - host: tea.example.com
    http:
      paths:
      - backend:
          serviceName: tea-svc
          servicePort: 80
        path: /
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: coffee-svc
          servicePort: 80
        path: /coffee
  tls:
  - hosts:
    - tea.example.com
    - example.com
    secretName: example-com
status:
  loadBalancer:
    ingress:
    - ip: 169.60.26.66
The applications are accessible on the defined hosts and paths:
$ curl -L -v tea.example.com
...
< Set-Cookie: sticky-tea=26e8d3b77d1238d643aed6f1824d1dc8eff19f05; Expires=Tue, 11-Aug-2020 19:09:46 GMT; Path=/
...
Server address: 172.30.230.84:8080
Server name: tea-7769bdf646-lr5xz
Date: 11/Aug/2020:18:09:46 +0000
URI: /drinks/tea
Request ID: 4c377ca0a35936c1f8fd9237de044777
$ curl -L example.com/coffee
Server address: 172.30.230.82:8080
Server name: coffee-7c45f487fd-7pbqk
Date: 11/Aug/2020:18:02:55 +0000
URI: /coffee
Request ID: 4fea4a338b5775eeb1105ab92188bffb
Migration overview
In this example, you run a simple migration from the IBM Ingress controller to the Kubernetes Ingress controller.
The migration consists of the following steps:
- Run test migration
- Clean up test Ingress resources
- Run production migration
- Clean up original IBM Ingress resources
Let’s get started!
1. Run test migration
Start by running a test migration:
$ ibmcloud ks ingress alb migrate start --cluster migration-example --type test
Migrate Ingress resources in cluster migration-example with migration type test? [y/N]> y
Starting the migration of Ingress components and resources in cluster migration-example with type test...
OK
Note: You can also run the migration in the “test-with-private” mode, in which Ingress resources with the ingress.bluemix.net/ALB-ID annotation are also exposed on the provided public test subdomain.
Check the status of the migration:
$ ibmcloud ks ingress alb migrate status --cluster migration-example
OK
Cluster bspdck020295l7kd67og
Migration type test
Status completed
Test subdomain migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
Test secret migration-example-7a13fae41c466d2224b963f00c9f778f-m000
Subdomain mappings
example-com -> example-com-mqxvj3.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
tea.example.com -> tea-example-com-ea3ayc.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
Migrated resources
Resource name: ibm-cloud-provider-ingress-cm
Resource namespace: kube-system
Resource kind: ConfigMap
Migrated to:
- ConfigMap/ibm-k8s-controller-config-test
Resource migration warnings:
- The 'vts-status-zone-size' parameter could not be migrated.
- The 'ingress-resource-creation-rate' parameter could not be migrated.
- The 'ingress-resource-timeout' parameter could not be migrated.
- The 'private-ports' parameter could not be migrated.
- The 'public-ports' parameter could not be migrated.
Resource name: cafe
Resource namespace: default
Resource kind: Ingress
Migrated to:
- Ingress/default-cafe-tea-svc
- Ingress/default-cafe-coffee-svc-coffee
- Ingress/default-cafe-server
Resource migration warnings:
- Annotation 'ingress.bluemix.net/sticky-cookie-services' does not include the 'secure' parameter. However, in the community Ingress implementation, sticky cookies must always be secure and the 'Secure' attribute is added to cookies by default. For more info about session affinity, see https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
- Annotation 'ingress.bluemix.net/sticky-cookie-services' does not include the 'HttpOnly' parameter. However, in the community Ingress implementation, sticky cookies must always be HTTP only and the 'HttpOnly' attribute is added to cookies by default. For more info about session affinity, see https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
The output shows that the migration completed in test mode. The subdomain mappings show the pairs of original and test subdomains. Under the migrated resources, you can check how the original resources were translated, including warnings that signal behavioral differences and potential problems.
The following resources were generated:
$ kubectl get ingress default-cafe-tea-svc -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: test
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /drinks/tea
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "false"
    nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
    nginx.ingress.kubernetes.io/session-cookie-name: sticky-tea
    nginx.ingress.kubernetes.io/session-cookie-path: /
  name: default-cafe-tea-svc
  namespace: default
spec:
  rules:
  - host: tea-example-com-ea3ayc.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
    http:
      paths:
      - backend:
          serviceName: tea-svc
          servicePort: 80
        path: /
  tls:
  - hosts:
    - tea-example-com-ea3ayc.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
    secretName: migration-example-7a13fae41c466d2224b963f00c9f778f-m000
status:
  loadBalancer:
    ingress:
    - ip: 169.60.26.67
$ kubectl get ingress default-cafe-coffee-svc-coffee -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: test
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: default-cafe-coffee-svc-coffee
  namespace: default
spec:
  rules:
  - host: example-com-mqxvj3.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
    http:
      paths:
      - backend:
          serviceName: coffee-svc
          servicePort: 80
        path: /coffee
  tls:
  - hosts:
    - example-com-mqxvj3.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
    secretName: migration-example-7a13fae41c466d2224b963f00c9f778f-m000
status:
  loadBalancer:
    ingress:
    - ip: 169.60.26.67
$ kubectl get ingress default-cafe-server -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: test
  name: default-cafe-server
  namespace: default
spec:
  rules:
  - host: tea-example-com-ea3ayc.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
  - host: example-com-mqxvj3.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
  tls:
  - hosts:
    - tea-example-com-ea3ayc.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
    - example-com-mqxvj3.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
    secretName: migration-example-7a13fae41c466d2224b963f00c9f778f-m000
status:
  loadBalancer:
    ingress:
    - ip: 169.60.26.67
Because the migration worked well for this scenario, no manual adjustments to the generated resources are required. Next, check whether the applications are accessible on the test subdomain through the test Kubernetes Ingress controller:
$ curl -L -v tea-example-com-ea3ayc.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud
...
< set-cookie: sticky-tea=e3c6b44880fa682c1889fbc95b1e77cc; Expires=Tue, 11-Aug-20 19:25:57 GMT; Max-Age=3600; Path=/; Secure; HttpOnly
...
Server address: 172.30.230.84:8080
Server name: tea-7769bdf646-lr5xz
Date: 11/Aug/2020:18:25:57 +0000
URI: /drinks/tea
Request ID: 7833a1a2aa63789cb5b79755d5f27caa
$ curl -L example-com-mqxvj3.migration-example-7a13fae41c466d2224b963f00c9f778f-m000.us-south.containers.appdomain.cloud/coffee
Server address: 172.30.230.82:8080
Server name: coffee-7c45f487fd-7pbqk
Date: 11/Aug/2020:18:28:31 +0000
URI: /
Request ID: 90f9b52f1de99a6a9c6fdec68a7d7fd1
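Notice that the sticky cookie set by the Kubernetes Ingress controller now includes the Secure and HttpOnly attributes, exactly as the migration warnings indicated.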
2. Clean up test Ingress resources
After testing is completed, clean up the unnecessary resources:
$ ibmcloud ks ingress alb migrate clean --cluster migration-example
Delete all Ingress resources and ConfigMaps that were automatically generated during an Ingress migration? (This will delete Ingress resources and ConfigMaps listed in the 'Migrated to' sections in the output of 'ibmcloud ks ingress alb migrate status'.) [y/N]> y
Reset the 'ibm-k8s-controller-config' ConfigMap to the default settings? [y/N]> n
OK
Cluster bspdck020295l7kd67og
Test ALB deleted yes
Test subdomain unregistered yes
Kubernetes ingress controller ConfigMap reset no
Deleted resources
default namespace
- Ingress/default-cafe-tea-svc
- Ingress/default-cafe-coffee-svc-coffee
- Ingress/default-cafe-server
kube-system namespace
- ConfigMap/ibm-k8s-controller-config-test
The generated resources and the test ALB service are deleted, and the test subdomain is unregistered.
3. Run production migration
After experimenting with the test migration, start the production migration:
$ ibmcloud ks ingress alb migrate start --cluster migration-example --type production
Migrate Ingress resources in cluster migration-example with migration type production? [y/N]> y
Starting the migration of Ingress components and resources in cluster migration-example with type production...
OK
When the production migration is done, the same resources that were generated in test mode are generated in production mode. This time, the generated resources contain the public-iks-k8s-nginx or private-iks-k8s-nginx class so that they can be processed by the public or private ALBs that run the Kubernetes Ingress controller image. Next, change the existing ALBs to use the Kubernetes Ingress controller image:
$ ibmcloud ks ingress alb versions
OK
IBM Cloud Ingress: 'auth' version
421
IBM Cloud Ingress versions
647 (default)
645
642
Kubernetes Ingress versions
0.34.1_365_iks
0.33.0_360_iks (default)
0.32.0_123_iks
$ ibmcloud ks ingress alb ls --cluster migration-example
OK
ALB ID Enabled Status Type ALB IP Zone Build ALB VLAN ID NLB Version
private-crbspdck020295l7kd67og-alb1 false disabled private - dal10 ingress:/ingress-auth: 2838378 1.0
public-crbspdck020295l7kd67og-alb1 true enabled public 169.60.26.66 dal10 ingress:647/ingress-auth:421 2838388 1.0
$ ibmcloud ks ingress alb configure classic --alb-id public-crbspdck020295l7kd67og-alb1 --disable
Configuring ALB...
OK
$ ibmcloud ks ingress alb configure classic --alb-id public-crbspdck020295l7kd67og-alb1 --enable --version 0.34.1_365_iks
Configuring ALB...
OK
Note: It might take up to 5 minutes for your ALB to be fully deployed.
Finally, the ALBs are running the Kubernetes Ingress controller and processing the migrated Ingress resources.
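To confirm that the ALB picked up the new image, you can list the ALBs again; the Build column should now show the Kubernetes Ingress version that you enabled (0.34.1_365_iks in this example):
$ ibmcloud ks ingress alb ls --cluster migration-example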
Check access to the applications:
$ curl -L -v tea.example.com
...
< set-cookie: sticky-tea=e25870785b6a5095a03c3bcb7f9d278f; Expires=Tue, 11-Aug-20 20:22:22 GMT; Max-Age=3600; Path=/; Secure; HttpOnly
...
Server address: 172.30.230.86:8080
Server name: tea-7769bdf646-ptjcz
Date: 11/Aug/2020:19:22:22 +0000
URI: /drinks/tea
Request ID: a56b6dc3ac670ef5c3048b6f74ce307f
$ curl -L example.com/coffee
Server address: 172.30.230.83:8080
Server name: coffee-7c45f487fd-bpphz
Date: 11/Aug/2020:19:23:28 +0000
URI: /
Request ID: eadb10d9deddd480a96b41559be828ac
4. Clean up original IBM Ingress resources
As the last step of the migration, delete the original resources from the cluster:
$ ibmcloud ks ingress alb migrate clean --cluster migration-example --iks-ingresses
Delete all Ingress resources and ConfigMaps that were automatically generated during an Ingress migration? (This will delete Ingress resources and ConfigMaps listed in the 'Migrated to' sections in the output of 'ibmcloud ks ingress alb migrate status'.) [y/N]> n
Delete all Ingress resources of class 'iks-nginx', class 'nginx', or of no class? [y/N]> y
Reset the 'ibm-k8s-controller-config' ConfigMap to the default settings? [y/N]> n
OK
Cluster bspdck020295l7kd67og
Test ALB deleted no
Test subdomain unregistered no
Kubernetes ingress controller ConfigMap reset no
Deleted resources
default namespace
- Ingress/cafe
This is the end of the example migration.
More information
For more information, check out our official documentation.
Contact us
If you have questions, engage our team via Slack by registering here and join the discussion in the #general channel on our public IBM Cloud Kubernetes Service Slack.