Enable Istio with IBM Cloud Private
Istio is an open platform that you can use to connect, secure, control, and observe microservices. With Istio, you can create a network of deployed services that include load balancing, service-to-service authentication, monitoring, and more, without changing the service code.
Limitation: Istio does not support Federal Information Processing Standards (FIPS). For more information, see FIPS 140-2 encryption using Istio. Istio is disabled by default in the Cloud Foundry Enterprise Environment installer.
To add Istio support to services, you deploy a special sidecar proxy throughout your environment. The proxy intercepts all network communication between microservices and is configured and managed by using the control plane functionality that Istio provides.
- Enabling Istio during cluster installation
- Installing Istio for an existing cluster
- Enabling Istio CNI for an existing cluster
- Enabling tracing for an existing cluster
- Verifying the installation
- Upgrading Istio for an existing cluster
- Deploying the applications
- Collecting and visualizing
- Restrictions
- Extending self-signed certificate lifetime
IBM Cloud Private version 3.2.1 supports two methods to enable Istio. You can choose to enable Istio during cluster installation or install Istio chart from the Catalog after cluster installation. Istio fully supports Linux®, Linux® on Power® (ppc64le), and Linux® on IBM® Z and LinuxONE platforms.
Enabling Istio during cluster installation
Note: You must have a minimum of 8 cores on your management node.
To enable Istio, change the value of the istio parameter to enabled in the list of management services in the config.yaml file. Set the parameter as shown in the following example:
management_services:
  istio: enabled
  vulnerability-advisor: disabled
  storage-glusterfs: disabled
  storage-minio: disabled
After you set the parameter, install IBM Cloud Private. Istio is installed during your IBM Cloud Private cluster installation.
Note: If you enabled SELinux on the IBM Cloud Private nodes, you must enable privileged permission for the Istio sidecar by adding the following section to the config.yaml file:
istio:
  global:
    proxy:
      privileged: true
Enabling Istio Kiali, Grafana, and Prometheus during cluster installation
To enable Kiali, Grafana, and Prometheus, first enable Istio. Then, add the following code to the config.yaml file:
istio:
  kiali:
    enabled: true
  grafana:
    enabled: true
  prometheus:
    enabled: true
Installing Istio for an existing cluster
Note: An IBM Cloud Private 3.2.1 cluster supports the ibm-istio chart versions 1.0.x, 1.1.x, and 1.2.x. The ibm-istio 1.2.x charts are now available in the ibm-charts repository: https://raw.githubusercontent.com/IBM/charts/master/repo/stable/.
You can deploy Istio if you already have an IBM Cloud Private 3.2.1 cluster installed. To install from the IBM Cloud Private management console, click Catalog and search for the ibm-istio chart.
-
If you are enabling Grafana with security mode, create the secret first by following the procedure:
-
Encode the username by running the following command (you can change the username):
echo -n 'admin' | base64
YWRtaW4=
-
Encode the passphrase by running the following command (you can change the passphrase):
echo -n 'admin' | base64
YWRtaW4=
-
Set the namespace to where Istio is installed by running the following command:
NAMESPACE=istio-system
-
Create a secret for Grafana by running the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: $NAMESPACE
  labels:
    app: grafana
type: Opaque
data:
  username: YWRtaW4=
  passphrase: YWRtaW4=
EOF
-
If you are enabling kiali, you also need to create the secret that contains the username and passphrase for the Kiali dashboard. Run the following commands:
echo -n 'admin' | base64
YWRtaW4=
echo -n 'admin' | base64
YWRtaW4=
NAMESPACE=istio-system
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: $NAMESPACE
  labels:
    app: kiali
type: Opaque
data:
  username: YWRtaW4=
  passphrase: YWRtaW4=
EOF
-
Log in to the IBM Cloud Private management console, and then click Menu > Catalog.
-
Search for Istio in the Search bar. You can also find Istio from the Filter or from Categories (Operation category). After the search is complete, the ibm-istio chart is displayed.
Note: The ibm-istio chart is hosted in multiple Helm repositories, depending on the version that you are installing. The ibm-istio 1.0.x and 1.1.x charts are available only in the ibm-charts repository. The ibm-istio 1.2.x chart is available from either the ibm-charts or mgmt-charts Helm repository.
-
Click the ibm-istio chart. A readme file displays information about installing, uninstalling, configuring, and other chart details for Istio.
-
Click Configure to navigate to the configuration page.
-
Name your Helm release, and select istio-system as the target namespace and local-cluster as the target cluster from the menu. The name must consist of lowercase alphanumeric characters or dash characters (-), and must start and end with an alphanumeric character.
-
Be sure to read and agree to the license agreement.
-
Optional: Customize the All parameters fields to your preference.
-
Click Install to deploy the Istio chart and create an Istio release.
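If you prefer the command line over the management console, you can also deploy the chart with the Helm CLI. The following commands are a minimal sketch, not the documented procedure; they assume that the Helm CLI is already installed and configured for your cluster and that you accept the default chart values:
# Add the public ibm-charts repository that hosts the ibm-istio chart
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
helm repo update
# Install the chart as a release named "istio" into the istio-system namespace
helm install ibm-charts/ibm-istio --name istio --namespace istio-system --tls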
Enabling Istio CNI for an existing cluster
For application pods in the Istio service mesh, all traffic to and from the pods must be intercepted by the sidecar proxies. The Istio CNI plug-in sets up networking for the pods to fulfill this requirement in place of the privileged istio-init init container, which requires the NET_ADMIN capability and is injected into the pods along with the istio-proxy sidecars. The plug-in removes the need for a privileged NET_ADMIN container in the application pods.
Note: Currently, the Istio CNI only supports IPv4.
To enable Istio CNI in your existing cluster, complete the following steps:
-
Install Helm CLI. For more information, see Installing the Helm CLI (helm).
-
Get the existing values from the values.yaml file by using the following command:
helm get values istio --tls > istio-old-values.yaml
-
Upgrade the istio chart by using the following command:
helm upgrade istio <path-to-the-istio-chart> --namespace istio-system --force -f istio-old-values.yaml --set istiocni.enabled=true --tls
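After the upgrade completes, you can confirm that the CNI plug-in was deployed. The following check is a sketch; the istio-cni-node daemon set name is an assumption based on the upstream Istio CNI chart, so adjust the filter if your release names it differently:
kubectl get daemonset --all-namespaces | grep istio-cni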
Enabling tracing for an existing cluster
To enable tracing in your existing cluster, complete the following steps:
-
Install Helm CLI. For more information, see Installing the Helm CLI (helm).
-
Get the existing values from the values.yaml file:
helm get values istio --tls > istio-old-values.yaml
-
Upgrade the istio chart:
helm upgrade istio <path-to-the-istio-chart> --namespace istio-system --force -f istio-old-values.yaml --set tracing.enabled=true --tls
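When the upgrade finishes, the tracing components appear in the istio-system namespace; the istio-tracing-* pod and the tracing service are also listed in the Verifying the installation section that follows. For a quick targeted check, run:
kubectl -n istio-system get pods | grep istio-tracing
kubectl -n istio-system get svc tracing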
Verifying the installation
After installation completes, verify that all the components that you enabled for the Istio control plane are created and running:
-
Ensure that the services are deployed by running the following command to get a list of services:
kubectl -n istio-system get svc
Note: The following Kubernetes services are mandatory: istio-pilot, istio-ingressgateway, istio-egressgateway, istio-policy, istio-telemetry, istio-citadel, istio-galley, and, optionally, istio-sidecar-injector, prometheus, grafana, jaeger-*, kiali*, tracing, zipkin.
The output might resemble the following content:
NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                                     AGE
grafana                  ClusterIP      10.0.0.135   <none>        3000/TCP                                                                                                    37m
istio-citadel            ClusterIP      10.0.0.167   <none>        8060/TCP,9093/TCP                                                                                           37m
istio-egressgateway      ClusterIP      10.0.0.79    <none>        80/TCP,443/TCP                                                                                              37m
istio-galley             ClusterIP      10.0.0.70    <none>        443/TCP,9093/TCP                                                                                            37m
istio-ingressgateway     LoadBalancer   10.0.0.233   <pending>     80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30692/TCP,8060:32603/TCP,15030:31295/TCP,15031:31856/TCP   37m
istio-pilot              ClusterIP      10.0.0.148   <none>        15010/TCP,15011/TCP,8080/TCP,9093/TCP                                                                       37m
istio-policy             ClusterIP      10.0.0.89    <none>        9091/TCP,15004/TCP,9093/TCP                                                                                 37m
istio-sidecar-injector   ClusterIP      10.0.0.199   <none>        443/TCP                                                                                                     37m
istio-telemetry          ClusterIP      10.0.0.140   <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                                                       37m
jaeger-agent             ClusterIP      None         <none>        5775/UDP,6831/UDP,6832/UDP                                                                                  37m
jaeger-collector         ClusterIP      10.0.0.102   <none>        14267/TCP,14268/TCP                                                                                         37m
jaeger-query             ClusterIP      10.0.0.118   <none>        16686/TCP                                                                                                   37m
kiali                    ClusterIP      10.0.0.177   <none>        20001/TCP                                                                                                   37m
kiali-jaeger             NodePort       10.0.0.65    <none>        20002:32439/TCP                                                                                             37m
prometheus               ClusterIP      10.0.0.200   <none>        9090/TCP                                                                                                    37m
tracing                  ClusterIP      10.0.0.99    <none>        16686/TCP                                                                                                   37m
zipkin                   ClusterIP      10.0.0.134   <none>        9411/TCP
-
Ensure that the corresponding Kubernetes pods are deployed and all containers are up. Run the following command:
kubectl -n istio-system get pods
Note: The following pods are mandatory: istio-pilot-*, istio-ingressgateway-*, istio-egressgateway-*, istio-policy-*, istio-telemetry-*, istio-citadel-*, istio-galley-*, and, optionally, istio-sidecar-injector-*, prometheus-*, grafana-*, istio-tracing-*, kiali*.
The output might resemble the following content:
NAME                                      READY   STATUS    RESTARTS   AGE
grafana-75f4f8dcf7-2p92z                  1/1     Running   0          37m
istio-citadel-5d5d5bcd5-tmv2w             1/1     Running   0          37m
istio-egressgateway-6669b4888d-t8fqs      1/1     Running   0          37m
istio-galley-d6d995d66-d6tb8              1/1     Running   0          37m
istio-ingressgateway-57bf47dc7c-ntz8h     1/1     Running   0          37m
istio-pilot-745899bb46-kf4z4              2/2     Running   0          37m
istio-policy-57567ff748-96vvb             2/2     Running   0          37m
istio-sidecar-injector-76fc499f9c-r57bw   1/1     Running   0          37m
istio-telemetry-6fc9f55c4f-4229b          2/2     Running   0          37m
istio-tracing-66f4676d88-wjgzr            1/1     Running   0          37m
kiali-7bdd48bd7d-b6vwd                    1/1     Running   0          37m
prometheus-b8446f488-fpprf                1/1     Running   0          37m
Upgrading Istio for an existing cluster
-
Install Helm CLI. For more information, see Installing the Helm CLI (helm).
-
Add the IBM Cloud Private built-in mgmt-charts Helm repository by running the following command:
$ helm repo add mgmt-charts https://<cluster_CA_domain>:8443/mgmt-repo/charts --ca-file ${HELM_HOME}/ca.pem
-
Fetch the ibm-istio chart and decompress it to the local file system by running the following command:
$ helm fetch mgmt-charts/ibm-istio --untar
-
Apply all the Istio Custom Resource Definitions (CRDs) for the current version before upgrading. Note: There is a known issue where the crd-install hook does not work during the upgrade.
-
To upgrade the ibm-istio chart to the current version, you need to manually apply all the CRDs before upgrading by using the following commands:
$ kubectl apply -f ibm-istio/additionalFiles/crds/crd-10.yaml
$ kubectl apply -f ibm-istio/additionalFiles/crds/crd-11.yaml
$ kubectl apply -f ibm-istio/additionalFiles/crds/crd-12.yaml
-
If you are enabling certmanager, you also need to install its CRDs and wait a few seconds for the CRDs to be committed in the kube-apiserver. Use the following commands:
$ kubectl apply -f ibm-istio/additionalFiles/crds/crd-certmanager-10.yaml
$ kubectl apply -f ibm-istio/additionalFiles/crds/crd-certmanager-11.yaml
-
Get the external values for the old chart release and write them to a local file that is named values-old.yaml:
helm get values istio --tls > values-old.yaml
-
Create a new values file that is named values-override.yaml that contains all the customized fields for the new chart release. For example, if you want to deploy all Istio components to the management node and enable kiali, run the following commands:
cat > values-override.yaml <<EOF
global:
  defaultTolerations:
  - key: "dedicated"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "CriticalAddonsOnly"
    operator: "Exists"
  defaultNodeSelector:
    management: "true"
kiali:
  enabled: true
EOF
-
Upgrade the Istio chart with the --force flag by using the following command:
helm upgrade istio ibm-istio --namespace istio-system --force -f values-old.yaml -f values-override.yaml --tls
Note: Add the --dry-run and --debug flags to helm upgrade before you upgrade the Istio chart to help troubleshoot issues.
helm upgrade istio ibm-istio --namespace istio-system --force -f values-old.yaml -f values-override.yaml --tls --dry-run --debug
If there are no errors in the output, you can remove the --dry-run and --debug flags.
Note: Helm manages all the resources in the chart templates based on each chart release, so if the upgrade or installation fails, you need to delete the chart release and clean all the relevant resources before proceeding to the next Helm upgrade or installation. For the Istio chart, use the following commands to completely clean the chart release and resources:
helm delete istio --purge --no-hooks --tls
kubectl delete clusterrole $(kubectl get clusterrole | grep istio | awk '{print $1}')
kubectl delete clusterrolebinding $(kubectl get clusterrolebinding | grep istio | awk '{print $1}')
kubectl delete crd $(kubectl get crds | grep istio.io | awk '{print $1}')
Note: Starting with Istio 1.1.x, policy checks are turned off by default to improve performance for most customer scenarios. If you are upgrading Istio from 1.0.x and you did not use the policy check function, it is recommended to disable policy checks during the upgrade by adding the --set mixer.policy.enabled=false parameter to the helm upgrade command.
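For example, assuming that you reuse the values files from the previous steps, the upgrade command with policy checks disabled might resemble the following command:
helm upgrade istio ibm-istio --namespace istio-system --force -f values-old.yaml -f values-override.yaml --set mixer.policy.enabled=false --tls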
Deploying the applications
After the Istio control plane is successfully deployed, you can start deploying your applications.
-
Creating a ClusterRoleBinding to grant privileged permissions.
If you are deploying applications in the default namespace, you can skip this step. If you are deploying applications with Istio injection to non-default namespaces, you must create an extra ClusterRoleBinding to grant privileged permissions to the service accounts in that namespace.
For example, to deploy an application to the non-default namespace istio-lab, edit your YAML file. Your YAML file content might resemble the following code:
export APPLICATION_NAMESPACE=istio-lab
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ibm-istio-privileged-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-privileged-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:${APPLICATION_NAMESPACE}
EOF
-
Creating imagePullSecrets for the IBM Cloud Private Docker private registry.
You must create imagePullSecrets for the IBM Cloud Private Docker private registry in the namespace where your applications are deployed. Your applications can then pull the sidecar images from the Docker private registry. The syntax of the Docker private registry name is <cluster_hostname>:<registry_port>, the default value of which is mycluster.icp:8500.
-
Create a secret that is named infra-registry-key that holds your authorization token for the IBM Cloud Private Docker registry.
kubectl -n <application_namespace> create secret docker-registry infra-registry-key \
  --docker-server=<cluster_hostname>:<registry_port> --docker-username=<your-name> \
  --docker-password=<your-password> --docker-email=<your-email>
-
Patch your secret to the ServiceAccount that is associated with your applications.
kubectl -n <application_namespace> get serviceaccount <your-service-account-name> -o yaml | grep -w infra-registry-key || kubectl -n <application_namespace> patch serviceaccount <your-service-account-name> -p '{"imagePullSecrets": [{"name": "infra-registry-key"}]}'
-
If you enabled the automatic sidecar injection, the istio-sidecar-injector automatically injects Envoy containers into your application pods that run in namespaces that are labeled with istio-injection=enabled. To automatically inject Envoy containers, complete the following steps:
-
Label your namespace with istio-injection=enabled. Run the following command:
kubectl label namespace <application_namespace> istio-injection=enabled
-
Deploy your application into the namespace from your .yaml file by running the following command:
kubectl create -n <application_namespace> -f <your-app-spec>.yaml
-
If you did not enable automatic sidecar injection, you can manually inject Envoy containers.
To manually inject the sidecar, you must use istioctl. Deploy your applications with manual sidecar injection by running the following command:
kubectl create -f <(istioctl kube-inject -f <your-app-spec>.yaml)
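With either injection method, you can confirm that the sidecar was added by checking the container count of your application pods; pods with an injected istio-proxy sidecar report one extra container (for example, 2/2 in the READY column). The pod name in the second command is a placeholder:
kubectl -n <application_namespace> get pods
kubectl -n <application_namespace> get pod <your-app-pod> -o jsonpath='{.spec.containers[*].name}'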
Collecting and visualizing
Collecting trace spans by using Jaeger
By default, Istio enables Jaeger with a service type of ClusterIP. During installation, you can change the default service type to NodePort so that you can access Jaeger from an external environment.
To expose Jaeger through an additional NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service jaeger-query --type=NodePort --name=<jaeger-query-svc> --namespace istio-system
export JAEGER_URL=$(kubectl get po -l app=jaeger -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <jaeger-query-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${JAEGER_URL}/
You can access http://${JAEGER_URL}/ from your browser to view trace spans.
Collecting metrics by using Prometheus
Similar to the Collecting trace spans by using Jaeger section, if you install Istio with prometheus enabled, there is a prometheus service with a type of ClusterIP by default. You can change the default service type to NodePort.
To expose Prometheus through an additional NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service prometheus --type=NodePort --name=<prometheus-svc> --namespace istio-system
export PROMETHEUS_URL=$(kubectl get po -l app=prometheus -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <prometheus-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${PROMETHEUS_URL}
You can access http://${PROMETHEUS_URL}/ from your browser to verify that metrics are being collected into Prometheus.
Visualizing metrics with Grafana
Similar to the Jaeger and Prometheus services, if you install Istio with grafana enabled, there is a grafana service with a type of ClusterIP by default. You can change the default service type to NodePort.
To expose Grafana through an additional NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service grafana --type=NodePort --name=<grafana-svc> --namespace istio-system
export GRAFANA_URL=$(kubectl get po -l app=grafana -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <grafana-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${GRAFANA_URL}/
You can access http://${GRAFANA_URL}/ from your browser to view the Grafana web page.
Observing microservices with Kiali
Like the Jaeger, Prometheus, and Grafana services, if you install Istio with kiali enabled, there is a kiali service with a type of ClusterIP by default. You can change the default service type to NodePort.
To expose Kiali through an additional NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service kiali --type=NodePort --name=<kiali-svc> --namespace istio-system
export KIALI_URL=$(kubectl get po -l app=kiali -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <kiali-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${KIALI_URL}/
You can access http://${KIALI_URL}/ from your browser to view the Kiali dashboard.
For more information about Istio, see Istio Docs.
Restrictions
Istio on Linux® on Power® (ppc64le)
With Istio, you can create your own filters, one type of which is the HTTP filter. When you run Istio on Linux® on Power® (ppc64le), HTTP Lua filters are not supported because they use the LuaJIT compiler, which has no 64-bit little-endian support for Linux® on Power® (ppc64le).
For more information about creating your own filters by using Lua or other extensions, see the Envoy documentation for your specific release.
Extending self-signed certificate lifetime
Istio self-signed certificates have a default lifetime of one year. If you are using Istio self-signed certificates, you must schedule regular root transitions before the certificates expire. Expiration of the root certificate might lead to an unexpected cluster-wide outage.
Complete the following steps to do the root transition.
Note: The Envoy instances are hot restarted to reload the new root certificates, which might impact long-lived connections.
If you are currently not using the mutual TLS feature in Istio and do not plan to use it in the future, you do not need to complete the steps in the Root transition section.
If you are currently using the mutual TLS feature or might use it in the future with Istio self-signed certificates, you must complete the steps in the Root transition section to rotate the Istio self-signed certificates.
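If you want to inspect the expiration date of the current self-signed root certificate before you run the transition script, you can decode it from the Citadel secret. This is a minimal sketch; the istio-ca-secret name appears in the transition output later in this topic, and the ca-cert.pem data key is an assumption about how Citadel stores its self-signed CA:
kubectl -n istio-system get secret istio-ca-secret -o jsonpath='{.data.ca-cert\.pem}' | base64 --decode | openssl x509 -noout -enddate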
Root transition
-
Check the root certificate expiration date by downloading and running the root transition script on a machine that has kubectl access to the cluster. For more information about installing kubectl, see Installing the Kubernetes CLI (kubectl).
wget https://raw.githubusercontent.com/istio/tools/release-1.4/bin/root-transition.sh
chmod +x root-transition.sh
./root-transition.sh check-root
Following is a sample output:
Fetching root cert from istio-system namespace...
Your Root Cert will expire after ...
=====YOU HAVE 3649 DAYS BEFORE THE ROOT CERT EXPIRES!=====
-
Execute a root certificate transition.
./root-transition.sh root-transition
Following is a sample output:
Create new ca cert, with trust domain as k8s.cluster.local
Mon Feb 17 01:42:58 PST 2020 delete old CA secret
secret "istio-ca-secret" deleted
Mon Feb 17 01:42:58 PST 2020 create new CA secret
secret/istio-ca-secret created
Mon Feb 17 01:42:58 PST 2020 Restarting Citadel ...
pod "istio-citadel-68998bc9f7-xlw2k" deleted
Mon Feb 17 01:43:02 PST 2020 restarted Citadel, checking status
NAME                             READY   STATUS              RESTARTS   AGE
istio-citadel-68998bc9f7-j6fjt   0/1     ContainerCreating   0          4s
New root certificate:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 9926940133297523651 (0x89c39374bfc48bc3)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=k8s.cluster.local
        Validity
            Not Before: Feb 17 09:42:58 2020 GMT
            Not After : Feb 14 09:42:58 2030 GMT
...
Your old certificate is stored as old-ca-cert.pem, and your private key is stored as ca-key.pem
Please save them safely and privately.
-
Verify that the new workload certificates are generated.
./root-transition.sh verify-certs
Following is a sample output:
This script checks the current root CA certificate is propagated to all the Istio-managed workload secrets in the cluster.
Root cert MD5 is 1cc473f89c4342e38fffa6761e4c9e83
Checking namespace: cert-manager
  Secret cert-manager.istio.default matches current root.
Checking namespace: default
  Secret default.istio.default matches current root.
...
=====All Istio mutual TLS keys and certificates match the current root!=====
Note: If the command fails, wait for a minute and run the command again. It takes some time for Citadel to propagate the certificates.
-
Verify that the new workload certificates are loaded by Envoy.
kubectl -n <APP_NAMESPACE> exec <APP_POD> -c istio-proxy -- curl http://localhost:15000/certs | head -c 1000
Following is a sample output:
{
  "certificates": [
    {
      "ca_cert": [
        {
          "valid_from": "2020-02-17T09:42:58Z",
          "expiration_time": "2030-02-14T09:42:58Z"
...
Note: Check the expiration_time value of the ca_cert. If it matches the Not After value in the new certificate, as shown in Step 2, your Envoy successfully loaded the new root certificate.