Enable Istio with IBM Cloud Private
Istio is an open platform that you can use to connect, secure, control, and observe microservices. With Istio, you can create a network of deployed services that include load balancing, service-to-service authentication, monitoring, and more, without changing the service code.
Limitation: Istio does not support Federal Information Processing Standards (FIPS). For more information, see FIPS 140-2 encryption using Istio.
Istio is disabled by default in the IBM Cloud Private Cloud Foundry installer.
To add Istio support to services, you deploy a special sidecar proxy throughout your environment. The proxy intercepts all network communication between microservices and is configured and managed by using the control plane functionality that Istio provides.
- Enabling Istio during cluster installation
- Installing Istio for an existing cluster
- Enabling tracing for an existing cluster
- Verifying the installation
- Deploying the applications
- Collecting and visualizing
- Restrictions
IBM Cloud Private version 3.1.2 supports two methods to enable Istio. You can enable Istio during cluster installation, or install the Istio chart from the Catalog after cluster installation. Istio fully supports Linux®, Linux® on Power® (ppc64le), and IBM® Z platforms.
Enabling Istio during cluster installation
Note: You must have a minimum of 8 cores on your management node.
To enable Istio, change the value of the istio parameter to enabled in the list of management services in the config.yaml file. Set the parameter as shown in the following example:
management_services:
  istio: enabled
  vulnerability-advisor: disabled
  storage-glusterfs: disabled
  storage-minio: disabled
Then, install IBM Cloud Private. Istio is installed as part of your IBM Cloud Private cluster installation.
Enabling Kiali, Grafana, and Prometheus during cluster installation
To enable Kiali, Grafana, and Prometheus, first enable Istio. Then, add the following settings to the config.yaml file:
istio:
  kiali:
    enabled: true
  grafana:
    enabled: true
  prometheus:
    enabled: true
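For reference, a combined config.yaml excerpt with Istio and its add-ons enabled might resemble the following sketch. The relative position of the two top-level keys is an assumption; YAML mappings are order-independent:
management_services:
  istio: enabled
  vulnerability-advisor: disabled
  storage-glusterfs: disabled
  storage-minio: disabled
istio:
  kiali:
    enabled: true
  grafana:
    enabled: true
  prometheus:
    enabled: true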
Installing Istio for an existing cluster
You can deploy Istio if you already have an IBM Cloud Private 3.1.2 cluster installed. To install from the IBM Cloud Private management console, click Catalog and search for the ibm-istio chart.
- If you are enabling Grafana with security mode, create the secret first by completing the following steps:
  - Encode the username by running the following command. You can change the username:
    echo -n 'admin' | base64
    YWRtaW4=
  - Encode the passphrase by running the following command. You can change the passphrase:
    echo -n 'admin' | base64
    YWRtaW4=
  - Set the namespace to where Istio is installed by running the following command:
    NAMESPACE=istio-system
  - Create a secret for Grafana by running the following command:
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: grafana
      namespace: $NAMESPACE
      labels:
        app: grafana
    type: Opaque
    data:
      username: YWRtaW4=
      passphrase: YWRtaW4=
    EOF
- If you are enabling kiali, you also need to create the secret that contains the username and passphrase for the Kiali dashboard. Run the following commands:
  echo -n 'admin' | base64
  echo -n 'admin' | base64
  NAMESPACE=istio-system
  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: kiali
    namespace: $NAMESPACE
    labels:
      app: kiali
  type: Opaque
  data:
    username: YWRtaW4=
    passphrase: YWRtaW4=
  EOF
  After you install the chart, you can confirm that these secrets exist by using the verification sketch that follows this procedure.
- Log in to the IBM Cloud Private management console. To install from the management console, click Menu > Catalog.
- Search for Istio in the Search bar. You can also find Istio from the Filter or from Categories (Operation category). After the search is complete, the ibm-istio chart is displayed.
- Click the ibm-istio chart. A readme file displays information about installing, uninstalling, configuring, and other chart details for Istio.
- Click Configure to navigate to the configuration page.
- Name your Helm release and select the istio-system namespace from the menu. The name must consist of lowercase alphanumeric characters or dash characters (-), and must start and end with an alphanumeric character.
- Read and agree to the license agreement.
- Optional: Customize the All parameters fields to your preference.
- Click Install to deploy the Istio chart and create an Istio release.
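After the installation completes, you can quickly confirm that the release was deployed and that the optional dashboard secrets exist. The following is a minimal sketch; the release name istio is an assumption, so substitute the Helm release name that you chose:
# Confirm that the Helm release is deployed (IBM Cloud Private uses Helm with TLS)
helm ls --tls | grep istio

# Confirm that the optional Grafana and Kiali secrets exist, if you created them
kubectl -n istio-system get secret grafana kiali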
Enabling tracing for an existing cluster
To enable tracing in your existing cluster, complete the following steps:
- Install the Helm CLI. For more information, see Installing the Helm CLI (helm).
- Get the existing values from your istio release:
  helm get values istio --tls > istio-old-values.yaml
- Upgrade the istio chart:
  helm upgrade istio <path-to-the-istio-chart> --namespace istio-system --force -f istio-old-values.yaml --set tracing.enabled=true --tls
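To confirm that tracing is now enabled, you can check for the tracing components; a quick sketch, assuming the Jaeger-based tracing add-on that this chart deploys:
# The tracing pods carry the app=jaeger label in this chart
kubectl -n istio-system get pods -l app=jaeger

# The tracing service is created when tracing.enabled=true
kubectl -n istio-system get svc tracing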
Verifying the installation
After installation completes, verify that all the components that you enabled for the Istio control plane are created and running:
-
Ensure that the services are deployed by running the following command to get a list of services:
kubectl -n istio-system get svc
Note: The following Kubernetes services are mandatory: istio-pilot, istio-ingressgateway, istio-egressgateway, istio-policy, istio-telemetry, istio-citadel, istio-statsd-prom-bridge, and istio-galley. The following services are optional: istio-sidecar-injector, prometheus, grafana, jaeger-*, kiali*, servicegraph, tracing, and zipkin.
The output might resemble the following content:
NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                                     AGE
grafana                    ClusterIP      10.0.0.135   <none>        3000/TCP                                                                                                    37m
istio-citadel              ClusterIP      10.0.0.167   <none>        8060/TCP,9093/TCP                                                                                           37m
istio-egressgateway        ClusterIP      10.0.0.79    <none>        80/TCP,443/TCP                                                                                              37m
istio-galley               ClusterIP      10.0.0.70    <none>        443/TCP,9093/TCP                                                                                            37m
istio-ingressgateway       LoadBalancer   10.0.0.233   <pending>     80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30692/TCP,8060:32603/TCP,15030:31295/TCP,15031:31856/TCP   37m
istio-pilot                ClusterIP      10.0.0.148   <none>        15010/TCP,15011/TCP,8080/TCP,9093/TCP                                                                       37m
istio-policy               ClusterIP      10.0.0.89    <none>        9091/TCP,15004/TCP,9093/TCP                                                                                 37m
istio-sidecar-injector     ClusterIP      10.0.0.199   <none>        443/TCP                                                                                                     37m
istio-statsd-prom-bridge   ClusterIP      10.0.0.198   <none>        9102/TCP,9125/UDP                                                                                           37m
istio-telemetry            ClusterIP      10.0.0.140   <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                                                       37m
jaeger-agent               ClusterIP      None         <none>        5775/UDP,6831/UDP,6832/UDP                                                                                  37m
jaeger-collector           ClusterIP      10.0.0.102   <none>        14267/TCP,14268/TCP                                                                                         37m
jaeger-query               ClusterIP      10.0.0.118   <none>        16686/TCP                                                                                                   37m
kiali                      ClusterIP      10.0.0.177   <none>        20001/TCP                                                                                                   37m
kiali-jaeger               NodePort       10.0.0.65    <none>        20002:32439/TCP                                                                                             37m
prometheus                 ClusterIP      10.0.0.200   <none>        9090/TCP                                                                                                    37m
servicegraph               ClusterIP      10.0.0.197   <none>        8088/TCP                                                                                                    37m
tracing                    ClusterIP      10.0.0.99    <none>        16686/TCP                                                                                                   37m
zipkin                     ClusterIP      10.0.0.134   <none>        9411/TCP                                                                                                    37m
-
Ensure the corresponding Kubernetes pods are deployed and all containers are up. Run the following command:
kubectl -n istio-system get pods
Note: The following pods are mandatory: istio-pilot-*, istio-ingressgateway-*, istio-egressgateway-*, istio-policy-*, istio-telemetry-*, istio-citadel-*, istio-statsd-prom-bridge-*, and istio-galley-*. The following pods are optional: istio-sidecar-injector-*, prometheus-*, grafana-*, istio-tracing-*, kiali*, and servicegraph-*.
The output might resemble the following content:
NAME                                        READY   STATUS    RESTARTS   AGE
grafana-75f4f8dcf7-2p92z                    1/1     Running   0          37m
istio-citadel-5d5d5bcd5-tmv2w               1/1     Running   0          37m
istio-egressgateway-6669b4888d-t8fqs        1/1     Running   0          37m
istio-galley-d6d995d66-d6tb8                1/1     Running   0          37m
istio-ingressgateway-57bf47dc7c-ntz8h       1/1     Running   0          37m
istio-pilot-745899bb46-kf4z4                2/2     Running   0          37m
istio-policy-57567ff748-96vvb               2/2     Running   0          37m
istio-sidecar-injector-76fc499f9c-r57bw     1/1     Running   0          37m
istio-statsd-prom-bridge-676dcc4f8b-d7fhc   1/1     Running   0          37m
istio-telemetry-6fc9f55c4f-4229b            2/2     Running   0          37m
istio-tracing-66f4676d88-wjgzr              1/1     Running   0          37m
kiali-7bdd48bd7d-b6vwd                      1/1     Running   0          37m
prometheus-b8446f488-fpprf                  1/1     Running   0          37m
servicegraph-fcdc4c44d-tdm2z                1/1     Running   0          37m
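If you prefer to script this check instead of reading the output by hand, you can wait for the pods to become ready; a minimal sketch, assuming that your kubectl version supports the wait command. Note that completed job pods, if any exist in the namespace, never reach the Ready condition and would need to be excluded:
# Block until every pod in istio-system reports the Ready condition, or time out after 5 minutes
kubectl -n istio-system wait --for=condition=Ready pods --all --timeout=300s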
Deploying the applications
After the Istio control plane is successfully deployed, you can start deploying your applications.
Creating imagePullSecrets for the IBM Cloud Private Docker private registry
If you deploy Istio during cluster installation, you must create imagePullSecrets for the IBM Cloud Private Docker private registry, <cluster_hostname>:<registry_port> (the default value is mycluster.icp:8500), in the namespace where your applications are deployed. Your applications can then pull the sidecar images from the Docker private registry.
- Create a secret that is named infra-registry-key and that holds your authorization token for the IBM Cloud Private Docker registry. Run the following command:
  kubectl -n <application_namespace> create secret docker-registry infra-registry-key --docker-server=<cluster_hostname>:<registry_port> --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>
- Patch the secret to the ServiceAccount that is associated with your applications. Run the following command:
  kubectl get serviceaccount <your-service-account-name> -o yaml | grep -w infra-registry-key || kubectl patch serviceaccount <your-service-account-name> -p '{"imagePullSecrets": [{"name": "infra-registry-key"}]}'
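To confirm that the secret is attached to the service account, you can print its image pull secrets; a minimal check, using the placeholder names from the preceding steps:
# Lists the imagePullSecrets on the service account; expect infra-registry-key in the output
kubectl -n <application_namespace> get serviceaccount <your-service-account-name> -o jsonpath='{.imagePullSecrets[*].name}'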
Automatic sidecar injection
If you enabled automatic sidecar injection, the istio-sidecar-injector automatically injects Envoy containers into your application pods that run in namespaces that are labeled with istio-injection=enabled.
To inject Envoy containers automatically, complete the following steps:
- Label your namespace with istio-injection=enabled. Run the following command:
  kubectl label namespace <namespace> istio-injection=enabled
- Create your application in that namespace from your .yaml file by running the following command:
  kubectl create -n <namespace> -f <your-app-spec>.yaml
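To verify that automatic injection is working, check the namespace label and the container count of your pods; injected pods report one extra container for the Envoy sidecar:
# Confirm that the namespace carries the injection label
kubectl get namespace <namespace> -L istio-injection

# Injected pods show an extra ready container, for example 2/2 instead of 1/1
kubectl -n <namespace> get pods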
Manual sidecar injection
If you did not enable automatic sidecar injection, you can inject Envoy containers manually.
To enable sidecar injection manually, use istioctl to inject the sidecar into your application specification as you deploy it. Run the following command:
kubectl create -f <(istioctl kube-inject -f <your-app-spec>.yaml)
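If you want to review the injected specification before you deploy it, istioctl kube-inject can write its output to a file; a short sketch, with hypothetical file names:
# Write the injected manifest to a file for review
istioctl kube-inject -f <your-app-spec>.yaml -o <your-app-spec>-injected.yaml

# Inspect the file, then deploy it
kubectl create -f <your-app-spec>-injected.yaml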
Collecting and visualizing
Collecting trace spans using Jaeger
By default, Istio enables Jaeger with a service type of ClusterIP. During installation, you can change the default service type to NodePort so that you can access Jaeger from an external environment.
Alternatively, to expose Jaeger through a NodePort service after installation so that you can access it externally, run the following commands:
kubectl expose service jaeger-query --type=NodePort --name=<jaeger-query-svc> --namespace istio-system
export JAEGER_URL=$(kubectl get po -l app=jaeger -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <jaeger-query-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${JAEGER_URL}/
You can access http://${JAEGER_URL}/ from your browser to view trace spans.
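You can also check the endpoint without a browser by querying the Jaeger HTTP API; a sketch, assuming the standard /api/services route of the bundled Jaeger version:
# Lists the services that reported trace spans to Jaeger
curl -s "http://${JAEGER_URL}/api/services"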
Collecting metrics using Prometheus
Similar to the Collecting trace spans using Jaeger section, if you install Istio with prometheus enabled, the prometheus service has a type of ClusterIP by default. You can change the default service type to NodePort.
To expose prometheus through a NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service prometheus --type=NodePort --name=<prometheus-svc> --namespace istio-system
export PROMETHEUS_URL=$(kubectl get po -l app=prometheus -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <prometheus-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${PROMETHEUS_URL}
You can access http://${PROMETHEUS_URL}/ from your browser to verify that metrics are being collected into Prometheus.
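You can also query the Prometheus HTTP API directly; a sketch, assuming the istio_requests_total metric name that Istio's default telemetry configuration reports:
# Query the standard Istio request-count metric through the Prometheus HTTP API
curl -s "http://${PROMETHEUS_URL}/api/v1/query?query=istio_requests_total"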
Visualizing metrics with Grafana
Similar to the Jaeger and Prometheus services, if you install Istio with grafana enabled, the grafana service has a type of ClusterIP by default. You can change the default service type to NodePort.
To expose grafana through a NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service grafana --type=NodePort --name=<grafana-svc> --namespace istio-system
export GRAFANA_URL=$(kubectl get po -l app=grafana -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <grafana-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${GRAFANA_URL}/
You can access http://${GRAFANA_URL}/ from your browser to view the Grafana web page.
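For a quick readiness check without a browser, Grafana exposes a health endpoint; a sketch, assuming the standard /api/health route:
# Returns a small JSON document with the Grafana database and version status
curl -s "http://${GRAFANA_URL}/api/health"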
Observe microservices with Kiali
Like the Jaeger, Prometheus, and Grafana services, if you install Istio with kiali enabled, the kiali service has a type of ClusterIP by default. You can change the default service type to NodePort.
To expose kiali through a NodePort service so that you can access it from an external environment, run the following commands:
kubectl expose service kiali --type=NodePort --name=<kiali-svc> --namespace istio-system
export KIALI_URL=$(kubectl get po -l app=kiali -n istio-system -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc <kiali-svc> -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${KIALI_URL}/
You can access http://${KIALI_URL}/ from your browser to view the Kiali dashboard.
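To confirm that the NodePort is reachable before you open the browser, a plain HTTP status check is enough; the Kiali console responds on the root path:
# Expect an HTTP 200, or a redirect to the login page, if Kiali is reachable
curl -s -o /dev/null -w '%{http_code}\n' "http://${KIALI_URL}/"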
For more information about Istio, see Istio Docs.
Restrictions
Istio on Linux® on Power® (ppc64le)
With Istio, you can create your own filters; HTTP filters are one such type. When you run Istio on Linux® on Power® (ppc64le), HTTP Lua filters are not supported. These filters use the LuaJIT compiler, which has no 64-bit little-endian support for Linux® on Power® (ppc64le), so there is currently no HTTP Lua filter support on that platform.
For more information about creating your own filters using Lua or other extensions, see the Envoy documentation for your specific release.
If you are deploying applications with Istio injection to non-default namespaces, you must create an extra ClusterRoleBinding to grant privileged permissions to service accounts in that namespace.
For example, to deploy an application to the non-default namespace istio-lab, create a ClusterRoleBinding. Your YAML might resemble the following content:
export APPLICATION_NAMESPACE=istio-lab
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-privileged-users
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:${APPLICATION_NAMESPACE}
EOF
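To confirm that the binding was created as expected, you can describe it; a minimal check:
# Shows the role and the subjects group for the new binding
kubectl describe clusterrolebinding istio-privileged-users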
Then you are ready to deploy your applications with manual injection or automatic injection.