Installing Knative on IBM Cloud Private
Knative provides a set of middleware components that are essential for building modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or in a third-party data center.
Limitation: Knative does not support specifying a namespace for the installation. Before you install Knative, make sure that Istio is installed on your IBM Cloud Private cluster.
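To confirm that Istio is ready before you install Knative, you can check the Istio control plane pods. For example, assuming that Istio is installed in the default istio-system namespace:
kubectl get pods -n istio-system
Verify that the Istio pods are running before you continue.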
- Installing Knative on an existing cluster
- Installing Knative by using the CLI
- Verifying the installation
- Knative support for Linux® on Power® (ppc64le)
- Workaround: Pod state is Init:ImagePullBackOff
Installing Knative on an existing cluster
Note: An IBM Cloud Private 3.2.0 cluster supports Knative chart versions 0.2.x and 0.1.x. The Knative charts are available in the ibm-charts repository: https://github.com/IBM/charts/tree/master/community/knative.
You can deploy Knative if you already have an IBM Cloud Private 3.2.0 cluster installed. Before you install the current Knative chart, you must install the Knative CRDs by using the CLI. Use the following command:
kubectl apply -f https://raw.githubusercontent.com/IBM/charts/master/community/knative/all-crds.yaml
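To confirm that the CRDs were created, you can list them before you continue. For example, the following check assumes that all Knative CRD names end with knative.dev:
kubectl get crd | grep knative.dev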
To install a Knative chart from the IBM Cloud Private management console, click Catalog and search for the Knative chart. Choose the Knative chart that you want to install.
Installing Knative by using the CLI
Install the Knative CRDs with the following command:
kubectl apply -f https://raw.githubusercontent.com/IBM/charts/master/community/knative/all-crds.yaml
Install the chart by using the Helm CLI:
helm repo add ibm-community-charts https://raw.githubusercontent.com/IBM/charts/master/repo/community
helm install ibm-community-charts/knative --name knative [--tls]
The command deploys Knative on an IBM Cloud Private 3.2.0 cluster with the default configuration. The configuration section lists the parameters that can be configured during installation.
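You can override individual parameters at installation time with the --set option. For example, the following sketch enables the optional eventing and eventingSources components; verify the parameter names against the chart's configuration section before you use them:
helm install ibm-community-charts/knative --name knative --set eventing.enabled=true --set eventingSources.enabled=true [--tls]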
Verifying the installation
After the installation completes, verify that all of the Knative components that you enabled are created and running. Monitor the Knative components until all of them show a status of Running. For example:
kubectl get pod -n knative-serving
NAME READY STATUS RESTARTS AGE
activator-74bc454c4b-tcpqs 2/2 Running 0 4m
autoscaler-8bd664478-pqhph 2/2 Running 0 4m
controller-7cbd5bdc88-9z6dq 1/1 Running 0 4m
webhook-7bcff85bf9-vz9hm 1/1 Running 0 3m
kubectl get pod -n knative-build
NAME READY STATUS RESTARTS AGE
build-controller-d9584dcd6-hpb7b 1/1 Running 0 10m
build-webhook-5bfdbd4fb7-nn79w 1/1 Running 0 10m
If you enable the eventing, monitoring, and eventingSources parameters, you can also view the status information for those components. For example:
kubectl get pod -n knative-eventing
NAME READY STATUS RESTARTS AGE
eventing-controller-6f8f5698ff-mrvbq 2/2 Running 0 12m
in-memory-channel-controller-787865b86d-w78lv 2/2 Running 1 12m
in-memory-channel-dispatcher-78bfc7d88f-864wc 2/2 Running 1 12m
webhook-75dcb58956-5smqc 1/1 Running 0 12m
kubectl get pod -n knative-monitoring
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-0 1/1 Running 0 12m
elasticsearch-logging-1 1/1 Running 0 7m
grafana-744b8d4ccb-vsskd 1/1 Running 0 12m
kibana-logging-7d8bd66996-mx7rm 1/1 Running 0 13m
kube-state-metrics-68cd885bf7-sc7zs 4/4 Running 0 9m
node-exporter-pgv8h 2/2 Running 0 13m
node-exporter-q8txs 2/2 Running 0 13m
prometheus-system-0 1/1 Running 0 12m
prometheus-system-1 1/1 Running 0 12m
kubectl get pod -n knative-sources
NAME READY STATUS RESTARTS AGE
controller-manager-0 1/1 Running 0 13m
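Instead of polling the pod lists manually, you can wait for the pods to become ready. For example, the following kubectl wait commands block until all pods in a namespace are ready or time out after five minutes; adjust the namespace list to match the components that you enabled:
kubectl wait --for=condition=Ready pod --all -n knative-serving --timeout=300s
kubectl wait --for=condition=Ready pod --all -n knative-build --timeout=300s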
Knative support for Linux on Power (ppc64le)
A Knative installation on an IBM Cloud Private 3.2.0 cluster that runs on a Linux on Power (ppc64le) machine requires the following configuration:
build:
  buildController:
    image: ibmcom/knative-build-cmd-controller:0.5
  buildWebhook:
    image: ibmcom/knative-build-cmd-webhook:0.5
  credsInit:
    image: ibmcom/knative-build-cmd-creds-init:0.5
  gcsFetcher:
    image: ibmcom/gcs-fetcher:0.5
  gitInit:
    image: ibmcom/knative-build-cmd-git-init:0.5
  nop:
    image: ibmcom/knative-build-cmd-nop:0.5
eventing:
  enabled: true
  eventingController:
    image: ibmcom/knative-eventing-cmd-controller:0.5
  inMemoryProvisioner:
    enabled: true
  inMemoryChannelController:
    controller:
      image: ibmcom/knative-eventing-pkg-provisioners-inmemory-controller:0.5
  inMemoryChannelDispatcher:
    dispatcher:
      image: ibmcom/knative-eventing-cmd-fanoutsidecar:0.5
  webhook:
    image: ibmcom/knative-eventing-cmd-webhook:0.5
eventingSources:
  enabled: true
  controllerManager:
    manager:
      image: ibmcom/knative-eventing-sources-cmd-manager:0.5
serving:
  activator:
    image: ibmcom/knative-serving-cmd-activator:0.5.2
  autoscaler:
    image: ibmcom/knative-serving-cmd-autoscaler:0.5.2
  controller:
    image: ibmcom/knative-serving-cmd-controller:0.5.2
  queueProxy:
    image: ibmcom/knative-serving-cmd-queue:0.5.2
  webhook:
    image: ibmcom/knative-serving-cmd-webhook:0.5.2
Save the configuration content in a file that is named power-values.yaml. Install the Knative CRDs with the following command:
kubectl apply -f https://raw.githubusercontent.com/IBM/charts/master/community/knative/all-crds.yaml
Install the chart by using the Helm CLI with the -f power-values.yaml option:
helm repo add ibm-community-charts https://raw.githubusercontent.com/IBM/charts/master/repo/community
helm install ibm-community-charts/knative --name knative -f power-values.yaml [--tls]
Limitation: Only the images that are listed in power-values.yaml support a Linux on Power (ppc64le) machine.
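To confirm which nodes in your cluster run on the ppc64le architecture, you can inspect the node architecture labels. For example, the following command assumes the beta.kubernetes.io/arch label that is used by the Kubernetes version in IBM Cloud Private 3.2.0:
kubectl get nodes -L beta.kubernetes.io/arch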
Workaround: Pod state is Init:ImagePullBackOff
If your pod has the state Init:ImagePullBackOff, run the following commands as a workaround:
kubectl -n ${NAMESPACE} create secret docker-registry infra-registry-key --docker-server=${IMAGE_REPO} --docker-username=${DOCKER_USER} --docker-password=${DOCKER_TOKEN} --docker-email='icp@ibm.com'
kubectl -n ${NAMESPACE} patch serviceaccount default -p '{"imagePullSecrets": [{"name": "infra-registry-key"}]}'
Where:
- ${NAMESPACE} is the Knative namespace: knative-serving, knative-build, knative-eventing, or knative-sources.
- ${IMAGE_REPO} is the repository that stores the images that the Knative chart uses. The default value is mycluster.icp:8500.
- ${DOCKER_USER} and ${DOCKER_TOKEN} are the credentials of a user account that has the authority to access the ${IMAGE_REPO} repository.
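Because the workaround must be applied in each affected Knative namespace, you can run the commands in a loop. The following sketch assumes that DOCKER_USER and DOCKER_TOKEN are already exported in your shell and uses the default mycluster.icp:8500 repository:
for NAMESPACE in knative-serving knative-build knative-eventing knative-sources; do
  kubectl -n ${NAMESPACE} create secret docker-registry infra-registry-key --docker-server=mycluster.icp:8500 --docker-username=${DOCKER_USER} --docker-password=${DOCKER_TOKEN} --docker-email='icp@ibm.com'
  kubectl -n ${NAMESPACE} patch serviceaccount default -p '{"imagePullSecrets": [{"name": "infra-registry-key"}]}'
done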