Upgrading your console and components

IBM® Blockchain Platform 2.5.2 is now available. To take advantage of the latest features and for upgrade instructions, see [Upgrading your console and components](https://www.ibm.com/docs/en/SSVKZ7_2.5.2/howto/console-upgrade-ocp.html).

You can upgrade the IBM® Blockchain Platform without disrupting a running network. Because the platform is deployed by using a Kubernetes operator, you can pull the latest IBM Blockchain Platform images from the IBM Entitlement registry without having to reinstall the platform. You can only use these instructions to upgrade to the IBM Blockchain Platform 2.5.

IBM Blockchain Platform overview

Use these instructions to upgrade to IBM Blockchain Platform 2.5 from v2.1.3, v2.1.2, v2.1.1, or v2.1.0. The following table provides an overview of the current and past releases.

Table 1. IBM Blockchain Platform versions

IBM Blockchain Platform 2.5
Release date: 01 Oct 2020
Image tags:
Console and tools
  • 2.5.0-20201001-amd64
  • 2.5.0-20200825-amd64
  • 2.5.0-20200714-amd64
  • 2.5.0-20200618-amd64
Fabric nodes
  • 1.4.7-20201001-amd64
  • 1.4.7-20200825-amd64
  • 1.4.7-20200714-amd64
  • 1.4.7-20200618-amd64
  • 2.1.1-20201001-amd64
  • 2.1.1-20200825-amd64
  • 2.1.1-20200714-amd64
  • 2.1.1-20200618-amd64
CouchDB
  • 2.3.1-20201001-amd64
  • 2.3.1-20200825-amd64
  • 2.3.1-20200714-amd64
  • 2.3.1-20200618-amd64
New features:
Fabric Version Upgrade
  • Fabric version 1.4.7 and 2.1.1
Improvements to the Console UI
  • Ability to select Fabric version when you deploy a new peer or ordering node.
  • Ability to view certificate expiration dates.

IBM Blockchain Platform v2.1.3
Release date: 24 March 2020
Image tags:
Console and tools
  • 2.1.3-20200520-amd64
  • 2.1.3-20200416-amd64
  • 2.1.3-20200324-amd64
Fabric nodes
  • 1.4.6-20200520-amd64
  • 1.4.6-20200416-amd64
  • 1.4.6-20200324-amd64
CouchDB
  • 2.3.1-20200520-amd64
  • 2.3.1-20200416-amd64
  • 2.3.1-20200324-amd64
New features:
Fabric Version Upgrade
  • Fabric version 1.4.6
Additional platforms
  • Platform can be deployed on the OpenShift Container Platform 4.2 on LinuxONE (s390x)
Improvements to the Console UI
  • Hardware Security Module (HSM) support for node identities
  • Ability to override CA, peer, and ordering node configuration
  • Ability to add and remove Raft ordering nodes
  • Java smart contract instantiation
  • Updated create channel and create organization panels

IBM Blockchain Platform v2.1.2
Release date: 17 December 2019
Image tags:
Console and tools
  • 2.1.2-20191217-amd64
  • 2.1.2-20200213-amd64
Fabric nodes
  • 1.4.4-20191217-amd64
  • 1.4.4-20200213-amd64
CouchDB
  • 2.3.1-20191217-amd64
  • 2.3.1-20200213-amd64
New features:
Fabric Version Upgrade
  • Fabric version 1.4.4
Additional platforms
  • Platform can be deployed on the OpenShift Container Platform 4.1 and 4.2
Improvements to the Console UI
  • Simplified component creation flows
  • Zone selection for ordering nodes
  • Add peer to a channel from Channels tab
  • Anchor peer during join
  • Export/Import all

IBM Blockchain Platform v2.1.1
Release date: 8 November 2019
Image tags:
Console and tools
  • 2.1.1-20191108-amd64
Fabric nodes
  • 1.4.3-20191108-amd64
CouchDB
  • 2.3.1-20191108-amd64
New features:
Additional platforms
  • Platform can be deployed on Kubernetes v1.14 - v1.16
  • Platform can be deployed on IBM Cloud Private 3.2.1

IBM Blockchain Platform v2.1.0
Release date: 24 September 2019
Image tags:
Console and tools
  • 2.1.0-20190918-amd64
Fabric nodes
  • 1.4.3-20190918-amd64
CouchDB
  • 2.3.1-20190918-amd64
New features:
Fabric Version Upgrade
  • Fabric version 1.4.3
Additional platforms
  • Platform can be deployed on the OpenShift Container Platform 3.11

Platform limitations

If your IBM Blockchain Platform is running on OpenShift Container Platform 3.11, you cannot upgrade to IBM Blockchain Platform 2.5 unless you first upgrade your OpenShift cluster from 3.11 to 4.3. For more information, see Migrating OpenShift Container Platform to 4.3.

Upgrade to the IBM Blockchain Platform 2.5

You can upgrade an IBM Blockchain Platform network by using the following steps:

  1. Create the ibpinfra project for the webhook
  2. Create a secret for your entitlement key
  3. Deploy the webhook and custom resource definitions to your OpenShift cluster
  4. Update the ClusterRole
  5. Upgrade the IBM Blockchain Platform operator
  6. Use your console to upgrade your running blockchain nodes

After you upgrade the IBM Blockchain Platform operator, the operator will automatically upgrade the console that is deployed on your OpenShift project. You can then use the upgraded console to upgrade your blockchain nodes.

You need to complete steps 4 - 6 for each network that runs in a separate project. If you experience any problems, see the instructions for rolling back an upgrade. If you deployed your network behind a firewall, without access to the external internet, see the separate set of instructions for Upgrading the IBM Blockchain Platform behind a firewall.

You can continue to submit transactions to your network while you are upgrading your network. However, you cannot use the console to deploy new nodes, install or instantiate smart contracts, or create new channels during the upgrade process.

It is a best practice to upgrade your SDK to the latest version as part of a general upgrade of your network. While the SDK is always compatible with equivalent and earlier releases of Fabric, it might be necessary to upgrade to the latest SDK to leverage the latest Fabric features. Also, after you upgrade, your client application might experience errors. Consult your Fabric SDK documentation for information about how to upgrade.

Roll back an upgrade

When you upgrade your operator, the operator saves the secrets, deployment spec, and network information of your console before attempting to upgrade the console. If your upgrade fails for any reason, IBM Support can roll back your upgrade and restore your previous deployment by using the information on your cluster. If you need to roll back your upgrade, you can submit a support case from the mysupport page.

You can roll back an upgrade after you use the console to operate your network. However, after you use the console to upgrade your blockchain nodes, you can no longer roll back your console to a previous version of the platform.

Before you begin

To upgrade your network, you need to retrieve your entitlement key from the My IBM Dashboard, and create a Kubernetes secret to store the key on your OpenShift project. If the entitlement key secret was removed from your cluster, or if your key has expired, then you need to download another key and create a new secret.

Occasionally, a five node ordering service that was deployed using v2.1.2 is deleted by the Kubernetes garbage collector because it considers the nodes a resource that needs to be cleaned up. This process is both random and unrecoverable: if the ordering service is deleted, all of the channels hosted on it are permanently lost. To prevent this, the ownerReferences field in the configuration of each ordering node must be removed before upgrading to 2.5. For the steps about how to pull the configuration file, remove the ownerReferences field, and apply the change, see Known issues in the v2.1.2 documentation.
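For reference, the following is a minimal sketch of that removal with kubectl, assuming the ordering nodes are IBPOrderer resources in your network's project and that ordernode1 is a placeholder for one of your actual node names. Verify the steps against the v2.1.2 Known issues topic before you apply any change.

kubectl get ibporderer -n <PROJECT_NAME>
kubectl patch ibporderer ordernode1 -n <PROJECT_NAME> --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'

Repeat the patch command for each of the five ordering nodes.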

Step one: Create the ibpinfra project for the webhook

Because the platform has updated the internal API version from v1alpha1 in previous versions to v1alpha2 in 2.5, a Kubernetes conversion webhook is required to update the CA, peer, orderer, and console custom resources to the new API version. This webhook will continue to be used in the future, so new deployments of the platform are required to deploy it as well. The webhook is deployed to its own project, referred to as ibpinfra throughout these instructions.

After you log in to your cluster, you can create the new ibpinfra project for the Kubernetes conversion webhook by using the oc CLI. The new project needs to be created by a cluster administrator.

Run the following command to create the project:

oc new-project ibpinfra

When you create a new project, a new namespace is created with the same name as your project. You can verify the existence of the new namespace by using the oc get namespace command:

$ oc get namespace
NAME                                STATUS    AGE
ibpinfra                            Active    2m

Step two: Create a secret for your entitlement key

After you purchase the IBM Blockchain Platform, you can access the My IBM dashboard to obtain your entitlement key for the offering. You need to store the entitlement key on your cluster by creating a Kubernetes Secret. Kubernetes secrets are used to securely store the key on your cluster and pass it to the operator and the console deployments.

Run the following command to create the secret and add it to your ibpinfra namespace or project:

kubectl create secret docker-registry docker-key-secret --docker-server=cp.icr.io --docker-username=cp --docker-password=<KEY> --docker-email=<EMAIL> -n ibpinfra

The name of the secret that you are creating is docker-key-secret. It is required by the webhook that you deploy later. You can use the key only once per deployment; before you attempt another deployment, refresh the key and use the new value here.
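You can confirm that the secret was created in the ibpinfra namespace by running the following command:

kubectl get secret docker-key-secret -n ibpinfra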

Step three: Deploy the webhook and custom resource definitions to your OpenShift cluster

Before you can upgrade an existing network to 2.5, or deploy a new instance of the platform to your Kubernetes cluster, you need to create the conversion webhook by completing the steps in this section. The webhook is deployed to its own namespace or project, referred to as ibpinfra throughout these instructions.

The first three steps are for the deployment of the webhook. The last five steps create the custom resource definitions for the CA, peer, orderer, and console components that the IBM Blockchain Platform requires. You only have to deploy the webhook and custom resource definitions once per cluster. If you have already deployed the webhook and custom resource definitions to your cluster, you can skip the eight steps below.

1. Configure role-based access control (RBAC) for the webhook

First, copy the following text to a file on your local system and save the file as rbac.yaml. This step allows the webhook to read and create a TLS secret in its own project.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: webhook
  namespace: ibpinfra
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: webhook
rules:
- apiGroups:
  - "*"
  resources:
  - secrets
  verbs:
  - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibpinfra
subjects:
- kind: ServiceAccount
  name: webhook
  namespace: ibpinfra
roleRef:
  kind: Role
  name: webhook
  apiGroup: rbac.authorization.k8s.io

Run the following command to add the file to your cluster definition:

kubectl apply -f rbac.yaml -n ibpinfra

When the command completes successfully, you should see something similar to:

serviceaccount/webhook created
role.rbac.authorization.k8s.io/webhook created
rolebinding.rbac.authorization.k8s.io/ibpinfra created
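You can optionally confirm that all three objects were created by listing them:

kubectl get serviceaccount,role,rolebinding -n ibpinfra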

2. (OpenShift cluster only) Apply the Security Context Constraint

The IBM Blockchain Platform requires specific security and access policies to be added to the ibpinfra project. Copy the security context constraint object below and save it to your local system as ibpinfra-scc.yaml.

allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- NET_BIND_SERVICE
- CHOWN
- DAC_OVERRIDE
- SETGID
- SETUID
- FOWNER
apiVersion: security.openshift.io/v1
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups:
- system:serviceaccounts:ibpinfra
kind: SecurityContextConstraints
metadata:
  name: ibpinfra
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- "*"
priority: 1

After you save the file, run the following commands to add the file to your cluster and add the policy to your project.

oc apply -f ibpinfra-scc.yaml -n ibpinfra
oc adm policy add-scc-to-user ibpinfra system:serviceaccounts:ibpinfra

If the commands are successful, you can see a response that is similar to the following example:

securitycontextconstraints.security.openshift.io/ibpinfra created
scc "ibpinfra" added to: ["system:serviceaccounts:ibpinfra"]

3. Deploy the webhook

In order to deploy the webhook, you need to create two .yaml files and apply them to your Kubernetes cluster.

deployment.yaml

Copy the following text to a file on your local system and save the file as deployment.yaml. If you are deploying on OpenShift Container Platform 4.3 on LinuxONE, you need to replace amd64 with s390x.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: "ibp-webhook"
  labels:
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibp-webhook"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: "ibp-webhook"
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        helm.sh/chart: "ibm-ibp"
        app.kubernetes.io/name: "ibp"
        app.kubernetes.io/instance: "ibp-webhook"
      annotations:
        productName: "IBM Blockchain Platform"
        productID: "54283fa24f1a4e8589964e6e92626ec4"
        productVersion: "2.5.0"
    spec:
      serviceAccountName: webhook
      imagePullSecrets:
        - name: docker-key-secret
      hostIPC: false
      hostNetwork: false
      hostPID: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: "ibp-webhook"
          image: "cp.icr.io/cp/ibp-crdwebhook:2.5.0-20201001-amd64"
          imagePullPolicy: Always
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
            capabilities:
              drop:
              - ALL
              add:
              - NET_BIND_SERVICE
          env:
            - name: "LICENSE"
              value: "accept"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: server
              containerPort: 3000
          livenessProbe:
            httpGet:
              path: /healthz
              port: server
              scheme: HTTPS
            initialDelaySeconds: 30
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /healthz
              port: server
              scheme: HTTPS
            initialDelaySeconds: 26
            timeoutSeconds: 5
            periodSeconds: 5
          resources:
            requests:
              cpu: 0.1
              memory: "100Mi"

Run the following command to add the file to your cluster definition:

kubectl apply -n ibpinfra -f deployment.yaml

When the command completes successfully, you should see something similar to:

deployment.apps/ibp-webhook created

service.yaml

Second, copy the following text to a file on your local system and save the file as service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: "ibp-webhook"
  labels:
    type: "webhook"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibp-webhook"
    helm.sh/chart: "ibm-ibp"
spec:
  type: ClusterIP
  ports:
    - name: server
      port: 443
      targetPort: server
      protocol: TCP
  selector:
    app.kubernetes.io/instance: "ibp-webhook"

Run the following command to add the file to your cluster definition:

kubectl apply -n ibpinfra -f service.yaml

When the command completes successfully, you should see something similar to:

service/ibp-webhook created
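Before you continue, you can optionally wait for the webhook pod to become ready and confirm that the deployment and service are running:

kubectl rollout status deployment/ibp-webhook -n ibpinfra
kubectl get pods,services -n ibpinfra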

4. Extract the certificate

Next, you need to extract the TLS certificate that was generated by the webhook deployment so that it can be used in the custom resource definitions in the next steps. Run the following command, which uses the jq utility, to extract the certificate from the secret as a base64 encoded string:

kubectl get secret webhook-tls-cert -n ibpinfra -o json | jq -r .data.\"cert.pem\"

The output of this command is a base64 encoded string and looks similar to:

LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJoRENDQVNtZ0F3SUJBZ0lRZDNadkhZalN0KytKdTJXbFMvVDFzakFLQmdncWhrak9QUVFEQWpBU01SQXcKRGdZRFZRUUtFd2RKUWswZ1NVSlFNQjRYRFRJd01EWXdPVEUxTkRrME5sFORGsxTVZvdwpFakVRTUE0R0ExVUVDaE1IU1VKTklFbENVREJaTUJGcVRyV0Z4WFBhTU5mSUkrYUJ2RG9DQVFlTW3SUZvREFUQmdOVkhTVUVEREFLQmdncgpCZ0VGQlFjREFUQU1CZ05WSFJNQkFmOEVBakFBTUNvR0ExVWRFUVFqTUNHQ0gyTnlaQzEzWldKb2IyOXJMWE5sCmNuWnBZMlV1ZDJWaWFHOXZheTV6ZG1Nd0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFNb29kLy9zNGxYaTB2Y28KVjBOMTUrL0h6TkI1cTErSTJDdU9lb1c1RnR4MUFpRUEzOEFlVktPZnZSa0paN0R2THpCRFh6VmhJN2lBQVV3ZAo3ZStrOTA3TGFlTT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=

Save the base64 encoded string that is returned by this command to be used in the next steps when you create the custom resource definitions.
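If you prefer to script the substitution, the following sketch stores the string in a shell variable and replaces the <CABUNDLE> placeholder in the four CRD files that you create in steps five through eight. It assumes a bash shell, GNU sed, and that the four files already exist in the current directory:

CABUNDLE=$(kubectl get secret webhook-tls-cert -n ibpinfra -o json | jq -r '.data."cert.pem"')
sed -i "s|<CABUNDLE>|${CABUNDLE}|g" ibpca-crd.yaml ibppeer-crd.yaml ibporderer-crd.yaml ibpconsole-crd.yaml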

5. Create the CA custom resource definition

Copy the following text to a file on your local system and save the file as ibpca-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    app.kubernetes.io/instance: ibpca
    app.kubernetes.io/managed-by: ibp-operator
    app.kubernetes.io/name: ibp
    helm.sh/chart: ibm-ibp
    release: operator
  name: ibpcas.ibp.com
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true    
  group: ibp.com
  names:
    kind: IBPCA
    listKind: IBPCAList
    plural: ibpcas
    singular: ibpca
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v210
    served: false
    storage: false
  - name: v212
    served: false
    storage: false
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your project.

kubectl apply -f ibpca-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibpcas.ibp.com created

6. Create the peer custom resource definition

Copy the following text to a file on your local system and save the file as ibppeer-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ibppeers.ibp.com
  labels:
    release: "operator"
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibppeer"
    app.kubernetes.io/managed-by: "ibp-operator"
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true    
  group: ibp.com
  names:
    kind: IBPPeer
    listKind: IBPPeerList
    plural: ibppeers
    singular: ibppeer
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your project.

kubectl apply -f ibppeer-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibppeers.ibp.com created

7. Create the orderer custom resource definition

Copy the following text to a file on your local system and save the file as ibporderer-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ibporderers.ibp.com
  labels:
    release: "operator"
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibporderer"
    app.kubernetes.io/managed-by: "ibp-operator"
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true    
  group: ibp.com
  names:
    kind: IBPOrderer
    listKind: IBPOrdererList
    plural: ibporderers
    singular: ibporderer
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your project.

kubectl apply -f ibporderer-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibporderers.ibp.com created

8. Create the console custom resource definition

Copy the following text to a file on your local system and save the file as ibpconsole-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ibpconsoles.ibp.com
  labels:
    release: "operator"
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibpconsole"
    app.kubernetes.io/managed-by: "ibp-operator"
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true
  group: ibp.com
  names:
    kind: IBPConsole
    listKind: IBPConsoleList
    plural: ibpconsoles
    singular: ibpconsole
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your project.

kubectl apply -f ibpconsole-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibpconsoles.ibp.com created
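You can optionally confirm that all four custom resource definitions are now present on your cluster:

kubectl get crd ibpcas.ibp.com ibppeers.ibp.com ibporderers.ibp.com ibpconsoles.ibp.com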

Step four: Update the ClusterRole

You need to update the ClusterRole that is applied to your project. Copy the following text to a file on your local system and save the file as ibp-clusterrole.yaml. Edit the file and replace <PROJECT_NAME> with the name of your project.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <PROJECT_NAME>
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - persistentvolumeclaims
  - persistentvolumes
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - 'get'
- apiGroups:
  - "*"
  resources:
  - pods
  - pods/log
  - services
  - endpoints
  - persistentvolumeclaims
  - persistentvolumes
  - events
  - configmaps
  - secrets
  - ingresses
  - roles
  - rolebindings
  - serviceaccounts
  - nodes
  - routes
  - routes/custom-host
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - namespaces
  - nodes
  verbs:
  - get
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  verbs:
  - get
  - create
- apiGroups:
  - apps
  resourceNames:
  - ibp-operator
  resources:
  - deployments/finalizers
  verbs:
  - update
- apiGroups:
  - ibp.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.openshift.io
  resources:
  - '*'
  verbs:
  - '*'

After you save and edit the file, run the following commands. Replace <PROJECT_NAME> with the name of your project.

oc apply -f ibp-clusterrole.yaml
oc adm policy add-scc-to-group <PROJECT_NAME> system:serviceaccounts:<PROJECT_NAME>
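You can optionally confirm that the ClusterRole was applied. Replace <PROJECT_NAME> as before:

oc get clusterrole <PROJECT_NAME>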

Step five: Upgrade the IBM Blockchain operator

You can upgrade the IBM Blockchain operator by fetching the operator deployment spec from your OpenShift project. When the operator upgrade is complete, the operator will upgrade your console and download the latest images for your blockchain nodes.

Log in to your cluster by using the OpenShift CLI. Because each IBM Blockchain network runs in a different project, you must switch to each OpenShift project and upgrade each network separately. Go to the OpenShift project of the network that you want to upgrade. Replace <PROJECT_NAME> with the name of your project.

oc project <PROJECT_NAME>

When you are operating from your project, run the following command to download the operator deployment spec to your local file system:

kubectl get deployment ibp-operator -o yaml > operator.yaml

Open operator.yaml in a text editor and save a new copy of the file as operator-upgrade.yaml. Open operator-upgrade.yaml in a text editor. You need to update the image: field with the updated version of the operator image. You can find the name and tag of the latest operator image below:

cp.icr.io/cp/ibp-operator:2.5.0-20201001-amd64

If you are upgrading from v2.1.0 or v2.1.1, then you also need to edit the env: section of the file. Find the following lines in operator-upgrade.yaml:

- name: ISOPENSHIFT
  value: "true"

Replace this section with the following lines at the same indentation:

- name: CLUSTERTYPE
  value: OPENSHIFT

When you are finished editing the file, the env: section should look similar to the following:

env:
- name: WATCH_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace
- name: POD_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
- name: OPERATOR_NAME
  value: ibp-operator
- name: CLUSTERTYPE
  value: OPENSHIFT

Save the file on your local system. You can then issue the following command to upgrade your operator:

kubectl apply -f operator-upgrade.yaml

You can use the kubectl get deployment ibp-operator -o yaml command to confirm that the operator spec was updated.
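As an alternative to editing the deployment spec by hand, the following one-line sketch performs the same image update. It assumes that the container inside the ibp-operator deployment is also named ibp-operator; check your operator.yaml if you are unsure. Note that this command updates only the image, so if you are upgrading from v2.1.0 or v2.1.1 you still need to apply the env: change described above.

kubectl set image deployment/ibp-operator ibp-operator=cp.icr.io/cp/ibp-operator:2.5.0-20201001-amd64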

After you apply the operator-upgrade.yaml operator spec to your OpenShift project, the operator will restart and pull the latest image. The upgrade takes about a minute. While the upgrade is taking place, you can still access your console UI. However, you cannot use the console to install and instantiate chaincode, or use the console or the APIs to create or remove a node.

You can check that the upgrade is complete by running kubectl get deployment. If the upgrade is successful, you can see output similar to the following, with ones displayed in each column for your operator and your console.

NAME           READY     UP-TO-DATE   AVAILABLE   AGE
ibp-operator   1/1       1            1           1m
ibpconsole     1/1       1            1           4m

If you experience a problem while you are upgrading the operator, go to this troubleshooting topic for a list of commonly encountered problems. You can run kubectl apply -f operator.yaml to reapply the original operator file and restore your original operator deployment.

Step six: Upgrade your blockchain nodes

After you upgrade your console, you can use the console UI to upgrade the nodes of your blockchain network. Browse to the console UI and open the nodes overview tab. You can find the Patch available text on a node tile if there is an update available for the component. You can install this patch whenever you are ready. These patches are optional, but they are recommended. You cannot patch nodes that were imported into the console.

Apply patches to nodes one at a time. Your nodes are unavailable to process requests or transactions while the patch is being applied. Therefore, to avoid any disruption of service, you need to ensure that another node of the same type is available to process requests whenever possible. Installing patches on a node takes about a minute to complete and when the update is complete, the node is ready to process requests.

To apply a patch to a node, open the node tile and click the Update available button.

Upgrading the IBM Blockchain Platform behind a firewall

If you deployed the IBM Blockchain Platform behind a firewall, without access to the external internet, you can upgrade your network by using the following steps:

  1. Pull the latest IBM Blockchain Platform images
  2. Create the ibpinfra project for the webhook
  3. Create a secret for your entitlement key
  4. Deploy the webhook and custom resource definitions to your OpenShift cluster
  5. Update the ClusterRole
  6. Upgrade the IBM Blockchain Platform operator
  7. Use your console to upgrade your running blockchain nodes

You can continue to submit transactions to your network while you are upgrading your network. However, you cannot use the console to deploy new nodes, install or instantiate smart contracts, or create new channels during the upgrade process.

Before you begin

To upgrade your network, you need to retrieve your entitlement key from the My IBM Dashboard, and create a Kubernetes secret to store the key on your OpenShift project. If the entitlement key secret was removed from your cluster, or if your key has expired, then you need to download another key and create a new secret.

Step one: Pull the latest IBM Blockchain Platform images

To upgrade your network, download the latest set of IBM Blockchain Platform images and push them to a docker registry that you can access from behind your firewall.

Use the following command to log in to the IBM Entitlement Registry:

docker login --username cp --password <KEY> cp.icr.io

After you log in, use the following command to pull the images for IBM Blockchain Platform 2.5:

docker pull cp.icr.io/cp/ibp-operator:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-init:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-console:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-grpcweb:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-deployer:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-fluentd:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-couchdb:2.3.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-peer:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-orderer:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-ca:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-dind:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-utilities:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-peer:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-orderer:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-chaincode-launcher:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-utilities:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-ccenv:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-goenv:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-nodeenv:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-javaenv:2.1.1-20201001-amd64
docker pull cp.icr.io/cp/ibp-crdwebhook:2.5.0-20201001-amd64
docker pull cp.icr.io/cp/ibp-ccenv:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-goenv:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-nodeenv:1.4.7-20201001-amd64
docker pull cp.icr.io/cp/ibp-javaenv:1.4.7-20201001-amd64

After you download the images, you must change the image tags to refer to your docker registry. Replace <LOCAL_REGISTRY> with the URL of your local registry and run the following commands:

docker tag cp.icr.io/cp/ibp-operator:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-operator:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-init:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-init:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-console:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-console:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-grpcweb:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-grpcweb:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-deployer:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-deployer:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-fluentd:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-fluentd:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-couchdb:2.3.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-couchdb:2.3.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-peer:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-peer:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-orderer:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-orderer:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-ca:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-ca:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-dind:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-dind:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-utilities:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-utilities:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-peer:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-peer:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-orderer:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-orderer:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-chaincode-launcher:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-chaincode-launcher:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-utilities:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-utilities:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-ccenv:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-ccenv:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-goenv:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-goenv:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-nodeenv:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-nodeenv:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-javaenv:2.1.1-20201001-amd64 <LOCAL_REGISTRY>/ibp-javaenv:2.1.1-20201001-amd64
docker tag cp.icr.io/cp/ibp-crdwebhook:2.5.0-20201001-amd64 <LOCAL_REGISTRY>/ibp-crdwebhook:2.5.0-20201001-amd64
docker tag cp.icr.io/cp/ibp-ccenv:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-ccenv:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-goenv:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-goenv:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-nodeenv:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-nodeenv:1.4.7-20201001-amd64
docker tag cp.icr.io/cp/ibp-javaenv:1.4.7-20201001-amd64 <LOCAL_REGISTRY>/ibp-javaenv:1.4.7-20201001-amd64

You can use the docker images command to check that the new tags were added. You can then push the images with the new tags to your docker registry. Log in to your registry by using the following command:

docker login --username <USER> --password <LOCAL_REGISTRY_PASSWORD> <LOCAL_REGISTRY>

Then, run the following command to push the images. Replace <LOCAL_REGISTRY> with the URL of your local registry.

docker push <LOCAL_REGISTRY>/ibp-operator:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-init:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-console:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-grpcweb:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-deployer:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-fluentd:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-couchdb:2.3.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-peer:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-orderer:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-ca:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-dind:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-utilities:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-peer:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-orderer:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-chaincode-launcher:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-utilities:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-ccenv:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-goenv:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-nodeenv:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-javaenv:2.1.1-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-crdwebhook:2.5.0-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-ccenv:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-goenv:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-nodeenv:1.4.7-20201001-amd64
docker push <LOCAL_REGISTRY>/ibp-javaenv:1.4.7-20201001-amd64
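Alternatively, a short loop can tag and push all of the pulled images in one pass. This sketch assumes a bash shell and that every image you pulled came from cp.icr.io/cp with an ibp- prefix; set LOCAL_REGISTRY to the URL of your local registry first:

LOCAL_REGISTRY="<LOCAL_REGISTRY>"
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^cp.icr.io/cp/ibp-'); do
  docker tag "$img" "${LOCAL_REGISTRY}/${img#cp.icr.io/cp/}"
  docker push "${LOCAL_REGISTRY}/${img#cp.icr.io/cp/}"
done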

After you complete these steps, you can use the following instructions to deploy the IBM Blockchain Platform with the images in your registry.

Step two: Create the ibpinfra project for the webhook

Because the platform has updated the internal API version from v1alpha1 in previous versions to v1alpha2 in 2.5, a Kubernetes conversion webhook is required to update the CA, peer, orderer, and console custom resources to the new API version. This webhook will continue to be used in the future, so new deployments of the platform are required to deploy it as well. The webhook is deployed to its own project, referred to as ibpinfra throughout these instructions.

After you log in to your cluster, you can create the new ibpinfra project for the Kubernetes conversion webhook by using the oc CLI. The new project needs to be created by a cluster administrator.

Run the following command to create the project:

oc new-project ibpinfra

When you create a new project, a new namespace is created with the same name as your project. You can verify the existence of the new namespace by using the oc get namespace command:

$ oc get namespace
NAME                                STATUS    AGE
ibpinfra                            Active    2m

Step three: Create a secret for your entitlement key

After you purchase the IBM Blockchain Platform, you can access the My IBM dashboard to obtain your entitlement key for the offering. You need to store the entitlement key on your cluster by creating a Kubernetes Secret. Kubernetes secrets are used to securely store the key on your cluster and pass it to the operator and the console deployments.

Run the following command to create the secret and add it to your ibpinfra namespace or project:

kubectl create secret docker-registry docker-key-secret --docker-server=cp.icr.io --docker-username=cp --docker-password=<KEY> --docker-email=<EMAIL> -n ibpinfra

The name of the secret that you are creating is docker-key-secret. It is required by the webhook that you deploy later. You can use the key only once per deployment; before you attempt another deployment, refresh the key and use the new value here.

Step four: Deploy the webhook and custom resource definitions to your OpenShift cluster

Before you can upgrade an existing network to 2.5, or deploy a new instance of the platform to your Kubernetes cluster, you need to create the conversion webhook by completing the steps in this section. The webhook is deployed to its own namespace or project, referred to as ibpinfra throughout these instructions.

The first three steps are for the deployment of the webhook. The last five steps create the custom resource definitions for the CA, peer, orderer, and console components that the IBM Blockchain Platform requires. You only have to deploy the webhook and custom resource definitions once per cluster. If you have already deployed the webhook and custom resource definitions to your cluster, you can skip the eight steps below.

1. Configure role-based access control (RBAC) for the webhook

Copy the following text to a file on your local system and save the file as rbac.yaml. This step allows the webhook to read and create a TLS secret in its own project.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: webhook
  namespace: ibpinfra
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: webhook
rules:
- apiGroups:
  - "*"
  resources:
  - secrets
  verbs:
  - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibpinfra
subjects:
- kind: ServiceAccount
  name: webhook
  namespace: ibpinfra
roleRef:
  kind: Role
  name: webhook
  apiGroup: rbac.authorization.k8s.io

Run the following command to add the file to your cluster definition:

kubectl apply -f rbac.yaml -n ibpinfra

When the command completes successfully, you should see something similar to:

serviceaccount/webhook created
role.rbac.authorization.k8s.io/webhook created
rolebinding.rbac.authorization.k8s.io/ibpinfra created

2. (OpenShift cluster only) Apply the Security Context Constraint

The IBM Blockchain Platform requires specific security and access policies to be added to the ibpinfra project. Copy the security context constraint object below and save it to your local system as ibpinfra-scc.yaml.

allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- NET_BIND_SERVICE
- CHOWN
- DAC_OVERRIDE
- SETGID
- SETUID
- FOWNER
apiVersion: security.openshift.io/v1
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups:
- system:cluster-admins
- system:authenticated
kind: SecurityContextConstraints
metadata:
  name: ibpinfra
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- "*"
priority: 1

After you save the file, run the following commands to add the file to your cluster and add the policy to your project.

oc apply -f ibpinfra-scc.yaml -n ibpinfra
oc adm policy add-scc-to-user ibpinfra system:serviceaccounts:ibpinfra

If the commands are successful, you can see a response that is similar to the following example:

securitycontextconstraints.security.openshift.io/ibpinfra created
scc "ibpinfra" added to: ["system:serviceaccounts:ibpinfra"]

3. Deploy the webhook

In order to deploy the webhook, you need to create two .yaml files and apply them to your Kubernetes cluster.

deployment.yaml

Copy the following text to a file on your local system and save the file as deployment.yaml. If you are deploying on OpenShift Container Platform 4.3 on LinuxONE, you need to replace amd64 with s390x.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: "ibp-webhook"
  labels:
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibp-webhook"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: "ibp-webhook"
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        helm.sh/chart: "ibm-ibp"
        app.kubernetes.io/name: "ibp"
        app.kubernetes.io/instance: "ibp-webhook"
      annotations:
        productName: "IBM Blockchain Platform"
        productID: "54283fa24f1a4e8589964e6e92626ec4"
        productVersion: "2.5.0"
    spec:
      serviceAccountName: webhook
      imagePullSecrets:
        - name: docker-key-secret
      hostIPC: false
      hostNetwork: false
      hostPID: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: "ibp-webhook"
          image: "cp.icr.io/cp/ibp-crdwebhook:2.5.0-20201001-amd64"
          imagePullPolicy: Always
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
            capabilities:
              drop:
              - ALL
              add:
              - NET_BIND_SERVICE
          env:
            - name: "LICENSE"
              value: "accept"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: server
              containerPort: 3000
          livenessProbe:
            httpGet:
              path: /healthz
              port: server
              scheme: HTTPS
            initialDelaySeconds: 30
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /healthz
              port: server
              scheme: HTTPS
            initialDelaySeconds: 26
            timeoutSeconds: 5
            periodSeconds: 5
          resources:
            requests:
              cpu: 0.1
              memory: "100Mi"

Run the following command to add the configuration to your cluster definition:

kubectl apply -n ibpinfra -f deployment.yaml

When the command completes successfully, you should see something similar to:

deployment.apps/ibp-webhook created

service.yaml

Second, copy the following text to a file on your local system and save the file as service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: "ibp-webhook"
  labels:
    type: "webhook"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibp-webhook"
    helm.sh/chart: "ibm-ibp"
spec:
  type: ClusterIP
  ports:
    - name: server
      port: 443
      targetPort: server
      protocol: TCP
  selector:
    app.kubernetes.io/instance: "ibp-webhook"

Run the following command to add the configuration to your cluster definition:

kubectl apply -n ibpinfra -f service.yaml

When the command completes successfully, you should see something similar to:

service/ibp-webhook created

4. Extract the certificate

Next, you need to extract the TLS certificate that was generated by the webhook deployment so that it can be used in the next steps. Run the following command, which uses the jq utility, to extract the certificate from the secret as a base64 encoded string:

kubectl get secret webhook-tls-cert -n ibpinfra -o json | jq -r .data.\"cert.pem\"

The output of this command is a base64 encoded string and looks similar to:

LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJoRENDQVNtZ0F3SUJBZ0lRZDNadkhZalN0KytKdTJXbFMvVDFzakFLQmdncWhrak9QUVFEQWpBU01SQXcKRGdZRFZRUUtFd2RKUWswZ1NVSlFNQjRYRFRJd01EWXdPVEUxTkRrME5sFORGsxTVZvdwpFakVRTUE0R0ExVUVDaE1IU1VKTklFbENVREJaTUJGcVRyV0Z4WFBhTU5mSUkrYUJ2RG9DQVFlTW3SUZvREFUQmdOVkhTVUVEREFLQmdncgpCZ0VGQlFjREFUQU1CZ05WSFJNQkFmOEVBakFBTUNvR0ExVWRFUVFqTUNHQ0gyTnlaQzEzWldKb2IyOXJMWE5sCmNuWnBZMlV1ZDJWaWFHOXZheTV6ZG1Nd0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFNb29kLy9zNGxYaTB2Y28KVjBOMTUrL0h6TkI1cTErSTJDdU9lb1c1RnR4MUFpRUEzOEFlVktPZnZSa0paN0R2THpCRFh6VmhJN2lBQVV3ZAo3ZStrOTA3TGFlTT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=

Save the base64 encoded string that is returned by this command to be used in the next steps when you create the custom resource definitions.

5. Create the CA custom resource definition

Copy the following text to a file on your local system and save the file as ibpca-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    app.kubernetes.io/instance: ibpca
    app.kubernetes.io/managed-by: ibp-operator
    app.kubernetes.io/name: ibp
    helm.sh/chart: ibm-ibp
    release: operator
  name: ibpcas.ibp.com
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true    
  group: ibp.com
  names:
    kind: IBPCA
    listKind: IBPCAList
    plural: ibpcas
    singular: ibpca
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v210
    served: false
    storage: false
  - name: v212
    served: false
    storage: false
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your namespace or project.

kubectl apply -f ibpca-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibpcas.ibp.com created

6. Create the peer custom resource definition

Copy the following text to a file on your local system and save the file as ibppeer-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ibppeers.ibp.com
  labels:
    release: "operator"
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibppeer"
    app.kubernetes.io/managed-by: "ibp-operator"
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true    
  group: ibp.com
  names:
    kind: IBPPeer
    listKind: IBPPeerList
    plural: ibppeers
    singular: ibppeer
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your namespace or project.

kubectl apply -f ibppeer-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibppeers.ibp.com created

7. Create the orderer custom resource definition

Copy the following text to a file on your local system and save the file as ibporderer-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ibporderers.ibp.com
  labels:
    release: "operator"
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibporderer"
    app.kubernetes.io/managed-by: "ibp-operator"
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true    
  group: ibp.com
  names:
    kind: IBPOrderer
    listKind: IBPOrdererList
    plural: ibporderers
    singular: ibporderer
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your namespace or project.

kubectl apply -f ibporderer-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibporderers.ibp.com created

8. Create the console custom resource definition

Copy the following text to a file on your local system and save the file as ibpconsole-crd.yaml.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ibpconsoles.ibp.com
  labels:
    release: "operator"
    helm.sh/chart: "ibm-ibp"
    app.kubernetes.io/name: "ibp"
    app.kubernetes.io/instance: "ibpconsole"
    app.kubernetes.io/managed-by: "ibp-operator"
spec:
  preserveUnknownFields: false
  conversion:
    strategy: Webhook
    webhookClientConfig:
      service:
        namespace: ibpinfra
        name: ibp-webhook
        path: /crdconvert
      caBundle: "<CABUNDLE>"
  validation:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true
  group: ibp.com
  names:
    kind: IBPConsole
    listKind: IBPConsoleList
    plural: ibpconsoles
    singular: ibpconsole
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false

Replace the value of <CABUNDLE> with the base64 encoded string that you extracted in step four (Extract the certificate).

Then, use the kubectl CLI to add the custom resource definition to your namespace or project.

kubectl apply -f ibpconsole-crd.yaml

You should see the following output when it is successful:

customresourcedefinition.apiextensions.k8s.io/ibpconsoles.ibp.com created

Step five: Update the ClusterRole

You need to update the ClusterRole that is applied to your project. Copy the following text to a file on your local system and save the file as ibp-clusterrole.yaml. Edit the file and replace <PROJECT_NAME> with the name of your project.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <PROJECT_NAME>
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - persistentvolumeclaims
  - persistentvolumes
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - 'get'
- apiGroups:
  - "*"
  resources:
  - pods
  - pods/log
  - services
  - endpoints
  - persistentvolumeclaims
  - persistentvolumes
  - events
  - configmaps
  - secrets
  - ingresses
  - roles
  - rolebindings
  - serviceaccounts
  - nodes
  - routes
  - routes/custom-host
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - namespaces
  - nodes
  verbs:
  - get
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  verbs:
  - get
  - create
- apiGroups:
  - apps
  resourceNames:
  - ibp-operator
  resources:
  - deployments/finalizers
  verbs:
  - update
- apiGroups:
  - ibp.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.openshift.io
  resources:
  - '*'
  verbs:
  - '*'

After you save and edit the file, run the following commands. Replace <PROJECT_NAME> with the name of your IBM Blockchain Platform deployment project.

oc apply -f ibp-clusterrole.yaml -n <PROJECT_NAME>
oc adm policy add-scc-to-group <PROJECT_NAME> system:serviceaccounts:<PROJECT_NAME>

Step six: Upgrade the IBM Blockchain operator

You can upgrade the IBM Blockchain operator by fetching the operator deployment spec from your OpenShift project. You can then update the spec with the latest operator image that you pushed to your local registry. When the operator upgrade is complete, the operator will download the images that you pushed to your local registry and upgrade your console.

Log in to your cluster by using the OpenShift CLI. Because each IBM Blockchain network runs in a different project, you must switch to each OpenShift project and upgrade each network separately. Go to the OpenShift project of the network that you want to upgrade. Replace <PROJECT_NAME> with the name of your project.

oc project <PROJECT_NAME>

When you are operating from your project, run the following command to download the operator deployment spec to your local file system:

kubectl get deployment ibp-operator -o yaml > operator.yaml

Open operator.yaml in a text editor and save a new copy of the file as operator-upgrade.yaml. Open operator-upgrade.yaml in a text editor. You need to update the image: field with the updated version of the operator image:

<LOCAL_REGISTRY>/ibp-operator:2.5.0-20201001-amd64

If you are upgrading from v2.1.0 or v2.1.1, then you also need to edit the env: section of the file. Find the following lines in operator-upgrade.yaml:

- name: ISOPENSHIFT
  value: "true"

Replace this section with the following lines at the same indentation:

- name: CLUSTERTYPE
  value: OPENSHIFT

When you are finished editing the file, the env: section should look similar to the following:

env:
- name: WATCH_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace
- name: POD_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
- name: OPERATOR_NAME
  value: ibp-operator
- name: CLUSTERTYPE
  value: OPENSHIFT

Save the file on your local system. You can then issue the following command to upgrade your operator:

kubectl apply -f operator-upgrade.yaml

You can use the kubectl get deployment ibp-operator -o yaml command to confirm that the operator spec was updated.

After you apply the operator-upgrade.yaml operator spec to your OpenShift project, the operator will restart and pull the latest image. The upgrade takes about a minute. While the upgrade is taking place, you can still access your console UI. However, you cannot use the console to install and instantiate chaincode, or use the console or the APIs to create or remove a node.

You can check that the upgrade is complete by running kubectl get deployment. If the upgrade is successful, you can see output similar to the following, with ones displayed in each column for your operator and your console.

NAME           READY     UP-TO-DATE   AVAILABLE   AGE
ibp-operator   1/1       1            1           1m
ibpconsole     1/1       1            1           4m

If you experience a problem while you are upgrading the operator, go to this troubleshooting topic for a list of commonly encountered problems.

If your console experiences an image pull error, you might need to update the console deployment spec with the local registry that you used to download the images. An image pull error looks similar to the following:

NAME                          READY     STATUS              RESTARTS   AGE
ibp-operator-b9446759-6tmls   1/1       Running             0          1m
ibpconsole-57ff4bcbb7-79dhn   0/4       Init:ErrImagePull   0          1m

This error can happen if you changed your registry URL between deployments. Run the following command to download the custom resource spec of the console:

kubectl get ibpconsole ibpconsole -o yaml > console.yaml

Then add the URL of your local registry to the spec: section of console.yaml. Replace <LOCAL_REGISTRY> with the URL of your local registry:

spec:
  registryURL: <LOCAL_REGISTRY>

Save the updated file as console-upgrade.yaml on your local system. You can then issue the following command to upgrade your console:

kubectl apply -f console-upgrade.yaml
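After you apply the change, you can optionally watch the console pods restart and confirm that the images are pulled from your local registry:

kubectl get pods -w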

Step seven: Upgrade your blockchain nodes

After you upgrade your console, you can use the console UI to upgrade the nodes of your blockchain network. For more information, see Upgrade your blockchain nodes.