Deploying IBM Support for Hyperledger Fabric behind a firewall
You can use these instructions to deploy IBM Support for Hyperledger Fabric behind a firewall without internet connectivity. If you are deploying the platform on a cluster with access to the external internet, use the main instructions for Deploying IBM Support for Hyperledger Fabric.
You can use the following instructions to deploy the IBM Support for Hyperledger Fabric on any x86_64 Kubernetes cluster running Kubernetes v1.32 or v1.33. Use these instructions if you are using open source Kubernetes or distributions such as Rancher. The IBM Support for Hyperledger Fabric uses a Kubernetes Operator to install the Fabric Operations Console on your cluster and manage the deployment and your blockchain nodes. When the Fabric Operations Console is running on your cluster, you can use the console to create blockchain nodes and operate a multicloud blockchain network.
Need to Know
-
If you are deploying the IBM Support for Hyperledger Fabric behind a firewall, without access to the public internet, your JavaScript or TypeScript smart contracts will not be able to download external dependencies when they are deployed. You need to point to a local NPM registry for your smart contracts to access the required dependencies. See Building Node.js Contracts with limited internet access. This problem does not occur if you are using smart contracts that are written in Go.
-
After you deploy your peer and ordering nodes, you need to expose the ports of your nodes for your network to be able to respond to requests from applications or nodes outside your firewall. For more information about the ports that you need to expose, see Internet Ports in the security guide.
Your Kubernetes cluster does not download and update the latest version of IBM Support for Hyperledger Fabric automatically. To get the latest update, you need to create a new cluster and a new service instance.
Resources required
Ensure that your Kubernetes cluster has sufficient resources for the Fabric Operations Console and for the blockchain nodes that you create. The amount of resources that are required can vary depending on your infrastructure, network design, and performance requirements. To help you deploy a cluster of the appropriate size, the default CPU, memory, and storage requirements for each component type are provided in this table. Your actual resource allocations are visible in your blockchain console when you deploy a node and can be adjusted at deployment time or after deployment according to your business needs.
| Component (all containers) | CPU** | Memory (GB) | Storage (GB) |
|---|---|---|---|
| Peer (Hyperledger Fabric v2.x) | 0.7 | 2.0 | 200 (includes 100GB for peer and 100GB for state database) |
| CA | 0.1 | 0.2 | 20 |
| Ordering node | 0.35 | 0.7 | 100 |
| Operator | 0.1 | 0.2 | 0 |
| Console | 1.2 | 2.4 | 10 |
| Webhook | 0.1 | 0.2 | 0 |
** These values can vary slightly. Actual VPC allocations are visible in the blockchain console when a node is deployed.
Note that when smart contracts are installed on peers that run a Fabric v2.x image, the smart contract is launched in its own pod instead of a separate container on the peer, which accounts for the smaller amount of resources required on the peer.
Browsers
The Fabric Operations Console has been successfully tested on the following browsers:
- Chrome Version 125.0.6422.141/142 (Official Build) (x86_64)
- Safari Version 17.5
- Firefox 126.0.1 (64-bit)
For a known issue when accessing the console URL by using the Chrome browser, see the Troubleshooting page.
Storage
In addition to a small amount of storage (10 GB) required by the Fabric Operations Console, persistent storage is required for each CA, peer, and ordering node that you deploy. Because blockchain components do not use the Kubernetes node local storage, network-attached remote storage is required so that blockchain nodes can fail over to a different Kubernetes worker node in the event of a node outage. And because you cannot change your storage type after deploying peer, CA, or ordering nodes, you need to decide the type of persistent storage that you want to use before you deploy any blockchain nodes.
The Fabric Operations Console uses dynamic provisioning to allocate storage for each blockchain node that you deploy by using a pre-defined storage class. You can choose your persistent storage from the available Kubernetes storage options. Each Storage Volume is provisioned with the RWO (Read Write Once) Access mode.
If the storage includes support for the volumeBindingMode: WaitForFirstConsumer setting, you should configure it to delay volume binding until the pod is scheduled. Read more in the Kubernetes Storage Classes documentation.
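As a sketch, a storage class that delays volume binding might look like the following. The class name and provisioner here are placeholders, not values the platform requires; substitute the provisioner for your storage backend:

```yaml
# Example only: "hlf-storage" and the provisioner are assumed placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hlf-storage
provisioner: example.csi.vendor.com   # replace with your CSI provisioner
reclaimPolicy: Retain
allowVolumeExpansion: true            # useful because the ledger only grows
volumeBindingMode: WaitForFirstConsumer
```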
Before you deploy the IBM Support for Hyperledger Fabric, you must create a storage class with enough backing storage for the Fabric Operations Console and the nodes that you create. You can set this storage class to use the default storage class of your Kubernetes cluster or create a new class that is used by the Fabric Operations Console. If you are using a multizone cluster in IBM Cloud and you change the default storage class definition, then you must configure the default storage class for each zone.
After you create the storage class, run the following command to set it as the default storage class. Replace <STORAGE_CLASS> with the name of your storage class; the annotation is the standard Kubernetes mechanism for marking a class as the default:
kubectl patch storageclass <STORAGE_CLASS> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
If you prefer not to choose a persistent storage option, the default storage class of your namespace or OpenShift project is used. Host-local volume storage is not supported.
Considerations when choosing your persistent storage
| Consideration | Recommendations |
|---|---|
| Type | Fabric supports three types of storage for your nodes: File, Block, and Portworx. All three types support snapshots, which are important for backups and disaster recovery. Portworx is also useful when you have multiple zones in your cluster. Object storage is not supported by Fabric but can be used for backups. |
| Performance | When you choose your storage, you need to factor in the read/write speed (IOPS/GB). IBM Support for Hyperledger Fabric suggests at least 2 IOPS/GB for a development or test environment and 4 IOPS/GB for production networks. Your results may vary depending on your use case and how your client application is written. |
| Scalability | When a node runs out of storage, it ceases to operate. Even if Kubernetes attempts to redeploy the node elsewhere, the node cannot operate while its storage is full. Because the ledger is immutable and can only grow over time, at some point you will run out of storage on your peers and ordering nodes, so expandable storage is valuable for blockchain networks whenever it is available. If the storage is not expandable, when a peer or ordering node runs out of storage, you need to provision a new node with a larger storage capacity and then delete the old one. When the new node is deployed, it must replicate the ledger, which can take time depending on the depth of the block history. |
| Monitoring | It's critical to monitor the storage consumption by your nodes. Consider the scenario where you deploy five ordering nodes, all with the same amount of storage. They are all replicating the same ledgers so they will all run out of storage at approximately the same time and you will lose consensus, causing the network to stop functioning. Therefore, you might want to consider varying the storage across the nodes and monitoring it as the ledger grows to avoid this situation. Before storage is exhausted on a node, you can expand it or provision a new node. |
| Encryption | Fabric does not require storage to be encrypted but it is a best practice for Security. You need to decide whether encryption is important for your business. If you have the option of encrypting the persistent volume, there may be some performance implications with encryption to consider. |
| High Availability (HA) | There should be redundancy in the storage to avoid a single point of failure. |
| Multi-zone capable storage | IBM Cloud includes the ability to create a single Kubernetes cluster across multiple data centers or "zones". Portworx offers multi-zone capable storage that can be used to potentially reduce the number of peers required. Consider a scenario where you build two zones with two peers for redundancy, one zone can go down and you still have two peers up in another zone. With multi-zone capable storage, you could instead have two zones with one peer each. If one zone goes down, the peer comes up in the other zone with its storage intact, reducing the overall redundant peer requirements. |
Before you begin
-
IBM Support for Hyperledger Fabric can be installed only on the Supported Platforms.
-
You need to install the kubectl CLI and connect to your cluster with it to deploy the platform.
-
If you are not running the platform on Red Hat OpenShift Container Platform or Red Hat Open Kubernetes Distribution, then you need to set up the NGINX Ingress controller and it needs to be running in SSL passthrough mode. For more information, see Considerations when using Kubernetes distributions.
-
If you have a Hardware Security Module (HSM) that you plan to use to generate and store the private key for your CA, peer, or ordering nodes, you need to create an HSM client image and push it to your container registry. Follow instructions in the advanced deployment topic to build the image.
Pull the IBM Support for Hyperledger Fabric images
You can download the complete set of IBM Support for Hyperledger Fabric images from the cpopen registry. To deploy the platform without access to the public internet, you need to pull the images and then push the images to a container registry that you can access from behind your firewall.
It is recommended that you use the skopeo utility to download and copy your images to your local container registry. skopeo is a tool for moving container images between different types of container storage. To download the platform images and copy them to your container registry behind your firewall, you first need to install skopeo.
Run the following set of commands to download the images and push them to your registry.
Replace:
- <LOCAL_REGISTRY> with the URL of your local container registry.
- <LOCAL_REGISTRY_USER> with the user ID with access to your container registry.
- <LOCAL_REGISTRY_PASSWORD> with the password to your container registry.
The following commands only work with a Docker container registry. Depending on the level of permissions required for the target location for the images, you might need to prefix each command with sudo.
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport-operator:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-operator:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport-crdwebhook:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-crdwebhook:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-console:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-console:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-deployer:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-deployer:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport-mustgather:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-mustgather:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-init:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-init:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-enroller:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-enroller:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-couchdb:3.3.3-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-couchdb:3.3.3-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-ca:1.5.15-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-ca:1.5.15-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-grpcweb:1.0.9-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-grpcweb:1.0.9-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-orderer:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-orderer:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-utilities:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-utilities:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-peer:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-peer:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-chaincode-launcher:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-chaincode-launcher:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-ccenv:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-ccenv:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-goenv:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-goenv:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-nodeenv:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-nodeenv:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
skopeo copy docker://icr.io/cpopen/ibm-hlfsupport/ibm-hlfsupport-javaenv:2.5.12-20251112-amd64 docker://<LOCAL_REGISTRY>/ibm-hlfsupport-javaenv:2.5.12-20251112-amd64 -q --dest-creds <LOCAL_REGISTRY_USER>:<LOCAL_REGISTRY_PASSWORD> --all
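As a sketch, you can generate the 18 near-identical commands above from a single image list instead of maintaining each line by hand. The gen_skopeo_cmds helper below is an assumption for illustration, not part of the product; it only prints the commands, and LOCAL_REGISTRY, LOCAL_REGISTRY_USER, and LOCAL_REGISTRY_PASSWORD are assumed to be exported in your shell. Review the output, then pipe it to sh to execute.

```shell
#!/bin/sh
# Hypothetical helper: print one "skopeo copy" command per platform image.
gen_skopeo_cmds() {
    for image in \
        ibm-hlfsupport-operator:1.0.9-20251112-amd64 \
        ibm-hlfsupport-crdwebhook:1.0.9-20251112-amd64 \
        ibm-hlfsupport-mustgather:1.0.9-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-console:1.0.9-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-deployer:1.0.9-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-init:1.0.9-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-enroller:1.0.9-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-couchdb:3.3.3-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-ca:1.5.15-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-grpcweb:1.0.9-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-orderer:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-utilities:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-peer:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-chaincode-launcher:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-ccenv:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-goenv:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-nodeenv:2.5.12-20251112-amd64 \
        ibm-hlfsupport/ibm-hlfsupport-javaenv:2.5.12-20251112-amd64
    do
        # The target keeps only the final image name, as in the commands
        # above (the ibm-hlfsupport/ sub-path is dropped).
        name="${image##*/}"
        echo "skopeo copy docker://icr.io/cpopen/${image}" \
             "docker://${LOCAL_REGISTRY}/${name}" \
             "-q --dest-creds ${LOCAL_REGISTRY_USER}:${LOCAL_REGISTRY_PASSWORD} --all"
    done
}
gen_skopeo_cmds
```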
After you complete these steps, you can use the following instructions to deploy the IBM Support for Hyperledger Fabric with the images in your registry.
Log in to your Kubernetes cluster
Before you can complete the next steps, you need to log in to your cluster by using the kubectl CLI. Follow the instructions for logging in to your cluster. To verify that the login was successful, list the pods in your default namespace by running the following command:
kubectl get pods
If successful, you can see the pods that are running in your default namespace:
docker-registry-7d8875c7c5-5fv5j 1/1 Running 0 7d
docker-registry-7d8875c7c5-x8dfq 1/1 Running 0 7d
registry-console-6c74fc45f9-nl5nw 1/1 Running 0 7d
router-6cc88df47c-hqjmk 1/1 Running 0 7d
router-6cc88df47c-mwzbq 1/1 Running 0 7d
Create the ibm-hlfsupport-infra namespace for the webhook
Because the platform has updated the internal API version from v1alpha1 in previous versions to v1beta1, a Kubernetes conversion webhook is required to update the CA, peer, orderer, and console to the new API version.
This webhook will continue to be used in the future, so new deployments of the platform are required to deploy it as well. The webhook is deployed to its own namespace, referred to as ibm-hlfsupport-infra throughout these instructions.
After you log in to your cluster, you can create the new ibm-hlfsupport-infra namespace for the Kubernetes conversion webhook using the kubectl CLI. The new namespace needs to be created by a cluster administrator.
Run the following command to create the namespace.
kubectl create namespace ibm-hlfsupport-infra
Set up the entitlement for a local registry
After you push the IBM Support for Hyperledger Fabric images to your own Docker registry, you need to store the password to that registry on your cluster by creating a Kubernetes Secret. Using a Kubernetes secret allows you to securely store the key on your cluster and pass it to the operator and the console deployments.
Run the following command to create the secret and add it to your ibm-hlfsupport-infra namespace or project:
kubectl create secret docker-registry cp-pull-secret --docker-server=<LOCAL_REGISTRY> --docker-username=<USER> --docker-password=<LOCAL_REGISTRY_PASSWORD> --docker-email=<EMAIL> -n <NAMESPACE>
- Replace <USER> with your username.
- Replace <EMAIL> with your email address.
- Replace <LOCAL_REGISTRY_PASSWORD> with the password to your registry.
- Replace <LOCAL_REGISTRY> with the URL of your local registry.
- Replace <NAMESPACE> with ibm-hlfsupport-infra.
The name of the secret that you are creating is cp-pull-secret. It is required by the webhook that you will deploy later. If you change the name of any of the secrets that you create, you need to change the corresponding name in future steps.
Deploy the webhook and custom resource definitions (CRDs)
Before you can deploy a new instance of the platform to your Kubernetes cluster, you need to create the conversion webhook by completing the steps in this section. The webhook is deployed to its own namespace or project, referred to as ibm-hlfsupport-infra throughout these instructions.
The first three steps deploy the webhook. The last step applies the custom resource definitions for the CA, peer, orderer, and console components that IBM Support for Hyperledger Fabric requires. You only have to deploy the webhook and custom resource definitions once per cluster. If you have already deployed them to your cluster, you can skip the four steps below.
1. Configure role-based access control (RBAC) for the webhook
First, copy the following text to a file on your local system and save the file as rbac.yaml. This step allows the webhook to read and create a TLS secret in its own project.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webhook
  namespace: ibm-hlfsupport-infra
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: webhook
rules:
  - apiGroups:
      - "*"
    resources:
      - secrets
    verbs:
      - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ibm-hlfsupport-infra
subjects:
  - kind: ServiceAccount
    name: webhook
    namespace: ibm-hlfsupport-infra
roleRef:
  kind: Role
  name: webhook
  apiGroup: rbac.authorization.k8s.io
Run the following command to add the file to your cluster definition:
kubectl apply -f rbac.yaml -n ibm-hlfsupport-infra
When the command completes successfully, you should see something similar to:
serviceaccount/webhook created
role.rbac.authorization.k8s.io/webhook created
rolebinding.rbac.authorization.k8s.io/ibm-hlfsupport-infra created
2. Deploy the webhook
In order to deploy the webhook, you need to create two .yaml files and apply them to your Kubernetes cluster.
deployment.yaml
Copy the following text to a file on your local system and save the file as deployment.yaml. Because your cluster cannot reach the public internet, edit the file and replace <LOCAL_REGISTRY> with the URL of the local registry where you pushed the images.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "ibm-hlfsupport-webhook"
  labels:
    helm.sh/chart: "ibm-hlfsupport"
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport-webhook"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: "ibm-hlfsupport-webhook"
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        helm.sh/chart: "ibm-hlfsupport"
        app.kubernetes.io/name: "ibm-hlfsupport"
        app.kubernetes.io/instance: "ibm-hlfsupport-webhook"
      annotations:
        productName: "IBM Support for Hyperledger Fabric"
        productID: "5d5997a033594f149a534a09802d60f1"
        productVersion: "1.0.9"
        productChargedContainers: ""
        productMetric: "VIRTUAL_PROCESSOR_CORE"
    spec:
      serviceAccountName: webhook
      imagePullSecrets:
        - name: cp-pull-secret
      hostIPC: false
      hostNetwork: false
      hostPID: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: "ibm-hlfsupport-webhook"
          image: "<LOCAL_REGISTRY>/ibm-hlfsupport-crdwebhook:1.0.9-20251112-amd64"
          imagePullPolicy: Always
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          env:
            - name: "LICENSE"
              value: "accept"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: SERVICE_NAME
              value: ibm-hlfsupport-webhook
          ports:
            - name: server
              containerPort: 3000
          livenessProbe:
            httpGet:
              path: /healthz
              port: server
              scheme: HTTPS
            initialDelaySeconds: 30
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /healthz
              port: server
              scheme: HTTPS
            initialDelaySeconds: 26
            timeoutSeconds: 5
            periodSeconds: 5
          resources:
            requests:
              cpu: 0.1
              memory: "100Mi"
Run the following command to add the file to your cluster definition:
kubectl apply -n ibm-hlfsupport-infra -f deployment.yaml
When the command completes successfully, you should see something similar to:
deployment.apps/ibm-hlfsupport-webhook created
service.yaml
Next, copy the following text to a file on your local system and save the file as service.yaml.
apiVersion: v1
kind: Service
metadata:
  name: "ibm-hlfsupport-webhook"
  labels:
    type: "webhook"
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport-webhook"
    helm.sh/chart: "ibm-hlfsupport"
spec:
  type: ClusterIP
  ports:
    - name: server
      port: 443
      targetPort: server
      protocol: TCP
  selector:
    app.kubernetes.io/instance: "ibm-hlfsupport-webhook"
Run the following command to add the file to your cluster definition:
kubectl apply -n ibm-hlfsupport-infra -f service.yaml
When the command completes successfully, you should see something similar to:
service/ibm-hlfsupport-webhook created
3. Extract the certificate and create the custom resource definitions (CRDs)
- Extract the webhook TLS certificate from the ibm-hlfsupport-infra namespace by running the following command:

export TLS_CERT=$(kubectl get secret/webhook-tls-cert -n ibm-hlfsupport-infra -o jsonpath='{.data.cert\.pem}')

- When you deploy IBM Support for Hyperledger Fabric, you need to apply the following four CRDs for the CA, peer, orderer, and console. Run the following four commands to apply or update each CRD.
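The webhook generates the webhook-tls-cert secret shortly after it starts, so TLS_CERT can come back empty if you run the export too early, and an empty caBundle will break the conversion webhook configuration in the CRDs. The check_tls_cert guard below is a hypothetical helper (an assumption, not part of the product) that you can run before applying the CRDs to fail fast in that case:

```shell
#!/bin/sh
# Hypothetical guard: refuse to continue when the extracted cert is empty.
check_tls_cert() {
    if [ -z "$1" ]; then
        echo "TLS_CERT is empty; wait for the webhook pod to be ready, then re-run the export" >&2
        return 1
    fi
    return 0
}
# Use without '|| true' in a script with 'set -e' to abort on failure.
check_tls_cert "${TLS_CERT}" || true
```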
Run this command to update the CA CRD:
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ibpcas.ibp.com
  labels:
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport"
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        caBundle: "${TLS_CERT}"
        service:
          name: ibm-hlfsupport-webhook
          namespace: ibm-hlfsupport-infra
          path: /crdconvert
      conversionReviewVersions:
        - v1beta1
        - v1alpha2
        - v1alpha1
  group: ibp.com
  names:
    kind: IBPCA
    listKind: IBPCAList
    plural: ibpcas
    singular: ibpca
  scope: Namespaced
  versions:
    - name: v1beta1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: true
      subresources:
        status: {}
    - name: v1alpha2
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
    - name: v210
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: false
      storage: false
      subresources:
        status: {}
    - name: v212
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: false
      storage: false
      subresources:
        status: {}
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
status:
  acceptedNames:
    kind: IBPCA
    listKind: IBPCAList
    plural: ibpcas
    singular: ibpca
  conditions: []
  storedVersions:
    - v1beta1
EOF
Depending on whether you are creating or updating the CRD, when successful, you should see:
customresourcedefinition.apiextensions.k8s.io/ibpcas.ibp.com created
or
customresourcedefinition.apiextensions.k8s.io/ibpcas.ibp.com configured
Run this command to update the peer CRD:
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ibppeers.ibp.com
  labels:
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport"
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        caBundle: "${TLS_CERT}"
        service:
          name: ibm-hlfsupport-webhook
          namespace: ibm-hlfsupport-infra
          path: /crdconvert
      conversionReviewVersions:
        - v1beta1
        - v1alpha2
        - v1alpha1
  group: ibp.com
  names:
    kind: IBPPeer
    listKind: IBPPeerList
    plural: ibppeers
    singular: ibppeer
  scope: Namespaced
  versions:
    - name: v1beta1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: true
      subresources:
        status: {}
    - name: v1alpha2
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
status:
  acceptedNames:
    kind: IBPPeer
    listKind: IBPPeerList
    plural: ibppeers
    singular: ibppeer
  conditions: []
  storedVersions:
    - v1beta1
EOF
When successful, you should see:
customresourcedefinition.apiextensions.k8s.io/ibppeers.ibp.com created
or
customresourcedefinition.apiextensions.k8s.io/ibppeers.ibp.com configured
Run this command to update the console CRD:
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ibpconsoles.ibp.com
  labels:
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport"
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        caBundle: "${TLS_CERT}"
        service:
          name: ibm-hlfsupport-webhook
          namespace: ibm-hlfsupport-infra
          path: /crdconvert
      conversionReviewVersions:
        - v1beta1
        - v1alpha2
        - v1alpha1
  group: ibp.com
  names:
    kind: IBPConsole
    listKind: IBPConsoleList
    plural: ibpconsoles
    singular: ibpconsole
  scope: Namespaced
  versions:
    - name: v1beta1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: true
      subresources:
        status: {}
    - name: v1alpha2
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
status:
  acceptedNames:
    kind: IBPConsole
    listKind: IBPConsoleList
    plural: ibpconsoles
    singular: ibpconsole
  conditions: []
  storedVersions:
    - v1beta1
EOF
When successful, you should see:
customresourcedefinition.apiextensions.k8s.io/ibpconsoles.ibp.com created
or
customresourcedefinition.apiextensions.k8s.io/ibpconsoles.ibp.com configured
Run this command to update the orderer CRD:
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ibporderers.ibp.com
  labels:
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport"
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        caBundle: "${TLS_CERT}"
        service:
          name: ibm-hlfsupport-webhook
          namespace: ibm-hlfsupport-infra
          path: /crdconvert
      conversionReviewVersions:
        - v1beta1
        - v1alpha2
        - v1alpha1
  group: ibp.com
  names:
    kind: IBPOrderer
    listKind: IBPOrdererList
    plural: ibporderers
    singular: ibporderer
  scope: Namespaced
  versions:
    - name: v1beta1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: true
      subresources:
        status: {}
    - name: v1alpha2
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          x-kubernetes-preserve-unknown-fields: true
      served: true
      storage: false
      subresources:
        status: {}
status:
  acceptedNames:
    kind: IBPOrderer
    listKind: IBPOrdererList
    plural: ibporderers
    singular: ibporderer
  conditions: []
  storedVersions:
    - v1beta1
EOF
When successful, you should see:
customresourcedefinition.apiextensions.k8s.io/ibporderers.ibp.com created
or
customresourcedefinition.apiextensions.k8s.io/ibporderers.ibp.com configured
Create a components namespace for your IBM Support for Hyperledger Fabric deployment
Next, you need to create a second project for your deployment of the IBM Support for Hyperledger Fabric. You (as a cluster administrator) can create a components namespace by using the kubectl CLI.
If you are using the CLI, create a new components namespace with the following command:
kubectl create namespace <NAMESPACE>
Replace <NAMESPACE> with the name for the IBM Support for Hyperledger Fabric deployment components namespace.
You must create a components namespace for each blockchain network that you deploy with the IBM Support for Hyperledger Fabric. For example, if you plan to create different networks for development, staging, and production, then you need to create a unique namespace for each environment. Using a separate namespace provides each network with separate resources and allows you to set unique access policies for each network. You need to follow these deployment instructions to deploy a separate operator and console for each components namespace.
After creating the components namespace, overwrite the label as follows:
kubectl label --overwrite ns <NAMESPACE> pod-security.kubernetes.io/enforce=baseline
You can also use the CLI to find the available storage classes for your cluster. If you created a new storage class for your deployment, that storage class must be visible in the output of the following command:
kubectl get storageclasses
If you are not using the default storage class, additional configuration is required. See Storage for the considerations.
Set up the entitlement for a namespace
You already set up the entitlement for a local registry in the ibm-hlfsupport-infra namespace or project. Now you need to create one in your IBM Support for Hyperledger Fabric namespace or project.
Run the following command to create the secret and add it to your namespace:
kubectl create secret docker-registry cp-pull-secret --docker-server=<LOCAL_REGISTRY> --docker-username=<USER> --docker-password=<LOCAL_REGISTRY_PASSWORD> --docker-email=<EMAIL> -n <NAMESPACE>
- Replace <USER> with your username.
- Replace <EMAIL> with your email address.
- Replace <LOCAL_REGISTRY_PASSWORD> with the password to your registry.
- Replace <LOCAL_REGISTRY> with the URL of your local registry.
- Replace <NAMESPACE> with the name of your project or namespace.
The name of the secret that you are creating is cp-pull-secret. This value is used by the operator to deploy the offering in future steps. If you change the name of any of the secrets that you create, you need to change the corresponding name in future steps.
Add security and access policies
The IBM Support for Hyperledger Fabric requires specific security and access policies to be added to your namespace. The contents of a set of .yaml files are provided here for you to copy and edit to define the security policies. You must save these files to your local system and then add them to your namespace by using the kubectl CLI. These steps need to be completed by a cluster administrator. Also, be aware that the peer init and dind containers that get deployed are required to run in privileged mode.
Apply the ClusterRole
Copy the following text to a file on your local system and save the file as ibm-hlfsupport-clusterrole.yaml. This file defines the required ClusterRole. Edit the file and replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace.
#*******************************************************************************
# © Copyright IBM Corporation 2021
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#*******************************************************************************
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <NAMESPACE>
  labels:
    release: "operator"
    helm.sh/chart: "ibm-hlfsupport"
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport-operator"
rules:
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - persistentvolumeclaims
      - persistentvolumes
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
  - apiGroups:
      - route.openshift.io
    resources:
      - routes
      - routes/custom-host
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/log
      - persistentvolumeclaims
      - persistentvolumes
      - services
      - endpoints
      - events
      - configmaps
      - secrets
      - nodes
      - serviceaccounts
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
  - apiGroups:
      - "authorization.openshift.io"
      - "rbac.authorization.k8s.io"
    resources:
      - roles
      - rolebindings
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
      - bind
      - escalate
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - apps
    resources:
      - deployments
      - daemonsets
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
    verbs:
      - get
      - create
  - apiGroups:
      - apps
    resourceNames:
      - ibm-hlfsupport-operator
    resources:
      - deployments/finalizers
    verbs:
      - update
  - apiGroups:
      - ibp.com
    resources:
      - ibpcas.ibp.com
      - ibppeers.ibp.com
      - ibporderers.ibp.com
      - ibpconsoles.ibp.com
      - ibpcas
      - ibppeers
      - ibporderers
      - ibpconsoles
      - ibpcas/finalizers
      - ibppeers/finalizers
      - ibporderers/finalizers
      - ibpconsoles/finalizers
      - ibpcas/status
      - ibppeers/status
      - ibporderers/status
      - ibpconsoles/status
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
  - apiGroups:
      - extensions
      - networking.k8s.io
      - config.openshift.io
    resources:
      - ingresses
      - networkpolicies
    verbs:
      - get
      - list
      - create
      - update
      - patch
      - watch
      - delete
      - deletecollection
After you save and edit the file, run the following command.
kubectl apply -f ibm-hlfsupport-clusterrole.yaml -n <NAMESPACE>
Replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace.
Apply the ClusterRoleBinding
Copy the following text to a file on your local system and save the file as ibm-hlfsupport-clusterrolebinding.yaml. This file defines the ClusterRoleBinding. Edit the file and replace <NAMESPACE> with the name
of your IBM Support for Hyperledger Fabric deployment namespace.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <NAMESPACE>
  labels:
    release: "operator"
    helm.sh/chart: "ibm-hlfsupport"
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport-operator"
subjects:
  - kind: ServiceAccount
    name: default
    namespace: <NAMESPACE>
roleRef:
  kind: ClusterRole
  name: <NAMESPACE>
  apiGroup: rbac.authorization.k8s.io
After you save and edit the file, run the following command.
kubectl apply -f ibm-hlfsupport-clusterrolebinding.yaml -n <NAMESPACE>
Replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace.
Create the cluster role binding
After applying the policies, you must grant your service account the required level of permissions to deploy your console. Run the following command with the name of your target namespace:
kubectl -n <NAMESPACE> create clusterrolebinding ibm-hlfsupport-operator-clusterrolebinding --clusterrole=<NAMESPACE> --group=system:serviceaccounts:<NAMESPACE>
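To confirm that the binding grants the expected permissions, you can impersonate the default service account with `kubectl auth can-i`. This is a sketch; `blockchain-project` is a placeholder namespace:

```shell
# Check that the default service account in the target namespace can
# create one of the custom resources that the operator manages.
kubectl auth can-i create ibppeers.ibp.com \
  --as=system:serviceaccount:blockchain-project:default \
  -n blockchain-project
# Should print "yes" if the cluster role binding is in place.
```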
Deploy the Hyperledger Fabric operator
The IBM Support for Hyperledger Fabric uses an operator to install the Fabric Operations Console. You can deploy the operator on your cluster by applying an operator deployment to your namespace by using the kubectl CLI. The deployment pulls the operator image from your local registry and starts it on your cluster.
Copy the following text to a file on your local system and save the file as ibm-hlfsupport-operator.yaml.
Edit the file and replace image: icr.io/cpopen/ with image: <LOCAL_REGISTRY>/, where <LOCAL_REGISTRY> is the URL of your local registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ibm-hlfsupport-operator
  labels:
    release: "operator"
    helm.sh/chart: "ibm-hlfsupport"
    app.kubernetes.io/name: "ibm-hlfsupport"
    app.kubernetes.io/instance: "ibm-hlfsupport"
    app.kubernetes.io/managed-by: "ibm-hlfsupport-operator"
spec:
  replicas: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      name: ibm-hlfsupport-operator
  template:
    metadata:
      labels:
        name: ibm-hlfsupport-operator
        release: "operator"
        helm.sh/chart: "ibm-hlfsupport"
        app.kubernetes.io/name: "ibm-hlfsupport"
        app.kubernetes.io/instance: "ibm-hlfsupport"
        app.kubernetes.io/managed-by: "ibm-hlfsupport-operator"
      annotations:
        productName: "IBM Support for Hyperledger Fabric"
        productID: "5d5997a033594f149a534a09802d60f1"
        productVersion: "1.0.0"
        productChargedContainers: ""
        productMetric: "VIRTUAL_PROCESSOR_CORE"
    spec:
      hostIPC: false
      hostNetwork: false
      hostPID: false
      serviceAccountName: default
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 2000
      imagePullSecrets:
        - name: cp-pull-secret
      containers:
        - name: ibm-hlfsupport-operator
          image: icr.io/cpopen/ibm-hlfsupport-operator:1.0.9-20251112-amd64
          command:
            - ibp-operator
          imagePullPolicy: Always
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: false
            runAsNonRoot: true
            runAsUser: 7051
            runAsGroup: 7051
            seccompProfile:
              type: RuntimeDefault
            capabilities:
              drop:
                - ALL
              add:
                - CHOWN
                - FOWNER
          livenessProbe:
            tcpSocket:
              port: 8383
            initialDelaySeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            tcpSocket:
              port: 8383
            initialDelaySeconds: 10
            timeoutSeconds: 5
            periodSeconds: 5
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "ibm-hlfsupport-operator"
            - name: CLUSTERTYPE
              value: K8S
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
            limits:
              cpu: 100m
              memory: 200Mi
- If you changed the name of the Docker key secret, edit the name: cp-pull-secret field to match the name that you used.
Then, use the kubectl CLI to apply the deployment to your namespace.
kubectl apply -f ibm-hlfsupport-operator.yaml -n <NAMESPACE>
Replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace.
You can confirm that the operator deployed by running the command kubectl get deployment -n <NAMESPACE>. If your operator deployment is successful, the command returns a table similar to the following, with 1 displayed in the READY, UP-TO-DATE, and AVAILABLE columns. The operator takes about a minute to deploy.
NAME READY UP-TO-DATE AVAILABLE AGE
ibm-hlfsupport-operator 1/1 1 1 1m
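Rather than polling the deployment table, you can wait on the rollout and inspect the operator logs if it does not complete. This is a sketch; `blockchain-project` is a placeholder namespace:

```shell
# Block until the operator deployment reports ready, or time out after 2 minutes.
kubectl rollout status deployment/ibm-hlfsupport-operator \
  -n blockchain-project --timeout=120s

# If the rollout stalls, inspect the operator logs by its pod label.
kubectl logs -l name=ibm-hlfsupport-operator -n blockchain-project
```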
Deploy the Fabric Operations Console
When the operator is running on your namespace, you can apply a custom resource to start the Fabric Operations Console on your cluster. You can then access the console from your browser. You can deploy only one console per Kubernetes namespace.
Save the following custom resource as ibm-hlfsupport-console.yaml on your local system:
apiVersion: ibp.com/v1beta1
kind: IBPConsole
metadata:
  name: ibm-hlfsupport-console
spec:
  arch:
    - amd64
  license:
    accept: false
  serviceAccountName: default
  email: "<EMAIL>"
  password: "<PASSWORD>"
  registryURL: icr.io/cpopen/ibm-hlfsupport
  imagePullSecrets:
    - cp-pull-secret
  networkinfo:
    domain: <DOMAIN>
  storage:
    console:
      class: ""
      size: 5Gi
  usetags: true
  version: 1.0.0
Accept the license:
- Accept the IBM Support for Hyperledger Fabric license by replacing the license parameter accept: false with accept: true.
Specify the external endpoint information of the console in the ibm-hlfsupport-console.yaml file:
- Replace registryURL: icr.io/cpopen/ibm-hlfsupport with the URL of your local registry.
- Replace <DOMAIN> with the name of your cluster domain. You need to make sure that this domain points to the load balancer of your cluster.
Provide the user name and password that are used to access the console for the first time:
- Replace <EMAIL> with the email address of the console administrator.
- Replace <PASSWORD> with the password of your choice. This password also becomes the default password of the console until it is changed.
You might need to make additional edits to the file depending on your choices in the deployment process:
- If you changed the name of your Docker key secret, change the corresponding value of the imagePullSecrets: field.
- If you created a new storage class for your network, provide the storage class that you created in the class: field.
Because you can only run the following command once, you should review the Advanced deployment options in case any of the options are relevant to your configuration before you install the console. For example, if you are deploying your console on a multizone cluster, you need to configure that before you run the following step to install the console.
After you update the file, you can use the CLI to install the console.
kubectl apply -f ibm-hlfsupport-console.yaml -n <NAMESPACE>
Replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace. The console can take a few minutes to deploy.
Advanced deployment options
You can edit the ibm-hlfsupport-console.yaml file to allocate more resources to your console or use zones for high availability in a multizone cluster. To take advantage of these deployment options, you can use the console resource
definition with the resources: and clusterdata: sections added:
apiVersion: ibp.com/v1beta1
kind: IBPConsole
metadata:
  name: ibm-hlfsupport-console
spec:
  arch:
    - amd64
  license:
    accept: false
  serviceAccountName: default
  email: "<EMAIL>"
  password: "<PASSWORD>"
  registryURL: icr.io/cpopen/ibm-hlfsupport
  imagePullSecrets:
    - cp-pull-secret
  networkinfo:
    domain: <DOMAIN>
  storage:
    console:
      class: ""
      size: 5Gi
  clusterdata:
    zones:
  resources:
    console:
      requests:
        cpu: 500m
        memory: 1000Mi
      limits:
        cpu: 500m
        memory: 1000Mi
    configtxlator:
      limits:
        cpu: 25m
        memory: 50Mi
      requests:
        cpu: 25m
        memory: 50Mi
    couchdb:
      limits:
        cpu: 500m
        memory: 1000Mi
      requests:
        cpu: 500m
        memory: 1000Mi
    deployer:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
  usetags: true
  version: 1.0.0
- Reminder: Replace registryURL: icr.io/cpopen/ibm-hlfsupport with the URL of your local registry and accept the license.
- You can use the resources: section to allocate more resources to your console. The values in the example file are the default values allocated to each container. Allocating more resources to your console allows you to operate a larger number of nodes or channels. You can allocate more resources to a currently running console by editing the resource file and applying it to your cluster. The console restarts and returns to its previous state, allowing you to operate all of your existing nodes and channels.
- If you plan to use the console with a multizone Kubernetes cluster, you need to add the zones to the clusterdata.zones: section of the file. When zones are provided to the deployment, you can select the zone that a node is deployed to by using the console or the APIs. As an example, if you are deploying to a cluster across the zones of dal10, dal12, and dal13, you would add the zones to the file as follows:
  clusterdata:
    zones:
      - dal10
      - dal12
      - dal13
When you finish editing the file, apply it to your cluster.
kubectl apply -f ibm-hlfsupport-console.yaml -n <NAMESPACE>
Unlike the resource allocation, you cannot add zones to a running network. If you have already deployed a console and used it to create nodes on your cluster, adding zones causes you to lose your previous work. After the console restarts, you need to deploy new nodes.
Use your own TLS Certificates (Optional)
The Fabric Operations Console uses TLS certificates to secure the communication between the console and your blockchain nodes and between the console and your browser. You have the option of creating your own TLS certificates and providing them to the console by using a Kubernetes secret. If you skip this step, the console creates its own self-signed TLS certificates during deployment.
This step needs to be performed before the console is deployed.
You can use a Certificate Authority or tool to create the TLS certificates for the console. The TLS certificate needs to include the hostname of the console and the proxy in the subject name or the alternative domain names. The console and proxy hostname are in the following format:
Console hostname: <NAMESPACE>-ibm-hlfsupport-console-console.<DOMAIN>
Proxy hostname: <NAMESPACE>-ibm-hlfsupport-console-proxy.<DOMAIN>
- Replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace.
- Replace <DOMAIN> with the name of your cluster domain.
Navigate to the TLS certificates that you plan to use on your local system. Name the TLS certificate tlscert.pem and the corresponding private key tlskey.pem. Run the following command to create the Kubernetes secret
and add it to your Kubernetes namespace. The TLS certificate and key need to be in PEM format.
kubectl create secret generic console-tls-secret --from-file=tls.crt=./tlscert.pem --from-file=tls.key=./tlskey.pem -n <NAMESPACE>
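If you need certificates for testing, the following OpenSSL sketch creates a self-signed certificate whose subject alternative names cover both the console and proxy hostnames. It requires OpenSSL 1.1.1 or later; `blockchain-project` and `example.com` are placeholder values, and production deployments would normally use CA-signed certificates instead:

```shell
# Generate a private key and self-signed certificate with the console and
# proxy hostnames as subject alternative names, valid for 365 days.
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -days 365 \
  -keyout tlskey.pem -out tlscert.pem \
  -subj "/CN=blockchain-project-ibm-hlfsupport-console-console.example.com" \
  -addext "subjectAltName=DNS:blockchain-project-ibm-hlfsupport-console-console.example.com,DNS:blockchain-project-ibm-hlfsupport-console-proxy.example.com"

# Inspect the certificate to confirm the subject alternative names.
openssl x509 -in tlscert.pem -noout -text | grep -A1 "Subject Alternative Name"
```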
After you create the secret, add the following field to the spec: section of ibm-hlfsupport-console.yaml with one indent added, at the same level as the resources: and clusterdata: sections
of the advanced deployment options. You must provide the name of the TLS secret that you created to the field. The following example deploys a console with the TLS certificate and key stored in a secret named "console-tls-secret".
Replace "<CONSOLE_TLS_SECRET_NAME>" with "console-tls-secret" unless you used a different name for the secret.
apiVersion: ibp.com/v1beta1
kind: IBPConsole
metadata:
  name: ibm-hlfsupport-console
spec:
  arch:
    - amd64
  license:
    accept: false
  serviceAccountName: default
  email: "<EMAIL>"
  password: "<PASSWORD>"
  registryURL: icr.io/cpopen/ibm-hlfsupport
  imagePullSecrets:
    - cp-pull-secret
  networkinfo:
    domain: <DOMAIN>
  storage:
    console:
      class: default
      size: 10Gi
  usetags: true
  tlsSecretName: "<CONSOLE_TLS_SECRET_NAME>"
- Reminder: Replace registryURL: icr.io/cpopen/ibm-hlfsupport with the URL of your local registry and accept the license.
When you finish editing the file, you can apply it to your cluster in order to secure communications with your own TLS certificates:
kubectl apply -f ibm-hlfsupport-console.yaml -n <NAMESPACE>
Verifying the console installation
You can confirm that the console deployed by running the command kubectl get deployment -n <NAMESPACE>. If your console deployment is successful, you can see ibm-hlfsupport-console added to the deployment table, with 1 displayed in the READY, UP-TO-DATE, and AVAILABLE columns. The console takes a few minutes to deploy. You might need to rerun the command and wait for the table to be updated.
NAME READY UP-TO-DATE AVAILABLE AGE
ibm-hlfsupport-operator 1/1 1 1 10m
ibm-hlfsupport-console 1/1 1 1 4m
The console consists of four containers that are deployed inside a single pod:
- optools: The console UI.
- deployer: A tool that allows your console to communicate with your deployments.
- configtxlator: A tool used by the console to read and create channel updates.
- couchdb: An instance of CouchDB that stores the data from your console, including your authorization information.
If there is an issue with your deployment, you can view the logs from an individual container. First, run the following command to get the name of the console pod:
kubectl get pods -n <NAMESPACE>
Then, use the following command to get the logs from one of the four containers inside the pod:
kubectl logs -f <pod_name> <container_name> -n <NAMESPACE>
As an example, a command to get the logs from the UI container would look like the following example:
kubectl logs -f ibm-hlfsupport-console-55cf9db6cc-856nz console -n blockchain-project
Log in to the console
You can use your browser to access the console by using the console URL:
https://<NAMESPACE>-ibm-hlfsupport-console-console.<DOMAIN>:443
- Replace <NAMESPACE> with the name of your IBM Support for Hyperledger Fabric deployment namespace.
- Replace <DOMAIN> with the name of your cluster domain. You passed this value to the domain: field of the ibm-hlfsupport-console.yaml file.
Your console URL looks similar to the following example:
https://blockchain-project-ibm-hlfsupport-console-console.xyz.abc.com:443
If you navigate to the console URL in your browser, you can see the console log in screen:
- For the User ID, use the value you provided for the email: field in the ibm-hlfsupport-console.yaml file.
- For the Password, use the value you provided for the password: field in the ibm-hlfsupport-console.yaml file. This password becomes the default password that all new users use to log in to the console. After you log in for the first time, you are asked to provide a new password that you can use to log in to the console.
If you are unable to log in, ensure that you are not using the ESR version of Firefox. If you are, switch to another browser such as Chrome and log in. Otherwise, clear your browser cache and try logging in again.
The administrator who provisions the console can grant access to other users and restrict the actions they can perform. For more information, see Managing users from the console.
Next steps
When you access your console, you can view the nodes tab of your console UI. You can use this screen to deploy components on the cluster where you deployed the console. See the Build a network tutorial to get started with the console. You can also use this tab to operate nodes that are created on other clouds. For more information, see Importing nodes.
To learn how to manage the users that can access the console, view the logs of your console and your blockchain components, see Administering your console.
Considerations when using Kubernetes distributions
Before you attempt to install the IBM Support for Hyperledger Fabric on Azure Kubernetes Service, Amazon Web Services, Rancher, Amazon Elastic Kubernetes Service, or Google Kubernetes Engine, you should perform the following steps. Refer to your Kubernetes distribution documentation for more details.
- Ensure that a load balancer with a public IP is configured in front of the Kubernetes cluster.
- Create a DNS entry for the IP address of the load balancer.
- Create a wildcard host entry in DNS for the load balancer. This entry is a DNS A record with a wildcard host. For example, if the DNS entry for the load balancer is test.example.com, the wildcard entry would be *.test.example.com, which ultimately resolves to test.example.com. When this host entry is configured, the following examples should all resolve to test.example.com:
  console.test.example.com
  peer.test.example.com
  You can use nslookup to verify that DNS is configured correctly:
  $ nslookup console.test.example.com
- Use the DNS entry for the load balancer as the Domain name during the installation of IBM Support for Hyperledger Fabric.
- The NGINX ingress controller must be used. See the ingress controller installation guide, which can be used for most Kubernetes distributions. If you are using IBM Cloud Kubernetes Service, refer to these instructions for specific configuration information.
- Use the following instructions to edit the NGINX ingress controller deployment to enable ssl-passthrough, or refer to the Kubernetes instructions. This example might not be exact for your installation; the key is to ensure that the last line, which enables ssl-passthrough, is present:
  /nginx-ingress-controller
  --configmap=$(POD_NAMESPACE)/nginx-configuration
  --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  --publish-service=$(POD_NAMESPACE)/ingress-nginx
  --annotations-prefix=nginx.ingress.kubernetes.io
  --enable-ssl-passthrough=true
- Verify that all pods are running before you attempt to install the IBM Support for Hyperledger Fabric.
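If your ingress controller was installed from the community manifests, one way to add the ssl-passthrough flag is a JSON patch against the controller deployment. This is a sketch; the deployment name `ingress-nginx-controller` and namespace `ingress-nginx` are assumptions that vary by installation:

```shell
# Append --enable-ssl-passthrough=true to the controller's argument list.
kubectl patch deployment ingress-nginx-controller -n ingress-nginx \
  --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough=true"}]'

# Confirm that the flag is now present in the deployment spec.
kubectl get deployment ingress-nginx-controller -n ingress-nginx -o yaml \
  | grep ssl-passthrough
```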
You can now resume your installation.
Considerations when using IBM Cloud Kubernetes Service
The IBM Support for Hyperledger Fabric service requires additional configuration when installing on IBM Cloud Kubernetes Service (IKS). Additional steps are needed if a Virtual Private Cloud (VPC) is used, and the application load balancer (ALB) requires additional configuration.
Considerations for Virtual Private Cloud (VPC)
If you are installing on IBM Cloud using a Virtual Private Cloud (VPC), be sure to use a storage class that has a volumeBindingMode of WaitForFirstConsumer. See Storage.
All VPC subnets must have a public gateway enabled. Refer to Configuring VPC subnets.
Configure the Application Load Balancer (ALB)
The nginx ingress class is required for IBM Support for Hyperledger Fabric. First check if the nginx ingress class exists, as follows:
kubectl get ingressclasses.networking.k8s.io
If the nginx ingress class does not exist, create a file named nginx.yaml and add the following contents:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: "k8s.io/ingress-nginx"
Then apply the nginx ingress class, as follows:
kubectl apply -f nginx.yaml
To override the Kubernetes Ingress configuration and enable SSL passthrough, follow instructions to customize the ALB deployment by creating a configmap and applying it to your cluster. For each ALB, you need to set the value of "enableSslPassthrough" and "ingressClass" as follows:
<alb*-id>: '{"enableSslPassthrough":"true", "ingressClass":"nginx"}'
After you create the configmap and update the ALBs, you can verify that the change is successful by checking the deployment of the ALB on your cluster. You need to wait for the pods to restart. Generally, it takes five to ten minutes for an ALB to pick up new changes. After you run the update command and wait five to ten minutes, check the deployment spec for ALB to confirm it is updated by running the following command:
kubectl get deploy -n kube-system <alb-id> -o yaml
Repeat the command for each ALB in your cluster, replacing <alb-id> with the id of each load balancer.
In the output, examine the args section of the containers. You should see something similar to:
containers:
  - args:
      - /nginx-ingress-controller
      - --configmap=kube-system/ibm-k8s-controller-config
      - --annotations-prefix=nginx.ingress.kubernetes.io
      - --default-ssl-certificate=default/community-ingress-ibm-hlfsupport-68e10f583f026529fe7a89da40169ef4-0000
      - --ingress-class=nginx
      - --http-port=80
      - --https-port=443
      - --enable-ssl-passthrough=true
      - --default-backend-service=kube-system/ibm-k8s-controller-default-backend
      - --tcp-services-configmap=kube-system/tcp-services
      - --publish-service=kube-system/public-crbukohphd0ps6erapoulg-alb1
Confirm that --ingress-class=nginx and --enable-ssl-passthrough=true are both present.
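To check every ALB at once, you can loop over the ALB deployments and grep for both flags. This is a sketch; it assumes the ALB deployment names contain "alb", as in the example output above:

```shell
# List the args of each ALB deployment and show whether both flags are set.
for alb in $(kubectl get deploy -n kube-system -o name | grep alb); do
  echo "== $alb =="
  kubectl get "$alb" -n kube-system -o yaml \
    | grep -E 'ingress-class=nginx|enable-ssl-passthrough=true'
done
```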
This result indicates that you have successfully enabled SSL passthrough and that the associated ingress class is named nginx, which the platform requires to be installed on an IBM Cloud Kubernetes Service cluster. Verify that all pods are running before you attempt to install the IBM Support for Hyperledger Fabric.