Applying Fix Pack 3

Use this information if you have IBM® Telco Network Cloud Manager - Performance, version 1.4.6 already installed.

Apply the Telco Network Cloud Manager - Performance Fix Pack 3 on Red Hat® OpenShift® by using the Operator Lifecycle Manager (OLM) user interface and CASE (Container Application Software for Enterprises).

Before you begin

  • Download the command-line tool ibm-pak v1.16.1 or higher.
    Download ibm-pak, the IBM Catalog Management plug-in for IBM Cloud Paks, from https://github.com/IBM/ibm-pak. The ibm-pak plug-in streamlines the deployment of IBM Cloud Paks in a disconnected environment, a task that was previously done by using cloudctl.
    Note: The cloudctl case command is deprecated and replaced with the ibm-pak plug-in. Support for the cloudctl case command will be removed in a future release.
  • Run the following commands to install the ibm-pak CLI in the environment.
    wget --no-check-certificate https://github.com/IBM/ibm-pak/releases/download/v1.16.1/oc-ibm_pak-linux-amd64.tar.gz
    tar -xvf oc-ibm_pak-linux-amd64.tar.gz 
    sudo cp oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
  • Run the following command to verify that the installation is successful.
    oc ibm-pak --help

Uninstall the 1.4.6 Operator

  1. Log in to the OpenShift Container Platform web console.
  2. If you previously renamed any annotation manager values, rename them back to the original name, tncp-operator.
  3. Stop all processors that are running in the NiFi user interface.
  4. Go to Operators > OperatorHub > openshift-marketplace and search for tncp.
  5. Click the existing installed Telco Network Cloud Manager - Performance 1.4.6 Operator, and then click Uninstall.
    Note: Do not delete the instance. All the Pods are still running.
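If you prefer to remove the 1.4.6 Operator from the command line instead of the web console, the following is a minimal sketch. The namespace and the Subscription and ClusterServiceVersion names are assumptions; list them first and substitute the names that exist in your environment. Do not delete the Telco Network Cloud Manager - Performance instance or its namespace.
  # List the Subscription and ClusterServiceVersion (CSV) for the Operator.
  # <operator-namespace> is a placeholder for the namespace where the 1.4.6 Operator is installed.
  oc get subscription,csv -n <operator-namespace>

  # Delete the Subscription and the CSV. This removes only the Operator;
  # the Telco Network Cloud Manager - Performance Pods keep running.
  oc delete subscription <tncp-subscription-name> -n <operator-namespace>
  oc delete csv <tncp-csv-name> -n <operator-namespace>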

Installation modes

Starting from Telco Network Cloud Manager - Performance 1.4.6 Fix Pack 3, three installation modes are supported. See the following descriptions to understand how these installation modes work.

OwnNamespace
This is the default behavior in the Kubernetes environment.
  • The Operator is installed in any user-defined namespace. For example, tncp.
  • All other Telco Network Cloud Manager - Performance StatefulSets and Services are also installed in the same namespace. For example, the tncp namespace.
Note:

Select the A specific namespace on the cluster option in the Installation mode section.

Select the tncp namespace in the Installed Namespace section.

SingleNamespace
  • The Operator is installed in any user-defined namespace. For example, tncp-operators.
  • All other Telco Network Cloud Manager - Performance StatefulSets and Services are in a different user-defined namespace. For example, the tncp namespace.
Note:

Select the A specific namespace on the cluster option in the Installation mode section.

Select the tncp-operators namespace in the Installed Namespace section.

AllNamespaces
  • The Operator must be installed in the openshift-operators namespace.
  • All other Telco Network Cloud Manager - Performance StatefulSets and Services are in a different user-defined namespace. For example, the tncp namespace.
Note:

Select the All namespaces on the cluster option in the Installation mode section.

Select the openshift-operators namespace in the Installed Namespace section.

MultiNamespace
Currently not supported.
Note: In all these installation modes, a single instance of Telco Network Cloud Manager - Performance is supported. Multiple instances of Telco Network Cloud Manager - Performance are not supported by using the MultiNamespace mode.
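The installation mode is expressed through the OperatorGroup that OLM creates for the Operator. For reference only, the following sketch shows what an OperatorGroup for the SingleNamespace mode can look like; the resource name is a placeholder, and the web console normally creates this object for you (see Update the tncp-operators-<xxxx> YAML file later in this topic).
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: tncp-operators-<xxxx>    # placeholder; OLM generates the actual name
    namespace: tncp-operators      # namespace where the Operator runs
  spec:
    targetNamespaces:
      - tncp                       # namespace that the Operator watches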

Applying Fix Pack 3

Create the entitlement API key and secret
Make sure that you completed the steps in Creating the entitlement API key and secret.
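If you created the entitlement API key but not the pull secret yet, the following is a minimal sketch of the usual pattern. The secret name, registry server, and namespace are assumptions for illustration only; use the exact values from Creating the entitlement API key and secret.
  # Secret name, server, and namespace are assumptions; adjust them to your environment.
  oc create secret docker-registry ibm-entitlement-key \
      --docker-username=cp \
      --docker-password=<your-entitlement-api-key> \
      --docker-server=cp.icr.io \
      --namespace=tncp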
Add the catalog source for the IBM Operator Catalog
Use one of the following methods to add the IBM Operator Catalog source.
Use the OpenShift Container Platform console. Follow these steps.
  1. Click + at the upper-right area of the page to open the Import YAML page.
  2. Copy the content of the catalog_source.yaml file that is available in <DIST_DIR>/ibm-tncp-case-1.4.6-fp003/ibm-tncp-case/inventory/operator/files/olm.
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: ibm-tncp-catalog
      namespace: openshift-marketplace
    spec:
      displayName: ibm-tncp-catalog
      publisher: IBM
      sourceType: grpc
      image: docker.io/persistentsystems/tncp-catalog:1.4.6-fp003-18-1b4dfd45@sha256:ccd51d9c474eaa95ced0f135cb44cdead467694541d66a7cb9461506e8ee5000
      updateStrategy:
        registryPoll:
          interval: 45m
  3. Click Create.
  4. Verify that the IBM Operator Catalog source is added to your cluster.
    1. From the navigation menu, click Operators > OperatorHub.
    2. From the Project list, select openshift-marketplace.
    3. Verify that the catalog Pod is running in the openshift-marketplace project.

      Go to Operators > OperatorHub > openshift-operators and search for tncp.
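
      You can also confirm from the CLI that the catalog source was created. A minimal check, assuming the catalog source name from the YAML in step 2:
      oc get catalogsource ibm-tncp-catalog -n openshift-marketplace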

Or

Use the OpenShift Container Platform CLI. Follow these steps.
  1. Install the catalog in the openshift-marketplace project by using the following command:
    oc ibm-pak launch \
        --case ibm-tncp-case-1.4.6-fp003.tgz \
        --namespace openshift-marketplace \
        --inventory operator \
        --action install-catalog \
        --tolerance 1
    You can see the following output.
    oc ibm-pak launch \
      --case ibm-tncp-case-1.4.6-fp003.tgz \
      --namespace openshift-marketplace \
      --inventory operator \
      --action install-catalog \
      --tolerance 1
    Welcome to the CASE launcher
    Attempting to retrieve and extract the CASE from the specified location
    [✓] CASE has been retrieved and extracted
    Attempting to validate the CASE
    [✓] CASE has been successfully validated
    Attempting to locate the launch inventory item, script, and action in the specified CASE
    [✓] Found the specified launch inventory item, action, and script for the CASE
    Attempting to check the cluster and machine for required prerequisites for launching the item
    Checking for required prereqs...
     
    Prerequisite                             Result
    Kubernetes version is 1.14.6 or greater  true
    Cluster has at least one amd64 node      true
     
    Required prereqs result: OK
    Checking user permissions...
    Kubernetes RBAC Prerequisite                            Verbs                               Result  Reason
    rbac.authorization.k8s.io.clusterroles/*               get,list,watch,create,patch,update true
    apiextensions.k8s.io.customresourcedefinitions/v1beta1  get,list,watch,create,patch,update  true
    security.openshift.io.securitycontextconstraints/       get,list,watch,create,patch,update  true
    User permissions result: OK
    [✓] Cluster and Client Prerequisites have been met for the CASE
    Running the CASE operator launch script with the following action context: installCatalog
    Executing inventory item operator, action installCatalog : launch.sh
    -------------Installing Catalog-------------
    Error from server (AlreadyExists): namespaces "openshift-marketplace" already exists
    Context "openshift-marketplace/api-tncpnoicluster5-cp-fyre-ibm-com:6443/kube:admin" modified.
    Already on project "openshift-marketplace" on server "https://api.tncpnoicluster5.cp.fyre.ibm.com:6443".
    catalogsource.operators.coreos.com/ibm-tncp-catalog created
    [✓] CASE launch script completed successfully
  2. Check whether the IBM Operator Catalog exists on your cluster.
    oc get catalogsource -n openshift-marketplace
    NAME                   DISPLAY                TYPE   PUBLISHER     AGE
    ibm-tncp-catalog           ibm-tncp-catalog          grpc   IBM Content   7m56s
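    You can also verify that the catalog Pod itself is running. This is a minimal check; it assumes that the catalog source is named ibm-tncp-catalog, as in the earlier YAML, because the catalog Pod name begins with the catalog source name.
    oc get pods -n openshift-marketplace | grep ibm-tncp-catalog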
    
Create a namespace or project
  1. Run the following command to create the namespace.
    oc create namespace tncp
    namespace/tncp created
  2. Create another namespace or project.
    Important: This step is needed only if your installation mode is SingleNamespace.

    For example, tncp-operators.

    oc create namespace tncp-operators
    namespace/tncp-operators created
    For more information, see Creating a custom namespace.
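  To confirm that the namespaces exist before you continue, you can run a quick check; tncp-operators applies only if your installation mode is SingleNamespace.
    oc get namespace tncp tncp-operators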
Install the Fix Pack
  1. Verify that the catalog Pod is running in the openshift-marketplace project.

    Go to Operators > OperatorHub > openshift-operators and search for tncp.

    You can see a tile for the Fix Pack 3 catalog.
  2. Click the FP003 catalog tile and click Install.
  3. From the Install Operator page, provide the following details:
    • Update Channel
      The supported update channels are shown, with 1.4 selected by default. An Operator subscription is created automatically to keep the Operator up to date when new versions are delivered to the channel.
      Note: Make sure to select the 1.4 channel.
    • Installation Mode

      Choose whether to install the Operator into all namespaces in the cluster or into a specific namespace. By default, All namespaces on the cluster is selected. The Operator is installed in the openshift-operators namespace. If you install the Operator in the openshift-operators project, it is accessible by all other projects or namespaces. The approval strategy is Automatic.

      Select A specific namespace on the cluster if you want to install the Operator in a specific namespace. With this option, you can change the namespace. The approval strategy must be Manual.

    • Installed Namespace

      If you select the option All namespaces on the cluster, the openshift-operators namespace is selected by default.

      If you select the option A specific namespace on the cluster, select the namespace where you want to install the Operator. For example, select the tncp namespace for the OwnNamespace mode, or the tncp-operators namespace for the SingleNamespace mode.

    • Approval Strategy

      Click Automatic to indicate that the installation must proceed with no additional approval. The running instance of your Operator is automatically upgraded whenever new versions are delivered to the channel.

      Click Manual if you want to review a generated Install Plan for the Operator and then manually approve the installation. Review the Install Plan for each new Operator version that is delivered to the channel, and then manually approve an upgrade.

      Note: If needed, you can change the approval strategy later.
  4. Click Install on the Install Operator page.
  5. If you previously updated the annotation manager value for any specific configuration changes, configure the parameters again and rename the manager value.
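  The console steps above result in a Subscription object that OLM uses to install and update the Operator. For reference only, the following is a sketch of what an equivalent Subscription can look like for the SingleNamespace mode. The package name in spec.name is an assumption; the web console creates this object for you when you click Install.
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ibm-tncp                  # assumption: subscription named after the package
      namespace: tncp-operators       # the namespace that you select in Installed Namespace
    spec:
      channel: "1.4"                  # update channel selected in the previous step
      name: ibm-tncp                  # assumption: package name that the catalog publishes
      source: ibm-tncp-catalog        # catalog source that you created earlier
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual     # Manual when you install into a specific namespace
  To find the exact package name, you can run oc get packagemanifests -n openshift-marketplace | grep -i tncp.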
Create the Operator instance
Follow the steps in Creating the Telco Network Cloud Manager - Performance Operator instance.
Update the tncp-operators-<xxxx> YAML file
This step is needed only if the installation mode is SingleNamespace.
  1. In a new tab, open the OpenShift Container Platform web console.
  2. Go to Home > Search > OperatorGroup.

    Make sure that you are in tncp-operators project.

  3. Click the tncp-operators-<xxxx> OperatorGroup, and then click the YAML tab.
  4. Change the targetNamespaces value in the spec section to tncp.
    spec:
      targetNamespaces:
        - tncp
  5. Click Save and reload the YAML file.
  6. Go to the previous tab where the approval strategy window is open. Click Approve.
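  If you prefer to update the OperatorGroup from the CLI instead of the web console, the following is a minimal sketch; replace <xxxx> with the generated suffix that the first command returns.
    # Find the generated OperatorGroup name in the tncp-operators project.
    oc get operatorgroup -n tncp-operators

    # Set targetNamespaces to tncp.
    oc patch operatorgroup tncp-operators-<xxxx> -n tncp-operators \
        --type merge -p '{"spec":{"targetNamespaces":["tncp"]}}'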
Verify the installation
Verify that the operator Pod is created and running in the namespace where you created it. Use any of the following tasks to check.
  • Go to the tncp project from Operators > Installed Operators > tncp.

    Go to Workloads > Pods and verify that all Telco Network Cloud Manager - Performance Pods are re-created with the most recent images. (A CLI sketch for checking the Pod images follows after this list.)

    Or

  • Run the following command to check that the Operator Pod is running.
    oc get pods -n tncp-operators | grep -i tncp
    1/1     Running   0          17d
  • Verify that the Telco Network Cloud Manager - Performance services are available.
    oc get services -o wide -n tncp
    NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE   SELECTOR
    analytics-batch                              ClusterIP   172.30.226.54    <none>        30028/TCP,30029/TCP             41h   service=analytics-batch
    analytics-batch-latedata                     ClusterIP   172.30.105.223   <none>        30028/TCP,30029/TCP             41h   service=analytics-batch-latedata
    analytics-batch-latedata-baeutrancell15mnt   ClusterIP   172.30.112.66    <none>        30028/TCP,30029/TCP             22h   service=analytics-batch-latedata-baeutrancell15mnt
    analytics-stream                             ClusterIP   172.30.206.222   <none>        30030/TCP,30031/TCP             41h   service=analytics-stream
    analytics-stream-direct                      ClusterIP   None             <none>        30062/TCP,30063/TCP             41h   service=analytics-stream
    app                                          ClusterIP   172.30.47.249    <none>        30037/TCP                       41h   service=app
    cassandra                                    ClusterIP   None             <none>        9042/TCP,7000/TCP               41h   service=cassandra
    config-synchronizer                          ClusterIP   172.30.16.112    <none>        30052/TCP,30053/TCP             41h   service=config-synchronizer
    dashboard                                    ClusterIP   172.30.218.92    <none>        31080/TCP,31443/TCP             41h   service=dashboard
    data-synchronizer                            ClusterIP   172.30.191.221   <none>        30054/TCP,30055/TCP             41h   service=data-synchronizer
    diamond-db                                   ClusterIP   172.30.228.133   <none>        30010/TCP,30008/TCP             41h   service=diamond-db
    diamond-db-cluster                           ClusterIP   None             <none>        7110/TCP                        41h   service=diamond-db
    diamond-db-cluster-export                    ClusterIP   None             <none>        8120/TCP                        41h   service=diamond-db-export
    diamond-db-cluster-read                      ClusterIP   None             <none>        8110/TCP                        41h   service=diamond-db-read
    diamond-db-export                            ClusterIP   172.30.232.207   <none>        30120/TCP,30118/TCP             41h   service=diamond-db-export
    diamond-db-read                              ClusterIP   172.30.24.87     <none>        30110/TCP,30108/TCP             41h   service=diamond-db-read
    dns-collector                                ClusterIP   172.30.2.121     <none>        30042/TCP,30043/TCP             41h   service=dns-collector
    file-collector                               ClusterIP   172.30.9.83      <none>        30024/TCP                       41h   service=file-collector
    flow-analytics                               ClusterIP   172.30.87.175    <none>        30044/TCP,30045/TCP             41h   service=flow-analytics
    flow-collector                               ClusterIP   172.30.15.175    <none>        30040/TCP,30041/TCP             41h   service=flow-collector
    flow-collector-external                      ClusterIP   172.30.187.51    <none>        4381/TCP,4379/UDP               41h   service=flow-collector
    inventory                                    ClusterIP   172.30.238.194   <none>        30016/TCP,30017/TCP             41h   service=inventory
    kafka                                        ClusterIP   172.30.115.197   <none>        9092/TCP                        41h   service=kafka
    nifi                                         ClusterIP   172.30.49.247    <none>        30026/TCP                       41h   service=nifi
    opentelemetry                                ClusterIP   172.30.161.190   <none>        30050/TCP                       41h   <none>
    pack-service                                 ClusterIP   172.30.69.247    <none>        30048/TCP,30049/TCP             41h   service=pack-service
    ping-collector                               ClusterIP   172.30.222.76    <none>        30050/TCP,30051/TCP             41h   service=ping-collector
    postgres                                     ClusterIP   172.30.240.127   <none>        5432/TCP,31415/TCP              41h   service=postgres
    postgres-th                                  ClusterIP   172.30.149.195   <none>        5433/TCP                        41h   service=postgres-th
    security                                     ClusterIP   172.30.99.29     <none>        389/TCP                         41h   service=security
    snmp-collector                               ClusterIP   172.30.72.245    <none>        30034/TCP,30035/TCP             41h   service=snmp-collector
    snmp-discovery                               ClusterIP   172.30.19.191    <none>        30018/TCP,30019/TCP             41h   service=snmp-discovery
    solr                                         NodePort    172.30.6.212     <none>        8993:31704/TCP,8983:32316/TCP   41h   service=inventory
    threshold                                    ClusterIP   172.30.213.10    <none>        30032/TCP,30033/TCP             41h   service=threshold
    timeseries                                   ClusterIP   172.30.202.170   <none>        30014/TCP,30015/TCP             41h   service=timeseries
    ui                                           ClusterIP   172.30.46.56     <none>        30021/TCP                       41h   service=ui
    zookeeper                                    ClusterIP   172.30.110.208   <none>        2181/TCP,2888/TCP,3888/TCP      41h   service=zookeeper
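  To check that the Pods were re-created with the most recent images after the fix pack was applied, you can also list the image that each Pod runs. This is a sketch that assumes the instance runs in the tncp namespace.
    # Print each Pod name with the container images that it runs.
    oc get pods -n tncp \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'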

Install Technology Packs

After Telco Network Cloud Manager - Performance is installed, follow the steps in Installing Technology Packs to install the Technology Packs.