Data Cataloging installation with alternative VLAN

Install and configure Data Cataloging to use an additional VLAN that is connected through the IBM Storage Fusion upstream links.

For security reasons, the target VLAN network is not available to the IBM Storage Fusion cluster through the default IBM Storage Fusion public network.

There are several approaches available to access alternate VLANs that do not have routable access through the default IBM Storage Fusion network.

The remainder of this document covers the use of OpenShift® NetworkAttachmentDefinitions to provide Data Cataloging with access to the alternative VLAN. It explains the setup and implementation of the NetworkAttachmentDefinitions and the required modifications to the Data Cataloging definitions.

IBM Storage Fusion VLAN Configuration

  1. Add the VLAN. For the procedure, see Adding VLANs.
  2. Add the VLAN to the link.
  3. Check the switches to ensure that the VLAN is available on bond250 and the ports.
  4. Check the compute node to ensure that the VLAN is added, as shown in the sketch after this list.
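    A minimal sketch for step 4, run with the oc CLI; the node name and the bridge name br4001 (VLAN 4001) are illustrative:
    oc get nodes
    # open a debug shell on the compute node and list its interfaces
    oc debug node/compute-1.example.com -- chroot /host ip -d link show | grep -i 'vlan\|br4001'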

Data Cataloging Configuration

  1. Log in to the Red Hat® OpenShift Container Platform web console.
  2. Go to Networking > NetworkAttachmentDefinitions.
  3. Click the Project drop-down and select the ibm-data-cataloging namespace from the list.
  4. Click the NAD that you want and check the Container and VM NAD details. A CLI sketch for listing and creating these NADs follows the examples.
    For example:
    For container access
    Note: Adding extra VLANs to the IBM Storage Fusion switch link automatically creates an additional bridge interface for each VLAN on the worker nodes' bond0 interface.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br4001
      name: br4001-c
      namespace: ibm-data-cataloging
    spec:
      config: >-
        {"cniVersion": "0.3.1", "name": "br4001", "type": "macvlan", "master": "br4001", "mode": "bridge", "ipam": { "type": "static", "routes": [ {"dst": "10.140.20.0/24", "gw": "10.100.100.1"} ] }}
    Item   Value          Description
    Name   br4001-c       Network Attachment Definition
    dst    10.140.20.0    Destination subnet
    gw     10.100.100.1   Router
    Name   br4001         Bridge 4001 (auto generated)
    For VM access
    Note: After the NAD is attached to a VM, additional configuration on the VM interface is required, including the IP address and routing information.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: br4001
      namespace: ibm-data-cataloging
    spec:
      nodeSelector:
        vmtest: "true"
      config: >-
        {"name":"br4001","type":"cnv-bridge","cniVersion":"0.3.1","bridge":"br4001","macspoofchk":true,"ipam":{}}
    Item     Value    Description
    Name     br4001   Network Attachment Definition
    Bridge   br4001   Bridge interface (auto generated)
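    The NetworkAttachmentDefinitions can also be listed, and if needed created, from the CLI. A minimal sketch, assuming the definitions above are saved in local YAML files (the file names are illustrative):
    # list existing definitions in the Data Cataloging namespace
    oc get network-attachment-definitions -n ibm-data-cataloging
    # create or update the container and VM definitions from files
    oc apply -f br4001-c.yaml -n ibm-data-cataloging
    oc apply -f br4001.yaml -n ibm-data-cataloging
    # inspect one definition in full
    oc get network-attachment-definitions br4001-c -n ibm-data-cataloging -o yaml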
Container updates
Important: The container updates might change based on your connection type, such as NFS, S3, SMB, or Scale. The following example shows the Scale connection type.
The following Data Cataloging containers require updates for Scale scanning through the secondary VLAN:
  • isd-connmgr-main
  • isd-consumer-scale-le
  • isd-consumer-scale-scan

Assign a unique IP address for each container pod set.

For example:
Container                 IP address      Resource
isd-connmgr-main          10.100.100.11   Statefulset
isd-consumer-scale-le     10.100.100.12   Deployment
isd-consumer-scale-scan   10.100.100.13   Deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: isd-connmgr-main
....
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: |-
          [
            {
              "name": "br4001-c",
              "interface": "br4001",
              "namespace": "ibm-data-cataloging",
              "ips": ["10.100.100.11/24"]
            }
          ]
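You can also add the annotation from the CLI instead of editing the full manifest. A minimal sketch using a merge patch; it assumes the br4001-c definition and the IP addresses from the preceding table:
oc -n ibm-data-cataloging patch statefulset isd-connmgr-main --type merge -p '
spec:
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: |-
          [{"name": "br4001-c", "interface": "br4001", "namespace": "ibm-data-cataloging", "ips": ["10.100.100.11/24"]}]
'
# repeat the same patch for isd-consumer-scale-le (10.100.100.12) and
# isd-consumer-scale-scan (10.100.100.13), using "oc patch deployment" and their own IP addresses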
Activate changes
  1. Do a graceful shutdown of the Data Cataloging service. For the procedure, see Graceful shutdown.
  2. Edit the StatefulSet isd-connmgr-main. You can append the new annotation to the current annotations.
  3. Edit the Deployments for the consumer type. Which Deployments you edit depends on the connection type.

    For example, for the Scale connection type, edit the isd-consumer-scale-scan Deployment.

  4. Return the Data Cataloging service to the running state. For the procedure, see Returning Data Cataloging to a running state.
Example Scaling Commands:
  • Scale down Data Cataloging:
    oc get deployments | grep consumer
    oc get deployments | grep consumer | awk '{ print $1 }' | xargs -L 1 echo oc scale --replicas=0 deployment | bash
    oc get deployments | grep producer
    oc get deployments | grep producer | awk '{ print $1 }' | xargs -L 1 echo oc scale --replicas=0 deployment | bash
    oc get deployments | grep isd-producer-scale-le
    oc get deployments | grep isd-producer-scale-le | awk '{ print $1 }' | xargs -L 1 echo oc scale --replicas=0 deployment | bash
    oc get statefulset | grep connmgr
    oc scale --replicas=0 statefulset.apps/isd-connmgr-main
    oc get deployments | grep db2whrest
    oc scale --replicas=0 deployment db2whrest
  • Scale up Data Cataloging:
    oc scale --replicas=1 deployment isd-db2whrest
    oc get deployments | grep db2whrest
    isd-db2whrest                    1/1     1            1           6d20h
    oc get deployments | grep consumer
    oc get deployments | grep consumer | awk '{ print $1 }' | xargs -L 1 echo oc scale --replicas=10 deployment | bash
    oc get deployments | grep producer
    oc get deployments | grep producer | awk '{ print $1 }' | xargs -L 1 echo oc scale --replicas=10 deployment | bash
    oc get statefulset | grep connmgr
    oc scale --replicas=1 statefulset.apps/isd-connmgr-main
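After the workloads are scaled back up, you can confirm that the secondary network was attached before you run the VLAN tests. A minimal sketch; the pod name is illustrative and the network-status annotation is written by Multus:
oc -n ibm-data-cataloging get pods | grep connmgr
# the annotation lists each attached network, including the br4001 interface
oc -n ibm-data-cataloging get pod isd-connmgr-main-0 -o yaml | grep -A 15 network-status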

VLAN Testing

Use the following commands to verify that the VLAN is present in the containers.
Note: Addresses in the output are hexadecimal in reversed byte order.
sh-4.4# cat /proc/net/route 
Iface  Destination     Gateway   Flags  RefCnt  Use  Metric  Mask  MTU     Window  IRTT 

br4001 0064640A        00000000  0001     0      0     0   00FFFFFF  0      0      0    
br4001 0064640A        0164640A  0003     0      0     0   00FFFFFF  0      0      0          
sh-4.4# 

This is the expected output for the 10.100.100.0 network through the 10.100.100.1 router: 0064640A read in reversed byte order is 0A.64.64.00, that is 10.100.100.0, and the gateway 0164640A is 10.100.100.1.
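The same check can be run without opening a shell in the container. A minimal sketch with an illustrative pod name:
oc -n ibm-data-cataloging exec isd-connmgr-main-0 -- cat /proc/net/route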

A utility container, such as one based on an alpine or centos image with more networking commands, can be used to confirm that the network endpoints are reachable, as shown in the following sketch.
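A minimal sketch of such a utility pod attached to the br4001-c definition; the pod name, image, test IP address, and ping target are illustrative, and ICMP might require additional capabilities depending on the cluster security context constraints:
cat <<'EOF' | oc -n ibm-data-cataloging apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vlan-nettest
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{"name": "br4001-c", "interface": "br4001", "ips": ["10.100.100.20/24"]}]'
spec:
  containers:
  - name: nettest
    image: alpine:3
    command: ["sleep", "3600"]
EOF
# check the secondary interface and try to reach an endpoint on the VLAN
oc -n ibm-data-cataloging exec vlan-nettest -- ip addr
oc -n ibm-data-cataloging exec vlan-nettest -- ping -c 3 10.140.20.10
oc -n ibm-data-cataloging delete pod vlan-nettest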

Use /proc to determine the assigned IP address:
sh-4.4# cat /proc/net/fib_trie | grep "|--" | egrep -v "0.0.0.0" 
                    |-- 10.100.100.0 
                    |-- 10.100.100.11 
... 
            |-- 127.0.0.0 
           |-- 127.0.0.1 
                    |-- 10.100.100.0 
                    |-- 10.100.100.11 
... 
            |-- 127.0.0.0 
           |-- 127.0.0.1 
        |-- 127.255.255.255
Testing pods:
From the virtual machine or target cluster, ping the container or pod IP address through the VLAN, as shown in the following sketch.
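A minimal sketch of the VM-side test, assuming the br4001 definition is attached to the VM as a second interface (eth1 here) and that the pod was assigned 10.100.100.11 as in the earlier table; the interface name and the VM address are illustrative:
# inside the virtual machine, configure the interface backed by br4001
ip addr add 10.100.100.50/24 dev eth1
ip link set eth1 up
# ping the Data Cataloging pod address that was assigned earlier
ping -c 3 10.100.100.11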

After you complete all the steps, create the Data Cataloging data connection. For the procedure, see Creating an IBM Storage Scale data source connection.