Enabling query and result sharing with Guardium on version 11.2.1.7 or later

Learn how to enable query and result sharing with Guardium from the Guardium-enabled Docker container on your Netezza Performance Server if you are on version 11.2.1.7 or later.

Procedure

  1. Check the names of all control plane and connector nodes. Run:
    /opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --control
    /opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --connector_node
    
    Note: Connector nodes might not be present on some systems.
    For example:
    [root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --control
    e1n1 e1n2 e1n3
    [root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --connector_node
    e7n1 e8n1
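    If you prefer, you can capture the node names in shell variables for reuse in the redeploy commands later in this procedure. This is a minimal sketch that assumes only the display_nodes.py options shown above; adapt it to your system.
    # Sketch only: store control plane and connector node names for later steps.
    CONTROL_NODES=$(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --control)
    CONNECTOR_NODES=$(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --connector_node)
    echo "Control plane nodes: ${CONTROL_NODES}"
    echo "Connector nodes: ${CONNECTOR_NODES}"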
  2. Inspect the ap node -d output and ssh to the node that acts as the VDB master.
    For example:
    [root@e1n1 ~]# ap node -d
    +------------------+---------+-------------+-----------+-----------+--------+---------------+---------------+
    | Node             |   State | Personality | Monitored | Is Master | Is HUB | Is VDB Master | Is NRS Master |
    +------------------+---------+-------------+-----------+-----------+--------+---------------+---------------+
    | enclosure1.node1 | ENABLED | CONTROL     |       YES |       YES |    YES |            NO |            NO |
    | enclosure1.node2 | ENABLED | CONTROL     |       YES |        NO |     NO |            NO |            NO |
    | enclosure1.node3 | ENABLED | CONTROL     |       YES |        NO |     NO |            NO |            NO |
    | enclosure1.node4 | ENABLED | UNSET       |       YES |        NO |     NO |            NO |            NO |
    | enclosure2.node1 | ENABLED | UNSET       |       YES |        NO |     NO |            NO |            NO |
    | enclosure2.node2 | ENABLED | UNSET       |       YES |        NO |     NO |            NO |            NO |
    | enclosure2.node3 | ENABLED | UNSET       |       YES |        NO |     NO |            NO |            NO |
    | enclosure2.node4 | ENABLED | UNSET       |       YES |        NO |     NO |            NO |            NO |
    | enclosure7.node1 | ENABLED | CN,VDB_HOST |       YES |        NO |     NO |           YES |            NO |
    | enclosure8.node1 | ENABLED | CN,VDB_HOST |       YES |        NO |     NO |            NO |            NO |
    +------------------+---------+-------------+-----------+-----------+--------+---------------+---------------+
    
    [root@e1n1 ~]# ssh e7n1
    [root@e7n1 ~]#
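    If you want to locate the VDB master from the command line instead of reading the table, a small filter over the same ap node -d output can help. This is a sketch only; the field positions are taken from the example table above, and the mapping from the enclosureX.nodeY name to the eXnY host name in the prompt is an assumption based on that example.
    # Sketch only: print the node whose "Is VDB Master" column is YES.
    ap node -d | awk -F'|' '$8 ~ /YES/ {gsub(/ /, "", $2); print $2}'
    # For the example above, this prints enclosure7.node1, which corresponds to host e7n1.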
  3. Edit the guardium.env file:
    1. Open the file.
      vi /opt/ibm/appliance/storage/ips/ipshost1/guardium.env
    2. Set the following variables:
      STAP_CONFIG_TAP_PRIVATE_TAP_IP=127.0.0.1
      STAP_CONFIG_TAP_TAP_IP=NPS host name (for example, example.customer.com)
      STAP_CONFIG_TAP_FORCE_SERVER_IP=1
      GUARDIUM_INFO=Guardium collector IP address
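      For reference, a completed guardium.env might look like the following. The host name and collector address are illustrative placeholders only; substitute your own NPS host name and your Guardium collector IP address.
      STAP_CONFIG_TAP_PRIVATE_TAP_IP=127.0.0.1
      STAP_CONFIG_TAP_TAP_IP=example.customer.com
      STAP_CONFIG_TAP_FORCE_SERVER_IP=1
      GUARDIUM_INFO=192.0.2.10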
  4. Redeploy the container to the other control plane nodes and to the connector nodes, if connector nodes are present on the system.
    /opt/ibm/appliance/storage/ips/ips1_deployment/v11.2.1.X/nzdeploy-remote -n control_plane_node -n control_plane_node -n connector_node
    Example:
    [root@e7n1 ~]# /opt/ibm/appliance/storage/ips/ips1_deployment/v11.2.1.7/nzdeploy-remote -n e1n1 -n e1n2 -n e1n3 -n e8n1
  5. Redeploy the container on the node where it is active to load the guardium.env variables.
    /opt/ibm/appliance/storage/ips/ips1_deployment/v11.2.1.X/nzdeploy-remote -n control_plane_node_with_active_nps_host_container
    Example:
    [root@e7n1 ~]# /opt/ibm/appliance/storage/ips/ips1_deployment/v11.2.1.7/nzdeploy-remote -n e7n1
  6. Edit the postgresql.conf file to enable query and result sharing with Guardium.
    1. Add the session variable.
      enable_guardium_share_info = yes
    2. Add the libguard_netezza_exit_64.so Guardium library path.
      guardium_exit_lib='PATH TO libguard_netezza_exit_64.so'
      Example:
      guardium_exit_lib='/usr/lib64/libguard_netezza_exit_64.so'
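    To confirm that both settings are in place after editing, a quick check such as the following can help. The path is a placeholder; use the location of your postgresql.conf file.
    # Sketch only: verify the Guardium settings (substitute your postgresql.conf path).
    grep -E 'enable_guardium_share_info|guardium_exit_lib' /path/to/postgresql.conf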
  7. Restart the ipshost1 container.
    1. docker stop ipshost1
    2. docker start ipshost1
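    After the restart, you can confirm that the container is running again. The docker ps command and its --filter option are standard; the exact status text can vary.
    # Confirm that the ipshost1 container is up after the restart.
    docker ps --filter name=ipshost1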