Expanding the Netezza Performance Server instance and redistributing tables

Deployment options: Netezza Performance Server for Cloud Pak for Data System

Learn how to expand the Netezza Performance Server instance.

Before you begin

Important: Ensure that you are familiar with all of the information that is described in In-field expansion with offline redistribute and Caveats.

Procedure

  1. Ensure that no data slice is degraded.
    nzds -issues

    Make sure that Netezza Performance Server is in a healthy state, and allow any in-progress regens to complete.

  2. Ensure that no node is in the FAILED state.
    nzhw -issues

    If there are failed disks, it is best to resolve them before expansion.

    If the system is undergoing a rebalance or node failover, wait for the system to come online before expanding.
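    The two preflight checks above can be scripted. The following is a minimal sketch, under the assumption (not confirmed by this document) that nzds -issues and nzhw -issues print no output when the system is healthy; the function names are illustrative:

    ```python
    import subprocess

    def output_indicates_issues(output: str) -> bool:
        """True when an issue-listing command printed any non-blank line."""
        return bool(output.strip())

    def preflight_ok(commands=("nzds -issues", "nzhw -issues")) -> bool:
        """Run each check command; return True only if none report issues.

        Assumption: these commands emit nothing when there is nothing to report.
        """
        for cmd in commands:
            result = subprocess.run(cmd.split(), capture_output=True, text=True)
            if output_indicates_issues(result.stdout):
                print(f"{cmd!r} reported issues; resolve them before expanding.")
                return False
        return True
    ```

    Such a wrapper only decides go/no-go; degraded data slices or failed nodes still have to be resolved manually before the expansion.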
  3. Provision the Cloud Pak for Data System hardware nodes by using an automation script.
    1. Identify the master node.
      The automated SPU node configuration script must be run from the master node.
      ap node
      Output example:
      ------------------ --------- ------------- ----------- -----------
      | Node             | State   | Personality | Monitored | Is Master |
      ------------------ --------- ------------- ----------- -----------
      | enclosure1.node1 | ENABLED | CONTROL     | YES       | NO        |
      | enclosure1.node2 | ENABLED | CONTROL     | YES       | YES       |
      ------------------ --------- ------------- ----------- -----------

      In this example, enclosure1.node2 is the master node.

    2. Run the following command on the master node to configure the nodes in enclosures 5 and 6 as SPU nodes.
      /opt/ibm/appliance/platform/xcat/scripts/xcat/automation_script/ips_add_spu_nodes -n e{5..6}n{1..4}

      This setup process takes 45 - 60 minutes, depending on system size.

      When the configuration is complete, the SPU nodes are shut down.

  4. Run table row counts to validate the data.
    nz_db_tables_rowcount
  5. Expand the instance and redistribute tables.
    nzredrexpand

    It is recommended that you run nzredrexpand in a screen session that is started on the host, so that the process survives a lost connection. The tool writes the same progress and status messages to both the console and the log file, nzredrexpand.log.

    During the topology expand phase, you can monitor the sysmgr log or use the nzpush -a status command to monitor the progress.

    For more information about the process, see In-field expansion with offline redistribute.
  6. Starting with version 11.2.1.1, nzredr is modified for more accurate estimates and improved redistribution performance. The updated nzredr command skips tables that are distributed on random and satisfy both of the following criteria:
    • The number of extents on the largest data slice is no more than NZREDR_RANDOM_DIST_THRESH1 (the default is one extent).
    • The total number of extents is no larger than NZREDR_RANDOM_DIST_THRESH2 (the default is unlimited).

    The threshold values must be passed in the environment with the nzredrexpand command.

    Example:
    To skip the redistribution of randomly distributed tables with two or fewer extents on the largest data slice:
    export NZREDR_RANDOM_DIST_THRESH1=2 
    export NZREDR_RANDOM_DIST_THRESH2=768 
    nzredrexpand
    If the expansion needs to be resumed, export the same thresholds again:
    export NZREDR_RANDOM_DIST_THRESH1=2 
    export NZREDR_RANDOM_DIST_THRESH2=768 
    nzredrexpand --resume
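    The skip criteria above can be expressed as a simple predicate. The following sketch is illustrative only; the function name and arguments are not part of the nzredr tooling, but the threshold semantics mirror the documented rule:

    ```python
    import os

    def skip_random_table(max_slice_extents: int, total_extents: int) -> bool:
        """Return True when a random-distributed table may skip redistribution.

        Both documented conditions must hold:
          - extents on the largest data slice <= NZREDR_RANDOM_DIST_THRESH1
            (default: 1 extent)
          - total extents <= NZREDR_RANDOM_DIST_THRESH2
            (default: unset, meaning unlimited)
        """
        thresh1 = int(os.environ.get("NZREDR_RANDOM_DIST_THRESH1", "1"))
        thresh2_raw = os.environ.get("NZREDR_RANDOM_DIST_THRESH2")
        thresh2 = int(thresh2_raw) if thresh2_raw else None  # None = unlimited
        if max_slice_extents > thresh1:
            return False
        if thresh2 is not None and total_extents > thresh2:
            return False
        return True
    ```

    With the example thresholds above (THRESH1=2, THRESH2=768), a table with two extents on its largest slice and 500 extents in total would be skipped, while one with three extents on its largest slice would still be redistributed.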
  7. Run table row counts to validate the data again:
    nz_db_tables_rowcount
    For troubleshooting, see Failures and troubleshooting.
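    Comparing the row counts captured before and after the expansion can be scripted. The sketch below assumes you have parsed the nz_db_tables_rowcount output into dictionaries mapping table name to row count; that shape and the function name are assumptions, not part of the tooling:

    ```python
    def diff_rowcounts(before: dict, after: dict) -> dict:
        """Return tables whose row counts changed, appeared, or disappeared.

        Maps table name -> (count_before, count_after); a missing table
        is reported as None on the side where it is absent.
        """
        mismatches = {}
        for table in before.keys() | after.keys():
            b, a = before.get(table), after.get(table)
            if b != a:
                mismatches[table] = (b, a)
        return mismatches
    ```

    An empty result means the pre- and post-expansion counts agree; any entry warrants investigation before the expanded system is put back into service.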