Before adding a SAN mount for backup and restore

Deployment options: Netezza Performance Server for Cloud Pak for Data System

Before you run the nzinstall modify command to add a SAN mount for backup and restore, complete these steps.

Before you begin

  1. Customers must have their own FC SAN hardware. That includes:
    • Any storage arrays or JBOD (array with RAID is recommended).
    • Any FC switches (redundant pair with zoning recommended).
    • FC cables between the storage array and switch (or switches), and the switch (or switches) and connector nodes.
  2. Customers must engage their storage subject matter expert or administrator to determine the number and size of LUNs to use, and to record the WWIDs of those LUNs for future reference. A sketch for cross-checking those WWIDs later is shown after this list.
  3. It is recommended that customers add the WWNs of the connector node FC HBAs to their storage device so that those WWNs are permitted to access the LUNs from that device. See the following steps for more information.
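
After the LUNs are presented to the connector nodes and multipath is configured (the last step of this procedure), the recorded WWIDs can be cross-checked from a connector node. This is a minimal sketch, not part of the documented procedure; the device name /dev/sdb is an example only.

  # List the multipath devices and their WWIDs (run on a connector node).
  sudo multipath -ll

  # Alternatively, query the WWID of a single SCSI device, for example /dev/sdb.
  sudo /usr/lib/udev/scsi_id -g -u -d /dev/sdb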

Procedure

  1. Gather the WWPNs from the FC HBAs that came with the connector nodes:
    1. Identify which nodes are connector nodes. The hostname is shown in the first column:
      ap node | grep hba_ext
      Example:
      ap node | grep hba_ext
      
      enclosure5.node1	  ENABLED	WORKER      	      YES	     hba_ext
    2. ssh to the connector node:
      ssh core@e5n1
    3. Gather the WWPNs for the ports.
      cat /sys/class/fc_host/host*/port_name
      Example:
      cat /sys/class/fc_host/host*/port_name
      0x100000109be7fff3
      0x100000109be7fff4
      0x100000109be7ffc0
      0x100000109be7ffc1
    4. Repeat substeps 1 - 3 for all connector nodes on the system.
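      For convenience, you can script the WWPN collection. This is a minimal sketch, not part of the documented procedure; it assumes the short host alias follows the pattern shown in the example above (for example, e5n1 for enclosure5.node1) and that ssh access as core is available from where you run it.

      # Gather the WWPNs from every connector node.
      for node in $(ap node | grep hba_ext | awk '{print $1}'); do
        # Derive the short alias, for example enclosure5.node1 -> e5n1 (assumed naming pattern).
        shortname=$(echo "$node" | sed -E 's/enclosure([0-9]+)\.node([0-9]+)/e\1n\2/')
        echo "== $shortname =="
        ssh "core@$shortname" 'cat /sys/class/fc_host/host*/port_name'
      done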
  2. On the storage device, grant access to the gathered WWPNs so that the connector nodes are permitted to see and use the LUNs.
  3. Ensure that the FC cables are connected from the storage device to the customer-provided SAN switch (or switches), and from the switch (or switches) to the connector nodes.

    Each connector node has two dual-port FC HBA cards, for a total of 4 ports per connector node. Up to 32 Gb FC is supported; if the customer-provided equipment is slower than 32 Gb, the ports auto-negotiate down.

    As few as one port per connector node can be used, but 4 ports are best for overall speed and resiliency. At least two cables, one per FC HBA card, are recommended to provide redundancy. If you use only two, connect one port from each FC card in the connector node to get the most redundancy for that configuration. At least 4 cables are recommended for optimal I/O performance on backup and restore (BnR) and external tables.
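
    You can confirm the expected number of FC ports from a connector node. This is a minimal sketch, not part of the documented procedure; it simply counts the FC host entries that the HBAs expose in sysfs.

      # On a connector node: each dual-port HBA exposes two fc_host entries,
      # so 4 entries are expected per connector node.
      ls /sys/class/fc_host | wc -l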

  4. Connect SAN Fibre Channel cables to slot 2 and slot 4, ports 0 and 1.

    You can choose how many FC links to connect, but connecting more links provides greater bandwidth and redundancy. If you connect only two cables, place one in slot 2 port 0 and one in slot 4 port 0.

    Connect the same SAN device to all connector nodes on Cloud Pak for Data System.
  5. Configure the zoning of the customer SAN switch (or switches) for the selected ports.
  6. Ensure that the FC links on the SAN switch side meet the following requirements:
    • They are all enabled.
    • They have an active link.
    • They report the expected speed.
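    You can cross-check the link state and negotiated speed from the connector node side. This is a minimal sketch, not part of the documented procedure; it reads the standard sysfs attributes that the FC HBA driver exposes.

      # On a connector node: cabled and enabled ports should report Online.
      cat /sys/class/fc_host/host*/port_state

      # Negotiated link speed per port, for example 32 Gbit.
      cat /sys/class/fc_host/host*/speed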
  7. Configure multipath as described in Configuring multipath.conf for Netezza Performance Server with connector nodes.

What to do next

Configuring multipath.conf for Netezza Performance Server with connector nodes