Connector node

The Cloud Pak for Data System connector node is purpose-built to accelerate and improve the performance of your Netezza Performance Server engine by enabling the use of additional storage options such as SAN, NAS, and other lower-cost storage.

The connector node is a Lenovo-based 2U server platform with integrated storage, processing, and I/O. The chassis comes as a pretested building block loaded with software, and it is installed and configured by IBM.

The chassis can be installed in a Cloud Pak for Data System rack or in a customer-supplied rack that meets the guidelines in the Customer Install Guide, Rack and System document, which is included on the Safety and Regulatory CD shipped with your Cloud Pak for Data System components.

Because the base Cloud Pak for Data System and Netezza Performance Server nodes do not have any PCIe slots available for Fibre Channel HBAs, the connector node provides the ability to use Fibre Channel (FC) block storage for high-speed backup and restore, load, and unload for Netezza Performance Server.

Benefits of the solution

  • Higher-speed backup and restore, load, and unload performance for Netezza Performance Server (NPS)
  • Base Cloud Pak for Data System and NPS SPUs do not have any PCIe slots available for FC HBAs; connector nodes add that option.
  • Faster backup can help with large systems where more data has to be backed up
  • Connector nodes provide high-performance backup and restore capability with 32 CPU cores, 192 GB of DDR4 memory, four 32 Gb FC ports, eight direct-attach NVMe drives, and two bonded 100 Gb Ethernet ports to the internal fabric switches.
  • When a connector node is present on a system, the NPS host container runs on it, providing enhanced compute and memory resources.
  • The connector node has a 2 x 100 GbE uplink to the fabric switch, which means faster data load and unload over Ethernet with a lower CPU burden on the host.
  • Backups can be done over FC (LAN-free) at a rate of 10.65 TB/hour (compressed) using zstd for a single NPS instance. This rate was measured on a Base+2 system with a very fast storage landing zone. Customer results might be better on larger systems, but worse if the customer storage is slower.

    nzbackup with dropdata (the host writes to /dev/null instead of to remote storage over LAN or FC) speeds improve from 3.7 TB/hour to over 10 TB/hour, removing an existing bottleneck internal to the system itself that had been hit on some systems with fast enough storage and house networks. This can help even with LAN-based backups for customers without FC storage connections. For a rough sense of what these rates mean for backup windows, see the sketch after this list.

  • Two connector nodes are recommended in production. With two nodes, a failover of the NPS host from one connector node to the other allows the production cluster to continue accessing existing backups and creating new backups.

    Using only one connector node is possible but is not advised in production. In such a configuration, if the only connector node fails, access to the FC storage for backup and restore is lost until the node is repaired or replaced. The NPS host container fails back over to a control plane node, and the available CPU and memory are reduced to those of a normal NPS installation.
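
As a rough illustration of what these rates mean for a backup window, the following sketch converts the tested throughput figures quoted above (3.7 TB/hour through the earlier internal bottleneck versus 10.65 TB/hour over FC with zstd) into wall-clock estimates. The database sizes are hypothetical examples; actual throughput depends on system size, compression ratio, and the speed of the backup target.

```python
# Back-of-envelope backup-window estimates based on the rates quoted above.
# Database sizes are hypothetical; real throughput depends on system size,
# compression ratio, and the speed of the backup target.

RATE_INTERNAL_BOTTLENECK_TB_PER_HOUR = 3.7   # rate before the connector node
RATE_FC_ZSTD_TB_PER_HOUR = 10.65             # tested Base+2 system, FC, zstd

def backup_hours(db_size_tb: float, rate_tb_per_hour: float) -> float:
    """Estimated wall-clock hours to back up db_size_tb at the given rate."""
    return db_size_tb / rate_tb_per_hour

for size_tb in (10, 50, 100):
    slow = backup_hours(size_tb, RATE_INTERNAL_BOTTLENECK_TB_PER_HOUR)
    fast = backup_hours(size_tb, RATE_FC_ZSTD_TB_PER_HOUR)
    print(f"{size_tb:>3} TB database: {slow:5.1f} h at 3.7 TB/h, "
          f"{fast:4.1f} h at 10.65 TB/h over FC")
```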

Hardware details

Table 1. Connector node hardware specifications
Component             SAN Gateway                                Notes
Server type           SR650 Gen2
CPU                    2 x 16C 2.1 GHz Silver 4216 Processors
Total Memory           192 GB
Memory DIMM            16 GB 2933 MHz DDR4 2R x 8
LOM Slot               2 x 1G RJ-45                              FC: AUKG
PCI Slots              DP 100G NIC                               Front-end
                       2 x DP 32G FC HBA                         Back-end
Internal Drives        8 x 3.84 TB NVMe                          Toshiba CM5 (1 DWPD)
Internal M.2 HBA       Marvell 88SE9230                          Part of M.2 Enablement Kit
Internal M.2 Drives    2 x 480 GB SATA                           Micron 5100 (1.5 DWPD)
Power Supply           2 x 1,100 W                               Fully redundant

Connector node setup in the system

  • A Cloud Pak for Data System of size Base+8 or greater that includes NPS comes with two connector nodes by default. Smaller systems can optionally be configured with one connector node; however, in this setup the ability to back up and restore over FC connections is not highly available.
  • In systems with two connector nodes, only one connector node is active at a time; the other provides high availability.
  • The system provides the ability to switch the connector node personality between UNSET and NPS Host.
  • The NPS host container runs on bare metal on the connector node.
  • FC-based backup and restore, load, and unload for NPS are available through the connector node.

Connector nodes can optionally be used for file-system-based backup, or for third-party LAN-free backup such as IBM Spectrum Protect (TSM). For more information, see https://www.ibm.com/docs/en/netezza?topic=spftsmc-setting-up-tivoli-storage-manager-8110-lan-free.
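
For file-system-based backup, a minimal sketch of driving nzbackup from a script is shown below. It assumes the FC LUN is already formatted and mounted on the connector node at a hypothetical path and that credentials are supplied through the usual Netezza environment variables; the database name and mount point are placeholders, while -db and -dir are standard nzbackup options.

```python
# Minimal sketch: run a file-system-based nzbackup to FC-attached storage
# mounted on the connector node. DATABASE and BACKUP_DIR are hypothetical
# placeholders; -db and -dir are standard nzbackup options. Credentials are
# assumed to come from the NZ_USER / NZ_PASSWORD environment variables.
import subprocess

DATABASE = "SALESDB"            # placeholder database name
BACKUP_DIR = "/mnt/fc_backup"   # hypothetical mount point of an FC LUN

result = subprocess.run(
    ["nzbackup", "-db", DATABASE, "-dir", BACKUP_DIR],
    capture_output=True,
    text=True,
)
print(result.stdout)
result.check_returncode()       # raise CalledProcessError if the backup failed
```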

Important considerations

If using FC, any number of FC ports is supported per connector node, at any speed up to and including 32 GT/s. However, it is recommended to use at least two, and ideally four, FC ports per connector node. With two ports instead of four, backup and restore speeds might be reduced if the two ports' total bandwidth is less than that of the backing storage, as illustrated in the sketch below.
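
The following back-of-envelope sketch compares the nominal aggregate bandwidth of two versus four FC ports against a hypothetical sustained write rate for the backup target. It uses nominal line rates and ignores protocol overhead, so the figures are upper bounds; the storage throughput value is a placeholder.

```python
# Back-of-envelope comparison of aggregate FC bandwidth against the backup
# target. Nominal line rates are used and protocol overhead is ignored, so
# treat the results as upper bounds; STORAGE_GB_PER_S is a placeholder.

FC_PORT_GBIT_PER_S = 32     # nominal rate of one 32 Gb FC port
STORAGE_GB_PER_S = 10.0     # hypothetical sustained write rate of the target

def fc_aggregate_gb_per_s(ports: int) -> float:
    """Nominal aggregate FC bandwidth in GB/s for the given number of ports."""
    return ports * FC_PORT_GBIT_PER_S / 8

for ports in (2, 4):
    fc = fc_aggregate_gb_per_s(ports)
    limiter = "FC ports" if fc < STORAGE_GB_PER_S else "storage"
    print(f"{ports} ports: ~{fc:.0f} GB/s FC vs {STORAGE_GB_PER_S} GB/s "
          f"storage -> limited by {limiter}")
```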

Clients might back up to spinning disk or to tape. These types of media are usually slower than the supported 4 x 32 GT/s. Consider using flash storage, or many HDDs in parallel (RAIDed or not; the connector node can handle either).