Multi-node setup
A multi-node installation requires between three and five dedicated Secure Service Container (SSC) LPARs, which must be organized in an LPAR group (cluster). A multi-node installation is recommended if your IBM Z® has 30 or more Integrated Facilities for Linux® (IFLs). Tests have shown that distributing large workloads across different LPARs leads to better performance. It is also a good idea to start with a multi-node setup if you expect your workload to grow considerably.
The head node is the controlling node. Externally, it communicates with networks outside of the cluster: it is paired with one or more Db2® subsystems, connects to the management network defined in the HMC activation profile, and optionally connects to GDPS® servers. Internally, the head node communicates with the data nodes.
The data nodes communicate mostly with the head node and with each other; this traffic rarely leaves the internal network. Data nodes also require a connection to a management network, but this connection is used only initially, when software is transferred from the head node, and later for the collection of trace data.
- In general, changing the number of LPARs requires a complete reload of the accelerator and a fresh installation. For example, if you start with a three-drawer system with one LPAR per drawer (as recommended), and then want to add a fourth drawer, create a new setup and reinstall the cluster with four drawers.
- For two-drawer systems and four-drawer systems, four LPARs are recommended. That is, you have either two LPARs per drawer (two-drawer system) or one LPAR per drawer (four-drawer system). If you want to extend a two-drawer system to a four-drawer system, you can keep the existing four LPARs; you need not reinstall the cluster.
- To migrate from a setup on the IBM® z15®, you do not have to change the number of accelerator LPARs. You can continue with a cluster of three, four, or five LPARs (depending on your number of drawers; use or continue with as many LPARs as drawers). However, the performance of any such setup is not as good as that of a setup with three or four nodes on the IBM z16® or the IBM z17.
- For confined head-node installations, the recommendation is to use dedicated IFLs only. However, if you want to use a confined head-node setup for shared IFL workloads, make sure that the LPARs that provide the shared workloads have a significantly lower priority than the LPARs of the confined head-node cluster. This way, the performance impact is kept to a minimum.
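The LPAR recommendations above can be summarized as a small decision helper. The following sketch is illustrative only; the function name and error handling are my own and not part of any IBM tooling. It simply encodes the drawer-to-LPAR mappings stated in this section.

```python
def recommended_lpars(drawers: int, migrating_from_z15: bool = False) -> int:
    """Return the recommended number of accelerator LPARs for a
    multi-node cluster, based on the recommendations in this section.

    Note: hypothetical helper for illustration, not an IBM API.
    """
    if migrating_from_z15:
        # When migrating from an IBM z15 setup, keep as many LPARs
        # as drawers (a cluster of three, four, or five LPARs).
        if not 3 <= drawers <= 5:
            raise ValueError("expected 3 to 5 drawers for a z15 migration")
        return drawers
    if drawers == 2:
        return 4  # two-drawer system: two LPARs per drawer
    if drawers == 3:
        return 3  # three-drawer system: one LPAR per drawer (recommended start)
    if drawers == 4:
        return 4  # four-drawer system: one LPAR per drawer
    raise ValueError("no recommendation in this section for this drawer count")
```

For example, extending a two-drawer system to four drawers leaves the recommended count unchanged at four LPARs, which is why that extension needs no reinstallation.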
Notes on product upgrades
Two types of installations exist:
- Cross-drawer head-node installations
- Confined head-node installations
Starting with version 8.1, only confined head-node installations will be supported. This requires that you migrate existing cross-drawer head-node installations before you upgrade to a new product version. That is, a cross-drawer head-node installation of Db2 Analytics Accelerator on Z Version 7.5 must first be migrated to a Version 7.5 confined head-node installation, before the product upgrade.
A migration from a version 7.5 cross-drawer head-node installation to a version 7.5 confined head-node installation is essentially a new installation that involves a complete reload of your data.
You might want to upgrade the product on existing hardware, for example, an IBM z15 or z16 system, or on new hardware, such as the IBM z17. An upgrade on new hardware is generally easier to accomplish, but no matter which upgrade you choose, you should consider the upgrade to be a project of its own. As such, it requires careful planning because it involves storage, memory, hardware configuration, existing tables, and so on. Therefore, contact IBM support if you are planning to upgrade. IBM support is familiar with the possible upgrade scenarios and will work out the best upgrade plan with you.