Installing and starting the appliance (single-node setup)

Follow these steps after defining the accelerator (SSC) LPAR to complete a single-node setup of Db2 Analytics Accelerator on Z.

Before you begin

Storage must be provided for various storage pools. Table 1 provides an overview and minimum size recommendations.

Table 1. Recommended pools and pool sizes

Pool | JSON key | Description | Size
Appliance data pool | "data_devices" | Accelerator database, including accelerator-shadow tables, accelerator-only tables, temporary database space, and temporary result space. | > 200 GB plus 90 percent of the uncompressed Db2 for z/OS data (including accelerator-only tables) plus 20 percent of the accelerator (SSC) LPAR memory
Appliance operation | "boot_device" | Size of the appliance image, which was defined and used during the image file upload step (“Install Software Appliance”). Important: This must be a single disk. | > 40 GB
Appliance runtime | "runtime_devices" | Temporary storage space required for the appliance runtime (container execution environment). | > 80 GB
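
For example, assuming hypothetical values of 10 TB of uncompressed Db2 for z/OS data (including accelerator-only tables) and 1 TB of accelerator (SSC) LPAR memory, the recommended minimum size of the appliance data pool works out as follows:

200 GB + (0.9 × 10 TB) + (0.2 × 1 TB) = 200 GB + 9 TB + 200 GB ≈ 9.4 TB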

Procedure

Restriction:
  • For the following steps, you need Mozilla Firefox or Google Chrome. Other browsers are not supported.
  • If the Login page does not show all the controls needed to log in successfully, or if, after logon, an SSC installer window is not fully usable because controls are missing, change the browser language to English and try again.

  1. Define a dedicated Secure Service Container (SSC) LPAR as described in Defining an LPAR for Db2 Analytics Accelerator on Z.
    Important: A management network must exist between the system that runs your web browser and the SSC LPAR that runs the SSC installer. Make sure that this network is fast enough. A transfer rate of 1 Gbps (Gigabit Ethernet, 1 GbE) or faster is recommended. If the network connection is very slow, the SSC installer might be sluggish to respond to button clicks, and the upload of the large image might lead to other issues.
  2. Log on to the Admin UI and proceed to the Welcome page.
    For a description, see Logging on to the Admin UI.
  3. On the Welcome page, click First-Time Setup.
    You see the following page:
    Figure 1. Accelerator Configuration Definition page
  4. Starting with product version 7.1.9, all configuration settings are made by uploading a configuration file in JavaScript Object Notation (JSON) format.
    A sample configuration file is provided at the bottom of the Accelerator Configuration Definition page. Under the heading Sample Configuration Files:
    1. Click the Simple tab.
    2. Click the download button.
      Note: Depending on your settings, some browsers will display the JSON file rather than download it. In that case, click File > Save Page As and save the file with an extension of .json.

    The sample configuration file is stored in the Download folder of your web browser.

    Important: Compared with previous product releases, the configuration file delivered with product version 7.5.3 and later versions has changed considerably.
    When you open this file in your web browser, it looks as follows:
    Figure 2. Extract of the sample configuration file (JSON format), displayed in a web browser
  5. Open this file in a text editor of your choice and modify the settings according to your needs.
    An editor capable of validating JSON files is recommended because the configuration file must be valid JSON. If it cannot be parsed correctly, you will run into errors. Valid JSON means:
    • Quotes are required around attribute values, even if these are plain numbers.
    • Colons must be used to separate attribute names from their values.
    • Object definitions consisting of key/value pairs must be enclosed in curly braces.
    • Arrays or lists must be enclosed in square brackets.
    For your reference, take a look at the code of the sample configuration file:
    {
        "version": "7.5.9",
        "accelerator_name": "igor01",
        "accelerator_description": "SAMPLE-single-node",
        "accelerator_type": "single-node",
        "admin_ui_timeout": "60",
        "db2_pairing_ipv4": "10.1.1.101/24",
        "temp_working_space": "none",
        "network_interface_bindings": {
            "mgmt_nw": "activation-profile",
            "db2_nw": "osa0Ap0"
        },
        "runtime_environments": [
            {
                "cpc_name": "CPC001",
                "lpar_name": "IGOR001",
                "network_interfaces": [
                    {
                        "name": "osa0Ap0",
                        "device": "0.0.0a00",
                        "port": "0"
                    }
                ]
            }
        ],
        "storage_environments": [
            {
                "boot_device": {
                    "type": "dasd",
                    "device": "0.0.5e29"
                },
                "runtime_devices": {
                    "type": "dasd",
                    "devices": [
                        "0.0.5e25",
                        "0.0.5e26"
                    ]
                },
                "data_devices": {
                    "type": "dasd",
                    "devices": [
                        "0.0.5e14",
                        [
                            "0.0.5e80",
                            "0.0.5e8f"
                        ]
                    ]
                },
                "hyperpav": "auto"
            }
        ]
    }
    Most attributes in the configuration file are required.
    "version" (required)
    The version of the accelerator. The version in the configuration file must match the installed version of the accelerator exactly; do not change this value.
    "accelerator_name" (required)
    The name of the accelerator. This attribute is used to identify the accelerator in the Admin UI and in log and trace files written to the logs/dumps/traces/panels directory. You can change the name online. A reset or restart is not required.
    "accelerator_description" (optional)
    Optional text description. You might want to add some information about the accelerator, especially if it is helpful for an administrator. You can change the description online. A reset or restart is not required.
    "accelerator_type" (required)
    As the name suggests, the type of the accelerator. Set this attribute to the value "single-node". This means that the runtime and storage environments you define later contain definitions for a single LPAR and a single storage environment only. A single-node accelerator can use up to 40 Integrated Facilities for Linux® (IFLs) and 3 TB of memory.
    Important: Once this parameter has been set for and used by an accelerator, it cannot be changed anymore. If you want to switch to a multi-node installation, you must provide an entirely new configuration. Also, if you switch from one mode to the other, the existing data is not migrated or preserved.
    "admin_ui_timeout" (optional)
    Defines the session timeout of the Admin UI. When the specified time has passed, the session expires, and the administrator has to log on again to continue. Values from 1 to 1440 minutes are allowed. The default is 15 minutes.

    If you set or change this parameter, the change takes effect immediately, that is, without a restart of the accelerator.

    Example:
    "admin_ui_timeout": "60",

    This sets the session timeout period for the Admin UI to 60 minutes.

    Note: This parameter was introduced with product version 7.5.11. If you try to use it with an earlier product version, your configuration setup will fail, and an error message will be issued.
    "db2_pairing_ipv4" (required)
    The IP address used to pair your Db2 subsystem with the specified accelerator. This IP address uniquely identifies the accelerator and is used by Db2 for z/OS to connect to the accelerator.
    Changing this value requires a subsequent reset from the Accelerator Components Health Status page of the Admin UI. See the following figure.
    Figure 3. Resetting an accelerator configuration
    Important: To change the value, it is no longer necessary to select the Wipe data option and delete the existing data. This was only required in previous versions and involved a new pairing and upload of the tables.
    You can specify a netmask as part of the IPv4 address, such as /24 for a subnet with 256 addresses. For example:
    "db2_pairing_ipv4": "10.101.31.172/24"

    This assigns the accelerator the IP address 10.101.31.172 within a subnet that comprises the address range from 10.101.31.0 to 10.101.31.255.

    "temp_working_space" (optional)
    The accelerator needs temporary storage for extensive sort operations that cannot be executed exclusively in the system memory. Increase the size if certain operations cannot be completed because the temporary storage and the system memory do not suffice. In addition, the accelerator needs temporary storage for the replication spill queue and for query results that tend to arrive faster than they can be picked up by the receiving client. The "temp_working_space" parameter specifies the size of the temporary storage as part of the data pool.

    Complex queries can require a large amount of temporary storage. A single query can easily consume multiple terabytes of temporary storage during its execution. If your system processes several long-running queries of this type at the same time, some of these queries might have to be canceled when the temporary storage runs out.

    Transient storage devices (NVMe drives), which are highly recommended for various reasons, can help you avoid situations like this. See transient_storage for more information.

    You can also define a transient storage pool, and dedicate external devices (disk drives) to this pool. This option is not as fast as NVMe drives, but still much better than using a portion of your data pool for temporary data. It is the recommended solution if your accelerator is not deployed on an IBM LinuxONE computer, but on a classic IBM Z® computer. See transient_devices for more information.

    If you do not use any of the transient storage options, a portion of your data pool will be used for the temporary data.

    Note that there is no automatic spillover or failover between the two options: temporary data is placed either in your data pool or, if it exists, on transient storage. If your transient storage runs out, processing does not continue with temporary working space in your data pool.

    You can set the "temp_working_space" parameter to the following values:

    "unlimited"
    The advantage of this setting is that it works without sizing, that all storage resources can be shared, and that no storage is reserved as temporary working space if it is not needed. The disadvantage is a lower operation stability because space-intensive queries might lead to the cancellation of other jobs, irrespective of their types. A space-intensive query can even cause the cancellation of INSERT operations.

    If a situation is reached where nearly all temporary working space is used up by running queries, replication jobs, load jobs, or by the population of accelerator-shadow tables, any additional workload that claims more than the available disk space is canceled automatically.

    "automatic"
    Starting with product version 7.5.11, this is the default, that is, the value used if you omit the parameter. This setting results in the creation of a database-managed (DMS) table space based on the system size and the free space in the data pool. The size will be the smaller of the following two values:
    • 80 percent of the LPAR memory
    • 50 percent of the free space of the data pool
    Tip: In many cases, this is not an ideal allocation. If your queries require significant amounts of temporary storage, consider the use of transient_storage. If, in contrast, your query workload needs just small amounts or hardly any temporary storage at all, specify a fixed size with a low value or "none".

    The current default is recommended only if you do not know the storage requirements of your query workload very well, or if you do not want to spend much time on fine-tuning.

    "none"
    Only a very small temporary workspace is used. A query that includes an extensive sort operation that overflows the system memory fails immediately.
    Fixed size
    A DMS table space (see "automatic") with a size you determine. For example:
    "temp_working_space": "500 GB"

    The specified size is reserved for the temporary work space and serves as a size limit at the same time.

    A change of the setting requires a restart of the accelerator. Changes do not take effect before the restart. If you use the setting "automatic", the size of the DMS table space is re-adjusted dynamically after each start of the accelerator, according to the memory and disk usage rates that are encountered.

    If you set "temp_working_space" to "automatic" or to a fixed value, and if the free storage space has shrunk during the time from one restart to the next, a smaller temporary workspace is provided after the latest restart. If you were close to 100 percent storage usage before a restart, the temporary workspace might be reduced so much during the restart that certain jobs cannot be executed anymore. In that case, consider adding storage to the accelerator. This can be done while your system is online.

    "dispatch_mode" (optional)
    This optional parameter determines how the workload is distributed across the available CPUs.
    Note: The "dispatch_mode" parameter has an effect only if you process shared IFL workloads. Otherwise, you can ignore it.

    If you use a distributed head node, you must configure all IFLs in the cluster as shared IFLs. This is because the head node shares resources with the data nodes in this mode, so that PR/SM virtualization capabilities can be exploited. With just a single node or a confined head node, you can use dedicated IFLs or shared IFLs. Dedicated IFLs probably work a little faster in these modes. However, to simplify this documentation, the use of shared IFLs is assumed throughout the text.

    You can set this parameter to the value "horizontal" or "vertical". The default value is "vertical".

    Vertical dispatch mode (HiperDispatch mode) means that the workload is processed by just a subset of the available CPUs, which reduces the scheduling overhead. It is the most efficient mode for systems with many logical processors.

    Horizontal dispatch mode means that the workload is spread across all available CPUs. For older versions of Db2 Analytics Accelerator on Z, you could not set a dispatch mode. The horizontal mode was always used.
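
    For reference, a minimal sketch of this setting as it could appear in the configuration file (the value shown matches the default):

    "dispatch_mode": "vertical",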

    "network_interface_bindings" (required)
    In the network interface bindings section, you map physical interfaces to network names that can be specified in a runtime environment. The settings in this section cannot be changed online. If you need to change them, you must restart the Db2 Analytics Accelerator on Z accelerator.
    "mgmt_nw"
    This network, which is used by the Admin UI and other support interfaces, is defined by the HMC activation profile of the accelerator (SSC) LPAR. It is not part of the Db2 Analytics Accelerator on Z configuration. Therefore, use the attribute value "activation-profile". Note that the name of the management network might change if someone updates the activation profile of the accelerator (SSC) LPAR on the HMC.
    Note: Jumbo frame support is not required for the switches or network interface controllers in this network.
    "db2_nw"
    This network name points to the IP address of your Db2 subsystem (counterpart of the "db2_pairing_ipv4"). It is used during the pairing process, and all network traffic between your Db2 subsystem and the accelerator will run through this interface.

    The attribute value must be the same as one of the "name:" attributes in your "network_interfaces" definitions further down in the configuration file. The value must be an alphanumeric character string no longer than 8 characters. Compare this with the sample code.

    In the "network_interfaces" section, which is described below, you find the details of all networks, including the network used for the pairing process. In the example, this is the network device with the ID 0.0.4b00.
    Important: The switches and network interface controllers in this network must support jumbo frames.
    "gdps_nw"
    This network interface is used for GDPS® failover support. The value is the interface name of the alternative data network in a failover scenario. You can omit this value if you do not use GDPS.
    Note: Jumbo frame support is not required for the switches or network interface controllers in this network.

    An example of the "network_interface_bindings" block:

    "network_interface_bindings": {
      "db2_nw": "my_db2_network",
      "gdps_nw": "my_gdps_network",
      "mgmt_nw": "activation-profile"
    }
    Notes:
    • Multiple interfaces can use the same physical connection. For example, in a GDPS setup, the interface name specified as the value of “gdps_nw” might be the same as the value of “db2_nw”. In this case, the same physical connection would be used by both interfaces, and only one network definition would be required in the "network_interfaces" block further down.

      However, a separation of the subnets is recommended. At least the management network should be separated from the dedicated Db2 network, for the following reasons:

      • Frequently, the management network is not optimized for the best performance.
      • The management network might contain gateways that cause delays.
      • The management network might not support jumbo frames.
    • Do not use the value “activation-profile” for any network interface other than the “mgmt_nw” because the activation profile might change in the HMC. Such a change might have repercussions if a network that refers to the activation profile is used for other purposes. The change might make your network definition unusable.
    "runtime_environments" (required)
    This block defines the network interfaces and other LPAR-specific settings of the accelerator (SSC LPAR). Each of these LPARs is identified by the CPC name and the LPAR name. A set of networks must be defined for each LPAR. This is usually the Db2 network, and, optionally, an additional network to reach the GDPS server. The definition of a runtime environment is also required for GDPS setups because GDPS controls the storage configuration, but not the configuration of accelerator LPARs. Specify the following attributes to identify an accelerator (SSC) LPAR:
    "cpc_name"
    The name of the CPC as defined on the Hardware Management Console (HMC).

    You cannot change this name online. If you must change it because the CPC name changes or because you want to move the accelerator to a different CPC, first create an additional runtime environment that contains the new value. Then shut down the accelerator, change the old value and reactivate the accelerator.

    "lpar_name"
    The name of the accelerator (SSC) LPAR as defined on the HMC.

    You cannot change this name online. If you must change it because the LPAR name changes or because you want to move the accelerator to a different LPAR, first create an additional runtime environment that contains the new value. Then shut down the accelerator, change the old value and reactivate the accelerator.
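
    For illustration, a sketch of a "runtime_environments" block that temporarily contains both the old and the new environment during such a move (the CPC names, LPAR names, and device IDs are hypothetical):

    "runtime_environments": [
      {
        "cpc_name": "CPC001",
        "lpar_name": "IGOR001",
        "network_interfaces": [
          {
            "name": "osa0Ap0",
            "device": "0.0.0a00",
            "port": "0"
          }
        ]
      },
      {
        "cpc_name": "CPC002",
        "lpar_name": "IGOR002",
        "network_interfaces": [
          {
            "name": "osa0Ap0",
            "device": "0.0.0a00",
            "port": "0"
          }
        ]
      }
    ]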

    "transient_storage": "NVMe" (optional)
    This is a highly recommended option for owners of an IBM LinuxONE system. NVMe storage is local storage of the LinuxONE, which can be accessed directly. It has a much better performance than external storage devices, and might help free up capacity on your external devices. When in use, all data on the NVMe storage is encrypted. The encryption keys are kept in the system memory only. This ensures that all data on the NVMe is securely removed when the accelerator is stopped or restarted.

    NVMe storage is ideal for the placement of temporary data, especially if you need a lot of this space because you run complex queries that execute sort operations or produce result sets so large that they do not fit into the accelerator memory. The database engine of the accelerator can store and read significant amounts of data from NVMe storage in a short period of time.

    NVMe storage is also good for the replication spill queue and for query results that tend to arrive faster than they can be picked up by the receiving client.

    Configuration example:

    "runtime_environments": [
            {
                "cpc_name": "Z16_4",
                "lpar_name": "LPAR1",
                "transient_storage": "NVMe",
                "network_interfaces": [
                          .
                          .
    
    Important:
    • If you specify "transient_storage":"NVMe", you don't have to set the "temp_working_space" parameter at all. Any setting of this parameter will become ineffective if transient storage is used.
    • If NVMe drives do not provide enough storage for your temporary data, consider using a transient data pool. See transient_devices for more information.
    • In a Geographically Dispersed Parallel Sysplex® (GDPS), you can have different transient storage configurations for the different sites. For example, it is possible to use internal NVMe transient storage with 150 TB capacity on the primary site, NVMe transient storage with 10 TB capacity on the secondary site, and temporary working space of the data pool on the third site.

    For an in-depth discussion of the transient storage options, see Configuring and using transient storage for IBM Db2 Analytics Accelerator for z/OS on Z.

    "network_interfaces" (required)
    This keyword defines the physical network interfaces (OSA, RoCE, or HiperSocket) that are used by a runtime environment. Each network name defined in the "network_interface_bindings" section must be mapped to a physical network interface in the corresponding runtime environment. You can change all of the definitions in this block online. The following attributes must be specified for each physical network:
    "name" (required)
    The name of the network interface in the "network_interface_bindings" section.
    "ipv4"
    The IPv4 address and the subnet used for an interface, for example:
    10.4.1.101/16
    Important: The IPv4 address that the "db2_nw" network uses is specified as the "db2_pairing_ipv4" address. Therefore, do not specify an additional "ipv4" address for the "db2_nw" network.
    "device" (required)
    The identifier of an OSA-Express® card, a RoCE Express card, or a HiperSocket. A device can only be used once. This includes the device specified in the HMC activation profile.
    "port" (optional)
    The network port to be used. If this value is omitted, the port number defaults to "0". Most OSA cards have a single physical port "0".
    "vlan" (optional)
    If a virtual LAN (VLAN) has been defined for the accelerator (SSC) LPAR and you want to use this VLAN as an interface for Db2 Analytics Accelerator on Z, you can specify the VLAN name here.
    Example:
    "runtime_environments": [
      {
        "cpc_name": "SYSZE1",
        "lpar_name": "SVLSSC0C",
        "network_interfaces": [
          {
            "device": "0.0.4b00",
            "name": "my_db2_network",
            "port": "1",
            "vlan": "600"
          }
        ]
      }
    ]
    

    In this example, you find a network interface definition for an accelerator (SSC) LPAR named SVLSSC0C. This LPAR runs in a CPC named SYSZE1. The LPAR has a network device 0.0.4b00 (an OSA-Express card, a RoCE Express card, or a HiperSocket). The name that Db2 Analytics Accelerator on Z uses for this network device is "my_db2_network".

    "static_routes" (optional)

    This option is used to define additional network routes for an interface.

    If the IP address of a Db2 for z/OS LPAR or the GDPS keys LPAR is in a different subnet than the IP address assigned to the accelerator, an additional route definition is needed to establish the connection. An additional static route also helps to avoid undesired network traffic through a default gateway, which might have been defined in the HMC activation profile of the accelerator (SSC) LPAR.

    "ipv4"
    The IPv4 address or the IPv4 address and subnet of the target network.
    "via"
    The IPv4 address of the routing device.

    Example: The accelerator's pairing IP address is 10.20.1.33/24 and there are two Db2 for z/OS LPARs with the IP addresses 10.1.1.47/24 and 10.1.1.48/24.

    One or more gateways connect both subnets. One gateway is accessed through IP address 10.20.1.1, the other through 10.1.1.1.

    To allow traffic from one network to the other, the TCPIP.PROFILE definition in z/OS defines a route to 10.20.1.0/24, which uses the gateway 10.20.1.1. The accelerator uses the following configuration to enable traffic to the 10.1.1.0 network using the corresponding gateway at 10.1.1.1:

    {
      "accelerator_name": "S1",
      "db2_pairing_ipv4": "10.20.1.33/24",
      "network_interface_bindings": {
        "db2_nw": "db2_conn",
        "mgmt_nw": "activation-profile"
      },
      "runtime_environments": [
        {
          "network_interfaces": [
            {
              "name": "db2_conn",
              "device": "0.0.0440",
              "vlan": "552",
              "static_routes": [ { "ipv4": "10.1.1.0/24", "via": "10.20.1.1" } ]
            }
          ]
        }
      ],
    

    This way, all traffic to an IPv4 address that starts with 10.1.1 uses the OSA-Express card with device ID 0.0.0440 via gateway 10.20.1.1. All network traffic between the accelerator and destinations in the 10.1.1.0/24 subnet is thus bound to that OSA device.

    "bond_settings" (optional)
    This attribute allows you to define several network cards (OSA-Express cards) as a single device. Bonding is usually employed in a high-availability setup, as the remaining network cards in the setup can take over if one network card fails. It is also possible to run all available network cards simultaneously.
    Example:
    "network_interfaces": [
      {
        "name": "db2_conn",
        "vlan": "700",
        "bond_settings": {
          "mode": "active-backup",
          "workers": [
            {
              "device": "0.0.0a00",
              "port": "0"
            },
            {
              "device": "0.0.1b00",
              "port": "1"
            }
          ]
        }
      }
    ]
    

    In this example, two OSA cards (devices 0a00 and 1b00) are combined to one bonding device called "db2_conn". The device works in "active-backup" mode, meaning that at any time, just one of the network cards is active. The other card takes over when the active card fails.

    You can alternatively specify "mode": "802.3ad", in which case all network cards of the device will be active at the same time. "802.3ad" stands for the IEEE 802.3ad link aggregation mode.

    In 802.3ad mode, you need at least two physical devices. Specify these in the same way as you specify the devices for active-backup mode. That is, use a "workers" list as shown in the previous example.
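
    For reference, a sketch of the same bonding device in 802.3ad mode (the device IDs and VLAN are the same hypothetical values as in the active-backup example above):

    "network_interfaces": [
      {
        "name": "db2_conn",
        "vlan": "700",
        "bond_settings": {
          "mode": "802.3ad",
          "workers": [
            {
              "device": "0.0.0a00",
              "port": "0"
            },
            {
              "device": "0.0.1b00",
              "port": "1"
            }
          ]
        }
      }
    ]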

    "options" (optional)
    It is not necessary to specify "options" for "bond_settings". If the options are omitted, default values are used. Whether options apply to a particular setup depends on the selected mode ("active-backup" or "802.3ad"). For a detailed description of these options, see Chapter 7. Configure Network Bonding in the Red Hat® Enterprise Linux 7: Networking Guide. A link is provided at the end of this topic.
    Restriction: Currently, you cannot specify just a subset of the available options. You either have to specify no options at all, in which case default values are used, or specify all options pertaining to a particular mode.

    All of the following values can be changed online.

    "primary"
    Valid in active-backup mode only. The first physical device to be used. This is "0.0.0a00" according to the previous example. The primary device is the first of the bonding interfaces. It will be used as the active device unless it fails.
    "primary-reselect": "always"
    Valid in active-backup mode only. Determines how the active physical device is selected after a failure. Specify "always", which means that an attempt will be made to make the first physical device (labeled "primary") active again.

    Other allowed options are "better", which means that the fastest device will be used as the active device, or "failure", which means that the active physical device is only changed if the currently active device fails.

    "failover-MAC": "none"
    Valid in active-backup mode only. Allows you to set all physical devices to the same MAC address or to determine these addresses according to a policy. Specify the value "none", which means that the same MAC address will be used for all physical devices.
    "no-gratuitous-ARPs": "0"
    Valid in active-backup mode only. Determines the number of peer notifications after a failover event. Specify "0", which means no notifications. This option corresponds to the num_grat_arp or num_unsol_na option in the Red Hat Enterprise Linux 7: Networking Guide.
    "transmit-hash-policy": "layer2"
    Valid in 802.3ad mode only. Selects a policy according to which the MAC addresses of the physical devices are determined. Specify "layer2", which means that traffic to a particular network peer is assigned to the same network device, which is determined solely by its MAC address. Other allowed options are "layer3+4" and "layer2+3". The option "layer3+4" means that multiple network devices can be used to reach a single network peer even if a single network connection does not span multiple network devices. The option "layer2+3" is similar to "layer2", but the network device is selected by its IP address in addition to its MAC address.
    Note: In the Red Hat Enterprise Linux 7: Networking Guide, this option is called "xmit-hash-policy".
    "LACP-rate": "slow" | "fast"
    Valid in 802.3ad mode only. The rate at which physical devices transmit Link Aggregation Control Protocol Data Units (LACPDUs). Specify "slow", which means every 30 seconds, or "fast", which means every 1 second.
    "link-monitoring": "MII"
    Selects the method to be used for monitoring the physical device's ability to carry network traffic. Select "MII", which stands for media-independent interface. With this setting, the driver, the MII register, or the ethtool can be queried for monitoring information about a physical device. Alternatively, you can specify "ARP" to use the ARP monitor.
    "monitoring-frequency": "100"
    The time interval that passes between two monitoring events. It is an integer value that stands for milliseconds. Use a value of "100".
    "link-up-delay": "0"
    Delay that needs to pass before network traffic is sent to a physical device after link monitoring has reported the device to be up. Specify "0", which means no delay.
    "link-down-delay": "0"
    Delay that needs to pass before network traffic is routed to the failover device after link monitoring has reported the failure of the previously active device. Specify "0", which means no delay.
    Example (active-backup mode):
    "options": {
      "primary": "0.0.0a00",
      "primary-reselect": "always",
      "failover-MAC": "none",
      "no-gratuitous-ARPs": "0",
      "link-monitoring": "MII",
      "monitoring-frequency": "100",
      "link-up-delay": "0",
      "link-down-delay": "0"
    }
    Example (802.3ad mode):
    "options": {
      "LACP-rate": "slow",
      "transmit-hash-policy": "layer2",
      "link-monitoring": "MII",
      "monitoring-frequency": "100",
      "link-up-delay": "0",
      "link-down-delay": "0"
    }
    "zfcp_devices" (required if ZFCP drives are used)
    The FICON® Express ports of your devices must be listed in the "runtime_environments" section. This is required for ZFCP storage devices only, as the ports of ECKD devices are handled by the firmware of the CPC. For ZFCP devices, however, you must list the port names. See the following example:
    "runtime_environments": [
      {
        "cpc_name": "CPC001",
        "lpar_name": "IGOR01",
        "network_interfaces": [
          {
            "name": "osa2db2",
            "device": "0.0.0a00"
          }
        ],
        "zfcp_devices": [
          "0.0.1b10",
          "0.0.1b40",
          "0.0.2c80"
        ]
      }
    ]
    

    The accelerator uses multiple paths, that is, it tries to use all specified ports. For that reason, a list of different ports rather than just one port can increase the performance.

    You can use a single port identifier for a ZFCP device or for an ECKD device, but not for both.

    You can change the ZFCP port names online.

    "storage_environments" (required)
    This block lists all storage devices, that is, disks or disk enclosures. At a minimum, this block contains the name of the storage device on which the accelerator is initially deployed. You can add storage devices or define multiple storage environments for mirroring or failover purposes. See Figure 4. Note, however, that the accelerator does not manage the mirroring or copying of data to additional devices or storage environments.
    Figure 4. Multiple storage environments
    Attention: If you use GDPS integration, GDPS will manage the storage replication. In this case, do not define more than one storage environment. Multiple storage environments are not supported in the GDPS context.

    The "storage_environments" section combines the "primary_storage" and "storage_maps" sections found in configuration files of earlier releases. During the first-time deployment, these devices are formatted, which means that the existing data on these devices is erased.

    Migration from an older release:

    If your current JSON configuration file still shows the "primary_storage" and "storage_maps" keywords, you need to update your storage environment configuration. In Table 2, you find an example of an old configuration block and a corresponding new configuration block for a setup with two storage environments. Use this table as a reference to make the required changes.

    Table 2. Old and new storage environment configurations
    The first code block shows the old configuration (version 7.5.2 and lower); the second block shows the corresponding new configuration (version 7.5.3 and higher).
    "primary_storage": {
        "boot_device": {
          "type": "dasd",
          "device": "0.0.5e29"
        },
        "runtime_devices": {
          "type": "dasd",
          "devices": [
            "0.0.5e25",
            "0.0.5e26"
          ]
        },
        "data_devices": {
          "type": "dasd",
          "devices": [
            "0.0.5e14",
            ["0.0.5e80","0.0.5e8f"]
          ]
        }
      },
      "storage_maps": [
        {
          "boot_device": "0.0.1b11",
          "map": [
            {
              "primary": ["0.0.5e25","0.0.5e26"],
              "copy":    ["0.0.1b25","0.0.1b26"]
            },
            {
              "primary": "0.0.5e14",
              "copy": "0.0.1c14"
            },
            {
              "primary": ["0.0.5e80","0.0.5e8f"],
              "copy":    ["0.0.1d00","0.0.1d0f"]
            }
          ]
        }
      ]
    
    "storage_environments": [
      {
        "boot_device": {
          "type": "dasd",
          "device": "0.0.5e29"
        },
        "runtime_devices": {
          "type": "dasd",
          "devices": [
            "0.0.5e25",
            "0.0.5e26"
          ]
        },
        "data_devices": {
          "type": "dasd",
          "devices": [
            "0.0.5e14",
            ["0.0.5e80","0.0.5e8f"]
          ]
        }
      },
      {
        "boot_device": {
          "type": "dasd",
          "device": "0.0.1b11"
        },
        "runtime_devices": {
          "type": "dasd",
          "devices": [
            "0.0.1b25",
            "0.0.1b26"
          ]
        },
        "data_devices": {
          "type": "dasd",
          "devices": [
            "0.0.1c14",
            ["0.0.1d00","0.0.1d0f"]
          ]
        }
      }
    ]
    

    An accelerator can use up to four types of storage: the boot device, the runtime data pool, the data pool for operative data, and, optionally, a transient pool for temporary data. Each storage device or storage pool can use DASD (ECKD) or ZFCP (SCSI) devices. The mixing of different device types is not supported. However, the size of individual devices in a pool is not restricted.

    Important: You can add devices to a storage pool at any time while the accelerator is online. New devices are automatically integrated into a storage pool. This does not require a reset or a restart. However, you cannot remove devices from a storage pool in this way. A removal requires an entirely new installation.

    You must define these storage devices by using the following attributes in the configuration file:

    "boot_device" (required)
    The boot device contains the software image that is written by the Secure Service Container (SSC) installer. The accelerator will be started from this device. It is also the target device for uploading the SSC installer image before the initial deployment or before an update. The boot device must be a single device with at least 40 GB net storage capacity.

    The boot device uniquely identifies a storage environment. If multiple storage environments exist, the storage environment (definition) that lists the currently active boot device will be used. An accelerator without a valid storage environment is invalid.

    Important: It is not possible to change the boot device during an update.

    For example:

    "boot_device": {
      "type": "dasd",
      "device": "0.0.5e29"
    }
    

    or

    "boot_device": {
      "type": "zfcp",
      "udid": "6005076241bb5024200000000000003e"
    }
    "runtime_devices" (required)
    The runtime storage is used by the accelerator software for internal processing. It does not contain user data and its size is fixed. The size does not depend on the amount of user data processed by the accelerator. Specify a list of devices with a total net capacity of at least 80 GB. During normal operation, the utilization rate should not exceed 80 percent. If it does exceed 80 percent most of the time, consider adding devices. For example:
    "runtime_devices": {
      "type": "dasd",
      "devices": [
        "0.0.998c"
      ]
    }

    The mixing of different device types in a single list, that is, DASD (ECKD) and ZFCP (SCSI) is not supported.

    "data_devices" (required)
    The data storage is used to store the user data and temporary data of the accelerator (table data). It is typically the largest storage area of your entire configuration. Its size is determined by the amount of data the accelerator has to handle. During normal operation, the utilization rate should not exceed 80 percent. If it does exceed 80 percent most of the time, consider adding devices. For example:
    "data_devices": {
      "type": "dasd",
      "devices": [
        "0.0.9c00",
        "0.0.9c01",
        "0.0.9c02"
      ]
    }
    
    "transient_devices" (optional)
    You can use a transient data pool for the storage of temporary data. If you do use a transient pool, temporary data does not occupy disk space in your data pool. When in use, a transient storage pool behaves in the same way as transient_storage, except that the performance of NVMe drives for transient storage is better than the performance of a transient pool on disks. Nevertheless, the separation of the data pool into a pool for temporary data and other data leads to a significant performance gain when compared to a solution where all this data resides in a single data pool.

    A transient storage pool is recommended if your accelerator is not deployed on an IBM LinuxONE computer (where you could use NVMe drives), but on a classic IBM Z computer.

    For example:

    "transient_devices": {
          "type": "dasd",
          "devices": [
            "0.0.9b12"
          ]
        }
    Important:
    • A transient storage pool and transient storage with NVMe drives cannot be used at the same time. A configured transient storage pool is therefore ignored if transient storage with NVMe drives exists.
    • The existence of transient storage (pool or NVMe drives) invalidates any custom setting of the "temp_working_space" parameter because this parameter applies to temporary data in the data pool only. As soon as temporary data is processed on transient storage, the "temp_working_space" parameter becomes ineffective.
    • In a Geographically Dispersed Parallel Sysplex (GDPS), you can have different transient storage configurations for the different sites. For example, it is possible to use internal NVMe transient storage with 150 TB capacity on the primary site, NVMe transient storage with 10 TB capacity on the secondary site, and temporary working space of the data pool on the third site.

    For an in-depth discussion of the transient storage options, see Configuring and using transient storage for IBM Db2 Analytics Accelerator for z/OS on Z.

    "type" (required)
    This is the type of storage to be used (disk type). Possible values are "dasd" for extended count key data (ECKD) volumes and "zfcp" for Small Computer System Interface (SCSI) volumes. You must specify the type for each device category (that is, the boot device, the runtime device, and the data device).
    Important:
    • It is not possible to mix ECKD and SCSI devices in a single device category or device pool.
    • If you use DASD (ECKD) storage, HyperPAV aliases are strongly recommended because they increase the processing speed.
      Note: The "PAV" in HyperPAV stands for Parallel Access Volumes. It is a concept of using multiple devices or aliases to address a single DASD (ECKD) disk device.
    • DASD (ECKD) devices are formatted by the dasdfmt program of the Linux operating system on the accelerator. This can take a long time, sometimes even hours for large devices or storage pools. HyperPAV aliases also help speed up formatting. Therefore, define HyperPAV aliases also in your initial JSON configuration file.
    • You can use DASD (ECKD) devices of different sizes in a single pool.
    • You can use ZFCP devices of different sizes in a single pool.
    • For ZFCP devices, use FICON Express (FE) ports because these ports provide much better performance and availability. FE ports are defined in the runtime environment.
    • Although the adding of devices is supported while the accelerator is online, the type cannot be changed after the initialization of the storage pool. To change the type, you must remove the accelerator and reinstall it.
    Example (ECKD or "dasd"):
    "type": "dasd",
    "devices": [
      "0.0.9c00",
      "0.0.9c01",
      "0.0.9c02"
    ]
    
    Example (SCSI or "zfcp"):
    "type": "zfcp",
    "udids": [
      "0c984712545423523614b8d812345632",
      "0c0a0b5c9d15555a545545b46456c4d6"
    ]
    
    "device" or "devices" (required)
    This attribute is used to list the devices by their names or identifiers. You must specify a device or a list of devices for each device category (that is, the boot device, the runtime device, and the data device).
    ECKD example:
    "storage_environments": [
      {
        "boot_device": {
          "type": "dasd",
          "device": "0.0.9986"
        },
        "data_devices": {
          "type": "dasd",
          "devices": [
            "0.0.9c00",
            "0.0.9c01",
            "0.0.9c02"
          ]
        },
        "runtime_devices": {
          "type": "dasd",
          "devices": [
            "0.0.998c"
          ]
        }
      }
    ]
    ZFCP example:
    "storage_environments": [
      {
        "boot_device": {
          "type": "zfcp",
          "udid": "6005076241bb5024200000000000003e"
        },
        "runtime_devices": {
          "type": "zfcp",
          "udids": [
            "6005076241bb50242000000021000028"
          ]
        },
        "data_devices": {
          "type": "zfcp",
          "udids": [
            "6005076241bb50242000000003000010",
            "6005076241bb5024200000000a000011",
            "6005076241bb5024200000000b000012"
          ]
        }
      }
    ]
    
    Note: To change the ID of the boot device, a few extra steps are required:
    1. Copy the current storage environment in your JSON file. That is, create a duplicate block in the file.
    2. Change the ID in the copied block.
    3. Restart the accelerator.
    4. Remove the old storage environment from the JSON file when the accelerator is online again.

    Complete example:

    "storage_environments": [
      {
        "boot_device": {
          "type": "dasd",
          "device": "0.0.5e29"
        },
        "runtime_devices": {
          "type": "dasd",
          "devices": [
            "0.0.5e25",
            "0.0.5e26"
          ]
        },
        "data_devices": {
          "type": "dasd",
          "devices": [
            "0.0.5e14",
            ["0.0.5e80","0.0.5e8f"]
          ]
        }
      },
      {
        "boot_device": {
          "type": "dasd",
          "device": "0.0.1b11"
        },
        "runtime_devices": {
          "type": "dasd",
          "devices": [
            "0.0.1b25",
            "0.0.1b26"
          ]
        },
        "data_devices": {
          "type": "dasd",
          "devices": [
            "0.0.1c14",
            ["0.0.1d00","0.0.1d0f"]
          ]
        },
        "transient_devices": {
          "type": "dasd",
          "devices": [
            "0.0.9b12"
          ]
        }
      }
    ]
    

    In this example, the system might start from an LPAR with the boot device 0.0.5e29, which has the data devices 0.0.5e14 and ["0.0.5e80","0.0.5e8f"], where the nested pair denotes a range of devices from 0.0.5e80 to 0.0.5e8f. The system could also start from an LPAR that uses the boot device 0.0.1b11, and the data devices 0.0.1c14 and ["0.0.1d00","0.0.1d0f"]. To use one of the environments for failover purposes, you must replicate the disks. That is, in this case, you would have to copy the data from 0.0.5e14 to 0.0.1c14, and from ["0.0.5e80","0.0.5e8f"] to ["0.0.1d00","0.0.1d0f"].

    Attention: If you use GDPS, do not define more than one storage environment because GDPS handles the replication of storage environments automatically for you. Multiple storage environments are not supported in the GDPS context.

    The example also contains an (optional) transient storage pool, which consists of a single disk device with the ID 0.0.9b12.

    HyperPAV aliases:

    HyperPAV aliases can be used in the following ways:

    • Automatic HyperPAV aliases
    • Explicitly listed HyperPAV aliases

    In automatic HyperPAV mode, all HyperPAV alias devices that are visible to an LPAR and that are connected to the same control-unit image (LCU) are used automatically for that LPAR. To enable the automatic HyperPAV mode, you must add a definition to the storage environments section in the JSON configuration file.

    Important:
    • HyperPAV aliases can be used with DASD (ECKD) storage only.
    • Make sure that only the volumes and HyperPAV aliases you want to use on a particular LPAR are visible to that LPAR. This is even more important if you use automatic alias devices because in that case, your accelerator (SSC) LPAR has to sift through all visible devices just to determine and activate the alias devices.
    • The use of HyperPAV aliases requires a change in the input/output definition file (IODF). See Input/output definition file (IODF) for more information.
    • Having added HyperPAV devices to an existing configuration, you must shut down and restart the affected accelerator (SSC) LPAR. For more information, see Shutting down and restarting a single-node accelerator.
    Example:
    "storage_environments": [
      {
        "boot_device": {
              .
              .
        },
        "runtime_devices": {
              .
              .
        },
        "data_devices": {
              .
              .
        },
        "hyperpav": "auto"
      }
    ]

    Note that in "auto" mode, the system uses all available HyperPAV alias devices.

    To use just a subset of the available devices, it is preferable to list HyperPAV aliases explicitly, as in the following example:

    "hyperpav": [
      [
        "0.1.4000",
        "0.1.4007"
      ],
      "0.1.1234"
    ]

    In this particular case, the system uses a range of HyperPAV aliases from 0.1.4000 to 0.1.4007 plus a single HyperPAV alias with the ID 0.1.1234.

  6. When you're finished with your configuration file, upload it to the Admin UI.
    On the Accelerator Configuration Definition page, click the upload button (see also Figure 1):
    If something is wrong with the file you uploaded, an error message is displayed on the page:
    Figure 5. Error message after uploading a faulty configuration file
  7. If errors occurred, fix these and repeat the upload (steps 5 and 6).
    If no errors occurred, the Accelerator Configuration Definition page shows the settings of your configuration file in expandable sections. You can expand each section to display its settings by clicking the downward pointing arrows.
    Figure 6. Accelerator Configuration Definition after a successful configuration file upload
  8. Click Apply.
    You see a message window indicating that the configuration is in progress. During that time, you can follow the line of events if you click the Logs tab. The page does not refresh automatically, but you can click the Refresh button in the upper right corner.
    Figure 7. Message window showing the progress of your Db2 Analytics Accelerator on Z configuration

Results

When these processes have finished, the Accelerator Components Health Status page is displayed automatically. The page should now give you the following information:
Figure 8. The Accelerator Components Health Status page is displayed after a successful configuration
The page lists the runtime components of the appliance (Db2 Analytics Accelerator on Z): the appliance infrastructure, appliance runtime, appliance authentication service, appliance data service, and Db2 accelerator service. The status of all these components should be green. On the right, you find buttons to reset, update, and shut down the appliance.

The message Accelerator status: ready in the top right corner indicates that all installation steps have been completed and that the components of Db2 Analytics Accelerator on Z have been started.

1 RoCE: Remote Direct Memory Access over Converged Ethernet