Node canisters

Canisters are replaceable hardware units that are subcomponents of enclosures.

A node canister provides host interfaces, management interfaces, and interfaces to the control enclosure. The node canister in the upper enclosure bay is identified as canister 1. The node canister in the lower bay is identified as canister 2. A node canister has cache memory, internal drives to store software and logs, and the processing power to run the system's virtualizing and management software. A node canister also contains batteries that help to protect the system against data loss if a power outage occurs.

The node canisters in an enclosure combine to form a cluster, presenting as a single redundant system with a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface to enable some hardware service actions.

Information about the canister can be found in the management GUI.

Figure 1. Node canister - Rear view
Image of Fibre Channel adapter in adapter slot 1

Boot drive and TPM

Each node canister has two externally accessible boot drives, which hold the system software and associated logs and diagnostics. The boot drives are also used to save the system state and cache data if there is an unexpected power loss to the system or canister.

The system supports hardware root of trust and secure boot operations, which protect against unauthorized physical access to the hardware and prevent malicious software from running on the system.

The system provides secure boot by pairing the boot drive with the Trusted Platform Module (TPM). The TPM provides a secure cryptographic processor that performs verification of hardware and prevents unauthorized access to hardware and the operating system. The TPM protects secure boot to ensure that the installed code images are signed, trusted, and unchanged.

As the system boots, the TPM computes hash values for each part of the boot process (software and configuration settings) in a procedure that is known as measuring. When the measured hash values match the expected values, the TPM locks this information into itself; this is known as sealing information into the TPM. After the information is sealed within the TPM, it can be unsealed only if a later boot produces the same hash values. The TPM verifies each of these hash values during a boot operation and unlocks the operating system only when all of the values are correct.
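The measuring process described above behaves like a hash chain: each boot component is hashed into a running register, so a change to any component changes the final value. The following Python sketch is an illustration of that principle only, not the TPM's actual implementation; the stage names are hypothetical placeholders.

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """Extend a measurement register with the hash of the next boot component."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Measure each stage of the boot in order; the final value matches the
# sealed reference only if every component is unchanged.
pcr = bytes(32)  # register starts at zero
for stage in [b"firmware", b"bootloader", b"kernel", b"config"]:
    pcr = extend(pcr, stage)

sealed_reference = pcr  # value recorded when the information was sealed

# A boot with a tampered component produces a different final value,
# so the sealed information cannot be unsealed.
tampered = bytes(32)
for stage in [b"firmware", b"evil-bootloader", b"kernel", b"config"]:
    tampered = extend(tampered, stage)
assert tampered != sealed_reference
```

Because each extension folds in the previous register value, the order of the stages matters as well as their contents.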

Batteries

Each node canister contains two redundant batteries, which provide power to the canister if there is an unexpected power loss. The batteries allow the canister to safely save the system state and cached data.

Node canister indicators

A node canister has several LED indicators, which convey information about the current state of the node.

Node canister ports

Each node canister has the following on-board ports:
Table 1. Node canister ports
Port marking | Logical port name | Connection and speed | Function
1 | Ethernet port 1 | RJ45 copper, 1 Gbps | Primary management IP; service IP
2 | Ethernet port 2 | RJ45 copper, 1 Gbps | Secondary management IP (optional)
  | Technician port | RJ45 copper, 1 Gbps | DHCP; direct service management
  | USB port | USB type A | Encryption key storage; diagnostics collection; may be disabled
Adapter cards

Each canister contains three removable adapter cages, each of which can contain up to two network adapter cards. Each adapter cage connects to the main canister board through a single x16 PCIe connector and has two connectors into which adapter cards can be inserted. If an adapter is present in the higher-numbered of the two slots in the cage, the x16 link is split into two x8 links. Otherwise, all 16 lanes are routed to the lower-numbered slot.
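The lane-split rule above can be summarized in a short sketch. This is an illustrative model of the behavior described in this section, not canister firmware logic, and the function name is hypothetical.

```python
def cage_lane_allocation(higher_slot_populated: bool) -> dict:
    """Model of how an adapter cage's x16 PCIe link is allocated:
    split into two x8 links when the higher-numbered slot is populated,
    otherwise all 16 lanes go to the lower-numbered slot."""
    if higher_slot_populated:
        return {"lower_slot_lanes": 8, "higher_slot_lanes": 8}
    return {"lower_slot_lanes": 16, "higher_slot_lanes": 0}

# A single adapter in the lower slot gets the full x16 link.
print(cage_lane_allocation(False))  # {'lower_slot_lanes': 16, 'higher_slot_lanes': 0}
```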

In the system software, adapter card slots are numbered left to right.
  • Cage 1 holds adapters 1 and 2.
  • Cage 2 holds adapters 5 and 6.
  • Cage 3 holds adapters 7 and 8.

Adapter slots 3 and 4 are not in a removable cage, and are used for the Compression Accelerator card (fixed configuration).

Each node canister supports the following combinations of network adapters:
Table 2. Adapter Cage 1 combinations
Adapter slot 1 | Adapter slot 2
Quad-port 64 Gbps Fibre Channel | Empty
Quad-port 32 Gbps Fibre Channel | Quad-port 32 Gbps Fibre Channel
Dual-port 25 Gbps Ethernet (iWARP) | Dual-port 25 Gbps Ethernet (iWARP)
Dual-port 25 Gbps Ethernet (RoCE) | Dual-port 25 Gbps Ethernet (RoCE)
Dual-port 100 Gbps Ethernet | Empty
Dual-port 100 Gbps Ethernet | Dual-port 100 Gbps Ethernet
Note: The combination of two dual-port 100 Gbps Ethernet adapters is supported with a minimum of 8.6.0 code.
Table 4. Adapter Cage 3 combinations
Adapter slot 7 | Adapter slot 8
Empty | Empty
Quad-port 64 Gbps Fibre Channel | Empty
Quad-port 32 Gbps Fibre Channel | Quad-port 32 Gbps Fibre Channel
Dual-port 25 Gbps Ethernet (iWARP) | Dual-port 25 Gbps Ethernet (iWARP)
Dual-port 25 Gbps Ethernet (RoCE) | Dual-port 25 Gbps Ethernet (RoCE)
Dual-port 100 Gbps Ethernet | Empty
Dual-port 100 Gbps Ethernet | Dual-port 100 Gbps Ethernet
Note: The combination of two dual-port 100 Gbps Ethernet adapters is supported with a minimum of 8.6.0 code.
Port Numbering
For each adapter card, ports are numbered from top to bottom. Fibre Channel cards have a fixed relationship between port number and slot number.
Note: To ensure the best performance, cards are populated in slot order 1, 2, then 7, 8, then 5, 6. The card in slots 7 and 8 is numbered as shown even if there are no Fibre Channel adapters in slots 5 and 6.
Table 5. Port numbering
Adapter slot | 1 | 2 | 5 | 6 | 7 | 8
Fibre Channel port numbers | 1-4 | 5-8 | 9-12 | 13-16 | 17-20 | 21-24
Ethernet port numbering starts with the on-board ports (1, 2) and then progresses incrementally across any installed adapter cards, starting with the leftmost slot and numbering down each adapter in turn.
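The fixed mapping in Table 5 can be expressed compactly: Fibre Channel port numbers are assigned in blocks of four, in slot order 1, 2, 5, 6, 7, 8. The following sketch illustrates that mapping; the function name is hypothetical and not part of the system software.

```python
# Order in which adapter slots take Fibre Channel port numbers (per Table 5).
FC_SLOT_ORDER = [1, 2, 5, 6, 7, 8]

def fc_port_number(slot: int, port_on_card: int) -> int:
    """Map (adapter slot, port position 1-4 on a quad-port Fibre Channel
    card, counted top to bottom) to the system-wide FC port number."""
    return 4 * FC_SLOT_ORDER.index(slot) + port_on_card

# Spot-check against Table 5:
assert fc_port_number(1, 1) == 1   # slot 1, top port
assert fc_port_number(7, 4) == 20  # slot 7, bottom port
assert fc_port_number(8, 1) == 21  # slot 8, top port
```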


Memory configurations

IBM® Storage FlashSystem 9500 supports up to twenty-four 64 GB DIMMs per node, with three memory configurations supported.
Table 6. Memory configurations
Configuration | Feature code | DIMMs per node | Memory per node | Best practice recommendation
Base | ACGM | 8 | 512 GB | Base configuration; ideal for fewer than 16 drives and one network adapter with modest IOPS requirements
Upgrade 1 | ACGN | 16 | 1024 GB | Ideal for cache-heavy I/O workloads, more than 16 drives with more than one adapter cage, and/or DRP or deduplication workloads
Upgrade 2 | ACGP | 24 | 1536 GB | Ideal for cache-heavy I/O workloads and DRP or deduplication workloads

For more details on the adapters, see the following pages: