Node canisters
Canisters are replaceable hardware units that are subcomponents of enclosures.
A node canister provides host interfaces, management interfaces, and interfaces to the control enclosure. The node canister in the left-hand enclosure bay is identified as canister 1. The node canister in the right-hand bay is identified as canister 2. A node canister has cache memory, internal drives to store software and logs, and the processing power to run the system's virtualizing and management software. A node canister also contains batteries that help to protect the system against data loss if a power outage occurs.
The node canisters in an enclosure combine to form a cluster, presenting as a single redundant system with a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface to enable some hardware service actions.
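The split between the cluster and the configuration-node role can be pictured with a short model. The following Python sketch is purely illustrative (it is not the system software, and the class and node names are invented for the example); it shows only that the role is held by one node at a time and moves to a surviving node if the current configuration node fails.

```python
# Illustrative model of the configuration-node role (not the system software).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    online: bool = True

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes
        self.config_node = nodes[0]        # any node can take the role

    def fail(self, node):
        node.online = False
        if node is self.config_node:
            # A new configuration node is selected from the remaining online nodes.
            self.config_node = next(n for n in self.nodes if n.online)

cluster = Cluster([Node("canister 1"), Node("canister 2")])
cluster.fail(cluster.config_node)
print(cluster.config_node.name)            # -> canister 2
```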
Information about the canister can be found in the management GUI.

Boot drive and TPM
Each node canister has an internal boot drive, which holds the system software and associated logs and diagnostics. The boot drive is also used to save the system state and cache data if there is an unexpected power loss to the system or canister. The boot drive is not a replaceable part.
The system supports hardware root of trust and secure boot operations, which protect against unauthorized physical access to the hardware and prevent malicious software from running on the system.
The system provides secure boot by pairing the boot drive with the Trusted Platform Module (TPM). The TPM provides a secure cryptographic processor that performs verification of hardware and prevents unauthorized access to hardware and the operating system. The TPM protects secure boot to ensure that the installed code images are signed, trusted, and unchanged.
As the system boots, the TPM acquires hash values from each part of the boot process (software and configuration settings) in a procedure that is known as measuring. If the measured hash values match the expected values, the TPM secures and locks this information; this process is known as sealing information into the TPM. After the information is sealed within the TPM, it can be unsealed only if the boot process produces the correct hash values. The TPM verifies each of these hash values during a boot operation and unlocks the operating system only when the values are correct.
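The measuring and sealing flow can be pictured as a chain of hash extensions. The sketch below is a simplified Python illustration, not the product firmware or a TPM API: the component names and the SHA-256 extend scheme are assumptions made for the example, and it only shows why a sealed value is released when, and only when, the boot reproduces the expected measurements.

```python
# Simplified illustration of measured boot with sealing (not the actual firmware).
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    """Fold one boot component's hash into the running measurement."""
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure_boot(components) -> bytes:
    measurement = b"\x00" * 32                  # initial measurement value
    for component in components:
        measurement = extend(measurement, component)
    return measurement

trusted_boot = [b"bootloader", b"system software", b"configuration settings"]
sealed_value = measure_boot(trusted_boot)       # value "sealed" into the TPM

# Unsealing succeeds only when the boot reproduces the same measurements.
assert measure_boot(trusted_boot) == sealed_value
tampered_boot = [b"bootloader", b"modified software", b"configuration settings"]
assert measure_boot(tampered_boot) != sealed_value
```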
Batteries
Each node canister contains a battery, which provides power to the canister if there is an unexpected power loss. This allows the canister to safely save system state and cached data.
Node canister indicators
A node canister has several LED indicators, which convey information about the current state of the node.
Node canister ports
| Logical port name | Connection and Speed | Function |
|---|---|---|
| Ethernet port 2 | SFP, 10 Gbps or 25 Gbps | Secondary management IP (optional); host I/O (iSCSI, NVMe/TCP); Ethernet replication (using TCP) |
| Ethernet port 3 | SFP, 10 Gbps or 25 Gbps | Host I/O (iSCSI, NVMe/TCP); Ethernet replication (using TCP) |
| Ethernet port 1 | RJ45 copper, 1 Gbps | Primary management IP; service IP |
| Technician port | RJ45 copper, 1 Gbps | DHCP; direct service management |
| USB port | USB type A | Encryption key storage; diagnostics collection; may be disabled |
The SFP ports (Ethernet ports 2 and 3) support the following transceivers and cables:
- Optical 25 GbE SFP28 (IBM feature code ACHP)
- Optical 10 GbE SFP+ (IBM feature code ACHQ)
- Copper 10 GbE RJ45 SFP (IBM feature code ACJ2)
- Direct Attach Copper (DAC) cable – up to 25 metres (customer supplied)
Technician port
The technician port is a designated 1 Gbps Ethernet port on the back panel of the node canister that is used to initialize a system or configure the node canister. The technician port can also access the management GUI and CLI if the other access methods are not available.
Adapter cards
Each canister contains two slots for network adapter cards. Each card fits into a cage assembly that contains an interposer to allow the card to be connected to the canister main board. In the system software, adapter card slots are numbered from left to right (1 and 2).
| Valid cards per slot | Supported protocols/uses |
|---|---|
| Adapter Slot 1 | |
| Empty | - |
| Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; replication; communication between systems |
| Quad-port 10 Gbps Ethernet | Host I/O that uses iSCSI or NVMe/TCP; replication over RDMA or TCP; communication between systems |
| Dual-port 64 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; replication; communication between systems |
| Dual-port 25 Gbps Ethernet (iWARP) | Host I/O that uses iSCSI; replication over RDMA or TCP; communication between systems |
| Adapter Slot 2 | |
| Empty | - |
| Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; replication; communication between systems |
| Dual-port 64 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; replication; communication between systems |
| Quad-port 10 Gbps Ethernet | Host I/O that uses iSCSI or NVMe/TCP; replication over RDMA or TCP; communication between systems |
| Dual-port 12 Gbps SAS Expansion | Connection to SAS expansion enclosures |
| Dual-port 25 Gbps Ethernet (iWARP) | Host I/O that uses iSCSI; replication over RDMA or TCP; communication between systems |
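Because the set of valid cards differs between the two slots (for example, the SAS expansion adapter is listed only for slot 2), a configuration check can be expressed as a simple lookup. The Python sketch below is illustrative only: it encodes the table above as data, and the function name and error handling are invented for the example.

```python
# Illustrative check against the adapter-card table above (not product software).
VALID_CARDS = {
    1: {"Empty",
        "Quad-port 32 Gbps Fibre Channel",
        "Quad-port 10 Gbps Ethernet",
        "Dual-port 64 Gbps Fibre Channel",
        "Dual-port 25 Gbps Ethernet (iWARP)"},
    2: {"Empty",
        "Quad-port 32 Gbps Fibre Channel",
        "Dual-port 64 Gbps Fibre Channel",
        "Quad-port 10 Gbps Ethernet",
        "Dual-port 12 Gbps SAS Expansion",
        "Dual-port 25 Gbps Ethernet (iWARP)"},
}

def check_slot(slot: int, card: str) -> None:
    """Raise an error if the named card is not listed for the given slot."""
    if card not in VALID_CARDS[slot]:
        raise ValueError(f"{card} is not supported in adapter slot {slot}")

check_slot(2, "Dual-port 12 Gbps SAS Expansion")       # accepted: listed under slot 2
try:
    check_slot(1, "Dual-port 12 Gbps SAS Expansion")   # rejected: not listed under slot 1
except ValueError as err:
    print(err)
```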
Port numbering
For each adapter card, ports are numbered from left to right, across adapter 1 first and then adapter 2. Fibre Channel port numbering starts at 1 with the leftmost port on the first adapter and continues sequentially across any additional adapters. Ethernet port numbering starts with the on-board ports (1 - 3) and then continues across any installed adapter cards, starting with the leftmost slot and numbering across each adapter in turn.
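As a worked example of this rule, the following Python sketch assigns Ethernet port numbers for a hypothetical configuration: the three on-board ports take numbers 1 - 3, and the ports of any installed Ethernet adapters continue from 4, slot 1 before slot 2. It is an illustration of the numbering described above, not output from the system software.

```python
# Illustrative Ethernet port numbering: on-board ports 1-3, then adapter ports
# continue from 4, across slot 1 first and then slot 2.
def ethernet_port_numbers(adapter_port_counts):
    """adapter_port_counts: Ethernet ports on the cards in slots 1 and 2 (0 if none)."""
    numbering = {f"on-board port {i}": i for i in range(1, 4)}
    next_number = 4
    for slot, count in enumerate(adapter_port_counts, start=1):
        for port in range(1, count + 1):
            numbering[f"slot {slot} port {port}"] = next_number
            next_number += 1
    return numbering

# Example: quad-port 10 Gbps Ethernet card in slot 1, dual-port 25 Gbps card in slot 2.
for name, number in ethernet_port_numbers([4, 2]).items():
    print(number, name)        # on-board -> 1-3, slot 1 -> 4-7, slot 2 -> 8-9
```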
Memory configurations
| Configuration | Feature code | DIMMs per node | Memory per node | Best practice guidelines |
|---|---|---|---|---|
| Base 1 (factory installation) | ALG2 | 1x32 GiB | 32 GiB | Cost-optimised for small capacities (<6 drives) or I/O workloads that do not require advanced functions such as Deduplication, vVols, or replication. |
| Base 2 (factory installation) | ALG3 | 2x64 GiB | 128 GiB | Optimised for IOPS workloads or larger capacities (>6 drives). This configuration is the minimum required for advanced software features and for Storage Insights integration without an external data collector. |
| Option 1 (field or factory installation) | ALGE | 4x64 GiB | 256 GiB | Maximum memory bandwidth. Optimised for very high IOPS workloads, in excess of 250,000 IOPS at sub-millisecond latency. |
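As a rough illustration of the best-practice guidelines in this table, the Python sketch below maps drive count, use of advanced functions, and a target IOPS figure to one of the three configurations. The thresholds are taken directly from the table; the function itself is invented for the example and is not a sizing tool.

```python
# Illustrative mapping of the best-practice guidelines to a memory configuration.
def pick_memory_config(drives: int, advanced_functions: bool, target_iops: int) -> str:
    """Return a configuration name from the table above (not a sizing tool)."""
    if target_iops > 250_000:
        return "Option 1 (ALGE): 4x64 GiB, 256 GiB per node"
    if advanced_functions or drives > 6:
        return "Base 2 (ALG3): 2x64 GiB, 128 GiB per node"
    return "Base 1 (ALG2): 1x32 GiB, 32 GiB per node"

print(pick_memory_config(drives=4, advanced_functions=False, target_iops=50_000))    # Base 1
print(pick_memory_config(drives=12, advanced_functions=True, target_iops=120_000))   # Base 2
print(pick_memory_config(drives=24, advanced_functions=True, target_iops=300_000))   # Option 1
```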
