Node canisters

Canisters are replaceable hardware units that are subcomponents of enclosures.

A node canister provides host interfaces, management interfaces, and interfaces to the control enclosure. The node canister in the left-hand enclosure bay is identified as canister 1. The node canister in the right-hand bay is identified as canister 2. A node canister has cache memory, internal drives to store software and logs, and the processing power to run the system's virtualizing and management software. A node canister also contains a battery that helps to protect the system against data loss if a power outage occurs.

The node canisters in an enclosure combine to form a cluster, presenting as a single redundant system with a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface to enable some hardware service actions.
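Because the configuration node owns the management interfaces, it is often useful to confirm which canister currently holds that role before servicing a node. The following sketch assumes SSH key access for the superuser account and an lsnodecanister CLI view whose delimited output includes name and config_node columns; command names, account names, and column names can vary by platform and software level, so treat this as an illustration rather than a definitive procedure.

# Sketch: list node canisters over SSH and flag the configuration node.
# Assumptions (verify for your system): 'superuser' SSH access to the
# management IP, an 'lsnodecanister' view, and '-delim' support for
# comma-separated output with 'name' and 'config_node' columns.
import subprocess

MGMT_IP = "203.0.113.10"  # hypothetical management IP

def list_node_canisters(host: str) -> list[dict[str, str]]:
    """Return one dict per canister from the delimited lsnodecanister output."""
    lines = subprocess.run(
        ["ssh", f"superuser@{host}", "lsnodecanister", "-delim", ","],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    header = lines[0].split(",")
    return [dict(zip(header, line.split(","))) for line in lines[1:] if line]

if __name__ == "__main__":
    for node in list_node_canisters(MGMT_IP):
        role = "configuration node" if node.get("config_node") == "yes" else "member node"
        print(f"{node.get('name')}: {role}")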

Information about the canister can be found in the management GUI.

Figure 1. IBM FlashSystem 9600 Node Canister – rear view
  •  1  Technician port
  •  2  Ethernet management ports: Port 1 (upper) and Port 2 (lower)
  •  3  Adapter card slots

Boot drive

Each node canister has a pair of internal boot drives, which hold the system software and associated logs and diagnostics. The boot drives are also used to save the system state and cache data if there is an unexpected power loss to the system or canister.

Batteries

Each node canister contains a battery, which provides power to the canister if there is an unexpected power loss. This allows the canister to safely save system state and cached data.

Node canister indicators

A node canister has several LED indicators, which convey information about the current state of the node.

Node canister ports

Each node canister has the following dedicated ports, as shown in Figure 2:
Figure 2. Node canister port locations
Table 1. Node canister ports
Port marking | Logical port name | Connection and speed | Function
– | USB port 1 | USB Type-A | Encryption key storage; diagnostics collection. May be disabled.
– | Technician port | RJ45 copper, 1 Gbps | Direct service management (DHCP)
– | Serial port | USB Type-C | Service port; disabled for security
– | Display port | Mini DisplayPort | Disabled for security
None | Ethernet port 1 | RJ45 copper, 10 Gbps | Primary Management IP; Service IP
None | Ethernet port 2 | RJ45 copper, 10 Gbps | Secondary Management IP (optional)

Technician port

The technician port is a 1 Gbps Ethernet port that is used to initialize a system or configure the node canister. The technician port can also be used to access the management GUI and CLI if the other access methods are not available.
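As a quick sanity check after cabling a laptop (set to obtain an address by DHCP) to the technician port, the sketch below simply verifies that the service interface answers on HTTPS. The address shown is hypothetical; substitute whatever gateway address your DHCP lease reports. Nothing here is a documented tool for this system.

# Minimal connectivity check after connecting to the technician port.
# Assumption: the node canister serves DHCP on this link and answers HTTPS
# on the lease's gateway address (hypothetical value below).
import socket

NODE_ADDRESS = "192.168.0.1"   # hypothetical; use the gateway address from your DHCP lease

def service_interface_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the service interface succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if service_interface_reachable(NODE_ADDRESS):
        print(f"https://{NODE_ADDRESS} is reachable; open it in a browser to continue setup.")
    else:
        print("No response; check the cabling and that the laptop received a DHCP lease.")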

Adapter cards

Each canister contains four slots for network adapter cards. In the system software, adapter card slots are numbered from 1 to 4, from left to right. The adapters that provide the technician port and the management Ethernet ports are part of the node canister and do not have adapter slot numbers.

Each node canister supports the following combinations of network adapters:
Table 2. Adapters and supported protocols
Adapter slot | Valid cards | Supported protocols/uses
1 | Quad-port 64 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
1 | Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
1 | Quad-port 25 Gbps or 10 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over TCP; Communication between systems
1 | Dual-port 40 Gbps or 100 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over RDMA or TCP; Communication between systems
2 | Empty | -
2 | Quad-port 64 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
2 | Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
2 | Quad-port 25 Gbps or 10 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over TCP; Communication between systems
2 | Dual-port 40 Gbps or 100 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over RDMA or TCP; Communication between systems
3 | Empty | -
3 | Quad-port 64 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
3 | Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
3 | Quad-port 25 Gbps or 10 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over TCP; Communication between systems
3 | Dual-port 40 Gbps or 100 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over RDMA or TCP; Communication between systems
4 | Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; Replication; Communication between systems
4 | Quad-port 25 Gbps or 10 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over TCP; Communication between systems
4 | Dual-port 40 Gbps or 100 Gbps Ethernet adapter | Host I/O that uses iSCSI or NVMe/TCP; Replication over RDMA or TCP; Communication between systems

Port numbering

For each adapter card, ports are numbered from top to bottom. Fibre Channel cards have a fixed relationship between port number and slot number, as shown in Table 3.

Table 3. Fibre Channel port numbering
Adapter slot | Fibre Channel port numbers
1 | 1, 2, 3, 4
2 | 5, 6, 7, 8
3 | 9, 10, 11, 12
4 | 13, 14, 15, 16

Ethernet port numbering starts with the node canister management ports (1, 2) and then progresses incrementally across any installed adapter cards, starting with the leftmost slot and numbering down each installed adapter in turn.
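The fixed relationship in Table 3 amounts to simple arithmetic: a Fibre Channel card in slot s exposes ports (s - 1) × 4 + 1 through (s - 1) × 4 + 4. The sketch below only illustrates the numbering rules described above (it is not a system API), deriving the Fibre Channel numbers for a slot and the Ethernet numbers for a hypothetical mix of installed Ethernet cards.

# Illustration of the port-numbering rules described above; not a system API.
def fc_port_numbers(slot: int, ports_per_card: int = 4) -> list[int]:
    """Fibre Channel port numbers for a card in the given slot (1-4)."""
    first = (slot - 1) * ports_per_card + 1
    return list(range(first, first + ports_per_card))

def ethernet_port_numbers(installed: dict[int, int]) -> dict[str, int]:
    """Map 'slotN:portM' labels to Ethernet port numbers.

    installed: adapter slot number -> port count of the Ethernet card in that slot.
    The onboard management ports take numbers 1 and 2; installed adapters are
    then walked left to right, numbering each card's ports top to bottom.
    """
    numbering = {"onboard:port1": 1, "onboard:port2": 2}
    next_number = 3
    for slot in sorted(installed):
        for position in range(1, installed[slot] + 1):
            numbering[f"slot{slot}:port{position}"] = next_number
            next_number += 1
    return numbering

if __name__ == "__main__":
    print(fc_port_numbers(3))                         # [9, 10, 11, 12]
    # Hypothetical layout: quad-port Ethernet card in slot 1, dual-port card in slot 4.
    print(ethernet_port_numbers({1: 4, 4: 2}))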



Memory configurations

IBM® FlashSystem 9600 supports up to twelve 128 GB DIMMs per node, in two supported memory configurations.
Table 4. Memory configurations
Configuration | Feature code | DIMMs per node | Memory per node | Best practice recommendation
Base (factory installation) | - | 6 | 768 GB | Base configuration; ideal for fewer than 12 drives and 1 network adapter with modest IOPS requirements.
Upgrade (factory or field installation) | ALGH | 12 | 1536 GB | Recommended for cache-heavy I/O workloads.
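Both capacities follow directly from the 128 GB DIMM size: 6 × 128 GB = 768 GB and 12 × 128 GB = 1536 GB. The snippet below is only a quick arithmetic check of Table 4; it does not query any hardware.

# Arithmetic check of the memory configurations in Table 4 (illustrative only).
DIMM_SIZE_GB = 128
for config, dimms_per_node in {"Base": 6, "Upgrade (ALGH)": 12}.items():
    print(f"{config}: {dimms_per_node} x {DIMM_SIZE_GB} GB = {dimms_per_node * DIMM_SIZE_GB} GB per node")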

For more details on the adapters, see the related pages for each adapter type.