Node canisters

Canisters are replaceable hardware units that are subcomponents of enclosures.

A node canister provides host interfaces, management interfaces, and interfaces to the control enclosure. The node canister in the upper enclosure bay is identified as canister 1. The node canister in the lower bay is identified as canister 2. A node canister has cache memory, internal drives to store software and logs, and the processing power to run the system's virtualizing and management software. A node canister also contains batteries that help to protect the system against data loss if a power outage occurs.

The node canisters in an enclosure combine to form a cluster, presenting as a single redundant system with a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command line interface and web interface to enable some hardware service actions.

Information about the canister can be found in the management GUI.
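
The configuration node also exposes the system CLI over SSH, so a script can check which canister currently holds the role. The following is a minimal sketch, assuming the third-party paramiko library, placeholder address and credentials, and that the lsnodecanister view on your code level includes a config_node column.

```python
# A minimal sketch (not product code): identify which node canister currently
# holds the configuration node role by running lsnodecanister over the SSH CLI.
# Assumes the paramiko library, placeholder host/credentials, and that the
# lsnodecanister view on your code level includes a config_node column.
import paramiko

MGMT_IP = "192.0.2.10"            # placeholder management IP
CLI_USER = "superuser"            # placeholder CLI user
KEY_FILE = "/path/to/ssh_key"     # placeholder SSH private key

def list_canisters(host: str, user: str, key_file: str) -> list[dict]:
    """Run lsnodecanister and return one dict per canister."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)
    try:
        # -delim : gives colon-separated output that is easy to split
        _, stdout, _ = client.exec_command("lsnodecanister -delim :")
        lines = stdout.read().decode().splitlines()
    finally:
        client.close()
    header = lines[0].split(":")
    return [dict(zip(header, line.split(":"))) for line in lines[1:]]

if __name__ == "__main__":
    for canister in list_canisters(MGMT_IP, CLI_USER, KEY_FILE):
        role = "configuration node" if canister.get("config_node") == "yes" else "member"
        print(f"{canister.get('name')}: status={canister.get('status')}, role={role}")
```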

Figure 1. Rear view of the control enclosure
Note: Because of the system design of IBM® Storage FlashSystem 9100, the top node canister is inserted upside down above the bottom canister. As a result, ports and slots in the top canister are numbered right to left (as seen from the back of the system), while ports and slots in the bottom canister are numbered left to right.

Boot drive

Each node canister has an internal boot drive, which holds the system software and associated logs and diagnostics. The boot drive is also used to save the system state and cache data if there is an unexpected power loss to the system or canister.

Batteries

Each node canister contains a battery backup unit, which provides power to the canister if there is an unexpected power loss. This allows the canister to safely save system state and cached data.

Node canister indicators

A node canister has several LED indicators, which convey information about the current state of the node.

Node canister ports

Each node canister has the following on-board ports:
Table 1. Node canister ports
Port marking | Logical port name | Connection and speed | Function
1 | Ethernet port 2 | RJ45 copper, 10 Gbps | Management IP, service IP, host I/O (iSCSI)
2 | Ethernet port 3 | RJ45 copper, 10 Gbps | Secondary management IP, host I/O (iSCSI)
3 | Ethernet port 1 | RJ45 copper, 10 Gbps | Host I/O (iSCSI)
4 | Ethernet port 4 | RJ45 copper, 10 Gbps | Host I/O (iSCSI)
Technician port | - | RJ45 copper, 1 Gbps | DHCP; direct service management
USB port 1 | - | USB type A | Encryption key storage, diagnostics collection; may be disabled
USB port 2 | - | USB type A | Encryption key storage, diagnostics collection; may be disabled
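
For scripting against the port assignments above, Table 1 can be captured as a small data structure, as in the sketch below; the dictionary layout and helper function are illustrative assumptions rather than any product interface.

```python
# Illustrative only: Table 1 captured as a small lookup structure so that
# scripts can map an on-board port to its logical name and intended use.
# The dictionary layout is an assumption for this sketch, not a product API.
ONBOARD_PORTS = {
    "1": {"logical": "Ethernet port 2", "link": "RJ45 copper, 10 Gbps",
          "functions": ["Management IP", "Service IP", "Host I/O (iSCSI)"]},
    "2": {"logical": "Ethernet port 3", "link": "RJ45 copper, 10 Gbps",
          "functions": ["Secondary management IP", "Host I/O (iSCSI)"]},
    "3": {"logical": "Ethernet port 1", "link": "RJ45 copper, 10 Gbps",
          "functions": ["Host I/O (iSCSI)"]},
    "4": {"logical": "Ethernet port 4", "link": "RJ45 copper, 10 Gbps",
          "functions": ["Host I/O (iSCSI)"]},
    "Technician": {"logical": "Technician port", "link": "RJ45 copper, 1 Gbps",
                   "functions": ["DHCP; direct service management"]},
}

def describe(marking: str) -> str:
    """Return a one-line summary of an on-board port, keyed by its marking."""
    port = ONBOARD_PORTS[marking]
    return f"{port['logical']} ({port['link']}): {', '.join(port['functions'])}"

print(describe("1"))  # Ethernet port 2 (RJ45 copper, 10 Gbps): Management IP, Service IP, Host I/O (iSCSI)
```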

Technician port

The technician port is a designated 1 Gbps Ethernet port on the back panel of the node canister that is used to initialize a system or configure the node canister. The technician port also provides access to the management GUI and CLI when the other access methods are not available.
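
As a rough check after cabling a laptop to the technician port, a short script can confirm that the node's service web interface answers. This is only a sketch using the Python standard library; the node address shown is a placeholder assumption, and the laptop is expected to obtain its own address over DHCP as noted in Table 1.

```python
# Minimal sketch: after connecting a laptop to the technician port, probe
# whether the node's service/initialization web server accepts a TCP
# connection. NODE_ADDR is a placeholder assumption; use the address
# documented for your system. Standard library only.
import socket

NODE_ADDR = "192.168.0.1"   # placeholder: node address on the technician-port subnet
HTTPS_PORT = 443

def service_gui_reachable(addr: str, port: int = HTTPS_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the service web server succeeds."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if service_gui_reachable(NODE_ADDR) else "not reachable"
    print(f"Technician port web interface at {NODE_ADDR}: {state}")
```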

Adapter cards

Each canister contains three slots for network adapter cards. Each card fits into a cage assembly that contains an interposer, which connects the card to the canister main board. In the system software, adapter card slots are numbered from 1 to 3 (left to right for the lower canister).

Each node canister supports the following combinations of network adapters:
Table 2. Adapters and supported protocols
Valid cards per slot | Supported protocols/uses
Adapter slots 1, 2, and 3:
Empty | -
Quad-port 16 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; replication; clustering between systems
Quad-port 32 Gbps Fibre Channel | Host I/O that uses FC or FC-NVMe; replication; clustering between systems
Dual-port 25 Gbps Ethernet (iWARP) | Host I/O that uses iSCSI; replication; clustering between systems
Dual-port 25 Gbps Ethernet (RoCE) | Host I/O that uses iSCSI, RoCE, or NVMe/TCP
Adapter slot 3 only:
Dual-port 12 Gbps SAS expansion | Connection to SAS expansion enclosures
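
When planning adapter placement, the slot rules in Table 2 can be expressed as a small validation helper. The sketch below uses shorthand card names and an illustrative data structure; it is not a product interface.

```python
# Minimal sketch: validate a proposed adapter layout against the supported
# combinations in Table 2. Card names are shorthand for the table's labels;
# the slot/card data structure is illustrative only.
SUPPORTED_CARDS = {
    1: {"empty", "4x16Gb FC", "4x32Gb FC", "2x25Gb iWARP", "2x25Gb RoCE"},
    2: {"empty", "4x16Gb FC", "4x32Gb FC", "2x25Gb iWARP", "2x25Gb RoCE"},
    3: {"empty", "4x16Gb FC", "4x32Gb FC", "2x25Gb iWARP", "2x25Gb RoCE",
        "2x12Gb SAS expansion"},
}

def validate_layout(layout: dict[int, str]) -> list[str]:
    """Return a list of problems for a proposed {slot: card} layout."""
    problems = []
    for slot, card in layout.items():
        if slot not in SUPPORTED_CARDS:
            problems.append(f"slot {slot} does not exist (valid slots: 1-3)")
        elif card not in SUPPORTED_CARDS[slot]:
            problems.append(f"{card!r} is not supported in slot {slot}")
    return problems

# Example: SAS expansion is only valid in slot 3, so this layout passes.
print(validate_layout({1: "4x32Gb FC", 2: "2x25Gb RoCE", 3: "2x12Gb SAS expansion"}) or "valid")
```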

Memory configurations

IBM Storage FlashSystem 9100 supports up to twenty-four 32 GB DIMMs per node, in three supported memory configurations.
Table 3. Memory configuration
Configuration | Feature code | DIMMs per node | Memory per node | Best practice recommendation
Base | ACGM | 4 | 128 GB | Base configuration; ideal for fewer than 12 drives and one network adapter with modest IOPS requirements
Upgrade 1 | ACGJ | 12 | 384 GB | Recommended for best IOPS and latency, more than 12 drives with more than one adapter, or DRP/deduplication workloads
Upgrade 2 | ACGB | 24 | 768 GB | Recommended for cache-heavy I/O workloads and DRP/deduplication workloads
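
Because every configuration in Table 3 uses 32 GB DIMMs, memory per node is simply the DIMM count multiplied by 32 GB. The short snippet below reproduces the table's figures; the feature codes are taken from the table.

```python
# Minimal sketch: derive memory per node canister from Table 3 (all
# configurations use 32 GB DIMMs). Feature codes are taken from the table.
DIMM_SIZE_GB = 32
CONFIGS = {"ACGM": 4, "ACGJ": 12, "ACGB": 24}   # feature code -> DIMMs per node

for feature, dimms in CONFIGS.items():
    print(f"{feature}: {dimms} DIMMs x {DIMM_SIZE_GB} GB = {dimms * DIMM_SIZE_GB} GB per node")
```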

For more details on the adapters, see the following pages: