PCIe3 2-port 100 GbE NIC & RoCE QSFP28 Adapter (FC EC3L and EC3M; CCIN 2CEC)

Learn about the specifications and operating system requirements for the feature code (FC) EC3L and EC3M adapter.

Overview

FC EC3L and EC3M are the same adapter with different tailstock brackets: FC EC3L is a low-profile adapter and FC EC3M is a full-height adapter.

The PCIe3 2-port 100 GbE NIC & RoCE QSFP28 Adapter is a PCI Express (PCIe) generation 3 (Gen3), x16 adapter. The adapter provides two 100 Gb QSFP28 ports. The PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter supports both NIC (Network Interface Controller) and IBTA RoCE standards. RoCE is Remote Direct Memory Access (RDMA) over Converged Ethernet. Using RoCE, the adapter can support significantly greater bandwidth with low latency. It also minimizes CPU overhead by more efficiently using memory access. This offloads the CPU from I/O networking tasks, improving performance and scalability.
Note: The 100 Gb maximum per port assumes that no other system or switch bottlenecks are present. The adapter provides full bandwidth for a single port in a PCIe3 slot, and up to 128 Gb/s, minus overheads, across both ports.
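The per-slot ceiling in the note can be sanity-checked from standard PCIe Gen3 figures. This is a minimal sketch: the 8 GT/s lane rate and 128b/130b encoding are PCIe 3.0 specification values, not adapter-specific data.

```python
# Why two 100 GbE ports cannot both run at line rate in one PCIe Gen3 x16 slot.
LANES = 16                 # x16 slot
RAW_GT_PER_LANE = 8.0      # PCIe Gen3 signaling rate, GT/s per lane
ENCODING = 128 / 130       # 128b/130b line-encoding efficiency

raw_gbps = LANES * RAW_GT_PER_LANE     # 128 Gb/s raw, per direction
usable_gbps = raw_gbps * ENCODING      # ~126 Gb/s before packet/protocol overhead

print(f"raw: {raw_gbps:.0f} Gb/s, after encoding: {usable_gbps:.1f} Gb/s")
```

The raw x16 figure matches the 128 Gb/s ceiling quoted in the note; line encoding alone trims it to about 126 Gb/s before TLP/DLLP protocol overhead, which is why both ports cannot simultaneously sustain 100 Gb/s.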
Figure 1. PCIe3 2-port 100 GbE NIC & RoCE QSFP28 Adapter

Specifications

Adapter FRU number: 00WT078
I/O bus architecture: PCIe3 x16
Slot requirement: For details about slot priorities, maximums, and placement rules, see PCIe adapter placement rules and slot priorities and select the system you are working on.
Voltage: 3.3 V
Form factor: Short, low-profile (FC EC3L); short, with full-height tailstock (FC EC3M)
Cables: For 100G, IBM® offers either Direct Attach Copper (DAC) cables up to 2 M or Active Optical Cables (AOC) up to 100 M. QSFP28-based transceivers are included on each end of these cables. For more information about adapter cabling, see the Cable and Transceiver Matrix.
Note: For 40G, IBM® offers DAC cables up to 5 M. QSFP+-based transceivers are included on each end of these cables. See FC EB2B, EB2H, and ECBN for 1 M, 3 M, and 5 M copper cables.
Transceivers: IBM qualifies and supports the QSFP28 optical transceiver (FC EB59) for installation in the adapter. Customers can also use their own optical cabling and QSFP28 optical transceiver for the other end. FC EB59 is a 100GBASE-SR4 active optical transceiver capable of up to 100 M over OM4 cable or 70 M over OM3 cable. Either one or both of the adapter's two QSFP28 ports can be populated; when both ports are filled, any combination of copper and optical cables can be used. IBM® also offers a QSFP+ optical transceiver (FC EB27) for the adapter, allowing the customer to use their own optical cabling and QSFP+ optical transceiver for the other end.
Cable and Transceiver Matrix
Feature code Description
EB59 100GBASE-SR4 Optical Transceiver MTP/MPO cable (purchased separately)
  • FC EB2J - 10 M
  • FC EB2K - 30 M
EB5J QSFP28 Passive Copper 100 Gb Ethernet Cable - .5 M
EB5K QSFP28 Passive Copper 100 Gb Ethernet Cable - 1 M
EB5L QSFP28 Passive Copper 100 Gb Ethernet Cable - 1.5 M
EB5M QSFP28 Passive Copper 100 Gb Ethernet Cable - 2 M
EB5R QSFP28 AOC 100 Gb Ethernet Cable - 3 M
EB5S QSFP28 AOC 100 Gb Ethernet Cable - 5 M
EB5T QSFP28 AOC 100 Gb Ethernet Cable - 10 M
EB5U QSFP28 AOC 100 Gb Ethernet Cable - 15 M
EB5V QSFP28 AOC 100 Gb Ethernet Cable - 20 M
EB5W QSFP28 AOC 100 Gb Ethernet Cable - 30 M
EB5X QSFP28 AOC 100 Gb Ethernet Cable - 50 M
EB5Y QSFP28 AOC 100 Gb Ethernet Cable - 100 M
EB2B 1 M Passive QSFP+ to QSFP+
EB2H 3 M Passive QSFP+ to QSFP+
ECBN 5 M Passive QSFP+ to QSFP+
EB27 QSFP+ 40G BASE-SR transceiver
Attributes provided
  • The adapter is based on the Mellanox ConnectX-4 adapter, which uses the ConnectX-4 EN Network Controller.
  • Ethernet only; operates in NIC (Ethernet) or RoCE mode
  • PCIe3 compliant (1.1 and 2.0 compatible)
  • RDMA over Converged Ethernet (RoCE)
  • NIC and RoCE are concurrently supported
  • RoCE supported on Linux and AIX (7.2 and later)
  • NIC supported on all supported operating systems
  • TCP/UDP/IP stateless offload
  • LSO, LRO, and checksum offload
  • NIM boot support
  • Backward compatible with 40 Gb Ethernet when using compatible cables and transceivers
  • Improves performance and scalability by offloading I/O networking tasks from the CPU
  • Minimizes CPU overhead by more efficiently using memory access
  • PowerVM® SR-IOV support. For more information, see PowerVM SR-IOV FAQs.

Operating system or partition requirements

If you are installing a new feature, ensure that you have the software that is required to support the new feature, and determine any prerequisites that must be met for the feature and the attached devices. For information about operating system and partition requirements, see one of the following topics: