4 Gigabit PCI Express Dual Port Fibre Channel Adapter (FC 5774; CCIN 5774)
Learn about the specifications and operating system requirements for the feature code (FC) 5774 adapter.
Overview
The 4 Gigabit PCI Express Dual Port Fibre Channel Adapter is a 64-bit, short form factor, x4 PCIe adapter with an LC-type external fiber connector that provides single initiator capability over an optical fiber link or loop. The adapter automatically negotiates the highest data rate (1 Gbps, 2 Gbps, or 4 Gbps) of which both the adapter and the attached device or switch are capable. Distances between the adapter and an attached device or switch can reach up to 500 meters at the 1 Gbps data rate, up to 300 meters at the 2 Gbps data rate, and up to 150 meters at the 4 Gbps data rate. When used with IBM® Fibre Channel storage switches that support long-wave optics, the adapter can reach distances of up to 10 kilometers at the 1 Gbps, 2 Gbps, or 4 Gbps data rate.
The adapter can be used to attach devices either directly, or with Fibre Channel switches. If you are attaching a device or switch with an SC-type fiber connector, you must use an LC-SC 50 micron fiber converter cable (FC 2456) or an LC-SC 62.5 micron fiber converter cable (FC 2459).
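Conceptually, the negotiation selects the highest data rate that both ends of the link support. The following Python sketch only illustrates that selection logic; the function and rate sets are hypothetical, and the actual negotiation is performed entirely by the adapter and the attached device or switch.

```python
# Illustrative model of the link-rate auto-negotiation described above.
# Each end advertises the rates it supports; the link comes up at the
# highest rate common to both ends. (Hypothetical helper, not IBM code.)

ADAPTER_RATES_GBPS = (1, 2, 4)  # rates supported by the FC 5774 adapter

def negotiated_rate(device_rates_gbps):
    """Return the highest data rate (in Gbps) supported by both ends,
    or None if the attached device shares no rate with the adapter."""
    common = set(ADAPTER_RATES_GBPS) & set(device_rates_gbps)
    return max(common) if common else None

print(negotiated_rate({1, 2}))     # a 2 Gbps-capable port negotiates to 2
print(negotiated_rate({1, 2, 4}))  # a 4 Gbps-capable port negotiates to 4
```

The FC 5774 adapter provides the following features: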
- Compliant with the PCIe Base and Card Electromechanical (CEM) 1.0a specifications:
- x1 and x4 lane link interface at 2.5 Gbit/s (auto-negotiated with system)
- Supports VC0 (1 Virtual Channel) and TC0 (1 Traffic Class)
- Configuration and I/O memory read/write, completion, and message transactions
- Support for 64-bit addressing
- ECC error protection
- Link CRC on all PCIe packets and message information
- Large payload size: 2048 bytes for read and write
- Large read request size: 4096 bytes
- Compatible with 1, 2, and 4 Gb Fibre Channel interface:
- Auto-negotiate between 1 Gb, 2 Gb or 4 Gb link attachments
- Support for all Fibre Channel topologies: point-to-point, arbitrated loop, and fabric
- Support for Fibre Channel class 2 and 3
- Maximum Fibre Channel throughput achieved by using full duplex hardware support
- End-to-end data path parity and CRC protection, including internal data path RAMs
- Architectural support for multiple upper layer protocols
- Internal high-speed SRAM memory
- ECC protection of local memory, including single-bit correction and double-bit protection
- Embedded shortwave optical connection with diagnostics capability
- Onboard Context Management by firmware (per port):
- Up to 510 FC Port Logins
- Up to 2047 concurrent Exchanges
- I/O multiplexing down to the FC Frame level
- Data buffers capable of supporting 64+ buffer-to-buffer (BB) credits per port for shortwave applications
- Link management and recovery handled by firmware
- Onboard diagnostic capability accessible by optional connection
- Parts and construction compliant with the European Union Restriction of Hazardous Substances (RoHS) Directive
- Performance up to 4.25 Gbps full duplex
The following figure shows the adapter.
Specifications
Item | Description |
---|---|
Adapter FRU number | 10N7255* |
Wrap plug FRU number | 11P3847 |
I/O bus architecture | PCIe Base and CEM 1.0a, x4 PCIe bus interface |
Slot requirement | One available PCIe x4, x8, or x16 slot |
Voltage | 3.3 V |
Form factor | Short, low-profile |
FC compatibility | 1, 2, 4 gigabit |
Cables | 50/125 micron fiber (500 MHz*km bandwidth cable): 1.0625 Gbps, 0.5 – 500 m; 2.125 Gbps, 0.5 – 300 m; 4.25 Gbps, 0.5 – 150 m. 62.5/125 micron fiber (200 MHz*km bandwidth cable): 1.0625 Gbps, 0.5 – 300 m; 2.125 Gbps, 0.5 – 150 m; 4.25 Gbps, 0.5 – 70 m |
Maximum number | For the maximum adapters supported, see the PCI adapter placement topic collection for your system. |
* Designed to comply with RoHS requirement
For details about slot priorities and placement rules, see the PCI adapter placement topic collection for your system.
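As a planning aid, the shortwave cable limits from the preceding table can be expressed as a simple lookup. The following Python sketch is a hypothetical helper (the dictionary and function names are not from any IBM tool); the distance values are taken directly from the Specifications table.

```python
# Hypothetical planning helper: maximum supported cable length in meters,
# by fiber type and link rate, from the Specifications table above.
MAX_LENGTH_M = {
    # 50/125 micron fiber (500 MHz*km bandwidth cable)
    ("50/125", 1.0625): 500,
    ("50/125", 2.125): 300,
    ("50/125", 4.25): 150,
    # 62.5/125 micron fiber (200 MHz*km bandwidth cable)
    ("62.5/125", 1.0625): 300,
    ("62.5/125", 2.125): 150,
    ("62.5/125", 4.25): 70,
}

def cable_run_is_supported(fiber, rate_gbps, length_m):
    """Check a planned cable run against the table; all runs must also be
    at least 0.5 m long."""
    return 0.5 <= length_m <= MAX_LENGTH_M[(fiber, rate_gbps)]

print(cable_run_is_supported("50/125", 4.25, 120))     # True: within 150 m
print(cable_run_is_supported("62.5/125", 2.125, 200))  # False: exceeds 150 m
```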
Operating system or partition requirements
If you are installing a new feature, ensure that you have the software required to support the new feature and determine whether there are any prerequisites for this feature and attaching devices. To check for the prerequisites, see the IBM Prerequisite website.
- AIX®
- AIX 7.1, or later
- AIX 6.1, or later
- AIX 5.3, or later
- Linux
- Red Hat Enterprise Linux 5.6 for POWER®, or later
- SUSE Linux Enterprise Server 11 Service Pack 1, or later
- IBM i
- IBM i 7.1, or later
- IBM i 6.1, or later
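The minimum levels listed above can also be summarized for a quick scripted check. The sketch below is a hypothetical convenience that compares simple "major.minor" version strings; it is not an IBM tool, and the IBM Prerequisite website remains the authoritative source.

```python
# Hypothetical summary of the minimum operating system levels listed above.
MINIMUM_LEVELS = {
    "AIX": (5, 3),    # AIX 5.3, 6.1, or 7.1, or later
    "RHEL": (5, 6),   # Red Hat Enterprise Linux 5.6 for POWER, or later
    "SLES": (11, 1),  # SUSE Linux Enterprise Server 11 SP1, or later
    "IBM i": (6, 1),  # IBM i 6.1 or 7.1, or later
}

def meets_minimum(os_name, version_string):
    """Return True if a 'major.minor' version meets the listed minimum."""
    major, minor = (int(part) for part in version_string.split(".")[:2])
    return (major, minor) >= MINIMUM_LEVELS[os_name]

print(meets_minimum("AIX", "7.2"))   # True
print(meets_minimum("RHEL", "5.5"))  # False
```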
Adapter LED states
Green and yellow LEDs are visible through openings in the mounting bracket of the adapter. The green LED indicates firmware operation, and the yellow LED signifies port activity. Table 1 summarizes the normal LED states. There is a 1 Hz pause, with the LED off, between each group of fast flashes (1, 2, or 3). Observe the LED sequence for several seconds to ensure that you correctly identify the state.
Table 1. Normal LED states
Green LED | Yellow LED | State |
---|---|---|
On | 1 fast flash | 1 Gbps link rate - normal, link active |
On | 2 fast flashes | 2 Gbps link rate - normal, link active |
On | 3 fast flashes | 4 Gbps link rate - normal, link active |
Power-on self-test (POST) conditions and results are summarized in Table 2. Use these states to identify abnormal conditions or problems.
Table 2. POST conditions and results
Green LED | Yellow LED | State |
---|---|---|
Off | Off | Wake-up failure (dead board) |
Off | On | POST failure (dead board) |
Off | Slow flash | Wake-up failure monitor |
Off | Fast flash | Failure in POST |
Off | Flashing | POST processing in progress |
On | Off | Failure while functioning |
On | On | Failure while functioning |
Slow flash | Off | Normal, link down |
Slow flash | On | Not defined |
Slow flash | Slow flash | Offline for download |
Slow flash | Fast flash | Restricted offline mode, waiting for restart |
Slow flash | Flashing | Restricted offline mode, test active |
Fast flash | Off | Debug monitor in restricted mode |
Fast flash | On | Not defined |
Fast flash | Slow flash | Debug monitor in test fixture mode |
Fast flash | Fast flash | Debug monitor in remote debug mode |
Fast flash | Flashing | Not defined |
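For quick troubleshooting, the two tables can be folded into a single lookup keyed by the observed green and yellow LED behavior. The following Python sketch is purely illustrative (the state labels and function are hypothetical) and includes only a subset of the Table 2 entries; the tables above remain the authoritative reference.

```python
# Hypothetical lookup mirroring Tables 1 and 2:
# (green LED, yellow LED) observation -> adapter state.
LED_STATES = {
    # Table 1: normal, link-active states
    ("on", "1 fast flash"): "1 Gbps link rate - normal, link active",
    ("on", "2 fast flashes"): "2 Gbps link rate - normal, link active",
    ("on", "3 fast flashes"): "4 Gbps link rate - normal, link active",
    # Table 2: POST conditions and results (subset)
    ("off", "off"): "Wake-up failure (dead board)",
    ("off", "on"): "POST failure (dead board)",
    ("slow flash", "off"): "Normal, link down",
    ("slow flash", "slow flash"): "Offline for download",
    ("fast flash", "fast flash"): "Debug monitor in remote debug mode",
}

def describe_leds(green, yellow):
    """Map an observed LED pair to the state named in the tables above."""
    return LED_STATES.get((green, yellow), "Not listed in this topic")

print(describe_leds("on", "2 fast flashes"))  # 2 Gbps link rate - normal, link active
print(describe_leds("slow flash", "off"))     # Normal, link down
```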
Device ID jumper
The default setting for the two device ID jumpers, labeled P0_JX and P1_JX, is to have the jumpers set on pins 1 and 2, as shown in Figure 2. Do not change the jumper settings for a standard installation.
Replacing hot swap HBAs
Fibre Channel host bus adapters (HBAs) connected to a fibre array storage technology (FAStT) or DS4000® storage subsystem have a child device called a disk array router (dar). You must unconfigure the disk array router before you can hot swap an HBA that is connected to a FAStT or DS4000 storage subsystem. For instructions, see Replacing hot swap HBAs in the IBM System Storage® DS4000 Storage Manager Version 9, Installation and Support Guide for AIX, HP-UX, Solaris, and Linux on Power Systems™ Servers, order number GC26-7848.