SAN Worst Practices 1: ISL R_RDY Mode
seb_
There is a function in FabricOS with a command name so innocuous and innocent-sounding that you might want to enable it without even knowing what it does:
Now doesn't that sound like: "If you plan to attach an ISL to this port, enable me! Enable me and I will prepare this callow port to be a real E-Port."?
Virtual Channels 101
We have talked about virtual channels before, and by "talked about" I mean I wrote about them here and here, and briefly in some other articles. But for the sake of completeness, let me explain what we need here.
Unlike a multiplexed link, an ISL still transports only one signal at a time. So what is partitioned is not really the ISL itself but its buffer credit management. That means there is a distinct buffer credit counter for each virtual channel, which is decreased every time a frame is transmitted on that particular virtual channel. The receiving switch recognizes which virtual channel a frame belongs to. To return buffer credits as soon as it can receive more frames, the switch sends a VC_RDY. This is an ordered set (a 4-byte word) that encodes which virtual channel it carries a buffer credit for.
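The mechanics can be sketched in a few lines of Python. This is a toy model, not Brocade's actual implementation; the class name, VC numbers, and credit counts are illustrative only:

```python
# Toy model of per-virtual-channel buffer credit accounting.
# Not FabricOS code; names and numbers are illustrative.

class VcCreditManager:
    """Tracks transmit buffer credits separately for each virtual channel."""

    def __init__(self, credits_per_vc):
        # e.g. {0: 4, 2: 5, 3: 5, 4: 5, 5: 5} for a classic ISL
        self.credits = dict(credits_per_vc)

    def can_send(self, vc):
        return self.credits.get(vc, 0) > 0

    def send_frame(self, vc):
        # Sending a frame on a VC consumes one of that VC's credits.
        if not self.can_send(vc):
            raise RuntimeError(f"VC{vc}: out of buffer credits, transmission stalls")
        self.credits[vc] -= 1

    def receive_vc_rdy(self, vc):
        # A VC_RDY from the far side returns one credit to exactly one VC.
        self.credits[vc] += 1
```

The point of the model: a credit can only ever be consumed and returned on the VC it belongs to, so one VC running dry leaves the counters of all the others untouched.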
In the classic ISL mentioned above we basically have 8 virtual channels, and only 5 of them are needed for most purposes in an FCP SAN: virtual channels 0 and 2-5. VCs 2-5 are the "data VCs": they carry the user traffic, mainly the I/O between end devices, but also control frames and administrative traffic between end device and switch. Additionally, switches talk to each other in their own service class, Class F, using virtual channel 0. If you change the zoning, the new zoning configuration is distributed through the fabric as Class F traffic. The same goes for devices coming online, or when the principal distributes the current time.
The advantage of virtual channels is that if there is a bottleneck further down the path, for example due to a slow-drain device, not all traffic is blocked, but only the traffic mapped to the same virtual channel as that slow-drain device. The other devices can still talk freely. It is even more important to separate normal traffic from Class F traffic. You don't want your fabric to remain in an inconsistent state because a slow-drain device consumed all the buffer credits. And there is another reason, which I will explain later.
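That isolation property can be demonstrated with a tiny simulation. Again a hedged sketch, not Brocade code; the VC numbers and credit counts are made up:

```python
# Toy simulation: a slow-drain device exhausts the credits of its own
# virtual channel only; flows on other VCs keep moving.
# Not FabricOS code; VC numbers and credit counts are illustrative.

def simulate(frames, credits):
    """frames: list of VC numbers to transmit, in order.
    credits: dict VC -> available buffer credits. For the slow-drain VC
    no VC_RDYs come back, so its credits are never replenished."""
    sent, stalled = [], []
    for vc in frames:
        if credits.get(vc, 0) > 0:
            credits[vc] -= 1
            sent.append(vc)
        else:
            stalled.append(vc)  # blocking hits this VC only
    return sent, stalled

# VC3 leads to a slow-drain device: its 5 credits run out and further
# VC3 frames stall, while the VC2 traffic is completely unaffected.
sent, stalled = simulate([3] * 7 + [2] * 3, {2: 5, 3: 5})
```

Five VC3 frames get through before the stall, the last two VC3 frames wait, and all three VC2 frames are delivered.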
Here you see the buffer credits of a classic ISL in the output of the root-level command portregshow. The VCs count from left to right, beginning with VC0. For this article you can ignore the two credits for VC6 and VC7.
VC0 (transporting Class F) has 4 buffer credits. The data VCs 2-5 have 5 credits each.
You can increase the buffers of an E-Port (which you would then see in the other switch's portregshow) in two ways:
1) With portcfgeportcredits, as described here. This gives you additional buffer credits on each data virtual channel. It would look like this:
All of the data virtual channels now have 20 buffer credits. The 4 buffers for Class F stay the same.
2) With a long distance mode. In that case the data VCs collapse into VC2 only; the rest of the VCs again stay the same.
You see: You might change buffer credits, but it will only affect the data VCs. Class F traffic stays with 4 buffers regardless of the distance. And that's a good thing. Because Class F is a beast.
Normal I/O can be routed over any available ISL, as long as it is on one of the shortest paths according to the FSPF protocol (Fabric Shortest Path First). Class F traffic usually does not do that; it goes over principal ISLs. When the fabric is built, a spanning tree is built as well. It is rooted at the fabric's principal switch and reaches every switch in the fabric exactly once (thus a spanning tree). You can see the involved ISLs in switchshow, marked "upstream" or "downstream" depending on whether their direction is towards or away from the principal.
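To make the spanning-tree idea concrete, here is a minimal sketch of building such a tree from an arbitrary set of ISLs with a breadth-first search. This is an illustration of the concept under assumed inputs, not how FabricOS actually computes it; the function name and topology are mine:

```python
# Illustrative sketch (assumed topology, not FabricOS code): building a
# tree of ISLs rooted at the principal switch, so that every switch is
# reached exactly once, the way Class F traffic is confined to
# "principal ISLs".
from collections import deque

def principal_spanning_tree(links, principal):
    """links: iterable of (switch_a, switch_b) ISLs.
    Returns tree edges oriented (upstream, downstream), i.e. pointing
    away from the principal."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    tree, visited, queue = [], {principal}, deque([principal])
    while queue:
        sw = queue.popleft()
        for nb in neighbors.get(sw, []):
            if nb not in visited:      # reach every switch exactly once
                visited.add(nb)
                tree.append((sw, nb))  # sw is "upstream" of nb
                queue.append(nb)
    return tree
```

With three switches fully meshed and switch 1 as principal, only two of the three ISLs end up in the tree; the third carries normal I/O but no tree traffic in this model.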
Class F has the highest priority (together with the F_RJT and ACK traffic from Class 2), in particular higher than any user traffic. But it is caged in VC0 with its fixed 4 buffers. So no matter how long the distance is, it is always limited to these 4 buffers. It wants to bite the ones outside the cage, but it can't. In the fabric this means it can simply take some time until all the fabric information is distributed, but that's fine: the processes are designed to cope with that.
But what if the beast breaks out of the cage?
ISL R_RDY mode is intended for long distance links that go over legacy SAN extenders or gateways which cannot cope with VC_RDYs, or which only transport the plain frames without any ordered sets at all. If you use ISL R_RDY mode, VC_RDYs cannot be used, and so there are no virtual channels: all virtual channels collapse into one big channel that gets all the buffer credits. In portregshow it looks like this:
The cage is broken, the beast is free. The highly prioritized Class F traffic is no longer limited to 4 buffers but can consume all of them freely. And it's astonishing how much Class F traffic there suddenly is. Especially in situations where something changes in the fabric, user traffic is effectively blocked, and in extreme cases the user traffic waits so long that it is dropped due to timeout. Additionally, a lot of back pressure spreads into other parts of the fabric and causes performance problems even for apparently unrelated devices.
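The failure mode can be sketched as a priority queue draining a single shared credit pool. This is a deliberately simplified toy model, not Brocade's scheduler; the frame names, pool size, and priorities are invented for the illustration:

```python
# Toy model of ISL R_RDY mode: one shared credit pool, and
# higher-priority Class F frames are always served before user frames.
# Not FabricOS code; names and numbers are illustrative.
import heapq

CLASS_F, USER = 0, 1  # lower value = higher priority

def transmit(queue, pool):
    """queue: list of (priority, name) frames; pool: shared credit count.
    Returns the names of the frames that got a credit before the pool
    ran dry (no credits come back in this snapshot)."""
    heapq.heapify(queue)
    sent = []
    while queue and pool > 0:
        _, name = heapq.heappop(queue)
        pool -= 1  # every frame draws from the same shared pool
        sent.append(name)
    return sent

# A burst of fabric-change (Class F) traffic plus pending user I/O:
frames = [(USER, f"io{i}") for i in range(4)] + \
         [(CLASS_F, f"fab{i}") for i in range(6)]
sent = transmit(frames, pool=6)
# All six credits go to the Class F burst; the user I/O is left waiting.
```

With virtual channels, the Class F burst would have been capped at its own 4 credits and the user I/O would have kept its own pool; here it simply starves.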