I don't always write technical blog posts. But when I do, I make them long, and the conclusion contains a request to you, my readers, to do this or that. I won't do that today. Today is about a behavior I observed, but I won't propose anything. Feel free to draw your own conclusions. Well, that might be considered a proposal :o)
This one is about the IBM System Storage SAN06B-R, a multi-protocol router or SAN extension switch. It consists of two ASICs: one handling the Fibre Channel part and one handling FCIP. They also have some extra tasks like FC routing and compression, but for our example it's enough to know that there are two of them, and that SAN traffic transferred over FCIP has to pass through both.
The two ASICs are connected via 5 internal ports, all working at a line rate of 4Gbps. That doesn't sound like much compared to the 16 FC ports running at up to 8Gbps on the front side. But we have to keep in mind that those front-side ports are there for connectivity; only the traffic heading to the FCIP side has to cross the internal links. Given the maximum IP connectivity of 6x 1GbE, the internal connections shouldn't be a bandwidth bottleneck.
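To put those numbers side by side, here's a quick back-of-the-envelope calculation (a Python sketch using only the figures mentioned above):

```python
# Aggregate bandwidths from the numbers above:
internal_gbps = 5 * 4    # 5 internal ports between the ASICs, 4Gbps each
frontside_gbps = 16 * 8  # 16 FC ports on the front side, up to 8Gbps each
ip_gbps = 6 * 1          # maximum IP connectivity: 6x 1GbE

print(f"internal: {internal_gbps} Gbps, FC front side: {frontside_gbps} Gbps, IP: {ip_gbps} Gbps")
# internal: 20 Gbps, FC front side: 128 Gbps, IP: 6 Gbps
# 20Gbps of internal capacity vs. 6Gbps of IP: raw bandwidth is not the problem here.
```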
Internal connections are somewhat similar to external ISLs between switches when it comes to flow control. They use buffer-to-buffer credits ("buffer credits"), and the links are logically partitioned into virtual channels (VCs), each with its own buffer credit counter. These virtual channels prevent head-of-line blocking in case of back pressure (for example due to slow-drain devices on the other side of the FCIP connections).
When it comes to buffer credits, it's important how they are assigned to these virtual channels. On these internal connections each VC gets 1 dedicated buffer credit, but it can borrow up to 3 more out of a pool. The pool is shared among all VCs of that port and contains 11 credits in total.
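Expressed as a small sketch (my reading of that allocation rule, in Python; the function name is made up):

```python
def credits_for_vcs(vcs, dedicated=1, max_borrow=3, pool=11):
    """Buffer credits usable by a number of data VCs on one internal port:
    each VC owns `dedicated` credit(s) and may borrow up to `max_borrow` more,
    but all VCs together can't take more than `pool` credits out of the pool."""
    return vcs * dedicated + min(vcs * max_borrow, pool)

print(credits_for_vcs(1))  # 1 VC  -> 4 credits
print(credits_for_vcs(4))  # 4 VCs -> 15 credits: the pool caps borrowing (4*3 = 12 > 11)
```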
You might say, "Yeah, but hey, it's just a very short connection on the board. Who needs those buffer credits anyway?" But keep in mind they are not just for spanning the tiny distance. There are multiple reasons why frames need to be touched here and therefore buffered, plus of course possible external back pressure. Often a few buffer credits make the difference between normal traffic flow and frames piling up, or even frame discards due to timeout.
I guess the last thing you want to have is an artificial bottleneck inside of your routers...
So the amount of buffers and buffer credits for each internal connection depends on how many VCs are in use. And that's the crux: the number of VCs per internal connection depends on the number of... FCIP tunnels.
A tunnel consists of 1-6 circuits, so you can bundle several GbE interfaces together; they call it FCIP trunking. Some features, like Tape Pipelining, require the use of only one tunnel, and there's not much we can do about that. For an environment that doesn't utilize them, it starts to get interesting now: if you have only 1 tunnel, you have only 1 VC and therefore only 4 buffer credits, plus the risk of head-of-line blocking! In addition, if you actually spread the traffic across the low, medium and high priorities within a circuit, you would get a separate VC for each priority.
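As a sketch of how I read that (the helper below is hypothetical and just encodes the "one VC per tunnel and per data priority in use" rule described above):

```python
def data_vcs(tunnels, priorities_in_use=("medium",)):
    # One VC per tunnel and per data priority actually carrying traffic;
    # F-class fabric traffic gets its own VC and is not counted here.
    return tunnels * len(priorities_in_use)

print(data_vcs(1))                             # 1 tunnel, medium only     -> 1 VC
print(data_vcs(2, ("low", "medium", "high")))  # 2 tunnels, all priorities -> 6 VCs
```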
Using only the standard "medium" priority for the data traffic (F-class "administrative" fabric traffic uses its own VC and falls out of this equation, of course) would give you the following amount of buffers on each of the 5 internal connections between the ASICs:
| # of tunnels | # of VCs | # of buffers |
|---|---|---|
| 1 | 1 | 4 |
| 2 | 2 | 8 |
| 3 | 3 | 12 |
| 4 | 4 | 15 |

(1 buffer per VC + 3 to borrow per VC out of a pool of 11)
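The rows follow directly from that rule; here's a tiny Python loop (same assumptions as the sketches above) that reproduces them:

```python
for tunnels in range(1, 5):
    vcs = tunnels                     # medium priority only: one data VC per tunnel
    credits = vcs + min(3 * vcs, 11)  # 1 dedicated credit per VC + borrowing, capped by the pool of 11
    print(f"{tunnels} tunnel(s) -> {vcs} VC(s) -> {credits} buffer credits")
```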
Please be aware that the number of VCs/buffers is only one point that needs to be taken into consideration when planning and configuring the optimal FCIP connection. You can find a good overview of the other ones in Brocade's FCIP Administrator's Guide for your FabricOS version.