Comments (18)

1 brettallison commented

I think it would be clearer if, after the statement "In our imaginary core-edge fabric where for example all SVC ports are connected to ports 0, 4, 8, 12, ... all host I/O towards SVC would use the same virtual channel", you showed the binary conversion of 0, 4, 8, 12, as most of us don't deal in binary every day. This would illustrate your point that VC 2 is used, since the last 2 bits of all of these destination ports are 00.

2 seb_ commented

Sorry for the late comment moderation and response, I didn't look into my company emails :o)

You could also think of it as a modulo function y = x mod 4, so that 0->0, 1->1, 2->2, 3->3, 4->0, 5->1, 6->2, 7->3, 8->0.

I added the binary numbers and made the 00s bold. Thanks for the feedback!
Cheers seb
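
A minimal Python sketch of that mapping (assuming, as the article describes, that the data virtual channel is selected by the low 2 bits of the destination port address):

    # Sketch: the low 2 bits of the destination port address
    # (i.e. port mod 4) pick one of the four data virtual channels.
    for port in (0, 4, 8, 12, 1, 5, 9, 13):
        print(f"port {port:2d} = {port:06b} -> low bits {port % 4:02b} -> VC offset {port % 4}")

Ports 0, 4, 8 and 12 all end in 00 in binary, so they all land on the same virtual channel.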

3 tseneb commented

Hello,

how about implementing ISL trunking between core and edge switches with 2 or 4 links? I think in this case the buffers of these ports would be added to the trunk.
Thanks

4 seb_ commented

Hello Spyros,

that's a very good question, because that is exactly what wouldn't work. Trunking the way Brocade does it would saturate the first link of a trunk to a certain level and only then include the next link of the trunk in a kind of round-robin load sharing. But in an environment like the one described above, you would probably never reach that level, because you can only use 5 buffers and credit starvation will keep the link utilization down.

And even if it did include the next link, you would just gain 5 extra buffers in these few situations and still have only 5 per link. In times of credit starvation the bottleneck would most probably just reappear a split second later. A rule of thumb for the average cost of an FC port is still ~$1k... not really cheap for something that most probably wouldn't solve the problem.

Cheers seb
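
To put a rough number on the credit starvation argument: BB_Credit flow control caps a link (or a single VC) at credits x frame size per credit round trip. A hedged sketch with made-up round trip values (with a slow-draining device, the credit round trip is dominated by how fast the receiver returns R_RDYs, not by fiber length):

    # Upper bound that BB_Credit flow control puts on one virtual channel.
    # credit_rtt_us: time from sending a frame until its R_RDY comes back.
    def vc_throughput_gbps(credits, credit_rtt_us, frame_bytes=2048):
        return credits * frame_bytes * 8 / (credit_rtt_us * 1e-6) / 1e9

    print(vc_throughput_gbps(5, 100))   # 5 credits  -> ~0.8 Gbit/s
    print(vc_throughput_gbps(40, 100))  # 40 credits -> ~6.6 Gbit/s

With only 5 credits per VC, a second trunk link contributing another 5 credits doesn't change the order of magnitude.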

5 ajeffco commented

What about when using exchange-based routing instead of trunking between the core and edge?

6 seb_ commented

Hi Al,
as exchange-based is the default routing policy in an open systems environment, this would exactly match the exposed environment described in the article. A routing policy just decides which ISL to use, not which virtual channel. Using more E-Port ISLs while utilizing only one virtual channel on each of them would be like filling only 25% of an airplane and sending 4 airplanes to get all the people transported. Of course the analogy ends here, because the frames won't get more legroom out of it :o)

So if you plan a new SAN, just avoid cabling like that. If you already have it, a long distance mode could be an easy solution.

Cheers seb

7 David90 commented

Hi seb,

thanks for this quite interesting article. I was aware of the VC distribution issue for my SVC nodes because of simple cabling (all nodes are connected on different FC4-32 blades, but on the same port, for instance 1/15, 2/15, etc.) but didn't think about LE mode for ISLs.
It seems to be a quite good and simple idea to get rid of potential congestion at the VC level, but I have 2 questions about that:
Are there any restrictions on configuring LE mode? I mean, can we configure a multimode ISL (less than 100 m) in 'long distance' mode?
And the more important thing, from the portcfglongdistance help: "If a port is configured as a long distance port, the remaining ports of that port group could be disabled, fail to initialize, or move to 'buffer limited' mode due to a lack of frame buffer credits."
Depending on the hardware / remaining buffers in the port groups, we should take care of this point before enabling LE mode. Does that make sense?

Best regards
--
david

8 seb_ commented

Hi David,

thanks for your feedback!

For your questions:
I'm not aware of any restrictions. LE doesn't need a license and it does not check the actual length of the cable.
From a buffer point of view you would enable 20 additional buffers per ISL (without LE mode: 20 buffers spread over 4 data VCs - with LE mode: 40 on a single data VC). With the command portbuffershow you can check how many buffers are remaining in the ASIC, and usually there are enough buffers free in today's switches, as long as you don't also connect lots of real long distance ISLs spanning several hundred kilometers. (For example, I have a lab switch 2498-B40 (Condor2 ASIC) with 26 devices connected plus several ISLs and long distance ISLs, and I still have 1200 buffers remaining.)

Cheers seb
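
The buffer arithmetic from this comment, spelled out in a short sketch (the 20/40 figures are from the comment above; the 4 data VCs per ISL follow the article):

    # Without LE mode: 20 credits are spread over the 4 data VCs.
    per_vc_normal = 20 // 4          # -> 5 credits per data VC
    # With LE mode: 40 credits sit on a single data VC.
    per_vc_le = 40
    print(per_vc_normal, per_vc_le)  # 5 vs. 40 on the hot VC

So LE mode doesn't just add credits, it also concentrates them on the one VC that actually carries the traffic in the cabling scenario from the article.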

9 David90 commented

Thanks for your answer seb. I've tested on a 4G device (4100) and it seems no additional buffers have been taken in LE mode:

    bbc registers
    =============
    0xd5d82400: bbc_trc 4 0 20 0 0 0 1 1
    0xd5d82420: bbc_trc 0 0 0 0 0 0 0 0 0

    bal_alloc_buf 0000001a

Thus, as far as I understand, there should not be any issue with available buffers when configuring LE mode on an ISL on a 4G platform.

Good tip, I will keep it in mind... :)

Best regards
--
david

10 TadSmith commented

We are currently looking to implement an XIV and re-cable some existing storage. Can you be a little more specific about how the virtual channel is determined? Is it truly just the last 2 bits of the binary portAddress on the switch? Or is it slightly more complicated than that?