Comments (9)

1 anthonyv commented Permalink

I like your sketching kung-fu. Don't apologise, the diagrams show exactly what you intended them to.

2 seb_ commented Permalink

Thank you :o) Nevertheless, I should look for a good program to do that in the future.

3 ArjanvanRees commented Permalink

I'm currently thinking about some changes to our fabrics.

Currently, we've got 2 fabrics with 1 core each, which is no longer able to support all our storage arrays and SVC nodes. As such, we've placed a complete I/O group and storage array on a different 'edge' switch.

Also, we've got an ISL over a DWDM to our secondary data site, where our replication target SVC and storage arrays are located. There as well, we have already had to disperse over 2 switches.

As a solution, I'd like to implement a dual-core design. I can still disperse over 2 switches, but both switches would now be designated as core switches.

Your post helps me understand the necessity of looking at this carefully. It will require major re-zoning to implement. Another choice would be to replace the cores with 80-port switches.

I have one question, though. The first edge switch in our secondary site could be seen as the core for that location. We've chosen to make it an extended fabric instead of separate fabrics. So far we've not seen any problems with this solution, but perhaps someone here has some positive or negative experiences with extended fabrics?
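
When weighing a dual-core design against bigger core switches, a quick sanity check is the edge-to-ISL oversubscription ratio: how much edge bandwidth could, in the worst case, compete for the ISL bandwidth up to the core. A minimal sketch of that arithmetic follows; all port counts and speeds are hypothetical placeholders, not Arjan's actual numbers.

```python
# Minimal sketch: edge-to-core oversubscription for a core-edge fabric.
# All values below are hypothetical; substitute your own port counts.

def oversubscription(edge_ports: int, edge_gbps: float,
                     isls: int, isl_gbps: float) -> float:
    """Ratio of potential edge bandwidth to available ISL bandwidth."""
    return (edge_ports * edge_gbps) / (isls * isl_gbps)

# Example: 32 hosts at 4 Gbps on an edge switch, uplinked via 4 x 8 Gbps ISLs.
ratio = oversubscription(edge_ports=32, edge_gbps=4, isls=4, isl_gbps=8)
print(f"oversubscription {ratio:.1f}:1")  # -> oversubscription 4.0:1
```

Planning guides commonly tolerate single-digit ratios for host edges; storage and SVC node ports are usually kept on the core precisely so their traffic avoids the ISL hop. The right target still depends on the actual workload.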

4 seb_ commented Permalink

@Arjan van Rees: Thank you! :o)
From your description I'm not really sure whether you have something like a core-edge design. As for your question: if the DWDM link is stable and the DWDM is transparent (please see http://ibm.co/mOUn8O ), I see no problem in having an extended fabric. As a support guy I prefer that to separate, routed fabrics, because it's easier to troubleshoot than with all the NAT (front domain, translate domain, fabric ID, etc.). For FCIP connections, on the other hand, I recommend using a router in between, because as a SAN guy I usually mistrust the quality of all long-distance IP connections :o) Please excuse that I can't help you much with planning this one, as this is usually a fee-based service. But if you are interested in such help, I can arrange something, as ATS (the service guys) and PFE (Product Field Engineering, the support guys, me) work closely together in the ESCC (European Storage Competence Center). So most probably I could be the one doing this service then :o) (after my vacation)

5 Fredje commented Permalink

Seb,
this is somewhat late, but I need to add a small comment on this: I view it as very unlikely that the fabric OS would lead the inter-node traffic across 2 hops instead of just using one of the point-to-point connections between the SVC nodes. Only if the customer has done very poor/incorrect zoning, so that the SVC node ports don't see each other on the same switch, would I expect the fabric to lead the traffic across the edge switch, and thereby the 2 ISLs. In fact I have a customer with a stretched SVC cluster, having switches in 3 locations and long SVC node cabling to each core switch, giving a similar setup, and I see absolutely no traffic across the ISLs. The customer's SAN is an IBM-branded Brocade FOS, if that would make any difference.
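
Fredje's observation ("absolutely no traffic across the ISLs") is something anyone can verify empirically: sample the per-port transmit counters on the core switch twice and look at the delta on the known ISL ports. A minimal sketch follows; the port numbers and counter values are invented sample data, and the snapshots would in practice come from the switch's per-port statistics.

```python
# Minimal sketch: detect traffic on ISL ports from two counter snapshots.
# The dicts below are invented sample data (port -> tx word counter);
# real values would come from the switch's per-port statistics.

isl_ports = {0, 1}  # ports known to be ISLs (hypothetical)

snap_t0 = {0: 1_000_000, 1: 2_000_000, 4: 5_000_000}
snap_t1 = {0: 1_000_000, 1: 2_000_480, 4: 9_500_000}

for port in sorted(isl_ports):
    delta = snap_t1.get(port, 0) - snap_t0.get(port, 0)
    status = "idle" if delta == 0 else f"{delta} words sent"
    print(f"ISL port {port}: {status}")
```

If the node-to-node zoning is correct, FSPF should prefer the lowest-hop path between SVC node ports on the same switch, so the ISL deltas should stay near zero apart from fabric services traffic, which is consistent with Fredje's experience.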

6 seb_ commented Permalink

Hi Freddy, I will check this. I had a couple of cases where it definitely made a difference, but of course it could be that the current SVC code is more fabric-aware. Thank you for your feedback! Cheers, seb

7 AnoopDevaraj commented Permalink

Dear Seb, recently in our SAN network (DS8800 with SVC controller 6.3) with long-distance Metro Mirror replication, we observed slow performance due to slow drain. Following IBM's recommendations, we downgraded certain core switch ports from 8 Gbps to 4 Gbps to eliminate the host/enclosure slow-drain devices from the SAN. Of late, we have noticed the slow I/O performance again (reads are taking much longer), ONLY WHEN the Metro Mirror is turned on. Please suggest the necessary sanity checks to be performed to find out why the slow performance arises when Metro Mirror is on.

Thanking you in anticipation,
Anoop
Problem Analyst
Dubai.
anoop.devaraj@emirates.com
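
A slow-drain suspicion like Anoop's is typically confirmed by watching the time-at-zero-transmit-credit counter per port (on Brocade FOS this is the tim_txcrd_z value reported by portstatsshow): a port whose counter climbs steeply while replication is running is starving the fabric of buffer credits. Here is a minimal sketch of that comparison; the sample interval, threshold, and counter values are all invented for illustration.

```python
# Minimal sketch: flag slow-drain candidates from two samples of the
# time-at-zero-Tx-credit counter (tim_txcrd_z on Brocade FOS).
# All values below are invented; collect real per-port counters.

SAMPLE_SECONDS = 60
THRESHOLD = 50_000  # counter rise considered suspicious (assumption)

tim_txcrd_z_t0 = {"port8": 120, "port9": 40_000, "port12": 10}
tim_txcrd_z_t1 = {"port8": 300, "port9": 310_000, "port12": 15}

for port, before in tim_txcrd_z_t0.items():
    rise = tim_txcrd_z_t1[port] - before
    if rise > THRESHOLD:
        print(f"{port}: tim_txcrd_z rose by {rise} in {SAMPLE_SECONDS}s "
              f"-> possible slow-drain device")
```

Comparing the same counters with Metro Mirror switched on and off would show whether the replication traffic is what pushes a device into back-pressure, which is exactly the correlation being asked about here.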

8 seb_ commented Permalink

Hello Anoop,

I heard a little bit about the background of this problem from my colleagues. I'm sorry, but at the moment I don't have time to dig into this. Please keep going with the already open cases/services.

Cheers,
seb

9 AnoopDevaraj commented Permalink

Dear Seb, thank you very much for your reply. I would really appreciate it if you would kindly look into the issue whenever you get some free time. I look forward to hearing some suggestions and troubleshooting strategies for establishing the root cause of the issue. The major concern is that the slow response is encountered only when the 45 km Metro Mirror is turned on, and I wish to establish a correlation between the two.

Thanking you in anticipation,
Best Regards, Anoop