All too easy to use 8Gbits
orbist
While doing my late-night catch-up on what's being said out there, I had meant to post on Chris Evans' previous topic about 8Gb Fibre Channel. His latest post reminded me to explore this a bit further.
In answer to the question of why 8Gbit and not 10Gbit: 8Gbit has the advantage of being backwards compatible, whereas 10Gbit is a completely different beast. I guess the real question is why they went 10Gbit for ISLs and not 8Gbit - maybe it's a play into data center Ethernet after all.
Chris picks up on Symantec/Veritas having to re-write the firmware on the HBAs they support to enable initiator/target operation. It all sounds very familiar, as SVC's internal HBAs have to run as both initiator and target. However, picking the Agilent Tachyon family of HBAs gave us this facility for free, along with a license to use the source code of their device driver. Another reason why SVC has complete control of every line of code your I/O flows through.
Back to 8Gbit though: personally, I can't wait for our first sample cards. Running SVC in the lab and doing the semi-pointless and all too touted cache hit measurements - let me join the bandwagon for a second! With the 4x 4Gbit ports SVC has today (per node), this gives a theoretical maximum of 1.6GB/s per node. Measuring large-transfer read cache hits, I usually get 1.5GB/s per node. So that's 3.0GB/s out of a possible 3.2GB/s for an SVC I/O Group. Given that there are PCI overheads, bus overheads and of course SVC code running, losing such a small amount of potential throughput is pretty darn amazing. It's no coincidence that the 8-lane PCI Express bus inside the nodes also has a theoretical maximum of 1.6GB/s per slot - again showing how well SVC can drive the hardware to the maximum.
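For anyone who wants to sanity-check those numbers, here is a back-of-the-envelope sketch. It assumes the usual rule of thumb that a 4Gbit Fibre Channel port carries roughly 400MB/s of payload per direction (after 8b/10b encoding overhead) - the figures are illustrative, not official SVC specs.

```python
# Rough throughput arithmetic for the figures quoted above.
# Assumption: one 4Gbit FC port ~= 400 MB/s of payload per direction.
MB_PER_PORT = 400
PORTS_PER_NODE = 4
NODES_PER_IO_GROUP = 2

per_node_max = MB_PER_PORT * PORTS_PER_NODE / 1000.0   # GB/s per node
io_group_max = per_node_max * NODES_PER_IO_GROUP       # GB/s per I/O group

measured_per_node = 1.5  # GB/s, large-transfer read cache hits
efficiency = measured_per_node / per_node_max

print(f"theoretical per node:  {per_node_max:.1f} GB/s")   # 1.6 GB/s
print(f"theoretical per group: {io_group_max:.1f} GB/s")   # 3.2 GB/s
print(f"measured efficiency:   {efficiency:.0%}")          # 94%
```

Under those assumptions, 1.5GB/s measured against a 1.6GB/s ceiling works out to roughly 94% of the theoretical maximum per node.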
So all I can say is roll on 8Gbit, giving each node double the bandwidth - and nicely tying in with PCI Express Gen2 (again, double the bandwidth)... the future is indeed bright.