
Comments (4)

1 FritzV commented

Barry, you are completely right with your ASIC/3PAR remarks.

3PAR started development of the "fourth generation" ASIC at the end of 2005. The 3PAR ASIC design descends from Encore's Infinity SP 30/40 storage processors. That company, based in Ft. Lauderdale, was liquidated in January 1999, but the intellectual property from those days is still alive.

The fourth-generation ASIC reflects the state of the art of 2005: it is a PCIe Gen1 design. Each ASIC supports three x8 PCIe buses for intra-node transfers and four buses for inter-node communication. Each PCIe lane runs at 2.5 GHz, which means an x8 bus bandwidth of 2 GByte/s. The V-Class controller has two ASICs with a total internal bandwidth of 12 GByte/s. That bandwidth is shared among 24 8 Gbit/s FC ports for front-end connections and 12 4 Gbit/s FC ports for the back end, realized with the help of PCIe switches from PLX.

24 x 800 MByte/s plus 12 x 400 MByte/s gives 24 GByte/s, so an oversubscription of 2:1 exists, which can be tolerated. Unfortunately we are now starting with 16 Gbit/s FC; this results in 24 x 1600 MByte/s plus 12 x 400 MByte/s, equal to 43.2 GByte/s. That gives an oversubscription of 3.6:1, which is too high. So 3PAR history repeats itself: the new V-Class controllers cannot be used efficiently with the coming 16 Gbit/s FC or 40 Gbit/s iSCSI and FCoE protocols. It is much better to use Intel or AMD commodity silicon with PCIe Gen2 or even Gen3. The ASIC advantages are over-compensated by the progress of the Intel/AMD microprocessors and chipsets. Why waste time and money on ASICs? The brute-force method is often better than a highly sophisticated design that is not future-proof.
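[Ed.: FritzV's oversubscription arithmetic can be checked with a few lines of Python; all port counts and the 12 GByte/s internal figure are his numbers from the comment above, not vendor-confirmed specs.]

```python
def oversubscription(fe_ports, fe_mbps, be_ports, be_mbps, internal_gbps):
    """Total port bandwidth divided by internal ASIC bandwidth."""
    total_mbps = fe_ports * fe_mbps + be_ports * be_mbps
    return total_mbps / (internal_gbps * 1000)

# V-Class as shipped: 24 front-end 8 Gbit/s FC ports (800 MB/s each),
# 12 back-end 4 Gbit/s FC ports (400 MB/s each), 12 GB/s internal.
print(oversubscription(24, 800, 12, 400, 12))   # 2.0

# Same controller fronted with 16 Gbit/s FC (1600 MB/s per port):
print(oversubscription(24, 1600, 12, 400, 12))  # 3.6
```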

2 thestorageanarchist commented

Welcome back!

Pepsi-MAX jokes aside, I absolutely agree with your assessment of ASICs vs. Intel. While there are some things that are more appropriately and cost-effectively handled by ASICs (heck, the Tachyon and the referenced PCIe switches are really just well-polished ASICs), they are a risky and expensive way to handle mundane operations like zero-detection.

In fact, rumor has it that Hitachi's VSP was 12-18 months late to market due to having to respin not one but two of the five custom ASICs used in its design. Both 3PAR's and VSP's ASICs are built as PCIe Gen1 devices; this is why both VSP and 3PAR's V-Series remain Harpertown- and PCIe Gen1-based even though both were introduced well into the Nehalem/Westmere + PCIe Gen2 lifecycle.

For both, switching to PCIe Gen2 will require a significant overhaul of their design. And the Sandy Bridge/Ivy Bridge generation will quite likely require a respin once again to accommodate PCIe Gen3 and the new QPI/memory ring-based interfaces.

This is probably why leaked versions of the Hitachi roadmaps indicate that eliminating ASICs is a key milestone for future xSP arrays.

The rest of us pretty much just need to recompile, or at worst hand-optimize for the new instruction sets supported by next-generation Intel processors.

(For the record, VMAX does currently employ a custom ASIC in its Virtual Matrix interface.)
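[Ed.: the "mundane" zero-detection mentioned above really is a few lines on a commodity CPU. A minimal sketch in Python standing in for what would be a short, auto-vectorized C loop on an array controller; the function name is illustrative, not from any vendor's code.]

```python
def is_all_zero(block: bytes) -> bool:
    """Detect an all-zero block, e.g. for thin-provisioning zero reclaim."""
    # bytes.count scans at C speed in CPython; a native version would
    # be a simple loop (or memcmp against a cached zero page) that the
    # compiler vectorizes for free on each new CPU generation.
    return block.count(0) == len(block)

print(is_all_zero(bytes(4096)))              # True: a 4 KiB zero page
print(is_all_zero(b"\x00" * 10 + b"\x01"))   # False: one non-zero byte
```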

3 af_robot commented

It seems you really do not understand what ASICs are used for. Intel CPU I/O bandwidth is very limited for the typical disk-array usage scenario. No matter how many PCI-X/PCIe slots you have, the CPU can move only about ~20 GByte/s of data (theoretically!) from I/O adapters. Connect 100 x 15k hard drives (200 MB/s each) to a controller without an ASIC and your CPU will be dying just moving the data, not even talking about SSDs. The ASIC offloads data movement from the CPU, leaving CPU horsepower for other tasks. And no, you're wrong: it is already 2013 and HP is still making new ASICs and leading the SPC-1 benchmark with 3PAR arrays.
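[Ed.: af_robot's sizing claim in numbers. The ~20 GByte/s CPU I/O ceiling and the 200 MB/s per-drive rate are his figures from the comment above, taken at face value.]

```python
CPU_IO_GBPS = 20    # claimed theoretical CPU I/O ceiling, GB/s
DRIVE_MBPS = 200    # streaming rate of one 15k rpm drive, MB/s

# How many drives fully occupy the CPU's I/O path?
drives_to_saturate = CPU_IO_GBPS * 1000 // DRIVE_MBPS
print(drives_to_saturate)  # 100 -- exactly the configuration he cites
```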

4 orbist commented

af_robot,

I understand perfectly what ASICs are used for. But your 20 GByte/s is a bit off: each DDR3 memory channel can do about 8 GByte/s, and with four channels per CPU and SMP machines, that's a lot of data.

The limit is usually the PCI bus and the protocol interfaces themselves; even at 16 Gbit FC, that's only 1.6 GByte/s per port.

So 4 x 8 GB/s per CPU, and say a 2-way SMP box, that's 64 GB/s of memory bandwidth, which would need 40 16 Gbit ports running flat out... I still stand by the question: why spend three years developing something for a niche, adding a HUGE cost to your controller, versus a few lines of assembler that can achieve close to the same bandwidth? Cost to the end user == maximum gain.
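[Ed.: orbist's counter-arithmetic, worked through with his own figures (8 GB/s per DDR3 channel, four channels per socket, two sockets, 1.6 GB/s usable per 16 Gbit/s FC port).]

```python
CHANNEL_GBPS = 8       # per DDR3 memory channel, GB/s
CHANNELS_PER_CPU = 4   # memory channels per socket
SOCKETS = 2            # a 2-way SMP box
FC16_MBPS = 1600       # usable rate of one 16 Gbit/s FC port, MB/s

memory_gbps = CHANNEL_GBPS * CHANNELS_PER_CPU * SOCKETS
ports_needed = memory_gbps * 1000 // FC16_MBPS

print(memory_gbps)    # 64 GB/s of aggregate memory bandwidth
print(ports_needed)   # 40 ports running flat out to saturate it
```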