I was recently asked by a UK customer how the performance of an 800 GB SSD compared with the performance of a 1.6 TB SSD. They were specifically asking about the performance of the drives supported by the V7000 - but I realised it's actually quite an interesting question to think about...
My first reaction was that of course SSDs are just like regular drives - more capacity doesn't add any performance - but then I thought some more and realised that was just completely wrong. I also decided that there is basically no way to answer that question for a generic SSD. Here are the factors which I think are probably involved:
- An SSD is really a circuit board with a number of Flash chips on it to store the data. If you double the capacity of the SSD then you may have doubled the number of Flash chips. If you have doubled the number of Flash chips then there is a reasonable expectation that the performance may have doubled too.
- One of my colleagues tells me that for the SSDs inside Apple laptops, the claim is that if you get the drive which is twice as big then it performs twice as well... So this logic is at least sometimes true.
- Those Flash chips are hiding behind a SAS interface. For a high-performance SSD there's a chance that you'll eventually hit a bottleneck at the SAS interface chip. Once you reach that bottleneck, adding more Flash chips won't make any difference.
- There's no such thing as "A Flash Chip". There are a number of manufacturers out there making different Flash chips with different trade-offs between price and performance. So if the SSD comes from a different vendor, is a different model from the same vendor, or is even just manufactured in a different year, then there's a chance that the performance of the two drives will be noticeably different.
- You may know that an SSD always has more raw capacity than usable capacity. That is because you cannot overwrite data in a Flash cell. What you need to do is erase the cell (reset it to a blank state) and then you can store your data in it. Because the erase takes time to complete, the drives hide this latency from you by always writing to "pre-erased" cells. Then later a garbage collector comes along and erases the old cells that are no longer being used. So if your SSD has less of that extra capacity, that will reduce the performance of sustained writes to the drive. (There's a toy model pulling the last few points together just after this list.)
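To pull those threads together - chip parallelism, the interface bottleneck, and spare capacity - here's a toy back-of-envelope model in Python. Every number in it is invented for illustration; it doesn't describe any real drive. But it shows how doubling the Flash chips can double performance right up until the interface chip becomes the limit:

```python
# Toy model only: all figures are made up for illustration,
# not specifications of any real SSD.

def sustained_write_iops(n_chips, iops_per_chip, interface_limit_iops,
                         write_amplification):
    """Chips add up in parallel, garbage collection consumes a multiple
    of every host write, and the SAS interface chip is a hard cap."""
    raw_iops = n_chips * iops_per_chip
    after_gc = raw_iops / write_amplification  # GC steals raw chip bandwidth
    return min(after_gc, interface_limit_iops)

# Hypothetical 800 GB drive: 8 chips behind a 90K IOPS interface chip.
print(sustained_write_iops(8, 10_000, 90_000, 1.5))   # ~53,333: chip-bound

# Hypothetical 1.6 TB drive: double the chips, same interface chip.
print(sustained_write_iops(16, 10_000, 90_000, 1.5))  # 90,000: interface-bound
```

Notice that in this made-up example the bigger drive comes out only about 70% faster, not twice as fast - and if it also shipped with a smaller spare-capacity fraction (which shows up here as a higher write amplification), the gap would narrow further.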
After all this discussion - you might be expecting me to produce a magic answer - but the point is that I don't have one. There is just no way of knowing without simply looking at the performance of each drive and comparing them to each other.
However.... there is another possible approach here, one which I'm stealing from one of IBM's FlashSystem technical salesmen in the UK. Maybe the answer should be "who cares?" (his actual phrase was "IOPS are free"). High-end SSDs are reaching a point where the IOPS discussion is becoming irrelevant. On high-end SSD systems - especially things like the IBM FlashSystem products - most customers do not have applications capable of generating enough IOPS per gigabyte to get close to the maximum IOPS of these drives. Even with EasyTier finding the hottest data across an entire storage pool and concentrating it on SSDs, it is unusual to reach the IOPS capability of the SSDs. You are much more likely to overload SSDs with MB/s than with IOPS. So that's some food for thought.
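To put a rough (and again entirely hypothetical) number on that: suppose an 800 GB SSD is rated at 50,000 IOPS. An application would have to generate over 60 IOPS for every gigabyte stored on it to saturate the drive, and very few real workloads come anywhere near that density:

```python
# Hypothetical figures for illustration only.
drive_capacity_gb = 800
drive_rated_iops = 50_000

# IOPS-per-GB density needed to saturate the drive:
print(drive_rated_iops / drive_capacity_gb)  # 62.5 IOPS per GB

# A busy database pushing 10,000 IOPS against 2 TB of data:
print(10_000 / 2_000)                        # 5.0 IOPS per GB
```

A sequential workload, on the other hand, can saturate a drive's MB/s with comparatively few IOPS - which is exactly why bandwidth, not IOPS, tends to be the limit you actually hit.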
I don't claim to be an expert on Flash or SSDs. I'd be interested to know if any of you have practical experience of these topics, or opinions of your own.