Does server virtualization require enterprise storage arrays?
seb_
HDS' Hu Yoshida posted an interesting theory on his blog. Basically he argues that while modular dual-controller storage arrays might be fine for traditional physical server deployments, virtualized servers would require enterprise storage arrays. (Which, interestingly, he defines as arrays with "multiple processors that share a global cache".)
I wrote a small reply as a comment, which still awaits moderation. Until now Hu has usually published my few comments on his blog, regardless of how critical they were. I don't know why it didn't happen this time, but the most likely explanation is that everybody at HDS is very busy with the BlueArc acquisition. So in the meantime I'll publish it here :o)
Interesting read. IMHO there's much truth in your line "Virtual servers can be like a drug", and I think you are also right in your observation about Tier 1 applications being virtualized; from a support perspective this could lead to real nightmares. But to be honest, I don't see why the storage system should be the limiting factor here. The number of servers (in terms of OSes running) doesn't change in your picture, and neither does the total workload directed at the storage array. They were physical servers before; now they are virtual servers (VMs) on a few physical ones. In my eyes the requirements on the storage environment haven't changed much, though of course you have to check carefully whether your physical servers, with their SAN connectivity, could turn into a bottleneck themselves, as I pointed out in my latest blog post (htt
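To make the point with some made-up numbers (all figures below are purely illustrative assumptions, not measurements): the array sees the same aggregate workload before and after consolidation, but it now arrives through far fewer host SAN ports, which is exactly where a bottleneck could appear.

```python
# Hypothetical consolidation arithmetic (all numbers are assumed, not measured).
physical_servers = 20       # before: one OS per physical server
iops_per_server = 500       # assumed average workload per OS
virtualization_hosts = 4    # after: the same 20 OSes run as VMs on 4 hosts

# The total workload towards the storage array is unchanged...
total_iops = physical_servers * iops_per_server

# ...but it is now funneled through the HBAs of only 4 physical hosts.
iops_per_host = total_iops / virtualization_hosts

print(f"Array still sees {total_iops} IOPS in total")
print(f"Each host's SAN connectivity now carries {iops_per_host:.0f} IOPS")
```

So the array-side requirements stay the same; what changes is the per-host fan-in.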
Additionally, just a minor point about dual-controller arrays: why should the outage of the remaining controller lead to data loss? Usually the write cache of such arrays is disabled when one controller is down, because it can no longer be mirrored. On the one hand this means reduced performance while running degraded, but on the other hand it means the host only gets the SCSI good status once the I/O has actually been written to disk. So there would be a loss of access, of course, but no data loss.
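The behavior I have in mind can be sketched roughly as follows. This is a toy model of the general principle, not any vendor's actual firmware logic; the class and method names are my own invention.

```python
# Toy sketch of a dual-controller array that falls back to write-through
# mode once its write cache can no longer be mirrored to a partner.
class DualControllerArray:
    def __init__(self):
        self.controllers_up = 2
        self.write_cache_enabled = True

    def controller_failure(self):
        """One controller goes down: mirroring is impossible, disable the cache."""
        self.controllers_up -= 1
        if self.controllers_up < 2:
            self.write_cache_enabled = False

    def write(self, block):
        if self.controllers_up == 0:
            # Both controllers gone: access loss, but nothing was sitting
            # in an unmirrored cache, so no data loss.
            raise IOError("access loss: no controller left")
        if self.write_cache_enabled:
            self._cache_and_mirror(block)   # fast path: ack from mirrored cache
        else:
            self._write_through(block)      # slow path: ack only after disk write
        return "SCSI GOOD"                  # host sees good status either way

    # Placeholders standing in for the real data paths:
    def _cache_and_mirror(self, block): pass
    def _write_through(self, block): pass
```

In this picture the host never receives a good status for an I/O that only lived in a cache that could be lost, which is why a subsequent controller failure costs availability but not data.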
If you have a different - or a similar - opinion, feel free to leave a comment here :o)