Breaking through the 2TB barrier
anthonyv
There was a time when 32 bits was considered a lot. A hell of a lot.
With 32 bits, you can count as high as 0xFFFFFFFF, which is 4,294,967,295 in decimal. If each of those numbers addresses a 512 byte sector, the largest disk you can address is...
Well... actually... no... that's only 2 TiB, which most people would refer to as 2 Terabytes. Mmm.. Suddenly I am less impressed (still wouldn't mind that as a bank account though).
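To see where the 2 TiB number comes from, here is a quick sketch of the arithmetic: a 32-bit block address with 512-byte sectors tops out at 2 TiB.

```python
# 32-bit LBA addressing: 2**32 addressable sectors of 512 bytes each
SECTOR_SIZE = 512
max_sectors = 2**32
max_bytes = max_sectors * SECTOR_SIZE

# Convert to TiB (2**40 bytes)
print(f"{max_bytes / 2**40:.0f} TiB")  # prints "2 TiB"
```

So the barrier is an addressing limit, not a hardware one: once you address blocks with more than 32 bits, it disappears.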
Now there are plenty of operating systems in production that still cannot work with a disk that is larger than 2 TiB. One of the more common is ESX. I am presuming this limitation is going to disappear, so storage subsystems need to be ready to create volumes that are larger than 2 TiB.
The good news is that with the May 2011 announcements, IBM is removing the last 2 TiB sizing limitations from its current storage products. There appears to have been some confusion in the past, so I thought I would go through and be clear where each product is at:
DS3000
Firmware version 07.35.41.00 added support to create volumes larger than 2 TB. The maximum volume size is limited only by the size of the largest array you can create. This capability has been available for some time and hopefully you are already on a much higher release.
DS4000 and DS5000
Firmware version 07.10.22.00 added support to create volumes larger than 2 TB. The maximum volume size is limited only by the size of the largest array you can create. This capability has been available for some time and hopefully you are already on a much higher release.
DS8700 and DS8800
The DS8700 and DS8800 will support the creation of volumes larger than 2 TB once a code release in the 6.1 family has been installed. With this release you will be able to create a volume up to 16 TiB in size. The announcement letter for this capability is here.
XIV
The volume size on an XIV is limited only by the soft limit of the pool you are creating the volume in. This allows the possibility of a 161 TB volume.
SVC and Storwize V7000
These two products have two separate concepts:
SVC and Storwize V7000 Volumes (VDisks).
Prior to release 5.1 of the SVC firmware, the largest volume or VDisk that you could create using an SVC was 2 TiB in size. With the 5.1 release this was raised to 256 TiB, as announced here. When the Storwize V7000 was announced (with the 6.1 release) it also inherited the ability to create 256 TiB volumes.
Storwize V7000 Internal Managed Disks (Array MDisks).
Because the Storwize V7000 has its own internal disks, it can create RAID arrays. Each RAID array becomes one MDisk. This means the largest MDisk we can create is limited only by the size of the largest disk (currently 2 TB) times the number of disks in the largest array (16). For example, a 12 disk RAID6 array with 2 TB disks yields an MDisk of over 18 TiB. Thus internally the Storwize V7000 supports giant MDisks. We can also present these giant MDisks to an SVC running 6.1 code and the SVC will be able to work with them.
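The RAID6 arithmetic above is easy to sketch: two drives' worth of capacity goes to parity, and drives are sold in decimal TB while the result is usually quoted in binary TiB.

```python
def raid6_usable_tib(drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID6 array in TiB (two drives' worth of parity)."""
    usable_bytes = (drives - 2) * drive_tb * 10**12  # drives are sold in decimal TB
    return usable_bytes / 2**40                       # report in binary TiB

# 12 disk RAID6 array of 2 TB drives, as in the example above
print(round(raid6_usable_tib(12, 2.0), 1))  # prints 18.2
```

That 18.2 TiB MDisk is comfortably beyond the old 2 TiB limit, which is why the external-controller approval discussed next matters.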
SVC and Storwize V7000 External Managed Disks.
When presenting a volume to the SVC or Storwize V7000 to be virtualized into a pool (a managed disk group), we need to confirm two things. Firstly, you need to be on firmware version 6.2, as confirmed here for SVC and here for Storwize V7000. Secondly, the controller presenting the volume has to be approved to present a volume greater than 2 TiB. From an architectural point of view, MDisks can be up to 1 PB in size, as confirmed here.
I recommend you go to the supported hardware matrix and confirm whether your controller is approved. The links for Storwize V7000 6.2 are here and for SVC here. As of this writing, the list has still not been updated, but I am reliably informed it will include the DS3000, DS4000, DS5000, DS8700 and DS8800. It will not initially include XIV; that support will come later.
What to do in the meantime?
If you're currently using an SVC, or external MDisks with a Storwize V7000, then you need to work within the 2 TiB MDisk limit (except for a Storwize V7000 behind an SVC). The recommendation is a single volume per array, for performance reasons: the disk heads then don't have to keep jumping between consecutive extents on different parts of the disk. This can require careful planning. For instance, a 7+P RAID5 array of 450 GB drives yields over 3 TB. What to do in this example?
The answer is that, where possible, create single-volume arrays using 4+P or larger widths that still fit under the 2 TiB limit. If the disk size precludes that, create multiple volumes per array, and preferably split these volumes across different pools (MDisk groups).
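As a rough planning aid, here is a hypothetical helper (not an IBM tool) that works out the minimum number of equal-size volumes needed so that each stays within the 2 TiB limit, given the data-drive count and drive size:

```python
import math

TIB = 2**40
MDISK_LIMIT = 2 * TIB  # the 2 TiB external MDisk limit

def volumes_needed(data_drives: int, drive_gb: float) -> int:
    """Minimum number of equal-size volumes so each stays under 2 TiB.

    data_drives: drives holding data (e.g. 7 for a 7+P RAID5 array).
    drive_gb: drive size in decimal GB, as marketed.
    """
    usable_bytes = data_drives * drive_gb * 10**9
    return math.ceil(usable_bytes / MDISK_LIMIT)

# The 7+P RAID5 example with 450 GB drives from above
print(volumes_needed(7, 450))  # prints 2
# A 4+P array of the same drives fits in a single volume
print(volumes_needed(4, 450))  # prints 1
```

So the 3.15 TB array in the example splits into two volumes, which you would then place in different pools.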
Anything else to consider?
Well first up, will your Operating System support giant volumes? Googling produces so much old material that it becomes hard to nail down exact limits. For Microsoft, read this article here. For AIX check out this link. For ESX, check out this link.
Second of course is the consideration of size. File systems that utilize the space of giant volumes could potentially lead to giant timing issues. How long will it take to backup, defragment, index or restore a giant file system based on a giant volume (the restore part in particular)? Outside the scientific, video or geo-physics departments, are giant volumes becoming popular? Are they being held back by practical realities or plain fear? Would love to hear your experiences in the real world.
And a big thank you to Dennis Skinner, Chris Canto and Alexis Giral for their help with this post.