We have a DS3524 with a 16 drive DPP (RAID6). We are planning to add another 8 drives to the 3524 and then a fully populated expansion enclosure. Should we add the new 32 disks to the existing pool, or is it best to break it up? Thanks!
There is a limit on the DS35xx that you can only add 12 disks at a time to a disk pool, so you'll have to do the add operation a few times to get there.
There is no limit on how many disks you can have in a pool. I have a customer who has 192 disks (all 1 TB SATA disks) in the same pool on a single DS3524.
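Just to illustrate the 12-at-a-time limit, here is a rough sketch of how the 32 new disks would split into add operations (plain arithmetic, not any official tool):

    # Split the 32 new disks into add operations of at most 12 disks each.
    NEW_DISKS = 32
    MAX_PER_ADD = 12

    batches = []
    remaining = NEW_DISKS
    while remaining > 0:
        batch = min(MAX_PER_ADD, remaining)
        batches.append(batch)
        remaining -= batch

    print(batches)                          # [12, 12, 8]
    print(len(batches), "add operations")   # 3 add operations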
--
Anders
- AndersLorensen
- 2013-05-21T10:36:31Z
Thanks for the reply! I suppose one really big pool is preferred... Curious: how do they set up logical drives, and how many disks could fail before data loss?
- tuscani
- 2013-05-21T13:32:21Z
One big pool means you don't have to do any "load balancing" between pools. All LUNs get the performance of all the disks. The downside is that one server can steal all the performance if it wants to.
As for data loss, it's very, very unlikely. Each RAID set in the pool can survive losing 2 disks, so you need 3 disks to fail at the same time to get data loss. The pool contains a lot of those RAID 6 RAID sets, so one big pool is no more dangerous than 2 pools, as you'll have the same number of RAID sets.
The customer uses his 192 disks for a large Exchange solution running on VMware. They actually have two of those 192-disk DS3524s, as the whole solution is mirrored to a second site (using Exchange DAG replication).
The biggest issue with the solution is the VMware limitation that one ESX host cannot handle more than 32 TB of open capacity! But that's a different subject!
--
Anders
- AndersLorensen
- 2013-05-21T14:03:31Z
Cool... right now we have a 3524 with a single pool of 16x 900 GB SAS. I only have about 8 TB usable, which I thought was weird since only two disks are preservation.
We plan to add eight more disks to the 3524, purchase another enclosure and fully populate it with 32 more disks, adding all of them to the existing pool. We are using VMware ESXi 4.1 but will be moving to Server 2012 Hyper-V. Right now LUNs are carved out at 1 TB each.
- tuscani
- 2013-05-21T14:23:13Z
Hot spare disks are built into the pool as well. The default, if I remember correctly, is 4 hot spare disks.
So you have 16 disks: 2 disks go to parity, 4 as hot spares = 10 disks usable.
900 GB * 10 = 9 TB, or about 8.1 TiB usable. (The Storage Manager displays in TiB, even though it writes TB.)
If you only have 16 disks (or even when you land at 48 disks) you might want to lower that hot spare number - it's easy to alter.
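If it helps, here is that math spelled out as a quick sketch (same numbers as above, nothing official, just arithmetic):

    # Simple accounting: 16 disks, 2 to parity, 4 hot spares -> 10 usable.
    DISKS = 16
    PARITY = 2
    HOT_SPARES = 4
    DRIVE_GB = 900                                        # decimal GB, as sold

    usable_disks = DISKS - PARITY - HOT_SPARES            # 10
    usable_tb = usable_disks * DRIVE_GB / 1000            # 9.0 decimal TB
    usable_tib = usable_disks * DRIVE_GB * 10**9 / 2**40  # ~8.19 binary TiB

    print(usable_disks, usable_tb, round(usable_tib, 2))  # 10 9.0 8.19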
--
Anders
- AndersLorensen
- 2013-05-21T14:33:50Z
Thanks! I wasn't aware hot spares were also included! I am a server guy but have taken on storage recently, as you can probably tell. Zoning all this was fun a few weeks ago, considering I had never done it :)
So when we add the 32 new disks we can expect all of that to be usable, right? I am thinking I will drop down to two hot spares when all is said and done.
Any gotchas when adding disks or enclosures?
- tuscani
- 2013-05-21T14:51:46Z
When adding a new shelf and new disks, consider this:
Firmware upgrades of shelves (ESM) and disks cannot be done online, so do them before you put the hardware to use! That means you attach everything, then update the firmware, and lastly add the disks to the pool. Once the disks are in the pool, you won't be able to upgrade ESM/HDD firmware without downtime.
Attaching one shelf is a walk in the park! More than one, and you want to read the manual on how to cable it correctly.
As for zoning, I hope you got it right! I've seen many strange zoning configurations in my life - everything from alias, port and WWN zoning in one big mix, to "add all WWNs to one alias, and a zone with 1 alias". It all works, but it is neither supported nor easy to work with.
When you add the disks, new RAID sets will be created, so some of the disks will be used for parity.
--
Anders
- AndersLorensen
- 2013-05-22T10:47:23Z
I always update the FW prior to install..
Here is how I did the zoning:
ABS-FC-SW-A
Aliases
ABS_SAN_CTRL_A_PORT3 (one member.. WWPN 20:3e:00:80:e5:24:a5:ea)
ABS_SAN_CTRL_B_PORT3 (one member.. WWPN 20:3f:00:80:e5:24:a5:ea)
LOKI_PORT1 (one member.. WWPN 21:00:00:24:ff:4c:5f:68)
ZEUS_PORT1 (one member.. WWPN 21:00:00:24:ff:4c:62:c6)
Zones
CTRLAPORT3_LOKIPORT1_ZONE
-ABS_SAN_CTRL_A_PORT3(1 Members)
-LOKI_PORT1(1 Members)
CTRLAPORT3_ZEUSPORT1_ZONE
-ABS_SAN_CTRL_A_PORT3(1 Members)
-ZEUS_PORT1(1 Members)
CTRLBPORT3_LOKIPORT1_ZONE
-ABS_SAN_CTRL_B_PORT3(1 Members)
-LOKI_PORT1(1 Members)
CTRLBPORT3_ZEUSPORT1_ZONE
-ABS_SAN_CTRL_B_PORT3(1 Members)
-ZEUS_PORT1(1 Members)
Zone Config
TeamAbsolute
-CTRLAPORT3_LOKIPORT1_ZONE
-CTRLAPORT3_ZEUSPORT1_ZONE
-CTRLBPORT3_LOKIPORT1_ZONE
-CTRLBPORT3_ZEUSPORT1_ZONE
ABS-FC-SW-B
Aliases
ABS_SAN_CTRL_A_PORT5 (one member.. WWPN 20:5e:00:80:e5:24:a5:ea)
ABS_SAN_CTRL_B_PORT5 (one member.. WWPN 20:5f:00:80:e5:24:a5:ea)
LOKI_PORT2 (one member.. WWPN 21:00:00:24:ff:4c:5f:69)
ZEUS_PORT2 (one member.. WWPN 21:00:00:24:ff:4c:62:c7)
Zones
CTRLAPORT5_LOKIPORT2_ZONE
-ABS_SAN_CTRL_A_PORT5(1 Members)
-LOKI_PORT2(1 Members)
CTRLAPORT5_ZEUSPORT2_ZONE
-ABS_SAN_CTRL_A_PORT5(1 Members)
-ZEUS_PORT2(1 Members)
CTRLBPORT5_LOKIPORT2_ZONE
-ABS_SAN_CTRL_B_PORT5(1 Members)
-LOKI_PORT2(1 Members)
CTRLBPORT5_ZEUSPORT2_ZONE
-ABS_SAN_CTRL_B_PORT5(1 Members)
-ZEUS_PORT2(1 Members)
Zone Config
ABS
-CTRLAPORT5_LOKIPORT2_ZONE
-CTRLAPORT5_ZEUSPORT2_ZONE
-CTRLBPORT5_LOKIPORT2_ZONE
-CTRLBPORT5_ZEUSPORT2_ZONE
- tuscani
- 2013-05-22T15:42:06Z
Looks great. You can make it simpler by having one zone per server per fabric, with 3 aliases in it - one for each controller port and one for the server HBA. You would end up with half as many zones.
Previously it was not supported to have both storage controllers in the same zone (mainly on the older DS4xxx boxes), but it is now a supported way of doing it. Some storage systems even require it (Storwize, for example, as it uses the fibre as a backup communications channel between the controllers if the midplane behaves badly).
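For example, your ABS-FC-SW-A config could collapse to something like this (zone names here are just made up for illustration):
Zones
LOKI_ZONE
-ABS_SAN_CTRL_A_PORT3
-ABS_SAN_CTRL_B_PORT3
-LOKI_PORT1
ZEUS_ZONE
-ABS_SAN_CTRL_A_PORT3
-ABS_SAN_CTRL_B_PORT3
-ZEUS_PORT1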
--
Anders
- AndersLorensen
- 2013-05-23T07:48:39Z
Good to know! Thanks for reviewing.
Also, I checked and it shows 0 hot spare disks in use.. also found this in the help docs:
NOTE Hot spare drives are not used in disk pools.