
Can you use SVC with XIV as storage?
YES, you can!
----- updated with 6 TB drives in LUN size table - 20160111 ----
XIV and SVC can be a great combination, and many customers use this combo.
Why? Flexibility, plus support for operating systems that the XIV does not support.
- But don't you have to pay twice for licenses if you want to mirror data? NO, there is a special XIV license for the SVC that includes all the features that come with the XIV, but for the SVC instead. Neat, isn't it?
I often get questions on how to configure the XIV when it is part of a solution using SVC. The questions typically are: 'How many ports should I use in the XIV?', 'What LUN sizes should we use on the XIV?', 'How many LUNs should we have?'
So here we go, check out the tables below. In most cases, they are all you need.
How many ports from the XIV should be used in the SVC 'one big zone' per fabric?
Table 1 shows the number of ports to include in the 'one big zone' zoning.
| XIV modules | Usable TB (1 TB drives) | Usable TB (2 TB drives) | Usable TB (3 TB drives) | Usable TB (4 TB drives) | Total XIV host ports | XIV host ports to zone to an SVC or Storwize V7000 cluster | Active interface modules | Inactive interface modules |
|---|---|---|---|---|---|---|---|---|
| 6  | 28 | 55  | 84  | 112 | 8  | 4  | 4:5         | 6   |
| 9  | 44 | 88  | 132 | 177 | 16 | 8  | 4:5:7:8     | 6:9 |
| 10 | 51 | 102 | 154 | 207 | 16 | 8  | 4:5:7:8     | 6:9 |
| 11 | 56 | 111 | 168 | 225 | 20 | 10 | 4:5:7:8:9   | 6   |
| 12 | 63 | 125 | 190 | 254 | 20 | 10 | 4:5:7:8:9   | 6   |
| 13 | 67 | 134 | 203 | 272 | 24 | 12 | 4:5:6:7:8:9 | -   |
| 14 | 75 | 149 | 225 | 301 | 24 | 12 | 4:5:6:7:8:9 | -   |
| 15 | 80 | 161 | 243 | 325 | 24 | 12 | 4:5:6:7:8:9 | -   |

Table 1: XIV host ports as capacity grows with different drive capacities (usable capacity in decimal TB)
How many LUNs, and of which size, should be configured on the XIV?
Table 2 has the answer, depending on the drive size in your XIV.
| XIV modules installed | XIV host ports | Volume size (GB), 2 TB drives | Volume size (GB), 3 TB drives | Volume size (GB), 4 TB drives | Volume size (GB), 6 TB drives | Volume quantity | Ratio of volumes to XIV host ports |
|---|---|---|---|---|---|---|---|
| 6  | 4  | 3201 | 4801 | 6401 | 9700 | 17 | 4.3 |
| 9  | 8  | 3201 | 4801 | 6401 | 9700 | 27 | 3.4 |
| 10 | 8  | 3201 | 4801 | 6401 | 9700 | 31 | 3.9 |
| 11 | 10 | 3201 | 4801 | 6401 | 9700 | 34 | 3.4 |
| 12 | 10 | 3201 | 4801 | 6401 | 9700 | 39 | 3.9 |
| 13 | 12 | 3201 | 4801 | 6401 | 9700 | 41 | 3.4 |
| 14 | 12 | 3201 | 4801 | 6401 | 9700 | 46 | 3.8 |
| 15 | 12 | 3201 | 4801 | 6401 | 9700 | 50 | 4.2 |

Table 2: XIV volume size and quantity recommendation
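The ratio column in Table 2 is simply the volume quantity divided by the number of XIV host ports, and it stays in the range of roughly two to four volumes per host port. A throwaway sketch to illustrate (the row data is copied from the table; the table shows the ratio rounded to one decimal):

```python
# Rows taken from Table 2: (XIV modules, XIV host ports, volume quantity).
rows = [
    (6, 4, 17), (9, 8, 27), (10, 8, 31), (11, 10, 34),
    (12, 10, 39), (13, 12, 41), (14, 12, 46), (15, 12, 50),
]

for modules, ports, volumes in rows:
    ratio = volumes / ports  # Table 2 shows this rounded to one decimal
    print(f"{modules:>2} modules: {volumes} volumes / {ports} ports = {ratio:.2f}")
```

Every configuration lands between roughly two and four volumes per host port, which is exactly the sizing guideline from the Redpaper tip further down.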
'But what about the queue depths?'
'Should I not tune the queue depths somehow?'
No, you should not!
The general rule for queue depth: if you have not examined your queues and found them overutilized, don't change the defaults. It's a waste of time.
In SVC with XIV solutions you don't tune queue depths directly; they tune themselves when you choose the right number of LUNs and the right LUN sizes.
And here is the detailed explanation why, taken from the IBM Redpaper REDP-5063, which was just updated during a residency at the ESCC in IBM Mainz, Germany.
Queue depth considerations
Ideally, the number of MDisks presented by the XIV to the SVC or Storwize V7000 should be a multiple (from one to four) of the number of XIV host ports. The math below supports this.
Since version 6.3, SVC and Storwize V7000 use round-robin for each MDisk, so manual load balancing is no longer necessary. It is still necessary to have several MDisks, however, because of the following queue depth limitations of SVC and Storwize V7000.
The XIV can handle a queue depth of 1400 per Fibre Channel host port and a queue depth of 256 per mapped volume per host port:target port:volume tuple. However, the SVC or Storwize V7000 sets the following internal limits:
- The maximum queue depth per MDisk is 60.
- The maximum queue depth per target host port on an XIV is 1000.
Based on this knowledge, you can determine an ideal number of XIV volumes to map to the SVC or Storwize V7000 for use as MDisks by using the following algorithm:
Q = ((P x C) / N) / M
The algorithm has the following components:
- Q: the calculated queue depth for each MDisk
- P: the number of XIV host ports (unique WWPNs) visible to the SVC or Storwize V7000 cluster (should be 4, 8, 10, or 12, depending on the number of modules in the XIV)
- C: 1000, the maximum SCSI queue depth that an SVC or Storwize V7000 uses for each XIV host port
- N: the number of nodes in the SVC or Storwize V7000 cluster (2, 4, 6, or 8)
- M: the number of volumes presented by the XIV to the SVC or Storwize V7000 cluster (detected as MDisks)
If a 2-node SVC or Storwize V7000 cluster is being used with 4 ports on IBM XIV System and 17 MDisks, this yields a queue depth as follows:
Q = ((4 ports*1000)/2 nodes)/17 MDisks = 117.6
Because 117.6 is greater than 60, the SVC or Storwize V7000 uses a queue depth of 60 per MDisk.
If a 4-node SVC or Storwize V7000 cluster is being used with 12 host ports on the IBM XIV System and 50 MDisks, this yields a queue depth as follows:
Q = ((12 ports*1000)/4 nodes)/50 MDisks = 60
Because 60 is the maximum queue depth, the SVC or Storwize V7000 uses a queue depth of 60 per MDisk. A 4-node SVC or Storwize V7000 is a good reference configuration for all other node configurations.
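The formula and its 60-per-MDisk cap are easy to sanity-check in a few lines. A minimal sketch of the calculation above (the function name and keyword defaults are mine, not from the Redpaper; the limits 1000 per port and 60 per MDisk are from the text):

```python
def mdisk_queue_depth(ports: int, nodes: int, mdisks: int,
                      per_port_limit: int = 1000,
                      per_mdisk_limit: int = 60) -> int:
    """Effective SVC/Storwize V7000 queue depth per MDisk.

    Q = ((P x C) / N) / M, then capped at the internal
    per-MDisk limit of 60.
    """
    q = (ports * per_port_limit) / nodes / mdisks
    return min(int(q), per_mdisk_limit)

# The two worked examples from the text:
print(mdisk_queue_depth(ports=4, nodes=2, mdisks=17))   # 60 (raw 117.6, capped)
print(mdisk_queue_depth(ports=12, nodes=4, mdisks=50))  # 60 (exactly at the limit)
```

The second example shows why 50 volumes on a 12-port, 4-node configuration is the sweet spot: the raw value lands exactly on the 60 cap, so no queue capacity is wasted.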
Starting with version 6.4, SVC and Storwize V7000 clusters support MDisks from XIV larger than 2 TB. With earlier versions of SVC code, smaller volume sizes are necessary for 2 TB, 3 TB, and 4 TB drives. Table 2 above is valid for SVC version 6.4 or later.
Tip: If you only provision part of the usable space of the XIV to be allocated to the SVC or Storwize V7000, the calculations no longer work. You should instead size your MDisks to ensure that at least two (and up to four) MDisks are created for each host port on the XIV.
Don't hurt our feelings: read the Redpaper!

Happy configuring from the Viking part of the world -- Roger Eriksson
This entry could not have been done without the contributions of Markus Oscheka, Stephen Solewin, and Brian Sherman.