At last I have got round to rebuilding my Shared Storage Pool cluster of machines. We have the four VIOS node limit to work within, so I was thinking:
- For machines with Dual Virtual I/O Servers - the options are limited so I would pair up machines for mutual Live Partition Mobility.
- For machines with a single VIOS - I would create a cluster of four machines for some extra flexibility and LPM across any of the machines.
The "cluster -create" command to make the first node of the cluster too about 3 minutes and the "cluster -addnode" command takes about 1.5 minutes. I also had to run the cleandisk command to scrub off all information of my previous cluster created during beta testing. I am now creating virtual disks and installing AIX.
Just to show how easy this is (this takes about 3 minutes):
# Create the cluster
cluster -create -clustername galaxy \
-repopvs hdisk4 \
-sppvs hdisk2 hdisk3 \
-spname atlantic \
-hostname diamondvios1.aixncc.uk.ibm.com
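Before adding more nodes, it is worth a quick sanity check that the new cluster is visible from this VIOS (the output simply reports the cluster name and ID, so I have not shown it here):
# Confirm the cluster was created and is visible from this VIOS
cluster -list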
Add a node to the Shared Storage Pool cluster (this takes about 1 minute):
cluster -addnode -clustername galaxy \
-hostname redvios2.aixncc.uk.ibm.com
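For completeness, the reverse operation is just as simple - removing a node from the cluster looks roughly like this (a sketch using my hostname; obviously only do this once any virtual disks mapped on that node have been dealt with):
# Remove a VIOS node from the cluster
cluster -rmnode -clustername galaxy -hostname redvios2.aixncc.uk.ibm.com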
Allocating disk space and attaching it to a VIOS client virtual machine (this takes 5 seconds):
mkbdsp -clustername galaxy -sp atlantic 16G -bd vd_diamond9a -vadapter vhost7
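And if you change your mind, removing the backing device hands the space straight back to the pool - roughly like this (a sketch; depending on your VIOS level you may also be able to identify the LU by its UDID rather than its name):
# Remove the backing device (LU) and return its space to the shared storage pool
rmbdsp -clustername galaxy -sp atlantic -bd vd_diamond9a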
The output format of some of the commands has changed a little since the early beta testing. Below you can see my four VIOS cluster. Two are on one POWER7 machine (note the same serial number) - this allows setup of dual VIOS supported disks for the client VMs for demonstrations - and two are on other POWER6 machines (I will need my LPARs set to POWER6 mode to get LPM working between my POWER7 and POWER6 machines):
$ cluster -status -clustername galaxy
Cluster Name    State
galaxy          OK

Node Name       MTM                Partition Num  State  Pool State
diamondvios1    8233-E8B02100271P  2              OK     OK
diamondvios2    8233-E8B02100271P  1              OK     OK
redvios1        8203-E4A0310E0A41  1              OK     OK
goldvios1       8203-E4A0310E0A11  2              OK     OK
Add a further disk to the storage pool (this takes half a minute):
chsp -add -clustername galaxy -sp atlantic hdisk5
After creating a few virtual disks the storage pool looks like this:
$ lssp -clustername galaxy
POOL_NAME: atlantic
POOL_SIZE: 52864
FREE_SPACE: 46093
TOTAL_LU_SIZE: 49152
TOTAL_LUS: 3
POOL_TYPE: CLPOOL
POOL_ID: 0000000009893EDD000000004F174D22
$ lssp -clustername galaxy -sp atlantic -bd
Lu Name         Size(mb)  ProvisionType  Lu Udid
vd_diamond6a    16384     THIN           a81b1486789e7f267ec9efce630fdf1a
vd_diamond9a    16384     THIN           daf51d837e4463818602c3ad63e83257
vd_diamond9b    16384     THICK          42c0b236af1fd1b1ac127e01a148c37a
The word "THICK" is always going to make me smile as thick provisioning is a little "stupid" ... in many cases.
INFORMATION
Two questions came up via a French guru friend of mine about SSP that had not occurred to me. He is working with a large customer that is already using Shared Storage Pools and is thinking through the production set-up for real work.
1) The repository disk - how large should it be and are there sizing guidelines?
The Redbook on Shared Storage Pools states that 1 GB is enough, but what about the much larger clusters we are setting up now? Well, the developers said the amount of space used by SSP on the repository disk is fairly small (think low megabytes), so the 1 GB LUN currently recommended has plenty of spare capacity already built in for future developments, larger clusters and releases. I can't imagine anyone these days would want to allocate a LUN smaller than 1 GB just to save half a GB of disk space. In a TB world it is "small beer".
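If you want to double-check the size of the LUN you are about to dedicate to the repository, lspv on the VIOS will show the sizes (hdisk4 is my repository disk - adjust to suit):
# List the physical volumes with their sizes (in MB) to confirm the repository LUN
lspv -size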
2) Shared Storage Pools can Thin Provision, but so can some of the disk storage subsystems at a lower level. So where should we Thin Provision - or should we do both?
The answer from the developers was not what I expected!
First, they point out that two levels of thin provisioning means duplicated effort, more monitoring and more complicated problem determination, and is simply not recommended. The SSP feature is meant to reduce man-power and simplify operations - not make life more complicated.
Normally, when creating LUNs for a client VM at the disk system level, you decide thick or thin provisioning and then zone the LUN to the VIOS or, if using NPIV, straight to the client VM. Thin provisioning only works because the OS does not immediately write to every block but only to the disk blocks it has allocated to files as they grow in number or grow in size.
With Shared Storage Pools, the LUNs are handed to the Virtual I/O Servers, which then allocate the disk space in 64 MB chunks to the client VM with the mkbdsp command. If you decide to use SSP Thick provisioning then you have decided that you really want the disk blocks allocated right now (and not later on when they are written to). Thick provisioning means I need to ensure that this VM will never hit the "Sorry, but we don't have the spare disk blocks that you think you have" error. It would be silly to do this only at the SSP level and later find the disk system runs out of blocks. So SSP writes zeros to every block when it allocates thickly provisioned virtual disks, and this forces the disk subsystem to allocate the disk blocks too.
Two cases:
- SSP Thick provisioning and disk subsystem Thick provisioning - works as expected; expensive in disk space but no chance of not getting your blocks
- SSP Thick provisioning and disk subsystem Thin provisioning - SSP will force Thick disk subsystem provisioning so it becomes Thick + Thick
This is, of course, ignoring any "tricky dicky" features a disk system might implement, like deduplication based on identifying multiple zero-filled disk blocks - but then the Shared Storage Pool has done its best and can't be held responsible for lower level "cleverness that screws up later on"!
Also note that this zero filling of disk blocks is a security feature - the previous content of the disk is overwritten.
In the case of Thin provisioning, SSP can decide what it wants to do with the disk space. It does not have to do the same thing as an operating system that just happens not to write to unused disk blocks, but for the most part I am pretty sure that is what happens. It could use some disk space for extent management, but that will be minimal as the chunk size (the smallest size it can allocate) is 64 MB, which keeps the metadata small - a 16 GB LU like the ones above is only 16384 / 64 = 256 chunks to keep track of. With SSP Thin provisioning, the choice of Thin or Thick disk system provisioning is up to you - there is no one correct answer:
- Do you want a simple life or more work with fine monitoring and tuning options?
- Are you desperate to reduce disk use and disk space costs or focused on zero interruptions to service?
For me:
Thin SSP + Thick disk subsystem provisioning is the right balance to reduce workload. Your mileage could be very different.
Two cases:
- SSP Thin provisioning and disk subsystem Thin provisioning - works as expected; double Thin for minimal disk space, but it means double the man-power and double the risk of filling-up problems.
- SSP Thin provisioning and disk subsystem Thick provisioning - Recommended for KISS reasons & avoids fiddling at the disk system layer.
I hope this encourages you to give Shared Storage Pools a go - to cut the time it takes to respond to new user workload demands, and to make LPM the default from now on for load balancing and for easy machine evacuation during maintenance. Thanks, Nigel Griffiths