Shared Storage Pools 2 - Up and Running + some New Information
nagger
At last I have got round to rebuilding my Shared Storage Pool cluster of machines. We have the four VIOS limit, so I went for the full four-node cluster.
The "cluster -create" command to make the first node of the cluster took about 3 minutes and the "cluster -addnode" command takes about 1.5 minutes. I also had to run the cleandisk command to scrub off all information from my previous cluster, created during beta testing. I am now creating virtual disks and installing AIX.
Just to show how easy this is - first create the cluster (this takes about 3 minutes):
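From memory, the command looks like the below - the cluster name, repository disk, pool disks and hostname are just examples from my setup, so substitute your own:

$ cluster -create -clustername galaxy -repopvs hdisk2 -spname atlantic -sppvs hdisk3 hdisk4 -hostname bluevios1.ibm.com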
Add a node to the Shared Storage Pool cluster (this takes about 1 minute):
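Something like this - the hostname here is only an example:

$ cluster -addnode -clustername galaxy -hostname redvios1.ibm.com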
Allocating disk space and attaching it to a VIOS client virtual machine (this takes 5 seconds):
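This is a single mkbdsp command - the size, backing device name and vhost adapter below are examples, so use your own:

$ mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2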
The output format of some of the commands has changed a little since the early beta testing. Below you can see my four VIOS cluster. Two are on one POWER7 machine (note the same serial number) - this allows setup of dual-VIOS supported disks for the client VMs for demonstrations - and two are on other POWER6 machines (I will need my LPARs set to POWER6 mode to get LPM working between my POWER7 and POWER6 machines):
$ cluster -status -clustername galaxy

Add a further disk to the storage pool (this takes half a minute):
chsp -add -clustername galaxy -sp atlantic hdisk5

After creating a few virtual disks the storage pool looks like this:
$ lssp -clustername galaxy
The word "THICK" is always going to make me smile as thick provisioning is a little "stupid" ... in many cases.
Two questions came up via a French guru friend of mine about SSP that had not occurred to me. He is working with a large customer using Shared Storage Pool already and thinking through the production set-up for real work.
1) The repository disk - how large should it be and are there sizing guidelines?
The Redbook on Shared Storage Pools states 1 GB is enough, but what about the much larger clusters that we are setting up now? Well, the developers said the amount of space used by SSP on the repository disk is fairly small (think in the low megabytes). So the 1 GB LUN currently recommended has plenty of spare capacity already built into it for future developments, larger clusters and releases. I can't imagine anyone these days would want to allocate a LUN smaller than 1 GB just to save half a GB of disk space. In a TB world it is "small beer".
2) Shared Storage Pools can Thin Provision but so can some of the Disk Storage Sub-systems at a lower level. So where to thin provision or should we do both?
The answer from the developers was not what I expected!
First, they point out that two levels of thin provisioning means duplicated effort, more monitoring and more complicated problem determination - it is simply not recommended. The SSP feature is meant to reduce man-power and simplify operations - not make life more complicated.
Normally, when creating LUNs for a client VM at the disk-system level, you decide thick or thin provisioning and then zone the LUN to the VIOS or, if using NPIV, straight to the client VM. Thin provisioning only works when the OS does not immediately write to every block but only to the disk blocks that it has allocated to files as they grow in number or in size.
With Shared Storage Pools, the LUNs are handed to the Virtual I/O Servers, which then allocate the disk space in 64 MB chunks to the client VM with the mkbdsp command. If you decide to use SSP Thick provisioning then you have decided that you really want the disk blocks allocated right now (and not later on when they are written to). Thick provisioning means I need to ensure that this VM will never hit the "Sorry but we don't have the spare disk blocks that you think you have" error. It would be silly to do this only at the SSP level and later find the disk system runs out of blocks. So SSP writes zeros to every block when it allocates thickly provisioned virtual disks, and this forces the disk subsystem to allocate disk blocks too.
Also note that this zero filling of disk blocks is a security feature - the previous content of the disks is overwritten.
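The thick allocation above is requested when the virtual disk is created - if I remember rightly it is the -thick flag on mkbdsp (the pool, device and vhost names here are just examples):

$ mkbdsp -clustername galaxy -sp atlantic 16G -bd vdisk_red6b -vadapter vhost2 -thick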
In the case of Thin provisioning, SSP can decide what it wants to do with the disk space. It does not have to do the same thing as an operating system that just happens not to write to unused disk blocks, but for the most part I am pretty sure that is what happens. It could use some disk space for extent management, but that will be minimal as the chunk size (the smallest unit it can allocate) is 64 MB, which reduces the metadata. With SSP Thin Provisioning, the choice of Thin or Thick disk-system provisioning is up to you - there is no one correct answer.