Thin Provisioning - what should you think about from an operational point of view
Andrew Martin
Thin provisioning is a well understood technology in the storage industry, but a surprising number of you haven't actually deployed it yet because of the "what-ifs" that keep you up at night. In this post I will share my thoughts on how you could address some of these concerns. Those of you who have already deployed Thin Provisioning will hopefully also find something interesting hiding in here.
Almost all of this also applies to using Real-time Compression as an alternative to Thin Provisioning.
What do you mean by thin provisioning?
A volume that is thin provisioned by SVC or Storwize is a volume where large runs of binary zeros are not stored in the storage pool. So if you have never written to part of the volume, you don't need to use valuable resources storing data that doesn't exist yet.
One of the customers I have worked with pointed out to me that when an application is decommissioned, they have probably purchased the free space in that application's filesystems three times. This is because they replace their storage every three years, and their applications normally survive for around 10 years - so every three years they move filesystem free space from one controller to the next without ever having written to that free space.
Also - on all but a few old SVC nodes, if a server (e.g. VMware or IBM i) decides to initialize a volume by filling it with zeros, the thin provisioning and compression software will simply throw those IOs away, because the disk already contains zeros.
Thin Provisioning properties - what do all the different types of volume capacities mean?
There are a number of different properties of a thin provisioned volume, and it will be useful to understand them for the rest of the explanations - so here they are:
- Virtual capacity: the size of the volume as it is presented to the host.
- Real capacity: the amount of storage pool capacity that has been allocated to the volume.
- Used capacity: the portion of the real capacity that actually contains written data.
I have included a screenshot from the GUI showing how these figures appear there. It is a slightly old screenshot, but the layout hasn't changed very much.
Thin provisioning versus Overallocation
So thin provisioning means "don't store the zeros" - what does overallocation mean? Simply put, a storage pool is only overallocated once the sum of all volume virtual capacities exceeds the size of the storage pool.
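To make the distinction concrete, here is a minimal sketch of the overallocation check. The pool size and volume capacities are made-up numbers, purely for illustration:

```python
# A pool is overallocated once the sum of the volume (virtual)
# capacities exceeds the pool capacity. All figures in GiB, invented.
pool_capacity = 10_000
volume_capacities = [2048, 2048, 4096, 4096]

total_virtual = sum(volume_capacities)        # capacity promised to hosts
overallocated = total_virtual > pool_capacity
ratio = total_virtual / pool_capacity         # 1.2288, i.e. about 123%

print(f"virtual={total_virtual} GiB, ratio={ratio:.0%}, overallocated={overallocated}")
```

Note that a pool full of thin volumes can still be *not* overallocated - which is exactly the comfortable starting position described below.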
One of the things that worries administrators the most is the question "what if I run out of space". The first thing to remember is that if you already have enough capacity on disk to store fully allocated volumes, then if you convert to thin provisioning - you will have enough space to store everything even if the server writes to every byte of virtual capacity. So this isn't going to be a problem for the short term, and you will have a few weeks or months to monitor your system and understand how your capacity grows.
Even if you are starting a new storage pool, it is likely that you won't start overallocating until a few weeks after you start deploying to that pool.
You don't actually need to overallocate until you feel comfortable that you have a handle on thin provisioning.
How do I monitor thin provisioning?
OK - so if you are still with me, then you have started using thin provisioning and you are ready to start actually overallocating, but I still haven't solved the first problem - "what if I run out of space". Hopefully you've already worked out that the answer is monitoring combined with some form of disaster plan.
If you are lucky enough to have a capacity planning team in your organization, then the good news is this seems like a problem for that team rather than for you, the poor storage administrator. However, whether the problem is for you to solve or for one of your colleagues, what should you do? For clarity, I will refer to the person performing the monitoring as the capacity planner.
The basics of capacity planning for thin provisioning or compressed volumes are no different than capacity planning for fully allocated. The capacity planner needs to monitor the amount of capacity being used versus the capacity of the storage pool, and make sure you purchase more capacity before you actually run out. The main difference is that in a fully allocated world, the capacity normally only increases during working hours because the increase is caused by an administrator creating more volumes. Also the capacity planner may have the advantage of a change request which accompanies every capacity increase to remind them to check their current usage and predictions.
In the brave new world (OK - not very new really...) of thin provisioning and compression, the capacity planner needs to monitor the real capacity rather than the volume capacity. That's the main difference. Of course they now also need to monitor it regularly because the real capacity can increase at any time of day for any reason. Tools like Spectrum Control can capture the real capacity of a storage pool and enable you to graph the real capacity so you can see how it is growing over time. Having a tool to show how the real capacity is growing over time seems to me like an important requirement to be able to predict when the space will run out.
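As a sketch of the kind of trend-based prediction a capacity planner might do: fit a straight line through periodic real-capacity readings and extrapolate to the pool size. The readings below are invented - in practice they would come from a tool like Spectrum Control rather than a hand-typed list:

```python
# Sketch: estimate days until a pool is full from daily real-capacity
# samples (GiB), using a simple least-squares growth trend.
def days_until_full(samples_gib, pool_capacity_gib):
    """samples_gib: one real-capacity reading per day, oldest first."""
    n = len(samples_gib)
    mean_x = (n - 1) / 2
    mean_y = sum(samples_gib) / n
    denom = sum((i - mean_x) ** 2 for i in range(n))
    slope = sum((i - mean_x) * (y - mean_y)
                for i, y in enumerate(samples_gib)) / denom  # GiB per day
    if slope <= 0:
        return None  # pool is not growing - no predicted runout
    return (pool_capacity_gib - samples_gib[-1]) / slope

# Seven invented daily readings growing ~50 GiB/day towards a 10,000 GiB pool
readings = [9000, 9050, 9100, 9150, 9200, 9250, 9300]
print(days_until_full(readings, 10_000))   # -> 14.0
```

Real growth is rarely this linear, of course - the point is simply that a trend over time, not a single snapshot, is what lets you predict the runout date.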
The SVC or Storwize will also alert you by putting an event into the event log when the storage pool breaches a configurable threshold - called the warning level. The GUI will set this to 80% of the capacity of the storage pool by default - although you can change it. It is a very very good idea to have event notifications turned on so that someone gets an email or pop up on your monitoring system when this error gets added to the event log. Note that this event will not call home to IBM - you need to respond to this yourself.
Note: The GUI will also set an 80% threshold on every thin provisioned volume. This is not a particularly helpful warning unless you have auto-expand turned off - but we are not discussing that use case here. Many customers have taken the decision to disable the volume warning (not the pool warning) to avoid getting an alert when the server uses 80% of the capacity it has been assigned. The CLI to disable the warning for a volume is:
chvdisk -warning 0 <volume name>
OK - I have all of my monitoring in place, but you still haven't answered what I do if I run out of space
So it turns out that there is only one question regarding overallocation - what to do if you run out of space. There are a lot of options here. I will try and go through as many of them as I can think of. You can use just one of these options, or a combination of as many as you like.
If one of those pesky servers decides to actually write to the space that you allocated to it and it uses up all of the free space in the storage pool, then you are in trouble. If the system does not have any more capacity to store the host writes, then the volume will go offline. But it's not only that one volume that goes offline - all the volumes in the storage pool are now at risk of going offline. So what mechanisms and processes should you put in place for when this happens for real?
1/ Automatic out of space protection provided by the product
Now might be a good time to scroll up and remind yourself about the difference between real capacity and used capacity
So you may remember from the beginning of the post that the real capacity is normally going to be bigger than the used capacity. If you created your volume using the GUI, then the difference between the real capacity and the used capacity will normally be 2% of the virtual capacity (unless it is a FlashCopy target volume, in which case it will be almost zero by default in the GUI). Officially we call this 'difference' the contingency capacity, but I like to think of it as the emergency capacity - for reasons which will hopefully become clear.
If the storage pool has run out of space, each volume now has its own emergency capacity. And that emergency capacity is normally pretty sizable (2% of a 2TB volume is about 41GB). The emergency capacity which is dedicated to volume 75 (for example) will allow volume 75 to stay online for anywhere between minutes and days depending upon the change rate of that volume. This means that when you run out of space, you do have some time to repair things before everything starts going offline.
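Here is a back-of-the-envelope sketch of how much breathing room that contingency capacity buys. The 2% default comes from the GUI behaviour described above; the write rate is an assumption you would replace with your own measurements:

```python
# Sketch: how long the per-volume contingency ("emergency") capacity
# might keep a volume online once the pool is full, at a steady rate
# of writes to previously unwritten space.
def contingency_gib(virtual_gib, pct=2.0):
    """Default GUI contingency is 2% of the virtual capacity."""
    return virtual_gib * pct / 100

def hours_of_grace(virtual_gib, pct, new_writes_gib_per_hour):
    """Hours until the emergency capacity is consumed (illustrative)."""
    return contingency_gib(virtual_gib, pct) / new_writes_gib_per_hour

vol = 2048  # a 2 TiB volume, in GiB
print(contingency_gib(vol))          # the "about 41GB" mentioned above
print(hours_of_grace(vol, 2.0, 5))   # ~8 hours at an assumed 5 GiB/hour
```

This is exactly why the grace period ranges "between minutes and days": it is entirely a function of the volume's change rate.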
This automatic protection will probably solve the vast majority of immediate problems, but remember that once you are woken up to be told that you've run out of space - this automatic protection won't buy you an infinite amount of time. You need to know what you are going to do next.
If you are really concerned about running out of space you can always change the size of this emergency capacity by modifying the real capacity. If you manually modify the real capacity with the CLI command expandvdisksize -rsize then the difference between the real capacity and the used capacity at that point will become the new contingency capacity. So you could implement a policy of 10% emergency capacity per volume if you wanted to. Also remember that you don't need to have the same contingency capacity for every volume. Production volumes could be given 10% whilst dev volumes could be given just 1%. However this type of modification is considered advanced usage and therefore is not available in the GUI.
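As an illustration of such a tiered contingency policy, here is a sketch that works out the target real capacity you would then apply with expandvdisksize -rsize. The tier names, percentages and volume sizes are hypothetical:

```python
# Sketch: per-tier contingency policy - production volumes get 10%,
# dev volumes 1% (arbitrary example figures, not recommendations).
POLICY_PCT = {"prod": 10, "dev": 1}

def target_real_gib(used_gib, virtual_gib, tier):
    """Used capacity plus the tier's contingency, capped at virtual size."""
    return min(used_gib + virtual_gib * POLICY_PCT[tier] / 100, virtual_gib)

# A hypothetical 2 TiB volume with 500 GiB used
print(target_real_gib(500, 2048, "prod"))  # 500 + 204.8 GiB contingency
print(target_real_gib(500, 2048, "dev"))   # 500 + 20.48 GiB contingency
```

The cap at the virtual capacity matters: a volume's real capacity never needs to exceed what the host could possibly write.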
2/ Buy more storage
The simplest option for what to do when you run out of space is to buy more. There isn't much more I can say about this one - my only caution is to remind you to think about how long it takes between you deciding to buy capacity and that capacity appearing in your data-center. If that delay is more than a few days, then this policy alone probably isn't good enough. Maybe it is worth talking to your procurement team about having a mechanism to get a faster turnaround time for this type of scenario.
3/ Have unallocated storage sitting as standby
You can always have one or more managed disks, or a collection of drives sitting on the floor. These spare drives or managed disks can be added to whichever storage pool runs out of space within only a few minutes. This gives you some breathing room whilst you take some of the other actions. The more managed disks or drives that you have available the more breathing room you have.
One of the nice points of this idea is you can have one set of capacity protecting multiple storage pools. However I've been told that a lot of management teams don't like to see unused capacity on the floor - so this one can have political challenges. One customer told me that they had got around this by creating a large number of spare drives in their system - because their management didn't count the number of spares.
This is my favourite option, but it isn't always practical or easy to get past your finance team.
4/ Move or delete volumes
Once you have run out of space, you always have the option to migrate volumes to other pools to free up space. This is perfectly reasonable, but remember that data migration on SVC and Storwize is designed to go slowly to avoid causing performance problems, so it may be that you can't complete this migration before your favourite application goes offline.
Of course a very rapid but dire solution is to delete one or more volumes to make space. This is not really recommended, but it's something on your list. Especially if you are sharing the storage pool with both production and development - you (with management approval of course) may choose to sacrifice less important volumes to preserve the critical volumes.
5/ Policy based solutions
Of course no policy is going to solve the problem if you actually run out of space, but you could use policies to reduce the likelihood of that ever happening to the point where you feel comfortable doing less of the other options. What type of policies could you use for thin provisioning? Well I'm no expert - but here are a couple of my thoughts - feel free to suggest more in the comments.
- Limit how far each storage pool can be overallocated - for example, stop creating new volumes in a pool once the sum of the volume capacities reaches 200% of the pool capacity.
- Regularly review the top ten volumes by real capacity growth, so that you know which servers are consuming space fastest and can spot unexpected growth early.
Please note that in the policies above I have picked arbitrary numbers (e.g. top ten volumes or 200% overallocation). These arbitrary numbers are designed to make the suggested policies more readable than simply using 'x' and 'y', however they aren't my recommended numbers. I don't have any recommended numbers to insert into these policies because this is all about business risk, and I don't know enough about your business to make those decisions for you.
6/ Child Pools
Release 7.4 introduced a new feature called child pools, which allows you to make a storage pool which takes its capacity from a parent storage pool rather than from managed disks. This has a couple of interesting use cases for thin provisioning.
You could separate different applications into different child pools. This would prevent any problems with a server in child pool A affecting a server in child pool B. Also - if Child Pool A ran out of space, and the parent pool still had space then you can easily grow the child pool.
Another way you could use child pools is to create a child pool which is called something descriptive like "DO NOT USE" and allocate (for example) 10% of the storage pool capacity to that child pool. Then if the parent pool ever runs out you have a set of emergency capacity that can be given back to the parent pool (after you've worked out which server was eating up all the space and stopped whatever it was doing).
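Sizing such a reserve child pool is simple arithmetic, sketched below. The 10% figure and the pool size are illustrative, not recommendations:

```python
# Sketch: split a parent pool into an emergency reserve child pool
# (e.g. the "DO NOT USE" pool) and the remaining usable capacity.
def reserve_split(parent_pool_gib, reserve_pct=10):
    """Return (reserve, remaining usable) capacity in GiB."""
    reserve = parent_pool_gib * reserve_pct / 100
    return reserve, parent_pool_gib - reserve

reserve, usable = reserve_split(50_000)   # a hypothetical ~50 TiB parent pool
print(reserve, usable)                    # -> 5000.0 45000.0
```

The nice property is that, unlike standby drives on the floor, this reserve is already inside the system and can be returned to the parent pool in moments.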
So to wrap things up I want to let you know that a very large number of our customers are using Thin Provisioning without any problems. Some of you will have spent hours planning an emergency strategy for what to do if you run out of space, others will have spent no time at all planning for this eventuality.
You don't need a 100 page document describing exactly what you will do in the event of an out of space condition - but it might allow you to sleep easier at night to have at least thought about it and to have a rough plan of how you would tackle the problem if it ever occurs.
I know that running out of space isn't the only interesting topic for Thin Provisioning, and I'm sure many of you will point this out to me in the comments, but since I've managed to write nearly 3000 words just about the out of space topic, it seems like adding more factors here would be counterproductive.
As with all things - I merely offer my opinion of how I would approach this type of problem, but do not claim to have the only definitive or correct answer. Feel free to post your own ideas in the comments. And remember if there are topics you'd like to see me do a blog post about - add them as a comment to my Welcome post.