An important warning about importing a VG
I've been playing with LVM tunables, specifically those to do with pbufs, to see whether changes to these parameters stay with a volume group when it gets moved to a new LPAR.

First, some background

A pbuf is a pinned memory buffer. As this developerWorks article
explains, "The LVM always uses one pbuf for each individual I/O request, regardless of the amount of data that is transferred. AIX creates extra pbufs when adding a new PV to a VG."
The lvmo command is used to manage pbuf tuning parameters. It lets you view or set the pbuf count per volume group rather than globally. You can see the number of blocked I/Os for a volume group using the lvmo command, and the global blocked count (the total across all volume groups) using vmstat -v, where it's reported as "pending disk I/Os blocked with no pbuf".
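Both counters can be checked like this (a sketch; datavg is an example volume group name, and the vmstat line text is as quoted above):

```shell
# Per-volume-group pbuf statistics, including pervg_blocked_io_count
lvmo -v datavg -a

# Global blocked count across all volume groups
vmstat -v | grep "blocked with no pbuf"
```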
For now, the question is about setting the pv_pbuf_count.
I was curious. If the pv_pbuf_count (a pbuf count for each physical volume) is set on a volume group basis, where does it get stored? Is it in
- /etc/tunables/nextboot, the file containing the list of non-default tunable parameters?
- the volume group descriptor area (VGDA) on each PV?
- or somewhere else, such as the ODM?
This is an important question: if the volume group gets exported and imported on another LPAR, what happens to the lvmo settings that have been changed from the default? Do they need to be set again, or do they travel with the volume group?

Setting the pbuf count
First, I'll change a volume group's pv_pbuf_count from the default value of 512 to 2048 using lvmo:
lvmo -v datavg -o pv_pbuf_count=2048
Now, display the current settings and statistics using lvmo -v datavg -a:
vgname = datavg
pv_pbuf_count = 2048
total_vg_pbufs = 2048
max_vg_pbufs = 16384
pervg_blocked_io_count = 0
pv_min_pbuf = 512
max_vg_pbuf_count = 0
global_blocked_io_count = 59
The new pv_pbuf_count is set to 2048: we're allowed 2048 pbufs for each PV in the volume group. The total pbufs for the volume group (total_vg_pbufs) is also 2048, because the volume group has only one PV in it. The global_blocked_io_count is 59, but that's not from this VG, as the pervg_blocked_io_count (blocked I/Os for this volume group) is 0.
This tunable parameter (pv_pbuf_count) survives a reboot, so where is the change recorded? In /etc/tunables/nextboot? No; that file was unchanged. (If we'd used the old, system-wide way of changing the pbufs with ioo, we'd see it in nextboot, but that would change it for all volume groups, not just the one we want.)
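For comparison, the old global approach looked something like this (a sketch; pv_min_pbuf is the system-wide ioo tunable, and -p applies the change now and records it in /etc/tunables/nextboot so it persists across reboots):

```shell
# Old, system-wide approach: raise the minimum pbuf count per PV
# for every volume group on the system, persisting via nextboot
ioo -p -o pv_min_pbuf=1024
```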
So is the setting in the VGDA? I'll export the volume group using exportvg, import it again, and see what happens. Ordinarily you'd export from one LPAR, map the LUN to another LPAR and then import the volume group there, but doing the export and import on the same LPAR will prove the point for this exercise.
Before exporting the volume group, I need to unmount any file systems in it. You can list the file systems in a volume group using the lsvgfs command. Having done that, you can deactivate and export the volume group:
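Something like this (a sketch assuming the VG is datavg with a single file system mounted at /data, which is an assumption; substitute your own mount points):

```shell
# List the file systems in the volume group
lsvgfs datavg

# Unmount each of them (assuming /data here)
umount /data

# Deactivate the volume group, then export it
varyoffvg datavg
exportvg datavg
```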
And then import it again, to see what happens to our beloved pv_pbuf_count
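The import step might look like this (assuming the VG's disk is hdisk1, which is an assumption; use lspv to identify the PV on your system):

```shell
# Re-import the volume group from its physical volume
importvg -y datavg hdisk1

# importvg activates the VG; remount the file systems if needed
mount /data
```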
Now, let's see what happened to the pv_pbuf_count:

lvmo -v datavg -a
Aha! The export and import have reset the pv_pbuf_count back to the default of 512.
When you do a volume group export and import (a great way of moving all of a volume group's data to a new location, rather than copying or restoring it), the logical volumes and file systems move across to the target system, but tuning parameters don't come along for the ride. That makes sense, I suppose: other ODM attributes, such as the queue depth on the PV, aren't inherited either when the PV is removed from one system and added to another.
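The practical upshot: after an import, reapply any non-default lvmo settings yourself, then verify them (datavg and 2048 are the values from this exercise):

```shell
# After importvg, the per-VG pbuf tuning is back at the default,
# so set it again on the target LPAR
lvmo -v datavg -o pv_pbuf_count=2048

# Verify the new value and the blocked-I/O counters
lvmo -v datavg -a
```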
Something to be aware of.