Shuffling disk data around

Disk maintenance

Data maintenance on disks is a common task for any system administrator, and in my experience one of the most frequent jobs is data movement. If hotspots develop on a disk, a quick fix is to move some logical volumes off to another disk to ease congestion; the migratelp command is your trusted friend in that situation. When you are dealing with a failing disk, or simply migrating data from one disk to another, you can use the migratepv command or create a mirror copy, and you are not restricted to just one method. For original volume groups (VGs), adding a disk to the VG may run into a factoring issue; however, by understanding the characteristics of the VG, the factoring can be changed. These common disk maintenance tasks are discussed with examples in this article.


David Tansley (david.tansley@btinternet.com), System Administrator, Ace Europe

David Tansley is a freelance writer. He has 15 years of experience as a UNIX administrator, using AIX for the last eight years. He enjoys playing badminton and relaxing watching Formula 1, but nothing beats riding and touring on his GSA motorbike with his wife.



02 January 2013


When talking about disk maintenance, it is worth getting used to some of the common abbreviations for disk-related attributes, as it keeps the chatter down.

  • VG: Volume group
  • LV: Logical volume
  • LP: Logical partition
  • PP: Physical partition
  • PV: Physical volume (disk)

It often happens that you have just arrived at the office and all the users and support people start complaining about slowness on the system. A quick check by the numbers includes:

  • Processor bound
  • Memory bound
  • Disk access
  • Network
  • Process hounds

After doing some quality testing, you may have concluded that it is disk access: the spread of data on the disk is causing congestion when it is accessed. Nobody likes it, but when you need to move or migrate LVs around, you generally do not get much notice. So let us assume you have identified hotspots on the disk by interpreting the output from tools such as filemon, topas, nmon, or lvmstat. You need to move that data over to another disk to ease congestion. The other disk can be a new disk or, more probably, an existing disk in the VG that is not so full of data. Looking at a couple of scenarios, let us see how we can move data from one disk to another. Before we do that though, it is good to know a few commands that come in very handy when looking at LVs and PVs.

Disk-related commands to keep in your top drawer

The following are, I believe, all the commands you need to know to extract the right information before undertaking a data migration task.

Getting information from a PV

Getting the size of the disk in question (in MB) is always good to know. Assuming that the disk is hdisk4, use the getconf command to find the size:

# getconf DISK_SIZE /dev/hdisk4
9216
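
If getconf is not to hand, the bootinfo command can also report the size of a disk in MB. A quick alternative sketch, assuming the same disk, hdisk4 (verify the flag on your AIX level):

# bootinfo -s hdisk4
9216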

Use the lspv command to extract information about the disk.

lspv -l <hdiskx>:

The above command lists the LVs with their LP and PP counts, the distribution, and the mount points of the file systems, where applicable.

lspv -m <hdiskx>:

The above command lists the PV, PP number, LV, and LP numbers.

Getting information from a VG

Use the lsvg command to extract the layout of a VG, in which a PV (or PVs) reside.

lsvg <vg_name>:

The above command lists general information about the VG attributes, notably the PP size and the total, free, and used PPs.

lsvg -l <vg_name>:

The above command lists the type of file system, the LPs and PVs as well as the LV state (whether it is open or closed) and the file system mount points, if applicable.

lsvg -p <vg_name>:

The above command lists the PVs belonging to that VG along with the total and free PP.
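
As a quick sketch of how you might pull out just the capacity figures before a migration, the key fields can be filtered from the plain lsvg output (the field labels shown are those from standard lsvg output):

# lsvg vg00 | egrep "PP SIZE|TOTAL PPs|FREE PPs|USED PPs"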

Getting information from an LV

You can use the lslv command to extract information about an LV.

lslv -l <lv_name>:

The above command lists the PVs that the LV resides on.

lslv -m <lv_name>:

The above command lists the LPs, their PP numbers, and the PV for each copy, across all PVs that the LV resides on.

The output from the above commands gives you enough information to determine if a data migration is good to go, when dealing with the following migration techniques:

  • migratelp
  • migratepv
  • disk mirroring

In this demonstration, I have created small file systems, which means small LVs. This is so that I can keep the output minimal. In reality, the LV of a normal application will be quite large, and so a listing of the LPs of an LV will be long. A tip is to print out the LV listing and then identify the LPs to be moved by using a highlighter pen.
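
If printing the listing is not practical, a little shell filtering does the same job as the highlighter pen. A minimal sketch, assuming the lslv -m column layout shown later in this article (LP, PP1, PV1, and so on), which prints the LP numbers whose first copy sits on hdisk2:

# lslv -m fslv00 | awk 'NR > 2 && $3 == "hdisk2" {print $1}'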


Moving data using the migratelp command

If you need to move LPs, that is, part of an LV, off one disk to another, then migratelp is your friend. You can specify the LV segment or segments that you want to move to another disk.

It is not a coincidence that the output from lvmstat, the LV monitoring tool, closely resembles the required input format that you need to use for the migratelp command.
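
As a rough sketch of how you might use lvmstat to pin down the busy LPs before deciding what to move (statistics collection is off by default; check the flags on your AIX level):

# lvmstat -e -l fslv00     # enable statistics collection for the LV
# lvmstat -l fslv00        # report per-LP I/O counts; the busiest LPs are your hotspots
# lvmstat -d -l fslv00     # disable collection again when you are done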

For this demonstration, the format of the migratelp command is:

migratelp <LV/LP> <destination_PV>

It might be the case that the LV you want to move is mirrored. That is okay; just select the copy you want to move.
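
For a mirrored LV, migratelp accepts an optional copy number after the LP number. A minimal sketch, assuming you want to move the second copy of LP 1 of fslv00 to hdisk3:

# migratelp fslv00/1/2 hdisk3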

Assume that we have the following disks:

# lspv
hdisk1          00c23bed42b3afff                    None
hdisk2          00525c6a888e32cd                    vg00            active
hdisk3          00c23bed32883598                    None
hdisk0          00c23bed42b3aefe                    rootvg          active

It has been decided that the LV fslv00 is under stress, perhaps from heavy reading and writing to the disk. The plan is to add another disk to the VG vg00 and migrate part of fslv00 to the new disk to ease congestion. First, let us look at the disk hdisk2, where fslv00 currently resides:

# lspv -l hdisk2
hdisk2:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv00                4       4       00..04..00..00..00    /devhold
loglv00               1       1       00..01..00..00..00    N/A
fslv01                4       4       00..04..00..00..00    /apps

Now, let us take a look at the actual VG, vg00.

#  lsvg -l vg00
vg00:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
fslv00              jfs2       4       4       1    open/syncd    /devhold
fslv01              jfs2       4       4       1    open/syncd    /apps

We can now see that the LV fslv00 resides on one disk. Let's now add another disk, hdisk3, to the VG:

# extendvg vg00 hdisk3

Confirm that the disk is added, but we already know that it is, because we did not get any errors when extending the VG:

# lsvg -p vg00
vg00:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            542         533         109..99..108..108..109
hdisk3            active            2187        2187        438..437..437..437..438

Now, let us look at our LV placement. We can see that all the LPs of the LV fslv00 are located on hdisk2:

#  lslv -m fslv00
fslv00:/devhold
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0111 hdisk2
0002  0112 hdisk2
0003  0113 hdisk2
0004  0114 hdisk2

Let's now move the first two LPs, that is, 0001 and 0002, from fslv00 to the new disk, hdisk3.

# migratelp fslv00/1 hdisk3
migratelp: Mirror copy 1 of logical partition 1 of logical volume
        fslv00 migrated to physical partition 439 of hdisk3.
# migratelp fslv00/2 hdisk3
migratelp: Mirror copy 1 of logical partition 2 of logical volume
        fslv00 migrated to physical partition 440 of hdisk3.

All looks good from the above two migratelp commands. Let’s now confirm that the LPs are on the new disk by querying the LV fslv00 again:

# lslv -m fslv00
fslv00:/devhold
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0439 hdisk3
0002  0440 hdisk3
0003  0113 hdisk2
0004  0114 hdisk2

As expected, the LV is now spread across the two disks: hdisk2 and hdisk3.

You can confirm this further by listing the contents of both hdisks, as shown below:

# lspv -l hdisk3
hdisk3:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv00                2       2       00..02..00..00..00    /devhold
# lspv -l hdisk2
hdisk2:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv00                2       2       00..02..00..00..00    /devhold
loglv00               1       1       00..01..00..00..00    N/A
fslv01                4       4       00..04..00..00..00    /apps

If at some point you identify that the migration has made no difference, no problem: just move the LPs back, as shown below:

# migratelp fslv00/1 hdisk2
migratelp: Mirror copy 1 of logical partition 1 of logical volume
        fslv00 migrated to physical partition 111 of hdisk2.
# migratelp fslv00/2 hdisk2
migratelp: Mirror copy 1 of logical partition 2 of logical volume
        fslv00 migrated to physical partition 112 of hdisk2.

Now, if we query hdisk3, there should no longer be any part of an LV residing on it:

# lspv -l hdisk3

No output is returned, and this indicates that there is no data on the disk.

And for hdisk2, the placement of fslv00 is now back to what it was earlier:

# lspv -l hdisk2
hdisk2:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv00                4       4       00..04..00..00..00    /devhold
loglv00               1       1       00..01..00..00..00    N/A
fslv01                4       4       00..04..00..00..00    /apps

If further confirmation is required, just query the LV:

# lslv -m fslv00
fslv00:/devhold
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0111 hdisk2
0002  0112 hdisk2
0003  0113 hdisk2
0004  0114 hdisk2

Mirror migration

There are many ways to move data. Previously, we looked at the migratelp command. Let us now look at disk mirroring. In this demonstration, imagine that we have a failing disk and we need to get the data off to a new disk. We bring in another disk and mirror the LVs across to it. After the mirroring is completed, the original copies are removed, and the failing disk is then removed from the VG. Let's assume a disk has already been brought into the VG vg00. The next task is to create a copy of the LVs on the new disk.

First, let's review the layout of the VG, vg00:

#  lsvg -l vg00
vg00:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
fslv00              jfs2       4       4       1    open/syncd    /devhold
fslv01              jfs2       4       4       1    open/syncd    /apps

Assume that the new disk, hdisk3, has been added to the VG vg00. Next, create copies of all the LVs on the newly added disk. In this demonstration, the format of the mklvcopy command is:

mklvcopy <LV_name> <copy_number> <destination_PV>

Where copy_number is 2, that is, the second (copy) occurrence of the LV, and destination_PV (in this demonstration) is hdisk3.

# mklvcopy fslv00 2 hdisk3
# mklvcopy fslv01 2 hdisk3
# mklvcopy loglv00 2 hdisk3

We now have copies of the LVs on the new disk. Next, we need to synchronize (mirror) the LVs using the syncvg command:

# syncvg -l fslv00
# syncvg -l fslv01
# syncvg -l loglv00

We are now all mirrored up to the new disk, hdisk3. This can be confirmed by listing the VG. Note that the PPs value is double the LPs value for each LV, which means that the LVs are mirrored:

#  lsvg -l vg00
vg00:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       2       2    open/syncd    N/A
fslv00              jfs2       4       8       2    open/syncd    /devhold
fslv01              jfs2       4       8       2    open/syncd    /apps

Now that we have the data on a good disk, we can remove the original copy, that is, the LVs on hdisk2, the failing disk. In this demonstration, the format of the rmlvcopy command is:

rmlvcopy <LV_name> <copy_number> <PV_to_remove_copy>

Where copy_number will be 1, that is, the first occurrence (or, if you prefer, the original copy) of the LV, and PV_to_remove_copy in this demonstration is hdisk2.

So let's get those LV copies removed from the failing disk, hdisk2:

# rmlvcopy fslv00 1 hdisk2
# rmlvcopy fslv01 1 hdisk2
# rmlvcopy loglv00 1 hdisk2

All the original copies on hdisk2 have now been removed. We should not have any data on hdisk2, and this can be confirmed by viewing the VG. In the following output, notice that for hdisk2, the TOTAL PPs and FREE PPs values are the same, which means that the disk is empty:

# lsvg -p vg00
vg00:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            542         542         109..108..108..108..109
hdisk3            active            2187        2178        438..428..437..437..438

You can further confirm this by listing both disks. hdisk2 should have no data but hdisk3 should have the LVs on it.

# lspv -l hdisk2

No output is returned and this indicates that there is no data on the disk.

# lspv -l hdisk3
hdisk3:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv00                4       4       00..04..00..00..00    /devhold
loglv00               1       1       00..01..00..00..00    N/A
fslv01                4       4       00..04..00..00..00    /apps

Now, all that is left to do is to remove the failing disk, hdisk2, from the VG:

# reducevg vg00 hdisk2

To confirm that the VG now has only the good hdisk3 disk:

# lsvg -p vg00
vg00:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk3            active            2187        2178        438..428..437..437..438

The data has been successfully moved from the failing disk to a new disk in the VG vg00. At some point, the failing disk, which is now not associated with any VG, would be physically replaced.
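
As an aside, if you prefer to work at the VG level rather than looping over mklvcopy, syncvg, and rmlvcopy for each LV, the mirrorvg and unmirrorvg commands wrap those steps. A rough sketch of the same migration (check the flags on your AIX level before relying on them):

# mirrorvg vg00 hdisk3      # create and synchronize a copy of every LV in vg00 on hdisk3
# unmirrorvg vg00 hdisk2    # drop the copies held on the failing disk, hdisk2
# reducevg vg00 hdisk2      # remove the emptied disk from the VG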


Copy data in LVs using the migratepv command

When you have a heavily populated disk that contains many LVs to migrate, it is sometimes more efficient to copy the data by using the migratepv command. You can also use this command to copy individual LVs. Assume that we need to migrate data from hdisk2 to hdisk3.

The size of the destination disk can be smaller than that of the source disk, as long as the LVs being moved will all fit on the destination disk.

In this demonstration, the format of the migratepv commands used is:

migratepv <source_PV> <destination_PV>
migratepv -l <LV> <source_PV> <destination_PV>

Where <LV> is the name of the LV you want to migrate.

In the following output for the VG vg00, you can see that hdisk3 is empty. This is confirmed by the TOTAL PPs and FREE PPs values, which are the same, meaning that there is no data on that disk.

# lsvg -p vg00
vg00:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            542         532         109..98..108..108..109
hdisk3            active            2187        2187        438..437..437..437..438

We can further confirm that all the LVs reside on hdisk2, by running the lslv command for each LV:

# lslv -m fslv00
fslv00:/apps
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0111 hdisk2
0002  0112 hdisk2
0003  0113 hdisk2
0004  0114 hdisk2
0005  0115 hdisk2
# lslv -m fslv01
fslv01:/devhold
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0116 hdisk2
0002  0117 hdisk2
0003  0118 hdisk2
0004  0119 hdisk2
# lslv -m loglv00
loglv00:N/A
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0110 hdisk2

Now, let us migrate the LV fslv00 from hdisk2 to hdisk3 by using the following migratepv command:

# migratepv -l fslv00 hdisk2 hdisk3

By viewing the LV, we can tell that the LV fslv00 is now residing on hdisk3:

# lslv -m fslv00
fslv00:/apps
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0439 hdisk3
0002  0440 hdisk3
0003  0441 hdisk3
0004  0442 hdisk3
0005  0443 hdisk3

Assuming that you now need to copy all the other LVs across, that is, all the remaining data from hdisk2 to hdisk3, run the following command:

# migratepv hdisk2 hdisk3

After successful completion of the migratepv command, there will not be any data on hdisk2, and all data now resides on hdisk3:

# lspv -l hdisk2

The above command does not return any output and this indicates that there is no data on the disk.

# lspv -l hdisk3
hdisk3:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
fslv00                5       5       00..05..00..00..00    /apps
loglv00               1       1       00..01..00..00..00    N/A
fslv01                4       4       00..04..00..00..00    /devhold

Next, run the following command to confirm that the VG has the LVs open and there are no issues.

# lsvg -l vg00
vg00:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
fslv00              jfs2       5       5       1    open/syncd    /apps
fslv01              jfs2       4       4       1    open/syncd    /devhold

Factoring the volume group

Before scalable and big VGs became a major feature within IBM® AIX®, VGs were created as normal or original VGs. You can identify the type of VG that you have by querying the VG and looking at the MAX PVs value. As a rule of thumb, use the following data.

  • If 32 PVs, then it is an original VG.
  • If 128 PVs, then it is a big VG.
  • If 1024 PVs, then it is a scalable VG.

Be careful here though: not all VGs return the expected result. This is usually down to VG maintenance carried out after the VG was originally created.
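
To check, query the VG and look at the MAX PVs field. A quick sketch, with illustrative output for an original VG (the exact spacing will differ on your system):

# lsvg appsvg | grep "MAX PVs"
MAX PPs per PV:     1016                     MAX PVs:        32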

Historically, old VGs would be original VGs with a defined set of PPs allocated, and the PP and PV limits are directly related. If you place a disk that is far greater in size into an original VG where all the existing disks are small, you will hit a factoring issue. There are also other instances where you can get factoring issues. A factoring issue is not really a big deal. Your VG will be allowed fewer disks if you change the factoring. This is because, by increasing the factor, you increase the maximum number of PPs allowed per PV (1016 multiplied by the factor), which in turn decreases the number of disks you can include in the VG. Is this really a problem? Not really, in my book. So here you have a couple of choices:

  • With the VG varied off, use the chvg -B command to convert to a big VG.
  • With the VG varied off, use the chvg -G command to convert to a scalable VG.

Or, in the future, make sure that any new VG is created as scalable, so that you do not hit factoring issues. Assume a situation where converting the VG is not an option because you cannot have the file systems offline. To get the disk into the VG, you must change the factor size. Here is an example of a factoring error with an original VG. Assuming that I have a VG called appsvg containing a disk of 17 GB in size, and I then try to add a 70 GB disk (hdisk3) to it, I get the following error:

# extendvg appsvg hdisk3
0516-1162 extendvg: Warning, The Physical Partition Size of 32 requires the
        creation of 2187 partitions for hdisk3.  The limitation for volume group
        appsvg is 1016 physical partitions per physical volume.  Use chvg command
        with -t option to attempt to change the maximum Physical Partitions per
        Physical volume for this volume group.
0516-792 extendvg: Unable to extend volume group.

The above output pretty much tells you what the issue is. Anyway, change the factoring size. In this demonstration, I opt for a factor of 3:

 # chvg -t3 appsvg
0516-1164 chvg: Volume group appsvg changed.  With given characteristics appsvg
        can include upto 10 physical volumes with 3048 physical partitions each.
# extendvg appsvg hdisk3

With my factor change, I can now only have 10 disks in the VG instead of the original 32 (for an original VG). But that is fine, as at least I have got my big disk into the VG. In this example, I required more disk space, and I got it.
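
To sidestep factoring altogether on new builds, create the VG as scalable from the start. A minimal sketch, using a hypothetical VG name and disk:

# mkvg -S -y datavg hdisk5      # -S creates a scalable VG; datavg and hdisk5 are hypothetical names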


Conclusion

In this article, I have demonstrated different ways in which LVs can be moved, whether you are dealing with disk congestion or a failing disk. There are other tools that you can use to move data as well; however, I have focused on data migration techniques that allow you to keep the system online while migrating data.
