Migrating or copying data is a task that system administrators carry out frequently. There are various tools available for these tasks, including cp, tar, and cplv.

David Tansley (david.tansley@btinternet.com), System Administrator, Ace Europe

David Tansley is a freelance writer. He has 15 years of experience as a UNIX administrator, using AIX for the last eight years. He enjoys playing badminton and relaxing watching Formula 1, but nothing beats riding and touring on his GSA motorbike with his wife.



21 June 2011


Introduction

Migrating or moving data is a common task, whether it is copying data across the network to a new filesystem, copying logical volumes within the same volume group or to a different one, or simply creating a backup of a filesystem. Data is typically moved or copied because of performance issues, or because general data growth means there is not enough space in the current environment. Different tools can be used for these data movement tasks, such as migratepv, cplv, tar, cpio, cp, or rsync. For jfs you can use splitcopy, or for jfs2 you can use snapshot, to take a copy of a filesystem. There is no golden rule on which method best suits a given data movement. In this article, I demonstrate different methods to move or copy data at the filesystem and logical volume level, focusing on the following AIX utilities: cplv, tar, and cp.


Using tar and cp to copy data to a new filesystem

When applying updates to an application filesystem, a backup would be taken first, most probably to tape. However, if space allows, a copy of the filesystem where the application resides can also be taken. The advantage of this is that it allows a quick recovery by swapping over the mount points. It also lets you quickly compare the upgraded files to the original ones. Let's assume a filesystem called /opt/pluto holds an application that is to be upgraded:

# df -g
…
/dev/fslv00        1.00      0.03   97%       22     1% /opt/pluto

Let's look at three ways we could copy the application files across to another filesystem, using cp, tar, and cplv.

First, the backup (copied) filesystem needs to be created. This is carried out using the crfs command, making sure the new filesystem is of the same type and at least the same size as the original. The current /opt/pluto filesystem is 1GB in size and is of type jfs2. The new filesystem is called /opt/pluto_bak. The following command achieves this:

# crfs -v jfs2 -g rootvg -m /opt/pluto_bak -A yes -p rw -a agblksize=4096 -a size=1G
File system created successfully.
524068 kilobytes total disk space.
New File System size is 1048576

In the example the input parameters mean the following:

-v jfs2 Specifies a jfs2 filesystem type.
-g rootvg Specifies the volume group (rootvg) in which to create the filesystem.
-m /opt/pluto_bak Specifies the mount point.
-A yes Specifies that the filesystem is mounted automatically upon reboot.
-p rw Specifies read-write permissions.
-a agblksize=4096 Specifies the block size in bytes.
-a size=1G Specifies the size of the filesystem to be created; in this example, it is 1GB.

Next, mount the filesystem /opt/pluto_bak:

# mount /opt/pluto_bak

After creating a filesystem, do not forget to apply the correct ownership and permissions on the mount point of the filesystem, if required.

Now we copy the data using the cp command with the -Rph flags, which copy recursively, preserve permissions and modification times, and copy symbolic links as links rather than following them.

The following cp command copies all files and symbolic links (if any) from /opt/pluto to /opt/pluto_bak:

# cd /opt/pluto
# pwd
/opt/pluto
# cp -Rph * /opt/pluto_bak

Be sure to test that the copy was done correctly by counting the files and running du on both the original and the copied filesystem. For /opt/pluto, we have:

# pwd
/opt/pluto
# du -ms .
988.46  .
# ls |wc
      17      17     146

For /opt/pluto_bak, we have:

# cd ../pluto_bak
# ls |wc
      17      17     146
# du -ms .
988.46  .

The outputs from the original and the copied filesystem match up, so the copy completed successfully.
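Matching file counts and du totals is a reasonable smoke test, but it will not catch files whose contents differ. As a stronger, portable check (a sketch of my own, not an AIX-specific tool; ckcompare is a hypothetical name), you can compare per-file checksums between the two trees:

```shell
#!/bin/sh
# Compare two directory trees by per-file checksum (portable sketch).
ckcompare() {
    src=$1; dst=$2
    # Build "checksum size ./path" lists, sorted by path for a stable diff.
    ( cd "$src" && find . -type f -exec cksum {} + | sort -k3 ) > /tmp/ck.src
    ( cd "$dst" && find . -type f -exec cksum {} + | sort -k3 ) > /tmp/ck.dst
    if diff /tmp/ck.src /tmp/ck.dst > /dev/null; then
        echo "trees match"
    else
        echo "trees differ"
        diff /tmp/ck.src /tmp/ck.dst
    fi
}
```

For example, ckcompare /opt/pluto /opt/pluto_bak should report that the trees match after a successful copy.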

Now we will do the same operation again, but this time using the tar utility.

Using tar is generally quicker when dealing with lots of smaller files. If your files are greater than 2GB, then be sure to use the GNU tar utility.
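To check in advance whether any files would trip that limit, a quick find sketch can list everything over 2GB (big_files is just an illustrative name; the threshold is expressed in 512-byte blocks, 4194304 of which make 2GB):

```shell
#!/bin/sh
# List regular files larger than 2GB under a directory tree (sketch).
# 2GB = 4194304 blocks of 512 bytes, the unit find's -size uses by default.
big_files() {
    find "$1" -type f -size +4194304 -print
}
```

If big_files /opt/pluto prints nothing, the stock tar is fine; otherwise reach for GNU tar.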

# cd /opt/pluto
# pwd
/opt/pluto
# tar cpf - . | (cd /opt/pluto_bak; tar xpf - )

In the previous command, tar archives the current directory, preserving modification times and permissions, and writes the archive to standard output. The output is then piped through to a sub-shell, which changes directory to /opt/pluto_bak and untars (extracts) from standard input into the /opt/pluto_bak filesystem.

As before, be sure to check that the total size and number of files match between the original and the copied filesystem.

If you wish to see the files being archived and extracted, add the verbose (v) option:

# cd /opt/pluto
# tar cvpf - . | (cd /opt/pluto_bak; tar xvpf - )
a .
a ./lost+found
a ./myfile.dat 0 blocks.
a ./pop 1 blocks.
a ./test100M.bin 195313 blocks.
x .
x ./lost+found
x ./myfile.dat, 0 bytes, 0 media blocks.
x ./pop, 139 bytes, 1 media blocks.
x ./test100M.bin, 100000000 bytes, 195313 media blocks.
a ./myfile1 195313 blocks.
x ./myfile1, 100000000 bytes, 195313 media blocks.
a ./mprep 195313 blocks.
x ./mprep, 100000000 bytes, 195313 media blocks.
a ./chklp 195313 blocks.
x ./chklp, 100000000 bytes, 195313 media blocks.
a ./poplt 195313 blocks.
x ./poplt, 100000000 bytes, 195313 media blocks.
…..
…..
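One caveat with the pipeline form: the shell reports only the exit status of the last command, so a failure in the creating tar can pass silently. A small sketch (the tarcopy helper name is mine, and it assumes the destination directory starts out empty) combines the pipe with the file-count sanity check mentioned earlier:

```shell
#!/bin/sh
# Copy a directory tree with a tar pipe, then sanity-check the result.
# Sketch only: tarcopy is a hypothetical helper, and the entry-count check
# assumes the destination directory was empty before the copy.
tarcopy() {
    src=$1; dst=$2
    ( cd "$src" && tar -cpf - . ) | ( cd "$dst" && tar -xpf - ) || return 1
    # A plain pipeline reports only the extracting tar's status, so
    # follow up with a crude entry-count comparison.
    n1=$( cd "$src" && find . | wc -l )
    n2=$( cd "$dst" && find . | wc -l )
    [ "$n1" -eq "$n2" ]
}
```

tarcopy /opt/pluto /opt/pluto_bak would then mirror the manual pipeline shown above.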

Using tar to copy data to a remote filesystem

You can also use tar to copy data across the network. Although scp will do the job, I generally prefer to use tar for filesystem copies. To tar across the network, use ssh as the transport method. You could use rsh, but I recommend against it because of the security holes it opens.

Assume we wish to copy data from /opt/pluto on the local host to the /opt/pluto filesystem on the remote host nordkapp. I could use:

# cd /opt/pluto
# tar -cpf - . | ssh nordkapp "(cd /opt/pluto; tar -xpf -)"

As can be seen from this example, there is not much difference in the tar command between a local copy/restore and a remote one. Note that the remote command is quoted so that it runs on the remote host rather than locally. The previous command also assumes that ssh keys have been exchanged to allow password-less login.
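For repeated use, the remote copy can be wrapped in a small function. This is an illustrative sketch (remote_tarcopy is my own name for it), and it assumes the ssh keys are in place as noted above:

```shell
#!/bin/sh
# Copy a directory tree to a remote host over ssh (sketch).
# Assumes password-less ssh keys have already been exchanged.
remote_tarcopy() {
    src=$1; host=$2; dst=$3
    ( cd "$src" && tar -cpf - . ) | ssh "$host" "cd '$dst' && tar -xpf -"
}
```

remote_tarcopy /opt/pluto nordkapp /opt/pluto reproduces the one-liner above.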


Copying a logical volume within a volume group

Using cplv is generally slower than using cp or tar when dealing with smaller filesystems. Another point to consider is that cplv copies the entire logical volume across, whereas tar and cp only need to copy the files. As a general rule of thumb, I would use cplv if the filesystem is greater than 10 GB.

In the following example, we are going to copy the logical volume that resides under /opt/pluto, called fslv00, to the logical volume of the copied filesystem, fslv01. The cplv command overwrites the current contents of fslv01, which is what we want. Notice that in this example the filesystem /opt/pluto_bak has been created as demonstrated earlier; there is no data in that filesystem at the moment. This can be seen from the output of the df command:

# df -g
…..
/dev/fslv00   1.00      0.03   97%       22     1% /opt/pluto
/dev/fslv01   1.00      1.00    1%        4     1% /opt/pluto_bak

The first thing to do is unmount both filesystems. Unlike tar and cp, cplv cannot be carried out with the filesystems online:

# umount /opt/pluto
# umount /opt/pluto_bak

If the filesystem reports that it cannot be unmounted because it is busy, check that the application is closed. Then use fuser to determine which processes are keeping the filesystem from unmounting:

# fuser -u <filesystem>

For example:

     # fuser -u /opt/pluto

If you decide you wish to kill all processes on that filesystem, use:

# fuser -k <filesystem>

Next, use the cplv command. In this instance, we are going to copy the logical volume fslv00, which overwrites the existing destination logical volume (fslv01). The basic format for this demonstration is:

cplv -e <dest lv> -f <source lv>

Where:

-e Copies the logical volume contents to an existing logical volume.
-f Copies without requesting user confirmation.
<dest lv> Specifies the destination logical volume; in this example, it is fslv01.
<source lv> Specifies the source logical volume; in this example, it is fslv00.

As we are overwriting another logical volume, make sure that the destination logical volume has the logical volume type set to copy instead of jfs2 (in the logical volume attributes). If not, the command fails and you are warned that you need to change it. To check the TYPE, run the following lslv command against the destination lv:

# lslv fslv01 |grep TYPE
TYPE:          jfs2                 WRITE VERIFY:   off

From the previous output, we see that the destination logical volume is set to jfs2. Now set it to copy, and run the lslv command again to ensure that it is set:

# chlv -t copy fslv01
# lslv fslv01 |grep TYPE
TYPE:          copy                 WRITE VERIFY:   off

Now that it is set to copy, we can copy the logical volume:

 # cplv -e fslv01 -f fslv00
cplv: Logical volume fslv00 successfully copied to fslv01 .

From this output, we see that the copy was successful. Once the cplv command completed, the logical volume type was reset back to jfs2. Let's verify that:

 # lslv fslv01 |grep TYPE
TYPE:            jfs2                WRITE VERIFY:   off

All is good. Now, mount the filesystems:

# mount /opt/pluto
# mount /opt/pluto_bak
# df -g
….
/dev/fslv00    1.00      0.03   97%       22     1% /opt/pluto
/dev/fslv01    1.00      0.03   97%       22     1% /opt/pluto_bak

The filesystem /opt/pluto has now been copied to /opt/pluto_bak using the cplv command. Be sure to compare size and file count on the copied filesystem, as before.

If events go wrong during an application upgrade and the application is unusable, you would normally have no choice but to restore from backup. However, as in this demonstration, we have made a copy of the filesystem, so all we need to do is swap the bad application filesystem for the good one. Swapping the filesystems over is quicker than restoring from tape, and you also keep a copy of the failed upgrade filesystem for further diagnostics.

In the following example, assume we wish to swap (by that I mean change the mount point) from /opt/pluto_bak to /opt/pluto. First, we have to change the mount point of /opt/pluto to /opt/pluto_err. This is done so that there are no naming conflicts when we change /opt/pluto_bak to /opt/pluto. Changing the mount point of a filesystem is accomplished using the chfs command. Be sure to have the filesystems unmounted first. The basic format is:

chfs -m <new mount point> <original mount point>

First, change the mount point of /opt/pluto to /opt/pluto_err:

#  chfs -m /opt/pluto_err /opt/pluto

Now that we have changed the mount point of /opt/pluto to /opt/pluto_err, we can change /opt/pluto_bak to /opt/pluto:

# chfs -m /opt/pluto /opt/pluto_bak

Next, mount the filesystems:

# mount /opt/pluto_err
# mount /opt/pluto

The application is now usable. The failed application now resides on /opt/pluto_err, which has also been mounted. This allows further investigation since the application filesystems are now both mounted, and as such, a comparison can now be carried out.


Copying a logical volume to a different volume group

Copying filesystem data to a different volume group can be done using tar, cp, or cplv. In this section, I use cplv. The procedure is similar to the previous demonstrations; the only difference is that instead of the target filesystem residing in the same volume group, it is created in a different volume group. When copying to a different volume group, the logical volume device (and possibly the log device) attributes in /etc/filesystems have to be changed so that AIX knows where to look for the filesystem when a request to mount it is made.

Let's look at how to copy a logical volume to a different volume group. The steps involved are:

  • Unmount the /opt/pluto filesystem.
  • Copy the logical volume that /opt/pluto resides on (fslv00) to a new logical volume, pluto_lv, using the cplv command.
  • If this is a new volume group, create a new jfs2log and format it. Otherwise, use the existing jfs2log that resides on that volume group.
  • Edit /etc/filesystems to point /opt/pluto's dev and log device to the correct devices that now reside on that volume group.
  • Mount the /opt/pluto filesystem on the newly copied logical volume, pluto_lv.
  • Sanity check the newly mounted filesystem.
  • Unmount the filesystem, run fsck on it, and re-mount it.

Using our /opt/pluto filesystem, assume we wish to copy the logical volume fslv00 from rootvg to a different volume group, apps_vg, renaming the logical volume to pluto_lv in the process.

The basic format of the cplv command for this demonstration is:

cplv -v <dest vg> -y <new lv name> <source lv>

Where:

-v <dest vg> Specifies the destination volume group; in this example, it is apps_vg.
-y <new lv name> Specifies the destination logical volume name; in this example, it is pluto_lv.

So the first task is to unmount the filesystem we wish to copy, so that we can access the logical volume.

# umount /opt/pluto

Using the lsvg command, we see that the logical volume is closed:

# lsvg -l rootvg
…
fslv00              jfs2       64      64      1    closed/syncd  /opt/pluto

Now do the copy:

#  cplv -v apps_vg -y pluto_lv fslv00
cplv: Logical volume fslv00 successfully copied to pluto_lv .

Now that it is copied, verify with the lsvg command:

 # lsvg -l apps_vg
apps_vg:
LV NAME     TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
pluto_lv    jfs2       128     128     1    closed/syncd  N/A

From the previous output, we see that the logical volume fslv00 has been copied to the volume group apps_vg and renamed to pluto_lv. However, also notice that there is no jfs2log (journaled filesystem log) entry. This occurs when you copy logical volumes to an empty volume group.

If there is already a jfs2log on the volume group, your copied filesystem can use that one; there is no need to create a new jfs2log.

To create the jfs2log use the mklv command. The basic format of the mklv command for this demonstration is:

mklv -t <type> -y <new lv name> <vg name> <number of LPs>

Where:

-t <type> Specifies the logical volume type; in this case, it is jfs2log.
-y <new lv name> Specifies the new logical volume name; in this example, it is jfslog_lv.
<vg name> Indicates the volume group where the jfs2log is to reside; in this example, it is apps_vg.
<number of LPs> Specifies the number of logical partitions; in this case, one partition is required.

So, to create the jfs2log, which is to be called jfslog_lv, for the volume group apps_vg, use:

# mklv -t jfs2log -y jfslog_lv apps_vg 1
jfslog_lv

You can omit the -y <new lv name> flag and let AIX name the logical volume for you (by default, the first one is called loglv00).

To verify it has been created:

# lsvg -l apps_vg
apps_vg:
LV NAME     TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
pluto_lv    jfs2       128     128     1    closed/syncd  N/A
jfslog_lv   jfs2log    1       1       1    closed/syncd  N/A

The next task is to initialize and format the logical volume jfslog_lv so that it can be used as a jfs2 log. This is achieved using the logform command. When prompted to destroy the contents, answer y. You are not actually destroying anything; rather, you are formatting a newly created logical volume.

# logform /dev/jfslog_lv
logform: destroy /dev/rjfslog_lv (y)?y
#

We are now nearly ready to mount /opt/pluto; however, we must first let AIX know the new device and jfs2log associated with it. If we look at the filesystem entry for /opt/pluto in the /etc/filesystems file, we have:

# grep -w -p "/opt/pluto" /etc/filesystems
/opt/pluto:
        dev             = /dev/fslv00
        vfs             = jfs2
        log             = /dev/hd8
        mount           = true
        options         = rw
        account         = false
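As an aside, the -p (paragraph) flag of grep used above is an AIX extension. On systems without it, an awk sketch such as this (show_stanza is a hypothetical helper name) can pull a single stanza out of a filesystems-style file:

```shell
#!/bin/sh
# Print one /etc/filesystems stanza by mount point: a portable stand-in
# for AIX's grep -w -p (sketch). Stanza headers start in column one and
# end with a colon; attribute lines are indented beneath them.
show_stanza() {
    mnt=$1; file=$2
    awk -v m="$mnt:" '
        $1 == m   { show = 1; print; next }  # matching stanza header
        /^[^ \t]/ { show = 0 }               # a new stanza begins
        show      { print }' "$file"
}
```

For example, show_stanza /opt/pluto /etc/filesystems prints the same stanza as the grep command above.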

We can see that we need to change the following attributes:

dev
log

These attributes reflect the log and device from when the logical volume resided in rootvg. They need to be changed because the logical volume now resides in the volume group apps_vg with different log and dev values. To reflect where the log and logical volume now reside, change the following:

dev  = /dev/fslv00

to

dev = /dev/pluto_lv

If there is already a jfs2log on the volume group where you copied your logical volume, be sure you use that device for the jfs2log attribute when making your changes in /etc/filesystems.

Change:

log   = /dev/hd8

to

log   = /dev/jfslog_lv

Now edit /etc/filesystems and change those attributes as shown in the previous example. Once this is done, verify that the attribute changes have been carried out:

# grep -w -p "/opt/pluto" /etc/filesystems
/opt/pluto:
        dev             = /dev/pluto_lv
        vfs             = jfs2
        log             = /dev/jfslog_lv
        mount           = true
        options         = rw
        account         = false
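Editing /etc/filesystems by hand works fine for a one-off change; for repeated migrations, the two attribute edits can also be scripted. The following awk-based sketch (update_stanza_devs is an illustrative name) rewrites only the dev and log lines inside the named stanza, and it operates on whatever file you point it at, so you can rehearse on a copy before touching the real /etc/filesystems:

```shell
#!/bin/sh
# Rewrite the dev and log attributes of one filesystems stanza (sketch).
# Works on the file you pass in; try it on a copy of /etc/filesystems first.
update_stanza_devs() {
    mnt=$1; newdev=$2; newlog=$3; file=$4
    awk -v m="$mnt:" -v d="$newdev" -v l="$newlog" '
        $1 == m             { in_s = 1; print; next }   # stanza header
        /^[^ \t]/           { in_s = 0 }                # next stanza starts
        in_s && $1 == "dev" { print "        dev             = " d; next }
        in_s && $1 == "log" { print "        log             = " l; next }
        { print }' "$file" > "$file.new" && mv "$file.new" "$file"
}
```

For the example above, update_stanza_devs /opt/pluto /dev/pluto_lv /dev/jfslog_lv /tmp/filesystems.copy makes both changes in one pass without touching any other stanza.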

Now all that is left to do is mount /opt/pluto, which now resides in the apps_vg volume group. The logical volume copy is then complete.

# mount /opt/pluto

# lsvg -l apps_vg
apps_vg:
LV NAME    TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
pluto_lv   jfs2       128     128     1    open/syncd    /opt/pluto
jfslog_lv  jfs2log    1       1       1    open/syncd    N/A

We still have the original logical volume residing in rootvg, with no mount point associated with it. Once /opt/pluto has been checked out, this can be removed. As part of the checking process, it is a good habit and good practice to run fsck on the newly copied filesystem (be sure to unmount it first though).

# umount /opt/pluto
# fsck -y /dev/pluto_lv
The current volume is: /dev/pluto_lv
Primary superblock is valid.
J2_LOGREDO:log redo processing for /dev/pluto_lv
Primary superblock is valid.
*** Phase 1 - Initial inode scan
*** Phase 2 - Process remaining directories
*** Phase 3 - Process remaining files
*** Phase 4 - Check and repair inode allocation map
*** Phase 5 - Check and repair block allocation map
File system is clean

# mount /opt/pluto

Next, assuming everything has checked out OK, remove the original logical volume from rootvg. Be sure to remove the original logical volume before a reboot (or exportvg), as the ODM will still hold the original logical volume and filesystem attributes:

# lsvg -l rootvg
…..
fslv00              jfs2       64      64      1    closed/syncd  N/A

# rmlv fslv00
Warning, all data contained on logical volume fslv00 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume fslv00 is removed.

Conclusion

Moving or copying data between filesystems is a common task for system administrators, whether within the same volume group or across the network. In this article, I have demonstrated different ways these tasks can be carried out. There are many ways to copy data; I have highlighted just a few of them with examples.

