"Slow = OK, but keep the downtime down"
I had to make a copy of a production Oracle database so it could be used on a new LPAR. Following Chris Gibson's post on the splitvg command, I decided that splitvg was the way to go. I did have other options, such as SAN cloning or restoring from backup, but the business owner was happy to do the clone by using software mirroring and then splitting the mirror. The software mirror was from the existing RAID array (on SAN) to a second RAID array. (That's what I call redundancy.) The business wasn't too worried about the impact of mirroring the LVs, as long as it could be done online (it could!), and didn't mind doubling up database writes for a day or two by writing to two RAID arrays, provided the database downtime at the time of the split was under an hour. That was the "low impact" requirement. We also wanted to avoid backups and restores, which might have taken excessive time and caused too much contention on the backup infrastructure.

Mirror it all, then split
Although the database was all in one volume group, there were some other file systems in the same volume group which I didn't need to clone to the new LPAR. I considered using the splitlvcopy command, which would have given me the granularity of splitting off only the logical volumes we needed, but as I couldn't see a way of easily carving those split LVs off into their own volume group, I decided to use the splitvg command after all. We had some spare disk, and we could afford the extra time involved in mirroring these extra file systems, as the synchronisation of the mirror would all be done online.
splitvg turns a mirror copy of the volume group into a snapshot.
So, I mapped a new set of LUNs to the two VIO servers, then added them to datavg on the source LPAR. Why not use a single LUN? As I knew additional data was being mirrored, I wanted to be able to remove those file systems after importing the volume group on the target LPAR and then reclaim at least one LUN which wasn't needed. In other words, multiple LUNs were more modular.
I then mirrored each of the logical volumes using mklvcopy, although as it was an entire volume group, I could have used mirrorvg. I then synchronised the LVs using syncvg (I sped up the mirroring with -P 32, which synchronises 32 LPs in parallel). I could have done the synchronisation as part of the mirrorvg or mklvcopy, but I wanted to monitor it closely and synchronise the LVs one by one.
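In script form, the mirror-then-sync sequence might look something like this. It's only a sketch: the LV names (sapdata1lv and friends) and hdisk numbers are made up, and the run() wrapper echoes each command instead of executing it, so you can dry-run and review the whole sequence first.

```shell
#!/bin/sh
# Dry-run sketch of the mirror + sync steps. LV names and hdisk
# numbers are hypothetical -- substitute your own.
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "+ $*"; }   # record and echo, don't execute

NEW_DISKS="hdisk10 hdisk11"
LVS="sapdata1lv sapdata2lv origlogalv"

# Add a second copy of each LV on the new LUNs
# (mirrorvg datavg would do the whole VG in one command).
for lv in $LVS; do
    run mklvcopy "$lv" 2 $NEW_DISKS
done

# Synchronise one LV at a time, 32 logical partitions in parallel,
# so progress can be monitored per LV.
for lv in $LVS; do
    run syncvg -P 32 -l "$lv"
done
```

Removing the run wrapper (or making it execute "$@") turns the dry run into the real thing.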
At this stage, I had a datavg with two synchronised copies. Time to split it.
Splitting datavg to a new volume group
When you invoke splitvg, you turn a mirror copy of the volume group into a snapshot. You can do this temporarily (for example, to make a backup) and then resynchronise it back again using joinvg. But that's not what I needed here. I wanted to carve off the volume group mirror/snapshot permanently, so I could export the new volume group and import it on my target LPAR. For this, I had to use splitvg with the -i option. As the splitvg command documentation explains, the -i option:

Will split the mirror copy of a volume group into a new volume group that can not be rejoined into the original.

Doing the split
Once the database was shut down, I ran the split of the volume group using:
splitvg -y copyvg -i -c 2 datavg
The -y flag gives the name of the new volume group, and -c indicates which mirror copy to carve off: in this case copy 2, the one on the new set of LUNs.
This simple splitvg command took a couple of minutes to complete, and at that point we could have started the database again on the original disks. In fact, I decided to clean up all of the now-redundant disks from the production LPAR and the VIOS before handing the LPAR back to the DBA.
First I had to deactivate the new volume group:

varyoffvg copyvg

and export it:

exportvg copyvg
From there I was able to remove the disks from the ODM using:
rmdev -dl hdiskN
And unmap the LUNs from the VIOS command line using rmvdev -vtd.
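On each VIO server the unmapping went something like this. Again a dry-run sketch: the VTD names (vtscsi12, vtscsi13) are hypothetical, and lsmap -all is how you find the real ones; run() just echoes each command.

```shell
#!/bin/sh
# Dry-run sketch of the VIOS-side cleanup. VTD names are hypothetical.
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "+ $*"; }   # record and echo, don't execute

# List current mappings to identify the virtual target devices
# backed by the LUNs being removed.
run lsmap -all

# Remove the virtual target device for each of those LUNs.
for vtd in vtscsi12 vtscsi13; do
    run rmvdev -vtd "$vtd"
done
```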
I then mapped the LUNs to the target LPAR, which was on the same physical server and used the same two VIO servers, and the rest of the work was on the target LPAR.
Identify the new disks using:

cfgmgr
lspv
And import the volume group using importvg. If I had wanted to rename the volume group at this stage, I could have given it a different name using importvg's -y flag. Also, when you do the importvg, you only specify one hdisk, and all the other disks are imported as part of the volume group, provided they're available!
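On the target LPAR the import itself is close to a one-liner. Another dry-run sketch: the hdisk number is made up, and run() echoes rather than executes.

```shell
#!/bin/sh
# Dry-run sketch of the import on the target LPAR. hdisk10 is a
# hypothetical member disk of the split-off volume group.
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "+ $*"; }   # record and echo, don't execute

# Import the volume group; naming one member hdisk is enough, the
# rest of the VG follows. -y sets (or renames) the VG at import time.
run importvg -y copyvg hdisk10

# Check what came in -- clashing LV names will show up with an fs prefix.
run lsvg -l copyvg
```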
Just about ready to mount the file systems, except for this: the target LPAR already had some similarly-named file systems in place. When there are duplicate LV names, importvg gives the LVs an "fs" prefix (e.g. lv00 is renamed to fslv00), and the file systems have their mount points prefixed with /fs, so /oracle/PRD/sapdata1 was imported as:

/fs/oracle/PRD/sapdata1
I removed the file systems I didn't need, and was able to rename the file system mount points to the ones I wanted using chfs -m:

chfs -m /oracle/CPY/sapdata1 /fs/oracle/PRD/sapdata1 # change the mount point to /oracle/CPY/sapdata1

and change the logical volume names using chlv -n:

chlv -n cpysapdata1 fssapdata1 # rename LV "fssapdata1" to "cpysapdata1"

Summary: the Quick split
Overall, this was a pretty good, low-impact way of migrating a copy of data to a new system. The mirror was slow, but it didn't impact users too much and didn't involve any downtime. The split was fast, all over in a couple of minutes, and the import of the new volume group didn't take much longer. All in all, it seemed a good way of doing the database clone.

Renaming the LV online
We started this blog post by quoting from my compatriot Chris Gibson. What better way to finish it? From AIX 6.1 TL4 you can run the chlv -n command online, without unmounting the file system that uses the LV.