Question & Answer
Question
How do you upgrade a high availability PTS configuration?
Cause
For customers running a high availability PTS configuration on RHEL 6.4 and above
Answer
Please note the steps below; you can always refer to the PTS Configuration Guide for high availability servers if there are any questions beyond what is typically required for upgrading this environment. PTS HA = Persistent Transport System - High Availability.
Please also note that the most important thing to determine is what is and is not currently mounted on each node of the PTS environment. For the PTS HA environment, each site has two physical machines, each with its own host name (i.e., HA1 / HA2), so there are four PTS boxes in total.
Preparation Phase.
01) PTS1 -> On the replication log server host, login as root.
02) PTS1 -> Run the following command:
export PATH=/opt/ibm/nzpts/bin/:$PATH
03) PTS1 -> Copy the PTS software installation files, pts.tar and installpts, to the replication log server.
- Example
mkdir /tmp/pts
scp nz@repl_ha1:/nz/kit/pts/* nz@pts:/tmp/pts
cd /tmp/pts
04) PTS1 -> Check the cluster status:
$ clustat
[root@PTS1 pts]# clustat
 Member Name                   ID   Status
 ------ ----                   ---- ------
 PTS1                          1    Online, Local, rgmanager
 PTS2                          2    Online, rgmanager
 /dev/block/253:15             0    Online, Quorum Disk

 Service Name                  Owner (Last)    State
 ------- ----                  ----- ------    -----
 service:PTS-Service           PTS1            started
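The owner of PTS-Service can be pulled out of this output programmatically; a minimal sketch, assuming the clustat column layout shown above (service name, owner, state):

```shell
# Sketch: print the node that currently owns PTS-Service, reading
# clustat output on stdin. Assumes the column layout shown above,
# where field 1 is the service name and field 2 the owner.
pts_owner() {
    awk '$1 == "service:PTS-Service" { print $2 }'
}
# usage: clustat | pts_owner
```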
05) PTS1 -> VERY IMPORTANT: check the mounts on the active node to get the filesystem names and types for /opt/ibm and /var/nzrepl:
$ mount
Example 1
/dev/mapper/PTS--Software--Grp-PTS--Software on /opt/ibm type ext3 (rw,nobarrier,user_xattr)
/dev/mapper/PTS--Data--Grp-PTS--Data on /var/nzrepl type ext4 (rw,nobarrier,user_xattr)
Example 2
/dev/mapper/vg_pts_soft-lv_pts_soft on /opt/ibm type ext3 (rw,nobarrier,user_xattr)
/dev/mapper/vg_pts_data-lv_pts_data on /var/nzrepl type ext4 (rw,nobarrier,user_xattr)
Example 3
/dev/PTS-Data-Grp/PTS-Data on /var/nzrepl type xfs (rw,nobarrier)
/dev/PTS-Software-Grp/PTS-Software on /opt/ibm type xfs (rw,nobarrier)
Note:
If /opt/ibm or /var/nzrepl is not mounted, you can always check the cluster configuration file, cluster.conf, which contains the details for those two filesystems.
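A hedged sketch of that lookup, assuming the default RHEL 6 High Availability configuration path /etc/cluster/cluster.conf and the standard `<fs .../>` resource syntax:

```shell
# Sketch: list the <fs .../> resource definitions (device, mountpoint,
# fstype) from a cluster.conf file, so the manual mount commands in the
# Upgrade phase can be reconstructed. /etc/cluster/cluster.conf is the
# default location on RHEL 6 HA; pass another path if yours differs.
list_fs_resources() {
    grep -E '<fs[[:space:]]' "${1:-/etc/cluster/cluster.conf}"
}
# usage: list_fs_resources
```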
Upgrade Phase.
01) PTS1 -> Stop PTS Replication
$ ptsreplication -stop -all
02) PTS1 -> Switch to root user
03) PTS2 -> Switch to root user
04) PTS1 -> Check PTS-Service status
$ clustat
05) PTS1 -> Disable PTS-Service
$ clusvcadm -d PTS-Service
06) PTS1 -> Ensure PTS-Service status is disabled
$ clustat
-> service:PTS-Service PTS<N> disabled
07) PTS1 -> Unmount /opt/ibm, /var/nzrepl and ensure they are NOT mounted:
$ umount /var/nzrepl
$ umount /opt/ibm
$ mount
08) PTS2 -> Unmount /opt/ibm, /var/nzrepl and ensure they are NOT mounted:
$ umount /var/nzrepl
$ umount /opt/ibm
$ mount
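Before proceeding, it is worth double-checking on each node that neither filesystem is still mounted; a minimal sketch that reads `mount` output on stdin (the field position is an assumption based on the standard "device on mountpoint type fstype (opts)" format):

```shell
# Sketch: fail if /opt/ibm or /var/nzrepl appears as a mount point in
# the `mount` output read on stdin. In the standard format
# "device on mountpoint type fstype (opts)", the mount point is field 3.
check_unmounted() {
    awk '$3 == "/opt/ibm" || $3 == "/var/nzrepl" {
             print "ERROR: " $3 " is still mounted"; bad = 1
         }
         END { exit bad }'
}
# usage: mount | check_unmounted && echo "safe to proceed"
```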
09) PTS1 -> Manually mount /opt/ibm and /var/nzrepl. Use the details from point 05) of the Preparation phase:
Example 1
$ mount -t ext3 -o rw,nobarrier,user_xattr /dev/mapper/PTS--Software--Grp-PTS--Software /opt/ibm
$ mount -t ext4 -o rw,nobarrier,user_xattr /dev/mapper/PTS--Data--Grp-PTS--Data /var/nzrepl
Example 2
$ mount -t ext3 -o rw,nobarrier,user_xattr /dev/mapper/vg_pts_soft-lv_pts_soft /opt/ibm
$ mount -t ext4 -o rw,nobarrier,user_xattr /dev/mapper/vg_pts_data-lv_pts_data /var/nzrepl
Example 3
$ mount -t xfs -o rw,nobarrier /dev/PTS-Software-Grp/PTS-Software /opt/ibm
$ mount -t xfs -o rw,nobarrier /dev/PTS-Data-Grp/PTS-Data /var/nzrepl
10) PTS1 -> Install PTS
$ cd /var/nzrepl/pts.<NetezzaReplicationVersion>
$ ./installpts cman_cluster
11) PTS1 -> Unmount /opt/ibm, /var/nzrepl and ensure they are NOT mounted:
$ umount /var/nzrepl
$ umount /opt/ibm
$ mount
12) PTS2 -> Unmount /opt/ibm, /var/nzrepl and ensure they are NOT mounted:
$ umount /var/nzrepl
$ umount /opt/ibm
$ mount
13) PTS2 -> Repeat point 09) on PTS2; again, make sure to use the correct filesystems.
14) PTS2 -> Install PTS
$ cd /var/nzrepl/pts.<NetezzaReplicationVersion>
$ ./installpts cman_cluster
15) PTS2 -> Unmount /opt/ibm, /var/nzrepl and ensure they are NOT mounted:
$ umount /var/nzrepl
$ umount /opt/ibm
$ mount
16) PTS1 -> Unmount /opt/ibm, /var/nzrepl and ensure they are NOT mounted:
$ umount /var/nzrepl
$ umount /opt/ibm
$ mount
17) PTS1 -> Enable PTS-Service
$ clusvcadm -e PTS-Service
18) PTS1 -> Ensure PTS-Service status is enabled
$ clustat
-> service:PTS-Service PTS<N> enabled
19) PTS1 -> If you need to change the owner of PTS-Service to PTS1, PTS2, or PTS<N>, use "clusvcadm -r <group> [ -m <member> ]":
$ clusvcadm -r PTS-Service
20) PTS<N> -> On the machine owning the PTS-Service, ensure the filesystems are mounted automatically:
$ mount
-> /..... / on /opt/ibm type ....
-> /...../ on /var/nzrepl type ....
21) PTS<N> -> Switch to nz user
22) PTS<N> -> Ensure the PTS system is updated
$ ptsrev
23) PTS<N> -> Update PTS name in ptstopology to the PTS Wall IP Name (associated with PTS Wall IP Address)
$ ptsconfigure -host <PTSWallIPName>
24) PTS<!N> -> On the machines NOT owning the PTS-Service, ensure the mounts do NOT exist:
$ mount
25) PTS<!N> -> Ensure the PTS software is not enabled
$ ptsrev
-> ptsrev: command not found
26) PTS<!N> -> Ensure /var/nzrepl is empty
$ ls /var/nzrepl
27) PTS<!N> -> Ensure PTS software is not installed (directory "nzpts" does not exist)
$ ls /opt/ibm/
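The checks in points 26) and 27) can be combined into one pass; a sketch with the directories parameterized purely so the logic is illustrated (the real checks run against /opt/ibm and /var/nzrepl):

```shell
# Sketch: on a node that does NOT own PTS-Service, confirm that the PTS
# software directory ("nzpts") is absent and the replication data
# directory is empty. The two path arguments default to the real
# locations used in this document.
verify_standby() {
    soft="${1:-/opt/ibm}" data="${2:-/var/nzrepl}"
    if [ -d "$soft/nzpts" ]; then
        echo "ERROR: $soft/nzpts still exists"; return 1
    fi
    if [ -d "$data" ] && [ -n "$(ls -A "$data" 2>/dev/null)" ]; then
        echo "ERROR: $data is not empty"; return 1
    fi
    echo "standby node is clean"
}
```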
28) PTS<N> -> Start PTS Replication
$ ptsreplication -start -all
Below is the message you will receive upon successfully completing the PTS upgrade; note the warning about the mounts, as mentioned above.
[root@ptsPRDha1 ~]# ./installpts cman_cluster
Current directory is /root/
The numeric user and group id for "nz" must match on the NPS and PTS servers.
Group "nz" exists and has gid = 500
User "nz" exists and has uid = 500
Checking configuration prerequisites for Red Hat Enterprise Linux High Availability Add On...
Checking that PTS-Service Group is disabled prior to installing software...
... cluster service is disabled.
Pre-installation steps:
1) The /opt/ibm/nzpts directory contains shared cluster data.
Disable the cluster software to permit manually mounting the shared data.
2) Manually umount the /opt/ibm/nzpts and /var/nzrepl directories
from all other cluster members. For performance clustered filesystems
are not recommended and the ext4 or xfs filesystems will suffer
severe damage if mounted on more than one host at a time.
3) Manually mount the /opt/ibm/nzpts and /var/nzrepl directories
on the installation host so the installer can use them.
Subsequent errors and warnings may occur if the directories are not mounted.
If this occurs correct the issue and rerun ./installpts
Do you wish to continue the install? [y|N] y
Starting installation...
Installing IBM Netezza PTS...
Configuring local host to run IBM Netezza PTS database
Setting up IBM Netezza PTS database service script...
Starting IBM Netezza PTS database... done
Migrating IBM Netezza PTS database schema...
PTS metadata schema is up-to-date
Registering this host instance as installed for cluster control...
Disabling SysV init.d auto-start for ptsd
Deleting existing upstart service script and reload initctl
Cleaning up empty folders ...
Waiting for PTS Database install to finish
Stopping PTS database service in preparation for cluster control
Shutting down IBM Netezza PTS database... done
*********************************************************************************
WARNING: You MUST unmount the PTS-Software and PTS-Data volumes before installing
on other cluster hosts. Failure to do so can cause severe file system
damage and data loss. The command to unmount the PTS-Software and
PTS-Data volumes is:
umount /opt/ibm; umount /var/nzrepl
*********************************************************************************
IBM Netezza PTS upgraded successfully in /opt/ibm/nzpts; upgrade on all cluster hosts
before enabling cluster service.
Document Information
Modified date:
17 October 2019
UID
swg21700626