We are having an issue with a newly installed HACMP system which, after being perfectly happy during testing, is now misbehaving during failover.
There are a number of weird things going on, which I will work through:
1) During resource release from Machine A, as part of resource failover to Machine B, the rpc.lockd daemon is not dying. Looking at hacmp.out I can see repeated entries such as:
+PRIMES3_RG:rg_move_complete+271 [ 24 -gt 0 ]
+PRIMES3_RG:rg_move_complete+267 lssrc -s rpc.lockd
+PRIMES3_RG:rg_move_complete+267 grep stopping
rpc.lockd nfs 294926 stopping
+PRIMES3_RG:rg_move_complete+267 [ 0 = 0 ]
+PRIMES3_RG:rg_move_complete+270 +PRIMES3_RG:rg_move_complete+270 expr 24 - 1
+PRIMES3_RG:rg_move_complete+271 sleep 1
being repeated until the counter inexorably gets to zero and the node goes into error.
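Reading the trace, the event script appears to be doing something like the sketch below. The `still_stopping` helper name is mine for readability; the real check is the `lssrc -s rpc.lockd | grep stopping` pair visible in the log, and the counter of 24 is taken from the trace:

```shell
# Illustrative reconstruction of the wait loop traced in hacmp.out.
# The helper name is invented; the commands mirror the trace.
still_stopping() {
    lssrc -s rpc.lockd 2>/dev/null | grep -q stopping
}

count=24
while [ "$count" -gt 0 ]; do
    still_stopping || break      # daemon gone: failover can continue
    count=`expr "$count" - 1`    # the trace shows expr, not $((...))
    sleep 1
done

if [ "$count" -eq 0 ]; then
    echo "rpc.lockd still stopping after 24s: event script errors out" >&2
fi
```

So the script simply gives rpc.lockd about 24 seconds to leave the "stopping" state, and if it never does, the event fails and the node ends up in error, which matches what we are seeing.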
Running the "Recover From HACMP Script Failure" command from smit causes pretty well immediate failover!
2) During cluster start on Machine A as the active node (this doesn't happen on B), only the file systems which have been marked for NFS export (within the cluster resources) have actually been mounted!
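As a quick sanity check after a cluster start, one can compare the resource group's mount points against what is actually mounted. A minimal sketch, where the `FILESYSTEMS` list is a placeholder to be replaced with the mount points actually defined in the resource group (exported and non-exported alike):

```shell
# Placeholder list: substitute the mount points defined in the
# cluster resource group. These paths are examples only.
FILESYSTEMS="/primes3/data /primes3/export"

missing=""
for fs in $FILESYSTEMS; do
    if mount | grep -q " $fs "; then
        echo "$fs: mounted"
    else
        echo "$fs: NOT mounted"
        missing="$missing $fs"
    fi
done
[ -z "$missing" ] || echo "Missing mounts:$missing" >&2
```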
This all points to NFS issues.
What has changed?
Tivoli has been installed since we performed all the tests!
Anyone got any idea (apart from remove Tivoli!) what on earth to do?
Pinned topic: HACMP failover .... rpc.lockd
3 replies. Latest post 2010-11-22T13:40:23Z by SystemAdmin.
This topic has been locked.
Bonzodog (ACCEPTED ANSWER)
Re: HACMP failover .... rpc.lockd, 2010-11-21T13:05:05Z, in response to SystemAdmin

Not being at work, this is from memory...
We have a large number of NFS shares that are mounted on a series of Solaris 8 systems. The idea behind stopping NFS is to enable a cleaner failover during cluster changes.
SystemAdmin (ACCEPTED ANSWER)
Re: HACMP failover .... rpc.lockd, 2010-11-22T13:40:23Z, in response to Bonzodog

So your cluster will act as an HA NFS Server?
The cluster software will do magic behind your back in order to take over even the NFS locks clients have established on one node. You should not interfere by trying to start or stop services on your own.
Read the hints on NFS exports in the manual (using the /usr/es/.../exports file, and using separate log volumes for cluster-exported filesystems).
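For reference, entries in the cluster exports file follow the same syntax as a standard AIX /etc/exports. The filesystem and hostnames below are placeholders, not values from this cluster:

```
/primes3/export -root=nodea:nodeb,access=sol8client1:sol8client2
```

Keeping root access limited to the cluster nodes while granting the Solaris clients plain access is the usual pattern for HA NFS serving.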