3 replies. Latest post 2010-11-22T13:40:23Z by SystemAdmin
Bonzodog

HACMP failover .... rpc.lockd

2010-11-19T10:57:28Z
We are having an issue with a newly installed HACMP system which, having been perfectly happy during testing, is now misbehaving during failover.

There are a number of weird things going on, which I will work through:
1) During resource release from machine A, as part of resource failover to machine B, the rpc.lockd daemon is not dying. Looking at hacmp.out I can see repeated entries such as:

+PRIMES3_RG:rg_move_complete+271 [ 24 -gt 0 ]
+PRIMES3_RG:rg_move_complete+267 lssrc -s rpc.lockd
+PRIMES3_RG:rg_move_complete+267 LC_ALL=C
+PRIMES3_RG:rg_move_complete+267 grep stopping
rpc.lockd nfs 294926 stopping
+PRIMES3_RG:rg_move_complete+267 [ 0 = 0 ]
+PRIMES3_RG:rg_move_complete+270 +PRIMES3_RG:rg_move_complete+270 expr 24 - 1
COUNT=23
+PRIMES3_RG:rg_move_complete+271 sleep 1

being repeated until the counter inexorably gets to zero and the node goes into error.
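
For reference, here is a minimal ksh sketch of what that rg_move_complete wait loop appears to be doing, reconstructed from the trace above (the starting value of COUNT and the error handling at the end are assumptions, not taken from the actual event script):

# Wait for rpc.lockd to leave the "stopping" state before completing the resource move.
COUNT=24                                   # value seen in the trace above; the real start value may differ
while [ $COUNT -gt 0 ]
do
        # keep waiting while lssrc still reports rpc.lockd as "stopping"
        lssrc -s rpc.lockd | LC_ALL=C grep stopping > /dev/null
        if [ $? -ne 0 ]
        then
                break                      # rpc.lockd has finished stopping
        fi
        COUNT=`expr $COUNT - 1`            # shows up as COUNT=23, 22, ... in hacmp.out
        sleep 1
done
[ $COUNT -eq 0 ] && exit 1                 # counter exhausted: the event script fails and the node goes into error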

Running the "Recover From HACMP Script Failure" command from smit then causes pretty much immediate failover!

2) During cluster start on machine A as the active node (this doesn't happen on B), only file systems which have been marked for NFS export (within the cluster resources) are actually mounted!

This all points to NFS issues.

What has changed?

Tivoli has been installed since we performed all the tests!

Anyone got any idea (apart from removing Tivoli!) what on earth to do?
Updated on 2010-11-22T13:40:23Z by SystemAdmin
  • SystemAdmin

    Re: HACMP failover .... rpc.lockd

    2010-11-20T08:01:29Z  in response to Bonzodog
    Explain more of your NFS setup. Why do you want to stop rpc.lockd (or the other NFS daemons)?
    • Bonzodog

      Re: HACMP failover .... rpc.lockd

      2010-11-21T13:05:05Z  in response to SystemAdmin
      Not being at work, this is from memory...

      We have a large number of NFS shares that are mounted on a series of Solaris 8 systems. The idea behind stopping NFS is to enable a cleaner failover during cluster changes.
      • SystemAdmin

        Re: HACMP failover .... rpc.lockd

        2010-11-22T13:40:23Z  in response to Bonzodog
        So your cluster will act as an HA NFS server?
        The cluster software will do magic behind your back in order to take over even the NFS locks that clients have established on one node. You should not interfere by trying to start or stop services on your own.

        Read the hints on NFS exports in the manual (using the /usr/es/.../exports file, and using separate log volumes for cluster-exported filesystems).
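
        For illustration only, an entry in that exports file might look something like the line below (the filesystem path and client hostnames are made-up, and the syntax assumed here is the standard AIX exports-file format):

        /sharedfs1 -root=solclient1:solclient2,access=solclient1:solclient2

        As far as I recall, the cluster then applies those options when it NFS-exports the filesystem as part of the resource group, rather than taking them from /etc/exports.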