I have a GPFS cluster on Linux with three GPFS servers (nodoa, nodob, nodoc) and three CNFS servers (nodo1, nodo15, nodo29), the latter being GPFS clients of the former.
I exported several filesystems, and nodo1, nodo15, and nodo29 export them as NFS shares.
There are multiple issues in this configuration.
One question: where do I start NFS from? If I start the GPFS cluster, CNFS starts NFS on those nodes, but then I am not able to mount from an NFS client.
If I start NFS manually instead (RHEL 6: service nfs stop; service nfs start), then the client is able to mount.
My concern: when CNFS detects a problem, it restarts GPFS on the CNFS servers, restoring service quickly so that NFS clients (hard mounted) do not notice. But if CNFS is not able to start NFS completely (as happens to me), NFS will not come up completely and NFS clients will not be able to access the filesystems.
Any suggestions on how to start NFS, and what exactly to start?
Re: CNFS configuration (2012-08-31T10:38:22Z, accepted answer)
Problem solved after restarting everything.
CNFS starts NFS. It is not necessary to start NFS from Linux; CNFS takes care of it, just as it does with the virtual IPs.
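Once the cluster is up, you can check (a rough sketch; daemon and interface names will vary with your environment) that CNFS really started the NFS daemons and brought up its virtual IPs:

```shell
# After starting the cluster (mmstartup -a), inspect what CNFS brought up.
mmgetstate -a                  # GPFS daemon state on all nodes
ps -ef | grep -E '[n]fsd'      # nfsd threads started by CNFS, if any
ip addr show                   # CNFS virtual IPs appear as secondary addresses
```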
Re: CNFS configuration (2012-09-03T11:07:33Z, accepted answer)
- JVicente
NFS must start with the system; otherwise CNFS will not start NFS properly, and mount authorizations will fail on failover/failback.
So chkconfig nfs on is what makes CNFS work.
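On RHEL 6 that amounts to enabling the init script and then letting CNFS manage the daemons (a sketch of the standard chkconfig workflow):

```shell
# Enable NFS at boot so CNFS finds the service configured (RHEL 6).
chkconfig nfs on
chkconfig --list nfs     # verify runlevels, typically 2:on 3:on 4:on 5:on
# From here on, do NOT run "service nfs start/stop" by hand;
# CNFS starts, stops, and monitors NFS itself.
```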
Also, mount with nfsvers=3, as NFS v4 is not yet supported in CNFS (although it is in GPFS):
cnfs3:/home/proyectos /mnt/home/proyectos nfs exec,dev,nosuid,rw,hard,nfsvers=3 1 1
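On the client, you can mount via that fstab entry and confirm the negotiated protocol version (assuming the cnfs3 hostname and paths from the example above):

```shell
# Mount using the /etc/fstab entry, then check the mount flags.
mount /mnt/home/proyectos
nfsstat -m               # the flags for the mount should include vers=3
```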