We have HS21 Cell blades that mount NFS shares from our storage servers. We recently rebuilt the storage servers: the original two servers that the HS21s mount now have new IP addresses, and we added two more storage servers. The problem is that if the post-install script points the NFS mounts at the two new servers, the Cells get the mounts, but if it points at the old (rebuilt) servers, the Cells fail to find them. We are addressing the servers by IP.
The AMM firmware was updated, and the person who did the overall upgrade (storage and all) did review the AMM configuration during the updates. It seems that something in the blades is having a problem with the new storage server addressing, which went from 172.30.#.# to 172.20.#.#.
If I bring up a blade pointing at the original storage servers' new IP addresses, the mounts fail. If I point at the new storage servers, they mount. If I then log into the blade, unmount the new server's mount, and issue the mount command pointing at the original storage servers, they mount. The kicker is that if the first NFS mount in the script points to one of the two new storage servers and the next mount points to the old storage servers, then all the mounts succeed. I can repeat this in various scenarios, and it works as long as the first mount goes to one of the new storage servers.
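The ordering dependence above can be sketched roughly as follows. The IP addresses, export paths, and mount points are hypothetical placeholders, not the poster's actual values, and the `echo` is left in so the sketch can be run and inspected anywhere; drop it to perform the real mounts.

```shell
#!/bin/sh
# Sketch of the mount ordering that works (placeholder IPs and paths).
# echo is left in so the sequence prints instead of mounting.
nfs_mount() { echo mount -t nfs -o rw,hard,intr "$1" "$2"; }

# 1) The first mount must target one of the two NEW storage servers:
nfs_mount 172.20.2.20:/export/scratch /mnt/scratch
# 2) Subsequent mounts of the rebuilt (original) servers then succeed:
nfs_mount 172.20.1.10:/export/data1 /mnt/data1
nfs_mount 172.20.1.11:/export/data2 /mnt/data2
```

Reordering so that a rebuilt server comes first reproduces the failure described above.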
I have checked all the usual suspects (/etc/exports, /etc/hosts.allow, and /etc/hosts.deny) and tweaked the mount command, but nothing has worked. Any ideas? Thanks.
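Beyond those files, a few stock client-side probes can help narrow this down, e.g. whether the rebuilt servers' mountd/portmapper are reachable from a blade, or whether a stale ARP entry survives from the old 172.30.#.# addressing. The server address below is a placeholder, and the commands are wrapped in a print helper so the sketch runs anywhere; remove the helper to actually execute them.

```shell
#!/bin/sh
# Client-side NFS diagnostics sketch; 172.20.1.10 stands in for one of
# the rebuilt storage servers' new addresses.
SERVER=172.20.1.10
probe() { echo "$@"; }   # print-only helper; drop it to run for real

# What the server actually exports, as seen from this blade:
probe showmount -e "$SERVER"
# Which RPC services (portmapper, mountd, nfs) the server advertises:
probe rpcinfo -p "$SERVER"
# Basic reachability, plus the neighbor/ARP entry the blade holds for
# the server, worth ruling out after an IP renumbering:
probe ping -c 1 "$SERVER"
probe ip neigh show
```

Comparing the `showmount -e` and `rpcinfo -p` output for a rebuilt server against one of the new servers would show whether the difference is on the server side or in the blade's view of it.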
1 reply Latest Post - 2012-05-04T20:38:40Z by dflatley
Topic: NFS mount issue