These instructions are to be used as a guide for setting up a Linux
client/server system (Red Hat or Suse) with Kerberos support. All
setup-related questions should be directed to Suse or Red Hat.
Use these steps to configure Red Hat Enterprise Linux
5 and Suse 10/11 with NFSv4 and Kerberos support. By default, base
NFSv4 support is enabled in the kernel.
- Edit /etc/hosts. Your fully qualified host name must be the first
entry, and the machine's name must not appear on the localhost
line. For example:
10.1.0.100 hostname.domain.com
127.0.0.1 localhost.localdomain localhost
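As a quick sanity check, the layout above can be verified with standard shell tools. The sketch below runs against a self-contained sample file using the host names from the example; on a real system, set HOSTS=/etc/hosts instead.

```shell
# Sanity-check the /etc/hosts layout described above.
# HOSTS points at a sample file here; use HOSTS=/etc/hosts on a real system.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
10.1.0.100 hostname.domain.com
127.0.0.1 localhost.localdomain localhost
EOF

# The first non-comment line should map to the fully qualified host name.
FIRST=$(awk '$1 !~ /^#/ && NF {print $2; exit}' "$HOSTS")

# The localhost line must not mention the machine's name.
if grep '^127\.0\.0\.1' "$HOSTS" | grep -q 'hostname\.domain\.com'; then
  LOCALHOST_OK=no
else
  LOCALHOST_OK=yes
fi

echo "first entry: $FIRST"
echo "localhost line clean: $LOCALHOST_OK"
rm -f "$HOSTS"
```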
- Edit /etc/idmapd.conf. Make sure Domain is set to your system's
DNS domain name.
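A minimal /etc/idmapd.conf might look like the sketch below. Here domain.com is a placeholder for your DNS domain, and the [Mapping] values shown are common defaults; verify them against the file shipped with your distribution.

```
[General]
Domain = domain.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
```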
- Edit /etc/sysconfig/nfs:
Red Hat:
Uncomment/set SECURE_NFS="yes".
Uncomment/set RPCGSSDARGS="-vvv"
Uncomment/set RPCSVCGSSDARGS="-vvv"
Suse:
NFS_SECURITY_GSS="yes"
NFS4_SUPPORT="yes"
NFS_START_SERVICES="gssd,idmapd"
- Before starting nfs, rpc.gssd, rpc.svcgssd, and rpc.idmapd, set
up a keytab file and krb5.conf file. Edit /etc/krb5.conf to match
your Kerberos configuration.
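For reference, a minimal /etc/krb5.conf for this setup might look like the following sketch. EXAMPLE.COM and kdc.example.com are placeholders; substitute your own realm and KDC host.

```
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM
```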
- Unlike other NFSv4 implementations, Linux requires a keytab for
the client in order to mount a secure share. This is because the
Linux NFS client uses the nfs/hostname.domain credential in the keytab
to mount. (This behavior is expected to change once the kernel keyring
support is completed.) Create a keytab as documented in Red Hat Enterprise Linux 5 Deployment Guide or Suse Linux Enterprise Server Administration Guide.
Then FTP the keytab in binary mode or (recommended) SCP the keytab
to the Linux client and save it to /etc/krb5.keytab.
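For example, the transfer and a quick inspection of the keytab's contents might look like the following (the path and host name are placeholders):

```
# copy the keytab from the KDC or admin host to the client
scp /path/to/generated.keytab root@hostname.domain.com:/etc/krb5.keytab

# on the client, list the keys to confirm the nfs/ principal is present
klist -k /etc/krb5.keytab
```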
- Most Kerberos issues are caused by invalid keytabs. Once the keytab
has been placed on the Linux system, verify that it is valid by
issuing the following command:
kinit -k nfs/hostname.domain.com
- Change hostname and domain to match the hostname and domain of
the Linux system.
- This command should complete without errors, and you should not
be prompted for a password.
- If this command fails, the keytab is invalid or the Kerberos configuration
is incorrect.
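A successful verification might look like the following session (the realm and host names are placeholders); running klist afterward should show the nfs/ principal as the default principal of the credentials cache:

```
# kinit -k nfs/hostname.domain.com
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: nfs/hostname.domain.com@EXAMPLE.COM
```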
- Restart the nfs client/server and related NFS services:
Red Hat:
service rpcidmapd restart
service rpcgssd restart
service rpcsvcgssd restart
service nfs restart
Suse:
rcnfs restart
rcnfsserver restart
- Check whether nfsd, rpc.gssd, rpc.svcgssd, and rpc.idmapd started
with the ps -A command. If these daemons do not start, check
the logs for error messages. You can also enable NFS debugging with
the following:
- NFS client debug command:
#rpcdebug -m nfs -s all
- NFS server debug command:
#rpcdebug -m nfsd -s all
Note: To disable NFS debugging, issue the following
commands:
- Disable client debugging:
#rpcdebug -m nfs -c all
- Disable server debugging:
#rpcdebug -m nfsd -c all
Instead of using rpcdebug, you can start rpc.gssd and rpc.svcgssd
in the foreground with the following commands:
#rpc.gssd -fvvv &
#rpc.svcgssd -fvvv &
This may provide information on why they did not start.
Note: This debug level is very noisy. The output is sent to /var/log/messages.
- Special considerations for Linux Clients:
- Because the Linux client has the keytab, the system is able to perform
a secure NFS mount without credentials acquired by the "kinit"
command. This behavior will change once kernel keyring support is
completed.
- The root user (uid = 0) uses the Linux machine credentials rather
than user credentials obtained with kinit. Thus the root user is
able to browse the NFS mount point without performing kinit. Regular
users need "kinit" to access the mount points. This
behavior will change once kernel keyring support is completed.
- The kdestroy command will not destroy the context in the Linux kernel.
This behavior will change once kernel keyring support is completed.
- Extra configuration is needed for a Linux remote realm setup, because
Linux sends nfs/hostname.domain instead of the user principal at
mount time. If the Linux client's NFS principal is not defined to RACF,
the z/OS NFS server rejects mount requests. A simple way to solve
this problem is to map the entire remote realm to RACF. A more
secure workaround is to map individual Linux machines
to a special realm in the [domain_realm] section of /etc/skrb/krb5.conf,
and map that realm to a special RACF user on the z/OS NFS server, thus
leaving all other machines in the remote realm intact.
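As a sketch of the second approach, the [domain_realm] section of /etc/skrb/krb5.conf on the z/OS side might single out individual Linux hosts (the host names and LINUXNFS.DOMAIN.COM realm below are placeholders):

```
[domain_realm]
    linuxhost1.domain.com = LINUXNFS.DOMAIN.COM
    linuxhost2.domain.com = LINUXNFS.DOMAIN.COM
```

That special realm can then be mapped to a single RACF user on the z/OS NFS server; consult your RACF documentation for the realm-mapping commands.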