Managing user-defined authentication
In the user-defined mode of authentication, the user selects the authentication and ID-mapping methods of their choice. It is the responsibility of the administrator of the client system to manage authentication and ID mapping for file (NFS and SMB) and object access to the IBM Storage Scale system.
The IBM Storage Scale system administrators are not allowed to use any of the GPFS commands to manage authentication. It is important for the end user to be aware of the limitations, if any, of the authentication and ID-mapping scheme that is implemented after the user-defined mode of authentication is configured. The user-defined mode of authentication is typically used in the following cases:
- The client already has protocol deployments, either on GPFS installations or on different systems, and plans to move to the protocol stack on the IBM Storage Scale system while replicating the current authentication and ID-mapping configuration. In this case, the client system administrator must be familiar with the required configuration settings that are applied to the system.
- The end user wants an authentication method that is not supported by the IBM Storage Scale system.
To configure the user-defined mode of authentication for file access, issue the following command:
# mmuserauth service create --type userdefined --data-access-method file
File Authentication configuration completed successfully.
# mmuserauth service list
FILE access configuration : USERDEFINED
PARAMETERS VALUES
-------------------------------------------------
OBJECT access not configured
PARAMETERS VALUES
-------------------------------------------------
File authentication configuration
Ensure the following when you manage file authentication in the user-defined mode:
- Ensure that the authentication server and ID-mapping server are always reachable from all the protocol nodes. For example, if NIS is configured as the ID-mapping server, you can use the ypwhich command to confirm that NIS is configured and reachable from each protocol node. Similarly, if LDAP is configured as the authentication and ID-mapping server, you can bind to the LDAP server from each protocol node to verify that it is reachable (see the sketch after this list).
- Ensure that the implemented authentication and ID-mapping configuration is always consistent across all the protocol nodes. This requires the administrator to manually maintain and monitor the authentication server and ID-mapping server. The administrator must also ensure that the configuration files are not overwritten by node restarts or other similar events.
- Ensure that the authentication and ID-mapping daemons and processes are always up and running on all the protocol nodes.
- Users and groups that access the IBM Storage Scale system over the NFS and SMB protocols must resolve to a unique UID and GID, respectively, on all the protocol nodes. This is especially important in implementations where different servers are used for authentication and ID mapping. Check that the user and group names that are registered in the ID-mapping server resolve correctly. For example:
# id fileuser
uid=1234(fileuser) gid=5678(filegroup) groups=5678(filegroup)
Note: In some use cases, only NFSv3-based access to the IBM Storage Scale system is used. In such cases, the user and group IDs are obtained from the NFS client and ID mapping is not configured on the protocol nodes.
- If the IBM Storage Scale system is configured for multiprotocol support (that is, the same data is accessed through both the NFS and SMB protocols), ensure that the IDs of users and groups are consistent across the NFS clients and SMB clients and that they resolve uniquely on the protocol nodes.
- Ensure that the UIDs and GIDs of the users and groups that access the system do not conflict. This check must be strictly enforced, especially in multiprotocol access deployments.
- Ensure that the Kerberos configuration files on all the protocol nodes are synchronized with each other. Ensure that the clients and the IBM Storage Scale system are part of the same Kerberos realm or trusted realms.
- When you deploy two or more IBM Storage Scale clusters, ensure that the ID mapping is consistent across the clusters if you plan to use IBM Storage Scale features such as AFM, AFM-DR, and asynchronous replication of data.
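For example, the following sketch shows one way to spot-check these requirements. It assumes that NIS or LDAP provides the ID mapping; the node names, the LDAP server name, and the base DN are placeholders that must be adapted to the actual environment:
for node in protocolnode1 protocolnode2 protocolnode3; do
    echo "== $node =="
    # NIS only: confirm which NIS server the node is bound to
    ssh "$node" ypwhich
    # LDAP only: confirm that the LDAP server answers a base search
    ssh "$node" ldapsearch -x -H ldap://ldapserver.example.com \
        -b "dc=example,dc=com" -s base "(objectclass=*)"
    # The same user must resolve to the same UID and GID on every node
    ssh "$node" id fileuser
done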
| File access protocol | Requirements |
|---|---|
| NFSv3 | In scenarios where user names and group names are expected to be used with native GPFS commands (for example, to set data ownership or to list user or group quotas), the IBM Storage Scale system must be able to resolve the UID and GID to the user name and group name, and vice versa, consistently across all the protocol nodes. Note: In some use cases, only NFSv3-based access to the IBM Storage Scale system is used; in such cases, the user and group IDs are obtained from the NFS client and ID mapping is not configured on the protocol nodes. |
| Kerberos NFSv3 | Ensure that the user names and group names that are used to access data consistently resolve to the same UID and GID across all the protocol nodes and the NFS clients. Ensure that the time is synchronized on the NFS server, the NFS clients, and the Kerberos server. Note: User names and group names are case-sensitive. |
| NFSv4 | Ensure that the user names and group names that are used to access data consistently resolve to the same UID and GID across all the protocol nodes and the NFS clients. The domain name must be specified in the /etc/idmapd.conf file and it must be the same on both the NFS server and the NFS clients. Note: User names and group names are case-sensitive. |
| Kerberos NFSv4 | Ensure that the user names and group names that are used to access data consistently resolve to the same UID and GID across all the protocol nodes and the NFS clients. Ensure that the time is synchronized on the NFS server, the NFS clients, and the Kerberos server. The domain name and local realms must be specified in the /etc/idmapd.conf file and they must be the same on both the NFS server and the NFS clients (see the example after this table). The value of "Local-Realms" is the Kerberos realm with which the IBM Storage Scale protocol nodes are configured. |
| SMB | Ensure that the user names and group names that are used to access data consistently resolve to the same UID and GID across all the protocol nodes. When you integrate with a non-Windows directory server, ensure that the Samba attributes are populated on the directory server for every user and group that plans to access the IBM Storage Scale system. Special care must be taken to match the Samba domain SIDs. For Kerberized SMB access, ensure that the time is synchronized on the SMB server, the SMB clients, and the Kerberos server. |
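For the NFSv4 and Kerberos NFSv4 rows in the preceding table, the /etc/idmapd.conf settings typically look like the following sketch. The domain and realm values are placeholders; Domain must match the NFS clients, and Local-Realms must match the Kerberos realm that the protocol nodes are configured with:
[General]
Domain = example.com
Local-Realms = EXAMPLE.COM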
Object authentication configuration
Ensure the following when an external Keystone server is used to manage object authentication in the user-defined mode:
- Integration with an external Keystone server is supported over both HTTP and HTTPS.
- The object user that is specified when you enable and configure object access must be defined in the external Keystone server.
- The 'service' tenant or project must be defined in the external Keystone server.
- The 'admin' role must be defined in the external Keystone server.
- Ensure that the specified swift user has the 'admin' role in the 'service' tenant or project (a sketch of how these definitions are typically created follows this list). For example, the external Keystone server must contain the following admin role definition for the swift user:
# openstack role list --user swift --project service
+----------------------------------+-------+---------+-------+
| ID                               | Name  | Project | User  |
+----------------------------------+-------+---------+-------+
| 90877d1913964e1eac05031e45afb46a | admin | service | swift |
+----------------------------------+-------+---------+-------+
- The users and projects must be mapped to the Default domain in Keystone.
- Object storage service endpoints must be correctly defined in the external Keystone server. For example, the external Keystone server must contain the following endpoints for the object-store service:
# openstack endpoint list
+------------+--------+--------------+--------------+---------+-----------+--------------------------------------------------------------+
| ID         | Region | Service Name | Service Type | Enabled | Interface | URL                                                          |
+------------+--------+--------------+--------------+---------+-----------+--------------------------------------------------------------+
| c36e..9da5 | None   | keystone     | identity     | True    | public    | http://specscaleswift.example.com:5000/                      |
| f4d6..b040 | None   | keystone     | identity     | True    | internal  | http://specscaleswift.example.com:35357/                     |
| d390..0bf6 | None   | keystone     | identity     | True    | admin     | http://specscaleswift.example.com:35357/                     |
| 2e63..f023 | None   | swift        | object-store | True    | public    | http://specscaleswift.example.com:8080/v1/AUTH_%(tenant_id)s |
| cd37..9597 | None   | swift        | object-store | True    | internal  | http://specscaleswift.example.com:8080/v1/AUTH_%(tenant_id)s |
| a349..58ef | None   | swift        | object-store | True    | admin     | http://specscaleswift.example.com:8080                       |
+------------+--------+--------------+--------------+---------+-----------+--------------------------------------------------------------+
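If the swift user, the 'service' project, or the role assignment does not exist yet, it can typically be created on the external Keystone server with the standard OpenStack client. The following is only a sketch; the exact procedure depends on how the external Keystone server is managed, and it assumes that the 'admin' role already exists as required above:
# openstack project create --domain Default service
# openstack user create --domain Default --password-prompt swift
# openstack role add --project service --user swift admin
The resulting role assignment can then be verified with the openstack role list command shown earlier.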
- To check the state of the CES services, including authentication, on the protocol nodes, issue the following command:
# mmces state show
A sample output is as follows:
NODE                      AUTH      AUTH_OBJ  NETWORK  NFS       OBJ       SMB      CES
spectrum-31.localnet.com  DISABLED  DEGRADED  HEALTHY  DISABLED  DEGRADED  HEALTHY  DEGRADED
- To list the events that are reported on the protocol nodes, issue the following command:
# mmces events list
A sample output is as follows:
NODE                      TIMESTAMP                             EVENT NAME     SEVERITY  DETAILS
spectrum-31.localnet.com  2015-10-18 18:23:05.386336--1:-1CEST  ks_url_exfail  WARNING   Keystone request failed using http://10.11.0.1:35357/v2.0
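The ks_url_exfail warning indicates that the Keystone URL could not be reached from the protocol node. As a quick check, the URL that is reported in the event details can be probed directly from the affected protocol node, for example:
# curl -i http://10.11.0.1:35357/v2.0
A connection failure here typically points to a network, firewall, or Keystone service problem rather than to the IBM Storage Scale configuration.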