Configuring the IBM Storage Scale cluster to enable the Cinder driver with RHOSP
Before deploying the Cinder backend configuration in RHOSP, configure the IBM Storage Scale cluster as follows.
- Enable the CES NFS service in IBM Storage Scale.
# mmces service list
Enabled services: NFS
NFS is running
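If NFS does not appear in the list of enabled services, it can be enabled with the mmces command. A minimal sketch, assuming the CES protocol nodes and shared root are already configured:
# mmces service enable NFS
# mmces service list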
- Create an independent fileset to store the data of the OpenStack components (Cinder and Glance).
# mmcrfileset fs1 openstack --inode-space new
# mmlinkfileset fs1 openstack -J /ibm/fs1/openstack
# mmlsfileset fs1 openstack
Filesets in file system 'fs1':
Name                 Status    Path
openstack            Linked    /ibm/fs1/openstack
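The -J junction path above assumes that file system fs1 is mounted at /ibm/fs1. If you are unsure, you can confirm the default mount point before linking the fileset:
# mmlsfs fs1 -T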
- Create the following directories for the OpenStack components (see the ownership note after this list):
- Cinder
# mkdir -p /ibm/fs1/openstack/cinder/volumes
- Glance
# mkdir -p /ibm/fs1/openstack/glance/images
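Depending on how RHOSP is deployed, the containerized cinder and glance services may need write access to these directories over NFS. The UID/GID values below are an assumption (42407 and 42415 are the kolla container defaults for cinder and glance); verify the actual service IDs in your deployment before applying:
# chown -R 42407:42407 /ibm/fs1/openstack/cinder    # assumed cinder UID/GID
# chown -R 42415:42415 /ibm/fs1/openstack/glance    # assumed glance UID/GID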
- Create an NFS export for the following directories:
# mmnfs export add /ibm/fs1/openstack/cinder/volumes --client "10.0.0.0/24(Access_Type=RW,SQUASH=no_root_squash)"
# mmnfs export add /ibm/fs1/openstack/glance/images --client "10.0.0.0/24(Access_Type=RW,SQUASH=no_root_squash)"
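To confirm that both exports exist with the intended client options, you can list the defined NFS exports:
# mmnfs export list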
- Update the NFS configuration to support NFS v4.1.
# mmnfs config change MINOR_VERSIONS=1
mmnfs: The NFS configuration was changed successfully.
mmnfs: NFS server restarted on all NFS nodes on which NFS server is running.
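You can verify that the change took effect by listing the current NFS configuration:
# mmnfs config list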
Note: If multiple protocol nodes serve the NFS share, ensure that the following requirements are met:
- The ssh_host_keys on all the protocol nodes are identical before you integrate the IBM Storage Scale Cinder driver with RHOSP. One way to do this is to copy the ssh_host_keys from one protocol node to the /etc/ssh directory on all other protocol nodes, as shown in the sketch after this note.
- The recover_lost_locks kernel module parameter is enabled on all the compute nodes. To enable recover_lost_locks on the compute nodes, issue the following commands:
# cat > /etc/modprobe.d/nfs4-locks.conf <<EOF
options nfs recover_lost_locks=1
EOF
# [ -d "/sys/module/nfs" ] && echo Y > /sys/module/nfs/parameters/recover_lost_locks
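A minimal sketch of the host key synchronization described in the first note item, assuming node2 and node3 are hypothetical names for the remaining protocol nodes; restart sshd on each target node so the copied keys take effect:
# scp /etc/ssh/ssh_host_* node2:/etc/ssh/
# scp /etc/ssh/ssh_host_* node3:/etc/ssh/
# ssh node2 systemctl restart sshd
# ssh node3 systemctl restart sshd
After enabling recover_lost_locks on a compute node, you can confirm the running value, which should report Y:
# cat /sys/module/nfs/parameters/recover_lost_locks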