Using AFM with object
The following use case describes how AFM can be used with the object capability to provide multiple benefits.
Use case description
In this use case, an object store site is closer to the end application but has a limited storage capacity. To cater to a large storage capacity requirement, another object store is set up at a geographically remote site. This remote site has an expandable storage capacity and acts as a central archive. The relationship between these two object stores must be set up in a way that allows applications to access all object data from the site closer to them. Although this site might have a limited storage capacity, faster access can be ensured.
IBM Storage Scale provides the capability to use AFM with object. The central archive site is set up as the home cluster and the local site is set up as the cache cluster. This capability provides the following benefits:
- Independent writer: While the cache site serves all the Swift Object (REST) clients at its geographic location, the home site too can serve all the Swift Object (REST) clients that are present at its geographic location. All data that is available on the home site can be made available on the cache cluster. Changes that are made on the cache cluster are sent back to the home cluster. However, data from the home site is sent to the cache cluster only when a client at the location of the cache cluster requests an access to the data.
- Failover: If the cache site stops functioning, all the Swift Object (REST) clients that were being served by the cache site are served by the home site after the clients change the target REST point to the home site.
- Failback: When the cache site is repaired and starts functioning again, the clients at the location of the cache site are served by the cache site again after they change the target REST point back to the cache site (see the endpoint example after this list).
- Eviction: Quotas must be set on the cache cluster to activate Auto-eviction. Auto-eviction ensures that when the storage quota is reached, files that have not been accessed for a long period of time are evicted, thereby creating storage space for the new data at the cache site. The evicted files are pulled back from the home site when an access for them is requested.
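In practice, failover and failback amount to repointing the object clients at a different REST endpoint. The following sketch assumes that the python-swiftclient command line client is installed, that an external keystone is used, and that the object proxy listens on the IBM Storage Scale object default port 8080; every address, credential, project value, and CES IP in it is a placeholder for illustration only.
# All addresses and credentials below are placeholders for illustration only.
# Normal operation: authenticate against keystone and target a CES IP of the cache site.
swift --auth-version 3 --os-auth-url http://csc-keystone:35357/v3 \
      --os-project-name <project> --os-username <user> --os-password <password> \
      --os-storage-url http://<cache-ces-ip>:8080/v1/AUTH_<project-id> list

# Failover: the same client repoints its storage URL at a CES IP of the home site.
swift --auth-version 3 --os-auth-url http://csc-keystone:35357/v3 \
      --os-project-name <project> --os-username <user> --os-password <password> \
      --os-storage-url http://<home-ces-ip>:8080/v1/AUTH_<project-id> list

# Failback: switch the storage URL back to the cache site once it is functioning again.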
Object setup with the AFM configuration
This section describes the steps to set up an AFM relationship between two object clusters. In the following example scenario, object I/O operations take place on the cache cluster while the home cluster acts as the secondary backup site. This section also explains the disaster recovery scenario where the cache cluster fails and I/O operations fail over to the home cluster, and the failback steps from the home cluster to the cache cluster after the cache cluster is recovered.
A cache eviction use case is also explained where the cache cluster is smaller than the home cluster and objects from the cache cluster are evicted to make space for the new data.
Example cluster setup
Home cluster (home):
- 3 NSD servers: home-nsd1, home-nsd2, home-nsd3
- 3 protocol nodes: home-protocol1, home-protocol2, home-protocol3
Cache cluster (cache):
- 3 NSD servers: cache-nsd1, cache-nsd2, cache-nsd3
- 3 protocol nodes: cache-protocol1, cache-protocol2, cache-protocol3
- AFM mode: Independent writer
- AFM protocol: GPFS remote cluster based
- Authentication: External keystone (object)
- Home file system: homefs1
- Cache cluster file system: cachefs1
- Object fileset: swift
Steps
- Installing IBM Storage Scale.
- Creating a GPFS file system.
- Setting up the remote cluster.
- Deploying object on the home cluster.
- Setting up AFM.
- Deploying object on the cache cluster.
- Synchronizing the swift builder and ring files between the cache and the home clusters.
- Starting swift services on the cache cluster.
Step 1: Installing IBM Storage Scale
- Install IBM Storage Scale on the home and the cache clusters by using the spectrumscale installer toolkit.
Step 2: Creating a GPFS file system
- Create a GPFS file system by using the GPFS (mm) commands and mount it. On the home cluster:
[home-nsd1]# mmcrcluster -C home -N allnode -r /usr/bin/ssh -R /usr/bin/scp
[home-nsd1]# mmchlicense server --accept -N all
[home-nsd1]# cat nsd
%pool: pool=system blockSize=1M layoutMap=cluster
%nsd: nsd=nsd1 device=/dev/sdb servers=home-nsd1,home-nsd2,home-nsd3 usage=dataAndMetadata failureGroup=101 pool=system
%nsd: nsd=nsd2 device=/dev/sdc servers=home-nsd1,home-nsd2,home-nsd3 usage=dataAndMetadata failureGroup=102 pool=system
%nsd: nsd=nsd3 device=/dev/sdd servers=home-nsd1,home-nsd2,home-nsd3 usage=dataAndMetadata failureGroup=103 pool=system
[home-nsd1]# mmcrnsd -F nsd -v no
[home-nsd1]# mmstartup -a
[home-nsd1]# mmcrfs homefs1 -F nsd -T /gpfs -B 1M -A yes -Q yes -M 3 -m 1 -R 3 -r 1
[home-nsd1]# mmmount all -a
- On the cache cluster:
[cache-nsd1]# mmcrcluster -C cache -N allnode -r /usr/bin/ssh -R /usr/bin/scp
[cache-nsd1]# mmchlicense server --accept -N all
[cache-nsd1]# cat nsd
%pool: pool=system blockSize=1M layoutMap=cluster
%nsd: nsd=nsd1 device=/dev/sdb servers=cache-nsd1,cache-nsd2,cache-nsd3 usage=dataAndMetadata failureGroup=101 pool=system
%nsd: nsd=nsd2 device=/dev/sdc servers=cache-nsd1,cache-nsd2,cache-nsd3 usage=dataAndMetadata failureGroup=102 pool=system
%nsd: nsd=nsd3 device=/dev/sdd servers=cache-nsd1,cache-nsd2,cache-nsd3 usage=dataAndMetadata failureGroup=103 pool=system
[cache-nsd1]# mmcrnsd -F nsd -v no
[cache-nsd1]# mmstartup -a
[cache-nsd1]# mmcrfs cachefs1 -F nsd -T /gpfs -B 1M -A yes -Q yes -M 3 -m 1 -R 3 -r 1
[cache-nsd1]# mmmount all -a
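Optionally, confirm that both clusters are active and that the file systems are mounted before continuing. The following checks are an added suggestion that uses standard GPFS commands:
[home-nsd1]# mmgetstate -a           # all nodes should report "active"
[home-nsd1]# mmlsfs homefs1 -T       # confirm the default mount point /gpfs
[home-nsd1]# mmlsmount homefs1 -L    # list the nodes that mounted homefs1
[cache-nsd1]# mmlsmount cachefs1 -L  # repeat the mount check on the cache cluster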
Step 3: Setting up the remote cluster
- Generate a key on the home cluster by running the following set of commands:
[home-nsd1]# mmauth genkey new
[home-nsd1]# mmauth update . -l AUTHONLY
[home-nsd1]# cp /var/mmfs/ssl/id_rsa.pub ~/id_rsa-home.pub
- Copy the key from the home cluster to the cache cluster:
[home-nsd1]# scp id_rsa-home.pub cache-nsd1:~/
- Generate a key on the cache cluster by running the following set of commands:
[cache-nsd1]# mmauth genkey new
[cache-nsd1]# mmauth update . -l AUTHONLY
[cache-nsd1]# cp /var/mmfs/ssl/id_rsa.pub ~/id_rsa-cache.pub
- Copy the key from the cache cluster to the home cluster:
[cache-nsd1]# scp id_rsa-cache.pub home-nsd1:~/
- Set up the cache key on the home cluster:
[home-nsd1]# mmauth add cache.cache-nsd1 -k id_rsa-cache.pub
[home-nsd1]# mmauth grant cache.cache-nsd1 -f /dev/homefs1
- Define the home cluster as a remote cluster on the cache cluster:
[cache-nsd1]# mmremotecluster add home.home-nsd1 -n home-nsd1,home-nsd2,home-nsd3,home-protocol1,home-protocol2,home-protocol3 -k id_rsa-home.pub
- Add and mount the remote file system of the home cluster on the cache cluster:
[cache-nsd1]# mmremotefs add /dev/homefs1c -f /dev/homefs1 -C home.home-nsd1 -T /homefs1c
[cache-nsd1]# mmmount /dev/homefs1c -a
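Optionally, verify the cross-cluster setup before deploying object. The remote cluster and remote file system definitions can be listed on the cache cluster with standard GPFS commands (a suggested check, not part of the procedure above):
[cache-nsd1]# mmremotecluster show all   # home.home-nsd1 should be listed with its contact nodes
[cache-nsd1]# mmremotefs show all        # homefs1c should map to /dev/homefs1 on home.home-nsd1
[cache-nsd1]# mmlsmount homefs1c -L      # confirm that the remote file system is mounted on the cache nodes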
Step 4: Deploying object on the home cluster
- Configure and deploy the object protocol by using the spectrumscale installer toolkit. Installing the NFS and SMB protocols is optional. Configure the object protocol with the external keystone:
[home-protocol1]# ./spectrumscale node add home-protocol1 -p
[home-protocol1]# ./spectrumscale node add home-protocol2 -p
[home-protocol1]# ./spectrumscale node add home-protocol3 -p
[home-protocol1]# ./spectrumscale config protocols -f homefs1 -m /gpfs
[home-protocol1]# ./spectrumscale config protocols -e 10.91.99.81,10.91.99.82,10.91.99.83
[home-protocol1]# ./spectrumscale enable object
[home-protocol1]# ./spectrumscale config object -f homefs1 -m /gpfs -e csc-object -o swift
[home-protocol1]# ./spectrumscale auth object external
# Please supply the full URL for your external keystone server
keystone_url = http://csc-keystone:35357/v3
[home-protocol1]# ./spectrumscale deploy
- Stop the swift service and release the export IP addresses on the home cluster before deploying the protocol on the cache cluster:
[home-protocol1]# mmces service stop OBJ --all
[home-protocol1]# mmces node suspend -a
[home-protocol1]# mmces node list
Node name        Node Flags
---------------------------
home-protocol1   Suspended
home-protocol3   Suspended
home-protocol2   Suspended
Step 5: Setting up AFM
- Designate the gateway nodes to enable AFM. On the home cluster, run the following command:
[home-nsd1]# mmchnode --gateway -N home-nsd1,home-nsd2,home-nsd3
On the cache cluster, run the following command:
[cache-nsd1]# mmchnode --gateway -N cache-nsd1,cache-nsd2,cache-nsd3
- To allow the swift service to fail over from the cache cluster, synchronize the user metadata (GPFS extended attributes) by running the following command:
[home-nsd1]# mmafmconfig enable /gpfs/swift
- Create an AFM fileset by running the following commands:
[cache-protocol1]# mmcrfileset cachefs1 swift -p afmmode=iw -p afmtarget=gpfs:///homefs1c/swift --inode-space new --inode-limit 8000000
[cache-protocol1]# mmlinkfileset cachefs1 swift -J /gpfs/swift
- Delete the ac and o directories by running the following command:
[cache-protocol1]# rm -rf /gpfs/swift/ac /gpfs/swift/o
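Optionally, after the fileset is linked, its AFM attributes and state can be checked with standard AFM commands (a suggested sanity check; the fileset state is typically Inactive until the first access):
[cache-protocol1]# mmlsfileset cachefs1 swift --afm -L   # shows the AFM target and the iw (independent-writer) mode
[cache-protocol1]# mmafmctl cachefs1 getstate -j swift   # shows the fileset state and the queue length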
Step 6: Deploying object on the cache cluster
- Deploy the object protocol by using the spectrumscale installer toolkit with the external keystone:
[cache-protocol1]# ./spectrumscale node add cache-protocol1 -p
[cache-protocol1]# ./spectrumscale node add cache-protocol2 -p
[cache-protocol1]# ./spectrumscale node add cache-protocol3 -p
[cache-protocol1]# ./spectrumscale config protocols -f cachefs1 -m /gpfs
[cache-protocol1]# ./spectrumscale enable object
[cache-protocol1]# ./spectrumscale config object -f cachefs1 -m /gpfs -e csc-object -o swift
[cache-protocol1]# ./spectrumscale auth object external
[cache-protocol1]# ./spectrumscale deploy
- Stop the swift services and release the export IP addresses:
[cache-protocol1]# mmces service stop OBJ --all
[cache-protocol1]# mmces node suspend -a
Step 7: Synchronizing the swift builder and ring files between the cache and the home clusters
- To fail over the swift service when the cache cluster stops functioning, the following swift builder and ring files must be shared between the cache and the home clusters. These files must be synchronized again whenever the builder and ring files are updated, which happens when the cluster configuration changes from its initial state, for example when protocol nodes are added or removed:
/etc/swift/account.builder
/etc/swift/account.ring.gz
/etc/swift/container.builder
/etc/swift/container.ring.gz
/etc/swift/object.builder
/etc/swift/object.ring.gz
- Copy the files locally from the CCR (clustered configuration repository) by running the following commands:
[cache-protocol1]# mkdir /homefs1c/cache_ring_files
[cache-protocol1]# mmccr fget account.builder /homefs1c/cache_ring_files/account.builder
[cache-protocol1]# mmccr fget container.builder /homefs1c/cache_ring_files/container.builder
[cache-protocol1]# mmccr fget object.builder /homefs1c/cache_ring_files/object.builder
[cache-protocol1]# mmccr fget account.ring.gz /homefs1c/cache_ring_files/account.ring.gz
[cache-protocol1]# mmccr fget container.ring.gz /homefs1c/cache_ring_files/container.ring.gz
[cache-protocol1]# mmccr fget object.ring.gz /homefs1c/cache_ring_files/object.ring.gz
- Save the cache cluster builder and ring files in the CCR and /etc/swift on the home cluster:
[home-protocol1]# mmccr fput account.builder /homefs1/cache_ring_files/account.builder
[home-protocol1]# mmccr fput container.builder /homefs1/cache_ring_files/container.builder
[home-protocol1]# mmccr fput object.builder /homefs1/cache_ring_files/object.builder
[home-protocol1]# mmccr fput account.ring.gz /homefs1/cache_ring_files/account.ring.gz
[home-protocol1]# mmccr fput container.ring.gz /homefs1/cache_ring_files/container.ring.gz
[home-protocol1]# mmccr fput object.ring.gz /homefs1/cache_ring_files/object.ring.gz
- Update the objRing version by running the following command:
[home-protocol1]# mmcesobjcrring --sync
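A quick way to confirm that both clusters now carry identical ring files is to compare checksums on one protocol node of each cluster (an optional check suggested here, not part of the synchronization steps themselves):
[cache-protocol1]# md5sum /etc/swift/*.builder /etc/swift/*.ring.gz
[home-protocol1]# md5sum /etc/swift/*.builder /etc/swift/*.ring.gz
# The checksums of corresponding files should match on both clusters.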
Step 8: Starting the swift services on the cache cluster
- Start the swift services on the cache cluster by running the following commands:
[cache-protocol1]# mmces node resume -a
[cache-protocol1]# mmces service start OBJ --all
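With the services running on the cache cluster, a basic object round trip can serve as a smoke test. The sketch below assumes the python-swiftclient command line client and the external keystone from this example; the project, user, and password values are placeholders, and additional keystone domain options might be required depending on the identity configuration:
# Placeholders: substitute real keystone credentials for your environment.
[cache-protocol1]# export OS_AUTH_URL=http://csc-keystone:35357/v3
[cache-protocol1]# export OS_PROJECT_NAME=<project> OS_USERNAME=<user> OS_PASSWORD=<password>
[cache-protocol1]# swift --auth-version 3 stat                        # account summary served by the cache cluster
[cache-protocol1]# swift --auth-version 3 upload testcont /etc/hosts  # create a container and upload a small object
[cache-protocol1]# swift --auth-version 3 list testcont               # the uploaded object should be listed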
Failover
When the cache cluster fails, applications must be moved to the home cluster.
- To start the swift service on the home cluster, run the following commands:
[home-protocol1]# mmces node resume -a
[home-protocol1]# mmces service start OBJ --all
Applications can now be moved to the home cluster.
Failback
- To stop the swift service on the home cluster, run the following commands:
[home-protocol1]# mmces service stop OBJ --all
[home-protocol1]# mmces node suspend -a
- Recreate the cache GPFS file system by running the following commands:
[cache-nsd1]# mmcrcluster -C cache -N allnode -r /usr/bin/ssh -R /usr/bin/scp
[cache-nsd1]# mmchlicense server --accept -N all
[cache-nsd1]# mmcrnsd -F nsd -v no
[cache-nsd1]# mmstartup -a
[cache-nsd1]# mmcrfs cachefs1 -F nsd -T /gpfs -B 1M -A yes -Q yes -M 3 -m 1 -R 3 -r 1
[cache-nsd1]# mmmount all -a
- Set up the remote mount by running the following command on the home cluster:
[home-nsd1]# scp id_rsa-home.pub cache-nsd1:~/
- On the cache cluster, run the following set of commands:
[cache-nsd1]# mmauth genkey new
[cache-nsd1]# mmauth update . -l AUTHONLY
[cache-nsd1]# cp /var/mmfs/ssl/id_rsa.pub ~/id_rsa-cache2.pub
[cache-nsd1]# scp id_rsa-cache2.pub home-nsd1:~/
- On the home cluster, run the following set of commands:
[home-nsd1]# mmauth add cache.cache-nsd1 -k id_rsa-cache2.pub
[home-nsd1]# mmauth grant cache.cache-nsd1 -f /dev/homefs1
- On the cache cluster, run the following set of commands:
[cache-nsd1]# mmremotecluster add home.home-nsd1 -n home-nsd1,home-nsd2,home-nsd3,home-protocol1,home-protocol2,home-protocol3 -k id_rsa-home.pub
[cache-nsd1]# mmremotefs add /dev/homefs1c -f /dev/homefs1 -C home.home-nsd1 -T /homefs1c
[cache-nsd1]# mmmount /dev/homefs1c -a
- Set up AFM on the cache cluster by running the following set of commands:
[cache-nsd1]# mmchnode --gateway -N cache-nsd1,cache-nsd2,cache-nsd3
[cache-protocol1]# mmunlinkfileset cachefs1 swift -f
[cache-protocol1]# mmdelfileset cachefs1 swift -f
[cache-protocol1]# mmcrfileset cachefs1 swift -p afmmode=iw -p afmtarget=gpfs:///homefs1c/swift --inode-space new --inode-limit 8000000
[cache-protocol1]# mmlinkfileset cachefs1 swift -J /gpfs/swift
- Deploy the object protocol on the cache cluster:
[cache-protocol1]# ./spectrumscale config protocols -f cachefs1 -m /gpfs
[cache-protocol1]# ./spectrumscale config protocols -e 10.91.99.81,10.91.99.82,10.91.99.83
[cache-protocol1]# ./spectrumscale enable object
[cache-protocol1]# ./spectrumscale config object -f cachefs1 -m /gpfs -e csc-object -o swift
[cache-protocol1]# ./spectrumscale auth object external
[cache-protocol1]# ./spectrumscale deploy
- Synchronize the swift builder and the ring files of the home cluster to the new cache cluster:
[home-protocol1]# scp /etc/swift/account.builder 192.168.11.171:~/swift_home/
[home-protocol1]# scp /etc/swift/account.ring.gz 192.168.11.171:~/swift_home/
[home-protocol1]# scp /etc/swift/container.builder 192.168.11.171:~/swift_home/
[home-protocol1]# scp /etc/swift/container.ring.gz 192.168.11.171:~/swift_home/
[home-protocol1]# scp /etc/swift/object.builder 192.168.11.171:~/swift_home/
[home-protocol1]# scp /etc/swift/object.ring.gz 192.168.11.171:~/swift_home/
- On the cache cluster, save the builder and ring files in the CCR and /etc/swift by running the following set of commands:
[cache-protocol1]# mmccr fput account.builder ./swift_home/account.builder
[cache-protocol1]# mmccr fput account.ring.gz ./swift_home/account.ring.gz
[cache-protocol1]# mmccr fput container.builder ./swift_home/container.builder
[cache-protocol1]# mmccr fput container.ring.gz ./swift_home/container.ring.gz
[cache-protocol1]# mmccr fput object.builder ./swift_home/object.builder
[cache-protocol1]# mmccr fput object.ring.gz ./swift_home/object.ring.gz
[cache-protocol1]# cp ./swift_home/* /etc/swift/
- Update the objRing version on the cache cluster by running the following set of commands:
[cache-protocol1]# md5sum /etc/swift/account.builder >> objRingVersion
[cache-protocol1]# md5sum /etc/swift/account.ring.gz >> objRingVersion
[cache-protocol1]# md5sum /etc/swift/container.builder >> objRingVersion
[cache-protocol1]# md5sum /etc/swift/container.ring.gz >> objRingVersion
[cache-protocol1]# md5sum /etc/swift/object.builder >> objRingVersion
[cache-protocol1]# md5sum /etc/swift/object.ring.gz >> objRingVersion
[cache-protocol1]# md5sum /var/mmfs/gen/cesAddressPoolFile >> objRingVersion
[cache-protocol1]# mmccr fput objRingVersion /root/objRingVersion
- Start the swift service on the cache cluster:
[cache-protocol1]# mmces node resume -a
[cache-protocol1]# mmces service start OBJ --all
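After the services are started, the AFM relationship can be monitored until the fileset becomes active and the queue of pending operations drains. These checks are a suggested addition based on standard AFM commands rather than part of the documented failback procedure:
[cache-protocol1]# mmafmctl cachefs1 getstate -j swift   # watch the fileset state and queue length
[cache-protocol1]# ls /gpfs/swift                        # object data from the home cluster is visible and fetched on access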
Cache eviction
With the Cache eviction feature, file data blocks in the cache are released when the fileset usage exceeds the fileset soft quota, thereby creating space for new files. This can be valuable in an object storage configuration to keep the cache site storage size relatively small while having a larger home site. When the cache cluster is about to reach the quota, some objects are evicted to make space for the new objects.
- On the cache cluster, enable per-fileset quota:
[cache-nsd1]# mmchfs cachefs1 --perfileset-quota
- Set the soft and hard quota limits:
[cache-nsd1]# mmedquota -j cachefs1:swift
*** Edit quota limits for FILESET swift
NOTE: block limits will be rounded up to the next multiple of the block size.
block units may be: K, M, G, T or P, inode units may be: K, M or G.
cachefs1: blocks in use: 4207744K, limits (soft = 80G, hard = 100G)
        inodes in use: 436, limits (soft = 0, hard = 0)
- Set the grace period:
[cache-nsd1]# mmedquota -t -j
*** Edit grace times
Time units may be: days, hours, minutes, or seconds
Grace period before enforcing soft limits for FILESETs:
cachefs1: block grace period: 7 days, file grace period: 7 days
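Once the quotas are in place, fileset usage can be monitored and an eviction can also be triggered manually instead of waiting for the soft limit to be crossed. The following commands are a sketch that uses standard quota and AFM commands; they are not part of the quota setup steps above:
[cache-nsd1]# mmlsquota -j swift --block-size G cachefs1   # show current usage against the 80G/100G limits
[cache-nsd1]# mmafmctl cachefs1 evict -j swift             # manually evict cached file data from the fileset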
For more information on cache eviction, see Cache eviction.