Simplified setup: Accessing a remote file system

See an example of how to access an encrypted file in a remote cluster.

This topic shows how to configure a cluster so that it can mount an encrypted file system that is in another cluster. In the examples in this topic, the encrypted file system is c1FileSystem1 and its cluster is Cluster1. The cluster that mounts the encrypted file system is Cluster2.

The examples assume that Cluster1 and c1FileSystem1 are the cluster and file system that you configured in the topic Simplified setup: Using SKLM with a self-signed certificate. You configured Cluster1 for encryption and created a policy that causes all the files in c1FileSystem1 to be encrypted.

To configure Cluster2 with remote access to an encrypted file in Cluster1, you must configure Cluster2 for encryption in much the same way that Cluster1 was configured. As the following table shows, Cluster2 must add the same key server and tenant as Cluster1. However, Cluster2 must create its own key client and register it with the tenant.
Note: In the third column of the table, items in square brackets are added or connected in the course of this topic. The fourth column shows the step in which each of those items is added.
Table 1. Setup of Cluster1 and Cluster2
Item | Cluster1 | Cluster2 | Step
File system | c1FileSystem1 | [c1FileSystem1_Remote] | Step 1
Connected to a key server | keyserver01 | [keyserver01] | Step 2
Connected to a tenant | devG1 on keyserver01 | [devG1 on keyserver01] | Step 3
Created a key client | c1Client1 | [c2Client1] | Step 4
Registered the key client to the tenant | c1Client1 to devG1 | [c2Client1 to devG1] | Step 5
Has access to master encryption keys | c1Client1 | [c2Client1] | Step 6
Has access to encrypted file | Local access to hw.enc in c1FileSystem1 | [Remote access to hw.enc in c1FileSystem1] | Step 6

The encrypted file hw.enc is in c1FileSystem1 on Cluster1. To configure Cluster2 to have remote access to file hw.enc, follow these steps:

  1. From a node in Cluster2, connect to the remote Cluster1:
    1. To set up access to the remote cluster and file system, follow the instructions in the topic Accessing a remote GPFS file system.
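      In outline, that procedure generates a public/private key pair on each cluster with mmauth genkey new, copies each cluster's public key file (/var/mmfs/ssl/id_rsa.pub) to the other cluster, and then authorizes Cluster2 to mount c1FileSystem1. The commands below are a rough sketch only, not a substitute for that topic; the copied key file paths and the contact node name c1node1 are placeholders, and depending on your level of IBM Spectrum Scale some steps require the GPFS daemon to be down. On Cluster1, the owning cluster:
      # mmauth genkey new
      # mmauth update . -l AUTHONLY
      # mmauth add Cluster2.gpfs.net -k /tmp/Cluster2_id_rsa.pub
      # mmauth grant Cluster2.gpfs.net -f c1FileSystem1
      On Cluster2, the accessing cluster:
      # mmauth genkey new
      # mmremotecluster add Cluster1.gpfs.net -n c1node1 -k /tmp/Cluster1_id_rsa.pub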
    2. Run the mmremotefs add command to make the remote file system c1FileSystem1 known to the local cluster, Cluster2:
      Note: c1FileSystem1_Remote is the name by which the remote file system c1FileSystem1 is known to Cluster2.
      # mmremotefs add c1FileSystem1_Remote -f c1FileSystem1 -C Cluster1.gpfs.net -T 
      /c1FileSystem1_Remote -A no
      mmremotefs: Propagating the cluster configuration data to all affected nodes. 
          This is an asynchronous process.
      Tue Mar 29 06:38:07 EDT 2016: mmcommon pushSdr_async: mmsdrfs propagation started.
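      After the remote file system is defined, mount it on Cluster2 so that its files are accessible locally. For example, to mount it on the local node at the mount point that was set with the -T option:
      # mmmount c1FileSystem1_Remote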
      Note: After you complete Step 1(b) and mount the remote file system, an attempt to read the file hw.enc from Cluster2 still fails, because Cluster2 does not yet have access to the master encryption key for the file:
      # cat /c1FileSystem1_Remote/hw.enc
      cat: hw.enc: Operation not permitted
      
      The mmfs.log file shows the reason for the failure:
      Tue Mar 29 06:39:27.306 2016: [E] 
      Key 'KEY-d4e83148-e827-4f54-8e5b-5e1b5cc66de1:keyserver01_devG1' 
      could not be fetched. The specified RKM ID does not exist; 
      check the RKM.conf settings.
  2. From a node in Cluster2, connect to the same SKLM key server, keyserver01, that Cluster1 is connected to:
    1. Run the mmkeyserv server add command to connect to keyserver01:
      # mmkeyserv server add keyserver01
      Enter password for the key server keyserver01:
      The security certificate(s) from keyserver01.gpfs.net must be accepted to continue.
      
      View the certificate(s) to determine whether you want to trust the certifying authority.
      Do you want to view or trust the certificate(s)? (view/yes/no) view
      
      Serial number:          01022a8adf20f3
      SHA-256 digest:         2ca4a48a3038f37d430162be8827d91eb584e98f5b3809047ef4a1c72e15fc4c
      Signature:              7f0312e7be18efd72c9d8f37dbb832724859ba4bb5827c230e2161473e0753b367ed49d
      993505bd23858541475de8e021e0930725abbd3d25b71edc8fc3de20b7c2db5cd4e865f41c7c410c1d710acf222e1c4
      5189108e40568ddcbeb21094264da60a1d96711015a7951eb2655363309d790ab44ee7b26adf8385e2c210b8268c5ae
      de5f82f268554a6fc22ece6efeee2a6264706e71416a0dbe8c39ceacd86054d7cc34dda4fffea4605c037d321290556
      10821af85dd9819a4d7e4baa70c51addcda720d33bc9f8bbde6d292c028b2f525a0275ebea968c26f8f0c4b604719ae
      3b04e71ed7a8188cd6adf68764374b29c91df3d101a941bf8b7189485ad72
      Signature algorithm:    SHA256WithRSASignature
      Key size:               2048
      Issuer:           C=US, O=IBM, OU=SKLMNode, SKLMCell, Root Certificate, CN=c40bbc1xn3.gpfs.net
      Subject:          C=US, O=IBM, OU=SKLMNode, SKLMCell, CN=c40bbc1xn3.gpfs.net
      
      Serial number:          01022a24475466
      SHA-256 digest:         077c3b53c5046aa893b760c11cca3a993efbc729479771e03791f9ed4f716879
      Signature:              227b5befe89f2e55ef628da6b50db1ab842095a54e1505655e3d95fee753a7f7554868a
      a79b294c503dc34562cf69c2a20128796758838968565c0812c4aedbb0543d396646a269c02bf4c5ce5acba4409a10e
      ffbd47ca38ce492698e2dcdc8390b9ae3f4a47c23ee3045ff0145218668f35a63edac68201789ed0db6e5c170f5c6db
      49769f0b4c9a5f208746e4342294c447793ed087fa0ac762588faf420febeb3fca411e4e725bd46476e1f9f44759a69
      6573af5dbbc9553218c7083c80440f2e542bf56cc5cc18156cce05efd6c2e5fea2b886c5c1e262c10af18b13ccf38c3
      533ba025b97bbe62f271545b2ab5c1f50c1dca45ce504dfcfc257362e9b43
      Signature algorithm:    SHA256WithRSASignature
      Key size:               2048
      Issuer:           C=US, O=IBM, OU=SKLMNode, SKLMCell, Root Certificate, CN=c40bbc1xn3.gpfs.net
      Subject:          C=US, O=IBM, OU=SKLMNode, SKLMCell, Root Certificate, CN=c40bbc1xn3.gpfs.net
      
      Do you trust the certificate(s) above? (yes/no) yes
      
    2. Verify that the connection succeeded:
      # mmkeyserv server show
      keyserver01.gpfs.net
              Type:                ISKLM
              IPA:                 192.168.40.59
              User ID:             SKLMAdmin
              REST port:           9080
              Label:               1_keyserver01
              NIST:                on
              FIPS1402:            off
              Backup Key Servers:
              Distribute:          yes
              Retrieval Timeout:   120
              Retrieval Retry:     3
              Retrieval Interval:  10000
      
  3. From a node in Cluster2, add the same tenant, devG1, that Cluster1 added:
    1. Add the tenant devG1:
      # mmkeyserv tenant add devG1 --server keyserver01
      Enter password for the key server keyserver01:
      mmkeyserv: [I] Tenant devG1 belongs to GPFS family exists on the key server.
      Processing continues ...
      mmkeyserv: Propagating the cluster configuration data to all
        affected nodes.  This is an asynchronous process.
      
    2. Verify that the tenant is added:
      # mmkeyserv tenant show
      devG1
              Key Server:          keyserver01.gpfs.net
              Registered Client:   (none)
  4. From a node in Cluster2, create a key client:
    1. Create the key client c2Client1:
      # mmkeyserv client create c2Client1 --server keyserver01
      Enter password for the key server keyserver01:
      Create a pass phrase for keystore:
      Confirm your pass phrase:
      mmkeyserv: Propagating the cluster configuration data to all
        affected nodes.  This is an asynchronous process.
      
    2. Verify that the key client is created:
      # mmkeyserv client show
      c2Client1
              Label:               c2Client1
              Key Server:          keyserver01.gpfs.net
              Tenants:             (none)
      
  5. From a node in Cluster2, register the key client to the same tenant that Cluster1 is registered to. The RKM ID must be the same as the one that Cluster1 uses, so that files that were created with that RKM ID on Cluster1 can be accessed from Cluster2. However, some of the information in the RKM stanza, such as the keystore file, pass phrase, and client certificate label, is specific to the key client in Cluster2:
    1. Register the client in Cluster2 to the same tenant, devG1:
      # mmkeyserv client register c2Client1 --tenant devG1 --rkm-id keyserver01_devG1
      Enter password for the key server keyserver01:
      mmkeyserv: [I] Client currently does not have access to the key. 
      Continue the registration process ...
      mmkeyserv: Successfully accepted client certificate
      mmkeyserv: Propagating the cluster configuration data to all
        affected nodes.  This is an asynchronous process.
      
    2. Verify that the tenant shows that c2Client1 is registered:
      # mmkeyserv tenant show
      devG1
              Key Server:          keyserver01.gpfs.net
              Registered Client:   c2Client1
      
    3. Verify that c2Client1 shows that it is registered to the tenant devG1:
      # mmkeyserv client show
      c2Client1
              Label:               c2Client1
              Key Server:          keyserver01.gpfs.net
              Tenants:             devG1
      
    4. You can display the contents of the new RKM stanza:
      # mmkeyserv rkm show
      keyserver01_devG1 {
        type = ISKLM
        kmipServerUri = tls://192.168.40.59:5696
        keyStore = /var/mmfs/ssl/keyServ/serverKmip.1_keyserver01.c2Client1.1.p12
        passphrase = pwc2Client1
        clientCertLabel = c2Client1
        tenantName = devG1
      }
      
    5. You can also view the RKM stanza by displaying the contents of the RKM.conf file:
      # cat /var/mmfs/ssl/keyServ/RKM.conf
      keyserver01_devG1 {
      type = ISKLM
      kmipServerUri = tls://192.168.40.59:5696
      keyStore = /var/mmfs/ssl/keyServ/serverKmip.1_keyserver01.c2Client1.1.p12
      passphrase = pwc2Client1
      clientCertLabel = c2Client1
      tenantName = devG1
      }
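      For comparison, the corresponding RKM stanza on Cluster1 has the same RKM ID, type, kmipServerUri, and tenantName, but its keyStore, passphrase, and clientCertLabel refer to Cluster1's own key client. The stanza below is illustrative only, following the naming pattern of the previous topic; the actual keystore file name and pass phrase depend on how c1Client1 was created:
      keyserver01_devG1 {
      type = ISKLM
      kmipServerUri = tls://192.168.40.59:5696
      keyStore = /var/mmfs/ssl/keyServ/serverKmip.1_keyserver01.c1Client1.1.p12
      passphrase = pwc1Client1
      clientCertLabel = c1Client1
      tenantName = devG1
      }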
      
  6. From a node in Cluster2, verify that you now have remote access to the encrypted file hw.enc:
    1. Display the contents of the file hw.enc:
      # cat /c1FileSystem1_Remote/hw.enc
      Hello World!
    2. Display the encryption attributes of the file:
      # mmlsattr -n gpfs.Encryption /c1FileSystem1_Remote/hw.enc
      file name: /c1FileSystem1_Remote/hw.enc
      gpfs.Encryption: "EAGC????t!?v??????????? ???????=T??????????????? ???O?3???)??r??nV?K?OA?;????
      ??x,?:w?d???5)?KEY-d4e83148-e827-4f54-8e5b-5e1b5cc66de1?keyserver01_devG1?"
      EncPar 'AES:256:XTS:FEK:HMACSHA512'
      type: wrapped FEK
      WrpPar 'AES:KWRAP'
      CmbPar 'XORHMACSHA512'
      KEY-d4e83148-e827-4f54-8e5b-5e1b5cc66de1:keyserver01_devG1
      
You can now access encrypted files on c1FileSystem1_Remote from Cluster2.
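To see which nodes in Cluster2 have the remote file system mounted, you can run the mmlsmount command with the local device name:
# mmlsmount c1FileSystem1_Remote -L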