8 replies | Latest post 2013-02-15T11:08:26Z by cmcc
cmcc

Pinned topic GPFS lowDiskSpace event handling doesn't start.

2013-02-10T11:39:57Z
Hi all,

I would like to set up automatic GPFS policy-driven migration on my Linux cluster.
GPFS 3.5 and Tivoli HSM 6.4 have been installed successfully, and I am able to manually migrate (and recall) files on the HSM-managed file system.

Everything seems OK. I also enabled the lowDiskSpace check using the mmaddcallback command:

root@mng2# mmlscallback
LowSpaceCallback
command = /callback/lowspace.ksh
event = lowDiskSpace
node = mng1-ib,mng2-ib,tsm1-ib,tsm2-ib
parms = %eventName %fsName
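
For reference, the callback was registered with an mmaddcallback invocation along these lines (a sketch reconstructed from the mmlscallback output above):

root@mng2# mmaddcallback LowSpaceCallback --command /callback/lowspace.ksh \
    --event lowDiskSpace -N mng1-ib,mng2-ib,tsm1-ib,tsm2-ib \
    --parms "%eventName %fsName"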

root@mng2# cat /callback/lowspace.ksh
#!/bin/ksh
echo "Logging a lowspace event at: `date` " >> /callback/lowspace.log
echo " The event occurred on node: " $1 >> /callback/lowspace.log
echo " The FS name is: " $2 >> /callback/lowspace.log

root@mng2# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 262637668 17952644 231343792 8% /
tmpfs 32958820 0 32958820 0% /dev/shm
/dev/gpfs_home 137657057280 4106190848 133550866432 3% /users/home
/dev/gpfs_scratch 321199800320 29294952448 291904847872 10% /scratch
/dev/gpfs_archive 183542743040 10491449344 173051293696 6% /archive

The HSM-managed file system is about 6% used and the GPFS policy is up and running with a HIGH threshold of 4%, but nothing happens...
I tested the callback functionality with the "leaveNode" event successfully! Can someone explain why only the lowDiskSpace event doesn't run? :-(

This is my GPFS policy:

root@mng2# mmlspolicy /dev/gpfs_archive -L
RULE 'hsmdata' MIGRATE FROM POOL 'system' THRESHOLD(4,3,2) WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'hsm' WHERE FILE_SIZE >= 51200
RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/mmpolicyExec-hsm.athena' OPTS '-v'
RULE 'exclude hsm system files' EXCLUDE WHERE PATH_NAME LIKE '%/.SpaceMan%'
RULE 'Default' SET POOL 'system'

Another question: is there a correct way to choose the 'hsmdata', 'system' and 'hsm' names, or is the naming "free"?

Many Thanks,
Mauro
Updated on 2013-02-15T11:08:26Z by cmcc
  • truongv

    Re: GPFS lowDiskSpace event handling doesn't start.

    2013-02-10T16:39:21Z, in response to cmcc
    Your LowSpaceCallback is restricted to a set of 4 nodes, but the lowDiskSpace event is triggered only on the file system manager node. If the current fsmgr is not one of those 4 nodes, you will never see the lowDiskSpace event. Either move the fsmgr to one of the nodes in your set, or reconfigure the callback without the -N option (it then defaults to all nodes in the cluster).
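
    For example (a sketch; mmlsmgr shows the current manager, mmchmgr moves it, and mmdelcallback/mmaddcallback re-register the callback -- adjust the names to your cluster):

    # Which node is the file system manager right now?
    mmlsmgr gpfs_archive

    # Option 1: move the fsmgr to a node in the callback set
    mmchmgr gpfs_archive tsm1-ib

    # Option 2: re-register the callback without a node restriction
    mmdelcallback LowSpaceCallback
    mmaddcallback LowSpaceCallback --command /callback/lowspace.ksh \
        --event lowDiskSpace --parms "%eventName %fsName"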

    I believe policy ruleName can be anything. It "identifies the rule and is used in diagnostic messages." Please see the Advanced Admin Guide.
    • cmcc

      Re: GPFS lowDiskSpace event handling doesn't start.

      2013-02-11T08:54:57Z, in response to truongv
      Greaaat!!! It works, it works!!! Thank you very much, Truongv!

      For the first time, the lowDiskSpace event was triggered on the file system manager node and the lowspace test script was launched.
      But, if possible, I would like to ask something else (just a few more questions, if you can help me again :-)).

      First of all, let me describe our cluster configuration:

      • 482 compute nodes
      • 2 login nodes
      • 8 I/O nodes for the GPFS file systems
      • 4 TSM/HSM client nodes

      At the moment, all 8 I/O nodes are GPFS quorum-manager or manager nodes (one of them, io5-ib, is the fs manager for the HSM-managed file system).

      Questions:

      • The lowDiskSpace event fires only on the io5-ib node, so the lowspace script executes mmapplypolicy on io5-ib, but io5-ib is not a TSM/HSM node :-( I think I can launch mmapplypolicy (on io5-ib) with the -N option (for example: mmapplypolicy... -N tsm1-ib, tsm2-ib and so on). But what happens if, for maintenance reasons, we need to restart io5-ib after it has launched mmapplypolicy? Will the HSM migration started on the tsm nodes be interrupted or not?

      • I would like to reconfigure the callback on the I/O nodes only (io1-ib, io2-ib and so on...), but what happens if all the I/O nodes launch many mmapplypolicy runs? Is there an IBM "certified" script that prevents this issue?

      Thanks a lot for your support.
      Have a great day,
      Mauro T.
      • truongv

        Re: GPFS lowDiskSpace event handling doesn't start.

        2013-02-11T22:20:46Z, in response to cmcc
        If mmapplypolicy needs to be run on an HSM server, you can script your callback to execute it remotely on the HSM server via ssh/rsh. The mmapplypolicy command also has a --single-instance option to ensure that only one mmapplypolicy process runs per file system at a time.

        From the expert: you need to reorder your rules, since the EXCLUDE rule should always precede the MIGRATE rule...

        
        RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/mmpolicyExec-hsm' OPTS '-v'
        RULE 'exclude hsm system files' EXCLUDE WHERE PATH_NAME LIKE '%/.SpaceMan%'
        RULE 'hsmdata' MIGRATE FROM POOL 'system' THRESHOLD(4,3,2) WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'hsm' WHERE FILE_SIZE >= 51200
        RULE 'Default' SET POOL 'system'
        


        Here is a sample of the callback:
        
        # mmlscallback
        lowDiskSpace
            command = /log/callbackup.hsmspace.ksh
            event   = lowDiskSpace
            parms   = %myNode %fsName

        # cat /log/callbackup.hsmspace.ksh
        #!/bin/ksh
        echo " -------------------------------------------------" >> /hsm211/lowspace.log
        echo "Logging from node $1" >> /hsm211/lowspace.log
        echo "    Logging a lowspace event at: `date` " >> /hsm211/lowspace.log
        echo "    The FS name is: " $2 >> /hsm211/lowspace.log

        HSMowner=$(/usr/lpp/mmfs/bin/mmdsh -L hs21n63 'dsmmigfs q -d' | awk '/Owner/{print $4}')
        echo "   HSM owner node is $HSMowner" >> /hsm211/lowspace.log
        echo "   Executing mmapplypolicy : move data to HSM on node $HSMowner from file system $2" >> /hsm211/lowspace.log
        echo "   /usr/lpp/mmfs/bin/mmdsh -v -N $HSMowner /usr/lpp/mmfs/bin/mmapplypolicy $2 -N hs21n63,hs21n65 --single-instance" >> /hsm211/lowspace.log
        /usr/lpp/mmfs/bin/mmdsh -v -N $HSMowner "/usr/lpp/mmfs/bin/mmapplypolicy $2 -N hs21n63,hs21n65 --single-instance" 2>&1 >> /hsm211/lowspace.log

        echo "End of mmapplypolicy run `date`" >> /hsm211/lowspace.log
        echo "DONE" >> /hsm211/lowspace.log
        
        • cmcc

          Re: GPFS lowDiskSpace event handling doesn't start.

          2013-02-13T09:06:07Z, in response to truongv
          Hi Truongv,

          thank you very much for your support.
          Now the GPFS policy-driven migration starts automatically and works correctly! :-)

          I have modified your callback script to add some checks, but I still have to add instructions to back up files before migrating them: there are a lot of ANS9297W errors in the log file.

          Logging from node io5.cluster.net
          Logging a lowspace event at: Wed Feb 13 01:00:41 CET 2013
          The FS name is: gpfs_archive
          HSM owner node is mng2-ib
          Executing mmapplypolicy : move data to HSM on node mng2-ib from file system gpfs_archive
          /usr/lpp/mmfs/bin/mmdsh -v -N mng2-ib /usr/lpp/mmfs/bin/mmapplypolicy gpfs_archive -N tsm1-ib --single-instance
          mng2-ib: [I] GPFS Current Data Pool Utilization in KB and %
          mng2-ib: system 10491424768 183542743040 5.716066%
          mng2-ib: [I] 4677 of 134217728 inodes used: 0.003485%.
          mng2-ib: [I] Loaded policy rules from /var/mmfs/tmp/cmdTmpDir.mmapplypolicy.8538/tspolicyFile.
          mng2-ib: Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2013-02-13@00:02:38 UTC
          mng2-ib: parsed 1 Placement Rules, 0 Restore Rules, 2 Migrate/Delete/Exclude Rules,
          mng2-ib: 0 List Rules, 1 External Pool/List Rules
          mng2-ib: RULE EXTERNAL POOL 'hsm' EXEC '/var/mmfs/etc/mmpolicyExec-hsm.athena' OPTS '-v'
          mng2-ib: RULE 'exclude hsm system files' EXCLUDE WHERE PATH_NAME LIKE '%/.SpaceMan%'
          mng2-ib: RULE 'hsmdata' MIGRATE FROM POOL 'system' THRESHOLD(4,3,2) WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'hsm' WHERE FILE_SIZE >= 51200
          mng2-ib: RULE 'Default' SET POOL 'system'
          mng2-ib: [I] Messages tagged with <1> are from node localhost.
          mng2-ib: <1> /var/mmfs/etc/mmpolicyExec-hsm.athena TEST /archive -v
          mng2-ib: <1> TEST -x /usr/bin/dsmmigrate
          mng2-ib: <1> TEST -x /usr/bin/dsmrecall
          mng2-ib: <1> /var/mmfs/etc/mmpolicyExec-hsm.athena: TEST Ok
          mng2-ib: [I] Directories scan: 144 files, 17 directories, 0 other objects, 0 'skipped' files and/or errors.
          mng2-ib: [I] Summary of Rule Applicability and File Choices:
          mng2-ib: Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
          mng2-ib: 0 23 618427520 0 0 0 RULE 'exclude hsm system files' EXCLUDE WHERE(.)
          mng2-ib: 1 114 9863288320 52 5078128640 0 RULE 'hsmdata' MIGRATE FROM POOL 'system' THRESHOLD(4,3,2) WEIGHT(.) TO POOL 'hsm' WHERE(.)
          mng2-ib: + + + 18 1757813760 0 (PREMIGRATE)
          mng2-ib: + + + 13 0 0 (ALREADY CO-MANAGED)
          mng2-ib:
          mng2-ib: [I] Filesystem objects with no applicable rules: 24.
          mng2-ib:
          mng2-ib: [I] GPFS Policy Decisions and File Choice Totals:
          mng2-ib: Chose to migrate 5078128640KB: 52 of 114 candidates;
          mng2-ib: Chose to premigrate 1757813760KB: 18 candidates;
          mng2-ib: Already co-managed 0KB: 13 candidates;
          mng2-ib: Chose to delete 0KB: 0 of 0 candidates;
          mng2-ib: Chose to list 0KB: 0 of 0 candidates;
          mng2-ib: 0KB of chosen data is illplaced or illreplicated;
          mng2-ib: Predicted Data Pool Utilization in KB and %:
          mng2-ib: system 5413296128 183542743040 2.949338%
          mng2-ib: <1> /var/mmfs/etc/mmpolicyExec-hsm.athena MIGRATE /tmp/mmPolicy.ix.8838.A79D5113.1 -v
          mng2-ib: <1> /usr/bin/dsmmigrate -detail -filelist=/tmp/mmPolicy.ix.8838.A79D5113.1.hsm
          mng2-ib: <1> IBM Tivoli Storage Manager
          mng2-ib: <1> Command Line Space Management Client Interface
          mng2-ib: <1> Client Version 6, Release 4, Level 0.0
          mng2-ib: <1> Client date/time: 02/13/2013 00:02:39
          mng2-ib: <1> (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved.
          mng2-ib: <1>
          mng2-ib: <1> Session established with server TSMSERVER: AIX-RS/6000
          mng2-ib: <1> Server Version 5, Release 5, Level 6.0
          mng2-ib: <1> Server date/time: 02/13/2013 01:02:06 Last access: 02/10/2013 11:41:30
          mng2-ib: <1>
          mng2-ib: <1> 02/13/2013 00:02:41 ANS9297W File /archive/sysm02/test_files_hsm/hsmfile_100GB.2 is skipped for migration: No backup copy found.
          mng2-ib: <1> 02/13/2013 00:02:41 ANS9297W File /archive/sysm02/test_files_hsm/hsmfile_100GB.3 is skipped for migration: No backup copy found.
          mng2-ib: <1> 02/13/2013 00:02:41 ANS9297W File /archive/sysm02/test_files_hsm/hsmfile_100GB.4 is skipped for migration: No backup copy found.
          http://...

          I think the work will be complete soon.
          Many thanks,
          Mauro T.
          • cmcc

            Re: GPFS lowDiskSpace event handling doesn't start.

            2013-02-13T22:37:36Z, in response to cmcc
            Hi Truongv,

            do you know of a way to back up the list of files that will be migrated by the GPFS policy-driven migration?
            I don't want to back up all the data residing on the HSM-managed file system...

            Do you think that there is a way to do it?

            Thanks in advance.
            Regards,
            Mauro T.
            • truongv

              Re: GPFS lowDiskSpace event handling doesn't start.

              2013-02-14T15:27:45Z, in response to cmcc
              Below is from someone who knows TSM/HSM:

              There are a few ways to do it.

              One way is to use the file list provided by the policy on the lowspace event, and use that list in the exec script for both dsmc selective and dsmmigrate.
              This requires changing or creating a custom exec script; see the sketch below.
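
              A minimal sketch of such a wrapper (untested; it assumes the stock sample script /var/mmfs/etc/mmpolicyExec-hsm and the usual "... -- /path/name" record format of the policy file list, so verify both on your system):

              #!/bin/ksh
              # Hypothetical wrapper: back up the migration candidates with
              # "dsmc selective" before handing the list to the HSM exec script,
              # so dsmmigrate no longer skips them with ANS9297W.
              OP=$1      # operation passed by mmapplypolicy: TEST, MIGRATE, ...
              LIST=$2    # policy-generated file list

              if [ "$OP" = "MIGRATE" ]; then
                  # Keep only the path names from the policy records
                  # (the "-- /path" record format is an assumption; check yours)
                  sed 's/^.* -- //' "$LIST" > "$LIST.paths"
                  # Back up the candidates first
                  /usr/bin/dsmc selective -filelist="$LIST.paths" || exit 1
              fi

              # Fall through to the regular HSM exec script
              exec /var/mmfs/etc/mmpolicyExec-hsm "$OP" "$LIST"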

              Most customers back up their systems on a regular basis, so migration should not need its own backup step. Also, their migration rules usually select only older files as migration candidates, which allows the backup to run before migration in all cases.
              Also keep in mind that if migration and backup go to the same TSM server, a backup will not cause a recall of a migrated file: the TSM server does an inline copy.
              • cmcc

                Re: GPFS lowDiskSpace event handling doesn't start.

                2013-02-14T20:48:24Z, in response to truongv
                Thanks Truongv,
                I need to back up all files that will be migrated, for a lot of "security" reasons.
                I know I can back up already-migrated data using inline copy, but I prefer to start a backup before any "moving" operation.

                So I have to modify my "custom exec script"... that is, the /var/mmfs/etc/mmpolicyExec-hsm file, isn't it!?
                You are not referring to the callbackup.hsmspace.ksh script, I think...

                Bye bye,
                Mauro
                • cmcc

                  Re: GPFS lowDiskSpace event handling doesn't start.

                  2013-02-15T11:08:26Z, in response to cmcc
                  ALL DONE!!! :-)
                  MISSION COMPLETED

                  Thank you very much for your patience and support.
                  Have a great day,
                  Mauro Tridici