Pinned topic: mmbackup long expiration time

2 replies | Latest post 2013-05-06T15:49:01Z by TSM_Development(Dominic)

glore (6 Posts) | 2013-04-19T09:01:34Z

Hi all,

I am experiencing a very long expiration time with TSM v6.3.3 and GPFS v3.5.0.7.

We used mmbackup to back up several ~50 TB file systems with many small files (up to 50 million).

The number of files expired per day is high, sometimes up to 1 million.

We run mmbackup with up to 3 helper nodes, so the total number of parallel dsmc sessions is 4.
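For reference, a sketch of the kind of invocation we use (the file system device and node names below are placeholders):

    # incremental mmbackup spread across four nodes via the -N node list,
    # giving four parallel dsmc sessions (assuming one session per node)
    mmbackup /dev/gpfs01 -t incremental -N nsd1,nsd2,nsd3,nsd4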

According to TSM support, the very long expiration time (up to 12 hours) is caused by mmbackup running multiple parallel sessions that expire files for the same file system, which produces row-lock contention in the TSM database.
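If it helps anyone reproduce the diagnosis, the parallel dsmc sessions can be watched from the TSM administrative client (the admin ID and password here are placeholders):

    # list active client sessions on the TSM server; during mmbackup we
    # see several dsmc sessions for the same file system expiring files
    dsmadmc -id=admin -password=secret "query session format=detailed"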

Has anyone experienced the same kind of problem?

Any suggestions on how to work around this?

Thanks in advance,

Giuseppe

Updated on 2013-04-19T09:02:15Z by glore
  • sberman (59 Posts)

    Re: mmbackup long expiration time

    2013-05-06T14:59:28Z, in response to glore

    Giuseppe,

    Yes, other customers who regularly delete large numbers of files see this as well. There are several possible remedies to reduce the problematic TSM database locking contention.

    1) You can reduce the parallel expire load on the TSM server by running the backup with a smaller number of nodes (-N <nodes>). Although the backup activity will go slower, the expire may go faster for you; see the sketch below.

    2) There is a fix coming in a later version of the TSM client that will deliver some improvement by batching larger numbers of expires into a single transaction. That release is not out yet, but if you watch the TSM announcements it should arrive in a fix release soon.
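    As a sketch of option 1 (the file system device and node name are placeholders), restricting mmbackup to a single node serializes the expire work:

        # single-node run: one dsmc session, so file expirations are not
        # issued in parallel and the TSM database sees far less row-lock
        # contention (backup throughput drops accordingly)
        mmbackup /dev/gpfs01 -t incremental -N node1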

    GPFS ILM developers are working with the TSM team to address this performance concern.

    Regards,