Topic
  • 4 replies
  • Latest Post - 2013-04-16T16:12:02Z by Benjulios
ShAKE
5 Posts

Pinned topic GPFS mirroring

2007-05-23T04:35:13Z
Hello there,
I need help with GPFS mirroring. Below is the environment:

Two buildings 1 mile apart (prod and DR)

I have a GPFS 2.3 cluster for Oracle 10g RAC.

RAC node 1 sits in building A; RAC node 2 is in building B.
Storage: an IBM 8100 storage array/SAN in building A and a second 8100 array in building B.

Current configuration:

GPFS is using vpaths off of the 8100 SAN in building A and is active and working fine.

I want to mirror the GPFS file systems or NSDs to the 8100 storage in building B.

The current vpaths mapped to GPFS NSDs are:
root@cemcp50 /: lspv | grep vpath
vpath0 none gpfs1nsd
vpath1 00cacecea78d56ab gpfs2nsd
vpath2 00cacecea78d5a04 gpfs3nsd
vpath3 00cacecea78d5d2d gpfs4nsd
vpath4 00cacecea78d6053 gpfs5nsd
vpath5 00cacecea78d6380 gpfs6nsd
vpath6 00cacecea78d6785 gpfs7nsd
vpath7 00cacecea78d6ada gpfs8nsd

I'm thinking of allocating the same number of vpaths from the 8100 in building B.

Where can I find the steps to mirror GPFS to these new vpaths? What is the GPFS equivalent of AIX's extendvg, so that GPFS will know about the new disks?
And how do I test whether the mirroring works, like in AIX where we mirror the volume group and then break the mirror?

Also, the systems will be I/O intensive; any advice on how to lay out the disks for best performance?

Thanks

Shekhar
Updated on 2007-05-23T06:47:24Z by dlmcnabb
  • dlmcnabb
    1012 Posts

    Re: GPFS mirroring

    2007-05-23T05:44:49Z
    GPFS does not do mirroring; it does block-by-block replication. See the Disaster Recovery chapters in the GPFS manuals.

    Basically, you put all the disks from site A in Failure Group 1 (FG1) and the disks from site B in Failure Group 2 (FG2). Create the filesystem with all the replication factors set to 2 (-M 2 -m 2 -R 2 -r 2). This way every block of a file or metadata will have one replica on some disk in FG1 and one replica on some disk in FG2. If either FG1 or FG2 dies, you keep on using the surviving FG. When the down FG revives, you use "mmchdisk $fsname start -a" to mark the down disks as "recovering" and copy all the blocks of files that were modified during the down period from the good disks to the recovering disks. This can be done while the filesystem is live and being used.
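
    For illustration, here is a minimal sketch of what that could look like on the command line. The device name "gpfs0", the site-B disk names, and the exact descriptor fields are assumptions; check the descriptor format for your GPFS release:

    # Disk descriptor file (disks.desc), one line per NSD:
    # DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup
    # Site A disks go in failure group 1, site B disks in failure group 2
    gpfs1nsd:::dataAndMetadata:1
    gpfs9nsd:::dataAndMetadata:2

    # Create the filesystem with default and maximum replication of 2
    mmcrfs /gpfs0 gpfs0 -F disks.desc -M 2 -m 2 -R 2 -r 2

    # After an outage, restart the down disks; GPFS copies back the
    # blocks that were modified while that failure group was down
    mmchdisk gpfs0 start -a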

    To handle automatic recovery from failures, you should have an independent 3rd site with a small machine that runs GPFS but just acts as a voting node and does not need to mount the filesystems. It also has its own disk, which this 3rd-site node serves via NSD; you put it in FG3 and mark it as a descOnly disk. As long as only 1 of the 3 sites dies, the others will have a quorum of nodes and disks and keep on running.
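
    A rough sketch of that tiebreaker disk, again with assumed names ("tiebreaker1" and "node3" are made up); its only role is to hold a copy of the filesystem descriptor:

    # descOnly disk in its own failure group (FG3), served by the
    # 3rd-site node; it stores no file data or metadata
    tiebreaker1:node3::descOnly:3

    Because it carries no file data, this disk can be small and slow; it only breaks ties so that the surviving sites keep a quorum of filesystem descriptors.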
  • ShAKE
    5 Posts

    Re: GPFS mirroring

    2007-05-23T06:21:07Z
    I did not create the existing file system with the replication factors set at 2 (-M 2 -m 2 -R 2 -r 2) as you mentioned in your reply, nor do I have failure groups in the current config.
    Is there any way I can configure the above in the current cluster? If so, how?

    Thanks for the prompt reply. I really appreciate it.
  • dlmcnabb
    1012 Posts

    Re: GPFS mirroring

    2007-05-23T06:47:24Z
    • ShAKE
    • 2007-05-23T06:21:07Z
    I did not create the existing file system with the replication factors set at 2 (-M 2 -m 2 -R 2 -r 2) as you mentioned in your reply, nor do I have failure groups in the current config.
    Is there any way I can configure the above in the current cluster? If so, how?

    Thanks for the prompt reply. I really appreciate it.
    You cannot change the -M or -R values from 1 to 2 after mmcrfs, but if those are already set to 2, you can raise -m and -r up to 2.

    The FG designations can be changed using "mmchdisk $fsname change ...", but if you do not have full replication, then it does not matter.
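
    As a hedged sketch of what that could look like on an existing filesystem (the device name "gpfs0" and the disk names are assumptions, and descriptor fields may vary by release):

    # Check the values that were fixed at mmcrfs time
    mmlsfs gpfs0 -M -R -m -r

    # If -M and -R are already 2, raise the defaults for new data
    mmchfs gpfs0 -m 2 -r 2

    # Move a site-B disk into failure group 2
    mmchdisk gpfs0 change -d "gpfs9nsd:::dataAndMetadata:2"

    # Re-replicate existing files to match the new defaults
    mmrestripefs gpfs0 -R

    For the earlier extendvg question: the closest GPFS analogue for adding the new site-B disks to a live filesystem is mmadddisk, e.g. "mmadddisk gpfs0 -F newdisks.desc".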
  • Benjulios
    1 Post

    Re: GPFS mirroring

    2013-04-16T16:12:02Z
    • dlmcnabb
    • 2007-05-23T05:44:49Z
    GPFS does not do mirroring; it does block-by-block replication. See the Disaster Recovery chapters in the GPFS manuals.

    Basically, you put all the disks from site A in Failure Group 1 (FG1) and the disks from site B in Failure Group 2 (FG2). Create the filesystem with all the replication factors set to 2 (-M 2 -m 2 -R 2 -r 2). This way every block of a file or metadata will have one replica on some disk in FG1 and one replica on some disk in FG2. If either FG1 or FG2 dies, you keep on using the surviving FG. When the down FG revives, you use "mmchdisk $fsname start -a" to mark the down disks as "recovering" and copy all the blocks of files that were modified during the down period from the good disks to the recovering disks. This can be done while the filesystem is live and being used.

    To handle automatic recovery from failures, you should have an independent 3rd site with a small machine that runs GPFS but just acts as a voting node and does not need to mount the filesystems. It also has its own disk, which this 3rd-site node serves via NSD; you put it in FG3 and mark it as a descOnly disk. As long as only 1 of the 3 sites dies, the others will have a quorum of nodes and disks and keep on running.

    Thanks for this very clear and useful explanation.