Topic
  • 6 replies
  • Latest Post - ‏2013-10-03T10:19:03Z by TheLastWilson
JohnKaufmann
6 Posts

Pinned topic AFM question

‏2013-09-11T20:22:47Z

When using mmcrfileset to create a fileset on the cache, does the afmTarget need to be each node of the cluster?

An example used in a recent class (below) shows the afmTarget as just one node of a two-node cluster (the a and b servers); but since this was an ad hoc demonstration, perhaps it was a "quick implementation" and the second node was simply not added?

mmcrfileset is WCDAS_cache \
  -p afmTarget=nne-isfile100a:/is/RBU_proxy \
  -p afmAsyncDelay=1 \
  -p afmDirLookupRefreshInterval=1 \
  -p afmDirOpenRefreshInterval=1 \
  -p afmMode=independent-writer \
  -p afmFileLookupRefreshInterval=1 \
  -p afmFileOpenRefreshInterval=1 \
  --inode-space=new

It seems like both nodes would need to be defined, but as an AFM newbie I really don't know for sure.

Thanks for any input,

John

  • JohnKaufmann
    6 Posts

    Re: AFM question

    ‏2013-10-01T18:01:11Z  

    As it turns out, you need to use a DNS alias (CNAME) that points to both servers as the afmTarget. We are running that right now and it seems to be working fine. :)
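    For anyone else setting this up: strictly speaking, the round-robin comes from publishing multiple A records under a single name (a CNAME record itself can only point at one target), so the DNS side might look something like the following sketch. The alias name and addresses here are hypothetical:

```text
; hypothetical zone fragment: one alias name, two A records (round-robin)
isfile-afm    IN  A  10.0.0.11   ; nne-isfile100a
isfile-afm    IN  A  10.0.0.12   ; nne-isfile100b
```

    The fileset would then be created against the alias rather than a single node, e.g. -p afmTarget=isfile-afm:/is/RBU_proxy.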

     

  • glore
    6 Posts

    Re: AFM question

    ‏2013-10-02T08:15:54Z  

    Hi,

     

    this is interesting.

    Did you test what happens when one of the two nfs servers is unavailable?

    Does AFM keep working?

     

     

  • TheLastWilson
    6 Posts

    Re: AFM question

    ‏2013-10-02T08:24:46Z  

    • JohnKaufmann
    • ‏2013-10-01T18:01:11Z

    As it turns out, you need to use a DNS alias (CNAME) that points to both servers as the afmTarget. We are running that right now and it seems to be working fine. :)

     

    I'm not 100% sure how DNS would react to that configuration, but as I understand it, multiple CNAMEs will round-robin, which risks the AFM link appearing to be down if one of the nodes goes down.

     

    I'm not saying it's wrong, just that I think it's an incomplete solution. We've done some work with AFM in the past, and I think the best solution would be to use two dedicated IP addresses and something like CTDB, so that if one node goes down its IP address fails over to the other node. I'm not sure whether you were going for a performance increase or redundancy, but that should provide both.
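    As a rough sketch of that CTDB approach (file paths are the usual CTDB locations; the interface name and all addresses are made up):

```text
# /etc/ctdb/public_addresses -- floating IPs that CTDB moves between live nodes
10.0.0.21/24 eth0
10.0.0.22/24 eth0

# /etc/ctdb/nodes -- fixed private addresses of the cluster members
10.1.1.1
10.1.1.2
```

    If one node dies, CTDB brings its public address up on the surviving node, so clients (and the AFM cache) keep reaching both IPs.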

  • JohnKaufmann
    6 Posts

    Re: AFM question

    ‏2013-10-02T17:43:15Z  

    • TheLastWilson
    • ‏2013-10-02T08:24:46Z

    I'm not 100% sure how DNS would react to that configuration, but as I understand it, multiple CNAMEs will round-robin, which risks the AFM link appearing to be down if one of the nodes goes down.

     

    I'm not saying it's wrong, just that I think it's an incomplete solution. We've done some work with AFM in the past, and I think the best solution would be to use two dedicated IP addresses and something like CTDB, so that if one node goes down its IP address fails over to the other node. I'm not sure whether you were going for a performance increase or redundancy, but that should provide both.

    So, to be a bit more thorough: we have a CNAME that points to both CNFS IPs of our two file servers in the cluster. DNS does round-robin between those IP addresses, and we also have an F5 in front that load-balances between the two.

    With that said, as I am sure you are familiar, when one CNFS GPFS file server fails, its IP fails over to the other server, so that one server serves both IPs (which the CNAME points to). So far things have been working fine; I will do some further failure testing to verify that there are no issues.
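    For reference, the per-node side of that CNFS failover is configured with mmchnode; a sketch with hypothetical floating IPs (check the exact options against your GPFS release):

```shell
# assign each server its floating CNFS address (hypothetical values)
mmchnode --cnfs-interface=10.0.0.21 -N nne-isfile100a
mmchnode --cnfs-interface=10.0.0.22 -N nne-isfile100b

# show the resulting CNFS configuration
mmlscluster --cnfs
```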

     

  • TheLastWilson
    6 Posts

    Re: AFM question

    ‏2013-10-03T10:14:14Z  

    • JohnKaufmann
    • ‏2013-10-02T17:43:15Z

    So, to be a bit more thorough: we have a CNAME that points to both CNFS IPs of our two file servers in the cluster. DNS does round-robin between those IP addresses, and we also have an F5 in front that load-balances between the two.

    With that said, as I am sure you are familiar, when one CNFS GPFS file server fails, its IP fails over to the other server, so that one server serves both IPs (which the CNAME points to). So far things have been working fine; I will do some further failure testing to verify that there are no issues.

     

    I don't run CNFS myself; I have set up CTDB before to do the same task with Samba rather than NFS.

     

    Sounds like you've got everything covered though :)

  • TheLastWilson
    6 Posts

    Re: AFM question

    ‏2013-10-03T10:19:03Z  
    • glore
    • ‏2013-10-02T08:15:54Z

    Hi,

     

    this is interesting.

    Did you test what happens when one of the two nfs servers is unavailable?

    Does AFM keep working?

     

     

    With CNFS and CTDB in the mix, then it should.


    The risk is that a client node will cache the IP address from the DNS CNAME (or, if you hardcoded the IP address, the effect is the same). If the AFM servers have virtual IP addresses managed by CNFS or CTDB, and you then remove power/comms from one of the nodes, CNFS/CTDB will in effect move the IP address from the down node to the live node. The live node should then be responding to both IP addresses, and the client can continue on its merry way.
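    That client-side behaviour can be sketched in Python (illustrative only; real NFS clients do this in the kernel, and the addresses here are hypothetical):

```python
def pick_server(addresses, is_reachable):
    """Return the first address that answers, as an NFS client effectively does.

    `addresses` stands in for the A records behind the DNS alias; `is_reachable`
    stands in for a real connection attempt. With CNFS/CTDB in play, the IP of
    a failed node is taken over by a surviving node, so every published address
    keeps answering and a client with a cached address carries on unaffected.
    """
    for addr in addresses:
        if is_reachable(addr):
            return addr
    raise ConnectionError("no AFM home server reachable")


# Simulate node "a" losing power while CTDB/CNFS moves its IP onto node "b":
# both (hypothetical) addresses still answer, served by the one live node,
# so a client that cached 10.0.0.11 keeps working.
live = {"10.0.0.11", "10.0.0.12"}
addr = pick_server(["10.0.0.11", "10.0.0.12"], lambda a: a in live)  # -> "10.0.0.11"
```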

    I know CTDB is capable of managing both Samba and NFS, but I'm not sure what the pros and cons are when compared against CNFS.