Topic
  • 11 replies
  • Latest Post - ‏2013-01-30T00:45:53Z by SystemAdmin
SystemAdmin
SystemAdmin
2092 Posts

Pinned topic minimizing client <==> server access to improve security?

‏2013-01-28T22:53:47Z |
I'm setting up GPFS 3.5.03 clients under Linux. Is there a detailed list
of commands that the clients may need to execute as root on the GPFS server?

I'm concerned about giving our client machines complete access to our
infrastructure servers (and the reverse is not ideal either), and would
like to limit the clients as much as possible.

Rather than allow the clients full root access to the servers without
passwords, I'd like to use the "command" mechanism within the SSH
authorized_keys file to restrict the programs that the client may run
to a pre-defined set.
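For example, this is roughly what I have in mind (a sketch only; the wrapper path /usr/local/sbin/gpfs-ssh-wrapper and the allowed-command pattern are my own placeholders, and the real list of commands is exactly what I'm asking about):

    # In /root/.ssh/authorized_keys on the GPFS server, restricting the client's key:
    command="/usr/local/sbin/gpfs-ssh-wrapper",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... root@gpfs_client1

with the wrapper itself looking something like:

    #!/bin/sh
    # /usr/local/sbin/gpfs-ssh-wrapper: pass through only known GPFS commands.
    # sshd sets SSH_ORIGINAL_COMMAND to whatever the client asked to run.
    case "$SSH_ORIGINAL_COMMAND" in
        /usr/lpp/mmfs/bin/mm*)
            # word splitting is deliberate in this sketch; a real wrapper would
            # need careful argument handling and a tighter whitelist
            exec $SSH_ORIGINAL_COMMAND
            ;;
        *)
            echo "gpfs-ssh-wrapper: rejected: $SSH_ORIGINAL_COMMAND" >&2
            exit 1
            ;;
    esac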

Are there any other recommended practices for limiting the access from GPFS
clients to the GPFS servers?
Updated on 2013-01-30T00:45:53Z by SystemAdmin
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T07:06:05Z  
    Since GPFS on any node can access any disk on any other node, it should only be run on nodes where you control root access. It expects a "trusted" environment.

    If you do not trust a user's machine because the user can become root, then it should not be part of the GPFS cluster.

    Use NFS from untrusted client machines to reach GPFS filesystems that are NFS exported. Then access is only to files, which have access control, rather than to raw blocks.
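    For example, on a GPFS node acting as the NFS server, an /etc/exports entry along these lines (path and hostname are placeholders):

        /gpfs1   client1.example.com(rw,sync,root_squash)

    Root on the NFS client is then squashed, and the client never sees the underlying NSDs at all.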
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T17:28:31Z  
    > dlmcnabb wrote:
    > Since GPFS on any node can access any disk on any other node, it should only be run on nodes where you control root access. It expects a "trusted" environment.

    There are two fundamental problems here:

    1. Why should GPFS on any node have access to any disk on any other node?

    As I see it, a GPFS client should not have any access
    to any device that has not been configured as an NSD by
    a GPFS server. For example, gpfs_client1.example.com
    has absolutely no need to access the /dev/sda1
    partition (a local, non-NSD, non-shared disk) on
    gpfs_server1.example.com, or the reverse.
    2. There is no need for the "trusted" environment to give the level of
    access that GPFS seems to request.

    A fundamental principle of computer security is privilege
    minimization. Setting up GPFS with unlimited, passwordless
    access between nodes violates that concept. Why should
    a GPFS node have unlimited rights to run any command
    on any other GPFS node? There is no reason why a GPFS
    client should have the right to run "rm -rf /" or "fdisk"
    on another node, but those rights are granted.
    >
    > If you do not trust a user's machine because the user can become root, then it should not be part of the GPFS cluster.

    That is a flawed approach. Just because I trust a client machine today,
    January 29, 2013, doesn't mean that there will not be a security flaw
    discovered or that the machine will not be compromised tomorrow. The
    default stance should be to permit only the minimum, well-defined
    set of commands and network traffic that is required, not to simply
    permit everything, bidirectionally, with root privileges.

    >
    > Use NFS from untrusted client machines to reach GPFS filesystems that are NFS exported. Then access is only to files, which have access control, rather than to raw blocks.

    Yes, I understand NFS. The issue here is not simply block access from
    a client to resources from a GPFS server, but unlimited access to every
    command and resource from one machine to another.

    Certainly, I see that members of a GPFS cluster need to signal each other
    to run commands with root privileges (mmmount, mmumount, etc.). Ideally,
    the trust relationship between GPFS nodes would be just that--based on
    signals (sent via ssh or some other cryptographically secured channel).

    I could understand if the current GPFS architecture required nodes to be
    able to execute commands on other nodes as root, for example, commands
    in /usr/lpp/mmfs/bin.

    I cannot understand why the configuration should require unlimited access
    from one node to another. This is like arguing that, because GPFS uses
    network communication between nodes, all firewalls between nodes should
    be completely disabled, instead of listing the specific ports that are
    required (TCP/1191) and letting the firewall block undesired traffic.
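    To make that concrete, the firewall side can already be handled today. A
    minimal sketch (iptables syntax; 10.0.0.0/24 stands in for the cluster's
    private subnet, and TCP/1191 assumes the default tscTcpPort):

        # allow GPFS daemon-to-daemon traffic (mmfsd) only from cluster nodes
        iptables -A INPUT -p tcp --dport 1191 -s 10.0.0.0/24 -j ACCEPT
        # and drop GPFS traffic from everywhere else
        iptables -A INPUT -p tcp --dport 1191 -j DROP

    I believe the administrative commands can also be pinned to a fixed port
    range with mmchconfig tscCmdPortRange, so those could be filtered the same
    way. I would like to do the equivalent for the ssh side.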

    Can IBM provide a list of the commands that a GPFS node may need to
    execute remotely on another node? Such a list would allow users to
    configure existing tools (ssh) to significantly improve security.
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T17:36:07Z  
    One option is to use GPFS multicluster. You create a separate cluster containing the GPFS clients. The two clusters can mount each other's filesystems, but do not share root access. The client cluster's root password and ssh/rsh access are separate from those of the server cluster. The clients still have access to the files in GPFS and can wreak havoc there, but your servers as such are safe.
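    Roughly, the setup looks like this (a sketch from memory; cluster names,
    filesystem names and key-file paths are placeholders, and the exact options
    should be checked against the mmauth, mmremotecluster and mmremotefs man
    pages):

        # On the owning (server) cluster:
        mmauth genkey new
        mmauth update . -l AUTHONLY                      # require authenticated remote clusters
        mmauth add clients.example.com -k /tmp/clients_id_rsa.pub
        mmauth grant clients.example.com -f /dev/gpfs1   # expose this filesystem only

        # On the accessing (client) cluster:
        mmauth genkey new
        mmremotecluster add servers.example.com -n server1,server2 -k /tmp/servers_id_rsa.pub
        mmremotefs add rgpfs1 -f /dev/gpfs1 -C servers.example.com -T /gpfs1
        mmmount rgpfs1 -a

    The public keys (generated under /var/mmfs/ssl/) have to be exchanged between
    the two clusters out of band before the mmauth add / mmremotecluster add steps.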
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T18:22:26Z  
    In multi-cluster you can do root squashing for file access control, but it does not prevent root from accessing blocks served out by any NSD server in the cluster. There is no disk protection because an NSD server will read any block from any disk it can see and pass the contents back to a requesting GPFS node anywhere in the local or remote clusters.

    If root is compromised on any node participating in the GPFS clusters, then a malicious user can get to any disk in any of the GPFS filesystems.
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T18:29:19Z  
    GPFS, in recent releases, actually does a few things to fix the flaws you are talking about:

    • There is GPFS multicluster, where you have 'client' clusters, completely separate from 'server' clusters. You grant access to filesystems much as you would with NFS exports. Apart from the explicitly shared filesystems, you have complete security and separation.

    • Recent GPFS releases also allow for 'central' administration, where a secured administrative node is allowed access to the client nodes, but the client nodes have no need to access other nodes. The restriction is that you must run all 'mm' commands from the admin node.

    Both of these features cover your need for 'client' nodes, where a user, even as root, cannot log in or execute commands on other nodes through GPFS.
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T19:23:40Z  
    > dlmcnabb wrote:
    > In multi-cluster you can do root squashing for file access control, but it does not prevent root from accessing blocks served out by any NSD server in the cluster. There is no disk protection because an NSD server will read any block from any disk it can see and pass the contents back to a requesting GPFS node anywhere in the local or remote clusters.
    >
    > If root is compromised on any node participating in the GPFS clusters, then a malicious user can get to any disk in any of the GPFS filesystems.

    Sure, that's completely understandable. My objection was to the apparent need for each node to have complete access to every other node via rsh or ssh.
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T20:49:55Z  
    > markus_b wrote:
    > GPFS, in recent releases, actually does a few things to fix the flaws you are talking about:

    I'll look at upgrading from 3.5.0.3 to 3.5.0.7.

    >
    > - There is GPFS multicluster, where you have 'client' clusters, completely separate from 'server' clusters. You grant access to filesystems much as you would with NFS exports. Apart from the explicitly shared filesystems, you have complete security and separation.
    >
    > - Recent GPFS releases also allow for 'central' administration, where a secured administrative node is allowed access to the client nodes, but the client nodes have no need to access other nodes. The restriction is that you must run all 'mm' commands from the admin node.

    Thanks, that sounds like just what I'm looking for.

    >
    > Both of these features cover your need for 'client' nodes, where a user, even as root, cannot log in or execute commands on other nodes through GPFS.
  • dlmcnabb
    dlmcnabb
    1012 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T21:47:14Z  
    If you configure adminMode=central, then only one (or a few) nodes in the cluster need passwordless ssh/rsh access to the other nodes (and the remote shell must not print banners in its output). You only need one-way ssh access, and you can set up something like ssh-agent to handle the key's passphrase.
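    In practice that is just (a sketch; verify against the mmchconfig documentation for your release):

        # run once, from the node you intend to administer the cluster from
        mmchconfig adminMode=central

        # confirm the setting
        mmlsconfig adminMode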
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-29T22:51:46Z  
    > GPFSuser wrote:
    > I'll look at upgrading from 3.5.0.3 to 3.5.0.7.

    You are up to date enough for the goodies I was talking about; adminMode=central was introduced back in GPFS V3.3, and cluster-to-cluster mounting even earlier.
  • SystemAdmin
    SystemAdmin
    2092 Posts

    Re: minimizing client <==> server access to improve security?

    ‏2013-01-30T00:45:53Z  
    Yeah, I now realize that those features (adminMode=central, cluster-to-cluster mounts) are in the release I'm running. Until now, our environment only had GPFS servers (with full trust via passwordless SSH), and the guides I found (our internal docs on the installation procedure, various other guides) all seemed to use the full-trust model, which is what I want to avoid between client nodes and the GPFS servers. In looking for information about setting up a GPFS client, I didn't find alternatives to the examples with full client<==>server trust. Having the features, and even documentation that describes their syntax, doesn't necessarily explain how to use them.
    From the Administration and Programming Reference, I see that "adminMode=central" is described as:

    Indicates that only a subset of the nodes will be used for running GPFS commands and that only
    those nodes will be able to execute remote commands on the rest of the nodes in the cluster without
    the need of a password.

    but that doesn't give detail on how this mode would be used.

    I began looking for info about where the list of privileged nodes is defined, but now I think I understand that there is no defined subset... Whatever node runs an "mm" command must have remote access to the other nodes, but that access does not need to be symmetrical.
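    If that is right, the key distribution only needs to go one way; something
    like this (a sketch, with admin1 as a hypothetical designated admin node):

        # on admin1: generate a dedicated root key once
        ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''

        # on every other node: append admin1's public key to /root/.ssh/authorized_keys,
        # optionally restricted to admin1's address; nothing is added in the
        # other direction
        from="admin1.example.com" ssh-rsa AAAA... root@admin1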

    This thread has the best explanation I've found so far:

    http://www.ibm.com/developerworks/forums/thread.jspa?threadID=274542

    I'm sorry I didn't search more closely & find that information before starting this discussion, and I'm glad that most of the capabilities I was seeking already exist.