Enabling FIPS mode on IAS

The configure_fips.py script manages the FIPS settings on IAS.
  • It is located on every node at /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py.
  • You can run it from any node.
  • The script applies the changes on all the nodes.
  • After you enable or disable FIPS, you must restart all of the nodes for the changes to take effect.
  • The log file is located at /var/log/appliance/platform/xcat/configure_fips.log.tracelog.
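For example, to check the current FIPS state on every node before you make a change, you can run a loop like the following from any node (a minimal sketch; it assumes the display_nodes.py helper, which is also used later in this procedure, prints the host name of every node):
  for node in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py); do echo -n "$node: "; ssh $node "sysctl -n crypto.fips_enabled"; done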

The configure_fips.py script

[root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --h

usage: configure_fips.py [-h] [--enable] [--disable]

This script will manage fips settings

optional arguments:
  -h, --help  show this help message and exit
  --enable
  --disable
[root@e1n1 ~]#
-h | --help
Displays this help message and exits.
--enable
Enables FIPS on every node. You can verify the outcome of the command by running the sysctl crypto.fips_enabled command from a node. For example:
[root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --enable
[root@e1n1 ~]#   # reboot all the nodes
[root@e1n1 ~]# sysctl crypto.fips_enabled
crypto.fips_enabled = 1
[root@e1n1 ~]#
--disable
Disables FIPS on every node. You can verify the outcome of the command by running the sysctl crypto.fips_enabled command from a node. For example:
[root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --disable
[root@e1n1 ~]#   # reboot all the nodes
[root@e1n1 ~]# sysctl crypto.fips_enabled
crypto.fips_enabled = 0
[root@e1n1 ~]#

Procedure

  1. Log in as the root user.
  2. Run one of the following commands:
    • To enable FIPS:
      [root@node0101 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --enable
    • To disable FIPS:
      [root@node0101 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --disable
    For a multiple rack system, you must run the command on the head node of each rack.
  3. If you are on 1.0.24.0, enable or disable FIPS at the GPFS level.
    • Enable FIPS at the GPFS level:
      mmchconfig FIPS1402mode=yes
    • Disable FIPS at the GPFS level:
      mmchconfig FIPS1402mode=no
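    You can confirm the current setting by querying it with the mmlsconfig command, for example (a minimal check; the exact output format can vary by GPFS release):
      mmlsconfig FIPS1402mode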
  4. If you are on 1.0.24.0, generate FIPS-compliant authentication keys for GPFS.

    Follow the steps that are described in Updating a GPFS cluster to nistCompliance SP800-131A.
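    The commands that topic describes are typically of the following form (a hedged sketch only; follow the referenced topic for the authoritative sequence and prerequisites):
      mmauth genkey new
      mmauth genkey commit
      mmchconfig nistCompliance=SP800-131A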

  5. Restart all of the nodes by running the following appliance shutdown and startup steps.
    Note: For a multiple rack system, you must run the commands from the head node of each rack.
    1. Verify that the system is active and ready. From node0101, run:
      ap state -d
      Example:
      [root@gt14-node1 ~]# ap state -d
      System state is 'Ready'
      Application state is 'Ready'
      Platform management state is 'Active'
    2. Stop the system and services. From node0101, run:
      apstop
      apstop --service
    3. Shut down the GPFS file system from the first control node (for example, node0101):
      • If you are on 1.0.30.0 or a later version, run the following commands:
        mmumount all -a
        mmshutdown -a
      • For all other versions, run the following commands:
        systemctl stop nfs
        mmumount all -a
        mmshutdown -a
    4. Verify that the GPFS file systems are unmounted:
      mmlsmount all -L
      Example:
      [root@node0101 ~]# mmlsmount all -L
      mmcommon: mmremote command cannot be executed. Either none of the nodes in the cluster are reachable, or GPFS is down on all of the nodes.
      mmlsmount: Command failed. Examine previous error messages to determine cause.
    5. Shut down docker.
      Note: If you are on 1.0.30.0 or a later version, use podman instead of docker.

      If you are on 1.0.25.0, skip this step.

      for node in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py); do ssh $node "service docker stop";done
    6. Restart all nodes, starting from the last node:
      for ip in node0{1..1}0{7..1}; do echo $ip; ssh $ip 'shutdown -r'; done
      This example is for a single-rack system; the brace expansion restarts the nodes in reverse order, from node0107 down to node0101.
    7. Start docker.
      Note: If you are on 1.0.30.0 or a later version, use podman instead of docker.

      If you are on 1.0.25.0, skip this step.

      for node in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py); do ssh $node "service docker start";done
    8. Start GPFS, mount the GPFS file systems, and verify the GPFS state:
      mmstartup -a
      mmmount all -a
      mmgetstate -aLv
      Note: For a multiple rack system, you must run the commands from the head node of each rack.
    9. Run the apstart command and wait for the system to come online:
      apstart
    10. Verify that the system and application states are 'Ready' and that the platform management state is 'Active':
      ap state -d
      Example:
      [root@node0101 ~]# ap state -d
      System state is 'Ready'
      Application state is 'Ready'
      Platform management state is 'Active'
    11. Run the ap node and ap sw commands to verify that the appliance is working correctly:
      ap node
      ap sw