Use the configure_fips.py script to manage FIPS settings on Cloud Pak for Data System. FIPS is disabled by default.
About this task
The script is located on every node at:
/opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py.
The script can be run from any node and it applies the changes on all the nodes. Note that all nodes
must be rebooted for the changes to take effect, which means you need to run the system shutdown and
startup steps as described in this task.
The log file is located at
/var/log/appliance/platform/xcat/configure_fips.log.tracelog.
Usage:
/opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --h
[root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --h
usage: configure_fips.py [-h] [--enable] [--disable]
This script will manage fips settings
optional arguments:
-h, --help show this help message and exit
--enable
--disable
[root@e1n1 ~]#
- --enable
- This option enables FIPS on every node. You can verify the result by running the sysctl
crypto.fips_enabled command on a node.
Example:
[root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --enable
[root@e1n1 ~]# // reboot all the nodes
[root@e1n1 ~]# sysctl crypto.fips_enabled
crypto.fips_enabled = 1
[root@e1n1 ~]#
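The example above checks a single node. Because configure_fips.py changes every node, you may want to confirm the setting everywhere after the reboot. The following is a minimal sketch only: on the appliance, the node list would come from the display_nodes.py helper used later in this procedure, and fips_state would run ssh "$1" sysctl -n crypto.fips_enabled; here a fixed, hypothetical node list and a stub are used so the loop can be read offline.

```shell
# Sketch: report crypto.fips_enabled for every node after the reboot.
# On the appliance, NODES would come from:
#   /opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --all
# and fips_state would run: ssh "$1" sysctl -n crypto.fips_enabled
NODES="e1n1 e1n2 e1n3 e1n4"      # hypothetical node names
fips_state() { echo 1; }         # stub; replace with the ssh call above
for n in $NODES; do
  echo "$n: $(fips_state "$n")"  # expect 1 on every node when FIPS is enabled
done
```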
- --disable
- This option disables FIPS on every node. You can verify the result by running the sysctl
crypto.fips_enabled command on a node.
Example:
[root@e1n1 ~]# /opt/ibm/appliance/platform/xcat/scripts/xcat/configure_fips.py --disable
[root@e1n1 ~]# // reboot all the nodes
[root@e1n1 ~]# sysctl crypto.fips_enabled
crypto.fips_enabled = 0
[root@e1n1 ~]#
To enable FIPS, run the following commands from any of the control nodes. Then you must reboot
ALL the nodes in the system as described.
Procedure
- Log in to e1n1 as root.
- Run the following commands:
- If you are on version 1.0.7.4 or 1.0.7.5:
- Generate FIPS-compliant authentication keys for GPFS:
- Verify that the GPFS service is running and the GPFS nodes are up:
- To generate a new key, from a node in the cluster that is running version 4.1 or later, issue:
- To commit the new key that was generated in the previous step, issue:
- Set the release to LATEST:
mmchconfig release=LATEST
- Set the nistCompliance value:
mmchconfig nistCompliance=SP800-131A
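The key-generation sub-steps above do not list their commands. On GPFS clusters at release 4.1 or later, these steps are typically performed with the mmgetstate, mmauth, and mmchconfig commands; verify against your GPFS documentation before running them. In the sketch below, the run() wrapper only echoes each command so the sequence can be reviewed offline; drop the wrapper (or make it execute "$@") to run the commands on the cluster.

```shell
# Sketch, assuming standard GPFS administration commands.
run() { echo "+ $*"; }                     # echoes only; replace to execute
run mmgetstate -a                          # verify GPFS is active on all nodes
run mmauth genkey new                      # generate a new key
run mmauth genkey commit                   # commit the key from the previous step
run mmchconfig release=LATEST
run mmchconfig nistCompliance=SP800-131A
```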
Restart the nodes as in the following steps:
- Run the ap state -d command on e1n1 to verify that the system is active and ready:
ap state -d
System state is 'Ready'
Application state is 'Ready'
Platform management state is 'Active'
- Stop the system and services. Run the commands on e1n1:
- Shut down the GPFS file system from the first control node, e1n1:
systemctl stop nfs
mmumount all -a
mmshutdown -a
- Verify that the GPFS file systems are unmounted by running:
mmlsmount all -L
[root@e1n1 ~]# mmlsmount all -L
mmcommon: mmremote command cannot be executed. Either none of the
nodes in the cluster are reachable, or GPFS is down on all of the
nodes. mmlsmount: Command failed. Examine previous error messages to
determine cause.
This error output is expected after mmshutdown: it confirms that GPFS is down on all of the nodes.
- Shut down Docker:
for node in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --all); do ssh $node "service docker stop";done
- Reboot all nodes, starting from the last node. Example for two enclosures:
for ip in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --all | tr ' ' '\n' | tac); do echo $ip; ssh $ip 'shutdown -r'; done
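The reversed order in the loop above comes from the tr/tac pipeline: display_nodes.py --all prints the nodes space-separated, tr puts one node per line, and tac reverses the lines so the last node is rebooted first. A small illustration of the pipeline with a fixed, hypothetical node list:

```shell
# Illustration of the reversal pipeline with a fixed node list.
nodes="e1n1 e1n2 e1n3 e1n4"
for ip in $(echo "$nodes" | tr ' ' '\n' | tac); do
  echo "$ip"   # prints e1n4, e1n3, e1n2, e1n1, one per line
done
```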
- Run the apstart command and wait for the system to come online.
- Verify the state of the system by running the command:
ap state -d
[root@node0101 ~]# ap state -d
System state is 'Ready'
Application state is 'Ready'
Platform management state is 'Active'
- Run the ap node and ap sw commands to verify the health of the system.