Shutting down the system with Netezza Performance Server
Learn how to power off IBM® Cloud Pak for Data System with Netezza Performance Server running on it.
Procedure
- Log in to the control plane node and verify the state of the system:
  [root@e1n1 ~]# ap state -d
  System state is 'Ready'
  Application state is 'Ready'
  Platform management state is 'Active'
- Run ap node and make a note of the nodes on the system:
  [root@e1n1 ~]# ap node
- Shut down Netezza Performance Server by running the following commands:
  - docker exec -it ipshost1 bash
  - su - nz
  - rm /nz/.gpfstoken
  - nzstop
  - exit (to leave the nz user session)
  - exit (to leave the docker container)
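The interactive steps above can also be sketched as a single non-interactive invocation. This is a sketch only: it assumes the container is named ipshost1 (as in the steps above) and that nzstop is on the nz user's path; by default it only prints the command, since docker is available only on the appliance.

```shell
# Sketch: stop NPS without an interactive shell (assumption: combining
# the steps this way behaves the same as the interactive sequence).
# DRY_RUN=1 (the default) prints the command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
# -f avoids any prompt from rm when run without a tty
NZ_STOP_CMD='docker exec ipshost1 su - nz -c "rm -f /nz/.gpfstoken && nzstop"'
if [ "$DRY_RUN" = "1" ]; then
  echo "$NZ_STOP_CMD"
else
  eval "$NZ_STOP_CMD"
fi
```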
- Stop the system and services by running the following commands from e1n1:
  [root@e1n1 ~]# ap apps disable VDB --force
  [root@e1n1 ~]# ap apps
  Wait and verify that VDB reaches the DISABLED state:
  [root@e1n1 ~]# apstop
  System in 'Stopped' state
  [root@e1n1 ~]# apstop --service
  Successfully deactivated system
  Stopping Magneto service on all nodes
  Note: Ignore the message: Problem occurred: Some service stopping steps failed on nodes fbond.
- Shut down the GPFS file system from the first control node. From e1n1, run:
  [root@e1n1 ~]# systemctl stop nfs
  [root@e1n1 ~]# mmumount all -a
  [root@e1n1 ~]# mmshutdown -a
- Verify that the GPFS file systems are unmounted:
  [root@e1n1 ~]# mmlsmount all -L
  Example output:
  mmcommon: mmremote command cannot be executed. Either none of the nodes in the cluster are reachable, or GPFS is down on all of the nodes.
  mmlsmount: Command failed. Examine previous error messages to determine cause.
- Shut down docker on all the control nodes:
  [root@e1n1 ~]# for node in $(/opt/ibm/appliance/platform/xcat/scripts/xcat/display_nodes.py --control); do ssh $node "service docker stop"; done
  Redirecting to /bin/systemctl stop docker.service
  Redirecting to /bin/systemctl stop docker.service
  Redirecting to /bin/systemctl stop docker.service
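Before moving on, it can be useful to confirm that the docker service is actually inactive on each control node. The sketch below only prints the per-node check commands; the node list is hypothetical (on the appliance it comes from display_nodes.py --control as above).

```shell
# Sketch: print a per-node docker status check (dry run; the ssh
# commands are printed, not executed, since the nodes exist only on
# the appliance). CONTROL_NODES is a hypothetical illustrative list.
CONTROL_NODES="e1n1 e1n2 e1n3 e1n4"
DOCKER_CHECKS=$(for node in $CONTROL_NODES; do
  echo "ssh $node 'systemctl is-active docker'   # expect: inactive"
done)
printf '%s\n' "$DOCKER_CHECKS"
```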
- Shut down nodes from e1n1, in the following order:
- Power off all SPU (NPS) nodes (enclosures 3 and 4 in the example)
- Shut down the ICPD nodes (enclosures 1 and 2 in the example)
- Power off the ICPD nodes
[root@e1n1 ~]# for ip in e{3..4}n{1..4}bmc; do ipmitool -I lanplus -H $ip -U USERID -P PASSW0RD power off; done
  [root@e1n1 ~]# ssh e2n4 'shutdown -h'
  [root@e1n1 ~]# ssh e2n3 'shutdown -h'
  [root@e1n1 ~]# ssh e2n2 'shutdown -h'
  [root@e1n1 ~]# ssh e2n1 'shutdown -h'
  [root@e1n1 ~]# ssh e1n4 'shutdown -h'
  [root@e1n1 ~]# ssh e1n3 'shutdown -h'
  [root@e1n1 ~]# ssh e1n2 'shutdown -h'
  [root@e1n1 ~]# shutdown -h
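The required ordering above (SPU nodes first via the BMCs, then the ICPD nodes in reverse order with e1n1 last) can be sketched as a generator that prints the commands. The enclosure and node names are the ones from the example; substitute the nodes recorded earlier with ap node. Nothing is executed.

```shell
# Sketch: emit the power-off/shutdown commands in the required order
# (dry run; names and credentials are the example values from above).
# 1) SPU (NPS) nodes in enclosures 3 and 4, powered off via the BMC:
SPU_OFF=$(for e in 3 4; do for n in 1 2 3 4; do
  echo "ipmitool -I lanplus -H e${e}n${n}bmc -U USERID -P PASSW0RD power off"
done; done)
# 2) ICPD nodes in reverse order, the node you are on (e1n1) last:
ICPD_OFF=$(for node in e2n4 e2n3 e2n2 e2n1 e1n4 e1n3 e1n2; do
  echo "ssh $node 'shutdown -h'"
done)
printf '%s\n' "$SPU_OFF" "$ICPD_OFF" "shutdown -h  # e1n1 itself, last"
```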
- Verify that all nodes are powered off. Wait 5-10 minutes, then check that the power LEDs are blinking on all the nodes.
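As an alternative to reading the LEDs, chassis power can be queried over the BMCs, which stay reachable on standby power. This sketch reuses the example BMC names and credentials from the power-off step and only prints the commands, since the BMCs are reachable only on site.

```shell
# Sketch: print a power-status query for each SPU BMC (dry run;
# example names/credentials, not executed here).
STATUS_CMDS=$(for e in 3 4; do for n in 1 2 3 4; do
  echo "ipmitool -I lanplus -H e${e}n${n}bmc -U USERID -P PASSW0RD power status"
done; done)
printf '%s\n' "$STATUS_CMDS"
```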
- Physically remove power from the D2 enclosures, the fabric switch, and management switch.