ap commands
The ap command and its subcommands display information about the system and send management requests.
Syntax
ap [-h] [-v] [--host address] [--port number] [--user user] [--password password]
[--from-file file_path]
{apps,config,df,ds,elog,events,fs,hw,info,issues,maintenance,node,sd,state,sw,version} ...
Optional parameters
If you do not specify any parameters or subcommands, the command displays the same output as ap state, that is, the general state of the entire system.
For the location parameters in subcommands, use the same format that is shown in the output of the ap hw command, for example, enclosure1.node2.
- -h|--help
- Displays help for the command.
- -v|--verbose
- Displays additional information from the logs.
You can use the following parameters to run the ap command remotely:
- --host address
- Specifies the address of the system. The default value is localhost. When you specify an address other than the default, you must also provide a user and password.
- --port number
- Specifies the port number to use. The default value is 5001.
- --user user
- Specifies the user name to access the host.
- --password password
- Specifies the password to access the host.
- --from-file file_path
- Loads the host name, port, user, and password from the specified file. Values that are provided as options override the values from the file.
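For example, the following hypothetical invocation queries the general state of a remote system (the address and credentials are placeholders):
ap state --host 9.0.128.25 --user apuser --password <password>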
Subcommands
- ap apps [-h] [-v] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>] [-d] [-f] [{enable,disable,restart} <application>]
- Lists and manages the state of monitored applications. Without any parameters, the command lists all monitored applications and their state, as in the following example:
[root@e1n1 ~]# ap apps
+----------+------------------+
| Name     | Management State |
+----------+------------------+
| CallHome | ENABLED          |
| CYCLOPS  | ENABLED          |
| ICP4D    | DISABLED         |
| INFLUXDB | ENABLED          |
| VDB      | ENABLED          |
+----------+------------------+
Generated: 2022-03-14 11:03:44
Parameters specific to the command are as follows:
- -d|--detail
- Displays detailed information about the state of all monitored applications.
- -f|--force
- Does not ask before taking action.
- enable application
- Enables the specified application.
- disable application
- Disables the specified application.
- restart application
- Restarts the specified application.
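For example, to disable and later re-enable the ICP4D application from the listing above without a confirmation prompt (command output is not shown here):
[root@e1n1 ~]# ap apps -f disable ICP4D
[root@e1n1 ~]# ap apps -f enable ICP4D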
- ap config [--host <address>] [-h] (--set alerts_rules --type (action|to|add_to) [--scope <alert_type>|<alert_reason_code>] --value (<action_int>|<email_list>) | --del alerts_rules --type (action|to|add_to) [--scope <alert_type>|<alert_reason_code>] | --test alerts_rules --reason_code <reason_code> | --list alerts_rules | --set smtp --mail_server_name <server_name> --mail_server_port <port> --sender_name <sender_name> --sender_address <sender_address> | --list smtp)
- Enables you to configure alert rules or SMTP. For more information, see ap config command.
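For example, a minimal SMTP configuration that follows the syntax above (the server, port, and sender values are placeholders):
[root@e1n1 ~]# ap config --set smtp --mail_server_name smtp.example.com --mail_server_port 25 --sender_name appliance --sender_address appliance@example.com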
- ap df [-h] [-v] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>]
- Shows storage utilization.
[root@e1n1 ~]# ap df
Shared file systems utilization
+-----------+----------+-----------+-----------+--------+
| HA Domain | Name     | Size [GB] | Used [GB] | % Used |
+-----------+----------+-----------+-----------+--------+
| hadomain1 | ips      | 2147.48   | 17.70     | 0.82   |
| hadomain1 | platform | 1073.74   | 137.62    | 12.82  |
+-----------+----------+-----------+-----------+--------+
Node local files systems utilization
+------------------+------+-----------------+-----------+-----------+--------+
| Node             | Name | Mountpoint      | Size [GB] | Used [GB] | % Used |
+------------------+------+-----------------+-----------+-----------+--------+
| enclosure1.node1 | sda1 | /boot           | 1.02      | 0.13      | 12.50  |
| enclosure1.node1 | sda2 | /var/lib/docker | 104.81    | 7.22      | 6.88   |
| enclosure1.node1 | sda5 | /var            | 40.53     | 9.55      | 23.57  |
| enclosure1.node1 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure1.node1 | sda7 | /tmp            | 30.83     | 12.07     | 39.15  |
| enclosure1.node1 | sda8 | /var/log/audit  | 5.03      | 0.78      | 15.56  |
| enclosure1.node1 | sda9 | /               | 208.14    | 38.83     | 18.66  |
| enclosure1.node2 | sda1 | /boot           | 1.02      | 0.13      | 12.50  |
| enclosure1.node2 | sda2 | /var/lib/docker | 104.81    | 4.53      | 4.32   |
| enclosure1.node2 | sda5 | /var            | 40.53     | 8.32      | 20.52  |
| enclosure1.node2 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure1.node2 | sda7 | /tmp            | 30.83     | 0.92      | 2.99   |
| enclosure1.node2 | sda8 | /var/log/audit  | 5.03      | 0.70      | 13.86  |
| enclosure1.node2 | sda9 | /               | 208.14    | 37.68     | 18.10  |
| enclosure1.node3 | sda1 | /boot           | 1.02      | 0.13      | 12.50  |
| enclosure1.node3 | sda2 | /var/lib/docker | 104.81    | 4.53      | 4.32   |
| enclosure1.node3 | sda5 | /var            | 40.53     | 8.32      | 20.52  |
| enclosure1.node3 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure1.node3 | sda7 | /tmp            | 30.83     | 0.92      | 2.99   |
| enclosure1.node3 | sda8 | /var/log/audit  | 5.03      | 0.70      | 13.93  |
| enclosure1.node3 | sda9 | /               | 208.14    | 38.81     | 18.65  |
| enclosure1.node4 | sda1 | /boot           | 1.02      | 0.13      | 12.40  |
| enclosure1.node4 | sda5 | /var            | 40.53     | 7.83      | 19.32  |
| enclosure1.node4 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure1.node4 | sda7 | /tmp            | 30.83     | 0.87      | 2.84   |
| enclosure1.node4 | sda8 | /var/log/audit  | 5.03      | 0.42      | 8.38   |
| enclosure1.node4 | sda9 | /               | 208.14    | 14.60     | 7.01   |
| enclosure2.node1 | sda1 | /boot           | 1.02      | 0.13      | 12.40  |
| enclosure2.node1 | sda5 | /var            | 40.53     | 7.83      | 19.33  |
| enclosure2.node1 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure2.node1 | sda7 | /tmp            | 30.83     | 0.87      | 2.84   |
| enclosure2.node1 | sda8 | /var/log/audit  | 5.03      | 0.65      | 12.91  |
| enclosure2.node1 | sda9 | /               | 208.14    | 14.60     | 7.01   |
| enclosure2.node2 | sda1 | /boot           | 1.02      | 0.13      | 12.50  |
| enclosure2.node2 | sda5 | /var            | 40.53     | 7.79      | 19.23  |
| enclosure2.node2 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure2.node2 | sda7 | /tmp            | 30.83     | 0.87      | 2.84   |
| enclosure2.node2 | sda8 | /var/log/audit  | 5.03      | 0.42      | 8.42   |
| enclosure2.node2 | sda9 | /               | 208.14    | 14.60     | 7.01   |
| enclosure2.node3 | sda1 | /boot           | 1.02      | 0.13      | 12.40  |
| enclosure2.node3 | sda5 | /var            | 40.53     | 7.46      | 18.40  |
| enclosure2.node3 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure2.node3 | sda7 | /tmp            | 30.83     | 0.87      | 2.84   |
| enclosure2.node3 | sda8 | /var/log/audit  | 5.03      | 0.33      | 6.49   |
| enclosure2.node3 | sda9 | /               | 208.14    | 14.60     | 7.01   |
| enclosure2.node4 | sda1 | /boot           | 1.02      | 0.13      | 12.50  |
| enclosure2.node4 | sda5 | /var            | 40.53     | 7.45      | 18.39  |
| enclosure2.node4 | sda6 | /home           | 30.83     | 0.30      | 0.97   |
| enclosure2.node4 | sda7 | /tmp            | 30.83     | 0.87      | 2.84   |
| enclosure2.node4 | sda8 | /var/log/audit  | 5.03      | 0.36      | 7.14   |
| enclosure2.node4 | sda9 | /               | 208.14    | 14.60     | 7.01   |
+------------------+------+-----------------+-----------+-----------+--------+
Generated: 2022-03-14 11:09:02
- ap elog close <node> (<elog_event_id> | all)
- Closes elog-related events.
- -f|--force
- Does not ask before performing the action.
- <node>
- Positional argument. The node for which the event is closed.
- <elog_event_id>
- Positional argument. The ID of the event to close. Can be set to all to close all of the events.
- close
- Positional argument. Closes an FSP event with ID ZZZ that was generated on node Y in HA domain X by using the ap elog close hadomainX.nodeY ZZZ command. For instance:
ap elog close hadomain1.node2 0x504BDCB8
To close all the FSP events for a specified node, run:
ap elog close hadomain1.node3 all
- ap events [-h] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>] [-d] [-ni] [-rc <reason_code> [<reason_code> ...]] [-tp <type> [<type> ...]] [-tg <target>] [-tsub <target>] [--from <time> --to <time>] [<event_id>]
- Displays information about all events. The output is the same as for ap issues -e.
- --from <time> --to <time>
- Specifies the time frame for which to display events. The values can be provided in three forms:
--from YYYY-MM-DD-hh:mm --to YYYY-MM-DD-hh:mm, where YYYY-MM-DD-hh:mm is a date and time.
--from YYYY-MM-DD --to YYYY-MM-DD, where YYYY-MM-DD is a date. In this case, the start time is assumed to be 00:00:00 and the end time 23:59:59.
--from -N --to -N, where N is a number of days before the current date. For example, --from -32 sets the start date to 32 days before the current date. The start time is assumed to be 00:00:00 and the end time 23:59:59.
- -ni|--no_information
- Displays events with severity other than INFORMATION.
- -tp <type> [<type> ...]|--types <type> [<type> ...]
- Displays only events for the given type(s).
- -rc <reason_code> [<reason_code> ...]|--reason_codes <reason_code> [<reason_code> ...]
- Displays only events for the given reason code(s).
- -tg <target>|--target <target>
- Displays only events of the given target.
- -tsub <target>|--target_subcomponents <target>
- Displays all events for the target and its subcomponents.
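For example, to display only non-informational events from a fixed two-week window (the dates are placeholders; output is omitted here):
[root@e1n1 ~]# ap events -ni --from 2022-03-01 --to 2022-03-14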
- ap fs [-h] [-v] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>] [-d]
- Displays file system information. A sample command and output follow.
[root@e1n1 ~]# ap fs
GPFS filesystems
+-----------+------------+------------------------------------+-----------+-----------+--------+--------+
| HA Domain | Filesystem | Disk                               | Size [GB] | Used [GB] | % Used | Status |
+-----------+------------+------------------------------------+-----------+-----------+--------+--------+
| hadomain1 | ips        | nsd.ips_e1n1_ssd_0/e1n1.fbond      | 500.00    | 4.15      | 0.83   | OK     |
| hadomain1 | ips        | nsd.ips_e1n1_ssd_1/e1n1.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n1_ssd_2/e1n1.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n1_ssd_3/e1n1.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n2_ssd_0/e1n2.fbond      | 500.00    | 4.15      | 0.83   | OK     |
| hadomain1 | ips        | nsd.ips_e1n2_ssd_1/e1n2.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n2_ssd_2/e1n2.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n2_ssd_3/e1n2.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n3_ssd_0/e1n3.fbond      | 500.00    | 4.15      | 0.83   | OK     |
| hadomain1 | ips        | nsd.ips_e1n3_ssd_1/e1n3.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n3_ssd_2/e1n3.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | ips        | nsd.ips_e1n3_ssd_3/e1n3.fbond      | 500.00    | 4.10      | 0.82   | OK     |
| hadomain1 | platform   | nsd.platform_e1n1_ssd_0/e1n1.fbond | 250.00    | 32.05     | 12.82  | OK     |
| hadomain1 | platform   | nsd.platform_e1n1_ssd_1/e1n1.fbond | 250.00    | 32.05     | 12.82  | OK     |
| hadomain1 | platform   | nsd.platform_e1n1_ssd_2/e1n1.fbond | 250.00    | 32.02     | 12.81  | OK     |
| hadomain1 | platform   | nsd.platform_e1n1_ssd_3/e1n1.fbond | 250.00    | 32.05     | 12.82  | OK     |
| hadomain1 | platform   | nsd.platform_e1n2_ssd_0/e1n2.fbond | 250.00    | 32.08     | 12.83  | OK     |
| hadomain1 | platform   | nsd.platform_e1n2_ssd_1/e1n2.fbond | 250.00    | 32.02     | 12.81  | OK     |
| hadomain1 | platform   | nsd.platform_e1n2_ssd_2/e1n2.fbond | 250.00    | 32.02     | 12.81  | OK     |
| hadomain1 | platform   | nsd.platform_e1n2_ssd_3/e1n2.fbond | 250.00    | 32.05     | 12.82  | OK     |
| hadomain1 | platform   | nsd.platform_e1n3_ssd_0/e1n3.fbond | 250.00    | 32.05     | 12.82  | OK     |
| hadomain1 | platform   | nsd.platform_e1n3_ssd_1/e1n3.fbond | 250.00    | 32.02     | 12.81  | OK     |
| hadomain1 | platform   | nsd.platform_e1n3_ssd_2/e1n3.fbond | 250.00    | 32.05     | 12.82  | OK     |
| hadomain1 | platform   | nsd.platform_e1n3_ssd_3/e1n3.fbond | 250.00    | 32.05     | 12.82  | OK     |
+-----------+------------+------------------------------------+-----------+-----------+--------+--------+
Filesystems mounts
+------------+------------------+-------------------------------------+
| Filesystem | Node             | Mountpoint                          |
+------------+------------------+-------------------------------------+
| ips        | enclosure1.node1 | /opt/ibm/appliance/storage/ips      |
| ips        | enclosure1.node2 | /opt/ibm/appliance/storage/ips      |
| ips        | enclosure1.node3 | /opt/ibm/appliance/storage/ips      |
| platform   | enclosure1.node1 | /opt/ibm/appliance/storage/platform |
| platform   | enclosure1.node2 | /opt/ibm/appliance/storage/platform |
| platform   | enclosure1.node3 | /opt/ibm/appliance/storage/platform |
+------------+------------------+-------------------------------------+
Generated: 2022-03-14 12:16:55
- ap hw [--host <address>] [-h] [SN | location | -t <type>] [-d] [<subcomponent> -s <level>] [--user <user>] [--password <password>] [--from-file <file_path>]
- Displays a list of the hardware devices with their roles, statuses, and other properties. Parameters are as follows:
- SN | location
- A positional argument. Specifies the serial number or location of the device. If you do not specify a serial number or location, information about all devices is displayed.
- -t type|--type type
- Displays information about devices of the specified type. For the types available in your system, see the output table of ap hw -d.
- -d|--detail
- Displays additional information.
- -s|--subcomponents
- Shows the subcomponents tree of the specified device. You can specify the level to which you want to see the tree. For example, -s 1 displays only the direct subcomponents of the device, while -s 2 shows all subcomponents of the device and their subcomponents.
[root@e1n1 ~]# ap hw
+-----------------------------+------------------+-----------+----------+------------+--------------+
| Name                        | Location         | Status    | SN       | Model      | FW           |
+-----------------------------+------------------+-----------+----------+------------+--------------+
| modular thinksystem chassis | enclosure1       | OK        | J100DHNL | 3453-C2E   | 1.21         |
| Compute Node                | enclosure1.node1 | OK        | J100E7AF | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure1.node2 | OK        | J100E7BP | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure1.node3 | OK        | J100E7CG | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure1.node4 | OK        | J100E79A | 7X21CTO1WW | TEE172G-3.01 |
| modular thinksystem chassis | enclosure2       | OK        | J100EXXC | 3453-C2E   | 1.21         |
| Compute Node                | enclosure2.node1 | OK        | J100E7BE | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure2.node2 | OK        | J100E7CN | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure2.node3 | OK        | J100E79Z | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure2.node4 | ATTENTION | J100E7D2 | 7X21CTO1WW | TEE172G-3.01 |
| modular thinksystem chassis | enclosure3       | OK        | J100DHNE | 3453-C2E   | 1.21         |
| Compute Node                | enclosure3.node1 | OK        | J100E099 | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure3.node2 | OK        | J100BN3H | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure3.node3 | OK        | J100BN3R | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure3.node4 | OK        | J100BN41 | 7X21CTO1WW | TEE172G-3.01 |
| modular thinksystem chassis | enclosure4       | OK        | J100DHN3 | 3453-C2E   | 1.21         |
| Compute Node                | enclosure4.node1 | OK        | J100DT8X | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure4.node2 | OK        | J100DT7Z | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure4.node3 | OK        | J100CW1C | 7X21CTO1WW | TEE172G-3.01 |
| Compute Node                | enclosure4.node4 | OK        | J100E09C | 7X21CTO1WW | TEE172G-3.01 |
| Fabric Switch               | fabsw1a          | OK        | 10MM0T6  | 3454-B8C   | 3.7.11       |
| Fabric Switch               | fabsw1b          | OK        | 10NB02C  | 3454-B8C   | 3.7.11       |
| Management Switch           | mgtsw1a          | OK        | 10LAEZW  | 3454-A3C   | 3.7.3        |
| Management Switch           | mgtsw2a          | OK        | 10LAFKE  | 3454-A3C   | 3.7.3        |
+-----------------------------+------------------+-----------+----------+------------+--------------+
Generated: 2022-03-14 12:18:41
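For example, a hypothetical invocation that displays a node from the table above together with its direct subcomponents (output is omitted here):
[root@e1n1 ~]# ap hw enclosure1.node2 -s 1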
- ap info [-h] [-v] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>]
- Displays general information about the platform. Sample output follows:
[root@e1n1 ~]# ap info
+-------------------------------------+
| General information                 |
+-----------------+-------------------+
| MTM             | 3453-C2E          |
| Serial          | J100DHNL          |
+-----------------+-------------------+
+-----------------------------------+
| System information                |
+-----------------------+-----------+
| Description           |           |
| Location              |           |
| Country Code          |           |
| Model                 |           |
+-----------------------+-----------+
+--------------------------------+
| Customer information           |
+--------------------+-----------+
| Company            |           |
| Address 1          |           |
| Address 2          |           |
| Address 3          |           |
| ICN                |           |
+--------------------+-----------+
+-------------------------------------------------------------+
| MTMs information                                            |
+---------------------+-------------------+-------------------+
| Location            | MTM               | Serial            |
+---------------------+-------------------+-------------------+
| fabsw1a             | 3454-B8C          | 10MM0T6           |
| fabsw1b             | 3454-B8C          | 10NB02C           |
| enclosure4          | 3453-C2E          | J100DHN3          |
| enclosure2          | 3453-C2E          | J100EXXC          |
| enclosure3          | 3453-C2E          | J100DHNE          |
| enclosure1          | 3453-C2E          | J100DHNL          |
| mgtsw1a             | 3454-A3C          | 10LAEZW           |
| mgtsw2a             | 3454-A3C          | 10LAFKE           |
+---------------------+-------------------+-------------------+
- ap issues [-h] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>] [-d] [-i] [-e] [-c] [-hw] [-sw] [-gpfs] [--show_registry] [--open_service_request <alert_id> [<alert_id> ...]] [-rc <reason_code> [<reason_code> ...]] [-tp <type> [<type> ...]] [-tg <target>] [-tsub <target>] [-ni] [--from <time_specifier>] [--to <time_specifier>] | <issue_id> | [-f] --close <issue_id> | --service_requests [<srid>] | --generate_test_alert | --close_test_alert | --generate_test_sw_alert | --close_test_sw_alert
- Displays information about current system issues. If you do not specify any parameters, a list of all current issues (with their types, severities, and descriptions) is displayed. Parameters are as follows:
- -d|--detail
- Displays detailed information about all issues.
- -i|--issues
- Displays information about all open issues.
- -c|--closed
- Displays information about all closed issues.
- -e|--events
- Displays information about all events.
- -hw|--hw
- Displays information about all hardware components that have issues.
- -sw|--sw
- Displays information about all software components that have issues.
- -gpfs
- Displays information about all GPFS components that have issues.
- --from <time> --to <time>
- Specifies the time frame for which to display events. The values can be provided in three forms:
--from YYYY-MM-DD-hh:mm --to YYYY-MM-DD-hh:mm, where YYYY-MM-DD-hh:mm is a date and time.
--from YYYY-MM-DD --to YYYY-MM-DD, where YYYY-MM-DD is a date. In this case, the start time is assumed to be 00:00:00 and the end time 23:59:59.
--from -N --to -N, where N is a number of days before the current date. For example, --from -32 sets the start date to 32 days before the current date. The start time is assumed to be 00:00:00 and the end time 23:59:59.
- -tp <type> [<type> ...]|--types <type> [<type> ...]
- Displays only issues/events for the given type(s).
- -rc <reason_code> [<reason_code> ...]|--reason_codes <reason_code> [<reason_code> ...]
- Displays only issues/events for the given reason code(s).
- -tg <target>|--target <target>
- Displays only issues/events of the given target.
- -tsub <target>|--target_subcomponents <target>
- Displays all issues/events for the target and its subcomponents.
- -ni|--no_information
- Displays events with severity other than INFORMATION.
- --close issue_id
- Closes the alert with the specified ID.
- -f|--force
- Does not ask before closing the alert.
- issue_id
- Positional argument. Displays detailed information about a particular issue.
- --generate_test_alert
- Generates a test issue.
- --close_test_alert
- Closes a test issue.
- --generate_test_sw_alert
- Generates a test software issue.
- --close_test_sw_alert
- Closes a test software issue.
- --show_registry
- Shows alerts registry.
- --open_service_request <alert_id> [<alert_id> ...]
- Opens PMRs for a specified set of issues and events.
- -sr [<srid>]|--service_requests [<srid>]
- Shows service requests opened by Platform Manager with their statuses and associated alert IDs.
ap issues
Open alerts (issues)
+------+---------------------+----------------------------+-----------------------------------------------+----------------------------+----------+
| ID   | Date (CEST)         | Type                       | Reason Code and Title                         | Target                     | Severity |
+------+---------------------+----------------------------+-----------------------------------------------+----------------------------+----------+
| 1002 | 2019-05-17 13:25:31 | HW_SERVICE_REQUESTED       | 108: Subcomponent is unreachable              | hw://fabswa                | MAJOR    |
| 1003 | 2019-05-17 13:25:31 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure2.node1.bmc1 | WARNING  |
| 1005 | 2019-05-17 13:25:31 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure2.node4.bmc1 | WARNING  |
| 1006 | 2019-05-17 13:25:31 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure1.node3.bmc1 | WARNING  |
| 1007 | 2019-05-17 13:25:31 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure1.node2.bmc1 | WARNING  |
| 1008 | 2019-05-17 13:25:31 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure1.node1.bmc1 | WARNING  |
| 1009 | 2019-05-17 13:25:31 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure1.node4.bmc1 | WARNING  |
| 1010 | 2019-05-17 13:25:31 | HW_SERVICE_REQUESTED       | 108: Subcomponent is unreachable              | hw://mgtswa                | MAJOR    |
| 1026 | 2019-05-18 15:03:39 | HW_NEEDS_ATTENTION         | 201: Unhealthy component detected             | hw://enclosure2.node2.bmc1 | WARNING  |
| 1036 | 2019-05-20 15:01:49 | APPLIANCE_APPLICATION_DOWN | 704: Appliance application went down (db2 HA) | appliance://               | CRITICAL |
+------+---------------------+----------------------------+-----------------------------------------------+----------------------------+----------+
ap issues 1002
General Information
ID                   : 1002
Date                 : 2019-05-17 13:25:31.324696
Close Date           : None
Target               : hw://fabswa
Target Type          : fabsw
Severity             : MAJOR
Title                : Subcomponent is unreachable
Stateful             : 1
Referenced Alert ID  : None
Classification Group : HW
Type                 : HW_SERVICE_REQUESTED
Reason Code          : 108
Processing Status
State                 : DELIVERED
Log Collection Status : COLLECTED
SMTP Status           : FAILED
SNMP Status           : NOT_APPLICABLE
Call Home Status      : NOT_APPLICABLE
Service Request
SRID      : None
SR Status : None
Collected Logs
Log File Path     : /var/log/appliance/platform/management/alerts/alert_1002.tar.gz
Log File Node     : enclosure1.node1
Log File Checksum : 5f878eeb40de4e3f869aaa6988c95afa
Additional Data:
Message    : Status on component fabsw at location hw://fabswa is UNREACHABLE
creator_id : fabsw@hw://fabswa
location   : hw://fabswa
serial     :
status     : UNREACHABLE
type       : fabsw
type_desc  : fabsw
Generated: 2019-05-20 15:25:25
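For example, to close issue 1002 from the listing above without a confirmation prompt (output is omitted here):
[root@e1n1 ~]# ap issues -f --close 1002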
- ap maintenance [-h] [-v] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>] [-f] [-r <reason>] [enable -r <reason>] [disable]
- Manages the maintenance mode of Platform Manager. When in maintenance mode, the platform is monitored, but all management actions are suspended (no alerts or PMRs are opened, and no invasive actions are performed by Platform Manager). Parameters specific to the command are as follows:
- -f|--force
- Does not ask for confirmation before taking action.
- enable -r <reason>
- Enables maintenance mode. Requires providing a reason by using the -r argument.
- disable
- Disables maintenance mode.
- -r|--reason <reason>
- Specifies the reason for the maintenance mode.
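For example, to enter maintenance mode for a planned repair and leave it again afterward (the reason text is a placeholder):
[root@e1n1 ~]# ap maintenance enable -r "planned drive replacement"
[root@e1n1 ~]# ap maintenance disable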
- ap node [-h] [-v] [--host <address>] [-d] [-f] [{enable,disable,init} <node>] [set_personality <node> <personality>] [rebalance] [--user <user>] [--password <password>] [--from-file <file_path>] [{shutdown,bootstrap} <node_location>]
- Displays detailed information about nodes and enables you to change node state or rebalance nodes. Parameters specific to the command are as follows:
Note: NPS nodes are not monitored by Platform Manager, and they are listed in a separate IPS nodes table, as shown in the example below.
- -d|--detail
- Displays detailed information about nodes.
- -f|--force
- Does not ask before taking action on a node.
- enable node
- Enables the specified node. The node format is hadomainX.nodeY, where X is the HA domain number and Y is the node number in that HA domain.
- disable node
- Disables the specified node. The node format is hadomainX.nodeY, where X is the HA domain number and Y is the node number in that HA domain.
- init node
- Initializes Platform Manager on a node that was removed physically. This operation is performed by IBM Support.
- rebalance
- Rebalances MLNs after a node was enabled.
- set_personality node personality
- Defines the personality of the node. The personality value is provided as <roleX> or <roleX>[<labelX>]. The latter should be used only when the label is not blank. For more information on personalities, see Setting node personalities.
- shutdown node_location
- Powers off a disabled node.
- bootstrap node_location
- Powers on a node.
A sample ap node -d command and output follow:
[root@e1n1 ~]# ap node -d
+------------------+---------+-------------+-----------+-----------+--------+---------------+
| Node             | State   | Personality | Monitored | Is Master | Is HUB | Is VDB Master |
+------------------+---------+-------------+-----------+-----------+--------+---------------+
| enclosure1.node1 | ENABLED | CONTROL     | YES       | YES       | YES    | YES           |
| enclosure1.node2 | ENABLED | CONTROL     | YES       | NO        | NO     | NO            |
| enclosure1.node3 | ENABLED | CONTROL     | YES       | NO        | NO     | NO            |
| enclosure1.node4 | ENABLED | WORKER      | YES       | NO        | NO     | NO            |
| enclosure2.node1 | ENABLED | WORKER      | YES       | NO        | NO     | NO            |
| enclosure2.node2 | ENABLED | WORKER      | YES       | NO        | NO     | NO            |
| enclosure2.node3 | ENABLED | UNSET       | YES       | NO        | NO     | NO            |
| enclosure2.node4 | ENABLED | UNSET       | YES       | NO        | NO     | NO            |
+------------------+---------+-------------+-----------+-----------+--------+---------------+
IPS nodes
+------------------+-----------+---------------+-----------+------------+----------+
| Node             | State     | Personality   | Monitored | IPS Status | IPS Role |
+------------------+-----------+---------------+-----------+------------+----------+
| enclosure3.node1 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure3.node2 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure3.node3 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure3.node4 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure4.node1 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure4.node2 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure4.node3 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
| enclosure4.node4 | UNMANAGED | VDB[IPS1NODE] | NO        | OK         | Active   |
+------------------+-----------+---------------+-----------+------------+----------+
Generated: 2022-03-14 12:40:00
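For example, a hypothetical sequence that disables a node, re-enables it, and then rebalances MLNs without confirmation prompts (the node location is a placeholder; output is omitted here):
[root@e1n1 ~]# ap node -f disable hadomain1.node4
[root@e1n1 ~]# ap node -f enable hadomain1.node4
[root@e1n1 ~]# ap node rebalance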
- ap sd [-h] [-v] [--host <address>] [-d] [-f] [{enable,disable} <storage_drive>] [--user <user>] [--password <password>] [--from-file <file_path>]
- Each node has four NVMe storage drives. These drives must be managed manually. With the ap sd command, you can disable and enable storage drives, for example, when a drive is damaged and needs to be replaced.
Without any parameters, the command lists all system storage drives and their state, as in the following example:
[root@e1n1 ~]# ap sd
+-------------------------+-----------+----------+
| Drive                   | State     | Assigned |
+-------------------------+-----------+----------+
| enclosure1.node1.drive1 | ENABLED   | GPFS     |
| enclosure1.node1.drive2 | ENABLED   | GPFS     |
| enclosure1.node1.drive3 | ENABLED   | GPFS     |
| enclosure1.node1.drive4 | ENABLED   | GPFS     |
| enclosure1.node2.drive1 | ENABLED   | GPFS     |
| enclosure1.node2.drive2 | ENABLED   | GPFS     |
| enclosure1.node2.drive3 | ENABLED   | GPFS     |
| enclosure1.node2.drive4 | ENABLED   | GPFS     |
| enclosure1.node3.drive1 | ENABLED   | GPFS     |
| enclosure1.node3.drive2 | ENABLED   | GPFS     |
| enclosure1.node3.drive3 | ENABLED   | GPFS     |
| enclosure1.node3.drive4 | ENABLED   | GPFS     |
| enclosure1.node4.drive1 | ENABLED   |          |
| enclosure1.node4.drive2 | ENABLED   |          |
| enclosure1.node4.drive3 | ENABLED   |          |
| enclosure1.node4.drive4 | ENABLED   |          |
| enclosure2.node1.drive1 | ENABLED   |          |
| enclosure2.node1.drive2 | ENABLED   |          |
| enclosure2.node1.drive3 | ENABLED   |          |
| enclosure2.node1.drive4 | ENABLED   |          |
| enclosure2.node2.drive1 | ENABLED   |          |
| enclosure2.node2.drive2 | ENABLED   |          |
| enclosure2.node2.drive3 | ENABLED   |          |
| enclosure2.node2.drive4 | ENABLED   |          |
| enclosure2.node3.drive1 | ENABLED   |          |
| enclosure2.node3.drive2 | ENABLED   |          |
| enclosure2.node3.drive3 | ENABLED   |          |
| enclosure2.node3.drive4 | ENABLED   |          |
| enclosure2.node4.drive1 | ENABLED   |          |
| enclosure2.node4.drive2 | ENABLED   |          |
| enclosure2.node4.drive3 | ENABLED   |          |
| enclosure2.node4.drive4 | ENABLED   |          |
| enclosure3.node1.drive1 | UNMANAGED | IPS      |
| enclosure3.node1.drive2 | UNMANAGED | IPS      |
| enclosure3.node1.drive3 | UNMANAGED | IPS      |
| enclosure3.node1.drive4 | UNMANAGED | IPS      |
| enclosure3.node2.drive1 | UNMANAGED | IPS      |
| enclosure3.node2.drive2 | UNMANAGED | IPS      |
| enclosure3.node2.drive3 | UNMANAGED | IPS      |
| enclosure3.node2.drive4 | UNMANAGED | IPS      |
| enclosure3.node3.drive1 | UNMANAGED | IPS      |
| enclosure3.node3.drive2 | UNMANAGED | IPS      |
| enclosure3.node3.drive3 | UNMANAGED | IPS      |
| enclosure3.node3.drive4 | UNMANAGED | IPS      |
| enclosure3.node4.drive1 | UNMANAGED | IPS      |
| enclosure3.node4.drive2 | UNMANAGED | IPS      |
| enclosure3.node4.drive3 | UNMANAGED | IPS      |
| enclosure3.node4.drive4 | UNMANAGED | IPS      |
| enclosure4.node1.drive1 | UNMANAGED | IPS      |
| enclosure4.node1.drive2 | UNMANAGED | IPS      |
| enclosure4.node1.drive3 | UNMANAGED | IPS      |
| enclosure4.node1.drive4 | UNMANAGED | IPS      |
| enclosure4.node2.drive1 | UNMANAGED | IPS      |
| enclosure4.node2.drive2 | UNMANAGED | IPS      |
| enclosure4.node2.drive3 | UNMANAGED | IPS      |
| enclosure4.node2.drive4 | UNMANAGED | IPS      |
| enclosure4.node3.drive1 | UNMANAGED | IPS      |
| enclosure4.node3.drive2 | UNMANAGED | IPS      |
| enclosure4.node3.drive3 | UNMANAGED | IPS      |
| enclosure4.node3.drive4 | UNMANAGED | IPS      |
| enclosure4.node4.drive1 | UNMANAGED | IPS      |
| enclosure4.node4.drive2 | UNMANAGED | IPS      |
| enclosure4.node4.drive3 | UNMANAGED | IPS      |
| enclosure4.node4.drive4 | UNMANAGED | IPS      |
+-------------------------+-----------+----------+
Generated: 2022-03-14 12:44:09
Parameters specific to the command are as follows:
- -f|--force
- Does not ask before performing action.
- enable storage_drive
- Enables the specified storage drive.
- disable storage_drive
- Disables the specified storage drive.
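For example, a hypothetical drive replacement: disable the damaged drive, replace it, and then enable the new drive (the drive location is a placeholder; output is omitted here):
[root@e1n1 ~]# ap sd disable enclosure1.node2.drive3
[root@e1n1 ~]# ap sd enable enclosure1.node2.drive3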
- ap state [-h] [-v] [--host <address>] [--user <user>] [--password <password>] [--from-file <file_path>] [-d] [-e]
- Displays the state of the system. Parameters are as follows:
- -d|--details
- Displays the state of the system, the application, and the platform manager separately.
- -e|--explain
- Explains system state.
A sample command and output follow:
[root@e1n1 ~]# ap state -d
System state is 'Ready'
Application state is 'Ready'
Platform management state is 'Active'
- ap sw [--host <address>] [-h] [-v] [location | -t type] [-d] [--user <user>] [--password <password>] [--from-file <file_path>]
- Displays a list of the software components with their statuses. Parameters are as follows:
- location
- Specifies software inventory location.
- -t type
- Specifies the type of software items to list.
- -d|--details
- Displays all software inventory items.
For a list of components, see Software components.
A sample command and output follow:
[root@e1n1 ~]# ap sw
+-----------------------+--------------------------------+--------+----------+
| Name                  | Location                       | Status | Version  |
+-----------------------+--------------------------------+--------+----------+
| Appliance Application | nps                            | OK     | 11.2.2.2 |
| Application Node      | nps.spa1.spu1/enclosure3.node1 | OK     |          |
| Application Node      | nps.spa1.spu2/enclosure3.node2 | OK     |          |
| Application Node      | nps.spa1.spu3/enclosure3.node3 | OK     |          |
| Application Node      | nps.spa1.spu4/enclosure3.node4 | OK     |          |
| Application Node      | nps.spa1.spu5/enclosure4.node1 | OK     |          |
| Application Node      | nps.spa1.spu6/enclosure4.node2 | OK     |          |
| Application Node      | nps.spa1.spu7/enclosure4.node3 | OK     |          |
| Application Node      | nps.spa1.spu8/enclosure4.node4 | OK     |          |
+-----------------------+--------------------------------+--------+----------+
- ap version [-h] [-v] [--host <address>] [--port <number>] [--user <user>] [--password <password>] [--from-file <file_path>] [-b] [-c] [-p] [-d] [-s]
- Displays the version of the system and its components.
- -b|--build
- Displays build information.
- -p|--platform
- Displays the version of platform management.
- -c|--cli
- Displays the version of apcli.
- -d|--details
- Displays the versions of all major components for every node in the system.
- -s|--summary
- Displays the versions of all major components on the system and shows whether all nodes have the same version installed.
[root@e1n1 ~]# ap version
Appliance software version is 1.0.7.8
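For example, to check whether all nodes have the same versions of the major components installed (the summary output format is not shown here):
[root@e1n1 ~]# ap version -s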
- ap locate [-h] [-v] component_location
- Displays the physical location of a selected hardware component, such as a node, a switch, or an enclosure. For more information, see Locating hardware components.
A sample command and output follow:
[root@e1n1 ~]# ap locate enclosure2.node1
Component name: enclosure2.node1
Rack Location
Location Description :
Rack ID              : 2E-R1
Rack UID             : 26
Generated: 2022-03-14 12:52:03