
Useful command line options to check different health aspects of a PureData System for Operational Analytics environment

Troubleshooting


Problem

A PureData System for Operational Analytics environment is a complex appliance to manage. The following command lines can be used by appliance system administrators to check the status of various aspects of their environments. These command lines are designed to gather information about the state of the appliance quickly in a compressed, easy-to-read format.

The commands are available on any version of PDOA at any fix pack level unless restrictions are indicated.

The following commands are run as root on the management node unless otherwise indicated. Some commands may be run on any AIX host in the environment. The output shown will vary depending on the version of the appliance, the fix pack level of the appliance, the size of the appliance, modifications to the appliance, and the health of the appliance. Output that is host specific will vary depending on the host role. The roles of an appliance host are "management", "management standby", "admin", "admin standby", "data", and "data standby". Many of the commands have been provided to customers via problem tickets or are found in other appliance technotes.
Administrators are encouraged to establish baselines for their environments that they can compare to in the future.
Administrators are encouraged to learn the best practices for using "dsh" and its companion command "dshbak". The techniques used will vary depending on the type of output expected from the "dsh" command. These techniques include using commands like sort, cut, and sed to manipulate the output. Do not attempt to use "awk", as it is difficult to correctly pass the required single and double quotation marks through "dsh".
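For example (a minimal sketch; "oslevel -s" and "lsps -s" are used here only as placeholder payloads), output that is a single line per host reads well when piped through "sort", while multi-line output is easier to compare when piped through "dshbak -c":
dsh -n ${ALL} 'oslevel -s' 2>&1 | sort
dsh -n ${ALL} 'lsps -s' 2>&1 | dshbak -c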
The following commands are designed to allow copy and paste from a browser window into a shell session. If typing the commands, note that single quotation marks and double quotation marks are not interchangeable in the commands.
When a system is in distress, it is possible for commands using dsh to hang. In those cases, the '-t <number>' option tells "dsh" to abandon the attempt if a connection is not established within the specified number of seconds.
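For example (a sketch; "uptime" is used only as a placeholder payload), the following abandons any host that does not accept a connection within 10 seconds:
dsh -t 10 -n ${ALL} 'uptime' 2>&1 | sort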
Some commands may be better run serially as they can increase the load or have conflicts if run in parallel. For those commands, the "dsh" '-f <number>' ("fanout") option limits how many hosts run the command concurrently; '-f 1' runs the hosts serially in the order they are provided in the list.
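For example (a sketch; the "errpt | wc -l" payload is only a placeholder), the following runs the command on one host at a time in the order the hosts appear in ${ALL}:
dsh -f 1 -n ${ALL} 'errpt | wc -l' 2>&1 | sort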
Commands run with "dsh" may provide output to "stderr". For those commands, use "2>&1 | dshbak -c", which combines the "stderr" and "stdout" output so it shows up in the "dshbak" stanza for the host. If the "stderr" output is not redirected, it shows up at the top of the output and can be easily missed. One aspect of "stderr" versus "stdout" output is that "stderr" output is not buffered. Therefore, when "stderr" and "stdout" are mixed, the order of the output is not guaranteed to match the actual order in which it was produced.
Commands run with "dsh" may provide no output on some hosts. In an environment with many hosts, this makes it difficult to notice that a host is missing. For commands that can return no output, use a technique that includes a command guaranteed to produce output, such as "echo 1;<cmds>...". Using this technique ensures that all hosts provide output.
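For example (a sketch based on the "lspath" checks below), "grep -v Enabled" produces no output on a healthy host, so the leading "echo 1" guarantees that every host appears in the "dshbak" output:
dsh -n ${ALL} 'echo 1; lspath | grep -v Enabled' 2>&1 | dshbak -c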
 
 
PureData System for Operational Analytics Status Commands and Example Output
Category Description Command Line Output
Hosts
Verify hosts are available on the network.
Run as root from any host.
dping ${ALL} | sort

$ dping ${ALL} | sort
stgkf101: ping (alive)
stgkf102: ping (alive)
stgkf103: ping (alive)
stgkf104: ping (alive)
stgkf105: ping (alive)
stgkf106: ping (alive)
stgkf108: ping (alive)
Hosts
Verify hosts are responding and can provide logins.
Run as root on any host.
dsh -n ${ALL} date 2>&1 | sort
 
$ dsh -n ${ALL} date 2>&1 | sort
stgkf101: Thu Apr  4 17:44:28 BRT 2019
stgkf102: Thu Apr  4 17:44:28 BRT 2019
stgkf103: Thu Apr  4 17:45:12 BRT 2019
stgkf104: Thu Apr  4 17:45:11 BRT 2019
stgkf105: Thu Apr  4 17:45:19 BRT 2019
stgkf106: Thu Apr  4 17:45:12 BRT 2019
stgkf108: Thu Apr  4 17:44:47 BRT 2019

Hosts
Check the "errpt" command to see message counts for different time periods.
More sophisticated queries can be used by looking at the "errpt" man page (see the additional example after the sample output below).
These commands use the "date" command to provide a formatted date value to the "-s" option.
Current Hour:

dsh -t 0 -n ${ALL} 'errpt -s $(date +%m%d%H00%y) | wc -l' 2>&1 | sort
Current Day:

dsh -t 0 -n ${ALL} 'errpt -s $(date +%m%d0000%y) | wc -l' 2>&1 | sort
Current Month:

dsh -t 0 -n ${ALL} 'errpt -s $(date +%m010000%y) | wc -l' 2>&1 | sort
Current Year:

dsh -t 0 -n ${ALL} 'errpt -s $(date +01010000%y) | wc -l' 2>&1 | sort

dsh -t 0 -n ${ALL} 'errpt -s $(date +%m%d%H00%y) | wc -l' 2>&1 | sort
stgkf201:        0
stgkf202:        0
stgkf203:        0
stgkf204:        0
stgkf205:        0
stgkf206:        0
stgkf208:        0


$ dsh -t 0 -n ${ALL} 'errpt -s $(date +%m%d0000%y) | wc -l' 2>&1 | sort
stgkf201:        0
stgkf202:        0
stgkf203:        0
stgkf204:        2
stgkf205:       20
stgkf206:       21
stgkf208:       31


$ dsh -t 0 -n ${ALL} 'errpt -s $(date +%m010000%y) | wc -l' 2>&1 | sort
stgkf201:      878
stgkf202:      716
stgkf203:     1153
stgkf204:     1049
stgkf205:      896
stgkf206:      912
stgkf208:      894


$ dsh -t 0 -n ${ALL} 'errpt -s $(date +01010000%y) | wc -l' 2>&1 | sort
stgkf201:      878
stgkf202:     1164
stgkf203:     1153
stgkf204:     1156
stgkf205:      896
stgkf206:      912
stgkf208:      894
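
As noted above, more targeted "errpt" queries are possible. For example (a sketch using standard "errpt" class and type flags; adjust the filters as needed), the following counts only permanent hardware errors logged so far today:

dsh -t 0 -n ${ALL} 'errpt -d H -T PERM -s $(date +%m%d0000%y) | wc -l' 2>&1 | sort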

Storage
Verify Fibre Channel Path Counts
Fibre Channel path counts vary depending on the version of PDOA, the type of host, and, for data hosts, the number of hosts within the same rack.


dsh -n ${ALL} 'lspath | grep fscsi | while read state disk if rest;do echo "$if:$state" ;done | sort | uniq -c' | dshbak -c

V1.0:
--------


$ dsh -n ${ALL} 'lspath | grep fscsi | while read state disk if rest;do echo "$if:$state" ;done | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf101
-------------------------------------------------------------------------------
  54 fscsi1:Enabled
  54 fscsi3:Enabled
  54 fscsi4:Enabled
  54 fscsi6:Enabled

HOSTS -------------------------------------------------------------------------
stgkf105, stgkf106, stgkf108
-------------------------------------------------------------------------------
  48 fscsi0:Enabled
  16 fscsi0:Missing
  64 fscsi1:Enabled
  64 fscsi2:Enabled
  64 fscsi3:Enabled
  64 fscsi4:Enabled
  64 fscsi5:Enabled
  64 fscsi6:Enabled
  64 fscsi7:Enabled

HOSTS -------------------------------------------------------------------------
stgkf102, stgkf104
-------------------------------------------------------------------------------
  54 fscsi0:Enabled
  54 fscsi1:Enabled
  54 fscsi2:Enabled
  54 fscsi3:Enabled
  54 fscsi4:Enabled
  54 fscsi5:Enabled
  54 fscsi6:Enabled
  54 fscsi7:Enabled

HOSTS -------------------------------------------------------------------------
stgkf103
-------------------------------------------------------------------------------
  54 fscsi0:Enabled
  54 fscsi2:Enabled
  54 fscsi4:Enabled
  54 fscsi6:Enabled
V1.1:
---------

$ dsh -n ${ALL} 'lspath | grep fscsi | while read state disk if rest;do echo "$if:$state" ;done | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01
-------------------------------------------------------------------------------
  14 fscsi0:Enabled
  14 fscsi2:Missing
  14 fscsi4:Enabled
  14 fscsi6:Missing

HOSTS -------------------------------------------------------------------------
kf5hostname03
-------------------------------------------------------------------------------
  10 fscsi0:Enabled
  10 fscsi2:Enabled
  10 fscsi4:Enabled
  10 fscsi6:Enabled

HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04
-------------------------------------------------------------------------------
  42 fscsi10:Enabled
  42 fscsi11:Enabled
  42 fscsi12:Enabled
  42 fscsi13:Enabled
  42 fscsi14:Enabled
  42 fscsi15:Enabled
  42 fscsi8:Enabled
  42 fscsi9:Enabled

HOSTS -------------------------------------------------------------------------
kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
 160 fscsi0:Enabled
 160 fscsi12:Enabled
 160 fscsi13:Enabled
 160 fscsi14:Enabled
 160 fscsi15:Enabled
 160 fscsi1:Enabled
 160 fscsi2:Enabled
 160 fscsi3:Enabled
 160 fscsi4:Enabled
 160 fscsi8:Enabled
Storage
Verify HDISK Path Counts
Shows a count (first column) of LUN types and enabled path counts.
The number of hard disks in the histogram output depends on the PDOA version, the host role, the size of the environment, and the number of hosts within each rack.

dsh -n ${ALL} 'for i in 2076 FlashSystem;do lsdev | grep hdisk | grep "$i" | while read f g;do printf "%-20s:" $i;lspath -l $f | grep -c Enabled;done;done | sort | uniq -c' | dshbak -c
V1.0:
------



dsh -n ${ALL} 'for i in 2076 FlashSystem;do lsdev | grep hdisk | grep "$i" | while read f g;do printf "%-20s:" $i;lspath -l $f | grep -c Enabled;done;done | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf101, stgkf103
-------------------------------------------------------------------------------
  27 2076                :8

HOSTS -------------------------------------------------------------------------
stgkf102, stgkf104
-------------------------------------------------------------------------------
  27 2076                :16

HOSTS -------------------------------------------------------------------------
stgkf105, stgkf106, stgkf108
-------------------------------------------------------------------------------
  16 2076                :7
  48 2076                :8

V1.1:
------

dsh -n ${ALL} 'for i in 2076 FlashSystem;do lsdev | grep hdisk | grep "$i" | while read f g;do printf "%-20s:" $i;lspath -l $f | grep -c Enabled;done;done | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01
-------------------------------------------------------------------------------
   4 2076                :4
   3 FlashSystem         :4

HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04
-------------------------------------------------------------------------------
   7 2076                :16
  14 FlashSystem         :16

HOSTS -------------------------------------------------------------------------
kf5hostname03
-------------------------------------------------------------------------------
   2 2076                :8
   3 FlashSystem         :8

HOSTS -------------------------------------------------------------------------
kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
  20 2076                :20
  40 FlashSystem         :30
Storage
Verify HDISK IDs
Shows a histogram of the volume group or NSD labels reported by "lspv" on each host.

 
dsh -n ${ALL} 'lspv | while read disk id label rest;do echo "${label}";done | sed "s|\(nsd[a-z][a-z]*\).*|\1|" | sort | uniq -c' | dshbak -c

V1.0
------



$ dsh -n ${ALL} 'lspv | while read disk id label rest;do echo "${label}";done | sed "s|\(nsd[a-z][a-z]*\).*|\1|" | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf101
-------------------------------------------------------------------------------
  23 gpfs
   1 nsdappsvr
   1 nsdopm
   2 rootvg
   1 vgbcushare
   1 vgpscfs

HOSTS -------------------------------------------------------------------------
stgkf106, stgkf108
-------------------------------------------------------------------------------
   2 None
  16 nsdbkpfs
  48 nsddb
   2 rootvg
   1 vgssd

HOSTS -------------------------------------------------------------------------
stgkf102
-------------------------------------------------------------------------------
   2 None
   2 gpfs
   5 nsdbkpfs
  16 nsddb
   1 nsddwhome
   1 nsdstage
   2 rootvg
   1 vgssd

HOSTS -------------------------------------------------------------------------
stgkf104
-------------------------------------------------------------------------------
   4 None
   2 gpfs
   5 nsdbkpfs
  16 nsddb
   1 nsddwhome
   1 nsdstage
   2 rootvg
   1 vgssd

HOSTS -------------------------------------------------------------------------
stgkf103
-------------------------------------------------------------------------------
   2 None
  23 gpfs
   1 nsdappsvr
   1 nsdopm
   2 rootvg

HOSTS -------------------------------------------------------------------------
stgkf105
-------------------------------------------------------------------------------
  16 nsdbkpfs
  48 nsddb
   2 rootvg
   1 vgssd
V1.1
------

$ dsh -n ${ALL} 'lspv | while read disk id label rest;do echo "${label}";done | sed "s|\(nsd[a-z][a-z]*\).*|\1|" | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01
-------------------------------------------------------------------------------
   3 gpfs
   1 nsdappsvr
   1 nsdopm
   2 rootvg
   1 vgbcushare
   1 vgpscfs

HOSTS -------------------------------------------------------------------------
kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
   1 None
  20 nsdbkpfs
  40 nsddb
   2 rootvg

HOSTS -------------------------------------------------------------------------
kf5hostname04
-------------------------------------------------------------------------------
   6 nsdbkpfs
  13 nsddb
   1 nsddwhome
   1 nsdstage
   2 rootvg

HOSTS -------------------------------------------------------------------------
kf5hostname02
-------------------------------------------------------------------------------
  21 gpfs
   2 rootvg

HOSTS -------------------------------------------------------------------------
kf5hostname03
-------------------------------------------------------------------------------
   3 gpfs
   1 nsdappsvr
   1 nsdopm
   2 rootvg

Storage
Verify GPFS is started.
dsh -n ${ALL} '/usr/lpp/mmfs/bin/mmgetstate -a' 2>&1 | dshbak -c

$ dsh -n ${ALL} '/usr/lpp/mmfs/bin/mmgetstate -a' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf101, stgkf103
-------------------------------------------------------------------------------

 Node number  Node name        GPFS state
------------------------------------------
       1      stgkf101         active
       2      stgkf103         active

HOSTS -------------------------------------------------------------------------
stgkf102, stgkf104, stgkf105, stgkf106, stgkf108
-------------------------------------------------------------------------------

 Node number  Node name        GPFS state
------------------------------------------
       2      stgkf102         active
       4      stgkf104         active
       5      stgkf105         active
       6      stgkf106         active
       7      stgkf108         active
Storage
Verify GPFS Mounts
Run as root. Does not require GPFS to be started to provide output.


dsh -t 10 -n ${ALL} 'printf "Defined GPFS Filesystems: %10d    Mounted GPFS Filesystems: %10d\n" $(lsfs -c | cut -d: -f 3 | grep -c mmfs) $(mount | grep -c " mmfs ")' | sort

V1.0:
-----


dsh -t 10 -n ${ALL} 'printf "Defined GPFS Filesystems: %10d    Mounted GPFS Filesystems: %10d\n" $(lsfs -c | cut -d: -f 3 | grep -c mmfs) $(mount | grep -c " mmfs ")' | sort
stgkf101: Defined GPFS Filesystems:          5    Mounted GPFS Filesystems:          5
stgkf102: Defined GPFS Filesystems:         87    Mounted GPFS Filesystems:         23
stgkf103: Defined GPFS Filesystems:          5    Mounted GPFS Filesystems:          5
stgkf104: Defined GPFS Filesystems:         87    Mounted GPFS Filesystems:         23
stgkf105: Defined GPFS Filesystems:         87    Mounted GPFS Filesystems:         67
stgkf106: Defined GPFS Filesystems:         87    Mounted GPFS Filesystems:         67
stgkf108: Defined GPFS Filesystems:         87    Mounted GPFS Filesystems:         67
V1.1:
-----

dsh -t 10 -n ${ALL} 'printf "Defined GPFS Filesystems: %10d    Mounted GPFS Filesystems: %10d\n" $(lsfs -c | cut -d: -f 3 | grep -c mmfs) $(mount | grep -c " mmfs ")' | sort
kf5hostname01: Defined GPFS Filesystems:          5    Mounted GPFS Filesystems:          5
kf5hostname02: Defined GPFS Filesystems:         81    Mounted GPFS Filesystems:         21
kf5hostname03: Defined GPFS Filesystems:          5    Mounted GPFS Filesystems:          2
kf5hostname04: Defined GPFS Filesystems:         81    Mounted GPFS Filesystems:          0
kf5hostname05: Defined GPFS Filesystems:         81    Mounted GPFS Filesystems:          0
kf5hostname06: Defined GPFS Filesystems:         81    Mounted GPFS Filesystems:          0
kf5hostname07: Defined GPFS Filesystems:         81    Mounted GPFS Filesystems:          0
Storage
Check for unfixed events on the PDOA storage enclosures.
Run as root on management.
Shows the unfixed alerts on all of the storage enclosures in the environment.
On older firmware levels, following the MAP procedures does not close some alerts.
These commands work on Flash and V7000 based storage enclosures.


grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo " *** ${sf} ${ip} ***";ssh -n superuser@${ip} 'lseventlog -alert yes -message no';done

grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo " *** ${sf} ${ip} ***";ssh -n superuser@${ip} 'lseventlog -alert yes -message no';done
 *** SAN_FRAME1_IP 172.23.2.201 ***
sequence_number last_timestamp object_type object_id object_name copy_id status fixed event_id error_code description
921             181027143310   drive       4                             alert  no    981020              Managed Disk error count warning threshold met
1037            190116170826   drive       35                            alert  no    010083   1685       Drive fault type 2
1051            190116171516   drive       54                            alert  no    010083   1685       Drive fault type 2
1058            190116172236   drive       71                            alert  no    981020              Managed Disk error count warning threshold met
1078            190116182751   drive       11                            alert  no    981020              Managed Disk error count warning threshold met
1089            190116183101   drive       23                            alert  no    981020              Managed Disk error count warning threshold met
1141            190117040222   drive       67                            alert  no    010083   1685       Drive fault type 2
1173            190121005027   drive       0                             alert  no    010083   1685       Drive fault type 2
1269            190123063540   drive       67                            alert  no    010083   1685       Drive fault type 2
1030            190123133912   drive       35                            alert  no    981020              Managed Disk error count warning threshold met
1029            190123134122   drive       35                            alert  no    010097   1685       Drive fault type 2
1305            190123225258   drive       67                            alert  no    010083   1685       Drive fault type 2
1342            190123225838   drive       67                            alert  no    010083   1685       Drive fault type 2
1390            190124005247   drive       67                            alert  no    010083   1685       Drive fault type 2
1407            190124065444   drive       0                             alert  no    010083   1685       Drive fault type 2
1044            190124084143   drive       54                            alert  no    981020              Managed Disk error count warning threshold met
1045            190124084223   drive       54                            alert  no    010097   1685       Drive fault type 2
1424            190124084243   drive       54                            alert  no    010083   1685       Drive fault type 2
1151            190125175159   drive       0                             alert  no    981020              Managed Disk error count warning threshold met
1511            190125175624   drive       0                             alert  no    010083   1685       Drive fault type 2
1538            190125180204   drive       0                             alert  no    010083   1685       Drive fault type 2
1548            190125193119   drive       67                            alert  no    010083   1685       Drive fault type 2
1589            190131221431   drive       67                            alert  no    010083   1685       Drive fault type 2
1593            190131221516   drive       0                             alert  no    010083   1685       Drive fault type 2
1636            190131222721   drive       67                            alert  no    010083   1685       Drive fault type 2
1654            190131223021   drive       0                             alert  no    010083   1685       Drive fault type 2
1686            190131223701   drive       0                             alert  no    010083   1685       Drive fault type 2
1692            190131223816   drive       67                            alert  no    010083   1685       Drive fault type 2
1114            190201023035   drive       7                             alert  no    981020              Managed Disk error count warning threshold met
1756            190201181753   drive       67                            alert  no    010083   1685       Drive fault type 2
1759            190201181758   drive       0                             alert  no    010083   1685       Drive fault type 2
1785            190201182853   drive       67                            alert  no    010083   1685       Drive fault type 2
1791            190201182923   drive       0                             alert  no    010083   1685       Drive fault type 2
1825            190201184258   drive       0                             alert  no    010083   1685       Drive fault type 2
1832            190201184503   drive       67                            alert  no    010083   1685       Drive fault type 2
1167            190201215636   drive       0                             alert  no    010097   1685       Drive fault type 2
1131            190201215646   drive       67                            alert  no    981020              Managed Disk error count warning threshold met
1847            190201215656   drive       0                             alert  no    010083   1685       Drive fault type 2
1132            190201215736   drive       67                            alert  no    010097   1685       Drive fault type 2
1875            190201215856   drive       67                            alert  no    010083   1685       Drive fault type 2
1879            190201215901   drive       0                             alert  no    010083   1685       Drive fault type 2
1158            190203031649   drive       21                            alert  no    981020              Managed Disk error count warning threshold met
 *** SAN_FRAME2_IP 172.23.2.204 ***
sequence_number last_timestamp object_type object_id        object_name          copy_id status fixed event_id error_code description
807             181207225301   drive       23                                            alert  no    981020              Managed Disk error count warning threshold met
796             181211023349   drive       29                                            alert  no    981020              Managed Disk error count warning threshold met
831             181211085354   drive       24                                            alert  no    981020              Managed Disk error count warning threshold met
815             181212032413   drive       4                                             alert  no    981020              Managed Disk error count warning threshold met
1048            190218202549   node        1                node1                        alert  no    984003              I/O group caching was set to synchronous destage
1049            190218202549   node        2                node2                        alert  no    984003              I/O group caching was set to synchronous destage
1053            190218225224   cluster                      Cluster_172.23.2.204         alert  no    010094   1231       Login excluded
1228            190404233911   cluster     00000200A100A8D8                              alert  no    010094   1231       Login excluded
1229            190404233911   cluster     00000200A100A8D8                              alert  no    010094   1231       Login excluded
 *** SAN_FRAME3_IP 172.23.2.205 ***
sequence_number last_timestamp object_type object_id object_name copy_id status fixed event_id error_code description
1169            181208015111   drive       10                            alert  no    981020              Managed Disk error count warning threshold met
1301            181212195122   drive       31                            alert  no    981020              Managed Disk error count warning threshold met
1310            181216014831   drive       21                            alert  no    981020              Managed Disk error count warning threshold met
1184            181219235324   drive       9                             alert  no    981020              Managed Disk error count warning threshold met
1195            190116213527   drive       28                            alert  no    981020              Managed Disk error count warning threshold met
1433            190118040529   drive       12                            alert  no    981020              Managed Disk error count warning threshold met
1499            190124212332   drive       43                            alert  no    981020              Managed Disk error count warning threshold met
 *** SAN_FRAME4_IP 172.23.2.206 ***
sequence_number last_timestamp object_type object_id        object_name          copy_id status fixed event_id error_code description
9244            181207203523   drive       46                                            alert  no    981020              Managed Disk error count warning threshold met
9435            181211020430   drive       40                                            alert  no    981020              Managed Disk error count warning threshold met
9323            181211153817   drive       25                                            alert  no    981020              Managed Disk error count warning threshold met
9256            181212041338   drive       19                                            alert  no    981020              Managed Disk error count warning threshold met
9224            181223124408   drive       29                                            alert  no    981020              Managed Disk error count warning threshold met
9298            190122191407   drive       1                                             alert  no    981020              Managed Disk error count warning threshold met
9794            190213063817   drive       47                                            alert  no    981020              Managed Disk error count warning threshold met
10043           190404213054   cluster     00000200A060A8C6                              alert  no    010094   1231       Login excluded
10044           190404213054   cluster     00000200A0E0A918                              alert  no    010094   1231       Login excluded
10614           190404213054   port        4                                             alert  no    073305   1065       Fibre Channel Speed Change
10022           190404213914   cluster                      Cluster_172.23.2.206         alert  no    010094   1231       Login excluded
10023           190404213914   cluster                      Cluster_172.23.2.206         alert  no    010094   1231       Login excluded
 *** SAN_FRAME5_IP 172.23.2.207 ***
sequence_number last_timestamp object_type object_id object_name copy_id status fixed event_id error_code description
773             181115232043   drive       47                            alert  no    981020              Managed Disk error count warning threshold met
864             181208001248   drive       10                            alert  no    981020              Managed Disk error count warning threshold met
845             181211024651   drive       9                             alert  no    981020              Managed Disk error count warning threshold met
992             190122215810   drive       36                            alert  no    981020              Managed Disk error count warning threshold met
1132            190329142235   drive       17                            alert  no    981020              Managed Disk error count warning threshold met
1261            190405142154   drive       17                            alert  no    010084   1285       Drive SAS error counts exceeded warning thresholds
Storage
Check for drives that are offline.
Run as root on management host.

grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo " *** ${sf} ${ip} ***";ssh -n superuser@${ip} 'lsdrive' | grep -v online;done

$ grep "SAN_FRAME[0-9]*[0-9]_IP" /pschome/config/xcluster.cfg | while read sf eq ip rest;do echo " *** ${sf} ${ip} ***";ssh -n superuser@${ip} 'lsdrive' | grep -v online;done
 *** SAN_FRAME1_IP 172.23.2.201 ***
id status error_sequence_number use    tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id node_id node_name auto_manage
 *** SAN_FRAME2_IP 172.23.2.204 ***
id status error_sequence_number use    tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id node_id node_name auto_manage
 *** SAN_FRAME3_IP 172.23.2.205 ***
id status error_sequence_number use    tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id node_id node_name auto_manage
 *** SAN_FRAME4_IP 172.23.2.206 ***
id status error_sequence_number use    tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id node_id node_name auto_manage
 *** SAN_FRAME5_IP 172.23.2.207 ***
id status   error_sequence_number use    tech_type capacity mdisk_id mdisk_name member_id enclosure_id slot_id node_id node_name auto_manage
17 degraded 1261                  member sas_hdd   837.9GB  1        ARRAY4     6         2            22                        inactive

HA
Verify Domain Status (hatools)
Run as root on any host.
V1.0.0.4 and higher
V1.1.0.0 and higher

hadomain -mgmt status

hadomain -core status

$ hadomain -mgmt status
mgmtdomain Online  3.2.1.2           No            12347  12348


$ hadomain -core status
bcudomain01 Online  3.2.1.2           No            12347  12348
bcudomain02 Online  3.2.1.2           No            12347  12348

HA
Verify Domain Status (Tivoli System Automation).
Run as root on any host.
V1.0.0.3 and earlier have a single domain, bcudomain, across all hosts. V1.0.0.4 and higher and V1.1.0.0 and higher have a separate domain, bcudomain#, for each rack in the environment.

dsh -n ${ALL} 'lsrpdomain -x' 2>&1 | dshbak -c

$ dsh -n ${ALL} 'lsrpdomain -x' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf101, stgkf103
-------------------------------------------------------------------------------
mgmtdomain Online  3.2.1.2           No            12347  12348

HOSTS -------------------------------------------------------------------------
stgkf102, stgkf104
-------------------------------------------------------------------------------
bcudomain01 Online  3.2.1.2           No            12347  12348

HOSTS -------------------------------------------------------------------------
stgkf105, stgkf106, stgkf108
-------------------------------------------------------------------------------
bcudomain02 Online  3.2.1.2           No            12347  12348

HA
Check Db2 Resource Group Status
The "lsrg" and "lssam" commands are more advanced than the standard "hals" command. These commands are run on all of the core nodes using "dsh" to obtain more granular information.
In PDOA appliances, "lssam" produces a lot of output that is difficult to read quickly. The following commands filter that output to show only resource groups at the partition level, omitting the status of the file systems, peer node equivalencies, and network equivalencies.
In PDOA environments, hosts within the same "TSA" domain should show the same output. However, during transitional states or error states it is possible that hosts within the same domain will have different output, which is not collapsed by the "dshbak -c" command.
Note that the "lssam" command and the "lsrg -m" command may appear to hang during some transitional states.

dsh -t 10 -n ${BCUDB2ALL} 'lsrg | egrep "db2_.*-rg" | xargs -n1 lssam -g' 2>&1 | dshbak -c

$ dsh -t 10 -n ${BCUDB2ALL} 'lsrg | egrep "db2_.*-rg" | xargs -n1 lssam -g' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204
-------------------------------------------------------------------------------
Offline IBM.ResourceGroup:db2_bcuaix_0_1_2_3_4-rg Nominal=Offline
        |- Offline IBM.Application:db2_bcuaix_0_1_2_3_4-rs
                |- Offline IBM.Application:db2_bcuaix_0_1_2_3_4-rs:stgkf202
                '- Offline IBM.Application:db2_bcuaix_0_1_2_3_4-rs:stgkf204
        '- Offline IBM.ServiceIP:db2ip_172_23_2_111-rs
                |- Offline IBM.ServiceIP:db2ip_172_23_2_111-rs:stgkf202
                '- Offline IBM.ServiceIP:db2ip_172_23_2_111-rs:stgkf204

HOSTS -------------------------------------------------------------------------
stgkf205, stgkf206, stgkf208
-------------------------------------------------------------------------------
Offline IBM.ResourceGroup:db2_bcuaix_13_14_15_16_17_18_19_20-rg Nominal=Offline
        |- Offline IBM.Application:db2_bcuaix_13_14_15_16_17_18_19_20-rs
                |- Offline IBM.Application:db2_bcuaix_13_14_15_16_17_18_19_20-rs:stgkf205
                '- Offline IBM.Application:db2_bcuaix_13_14_15_16_17_18_19_20-rs:stgkf206
        '- Offline IBM.ServiceIP:db2ip_172_23_2_44-rs
                |- Offline IBM.ServiceIP:db2ip_172_23_2_44-rs:stgkf205
                '- Offline IBM.ServiceIP:db2ip_172_23_2_44-rs:stgkf206
Offline IBM.ResourceGroup:db2_bcuaix_5_6_7_8_9_10_11_12-rg Nominal=Offline
        |- Offline IBM.Application:db2_bcuaix_5_6_7_8_9_10_11_12-rs
                |- Offline IBM.Application:db2_bcuaix_5_6_7_8_9_10_11_12-rs:stgkf206
                '- Offline IBM.Application:db2_bcuaix_5_6_7_8_9_10_11_12-rs:stgkf208
        '- Offline IBM.ServiceIP:db2ip_172_23_2_43-rs
                |- Offline IBM.ServiceIP:db2ip_172_23_2_43-rs:stgkf206
                '- Offline IBM.ServiceIP:db2ip_172_23_2_43-rs:stgkf208

HA
HA Mount Status
The example output was taken with the domain online, the database down, and "mmumount all" run on all core hosts.
This demonstrates the various states of the storage resources as "TSA" and the associated "TSA" policies attempt to restart the storage.
If the output is blank, then all file system resources are in the "Online" state.
The commands "lssam" and "lsrg -m" may hang during some transitional states.

dsh -t 10 -n ${BCUDB2ALL} 'lsrg | egrep "db2mnt_.*-rg" | xargs -n1 lssam -g | egrep -i "Pending|Failed|Unknown|Stuck|Offline"' 2>&1 | dshbak -c

$ dsh -t 10 -n ${BCUDB2ALL} 'lsrg | egrep "db2mnt_.*-rg" | xargs -n1 lssam -g | egrep -i "Pending|Failed|Unknown|Stuck|Offline"' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf202
-------------------------------------------------------------------------------
                |- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0001-rs:stgkf202
                |- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0003-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0004-rs:stgkf204
                '- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0001-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0004-rs:stgkf202
        |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0001-rs
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0001-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0001-rs:stgkf204
                '- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0002-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0004-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0001-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0003-rs:stgkf202
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0004-rs:stgkf202

HOSTS -------------------------------------------------------------------------
stgkf205, stgkf206, stgkf208
-------------------------------------------------------------------------------
                |- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0014-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0017-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0014-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0015-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0017-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0018-rs:stgkf206
                '- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0020-rs:stgkf208
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0013-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0014-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0015-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0016-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0017-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0017-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0018-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0018-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0019-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0015-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0017-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0017-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0018-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0018-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0019-rs:stgkf205
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0019-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0020-rs:stgkf206
                '- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0009-rs:stgkf208
                '- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0005-rs:stgkf208
                '- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0007-rs:stgkf208
                '- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0008-rs:stgkf208
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0005-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0006-rs:stgkf206
                '- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0009-rs:stgkf208
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0005-rs:stgkf206
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0006-rs:stgkf206
                '- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0007-rs:stgkf208
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0008-rs:stgkf205
                '- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0009-rs:stgkf208
                '- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0010-rs:stgkf208

HOSTS -------------------------------------------------------------------------
stgkf204
-------------------------------------------------------------------------------
                |- Pending offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0000-rs:stgkf202
                |- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0001-rs:stgkf202
                |- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0003-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_bkpfs_bcuaix_NODE0004-rs:stgkf204
        |- Pending offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0000-rs
                |- Pending offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0000-rs:stgkf202
                '- Pending offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0000-rs:stgkf204
                |- Pending offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0001-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0001-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2fs_bcuaix_NODE0004-rs:stgkf202
                |- Pending offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0000-rs:stgkf202
        |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0001-rs
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0001-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0001-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0002-rs:stgkf202
                '- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0002-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2mlog_bcuaix_NODE0004-rs:stgkf202
        |- Pending offline IBM.Application:db2mnt_db2path_bcuaix_NODE0000-rs
                |- Pending offline IBM.Application:db2mnt_db2path_bcuaix_NODE0000-rs:stgkf202
                '- Pending offline IBM.Application:db2mnt_db2path_bcuaix_NODE0000-rs:stgkf204
                '- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0001-rs:stgkf204
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0003-rs:stgkf202
                |- Failed offline IBM.Application:db2mnt_db2path_bcuaix_NODE0004-rs:stgkf202
HA
HA Equivalency Status
This command is important on the core nodes. It is used to ensure per-rack and per-domain consistency of the domain metadata. There have been edge cases where the "TSA" equivalency definitions for "IBM.PeerNode" equivalencies can get into an inconsistent state within the domain. During "Roving HA" events, the "PeerNode" equivalency definitions are altered to reflect the new primary and standby hosts as part of a "failover". This also happens to all of the resources in the "Db2 partition set resource groups". During these updates it is possible that the hosts within the domain do not synchronize correctly.

$ dsh -t 0 -n ${ALL} 'lsequ  | sort | grep -v "Displaying Equivalencies" | xargs -n 1 lsequ -x -D@ -e | egrep -v "Equivalency"|cut -d@ -f 1,2,3,4' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf201, stgkf203
-------------------------------------------------------------------------------
(lsrsrc-api) 2610-643 The specified session scope is not currently supported.
lsequ: 2622-009 An unexpected RMC error occurred.The RMC return code was 1.

HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204
-------------------------------------------------------------------------------
db2_FCM_network@IBM.NetworkInterface @{}@"Name = 'en11' AND (NodeNameList = 'stgkf204' || NodeNameList = 'stgkf202')"
db2_bcuaix_0_1_2_3_4-rg_group-equ@IBM.PeerNode @{stgkf202:stgkf202,stgkf204:stgkf204}@""

HOSTS -------------------------------------------------------------------------
stgkf205, stgkf206, stgkf208
-------------------------------------------------------------------------------
db2_FCM_network@IBM.NetworkInterface @{}@"Name = 'en11' AND (NodeNameList = 'stgkf208' || NodeNameList = 'stgkf206' || NodeNameList = 'stgkf205')"
db2_bcuaix_13_14_15_16_17_18_19_20-rg_group-equ@IBM.PeerNode @{stgkf205:stgkf205,stgkf206:stgkf206}@""
db2_bcuaix_5_6_7_8_9_10_11_12-rg_group-equ@IBM.PeerNode @{stgkf208:stgkf208,stgkf206:stgkf206}@""
HA
HA Resource Status
These commands are important on the core nodes (as is the HA Equivalency Status command). They display the roving HA resources. Pay close attention to the hostnames and the order of the hostnames. Compare this output to the equivalency check and to the contents of the db2nodes.cfg file (a sketch for this comparison follows the commands below). There are two separate commands, one for IBM.ServiceIP resources and one for IBM.Application resources.

dsh -t 0 -n ${ALL} 'export CT_MANAGEMENT_SCOPE=2;lsrsrc -Ab -D@ IBM.Application Name NodeNameList ResourceType | grep -v db2mnt | grep "@[12]@" | sort' 2>&1 | dshbak -c

dsh -t 0 -n ${ALL} 'export CT_MANAGEMENT_SCOPE=2;lsrsrc -Ab -D@ IBM.ServiceIP Name NodeNameList ResourceType | grep "@[12]@" | sort' 2>&1 | dshbak -c
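
The roving HA resource output can be compared with the contents of the db2nodes.cfg file. A minimal sketch of that comparison, assuming the Db2 instance owner is bcuaix (as in the resource names above) and that the instance home is mounted on the core hosts:

dsh -n ${BCUDB2ALL} 'cat ~bcuaix/sqllib/db2nodes.cfg' 2>&1 | dshbak -c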

$ dsh -t 0 -n ${ALL} 'export CT_MANAGEMENT_SCOPE=2;lsrsrc -Ab -D@ IBM.Application Name NodeNameList ResourceType | grep -v db2mnt | grep "@[12]@" | sort' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04
-------------------------------------------------------------------------------
"db2_bcuaix_0_1_2_3_4_5-rs"@{"kf5hostname02","kf5hostname04"}@1@

HOSTS -------------------------------------------------------------------------
kf5hostname01, kf5hostname03
-------------------------------------------------------------------------------
2610-643 The specified session scope is not currently supported.

HOSTS -------------------------------------------------------------------------
kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
"db2_bcuaix_16_17_18_19_20_21_22_23_24_25-rs"@{"kf5hostname06","kf5hostname05"}@1@
"db2_bcuaix_6_7_8_9_10_11_12_13_14_15-rs"@{"kf5hostname07","kf5hostname05"}@1@




Capacity
Check the partition filesystem capacities on all hosts.
This command checks filesystems that are shipped with PDOA. Other filesystems can be added to the for loop list to be tracked. Note that db2mlog and db2ssd are only available on V1.0 environments.
This presents a histogram showing the number of filesystems with the same space %-used and inode %-used statistics.
Note that "db2ssd#" file systems are expected to be full as they are used for system temporary space and have no growth potential. Three of these file systems on the admin and admin standby hosts are not used, so those file systems will appear empty.


dsh -n ${BCUDB2ALL} 'for pre in db2path db2fs bkpfs db2mlog db2ssd;do echo " *** ${pre} ***";lsfs -c | cut -d: -f 1 | grep ${pre} | sort | xargs df -g | grep ${pre} | uniq | while read dev size free pused iused ipused rest;do echo "${pused}:${ipused}";done | sort | uniq -c;done' | dshbak -c

V1.0:
-----

dsh -n ${BCUDB2ALL} 'for pre in db2path db2fs bkpfs db2mlog db2ssd;do echo " *** ${pre} ***";lsfs -c | cut -d: -f 1 | grep ${pre} | sort | xargs df -g | grep ${pre} | uniq | while read dev size free pused iused ipused rest;do echo "${pused}:${ipused}";done | sort | uniq -c;done' | dshbak -c

HOSTS -------------------------------------------------------------------------
stgkf205, stgkf206, stgkf208
-------------------------------------------------------------------------------

 *** db2path ***
  16 8%:6%
 *** db2fs ***
  15 30%:1%
   1 31%:1%
 *** bkpfs ***
  16 1%:1%
 *** db2mlog ***
  16 1%:6%
 *** db2ssd ***
   8 98%:1%


HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204
-------------------------------------------------------------------------------
 *** db2path ***
   5 8%:6%
 *** db2fs ***
   4 31%:1%
   1 8%:1%
 *** bkpfs ***
   5 1%:1%
 *** db2mlog ***
   5 1%:6%
 *** db2ssd ***
   3 1%:1%
   5 98%:1% 
V1.1: (some hosts are down)
-----

dsh -n ${BCUDB2ALL} 'for pre in db2path db2fs bkpfs db2mlog db2ssd;do echo " *** ${pre} ***";lsfs -c | cut -d: -f 1 | grep ${pre} | sort | xargs df -cg | grep ${pre} | uniq | cut -d: -f 4,6 | sort | uniq -c;done' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname04, kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
 *** db2path ***
 *** db2fs ***
 *** bkpfs ***
 *** db2mlog ***
 *** db2ssd ***

HOSTS -------------------------------------------------------------------------
kf5hostname02
-------------------------------------------------------------------------------
 *** db2path ***
   6 1%:6%
 *** db2fs ***
   1 11%:1%
   5 31%:1%
 *** bkpfs ***
   1 1%:1%
   5 9%:1%
 *** db2mlog ***
 *** db2ssd ***
V1.1: Healthier
-----

$ dsh -n ${BCUDB2ALL} 'for pre in db2path db2fs bkpfs db2mlog db2ssd;do echo " *** ${pre} ***";lsfs -c | cut -d: -f 1 | grep ${pre} | sort | xargs df -g | grep ${pre} | uniq | while read dev size free pused iused ipused rest;do echo "${pused}:${ipused}";done | sort | uniq -c;done' | dshbak -c 

HOSTS ------------------------------------------------------------------------- 
kf5hostname02, kf5hostname04 
------------------------------------------------------------------------------- 
*** db2path *** 
6 1%:6% 
*** db2fs *** 
1 11%:1% 
5 31%:1% 
*** bkpfs *** 
1 1%:1% 
5 9%:1% 
*** db2mlog *** 
*** db2ssd *** 

HOSTS ------------------------------------------------------------------------- 
kf5hostname05, kf5hostname06, kf5hostname07 
------------------------------------------------------------------------------- 
*** db2path *** 
20 1%:6% 
*** db2fs *** 
8 30%:1% 
12 31%:1% 
*** bkpfs *** 
20 9%:1% 
*** db2mlog *** 
*** db2ssd *** 
Db2
Determine Db2 copies installed on the appliance.
dsh -t 10 -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1' 2>&1 | dshbak -c 
V1.0.0.5
---------

$ dsh -t 10 -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf203
-------------------------------------------------------------------------------
/usr/IBM/dwe/mgmt_db2/V10.1

HOSTS -------------------------------------------------------------------------
stgkf201
-------------------------------------------------------------------------------
/opt/ibm/director/db2
/usr/IBM/dwe/mgmt_db2/V10.1

HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204, stgkf206, stgkf208
-------------------------------------------------------------------------------
/usr/IBM/dwe/db2/V10.1.0.5..0

HOSTS -------------------------------------------------------------------------
stgkf205
-------------------------------------------------------------------------------
/usr/IBM/dwe/db2/V10.1.0.2..0
/usr/IBM/dwe/db2/V10.1.0.5..0

 
V1.1.0.2
---------

$ dsh -t 10 -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01, kf5hostname03
-------------------------------------------------------------------------------
/usr/IBM/dwe/mgmt_db2/V10.5

HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04, kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
/usr/IBM/dwe/db2/V10.5.0.8..0
/usr/IBM/dwe/db2/V10.5.0.10..0
/usr/IBM/dwe/db2/V11.1.0.1..0
/usr/IBM/dwe/db2/V11.1.1.3.b.0
More Comments:
---------------------
FP5_FP1 and earlier will include the IBM System Director embedded Db2 copy. This is removed as part of FP6_FP2.
The expected output shows all core hosts have the same Db2 copies and all management hosts have the same Db2 copies. The lone exception is that the management node will have a Db2 copy for IBM System Director if IBM System Director is still installed.
Customers who have multiple Db2 copies (other than on the management host) and are planning to apply V1.0.0.5/V1.1.0.1 or earlier fix packs must have only one Db2 copy on all hosts, except for the management host, which will have two copies.
"PDOA V1.1 FP2" and later fix packs update Db2 differently and do not have the same restrictions as earlier fix packs.
The "PDOA" convention is to include the Db2 version in the Db2 installation path. However, if a Db2 fix pack was later installed into that path, the path no longer reflects the installed version. This makes it important to verify the Db2 versions installed in the environment rather than relying on the installation paths.
Db2
Determine the actual Db2 level installed per Db2 copy.
dsh -t 10 -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1,2,3 ' 2>&1 | dshbak -c
V1.0:

-----

$ dsh -t 10 -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1,2,3 ' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf201
-------------------------------------------------------------------------------
/opt/ibm/director/db2:9.7.0.4:4
/usr/IBM/dwe/mgmt_db2/V10.1:10.1.0.5:5

HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204, stgkf206, stgkf208
-------------------------------------------------------------------------------
/usr/IBM/dwe/db2/V10.1.0.5..0:10.1.0.5:5

HOSTS -------------------------------------------------------------------------
stgkf203
-------------------------------------------------------------------------------
/usr/IBM/dwe/mgmt_db2/V10.1:10.1.0.5:5

HOSTS -------------------------------------------------------------------------
stgkf205
-------------------------------------------------------------------------------
/usr/IBM/dwe/db2/V10.1.0.2..0:10.1.0.2:2
/usr/IBM/dwe/db2/V10.1.0.5..0:10.1.0.5:5

V1.1:
-----

$ dsh -t 10 -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1,2,3 ' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01, kf5hostname03
-------------------------------------------------------------------------------
/usr/IBM/dwe/mgmt_db2/V10.5:10.5.0.10:10

HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04, kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
/usr/IBM/dwe/db2/V10.5.0.8..0:10.5.0.8:8
/usr/IBM/dwe/db2/V10.5.0.10..0:10.5.0.10:10
/usr/IBM/dwe/db2/V11.1.0.1..0:11.1.1.1:1
/usr/IBM/dwe/db2/V11.1.1.3.b.0:11.1.3.3:3b
Db2
Db2 Copies and associated Licenses.
Db2 10.1 should have the product identifier "iwee". Db2 10.5 and 11.1 should have the product identifier "db2aese".
The following command helps to discover inconsistent Db2 licenses in the environment.
dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | while read x;do echo " ** ${x} **";${x}/adm/db2licm -l show detail;done' | dshbak -c
V1.0:
-----


$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | while read x;do echo " ** ${x} **";${x}/adm/db2licm -l show detail;done' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf201
-------------------------------------------------------------------------------
 ** /opt/ibm/director/db2 **
Product name:                     "DB2 Enterprise Server Edition"
License type:                     "Restricted"
Expiry date:                      "Permanent"
Product identifier:               "db2ese"
Version information:              "9.7"


 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
Product name:                     "InfoSphere Warehouse Enterprise Edition"
License type:                     "CPU"
Expiry date:                      "Permanent"
Product identifier:               "iwee"
Version information:              "10.1"
Features:


HOSTS -------------------------------------------------------------------------
stgkf203
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
Product name:                     "InfoSphere Warehouse Enterprise Edition"
License type:                     "CPU"
Expiry date:                      "Permanent"
Product identifier:               "iwee"
Version information:              "10.1"
Features:


HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204, stgkf206, stgkf208
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
Product name:                     "InfoSphere Warehouse Enterprise Edition"
License type:                     "CPU"
Expiry date:                      "Permanent"
Product identifier:               "iwee"
Version information:              "10.1"
Features:


HOSTS -------------------------------------------------------------------------
stgkf205
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.2..0 **
Product name:                     "DB2 Enterprise Server Edition"
License type:                     "License not registered"
Expiry date:                      "License not registered"
Product identifier:               "db2ese"
Version information:              "10.1"


 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
Product name:                     "InfoSphere Warehouse Enterprise Edition"
License type:                     "CPU"
Expiry date:                      "Permanent"
Product identifier:               "iwee"
Version information:              "10.1"
Features:

V1.1:
-----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | while read x;do echo " ** ${x} **";${x}/adm/db2licm -l show detail;done' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01, kf5hostname03
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.5 **
Product name:                     "DB2 Advanced Enterprise Server Edition"
License type:                     "CPU Option"
Expiry date:                      "Permanent"
Product identifier:               "db2aese"
Version information:              "10.5"
Enforcement policy:               "Soft Stop"


HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04, kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.5.0.8..0 **
Product name:                     "DB2 Advanced Enterprise Server Edition"
License type:                     "CPU Option"
Expiry date:                      "Permanent"
Product identifier:               "db2aese"
Version information:              "10.5"
Enforcement policy:               "Soft Stop"


 ** /usr/IBM/dwe/db2/V10.5.0.10..0 **
Product name:                     "DB2 Advanced Enterprise Server Edition"
License type:                     "CPU Option"
Expiry date:                      "Permanent"
Product identifier:               "db2aese"
Version information:              "10.5"
Enforcement policy:               "Soft Stop"


 ** /usr/IBM/dwe/db2/V11.1.0.1..0 **
Product name:                     "DB2 Advanced Enterprise Server Edition"
License type:                     "CPU Option"
Expiry date:                      "Permanent"
Product identifier:               "db2aese"
Version information:              "11.1"
Enforcement policy:               "Soft Stop"


 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
Product name:                     "DB2 Advanced Enterprise Server Edition"
License type:                     "CPU Option"
Expiry date:                      "Permanent"
Product identifier:               "db2aese"
Version information:              "11.1"
Enforcement policy:               "Soft Stop"
Features:
IBM DB2 Performance Management Offering:              "Not licensed"
Product identifier:               "db2pmf"
Expiry date:                      "Expired"

Db2
Db2 Registry Entries: Instance ("I") Records
dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^I";done' | dshbak -c
V1.0:
----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^I";done' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf201
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
I,DB2,9.7.0.4,dirinst1,/home/dirinst1/sqllib,,1,0,/opt/ibm/director/db2,,
I,DB2,10.1.0.5,db2psc,/pschome/db2psc/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.1,,
I,DB2,10.1.0.5,dweadmin,/usr/IBM/dwe/appserver_001/home/dweadmin/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.1,,
I,DB2,10.1.0.5,db2opm,/opmfs/home/db2opm/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.1,,

HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204, stgkf205, stgkf206, stgkf208
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
I,DB2,10.1.0.5,bcuaix,/db2home/bcuaix/sqllib,,1,0,/usr/IBM/dwe/db2/V10.1.0.5..0,,
I,DAS,10.1.0.5,bcudasp,/dashome/bcudasp/das,,1,,/usr/IBM/dwe/db2/V10.1.0.5..0/das,,

HOSTS -------------------------------------------------------------------------
stgkf203
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
I,DB2,10.1.0.5,dweadmin,/usr/IBM/dwe/appserver_001/home/dweadmin/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.1,,
I,DB2,10.1.0.5,db2opm,/opmfs/home/db2opm/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.1,,
V1.1:
-----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^I";done' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.5 **
I,DB2,10.5.0.10,db2opm,/opmfs/home/db2opm/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.5,,
I,DB2,10.5.0.10,dweadmin,/usr/IBM/dwe/appserver_001/home/dweadmin/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.5,,
I,DB2,10.5.0.10,db2dsm,/opmfs/home/db2dsm/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.5,,

HOSTS -------------------------------------------------------------------------
kf5hostname03
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.5 **
I,DB2,10.5.0.10,db2opm,/opmfs/home/db2opm/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.5,,
I,DB2,10.5.0.10,dweadmin,/usr/IBM/dwe/appserver_001/home/dweadmin/sqllib,,1,0,/usr/IBM/dwe/mgmt_db2/V10.5,,

HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04, kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
I,DB2,11.1.3.3,bcuaix,/db2home/bcuaix/sqllib,,1,0,/usr/IBM/dwe/db2/V11.1.1.3.b.0,,

Comments:
-------------
The following instances will exist only on the management host.
  • "dirinst1" (V1.1 GA through V1.1 FP1)
  • "db2psc"(V1.1 GA through V1.1 FP1)
The following instances will exist only the management hosts.
  • "dweadmin" (V1.1 GA through V1.1 FP1)
  • "db2opm" (V1.1 GA through V1.1 FP3)
The appliance was designed to only have one instance on the core hosts. However, some customers have multiple instances and may have changed their instance from the default shown in the following list.
  • "bcuaix"
Carefully check the core instance records for differences; a minimal comparison sketch follows these comments. The most common issue is a standby host that is not updated correctly when Db2 special builds are applied by customers.
Db2 "DAS" instances do not appear in V1.1 systems and also should be removed from V1.0 systems.
Db2
Db2 Registry Entries for "S" Records.
Use the cut command to remove installation dates.
"GPFS/TSA/RSCT" entries were added as part of Db2 10.5, but Db2 does not manage these components in PDOA environments.
There is an "S record" for each Db2 copy installed on the host.
 
dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^S" | cut -d, -f 1,2,3,4,5,6,7,8,9,11;done' | dshbak -c
V1.0:
----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^S" | cut -d, -f 1,2,3,4,5,6,7,8,9,11;done' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf201
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
S,DB2,9.7.0.4,/opt/ibm/director/db2,,,4,0,,0
S,DB2,10.1.0.5,/usr/IBM/dwe/mgmt_db2/V10.1,,,5,0,,0

HOSTS -------------------------------------------------------------------------
stgkf202, stgkf204, stgkf206, stgkf208
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
S,DB2,10.1.0.5,/usr/IBM/dwe/db2/V10.1.0.5..0,-,,5,,
S,DAS,10.1.0.5,/usr/IBM/dwe/db2/V10.1.0.5..0/das,lib/libdb2dasgcf.a,,5,, ,

HOSTS -------------------------------------------------------------------------
stgkf203
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
S,DB2,10.1.0.5,/usr/IBM/dwe/mgmt_db2/V10.1,,,5,0,,0

HOSTS -------------------------------------------------------------------------
stgkf205
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
S,DB2,10.1.0.2,/usr/IBM/dwe/db2/V10.1.0.2..0,,,2,0,,0
S,DB2,10.1.0.5,/usr/IBM/dwe/db2/V10.1.0.5..0,-,,5,,
S,DAS,10.1.0.5,/usr/IBM/dwe/db2/V10.1.0.5..0/das,lib/libdb2dasgcf.a,,5,, ,
V1.1:
----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^S" | cut -d, -f 1,2,3,4,5,6,7,8,9,11;done' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01, kf5hostname03
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.5 **
S,GPFS,4.1.1.17,/usr/lpp/mmfs,DG_NOT_ALLOWED,DB2_INSTALLED,0,0,-,0
S,TSA,4.1.0.3,/opt/IBM/tsamp,DG_NOT_ALLOWED,DB2_INSTALLED,0,0,-,0
S,RSCT,3.2.2.4,/usr/sbin/rsct,DG_NOT_ALLOWED,DB2_INSTALLED,0,0,-,0
S,DB2,10.5.0.10,/usr/IBM/dwe/mgmt_db2/V10.5,,,10,0,,0

HOSTS -------------------------------------------------------------------------
kf5hostname02, kf5hostname04, kf5hostname05, kf5hostname06, kf5hostname07
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
S,DB2,10.5.0.8,/usr/IBM/dwe/db2/V10.5.0.8..0,,,8,0,,0
S,GPFS,4.1.1.17,/usr/lpp/mmfs,DG_NOT_ALLOWED,DB2_INSTALLED,0,0,-,0
S,DB2,10.5.0.10,/usr/IBM/dwe/db2/V10.5.0.10..0,,,10,0,,0
S,DB2,11.1.1.1,/usr/IBM/dwe/db2/V11.1.0.1..0,,,1,0,,0
S,TSA,4.1.0.3,/opt/IBM/tsamp,DG_NOT_ALLOWED,DB2_INSTALLED,0,0,-,0
S,RSCT,3.2.2.4,/usr/sbin/rsct,DG_NOT_ALLOWED,DB2_INSTALLED,0,0,-,0
S,DB2,11.1.3.3,/usr/IBM/dwe/db2/V11.1.1.3.b.0,,,3,0,b,0
Db2
Db2 Registry Entries for "V" Records.
The "DB2SYSTEM" variable will have a unique value on each host. This variable is created as part of the Db2 installation pattern used by the appliance; a sketch after the example output below shows how to filter it out so that "dshbak" can group hosts with otherwise identical records.
The "DB2INSTDEF" variable is not relevant to Db2 on AIX. See the related technote for more information.

dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^V";done' | dshbak -c
V1.0:
----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^V";done' | dshbak -c
HOSTS -------------------------------------------------------------------------
stgkf201
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
V,DB2GPRF,DB2SYSTEM,stgkf201,/opt/ibm/director/db2,
V,DB2GPRF,DB2INSTDEF,dirinst1,/opt/ibm/director/db2,
V,DB2GPRF,DB2FCMCOMM,TCPIP4,/opt/ibm/director/db2,
V,DB2GPRF,DB2SYSTEM,stgkf201,/usr/IBM/dwe/mgmt_db2/V10.1,
V,DB2GPRF,DB2INSTDEF,db2psc,/usr/IBM/dwe/mgmt_db2/V10.1,

HOSTS -------------------------------------------------------------------------
stgkf202
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
V,DB2GPRF,DB2SYSTEM,stgkf202,/usr/IBM/dwe/db2/V10.1.0.2..0,
V,DB2GPRF,DB2SYSTEM,stgkf202,/usr/IBM/dwe/db2/V10.1.0.3..2,
V,DB2GPRF,DB2SYSTEM,stgkf202,/usr/IBM/dwe/db2/V10.1.0.5..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.5..0,
V,DB2GPRF,DB2ADMINSERVER,bcudasp,/usr/IBM/dwe/db2/V10.1.0.5..0,

HOSTS -------------------------------------------------------------------------
stgkf203
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.1 **
V,DB2GPRF,DB2SYSTEM,stgkf203,/usr/IBM/dwe/mgmt_db2/V10.1,

HOSTS -------------------------------------------------------------------------
stgkf208
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
V,DB2GPRF,DB2SYSTEM,stgkf208,/usr/IBM/dwe/db2/V10.1.0.2..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.5..0,
V,DB2GPRF,DB2ADMINSERVER,bcudasp,/usr/IBM/dwe/db2/V10.1.0.5..0,

HOSTS -------------------------------------------------------------------------
stgkf206
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
V,DB2GPRF,DB2SYSTEM,stgkf206,/usr/IBM/dwe/db2/V10.1.0.2..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.3..2,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.5..0,
V,DB2GPRF,DB2ADMINSERVER,bcudasp,/usr/IBM/dwe/db2/V10.1.0.5..0,

HOSTS -------------------------------------------------------------------------
stgkf204
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
V,DB2GPRF,DB2SYSTEM,stgkf204,/usr/IBM/dwe/db2/V10.1.0.2..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.3..2,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.5..0,
V,DB2GPRF,DB2ADMINSERVER,bcudasp,/usr/IBM/dwe/db2/V10.1.0.5..0,

HOSTS -------------------------------------------------------------------------
stgkf205
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V10.1.0.5..0 **
V,DB2GPRF,DB2SYSTEM,stgkf205,/usr/IBM/dwe/db2/V10.1.0.2..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.1.0.5..0,
V,DB2GPRF,DB2ADMINSERVER,bcudasp,/usr/IBM/dwe/db2/V10.1.0.5..0,
V1.1:
----

$ dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^V";done' | dshbak -c
HOSTS -------------------------------------------------------------------------
kf5hostname01
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.5 **
V,DB2GPRF,DB2SYSTEM,kf5hostname01,/opt/ibm/director/db2,
V,DB2GPRF,DB2FCMCOMM,TCPIP4,/opt/ibm/director/db2,
V,DB2GPRF,DB2SYSTEM,kf5hostname01,/usr/IBM/dwe/mgmt_db2/V10.5,

HOSTS -------------------------------------------------------------------------
kf5hostname03
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/mgmt_db2/V10.5 **
V,DB2GPRF,DB2SYSTEM,kf5hostname03,/usr/IBM/dwe/mgmt_db2/V10.5,

HOSTS -------------------------------------------------------------------------
kf5hostname02
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
V,DB2GPRF,DB2SYSTEM,kf5hostname02,/usr/IBM/dwe/db2/V10.5.0.8..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.5.0.8..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.5.0.10..0,
V,DB2GPRF,DB2SYSTEM,kf5hostname02,/usr/IBM/dwe/db2/V11.1.0.1..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V11.1.0.1..0,

HOSTS -------------------------------------------------------------------------
kf5hostname04
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
V,DB2GPRF,DB2SYSTEM,kf5hostname04,/usr/IBM/dwe/db2/V10.5.0.10..0,
V,DB2GPRF,DB2SYSTEM,kf5hostname04,/usr/IBM/dwe/db2/V11.1.0.1..0,
V,DB2GPRF,DB2SYSTEM,kf5hostname04,/usr/IBM/dwe/db2/V11.1.1.3.b.0,

HOSTS -------------------------------------------------------------------------
kf5hostname07
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
V,DB2GPRF,DB2SYSTEM,kf5hostname07,/usr/IBM/dwe/db2/V11.1.0.1..0,

HOSTS -------------------------------------------------------------------------
kf5hostname06
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.5.0.8..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.5.0.10..0,
V,DB2GPRF,DB2SYSTEM,kf5hostname06,/usr/IBM/dwe/db2/V11.1.0.1..0,
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V11.1.0.1..0,

HOSTS -------------------------------------------------------------------------
kf5hostname05
-------------------------------------------------------------------------------
 ** /usr/IBM/dwe/db2/V11.1.1.3.b.0 **
V,DB2GPRF,DB2INSTDEF,bcuaix,/usr/IBM/dwe/db2/V10.5.0.5..1,
V,DB2GPRF,DB2SYSTEM,kf5hostname05,/usr/IBM/dwe/db2/V11.1.0.1..0,
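Because "DB2SYSTEM" is unique on every host, "dshbak -c" cannot group the hosts into shared stanzas in the output above. The following is a sketch of a variation, not part of the standard command set, that filters the "DB2SYSTEM" records out so that hosts with otherwise identical V records are grouped together.

dsh -n ${ALL} '/usr/local/bin/db2ls -c | grep -v "#PATH" | cut -d: -f 1 | tail -1 | while read x;do echo " ** ${x} **";${x}/bin/db2greg -dump | grep "^V" | grep -v "DB2SYSTEM";done' | dshbak -c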
Servers
Server Status As Viewed From the HMC.
Run as hscroot on hmc1 or hmc2.

lssyscfg -r sys -F name type_model serial_num state | sort | while read name tm serial state;do printf "${name},${tm},${serial},${state},";lsrefcode -r sys -m ${name} -F refcode_num,refcode,fru_call_out_loc_codes;done

$ lssyscfg -r sys -F name type_model serial_num state | sort | while read name tm serial state;do printf "${name},${tm},${serial},${state},";lsrefcode -r sys -m ${name} -F refcode_num,refcode,fru_call_out_loc_codes;done

DATA1-8231E2C-SN100ABCR,8231-E2C,100ABCR,"Power Off",0,11002630,U78AB.001.WZSGXCJ-P1-C12
DATA2-8231E2C-SN100AB9R,8231-E2C,100AB9R,Operating,0,,null
DATA3-8231E2C-SN100ABAR,8231-E2C,100ABAR,"Power Off",0,11002630,U78AB.001.WZSGXCE-P1-C12
DATA4-8231E2C-SN1009DCR,8231-E2C,1009DCR,Operating,0,,null
DATA5-8231E2C-SN1009DBR,8231-E2C,1009DBR,Operating,0,,null
DATA6-8231E2C-SN10501DR,8231-E2C,10501DR,Operating,0,,null
DATA_STDBY1-8231E2C-SN100ABBR,8231-E2C,100ABBR,Operating,0,,null
DATA_STDBY2-8231E2C-SN1009DDR,8231-E2C,1009DDR,Operating,0,,null
FDN_ACTIVE-8205E6C-SN100AB8R,8205-E6C,100AB8R,"Power Off",0,11002630,U78AA.001.WZSGXFP-P1-C12
FDN_STANDBY-8205E6C-SN100993R,8205-E6C,100993R,Operating,0,,null

Adapters
Fibre Channel ("fcs") Modified Settings.

dsh -n ${ALL} 'lscfg -l fcs* | while read if rest;do lsattr -EOl $if -a lg_term_dma,max_xfer_size,num_cmd_elems | grep -v "^#";done  | sort | uniq -c' | dshbak -c
V1.1 Example:
===========

$ dsh -n ${ALL} 'lscfg -l fcs* | while read if rest;do lsattr -EOl $if -a lg_term_dma,max_xfer_size,num_cmd_elems | grep -v "^#";done  | sort | uniq -c' | dshbak -c
HOSTS -------------------------------------------------------------------------
hostname01, hostname03 
------------------------------------------------------------------------------- 
   8 0x1000000:0x200000:2048 

HOSTS ------------------------------------------------------------------------- 
hostname02, hostname04, hostname05, hostname06, hostname07 
------------------------------------------------------------------------------- 
  16 0x1000000:0x200000:2048 

Notes:
The top stanza shows the management "LPARs". Each "LPAR" has 2 "4 Port HBA Adapters".
The second stanza shows the admin and data "LPARs". Each "LPAR" has 4 "4 Port HBA Adapters".
In PDOA, only the following attributes are modified for the fibre channel adapters (a sketch after these notes shows how to list adapters that deviate from the expected values).
lg_term_dma,max_xfer_size,num_cmd_elems
See https://www.ibm.com/support/knowledgecenter/en/SSH2TE_1.1.0/com.ibm.7700.r2.common.doc/doc/r00000185.html for V1.1 settings.
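If a host's counts differ from its baseline, the following sketch lists the individual adapters that do not match. The expected value 0x1000000:0x200000:2048 is taken from the V1.1 example above and is an assumption for illustration; use the baseline values for your own environment. Hosts with no mismatched adapters return no output.

dsh -n ${ALL} 'lscfg -l fcs* | while read if rest;do v=$(lsattr -EOl $if -a lg_term_dma,max_xfer_size,num_cmd_elems | grep -v "^#");if [ ! "$v" == "0x1000000:0x200000:2048" ];then echo "$if $v";fi;done' 2>&1 | dshbak -c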
Interfaces
Fibre Channel Interface ("fscsi") Settings

dsh -n ${ALL} 'lscfg -l fscsi* | while read if rest;do lsattr -EOl $if -a dyntrk,fc_err_recov| grep -v "^#";done  | sort | uniq -c' | dshbak -c 
V1.1 Example:

$ dsh -n ${ALL} 'lscfg -l fscsi* | while read if rest;do lsattr -EOl $if -a dyntrk,fc_err_recov| grep -v "^#";done  | sort | uniq -c' | dshbak -c

HOSTS -------------------------------------------------------------------------
hostname01, hostname03
-------------------------------------------------------------------------------
   8 yes:fast_fail

HOSTS -------------------------------------------------------------------------
hostname02, hostname04, hostname05, hostname06, hostname07
-------------------------------------------------------------------------------
  16 yes:fast_fail
NOTES: PDOA does not modify any fscsi attributes in V1.1.
AIX Hard Disk ("hdisk") Settings
# V7K Disks
dsh -n ${ALL} 'lscfg -l hdisk* | grep MPIO | grep 2076 | while read d rest;do lsattr -EOl ${d} -a algorithm,max_coalesce,max_transfer,queue_depth | grep -v "^#";done | sort | uniq -c' | dshbak -c 
# Flash Disks
dsh -n ${ALL} 'lscfg -l hdisk* | grep MPIO | grep -i FLASH | while read d rest;do lsattr -EOl ${d} -a algorithm,max_coalesce,max_transfer,queue_depth | grep -v "^#";done | sort | uniq -c' | dshbak -c 
V1.1: (See https://www.ibm.com/support/knowledgecenter/en/SSH2TE_1.1.0/com.ibm.7700.r2.common.doc/doc/r00000185.html for V1.1 settings)
The following shows the V7000 settings on a V1.1 environment.

$ dsh -n ${ALL} 'lscfg -l hdisk* | grep MPIO | grep 2076 | while read d rest;do lsattr -EOl ${d} -a algorithm,max_coalesce,max_transfer,queue_depth | grep -v "^#";done | sort | uniq -c' | dshbak -c 
HOSTS ------------------------------------------------------------------------- 
hostname01 
------------------------------------------------------------------------------- 
   4 round_robin:0x40000:0x100000:256 

HOSTS ------------------------------------------------------------------------- 
hostname03 
------------------------------------------------------------------------------- 
   2 round_robin:0x40000:0x100000:256 

HOSTS ------------------------------------------------------------------------- 
hostname02, hostname04 
------------------------------------------------------------------------------- 
   7 round_robin:0x40000:0x100000:256 

HOSTS ------------------------------------------------------------------------- 
hostname05, hostname06, hostname07 
------------------------------------------------------------------------------- 
  20 round_robin:0x40000:0x100000:256 
The following shows the Flash900 settings on a V1.1 environment.

$ dsh -n ${ALL} 'lscfg -l hdisk* | grep MPIO | grep -i FLASH | while read d rest;do lsattr -EOl ${d} -a algorithm,max_coalesce,max_transfer,queue_depth | grep -v "^#";done | sort | uniq -c' | dshbak -c 

HOSTS ------------------------------------------------------------------------- 
hostname01, hostname03 
------------------------------------------------------------------------------- 
   3 shortest_queue:0x40000:0x100000:256 

HOSTS ------------------------------------------------------------------------- 
hostname02, hostname04 
------------------------------------------------------------------------------- 
  14 shortest_queue:0x40000:0x100000:256 

HOSTS ------------------------------------------------------------------------- 
hostname05, hostname06, hostname07 
------------------------------------------------------------------------------- 
  40 shortest_queue:0x40000:0x100000:256 
NOTES:
The hosts indicated by hostname01 and hostname03 are management LPARs.
The hosts indicated by hostname02 and hostname04 are admin LPARs.
The hosts indicated by hostname05, hostname06, and hostname07 are data LPARs.
The data LPAR counts vary depending on the number of hosts in the same rack. Each data rack has one standby host and from one to four data hosts.
Appliance
File Consistency: How to compare a file on the management host with the same file on the rest of the hosts in the environment.
This is a quick and useful way to compare a file on the host running dsh with the same file on a set of hosts or on all hosts in the environment.
Variations:
1. The host IP address, 172.23.1.1 in the example, is the source to compare against. 172.23.1.1 is the default IP address for the management host.
2. The "-n ${ALL}" argument includes all AIX hosts in the environment.
3. The "f=/etc/ssh/sshd_config" assignment is the full path of the file to be used in the comparison (a usage example with a different file follows the output below).
If there is no difference between the source host and the target host, only the cksum will be displayed in the stanza for that host.
If there is a difference, the "<" lines show the target host's content and the ">" lines show the source host's content.

dsh -n ${ALL} 'f=/etc/ssh/sshd_config;cksum $f;ssh -n 172.23.1.1 "cat $f" | diff $f -' 2>&1 | dshbak -c 

$ dsh -n ${ALL} 'f=/etc/ssh/sshd_config;cksum $f;ssh -n 172.23.1.1 "cat $f" | diff $f -' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
flashdancehostname01
-------------------------------------------------------------------------------
279326565 3348 /etc/ssh/sshd_config

HOSTS -------------------------------------------------------------------------
flashdancehostname03
-------------------------------------------------------------------------------
618889683 3182 /etc/ssh/sshd_config
40d39
< #PermitRootLogin without-password
118a118,127
>
> X11Forwarding yes
> X11DisplayOffset 10
> X11UseLocalhost yes
>
> XauthLocation /usr/bin/X11/xauth
>
> Match host "172.23.1.161,172.23.1.162,172.23.1.163,172.23.1.164"
>         PubkeyAcceptedKeyTypes +ssh-dss
>

HOSTS -------------------------------------------------------------------------
flashdancehostname02, flashdancehostname04, flashdancehostname05, flashdancehostname06, flashdancehostname07
-------------------------------------------------------------------------------
1770190567 3148 /etc/ssh/sshd_config
117a118,127
>
> X11Forwarding yes
> X11DisplayOffset 10
> X11UseLocalhost yes
>
> XauthLocation /usr/bin/X11/xauth
>
> Match host "172.23.1.161,172.23.1.162,172.23.1.163,172.23.1.164"
>         PubkeyAcceptedKeyTypes +ssh-dss
>
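As a usage example of variation 3, the same pattern can be pointed at any file. The following sketch assumes /etc/hosts is the file of interest; only the "f=" assignment changes.

dsh -n ${ALL} 'f=/etc/hosts;cksum $f;ssh -n 172.23.1.1 "cat $f" | diff $f -' 2>&1 | dshbak -c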
Storage
FC Path Selection
Requires FP6_FP2 or higher.
Flash:

dsh -n ${ALL} -t 30 'lsdev | grep Flash | while read d rest;do lsmpio -l ${d} | grep -v "^name" | while read n pid stat pstat par con rest;do echo "$pstat";done | sort | uniq -c;done | sort | uniq -c' 2>&1 | dshbak -c
V7000:

dsh -n ${ALL} -t 30 'lsdev | grep 2076| while read d rest;do lsmpio -l ${d} | grep -v "^name" | while read n pid stat pstat par con rest;do echo "$pstat";done | sort | uniq -c;done | sort | uniq -c' 2>&1 | dshbak -c

$ dsh -n ${ALL} -t 30 'lsdev | grep Flash | while read d rest;do lsmpio -l ${d} | grep -v "^name" | while read n pid stat pstat par con rest;do echo "$pstat";done | sort | uniq -c;done | sort | uniq -c' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
flashdancehostname01
-------------------------------------------------------------------------------
   3    1
   3    2 Deg,Fai
   3    6 Sel

HOSTS -------------------------------------------------------------------------
flashdancehostname03
-------------------------------------------------------------------------------
   3    1
   3    2 Clo
   3    6 Sel

HOSTS -------------------------------------------------------------------------
flashdancehostname02, flashdancehostname04
-------------------------------------------------------------------------------
  14    1
  14    4 Deg,Fai
  14   12 Sel

HOSTS -------------------------------------------------------------------------
flashdancehostname05, flashdancehostname06, flashdancehostname07
-------------------------------------------------------------------------------
  40    1
  40   30 Sel

$ dsh -n ${ALL} -t 30 'lsdev | grep 2076 | while read d rest;do lsmpio -l ${d} | grep -v "^name" | while read n pid stat pstat par con rest;do echo "$pstat";done | sort | uniq -c;done | sort | uniq -c' 2>&1 | dshbak -c
HOSTS -------------------------------------------------------------------------
flashdancehostname01
-------------------------------------------------------------------------------
   4    1
   4    4 Non
   4    4 Sel,Opt

HOSTS -------------------------------------------------------------------------
flashdancehostname03
-------------------------------------------------------------------------------
   2    1
   2    4 Non
   2    4 Sel,Opt

HOSTS -------------------------------------------------------------------------
flashdancehostname02, flashdancehostname04
-------------------------------------------------------------------------------
   7    1
   7    8 Non
   7    8 Sel,Opt

HOSTS -------------------------------------------------------------------------
flashdancehostname05, flashdancehostname06, flashdancehostname07
-------------------------------------------------------------------------------
  20    1
  20   10 Non
  20   10 Sel,Opt
Appliance Map "CEC" to "LPAR" to Host using the "pflayer" tools.
The following command is run as the root user on the management host. It will query the platform layer "ODM" database.
"PF ID": Platform Layer identifier for "server_os" resource types.
"LPAR": LPAR name that can be used when running LPAR commands on the HMC.
PROFILE: This lists the name of the profile used to start the LPAR.
HOSTNAME: This is the internal hostname for the LPAR.
IP: This is the internal IP address assigned to en11 for the host/LPAR.
MT: This is the model type for the Server or CEC that owns the LPAR.
SN: This is the serial number for the Server or CEC that owns the LPAR.

$ printf "%-10s%-15s%-15s%-50s%-20s%-10s%-10s\n" "PF ID" "LPAR" "PROFILE" "HOSTNAME" "IP" "MT" "SN";printf "%-10s%-15s%-15s%-50s%-20s%-10s%-10s\n" $(appl_ls_hw -r server_os -A Logical_name,LPAR_NAME,LPAR_PROFILE_NAME,F_Hostname,F_IP_address,Machine_type,Serial_number | sed 's|[",]| |g') | sort -k 4b

$ printf "%-10s%-15s%-15s%-50s%-20s%-10s%-10s\n" "PF ID" "LPAR" "PROFILE" "HOSTNAME" "IP" "MT" "SN";printf "%-10s%-15s%-15s%-50s%-20s%-10s%-10s\n" $(appl_ls_hw -r server_os -A Logical_name,LPAR_NAME,LPAR_PROFILE_NAME,F_Hostname,F_IP_address,Machine_type,Serial_number | sed 's|[",]| |g') | sort -k 4b

PF ID     LPAR           PROFILE        HOSTNAME                                          IP                  MT        SN
server6   sysNode        mgmt_node     flashdancehostname01                              172.23.1.1          8286      #######
server3   adminnode_2    adm_node      flashdancehostname02                              172.23.1.2          8286      #######
server1   stdbynode_3    mgmt_node     flashdancehostname03                              172.23.1.3          8286      #######
server4   stdbynode_4    adm_node      flashdancehostname04                              172.23.1.4          8286      #######
server2   stdbynode_5    stdby_node    flashdancehostname05                              172.23.1.5          8284      #######
server0   datanode_6     data_node     flashdancehostname06                              172.23.1.6          8284      #######
server5   datanode_7     data_node     flashdancehostname07                              172.23.1.7          8284      #######
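To look up a single LPAR, the same query can be filtered with grep. The following sketch assumes "datanode_6" (from the example output above) is the LPAR of interest; the attribute list is a subset of the one used above.

appl_ls_hw -r server_os -A Logical_name,LPAR_NAME,F_Hostname,Machine_type,Serial_number | grep datanode_6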
Adapters
Network Adapter Etherchannel Health
The following command runs "entstat -d ent11" on all hosts in the environment and then reports on the synchronization status. Management nodes will show 4 IN_SYNC lines if healthy (2 Actors + 2 Partners). Core nodes will show 4 segments and 8 IN_SYNC lines if healthy (4 Actors + 4 Partners). This will not tell you which of the child adapters of the etherchannel are impacted, and the order of the children in the etherchannel changes over time for a variety of reasons.
This gives a quick, easily readable set of output. A sketch that condenses each host's stanza to per-state counts follows the example output below.

$ dsh -n ${ALL} "entstat -d ent11 | egrep -p 'Actor State|Partner State' | egrep 'Synchronization:'" | dshbak -c
HOSTS -------------------------------------------------------------------------
host01, host03
-------------------------------------------------------------------------------
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC

HOSTS -------------------------------------------------------------------------
host02, host04, host05, host06
-------------------------------------------------------------------------------
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
                Synchronization: IN_SYNC
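For an even more compact view, the following sketch reduces each host's stanza to a count per synchronization state. As described above, a healthy management node shows a count of 4 for IN_SYNC and a healthy core node shows a count of 8; any other state appearing in the counts indicates a problem.

dsh -n ${ALL} "entstat -d ent11 | egrep -p 'Actor State|Partner State' | egrep 'Synchronization:' | sort | uniq -c" | dshbak -c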
Adapters
Network Adapter Etherchannel Expanded Information
This command uses the output of "entstat -d ent11". It pulls the adapter names, the synchronization status, and the partner port number on the switch in hexadecimal format, and then converts that port number to decimal.
While this cannot programmatically determine which "10Gb" switch a port is attached to, it can narrow down the list of ports involved when troubleshooting a network connection.
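The hexadecimal-to-decimal step in the command is a plain "bc" conversion; bc expects the hexadecimal digits in uppercase, which matches the entstat output. For example, partner port 0x002F converts to decimal 47:

echo "ibase=16; 002F" | bc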

$ dsh -n ${ALL} 'entstat -d ent11 | egrep "ETHERNET STATISTICS|Partner Port:|Synchronization" | while read x;do h=$(echo ${x} | grep "Partner Port"|sed "s|.*0x||");echo $x;if [ ! "$h" == "" ];then echo "ibase=16; $h" | bc;fi;done' | dshbak -c

HOSTS -------------------------------------------------------------------------
host01
-------------------------------------------------------------------------------
ETHERNET STATISTICS (ent11) :
ETHERNET STATISTICS (ent4) :
Synchronization: IN_SYNC
Partner Port: 0x002F
47
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent0) :
Synchronization: IN_SYNC
Partner Port: 0x002F
47
Synchronization: IN_SYNC

HOSTS -------------------------------------------------------------------------
host03
-------------------------------------------------------------------------------
ETHERNET STATISTICS (ent11) :
ETHERNET STATISTICS (ent4) :
Synchronization: IN_SYNC
Partner Port: 0x0030
48
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent0) :
Synchronization: IN_SYNC
Partner Port: 0x0030
48
Synchronization: IN_SYNC

HOSTS -------------------------------------------------------------------------
host04
-------------------------------------------------------------------------------
ETHERNET STATISTICS (ent11) :
ETHERNET STATISTICS (ent4) :
Synchronization: IN_SYNC
Partner Port: 0x0033
51
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent5) :
Synchronization: IN_SYNC
Partner Port: 0x0033
51
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent0) :
Synchronization: IN_SYNC
Partner Port: 0x0034
52
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent1) :
Synchronization: IN_SYNC
Partner Port: 0x0034
52
Synchronization: IN_SYNC

HOSTS -------------------------------------------------------------------------
host06
-------------------------------------------------------------------------------
ETHERNET STATISTICS (ent11) :
ETHERNET STATISTICS (ent4) :
Synchronization: IN_SYNC
Partner Port: 0x0013
19
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent0) :
Synchronization: IN_SYNC
Partner Port: 0x0014
20
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent5) :
Synchronization: IN_SYNC
Partner Port: 0x0013
19
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent1) :
Synchronization: IN_SYNC
Partner Port: 0x0014
20
Synchronization: IN_SYNC

HOSTS -------------------------------------------------------------------------
host05
-------------------------------------------------------------------------------
ETHERNET STATISTICS (ent11) :
ETHERNET STATISTICS (ent4) :
Synchronization: IN_SYNC
Partner Port: 0x0011
17
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent0) :
Synchronization: IN_SYNC
Partner Port: 0x0012
18
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent5) :
Synchronization: IN_SYNC
Partner Port: 0x0011
17
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent1) :
Synchronization: IN_SYNC
Partner Port: 0x0012
18
Synchronization: IN_SYNC

HOSTS -------------------------------------------------------------------------
host02
-------------------------------------------------------------------------------
ETHERNET STATISTICS (ent11) :
ETHERNET STATISTICS (ent4) :
Synchronization: IN_SYNC
Partner Port: 0x0031
49
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent0) :
Synchronization: IN_SYNC
Partner Port: 0x0032
50
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent5) :
Synchronization: IN_SYNC
Partner Port: 0x0031
49
Synchronization: IN_SYNC
ETHERNET STATISTICS (ent1) :
Synchronization: IN_SYNC
Partner Port: 0x0032
50
Synchronization: IN_SYNC
Appliance
Using the platform layer to display the appliance components. This does not list the racks, PDUs, or KVM switch.
Run as root on the management host.
1. Servers:
appl_ls_hw -r server_fsp -A Logical_name,FW_level,Machine_type,Model,Serial_number
2. HMCs:
appl_ls_hw -r hmc -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
3. Storage Enclosures:
appl_ls_hw -r storage -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
4. SAN Switches:
appl_ls_hw -r san -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
5. Network Switches
appl_ls_hw -r net -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
6. Network Adapters
appl_ls_hw -r net_adapter -A Parent,Logical_name,FW_level,Model,OS_Device_names,HW_location
            
7. Fibre Channel Adapters

appl_ls_hw -r fc_adapter -A Parent,Logical_name,FW_level,Model,OS_Device_names,HW_location
1. Servers:

$ appl_ls_hw -r server_fsp -A Logical_name,FW_level,Machine_type,Model,Serial_number
"server_fsp0","SV860_226","8286","42A","<SN>"
"server_fsp1","SV860_226","8286","42A","<SN>"
"server_fsp2","SV860_226","8284","22A","<SN>"
"server_fsp3","SV860_226","8284","22A","<SN>"
"server_fsp4","SV860_226","8284","22A","<SN>"
2. HMCs:

$ appl_ls_hw -r hmc -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
"hmc0","7042","172.23.1.245","9.1.942","CR9","<SN>"
"hmc1","7042","172.23.1.246","9.1.942","CR9","<SN>"
3. Storage Enclosures:

$ appl_ls_hw -r storage -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
"storage0","2076","172.23.1.204","8.2.1.11","524","<SN>"
"storage1","9840","172.23.1.205","1.5.2.7","AE2","<SN>"
"storage2","2076","172.23.1.206","8.2.1.11","524","<SN>"
"storage3","9840","172.23.1.207","1.5.2.7","AE2","<SN>"
"storage4","2076","172.23.1.208","8.2.1.11","524","<SN>"
"storage5","9840","172.23.1.209","1.5.2.7","AE2","<SN>"
4. SAN Switches

$ appl_ls_hw -r san -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
"san0","","172.23.1.161","v7.4.2e","40-1000569-13","<SN>"
"san1","","172.23.1.162","v7.4.2e","40-1000569-13","<SN>"
"san2","","172.23.1.163","v7.4.2e","40-1000569-13","<SN>"
"san3","","172.23.1.164","v7.4.2e","40-1000569-13","<SN>"
5. Network Switches

$ appl_ls_hw -r net -A Logical_name,Machine_type,M_IP_address,FW_level,Model,Serial_number
"net0","","172.23.1.254","7.11.19.0","G8052","<SN>"
"net1","","172.23.1.253","7.11.19.0","G8052","<SN>"
"net2","","172.23.1.252","7.11.19.0","G8264","<SN>"
"net3","","172.23.1.251","7.11.19.0","G8264","<SN>"
6. Network Adapters

$ appl_ls_hw -r net_adapter -A Parent,Logical_name,FW_level,Model,OS_Device_names,HW_location
"server0","net_adapter0","30100310","e4148a1614109304","ent0,ent1,ent2,ent3","U78CB.001.WZS0F0Y-P1-C6"
"server0","net_adapter1","30100310","e4148a1614109304","ent4,ent5,ent6,ent7","U78CB.001.WZS0F0Y-P1-C5"
"server1","net_adapter2","30100310","e4148a1614109304","ent0,ent1,ent2,ent3","U78C9.001.WZS0J9W-P1-C7"
"server1","net_adapter3","30100310","e4148a1614109304","ent4,ent5,ent6,ent7","U78C9.001.WZS0J9W-P1-C6"
"server2","net_adapter4","30100310","e4148a1614109304","ent4,ent5,ent6,ent7","U78CB.001.WZS0C4J-P1-C5"
"server2","net_adapter5","30100310","e4148a1614109304","ent0,ent1,ent2,ent3","U78CB.001.WZS0C4J-P1-C6"
"server3","net_adapter6","30100310","e4148a1614109304","ent0,ent1,ent2,ent3","U78C9.001.WZS0FYR-P1-C5"
"server3","net_adapter7","30100310","e4148a1614109304","ent4,ent5,ent6,ent7","U78C9.001.WZS0FYR-P1-C3"
"server4","net_adapter8","30100310","e4148a1614109304","ent4,ent5,ent6,ent7","U78C9.001.WZS0J9W-P1-C3"
"server4","net_adapter9","30100310","e4148a1614109304","ent0,ent1,ent2,ent3","U78C9.001.WZS0J9W-P1-C5"
"server5","net_adapter10","30100310","e4148a1614109304","ent0,ent1,ent2,ent3","U78C9.001.WZS0FYR-P1-C7"
"server5","net_adapter11","30100310","e4148a1614109304","ent4,ent5,ent6,ent7","U78C9.001.WZS0FYR-P1-C6"
 
7. Fibre Channel Adapters

$ appl_ls_hw -r fc_adapter -A Parent,Logical_name,FW_level,Model,OS_Device_names,HW_location
"server0","fc_adapter0","0325080271","7710322514101e04","fcs8,fcs9,fcs10,fcs11","U78CB.001.WZS0F0Y-P1-C2"
"server0","fc_adapter1","0325080271","7710322514101e04","fcs4,fcs5,fcs6,fcs7","U78CB.001.WZS0F0Y-P1-C9"
"server0","fc_adapter2","0325080271","7710322514101e04","fcs12,fcs13,fcs14,fcs15","U78CB.001.WZS0F0Y-P1-C3"
"server0","fc_adapter3","0325080271","7710322514101e04","fcs0,fcs1,fcs2,fcs3","U78CB.001.WZS0F0Y-P1-C7"
"server1","fc_adapter4","210313","df1000f114100104","fcs0,fcs1,fcs2,fcs3","U78C9.001.WZS0J9W-P1-C11"
"server1","fc_adapter5","210313","df1000f114100104","fcs4,fcs5,fcs6,fcs7","U78C9.001.WZS0J9W-P1-C12"
"server2","fc_adapter6","0325080271","7710322514101e04","fcs8,fcs9,fcs10,fcs11","U78CB.001.WZS0C4J-P1-C2"
"server2","fc_adapter7","0325080271","7710322514101e04","fcs4,fcs5,fcs6,fcs7","U78CB.001.WZS0C4J-P1-C9"
"server2","fc_adapter8","0325080271","7710322514101e04","fcs12,fcs13,fcs14,fcs15","U78CB.001.WZS0C4J-P1-C3"
"server2","fc_adapter9","0325080271","7710322514101e04","fcs0,fcs1,fcs2,fcs3","U78CB.001.WZS0C4J-P1-C7"
"server3","fc_adapter10","210313","df1000f114100104","fcs12,fcs13,fcs14,fcs15","U78C9.001.WZS0FYR-P1-C2"
"server3","fc_adapter11","210313","df1000f114100104","fcs4,fcs5,fcs6,fcs7","U78C9.001.WZS0FYR-P1-C9"
"server3","fc_adapter12","210313","df1000f114100104","fcs8,fcs9,fcs10,fcs11","U78C9.001.WZS0FYR-P1-C4"
"server3","fc_adapter13","210313","df1000f114100104","fcs0,fcs1,fcs2,fcs3","U78C9.001.WZS0FYR-P1-C8"
"server4","fc_adapter14","210313","df1000f114100104","fcs4,fcs5,fcs6,fcs7","U78C9.001.WZS0J9W-P1-C9"
"server4","fc_adapter15","210313","df1000f114100104","fcs0,fcs1,fcs2,fcs3","U78C9.001.WZS0J9W-P1-C8"
"server4","fc_adapter16","210313","df1000f114100104","fcs12,fcs13,fcs14,fcs15","U78C9.001.WZS0J9W-P1-C2"
"server4","fc_adapter17","210313","df1000f114100104","fcs8,fcs9,fcs10,fcs11","U78C9.001.WZS0J9W-P1-C4"
"server5","fc_adapter18","210313","df1000f114100104","fcs4,fcs5,fcs6,fcs7","U78C9.001.WZS0FYR-P1-C12"
"server5","fc_adapter19","210313","df1000f114100104","fcs0,fcs1,fcs2,fcs3","U78C9.001.WZS0FYR-P1-C11"

Document Location

Worldwide

[{"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"SSH2TE","label":"PureData System for Operational Analytics A1801"},"Component":"","Platform":[{"code":"PF002","label":"AIX"}],"Version":"V1.0;V1.1","Edition":"","Line of Business":{"code":"LOB76","label":"Data Platform"}}]


Document Information

Modified date:
03 August 2022

UID

ibm10880017