How To Determine The VIO Disk Mapped To A VSCSI Client Disk

Question & Answer


Question

If you have a vscsi disk on a VIO client, this method shows how to determine which adapter and disk on the VIO server are mapped to the client disk. This can help when the client vscsi disk has issues and you need to find out whether the corresponding disk attached to the VIO server is encountering a problem.

Answer

Get at least a snap -gc from the client, plus a snap from each VIO server.  Instructions are also provided for running the same checks live on the client and the VIO servers.
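
For reference, the collection commands look like this (the VIO server snap is run from the padmin restricted shell):

On the AIX client, as root:
# snap -gc

On each VIO server, as padmin:
$ snap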


1. Find the disk we are interested in

$ cd /tmp/ibmsupt/general/
$ more lsdev.disk

hdisk1  Available   U9117.MMD.067A967-V20-C21-T1-L8100000000000000   Virtual SCSI Disk Drive


So we already know it will be LUN 0x81 on some virtual adapter on the VIO server, because of the L8100000000000000 at the end of the physical location code in the lsdev output.


To get this same output on a live system, run:
$ lsdev -c disk -F "name status physloc description"
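
If there are several vscsi disks, a small ksh loop can print just the name and LUN of each one. This is a minimal sketch: the -s vscsi flag restricts lsdev to virtual SCSI disks, and the ${loc##*-L} expansion strips everything up to the final -L in the location code. For the example disk above it would print:

$ lsdev -Cc disk -s vscsi -F "name physloc" | while read name loc
> do
>   echo "$name is LUN 0x${loc##*-L}"
> done
hdisk1 is LUN 0x8100000000000000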



2. From the output in the last step, the adapter location will be U9117.MMD.067A967-V20-C21-T1, i.e. the disk's physical location code minus the trailing -L LUN portion.

In a snap, look in the file general/lsdev.adapter to find the adapter:
vscsi1 Available       U9117.MMD.067A967-V20-C21-T1 Virtual SCSI Client Adapter

So adapter vscsi1 is mapped from a VIO server.


On a live system you can get the same output by running lsdev with these options:
$ lsdev -c adapter -F "name status physloc description"
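
There is also a shortcut on a live system: lsdev can print a device's parent directly, which identifies the client adapter without matching location codes by hand. For the example disk:

$ lsdev -l hdisk1 -F parent
vscsi1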



3. Now go to the virtual client information directory in the snap:
$ cd ../client_collect/
Find the state file for vscsi1:

$ more vscsi1.state

           START              END <name>
0000000000001000 00000000058E0000 start+000FD8
F00000002FF47600 F00000002FFE1000 __ublock+000000
000000002FF22FF4 000000002FF22FF8 environ+000000
000000002FF22FF8 000000002FF22FFC errno+000000
F1000F0A00000000 F1000F0A10000000 pvproc+000000
F1000F0A10000000 F1000F0A18000000 pvthread+000000
read vscsi_scsi_ptrs OK, ptr = 0xF1000000C014FDF0
(0)> cvai vscsi1; cvdi vscsi1; cvcrq vscsi1
unit_id: 0x30000015    
partition_num: 0x14     partition_name: lvm17
capability_level: 0x0   location_code:
priv_cap: 0x1  host_capability: 0x0 
host_name: vhost16 host_location:
heart_beat_enabled: 0x0 sample_time: 0x0        ping_response_time: 0x3C
rw_timeout: 0x0
host part_number: 0x2   os_type: 0x3   
host part_name: lvmvio2



If you are doing this on a live system, you can use kdb as root, replacing "vscsi1" with the appropriate adapter name:

# echo "cvai vscsi1" | kdb
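
To survey every client adapter at once on a live system, a small loop (a sketch, run as root; the grep just picks out the fields used below) can feed each vscsi adapter name to kdb:

# for a in $(lsdev -Cc adapter | awk '/Virtual SCSI Client Adapter/ {print $1}')
> do
>   echo "=== $a ==="
>   echo "cvai $a" | kdb | grep -E "partition_num|host_name|host part"
> done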


Scroll down to the (0)> prompt in the snap; everything from that point is kdb output.
There is a lot of information here, but we only need a few things.

First, we see the partition number and name:
partition_num: 0x14     partition_name: lvm17

Notice the ID is in hex, so the partition number is really 20 in decimal.
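
printf can do the conversion, since it accepts a C-style hex constant for %d:

$ printf "%d\n" 0x14
20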


In the snap, you can read the lparstat output:
$ cd ../general/
$ more lparstat.out

On a live system use

$ lparstat -i | head
Node Name                                  : lvm17
Partition Name                             : lvm17
Partition Number                           : 20


We also know the name of the virtual adapter on the VIO server, and which VIO server it is:

host_name: vhost16
host part_name: lvmvio2

Despite its name, "host_name" is actually the virtual adapter name on the VIO server; "host part_name" is the name of the VIO server partition itself.

So we know it's on LPAR lvmvio2 and virtual adapter vhost16.  

4. If we have a snap from that server, or can log in and check the server, we can find the disk we are looking for.

If looking at a snap, first check that it is from the correct LPAR:
$ cd general/
$ grep hostname general.snap
hostname      lvmvio2                       Host Name                          

Go to the VIO server snap collection directory:
$ cd ../svCollect/

There will be a map file for each vhost on the VIO server.
$ more vhost16.map


If on the live VIO server, use:

$ hostname
lvmvio2


$ lsmap -vadapter vhost16
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------

vhost16         U9117.MMD.067A967-V2-C36                     0x00000014

VTD                   lvm17_rtvg1
Status                Available
LUN                   0x8100000000000000
Backing device        lvm17_rtvg
Physloc
Mirrored              N/A

VTD                   vtopt6
Status                Available
LUN                   0x8200000000000000
Backing device        /var/vio/VMLibrary/aix7200-00-00_DVD_1_of_2.iso
Physloc
Mirrored              N/A
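
If you do not know in advance which vhost adapter carries the client's LUN, lsmap -all lists every mapping on the VIO server, and you can search it for the LUN ID. A sketch using AIX grep's -p (paragraph) flag, which prints the whole stanza containing the match:

$ lsmap -all | grep -p 0x8100000000000000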


So, matching the LUN ID (0x8100000000000000, the L8100000000000000 we saw on the client), the client partition ID (0x14, i.e. 20), and the vhost adapter name, we see that the virtual target device is called lvm17_rtvg1 and the backing device is lvm17_rtvg.  If the backing device is an hdisk, we can stop here.

In this example the backing device is not an hdisk name, so it must be a logical volume.


5. If the backing device is a logical volume, we can find the disk(s) that the LV resides on.

In the snap, look in the lvm directory:
$ cd ../lvm/
$ more clients_rootvg.snap


On the live system:
$ lslv lvm17_rtvg
LOGICAL VOLUME:     lvm17_rtvg             VOLUME GROUP:   clients_rootvg
LV IDENTIFIER:      00c7a96700004c0000000150d37247fd.17 PERMISSION:     read/write

etc.

Then, to see which disk(s) this logical volume resides on, check the filesystem snap:
$ cd ../filesys/
$ more filesys.snap

....
.....    lslv -l lvm17_rtvg
.....

lvm17_rtvg:N/A
PV                COPIES        IN BAND       DISTRIBUTION
hdisk1            020:000:000   0%            020:000:000:000:000


On the live system we can use lslv:

$ lslv -pv lvm17_rtvg
lvm17_rtvg:N/A
PV                COPIES        IN BAND       DISTRIBUTION
hdisk1            020:000:000   0%            020:000:000:000:000


So in this example the backing device on the VIO server is a logical volume, and it resides on hdisk1.

$ lsdev -dev hdisk1
name             status      description
hdisk1           Available   SAS Disk Drive
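
Since the usual reason for tracing the mapping is a problem on the client disk, the VIO server error log is a good next stop. errlog is the padmin counterpart of errpt; a quick filter on the backing disk name:

$ errlog | grep hdisk1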


If the backing device were a disk mapped directly to the client, we would not need to run the lslv commands.
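
For example, if lsmap had shown a backing device of hdisk5 (a hypothetical name), you would skip the lslv steps and check that disk directly on the VIO server:

$ lsdev -dev hdisk5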

[{"Product":{"code":"SWG10","label":"AIX"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"Attached devices","Platform":[{"code":"PF002","label":"AIX"}],"Version":"Not Applicable","Edition":"","Line of Business":{"code":"LOB08","label":"Cognitive Systems"}}]

Document Information

Modified date:
17 June 2018

UID

isg3T1024944