Troubleshooting
Problem
Unable to perform DR failover rehearsal operation using vSCSI
Symptom
From KSYS: HSCLA24E The partition's virtual SCSI adapter NN cannot be hosted by the existing Virtual I/O Servers (VIOSs) on the destination managed system.
From both VIO Servers on the DR (backup) site:
List of Logical Units not found on the destination VIOS:
Logical Unit 1 : Descriptor VOL_ID = 60000000000000000000000000001111, VOL_NAME=vol1111, managed_disk=0
..
rc = 83 MIG_RESOURCE_NOT_FOUND
Leaving mig_vscsi fn=find_devices, rc=83
VIOS_DETAILED_ERROR
exited with rc=83
(The second VIOS on the DR site reports the identical error.)
Cause
- Information, in case of NPIV:
- For NPIV, LUN mapping is required only for the target disk. KSYS performs the LUN mapping for the tertiary (clone) disks during the DR failover rehearsal.
- Special consideration in case of vSCSI:
- For vSCSI, LUN mapping is required for both the target disk and the tertiary disk. During the DR rehearsal, KSYS handles the mapping at the VIO Server level.
- Extra steps are required for the DR failover rehearsal (clone) on the FlashCopy and storage mappings when configured with vSCSI.
- * After adding the FlashCopy, run cfgmgr on both HOME and DR to discover the new LUN used for the new tertiary disk used by the FlashCopy *
- Refer to item 10 under "Diagnosing The Problem".
Environment
- VIOS Level is 3.1.4.10
- AIX VM Client 7.2
- KSYS 1.6; ENVIRONMENT: 7200-05-04-2220
- POWER9
- Storage: MPIO IBM 2076 FC Disk / FlashCopy Storage 7300 for clone (DR failover rehearsal operation)
- HMC: Version 10, build level 2303020606
VM name TEST1
HOME or Source:
LOCALRACK1-E950-SN111111 SITE 1:LOC STATE=operating
VIOS: lvio2 A MANAGED
VIOS: lvio1 B MANAGED
BACKUP(DR) or Destination:
REMOTERACK1-9040-SN222222 SITE 2:REM STATE=operating
VIOS: rvio2 C MANAGED
VIOS: rvio1 D MANAGED
Diagnosing The Problem
1) From VM TEST1, determine the configured disks and their Volume Serial numbers.
# lsmpio -q -l hdiskNN | grep "Volume Serial"
# lscfg -vl hdiskNN (Client LPAR ID in decimal)
2) From the VIO Servers on HOME site, check the disk attributes.
$ oem_setup_env
# lsattr -El hdiskNN | egrep "reserve_policy|unique_id|ww_name"
reserve_policy should be no_reserve.
3) From the VIO Servers on HOME site, check lsmap from user padmin.
$ lsmap -all (grep for the client LPAR ID, which appears in hex under "Client Partition ID", for example 0x000000NN)
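The LPAR ID from step 1 is decimal, while lsmap prints the Client Partition ID in hex. A quick conversion sketch (the ID 17 below is an example value, not from this case):

```shell
# Convert a decimal client LPAR ID (e.g. 17) to the 0x-prefixed
# hex form that `lsmap -all` prints under "Client Partition ID"
lpar_id=17
printf "0x%08x\n" "$lpar_id"   # prints 0x00000011
```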
4) Determine the storage type, site, source_storage_ip, and target_storage_ip.
# ksysmgr query storage_agent
5) Locate the HOME and DR storage pairing.
# ksysmgr query dp
Storage: LC-IBM7300-01 (LOC) <-> Storage: RM-IBM7300-01 (REM)
=====================================================================
60000000000000000000000000001111 <-> 60000000000000000000000000002222
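When many volumes are paired, the "SRC <-> TGT" lines above can be split programmatically. A minimal sketch using shell parameter expansion, assuming the `ksysmgr query dp` output format shown above:

```shell
# Split one pairing line into its source and target volume IDs
pairing='60000000000000000000000000001111 <-> 60000000000000000000000000002222'
src=${pairing%% <->*}    # everything before " <->"
tgt=${pairing##*<-> }    # everything after "<-> "
echo "source=$src target=$tgt"
```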
6) Check the volumes and disks for HOME and DR.
# ssh admin@<source_storage_ip> svcinfo lsvdisk | grep 60000000000000000000000000001111
# ssh admin@<source_storage_ip> svcinfo lsfcmap | grep vol1111
# ssh admin@<target_storage_ip> svcinfo lsvdisk | grep 60000000000000000000000000002222
# ssh admin@<target_storage_ip> svcinfo lsfcmap | grep vol1111
7) Create the tertiary disk on the DR site using the GUI or the command line. You can choose the options below or leave the defaults.

Using CLI:
** Make sure that the consistency group is the same for all VMs and LUNs **
8) After the FlashCopy or tertiary disk is completed, discover the new LUN on the VIO Servers on both sites.
$ cfgdev
or
$ oem_setup_env
# cfgmgr
9) Locate the target disk from the VIO Servers on the DR site.
# lspv -u | grep 60000000000000000000000000002222
10) Check reserve_policy. All other settings should also be identical, such as queue_depth and max_transfer.
# lsattr -El hdiskNN | egrep "reserve_policy|unique_id|ww_name"
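A wrong reserve_policy can be made to stand out by parsing the `lsattr -El` output. A minimal sketch; the sample line below is illustrative, so run the real command against each hdisk on the DR VIO Servers:

```shell
# Flag a disk whose reserve_policy is not no_reserve.
# `attrs` stands in for the output of: lsattr -El hdiskNN
attrs='reserve_policy single_path Reserve Policy True'
policy=$(echo "$attrs" | awk '$1 == "reserve_policy" {print $2}')
if [ "$policy" != "no_reserve" ]; then
    echo "WARNING: reserve_policy is $policy, expected no_reserve"
fi
```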
11) On KSYS, check and confirm that the tertiary disk matches.
# ksysmgr query dp
Storage: LC-IBM7300-01 (LOC) <-> Storage: RM-IBM7300-01 (REM)
=====================================================================
60000000000000000000000000001111 <-> 60000000000000000000000000002222
Tertiary Disks:
Source disk -> Tertiary disk: RM-IBM7300-01 (REM)
=====================================================================
60000000000000000000000000002222 -> 60000000000000000000000000003333
12) For all fcs and fscsi adapters on the VIO Servers on the HOME and DR sites (for the storage team):
for i in $(seq 0 NN); do lscfg -vpl fcs$i | grep Network; done
for i in $(seq 0 NN); do lscfg -vpl fscsi$i | grep scsi_id; done
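The loop output can be reduced to bare WWPNs for the storage team. A sketch that strips the label and dot padding; the sample line below is illustrative of the usual `lscfg -vpl` format, so substitute real output:

```shell
# Pull the bare WWPN out of an lscfg "Network Address" line.
# `line` stands in for: lscfg -vpl fcs0 | grep Network
line='        Network Address.............10000090FAF0AB93'
echo "$line" | sed 's/.*Network Address\.*//'
```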
13) Check KSYS
14) The Network Addresses (WWPNs) can then be checked on the storage fabric to see how they are mapped; they should be mapped so that the storage ports have access through both fabrics.
FABRIC-A
> zoneshow *io1*
> alishow *io1*
alias: lvio1_fc1
NN:00:00:10:fe:f0:YY:93
NN:00:00:10:fe:f0:YY:94
alias: lvio1_fc3
NN:00:00:10:fe:f0:YY:1b
NN:00:00:10:fe:f0:YY:1c
alias: lvio2_fc1
NN:00:00:10:fe:f0:YY:33
NN:00:00:10:fe:f0:YY:34
alias: lvio2_fc3
NN:00:00:10:fe:f0:YY:81
NN:00:00:10:fe:f0:YY:82
Fabric-B
> zoneshow *io1*
> alishow *io1*
alias: lvio1_fc2
NN:00:00:10:fe:f0:YY:93
NN:00:00:10:fe:f0:YY:94
alias: lvio1_fc3
NN:00:00:10:fe:f0:YY:1b
NN:00:00:10:fe:f0:YY:1c
alias: lvio2_fc1
NN:00:00:10:fe:f0:YY:33
NN:00:00:10:fe:f0:YY:34
alias: lvio2_fc3
NN:00:00:10:fe:f0:YY:81
NN:00:00:10:fe:f0:YY:82
Resolving The Problem
- For vSCSI, map all fcs adapters to Fabric A, instead of mapping fcs0 and fcs2 to Fabric A and fcs1 and fcs3 to Fabric B.
- The other solution is to map all fcs adapters to both Fabric A and Fabric B. These would need to be added manually from the GUI by entering the Network Address.
- Add the mapping manually from the GUI by selecting Create FlashCopy Mapping, then selecting the Source Volume and Target Volume.
- On the next menu, from the two options shown, select "Delete mapping after completion", then click Next to complete.
- Steps to complete the discovery, verify, and failover rehearsal:
- To get the storage type, site, source_storage_ip, and target_storage_ip (provides the site and IP addresses):
# ksysmgr query storage_agent
- ksysmgr -t discovery workgroup TEST_WG verify=yes
- ksysmgr -t discovery workgroup TEST_WG verify=yes dr_test=yes
- ksysmgr -t move workgroup TEST_WG to=<target_site> dr_test=yes
- ksysmgr -t cleanup workgroup TEST_WG dr_test=yes
- ksysmgr -t cleanup workgroup TEST_WG dr_test=yes (run again when prompted)
- Results:
- Discovery has finished for TEST_WG
- 1 out of 1 VMs have been successfully discovered
- 1 out of 1 VMs have been successfully verified
Additional Questions:
1) Do we need VIOS VSCSI vhost definitions at the DR site?
2) What specifically needs to be done at the DR site to receive a VM?
Response: Both will be created by the DR operation.
Created by: Rajendra D Patel (rajpat@us.ibm.com)
Original date: August 22, 2023
Revised: April 20, 2025
Related Information
Document Location
Worldwide
Document Information
Modified date:
16 May 2025
UID
ibm17028104