BFV failed because devNode doesn't exist
Issue description
Deploying an instance from a SCSI image fails with errors like:
Deploy of virtual machine rhel82_bfv_will_fail on host m54zvm147 failed with exception: Build of instance 4c2334e7-c2b9-4926-9986-4ff87bea5cbe was re-scheduled: zVM Cloud Connector request failed: {'overallRC': 300, 'modID': 30, 'rc': 300, 'rs': 5, 'errmsg': "Refresh bootmap fails, error code: 6 and reason: \\nERROR: devNode /dev/disk/by-path/ccw-0.0.1b13-zfcp-0x500507680b21bac6:0x0000000000000000-part1 doesn't exist. Get fcps: 1b13 1a10 and wwpns: 500507680b21bac6 500507680b21bac7 500507680b22bac6 500507680b22bac7", 'output': ''}
Explanation
There are three known causes:
- The format property of the image does not match the real format of the uploaded image file. For example, the uploaded image file is of type QCOW2, but the type in its properties is RAW.
- The skip_kpartx property in multipath.conf is set to the value yes.
- The deployed image contains an LVM partition, and the Device Mapper multipath device components (e.g., sda, sdb) are not filtered in /etc/lvm/lvm.conf.
Resolution
- For a mismatch of the image format property:
  - Click the details of the image in the UI to check the format property.
  - Run the command qemu-img info <the path of image file> to check the real format of the image file you uploaded from your local directory. If the local image file can no longer be found, you can download it from the storage with the command openstack image save --file <file_name> <image>.
  - Re-create the image with the correct format value from the UI.
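When qemu-img is not available, the real format can also be inferred from the file's magic bytes. A minimal sketch; the detect_format helper is illustrative and not part of the product:

```shell
#!/bin/sh
# Illustrative helper: guess whether an image file is QCOW2 or raw by
# checking for the QCOW2 magic bytes ("QFI" followed by 0xfb) at offset 0.
detect_format() {
    # Read the first three bytes of the file given as $1.
    magic=$(head -c 3 "$1")
    if [ "$magic" = "QFI" ]; then
        echo qcow2
    else
        echo raw
    fi
}
```

Prefer qemu-img info when it is available, since it also reports the virtual size and any backing file.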
- For the skip_kpartx setting:
  - Check the skip_kpartx setting by running the command multipathd show config | grep skip_kpartx on the target host to which the instance was deployed, for example:
    [root@113-cmp-cb ~]# multipathd show config | grep skip_kpartx
    skip_kpartx "yes"
  - If the value is set to yes, change it to no by editing /etc/multipath.conf, for example:
    defaults {
        # change skip_kpartx to no
        skip_kpartx no
        # keep other configuration items as original
        ......
    }
  - Restart multipathd to let the change take effect by running the command systemctl restart multipathd.
  - Finally, verify the result as follows:
    [root@113-cmp-cb ~]# multipathd show config | grep skip_kpartx
    skip_kpartx "no"
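The edit above can also be scripted. A minimal sketch, assuming skip_kpartx appears on its own line (quoted or unquoted) in the defaults section; the fix_skip_kpartx name is illustrative:

```shell
#!/bin/sh
# Illustrative helper: flip skip_kpartx from yes to no in a
# multipath.conf-style file ($1), editing it in place.
fix_skip_kpartx() {
    sed -i 's/\(skip_kpartx[[:space:]]*\)"\{0,1\}yes"\{0,1\}/\1no/' "$1"
}
```

Remember to restart multipathd afterwards so the change takes effect.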
- For the LVM filter of Device Mapper multipath device components:
  - Log in to the management and compute nodes and add the following configuration into the [devices] section of /etc/lvm/lvm.conf:
    filter = [ "r|/dev/sd.*|" ]
  - Retry the virtual machine deployment.
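A quick way to confirm the reject rule is already present is to grep lvm.conf for it. A minimal sketch; the has_sd_filter name is illustrative:

```shell
#!/bin/sh
# Illustrative helper: return success if an lvm.conf-style file ($1)
# already contains the r|/dev/sd.*| reject rule.
has_sd_filter() {
    grep -qF 'r|/dev/sd.*|' "$1"
}
```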