We installed PowerKVM and updated it to
IBM_PowerKVM release 2.1.0 build 19 Service (pkvm2_1)
Then, on one of the internal disks, we created a primary partition spanning the whole disk, ran pvcreate on that partition, and built a VG on it. In the VG we created an LV, formatted it with ext4, and mounted it on /iso to hold all the ISO images we use to install VMs. That filesystem was added to /etc/fstab so it is mounted automatically at boot:
UUID=c7071713-3d30-433b-9cf0-ce38dc1b302a /iso ext4 defaults 0 0
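For reference, the setup roughly followed these steps. This is a sketch; the partition name /dev/sdb1 and the VG/LV names isovg/isolv are placeholders, not our actual names:

```shell
# Create a PV on the partition, build a VG on it,
# and carve out an LV using all free space
pvcreate /dev/sdb1
vgcreate isovg /dev/sdb1
lvcreate -l 100%FREE -n isolv isovg

# Format the LV with ext4 and mount it on /iso
mkfs.ext4 /dev/isovg/isolv
mkdir -p /iso
mount /dev/isovg/isolv /iso
```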
Now, when rebooting the host we constantly get this boot message:
[  OK  ] Started udev Wait for Complete Device Initialization.
         Starting Activation of DM RAID sets...
[  OK  ] Started Activation of DM RAID sets.
[  OK  ] Reached target Encrypted Volumes.
[ TIME ] Timed out waiting for device dev-disk-by\x2duuid-c70...c1b302a.device.
[DEPEND] Dependency failed for /iso.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.
Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" to try again to boot into default mode.
This seems to be a known issue on Fedora, as shown by entries such as:
So we tried to work around this by mounting via the absolute device path instead of the UUID, but the problem persists: the VG never gets activated, so the filesystem cannot be mounted either way.
For now we have commented out the entry in /etc/fstab and activate the VG by hand after boot, which then allows mounting the filesystem. Obviously, this is not a solution we want to keep for any extended period.
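The manual workaround after each boot currently looks like this (the VG/LV names isovg/isolv are placeholders for our actual names):

```shell
# Activate all logical volumes in the VG, then mount the filesystem
# (the /etc/fstab entry is commented out, so we mount explicitly)
vgchange -ay isovg
mount /dev/isovg/isolv /iso
```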
Has anybody seen a similar problem and can help us get around it?