Creating images from ISO for z/VM
This section introduces how to create an image that can be used by IBM® Cloud Infrastructure Center for virtual machine provisioning.
Note: A running compute node is required to complete the image creation steps. So, add a compute node as described in the Add Host section before you complete the capture steps in this section.
Note: RHCOS4 images are not in the scope of this topic.
Follow these instructions to create an image for IBM Cloud Infrastructure Center from a Linux server.
Note: The Linux server should be installed on the same z/VM® system where there is a running compute node. Images can only be created from a MiniDisk and not from a dedicated disk.
0. Validate the prerequisites of image creation
When creating an image used to deploy virtual machines from ECKD or FBA disk, refer to Requirements for creating images used to deploy virtual machines from ECKD or FBA disk for the detailed requirements of the source virtual machine to be captured into an image.
When creating an image used to deploy virtual machines from a persistent volume, refer to Requirements for creating images used to deploy virtual machines from volume for the detailed requirements of the source virtual machine to be captured into an image.
1. Install a Linux server in the z/VM system.
Install a Linux server from ISO:
For Linux server installation, see Booting the Installation on IBM Z®.
The root disk of this Linux server must be formatted as CDL, which is the default. Refer to the partitioning scheme for more details.
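One way to verify that an existing root disk is CDL-formatted is to inspect it with the dasdview utility from s390-tools (the device node /dev/dasda is an assumption; substitute your root DASD):
dasdview -x /dev/dasda | grep -i format
The output should report the disk as CDL formatted.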
For the UBUNTU20.04 server, when the root disk is FBA, you should use the Legacy server install image.
When images are used to boot a virtual machine from volume, make sure that the root disk of the server can be found by the multipathd module. This means that a multipath disk should be selected as the installation destination disk.
2. Configure the Linux server
Enable the system repositories of the Linux server to ensure that the Linux server can install software via yum or zypper.
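To confirm that the repositories are usable, you can list the enabled repositories on the Linux server (yum shown here; on SLES, the equivalent is zypper repos):
yum repolist enabled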
Note: For SLES 15 SP1 or SP2 servers, the firewall blocks sshd by default. If you want to log on to the Linux server via SSH, run the following commands on the Linux server to open port 22 permanently:
firewall-cmd --add-port=22/tcp --permanent
firewall-cmd --reload
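To confirm that the rule is in place, you can then list the open ports; the output should include 22/tcp:
firewall-cmd --list-ports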
For installation and configuration of IUCV service, see "Install and configure IUCV".
For multipathd: if you are using images to boot a virtual machine from volume, or using persistent storage for your virtual machines, you need to install and enable it manually. If you are deploying an OpenShift Container Platform cluster to create RHCOS virtual machines, you can use a machineconfig to enable multipathd on both control nodes and worker nodes automatically:
For the supported RHEL distros, run the following commands on the server:
yum install device-mapper-multipath
mpathconf --enable
systemctl enable multipathd.service
systemctl start multipathd.service
For UBUNTU20.04 and UBUNTU22.04, run the following commands on the server:
apt-get install multipath-tools
apt-get install multipath-tools-boot
systemctl enable multipathd.service
systemctl start multipathd.service
For SLES 15 SP1, SP2 and SP3, run the following commands on the server:
zypper install multipath-tools
modprobe dm-multipath
systemctl enable multipathd.service
systemctl start multipathd.service
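On RHEL, UBUNTU, or SLES servers, you can then verify that the service is active and that the multipath devices are visible (multipath -ll prints nothing if no multipath devices are present yet):
systemctl is-active multipathd.service
multipath -ll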
For RHCOS 4.12, 4.13 and 4.14, run the following commands on the server:
su - root
rpm-ostree kargs --editor
Append the kernel parameters:
rd.multipath=default root=/dev/disk/by-label/dm-mpath-root
systemctl reboot
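After the reboot, you can print the current kernel arguments to confirm that the parameters were appended:
rpm-ostree kargs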
For RHCOS 4.12, 4.13 and 4.14 created in the OpenShift Container Platform cluster, create the machineconfig file to enable multipath automatically. For example, if you want to enable multipath on a control node, create a new machineconfig file that is named 99-master-kargs-mpath.yaml on the bastion server:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "master"
  name: 99-master-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
Next, apply the config file:
[root@bastion ocp_upi]# oc create -f 99-master-kargs-mpath.yaml
machineconfig.machineconfiguration.openshift.io/99-master-kargs-mpath created
Now, you can use the command oc get nodes to check that a specific control node is temporarily in SchedulingDisabled status. The following is an example with the control node scsicluster-fgjkr-master-0:
[root@bastion ocp_upi]# oc get nodes
NAME                         STATUS                     ROLES                  AGE   VERSION
scsicluster-fgjkr-master-0   Ready,SchedulingDisabled   control-plane,master   20h   v1.26.9+07c0911
scsicluster-fgjkr-master-1   Ready                      control-plane,master   20h   v1.26.9+07c0911
scsicluster-fgjkr-master-2   Ready                      control-plane,master   20h   v1.26.9+07c0911
Check that the kernel arguments work on the control node scsicluster-fgjkr-master-0. The multipath kernel arguments show up in /host/proc/cmdline:
[root@bastion ocp_upi]# oc debug node/scsicluster-fgjkr-master-0
sh-4.4# cat /host/proc/cmdline
ignition.platform.id=metal ostree=/ostree/boot.0/rhcos/7d5276b62f5705e05374f92d549c900752441650167dc8a2d0f5b6b4ffe300aa/0 zfcp.allow_lun_scan=0 rw rd.zfcp=0.0.e83f,0x500507680b24bac6,0x0000000000000000 rd.zfcp=0.0.e83f,0x500507680b21bac6,0x0000000000000000 rd.zfcp=0.0.e83f,0x500507680b21bac7,0x0000000000000000 rd.zfcp=0.0.e83f,0x500507680b24bac7,0x0000000000000000 rd.znet=qeth,0.0.1000,0.0.1001,0.0.1002,layer2=1,portno=0 root=UUID=27c0cdf7-58dc-4d0a-98f0-ef9e56ef9516 rw rootflags=prjquota boot=UUID=5187e9d8-c8b4-4a65-be70-197f261d2b47 systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller=1 rd.multipath=default root=/dev/disk/by-label/dm-mpath-root
Moreover, log in to the control node mentioned above as the root user and check that the multipathd service is in running status:
[root@scsicluster-fgjkr-master-0 ~]# multipath -l
mpatha (3600507640083826de000000000011c35) dm-0 IBM,2145
size=40G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:11:0 sdd 8:48 active undef running
| `- 0:0:9:0  sdb 8:16 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:10:0 sdc 8:32 active undef running
  `- 0:0:8:0  sda 8:0  active undef running
Also, run the command df -h on the control node to check that the root filesystem is using multipath devices:
[root@scsicluster-fgjkr-master-0 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             4.0M     0  4.0M   0% /dev
tmpfs                7.9G   84K  7.9G   1% /dev/shm
tmpfs                3.2G   71M  3.1G   3% /run
tmpfs                4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/mapper/mpatha4   40G   18G   22G  46% /sysroot
tmpfs                7.9G   36K  7.9G   1% /tmp
/dev/mapper/mpatha3  350M   64M  264M  20% /boot
If you want to enable multipath on a worker node, you need to change the machineconfig file; the other steps are the same as described previously. For example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker"
  name: 99-worker-kargs-mpath
spec:
  kernelArguments:
    - 'rd.multipath=default'
    - 'root=/dev/disk/by-label/dm-mpath-root'
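Then apply the worker config file in the same way as for the control nodes (the file name 99-worker-kargs-mpath.yaml matches the metadata name above):
[root@bastion ocp_upi]# oc create -f 99-worker-kargs-mpath.yaml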
If the virtual machines deployed from this image might attach more than 20 volumes simultaneously, it is suggested to set a bigger value in /etc/systemd/system.conf. For more details, refer to the troubleshooting topic Failed to attach multiple volumes simultaneously.
For installation and configuration of zvmguestconfigure, see "Install and configure zvmguestconfigure".
For installation and configuration of cloud-init, see "Install and configure cloud-init".
When you create images used to boot a virtual machine from volume and the image has an LVM partition, see "Configure LVM partition".
The previous steps must be performed in sequence to create an image. If you want to customize the image, you can add the corresponding operations before the capture steps.
Note: Optionally, to reduce the file size of the captured image, you can also remove unnecessary files from the Linux server, such as the /root/IUCV directory and the yum cache:
rm -rf /root/IUCV
yum clean all
Additionally, when images are used to boot a virtual machine from volume, the multipath bindings and wwids on the Linux server should be removed from both the multipath configuration files and the initramfs image, so that mpatha is used as the root volume name of the provisioned virtual machine when friendly names are enabled:
[root@imgserver ~]# cat /dev/null >/etc/multipath/bindings
[root@imgserver ~]# cat /dev/null >/etc/multipath/wwids
[root@imgserver ~]# dracut --force --include /etc/multipath /etc/multipath/
dracut-install: ERROR: installing '/etc/dasd.conf'
dracut: FAILED: /usr/lib/dracut/dracut-install -D /var/tmp/dracut.9FjgfY/initramfs -H /etc/dasd.conf
Note: The error messages in the preceding sample output of the dracut command can be ignored, but only these error messages.
3. Capture the Linux server into an image.
After the Linux server is configured for capture, shut it down and log off the z/VM userid, then take the following steps to generate the image:
When images are used to deploy virtual machines on ECKD or FBA disk:
SSH to your compute node, type the command:
/opt/zthin/bin/creatediskimage <zvm_userid> <vdev> <image_file_name>
<zvm_userid> is the z/VM userid of the Linux server.
<vdev> is the device number of the disk to be captured.
<image_file_name> is the image file name including the store location, such as "/root/rhel79.img".
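For example, assuming a hypothetical Linux server with z/VM userid LINUX01 whose root disk is minidisk 0100:
/opt/zthin/bin/creatediskimage LINUX01 0100 /root/rhel79.img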
When images are used to boot virtual machine from volume:
SSH to your compute node and run the following command to remove the FCP device from the ignored device list of the compute node:
cio_ignore -r <FCP device>
and make sure the device is omitted from the ignored list:
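One way to check is to list the devices that are still ignored and confirm the FCP device is no longer among them (the -l option of cio_ignore prints the current exclusion list):
cio_ignore -l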
SSH to your compute node and run:
/opt/zthin/bin/creatediskimage <FCP device> <WWPN> <LUN> <image_file_name>
<FCP device> is the FCP device number used when installing the Linux server.
<WWPN> is the World Wide Port Name of the target port.
<LUN> is the LUN number of the target volume.
<image_file_name> is the image file name including the store location.
The <LUN> information can be obtained from the lszfcp command on the compute node.
Here's an example:
[root@tstcomp ~]# /opt/zthin/bin/creatediskimage 5c60 0x5005076802400c1a 0x0000000000000000 imgname
Convert the image format from raw to qcow2 to reduce the image file size.
SSH onto the compute node and use the qemu-img command to convert the image format:
qemu-img convert -f raw -O qcow2 imagename /root/imagename.qcow2
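You can inspect the result to confirm the format and the reduced file size (qemu-img info is part of the same qemu-img utility):
qemu-img info /root/imagename.qcow2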
After the image is generated, you can upload the image to IBM® Cloud Infrastructure Center; refer to uploading images.
If you are going to upload the image using the IBM Cloud Infrastructure Center UI, you first need to transfer the image from its current location on the compute node to the server where you run the IBM Cloud Infrastructure Center UI, because for security considerations the UI only browses locations on the local server.
As an alternative, to avoid uploading the image from your local system, you can copy the image file to the management node and use the openstack CLI to upload the image; refer to uploading images.