Setting up and configuring IBM Hyper Protect Virtual Servers

You can set up, configure, and operate IBM Hyper Protect Virtual Servers on IBM Z hardware just as you would operate Linux® on KVM instances and their virtual devices running on a Kernel-based Virtual Machine (KVM) host. For more information, see KVM Virtual Server Management.

To learn about IBM Secure Execution for Linux, see Introducing IBM Secure Execution for Linux.

This procedure is intended for users with the system administrator role, or the application developer or ISV role.

Example of bringing up IBM Hyper Protect Virtual Servers on a KVM host by using the virsh utility

There are multiple ways in which you can set up and configure an instance (KVM guest) on a KVM host. The following information illustrates the use of virsh commands and is provided only for your reference. For more information about virsh commands, see the libvirt virtualization API.

Before you begin

Ensure that you have read and understood the System requirements.

Procedure

Complete the following steps on your KVM host.

  1. Create a directory called hpvs.

    sudo mkdir /var/lib/libvirt/images/hpvs
    
  2. Copy or move the IBM Hyper Protect Virtual Servers (HPVS) image to the directory that you created. You can find this image in the installation directory after you extract the TAR file. For more information, see Downloading the image. This is the qcow2 image that is used to bring up an IBM Hyper Protect Virtual Servers instance.

    sudo mv -i ibm-hyper-protect-container-runtime-25.1.0.qcow2 \
    /var/lib/libvirt/images/hpvs/ibm-hyper-protect-container-runtime-25.1.0.qcow2
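
    Optionally, before you continue, confirm that the image is in place and readable. The following is a minimal sketch; whether a checksum is shipped with your download depends on your installation package, so compare the digest only if one is provided.

    # List the copied image to confirm its location, size, and permissions
    ls -lh /var/lib/libvirt/images/hpvs/
    # Optionally compute a digest to compare against a vendor-provided checksum, if available
    sha256sum /var/lib/libvirt/images/hpvs/ibm-hyper-protect-container-runtime-25.1.0.qcow2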
    
  3. You must prepare the contract that is passed as an input when you create the instance. This step is mandatory; an IBM Hyper Protect Virtual Servers instance can be created successfully only when a contract is passed as an input. To create the data source that is used to pass the contract, complete the following steps:

    1. Create a file called meta-data and add the following content to it.

      local-hostname: myhost
      
    2. Create a file called vendor-data and add the following content to it.

      #cloud-config
      users:
      - default
      
    3. Create a file called user-data and add the content of your encrypted contract YAML to it. For more information about creating the contract, see About the contract.

    4. Create an ISO init-disk by using the following command:

      cloud-localds init-disk user-data meta-data -V vendor-data
      
    5. Move init-disk to the working directory:

      mv init-disk /var/lib/libvirt/images/hpvs/
      

      You now have a data source disk that contains the contract information. This disk is later passed as an input in the domain XML configuration file (named hpvs.xml in this topic).
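
      If you want to verify the generated data source disk, a minimal sketch of inspecting it on the KVM host follows; it assumes that the blkid utility is available on the host.

      blkid /var/lib/libvirt/images/hpvs/init-disk

      The output typically reports an iso9660 filesystem with the label cidata, which is the label that the cloud-init NoCloud data source expects.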

  4. You must create a data disk that can be attached to the instance. The attached data disk or volume is used by the instance to store the container workload data. This disk is automatically encrypted with a passphrase that is derived from the seeds that are passed as part of the contract. For more information, see volumes. If a data disk is not attached, the workload data resides on the instance and might be lost if the instance crashes. Therefore, it is recommended that you use a data disk to store data for backup and recovery. The following example shows how to create a data disk, by using storage pools, that can be used with an instance. For more information about all other storage options that are supported with a KVM guest, see Device setup.

    Starting from Hyper Protect Virtual Servers 2.1.4, for new instances, the data volume is partitioned into two parts. The first partition (100 MiB) is reserved for internal metadata; the second partition remains the data volume for the workload. Only new volumes are partitioned, and you cannot use a partitioned volume with an older version of the HPVS image. Provisioning with an existing encrypted volume also works; the difference is that the existing volume is not partitioned, and you can also go back to an older image with this volume.

  5. Create a file called pool.xml and add the following to it.

    <pool type='dir'>
      <name>storagedirpool</name>
      <target>
        <path>/var/lib/libvirt/images/storage</path>
      </target>
    </pool>
    
  6. Define the storage pool by using the following command.

    virsh pool-define pool.xml
    
  7. To list your pool, use the following command.

    virsh pool-list --all
    
  8. To build the storage pool, use the following command.

    virsh pool-build storagedirpool
    
  9. To start the storage pool, use the following command.

    virsh pool-start storagedirpool
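
    Optionally, if you want the storage pool to be started automatically whenever the KVM host reboots, you can mark it for autostart, as shown in the following sketch. This is a suggestion and not a required part of the procedure.

    virsh pool-autostart storagedirpool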
    
  10. To check whether the storage pool is active, use the following command.

    virsh pool-list --all
    
  11. To create a data volume from the storage pool, use the following command.

    virsh vol-create-as storagedirpool datavolume 10G
    
  12. To list the volume, use the following command.

    virsh vol-list storagedirpool
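
    To confirm the full path of the volume, which is referenced later in the domain XML configuration file, you can query it as shown in the following sketch.

    virsh vol-path datavolume --pool storagedirpool

    The returned path (for example, /var/lib/libvirt/images/storage/datavolume) is the value that is used in the <source file='...'/> element of the data disk.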
    
  13. For DASD disks, you must ensure that the formatting and partitioning of the disks are done on the KVM host and that the partition path is provided as part of the domain XML configuration file (named hpvs.xml in this topic). See Preparing DASDs for more details about DASDs, and see Commands for Linux on Z and Partition a DASD for more details about DASD commands. The following reference snippet uses the DASD partition /dev/disk/by-path/ccw-0.0.c28c-part1; a sketch of preparing the DASD on the host follows the snippet.

    <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native' iommu='on'/>
        <source dev='/dev/disk/by-path/ccw-0.0.c28c-part1'/>
        <target dev='vdb' bus='virtio'/>
        <address type="ccw" cssid="0xfe" ssid="0x0" devno="0xc28c"/>
    </disk>
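
    The following is a minimal sketch of how such a DASD might be brought online, formatted, and partitioned on the KVM host before it is referenced in the domain XML configuration file. The device bus ID 0.0.c28c and the device node /dev/dasdX are placeholders; check the node that your host assigns (for example, with lsdasd) and see the linked DASD documentation for the authoritative steps.

    # Set the DASD online (bus ID is an example placeholder)
    sudo chccwdev -e 0.0.c28c
    # Identify the device node that was assigned to the DASD, for example /dev/dasdb
    lsdasd
    # Low-level format the DASD (this erases all data on the disk)
    sudo dasdfmt -b 4096 -y /dev/dasdX
    # Create a single partition that spans the disk
    sudo fdasd -a /dev/dasdX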
    
  14. To create the IBM Hyper Protect Virtual Servers instance (KVM guest), you must provide the domain XML configuration file as an input to the virsh command. The domain XML configuration file must include the details for the qcow2 image (boot disk), the data source that contains the contract (init disk), and optionally the data disk that is used to store container data. You must also ensure that the domain XML configuration file follows the IBM Secure Execution requirements. For more information, see Example guest definition. Complete the following steps:

    1. Create a domain XML configuration file named hpvs.xml with information about the HPVS image, the data source init disk, and the data disk. The XML code snippet here is for your reference.

      <domain type='kvm'>
        <name>hpvs</name>
        <uuid>73938412-95df-4611-a3dd-d95cc3b8443f</uuid>
        <metadata>
          <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
            <libosinfo:os id="http://ubuntu.com/ubuntu/20.04"/>
          </libosinfo:libosinfo>
        </metadata>
        <memory unit='KiB'>3906250</memory>
        <currentMemory unit='KiB'>3906250</currentMemory>
        <vcpu placement='static'>2</vcpu>
        <os>
          <type arch='s390x' machine='s390-ccw-virtio-focal'>hvm</type>
          <boot dev='hd'/>
        </os>
        <cpu mode='host-model' check='partial'/>
        <clock offset='utc'/>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>destroy</on_crash>
        <devices>
          <emulator>/usr/bin/qemu-system-s390x</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' iommu='on'/>
            <source file='/var/lib/libvirt/images/hpvs/ibm-hyper-protect-container-runtime-25.1.0.qcow2' index='2'/>
            <backingStore/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='none' io='native' iommu='on'/>
            <source file='/var/lib/libvirt/images/hpvs/init-disk'/>
            <target dev='vdc' bus='virtio'/>
            <readonly/>
            <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='none' iommu='on'/>
            <source file='/var/lib/libvirt/images/storage/datavolume'/> <!-- Location where the volume was created -->
            <target dev='vdb' bus='virtio'/> <!-- Device name of this volume inside the guest VM -->
            <serial>test1</serial>
            <address type="ccw" cssid="0xfe" ssid="0x0" devno="0xc28c"/>
          </disk>
          <disk type='block' device='disk'>
            <driver name='qemu' type='raw' cache='none' io='native' iommu='on'/>
            <source dev='/dev/disk/by-path/ccw-0.0.c20d-part1'/>
            <target dev='vdg' bus='virtio'/>
            <serial>test2</serial>
            <address type="ccw" cssid="0xfe" ssid="0x0" devno="0xc28d"/>
          </disk>
          <controller type='pci' index='0' model='pci-root'/>
          <interface type='network'>
            <mac address='52:54:00:83:2e:36'/>
            <source network='default'/>
            <model type='virtio'/>
            <driver name='vhost' iommu='on'/>
            <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
          </interface>
          <console type='pty'>
            <target type='sclp' port='0'/>
          </console>
          <memballoon model='none'/>
          <panic model='s390'/>
        </devices>
      </domain>
      

      The following snippet shows the boot disk section in the example XML file shown above:

      <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2' iommu='on'/>
        <source file='/var/lib/libvirt/images/hpvs/ibm-hyper-protect-container-runtime-25.1.0.qcow2' index='2'/>
        <backingStore/>
        <target dev='vda' bus='virtio'/>
        <alias name='virtio-disk0'/>
        <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
      </disk>
      

      The following snippet shows the init disk section:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native' iommu='on'/>
        <source file='/var/lib/libvirt/images/hpvs/init-disk'/>
        <target dev='vdc' bus='virtio'/>
        <readonly/>
        <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
      </disk>
      

      The following snippet shows the data disk section:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none' iommu='on'/>
        <source file='/var/lib/libvirt/images/storage/datavolume'/> <!-- Location where the volume was created -->
        <target dev='vdb' bus='virtio'/> <!-- Device name of this volume inside the guest VM -->
      </disk>
      

      The following snippet shows the disk section with multiple volumes:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none' iommu='on'/>
        <source file='/var/lib/libvirt/images/storage/datavolume'/> <!-- Location where the volume was created -->
        <target dev='vdb' bus='virtio'/> <!-- Device name of this volume inside the guest VM -->
        <serial>test1</serial>
        <address type="ccw" cssid="0xfe" ssid="0x0" devno="0xc28c"/>
      </disk>
      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native' iommu='on'/>
        <source dev='/dev/disk/by-path/ccw-0.0.c20d-part1'/>
        <target dev='vdg' bus='virtio'/>
        <serial>test2</serial>
        <address type="ccw" cssid="0xfe" ssid="0x0" devno="0xc28d"/>
      </disk>
      
    2. For the network configuration that is associated with the instance, see KVM Host Networking Configuration Choices and Preparing network devices. For details about NAT networking, see KVM default NAT-based networking. The following example shows how NAT-based networking can be used to define an IP address for the instance:

      <network>
        <name>default</name>
        <uuid>990f567f-6376-4a9c-84fa-c6e95b313ffb</uuid>
        <forward mode='nat'>
          <nat>
            <port start='1024' end='65535'/>
          </nat>
        </forward>
        <bridge name='virbr0' stp='on' delay='0'/>
        <mac address='52:54:00:99:ff:ff'/>
        <ip address='192.168.xxx.1' netmask='255.255.255.0'>
          <dhcp>
            <range start='192.168.xxx.2' end='192.168.xxx.254'/>
            <host mac='52:54:00:83:2e:36' name='hpvs' ip='192.168.xxx.4'/>
            <host mac='52:54:00:83:28:38' name='hpvs2' ip='192.168.xxx.5'/>
          </dhcp>
        </ip>
      </network>
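
      If the default network is not already defined and active on your KVM host, a minimal sketch of defining and starting it might look like the following; it assumes that the XML above is saved in a file named default-net.xml (a hypothetical file name).

      virsh net-define default-net.xml
      virsh net-start default
      virsh net-autostart default

      If the default network already exists, you can instead adjust it with virsh net-edit default and then restart it.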
      
    3. To define the domain, use the following command.

      virsh define hpvs.xml
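
      After the domain is defined, you can confirm that it is registered with libvirt, as shown in the following sketch. The instance is expected to be listed in the shut off state until you start it.

      virsh list --all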
      
    4. To start the instance, use the following command.

      virsh start hpvs --console
      

      You can view the status of the deployment and the logs that are displayed on the serial console, and you can use this information to troubleshoot provisioning issues. After the command completes successfully, the IBM Hyper Protect Virtual Server is in the running state. You can log in to the logging instance that is specified in the contract to view the logs that are generated from within the instance.
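
      If you detach from the serial console, the following sketch shows how you can check the state of the instance and reattach to the console.

      virsh list
      virsh console hpvs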

  15. To access the workload, you must open the port that is associated with the container workload that runs on the instance. Complete the following steps on the KVM host:

    1. To view the IP of the instance, use the following command.

      virsh net-dhcp-leases default
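
      Alternatively, you can query the address that was leased to a specific instance, as shown in the following sketch.

      virsh domifaddr hpvs --source lease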
      
    2. To forward the port so that the container application can be accessed from outside the instance, use the following commands.

      iptables -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
      iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
      
    3. You can now access your application by using the KVM host IP address and the host port ($HOST_PORT) that you forwarded.
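
      The following is a minimal usage sketch of the port forwarding rules above. The IP address 192.168.xxx.4 matches the DHCP reservation example earlier in this topic, and the guest and host ports (8080) are hypothetical values that depend on your container application.

      GUEST_IP=192.168.xxx.4
      GUEST_PORT=8080
      HOST_PORT=8080
      iptables -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
      iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
      # From a remote system, the application is then reachable at http://<KVM host IP>:8080 (assuming an HTTP workload)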