Migrating KVM compute nodes that use the OpenStack scheduler

If you customized your KVM compute nodes in IBM Cloud Manager with OpenStack version 4.2 to use the OpenStack scheduler, follow these steps to migrate the KVM compute nodes to Red Hat Enterprise Linux® 7.1, 7.2, or 7.3 for the IBM Cloud Manager with OpenStack version 4.3 environment.

Procedure

  1. Go to the directory that is used to store the 4.3 topology file that was created in the Migrating the OpenStack controller section. Change your-deployment-name to the name of your 4.3 deployment.
    $ cd your-deployment-name
  2. Run the following command on the deployment server to start the migration. Change your-topology-name.json to the name of your topology file. Change kvm-compute-node to the name of the compute node that you are migrating:
    Important: Do not manually create or migrate instances on the kvm-compute-node during the migration.
    $ knife os manage migrate kvm compute node your-topology-name.json kvm-compute-node
    The knife os manage migrate kvm compute node command provides the following options:
    • --migrate-type TYPE: This option specifies the type of compute migration. Its value must be one of cold, live, or block:
      Table 1. Migration type
      Option Description
      cold The instance is shut down to be moved to another hypervisor. In this case, the instance recognizes that it was rebooted.
      Note: Only active and stopped instances can be migrated.
      live Almost no instance downtime. Useful when the instances must be kept running during the migration.
      Important: Shared storage for the instance path is required. Both hypervisors must have access to shared storage.
      Note: Only active and paused instances can be migrated.
      block Almost no instance downtime. Useful when the instances must be kept running during the migration. In this case, no shared storage is required.
      Note: Only active and paused instances can be migrated. This option is incompatible with read-only devices such as CD-ROMs and configuration drives.

      If you choose cold as the migration type, ensure that the hypervisors can SSH (as a Nova user) to each other by public key. For more information, see SSH permissions error when resizing or migrating an instance (OpenStack).
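The key-based SSH requirement can be checked up front. The following is a minimal sketch, not part of the product: peer-hypervisor is a placeholder host name, and the check should be run as the nova user on each hypervisor.

```shell
# Sketch: run this as the nova user on the source hypervisor (for example,
# via "su -s /bin/sh nova") to confirm that key-based SSH to the peer works.
# "peer-hypervisor" is a placeholder for the other hypervisor's host name.
PEER=peer-hypervisor
# BatchMode=yes makes ssh fail instead of prompting for a password.
if ssh -o BatchMode=yes -o ConnectTimeout=5 "$PEER" true 2>/dev/null; then
    SSH_OK=yes
else
    SSH_OK=no
fi
echo "key-based SSH to $PEER: $SSH_OK"
```

Repeat the check in the reverse direction, because a cold migration requires that both hypervisors can reach each other.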

      If you choose live or block as the migration type, complete the following steps before you run the command.
      1. On the compute nodes in the IBM Cloud Manager with OpenStack 4.2 environment, upgrade the libvirt packages from libvirt-0.10.2-29.el6.x86_64 to libvirt-0.10.2-46.el6_6.2.x86_64. After the upgrade completes, use the rpm command to verify that the packages are upgraded:
        $ rpm -qa |grep libvirt
        libvirt-client-0.10.2-46.el6_6.2.x86_64
        libvirt-0.10.2-46.el6_6.2.x86_64
        libvirt-python-0.10.2-46.el6_6.2.x86_64
      2. On the compute nodes in the 4.2 environment, update the Nova configuration file to ensure that the block_migration_flag option is set to the following value:
        block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE
      3. On the compute nodes in the 4.2 environment, restart the following services:
        $ service libvirtd restart
        $ service openstack-nova-compute restart
      4. On the compute nodes in the 4.2 and 4.3 environments, run the following commands to add iptables rules for migration:
        $ iptables -I INPUT -p tcp --dport 49152:49215 -j ACCEPT 
        $ iptables -I INPUT -p tcp --dport 16509 -j ACCEPT
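The prerequisites in steps 1 and 2 can be sanity-checked with a short script. This is a sketch, not part of the product: the nova.conf path and the required libvirt level are taken from the steps above; adjust them for your environment.

```shell
# Sketch: verify the live/block migration prerequisites on a 4.2 compute node.
NOVA_CONF=/etc/nova/nova.conf   # standard Nova configuration path (assumption)
REQUIRED_LIBVIRT=0.10.2-46      # minimum level from step 1 above

# Check the installed libvirt package level.
if rpm -q libvirt 2>/dev/null | grep -q "$REQUIRED_LIBVIRT"; then
    LIBVIRT_OK=yes
else
    LIBVIRT_OK=no
fi

# Check that block_migration_flag includes VIR_MIGRATE_LIVE.
if grep -q '^block_migration_flag=.*VIR_MIGRATE_LIVE' "$NOVA_CONF" 2>/dev/null; then
    FLAG_OK=yes
else
    FLAG_OK=no
fi

echo "libvirt at required level: $LIBVIRT_OK"
echo "block_migration_flag set:  $FLAG_OK"
```

If either check reports no, complete the corresponding step before you start the migration.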
    • --target-compute-node: This option specifies the host name of the target compute node to which the instances are migrated. Make sure that the target compute node is also stored on the Chef server.
      Note: If you don’t specify this option, the OpenStack scheduler chooses the target compute node, so each instance might be migrated to a different compute node.
    • --interval: This option specifies the status check interval in seconds for the instances that are migrating. The default value is 30.
    • --timeout: This option specifies the maximum status check time in seconds for the instances that are migrating. The default value is 600.
    • --period: This option specifies the delay in seconds between starting the migration of successive instances. The default value is 10.
    • --no-validation: This option indicates that the migration should continue when the node list or node run list in the topology file conflicts with the list that is stored on the deployment Chef server. Without this option, you are prompted to confirm before the migration continues.
      Example:
      $ knife os manage migrate kvm compute node your-topology-name.json kvm-compute-node --no-validation
      Note: To avoid breaking the cloud unintentionally, use this option only when you are certain that the confirmation is unnecessary.
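Putting the options together, a full invocation might look like the following sketch. The topology file, node names, and option values are placeholders; substitute your own.

```shell
# Placeholders: substitute your own topology file and node names.
TOPOLOGY=your-topology-name.json
NODE=kvm-compute-node
TARGET=target-compute-node

# Assemble the migration command so that it can be reviewed before it is run.
CMD="knife os manage migrate kvm compute node $TOPOLOGY $NODE"
CMD="$CMD --migrate-type live --target-compute-node $TARGET"
CMD="$CMD --interval 10 --timeout 1200"
echo "$CMD"
```

Here a 10-second status check interval and a 1200-second timeout give slow live migrations more time to complete; omit --target-compute-node to let the OpenStack scheduler place each instance.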
  3. When the command completes, the migration result is displayed. Here is an example of the migration result:
    Instance Id	                    Instance Name	Start Time	          End Time	           Status
    40588df1-217a-48c7-b7d3-43665ac8dd85	instance01	  2015-03-24 14:27:58	 2015-03-24 14:30:01	ACTIVE
    The instances which can’t be migrated
    
    Instance Id                         	Instance Name	 Status
    37f11bf8-68d1-480d-9f8e-11af22ac30c5	instance02	   ERROR
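Before moving to the next step, you can confirm that the nova CLI no longer reports instances on the source node. This sketch assumes that admin OpenStack credentials are loaded in the environment; the node name is a placeholder.

```shell
# Sketch: count the instances that nova still reports on the source compute
# node. Requires admin credentials; kvm-compute-node is a placeholder.
NODE=kvm-compute-node
# Count table rows that contain an instance UUID.
REMAINING=$(nova list --all-tenants --host "$NODE" 2>/dev/null \
    | grep -cE '[0-9a-f]{8}-[0-9a-f]{4}')
echo "instances still reported on $NODE: $REMAINING"
```

A nonzero count means that some instances could not be migrated; resolve those (for example, the ERROR instance shown above) before you reinstall the node.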
  4. If all the instances on the compute node are migrated, reinstall the node with a Red Hat Enterprise Linux 7.1, 7.2, or 7.3 system and redeploy it as a new compute node. For more information, see Deploying an advanced configuration with KVM or QEMU compute nodes.
    Notes:
    • The IP address and fully qualified domain name of the compute node must not change after reinstallation.
    • Before you deploy the new compute node, you should remove the old node and client on the deployment server. For more information, see Cleaning up a node for redeployment.
    • After you complete the migration of each compute node, manually update your topology file to add the compute node parts.
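The node and client cleanup mentioned in the notes can be sketched with standard knife subcommands; kvm-compute-node is a placeholder, and the Cleaning up a node for redeployment topic remains the authoritative procedure.

```shell
# Sketch: remove the old node and its client from the Chef server before
# redeployment. The || branches keep the script going if knife is
# unavailable or the objects are already gone.
NODE=kvm-compute-node
knife node delete "$NODE" -y 2>/dev/null \
    || echo "node $NODE not removed (already gone, or knife unavailable)"
knife client delete "$NODE" -y 2>/dev/null \
    || echo "client $NODE not removed (already gone, or knife unavailable)"
```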
  5. Repeat the previous steps until all of the compute nodes in the 4.2 environment are migrated.
    Note: You must migrate all the compute nodes to IBM Cloud Manager with OpenStack version 4.3, or you cannot apply new patches.
  6. If you completed the steps in SSH permissions error when resizing or migrating an instance (OpenStack) to make the instance resize function work in your 4.2 environment, complete the steps again to ensure that the instance resize function still works in your 4.3 environment.