Deploying a prescribed configuration with KVM or QEMU compute nodes (HA controller nodes)

Deploy the components that are necessary to create a cloud environment with KVM or QEMU compute nodes and highly available (HA) controller nodes by using a prescribed cloud configuration.

Before you begin

Before you begin, complete the following prerequisite steps.

About this task

The following information provides details about this prescribed configuration.
Table 1. Summary of prescribed configuration
  Component                           Configuration
  OpenStack components                Identity, Image, Network, Compute, Orchestration, Block Storage, Telemetry, and Dashboard
  OpenStack networking                Neutron with the ML2 plug-in using the Open vSwitch mechanism driver
  OpenStack network types supported   Local, GRE, or VXLAN; Flat or VLAN (supported only if all HA controller nodes have a data network)
  OpenStack compute scheduler         Compute scheduler filters (default) or IBM Platform Resource Scheduler
  OpenStack Block Storage driver      IBM Storwize® Cinder driver
  Database                            IBM DB2®
  Message queue                       RabbitMQ
  Hypervisor type                     KVM or QEMU
  IBM Cloud Manager - Self Service    Enabled (default) or disabled
Note: You must create your initial OpenStack networks after deployment. For more information, see Creating initial networks.
Use the following procedure to deploy the topology to your node systems.

Procedure

  1. Log in to the deployment system as the root user. This is the system where IBM Cloud Manager with OpenStack was installed.
  2. Create a directory to store the files for the topology that you deploy. Change your-deployment-name to the name for your deployment.
    $ mkdir your-deployment-name
    $ chmod 700 your-deployment-name
    $ cd your-deployment-name
  3. Copy the example-ha-controller-n-compute-kvm-cloud.yml example cloud file as the base structure for your cloud deployment and rename it for your cloud environment.
    Run the following command to copy the example cloud file and rename it for your cloud.
    Note: This step assumes the default IBM Cloud Manager with OpenStack installation path on the deployment server, that is /opt/ibm/cmwo.
    In the following command, change your-cloud.yml to the name of your cloud.
    $ cp /opt/ibm/cmwo/cli/config/example-ha-controller-n-compute-kvm-cloud.yml your-cloud.yml
  4. Change the required YAML attributes in your cloud file, your-cloud.yml.
    Note: There must be a space between the colon and the field value; for example, name: cloudname. Otherwise, you receive an error.
    • Cloud Information (cloud): Customize the cloud information.
      1. name: Set the name for your cloud. The name cannot contain spaces or special characters. This name is also used as the OpenStack region name.
      2. password: Set the cloud administrator (admin) user's password.
    • HA Information (ha): Customize the HA information.
      1. virtual_ip_address: Set the cloud virtual IP address that is used to connect to the cloud services. The virtual IP address is an available IP address in your infrastructure that can float between the HA controller nodes. The IBM Cloud Manager with OpenStack HA services will manage the location and availability of the virtual IP address. A host name cannot be used for the virtual IP address.
    • Node Information (nodes): Customize the information for each node system in your cloud. Your cloud must have at least three HA controller nodes.
      1. name and description: Leave these set to the default values provided.
      2. fqdn: Set to the fully qualified domain name of the node system. The deployment system must be able to SSH to the node by using this fully qualified domain name. You can also set it to the public IP address, private IP address, or host name. Add one fully qualified domain name per line for each node system. The node information is applied to all nodes in the list. You can copy the node section for nodes that do not share the same node information. It is recommended that the value correspond to the management network interface for the node.
      3. password or identity_file: Set to the appropriate SSH root user authentication for the node system. You can use either a password or an SSH identity file for authentication.
      4. nics.management_network: Set to the management network interface card for the node system. This network is used for IBM Cloud Manager with OpenStack communication between the nodes in the cloud. The fully qualified domain name setting for the node must resolve to the IP address of this network. The default is eth0. All HA controller nodes must have the same value for this field. If you specify an HA virtual public IP address, the management network interface card for the node system must be an available network interface card that is not the public network interface card (ibm-openstack.ha.virtualip_public.interface) of the node system.
      5. nics.data_network: Set to the data network interface card for the node system. The default is eth1. If the node system does not have an available network interface card that can be used as a data network, then set to ~. Do not set to the same value as nics.management_network. Also, do not set to a network interface card that provides an alternative management network or an external network for the node, for example, a private or public IP address. A data network is required to use VLAN or Flat networks in your cloud. If one of the HA controller nodes does not have a data network, then set to ~ for all HA controller nodes.
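    Taken together, the required attributes might be sketched as follows. All names, passwords, host names, and addresses below are placeholders, and the exact layout is defined by the example file that you copied in step 3, so adjust your copy rather than typing this from scratch:

    ```yaml
    cloud:
      name: mycloud              # placeholder; no spaces or special characters
      password: passw0rd         # placeholder admin password
    ha:
      virtual_ip_address: 192.0.2.100   # placeholder IP that can float between controllers
    nodes:
      - name: ha-controller             # leave name and description at their defaults
        description: HA controller nodes
        fqdn:
          - controller1.example.com     # one fully qualified domain name per line
          - controller2.example.com
          - controller3.example.com     # at least three HA controller nodes
        password: rootpassword          # or use identity_file instead
        nics:
          management_network: eth0      # same value on all HA controller nodes
          data_network: eth1            # set to ~ if no data network is available
    ```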
  5. Optional: Complete any optional customization by changing the appropriate YAML attributes in your cloud file, your-cloud.yml.
    1. Optional: Cloud information (cloud): Customize the cloud information.
      • self_service_portal: IBM Cloud Manager with OpenStack features an easy-to-use IBM Cloud Manager - Self Service user interface for cloud operations. You can disable the IBM Cloud Manager - Self Service feature by setting the self_service_portal attribute to disabled.
      • platform_resource_scheduler: The default scheduler for the HA topology uses the OpenStack compute scheduler filters, and requires no additional configuration. IBM Cloud Manager with OpenStack offers an enhanced compute scheduler, IBM Platform Resource Scheduler. To enable this enhanced compute scheduler in an HA topology, additional work is required. All controller nodes must be configured with access to a shared file system, and this attribute must be set to enabled. For more information about enabling Platform Resource Scheduler for an HA topology, see Customizing the scheduler.
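      As a sketch, assuming these optional attributes sit under the cloud section of the file (placeholder values shown), disabling the self-service portal and enabling Platform Resource Scheduler might look like:

      ```yaml
      cloud:
        name: mycloud                          # placeholder
        self_service_portal: disabled          # default is enabled
        platform_resource_scheduler: enabled   # requires a shared file system on all controller nodes
      ```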
    2. Optional: Environment Information (environment): Customize the environment information.
      override_attributes
      • ntp.servers: Set to the NTP servers that are accessible to your deployment. The list of NTP servers must be comma-separated, for example, [your.0.ntpserver.com, your.1.ntpserver.com]. The default is [0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org, 3.pool.ntp.org].
      • ibm-openstack.prs.ha.shared_dir: Set this to the directory on each controller node that is mapped to a shared file system (default is /var/opt/ibm/prs). This is only needed if enabling Platform Resource Scheduler in an HA topology. For more information, see Customizing the scheduler.
      default_attributes
      • openstack.block-storage.volume.multi_backend.storwize-1.san_ip: Set this to the IP of your Storwize server.
      • openstack.block-storage.volume.multi_backend.storwize-1.san_private_key: See your Storwize documentation for instructions on how to create a public/private key pair and assign it to a user on your Storwize system. If you already have a key pair that is assigned to the Storwize user that you want to use, locate your copy of that private key. Then, as root, copy the private key to the same location on each controller with at least 644 (rw-r--r--) permissions (use chmod 644 <priv_key_file> if necessary). It is recommended to place the key in the root directory so that no directory permissions prevent the key from being read. Set this attribute to the absolute path of the private key on each controller. For example, if you place the private key in the root directory with the file name v7000_rsa, set this attribute to '/v7000_rsa'. After the HA deployment succeeds, run the following two commands on each controller to improve the security of the private key. These commands cannot be run before the deployment because the "cinder" user and group do not exist yet:
          chown cinder:cinder /v7000_rsa
          chmod 400 /v7000_rsa
      • openstack.block-storage.volume.multi_backend.storwize-1.storwize_svc_volpool_name: If you have not already done so, create a volume pool on your Storwize server, and then set this attribute to the volume pool name.
      • ibm-openstack.ha.virtualip.cidr_netmask: Set to the CIDR network mask of your HA virtual IP address. The default is 24. The HA virtual IP address was set in the virtual_ip_address cloud attribute.
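      The environment attributes above might be sketched as follows. The Storwize address, key path, and pool name are placeholders, and this sketch assumes that the dotted attribute names are written in nested YAML form; whether your example file uses nested or flat dotted keys depends on the file you copied, so follow its existing layout:

      ```yaml
      environment:
        override_attributes:
          ntp:
            servers: [your.0.ntpserver.com, your.1.ntpserver.com]   # comma-separated list
        default_attributes:
          openstack:
            block-storage:
              volume:
                multi_backend:
                  storwize-1:
                    san_ip: 192.0.2.50                    # placeholder Storwize address
                    san_private_key: '/v7000_rsa'         # same absolute path on each controller
                    storwize_svc_volpool_name: cloudpool  # placeholder volume pool name
          ibm-openstack:
            ha:
              virtualip:
                cidr_netmask: 24                          # netmask of the HA virtual IP
      ```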
    3. Optional: Specify an HA virtual public IP address. When an HA virtual public IP address is specified, the public URL for each cloud service is configured to use this IP address.
      default_attributes
      • ibm-openstack.ha.virtualip_public.interface: Set to the public network interface card for the node system. This network is used for public cloud services communication in the cloud. All HA controller nodes must have the same public network interface card name.
      • ibm-openstack.ha.virtualip_public.cidr_netmask: Set to the CIDR network mask of your HA virtual public IP address. The default is 24.
      override_attributes
      • ibm-openstack.ha.virtualip_public.address: Set the cloud virtual IP public address that is used to connect to the cloud services public URL. The virtual IP public address is an available public IP address in your infrastructure that can float between the HA controller nodes. The IBM Cloud Manager with OpenStack HA services manage the location and availability of the virtual IP public address. A host name cannot be used for the virtual IP public address.
      • openstack.telemetry.service-credentials.insecure: This attribute is used as a workaround to configure the Ceilometer compute agent to use the internalURL (which uses the HA virtual private IP address) to connect to OpenStack services. Set the value to false\nos_endpoint_type=internalURL.
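      Assuming the same nested YAML layout as the rest of the environment section (interface name and addresses are placeholders), the HA virtual public IP attributes might be sketched as:

      ```yaml
      environment:
        default_attributes:
          ibm-openstack:
            ha:
              virtualip_public:
                interface: eth2           # placeholder public NIC; same name on all controllers
                cidr_netmask: 24
        override_attributes:
          ibm-openstack:
            ha:
              virtualip_public:
                address: 198.51.100.10    # placeholder public IP that can float between controllers
          openstack:
            telemetry:
              service-credentials:
                insecure: "false\nos_endpoint_type=internalURL"   # literal value, including \n
      ```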
  6. Deploy your cloud.
    $ knife os manage deploy cloud your-cloud.yml
    Note: This command generates a topology file and other related files for your deployment and stores them in the same directory as your cloud file, your-cloud.yml. The cloud file is no longer needed after the deployment completes and can be removed. The generated files are only used if you must update your cloud.
    $ rm your-cloud.yml
  7. After the deployment is complete, the IBM Cloud Manager with OpenStack services are ready to use.
    The IBM Cloud Manager - Dashboard is available at https://x.x.x.x/auth/login/ and the web interface for IBM Cloud Manager - Self Service is available at https://x.x.x.x:8080/login.html, where x.x.x.x is the cloud virtual IP address or virtual IP public address. Log in with "admin" and the password you customized earlier.

    For more information about managing IBM Cloud Manager with OpenStack services, see Managing IBM Cloud Manager with OpenStack services.

Results

You are ready to start using your cloud environment. To continue, see Using your cloud environment.
Note: For HA cloud deployments, extra steps are required when creating images. For more information, see Configuring image storage for HA controller nodes.