Deploying multi-region support

In a multi-region cloud environment, you can set up two separate deployments that share the same OpenStack Keystone server but use different regions and different hypervisors. Each region needs its own controller; only Keystone is shared.

About this task

Use the following instructions to build your own multi-region cloud environment. These instructions assume that you are familiar with the instructions for deploying a single-region cloud environment. For more information about architecture for a multi-region environment, see the OpenStack information about Multi-site Architecture.
Note: Within a region, different types of hypervisors can coexist. The exception is that VMware and PowerVC hypervisors must be placed in separate regions.

This example uses two regions for the multi-region cloud environment; however, you can have more than two regions. Also, this example uses a single deployment server to manage all of the regions. However, you can use a separate deployment server for each region. If separate deployment servers are used, they must have the same version of IBM Cloud Manager with OpenStack installed, but are allowed to have different fix pack levels. For more information about updates and release upgrades in these configurations, see Best practices for maintaining a multi-region cloud or test cloud.

Procedure

  1. Create two directories to store the files for the multi-region cloud environment.
    One directory is used for region one and the second directory is used for region two.
  2. Create two environment files, which are copied from the example-ibm-os-single-controller-n-compute example environment.
    One file is used for region one and the second file is used for region two.

    If you are deploying a High Availability (HA) controller + n compute multi-region deployment, ensure that ibm-sce.service.enabled is set to false, because HA deployments do not support Self Service HA.
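Steps 1 and 2 can be sketched as follows. The directory and file names here are illustrative assumptions, not product defaults, and the stand-in file takes the place of the example-ibm-os-single-controller-n-compute environment file that ships with the product:

```shell
# Create one directory per region (step 1); names are illustrative.
mkdir -p region_one region_two

# Stand-in for the product's example environment file:
echo '{ "name": "example-ibm-os-single-controller-n-compute" }' \
  > example-ibm-os-single-controller-n-compute.json

# Copy the example environment once per region (step 2); file names are illustrative.
cp example-ibm-os-single-controller-n-compute.json region_one/environment_region_one.json
cp example-ibm-os-single-controller-n-compute.json region_two/environment_region_two.json
```

Each copy is then edited independently in step 4, so the two regions can diverge in region name, endpoints, and passwords while starting from the same baseline.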

  3. Create two topology files that are based on the hypervisor that is used in each region.
    One file is used for region one and the second file is used for region two.
  4. Update the environment and topology files for each region to support the multi-region cloud environment.
    1. In each region’s environment file, update openstack.region to be the unique name for each region. The region name must not contain spaces or special characters.
      Note: For the Keystone configuration, the UUID authentication strategy is the default authentication strategy. The PKI authentication strategy must not be used for a multi-region topology.
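For example, the region name override in an environment file might look like the following snippet, where RegionOne is an illustrative name (use your own unique name per region):

```json
"override_attributes": {
  "openstack": {
    "region": "RegionOne"
  }
}
```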
    2. In each region’s environment file, update the openstack.endpoints.identity-api.host, openstack.endpoints.identity-admin.host, and openstack.endpoints.identity-internal.host attributes to specify the IP address of the node that contains the shared OpenStack Keystone server. In a non-high-availability multi-region configuration, this is the IP address of the controller in the first region of the multi-region cloud environment. In a high availability multi-region configuration, it is the virtual IP address associated with the HA controller nodes (primary and secondary) in the first region of the multi-region cloud environment.
      The following example JSON snippet is added to the environment file inside override_attributes > openstack > endpoints, where X.X.X.X is the management IP address, described above, of the node where the shared OpenStack Keystone server is located.
             "identity-api": {
               "host": "X.X.X.X"
             },
             "identity-admin": {
               "host": "X.X.X.X"
             },
             "identity-internal": {
               "host": "X.X.X.X"
             }
      
      The other endpoint host attributes (openstack.endpoints.host, openstack.endpoints.bind-host, openstack.endpoints.mq.host, openstack.endpoints.db.host, and so on) must refer to the IP address of the controller for the region that you are installing. In a non-high-availability multi-region configuration, this is the IP address of the single controller node. In a high availability multi-region configuration, it is the virtual IP address associated with the HA controller nodes (primary and secondary) of the region. Note that this virtual address must be different for each region.
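Putting the two groups of attributes together, a region's endpoint overrides might look like the following sketch, where X.X.X.X is the shared Keystone address and Y.Y.Y.Y is the address of this region's controller. The exact set of endpoint attributes you need to override depends on your deployment:

```json
"override_attributes": {
  "openstack": {
    "endpoints": {
      "host": "Y.Y.Y.Y",
      "bind-host": "Y.Y.Y.Y",
      "mq": { "host": "Y.Y.Y.Y" },
      "db": { "host": "Y.Y.Y.Y" },
      "identity-api": { "host": "X.X.X.X" },
      "identity-admin": { "host": "X.X.X.X" },
      "identity-internal": { "host": "X.X.X.X" }
    }
  }
}
```

In region one, X.X.X.X and Y.Y.Y.Y typically resolve to the same controller (or virtual address), since that region hosts the shared Keystone server; in every other region they differ.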
    3. If you want to use the IBM Cloud Manager - Self Service user interface to manage your multi-region cloud environment, then each region’s environment file must set ibm-sce.service.enabled to true. In addition, only one region’s topology file can contain a node with the ibm-sce-node role. That is, only one self-service interface is supported in a multi-region cloud environment. The self-service interface can be installed in any region.
      If you do not want to use the self-service interface to manage your multi-region cloud environment, then each region's environment file must set ibm-sce.service.enabled to false, and neither region's topology file can contain a node with the ibm-sce-node role. If you are deploying a High Availability (HA) controller + n compute multi-region deployment, ensure that ibm-sce.service.enabled is set to false, because HA deployments do not support the IBM Cloud Manager - Self Service user interface.
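For example, with the self-service interface enabled, each region's environment file would carry an override for the ibm-sce.service.enabled attribute mentioned above (set it to false instead if you do not use the interface, or if you deploy HA):

```json
"override_attributes": {
  "ibm-sce": {
    "service": {
      "enabled": true
    }
  }
}
```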
    4. If you have a High Availability (HA) controller + n compute multi-region deployment and you are configuring a region other than the first (for example, the second region), set ibm-openstack.first_region to false. As a result, Keystone is not installed in that region and is not managed by Pacemaker there.
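In that case, the second region's environment file would include an override like the following for the ibm-openstack.first_region attribute:

```json
"override_attributes": {
  "ibm-openstack": {
    "first_region": false
  }
}
```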
    5. Customize the passwords and secrets for each region. Since each region uses the same OpenStack Keystone server, the data bag items that are related to OpenStack Keystone must have the same passwords in all regions. Other passwords and secrets can be unique for each region. For more information about customizing passwords and secrets, see Customizing passwords and secrets.

      The following passwords and secrets must be the same between the regions. For more information on the data bags that are referenced, see Data bags.

      • Shared passwords and secrets in the secrets data bag:
          openstack_identity_bootstrap_token
          openstack_simple_token
      • All passwords and secrets in the service_passwords data bag are shared.
      • Shared passwords and secrets in the user_passwords data bag:
          admin
          sceagent
          heat_stack_admin
      Note: You can use the following command to determine the current passwords and secrets for the first region. The command downloads and decrypts the data bags that contain the passwords and secrets for the first region and stores them in the data_bags directory. The directory also contains a passwords and secrets JSON file, region-one-environment-name_passwords_file.json, that can be used to set the passwords and secrets for the second region. Ensure that you remove the data_bags directory after you are done using it.
      $ knife os manage get passwords --topology-file topology_region_one.json data_bags
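After extracting the passwords for region one, you can sanity-check that the Keystone-related secrets match between the two regions' passwords files before deploying region two. The following is a minimal sketch with stand-in file names and token values; the real files are produced by the knife command above:

```shell
# Stand-in for the secrets extracted from region one; real values come from
# "knife os manage get passwords".
cat > region_one_passwords.json <<'EOF'
{
  "secrets": {
    "openstack_identity_bootstrap_token": "tok1",
    "openstack_simple_token": "tok2"
  }
}
EOF
# Stand-in for region two's passwords file (here, simply a matching copy).
cp region_one_passwords.json region_two_passwords.json

# Compare the secrets that must be identical across regions.
for key in openstack_identity_bootstrap_token openstack_simple_token; do
  a=$(sed -n "s/.*\"$key\": *\"\([^\"]*\)\".*/\1/p" region_one_passwords.json)
  b=$(sed -n "s/.*\"$key\": *\"\([^\"]*\)\".*/\1/p" region_two_passwords.json)
  if [ "$a" = "$b" ]; then
    echo "$key matches"
  else
    echo "$key DIFFERS" >&2
  fi
done
```

The same comparison applies to the service_passwords data bag and the shared user_passwords entries (admin, sceagent, heat_stack_admin) listed above.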
    6. The remaining environment and topology file updates are normal updates for a stand-alone deployment. However, the same database service (DB2, MariaDB, or MySQL) and messaging service (Qpid) must be used for each region.
    7. Finally, add the ibm-sce-add-cloud-node role to the region two topology file, topology_region_two.json, before you deploy the second region.
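As an illustration, the role might appear in a node's run list in topology_region_two.json as follows. This is a simplified sketch, not the complete topology schema; the node name and the other role shown are illustrative, so check the structure against your generated topology file:

```json
"nodes": [
  {
    "fqdn": "controller2.example.com",
    "runlist": [
      "role[ibm-os-single-controller-node]",
      "role[ibm-sce-add-cloud-node]"
    ]
  }
]
```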
  5. Deploy the topology for the region that contains the shared OpenStack Keystone server:
    $ knife os manage deploy topology topology_region_one.json
  6. Deploy the topology for the remaining region:
    $ knife os manage deploy topology topology_region_two.json
  7. (Optional) Check the detailed status of the IBM Cloud Manager with OpenStack services that are deployed.
    $ knife os manage services status --topology-file your-topology-name.json

What to do next

For more information about managing IBM Cloud Manager with OpenStack services, see Managing IBM Cloud Manager with OpenStack services.