Deploying with block storage (Cinder) nodes

Deploy the components that are necessary to create a cloud environment with extra block storage (Cinder) nodes.

About this task

One or more nodes in your cloud topology can be configured as block storage nodes. The controller node is automatically configured as a block storage node. You can define more nodes by using the ibm-os-block-storage-node role. A block storage node runs the Cinder volume and Ceilometer compute services; it does not run other controller or compute node services.
A block storage node must meet the following requirements:
  • Operating System: Red Hat Enterprise Linux® 7.1, 7.2, or 7.3
  • Architecture: x86_64 or ppc64
For more information about storage (Cinder) drivers and the compute node hypervisors that they support, see Configuring Cinder drivers.

When you define your cloud topology, you must define attributes for each block storage node. For example, when you configure the block storage nodes to use the LVM iSCSI driver, you must define the iSCSI IP address for each node. Depending on how you want to allocate volumes across the block storage nodes, you might also want to define a unique volume backend name for each node.
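As a sketch of that per-node uniqueness, a second block storage node might use an attribute file like the following, with its own iSCSI IP address and a distinct backend name. The address and the backend name lvm-2 are placeholders, not required values:

```json
{
  "openstack" : {
    "block-storage" : {
      "volume" : {
        "iscsi_ip_address" : "192.0.2.21",
        "multi_backend" : {
          "lvm-2" : {
            "volume_driver" : "cinder.volume.drivers.lvm.LVMISCSIDriver",
            "volume_backend_name" : "lvm-2"
          }
        }
      }
    }
  }
}
```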

The general process for including block storage nodes in your cloud topology is as follows:
  • Define a controller +n compute or distributed database topology.
  • Define one or more nodes in your topology as block storage nodes and define the node-specific block storage attributes for each block storage node.
    Note: For the HA controller +n compute topology, extra block storage nodes are not supported. The controller nodes are the only block storage nodes in the HA controller +n compute topology.
  • Deploy your topology.
Use the following instructions to define block storage nodes in your topology.

Procedure

  1. Ensure that your environment overrides openstack.endpoints.db.host with the IP address of the database server (your controller node or distributed database node).
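    As an illustration, such an override in a Chef environment file might look like the following sketch; the IP address is a placeholder for your database server:

    ```json
    {
      "override_attributes": {
        "openstack": {
          "endpoints": {
            "db": {
              "host": "192.0.2.10"
            }
          }
        }
      }
    }
    ```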
  2. Create a node attribute file for each block storage node. Each file contains the attributes that are specific to the volume driver that the node uses.

    The following example node attribute file, storage-node-1-attributes.json, uses the LVM volume driver with its default volume group name, cinder-volumes.

    Note: For Red Hat Enterprise Linux and Fedora operating systems, specify the iscsi_helper attribute:
    {
      "openstack" : {
        "block-storage" : {
          "volume" : {
            "create_volume_group" : true,
            "create_volume_group_type" : "file",
            "iscsi_ip_address" : "x.x.x.x",
            "multi_backend" : {
              "lvm-1" : {
                "volume_driver" : "cinder.volume.drivers.lvm.LVMISCSIDriver",
                "volume_backend_name" : "lvm-1",
                "iscsi_helper": "lioadm"
              }
            }
          }
        }
      }
    }
    Note: For other operating systems, omit the iscsi_helper attribute:
    {
      "openstack" : {
        "block-storage" : {
          "volume" : {
            "create_volume_group" : true,
            "create_volume_group_type" : "file",
            "iscsi_ip_address" : "x.x.x.x",
            "multi_backend" : {
              "lvm-1" : {
                "volume_driver" : "cinder.volume.drivers.lvm.LVMISCSIDriver",
                "volume_backend_name" : "lvm-1"
              }
            }
          }
        }
      }
    }
  3. Edit your topology file, my-topology.json, and add nodes for the block storage nodes.
    The run_order_number for each block storage node must be greater than that of the controller node and the distributed database node (if present). The run list for the node must contain the single role ibm-os-block-storage-node. For each node, identify the attribute file that you created in the attribute_file value.
    {
      "name": "CHANGEME",
      "environment": "CHANGEME",
      "secret_file":"/opt/ibm/cmwo/chef-repo/data_bags/example_data_bag_secret",
      "nodes": [
        {
          "fqdn": "controller.private.cloud",
          . . .
        },
        {
          "fqdn": "storage-node-1.private.cloud",
          "password": "CHANGEME",
          "run_order_number": 3,
          "quit_on_error": true,
          "runlist": [
             "role[ibm-os-block-storage-node]"
          ],
          "chef_client_options": "",
          "user": "root",
          "attribute_file": "storage-node-1-attributes.json"
        },
        {
          "fqdn": "compute-node-1.private.cloud",
          . . .
        }
      ]
    }
  4. Finish deploying the topology as described in the directions for your hypervisor.