Creating a database deployment on the cluster

You create a database deployment on your cluster from the Red Hat® OpenShift® web client.

About this task

You must have the Create service instances permission to complete this task.

Tip: If you are deploying a sandbox environment, you can use the Create with defaults option to deploy the database. However, you do not have control over the resources that are allocated to the database, the storage that is used, and so forth.

Procedure

  1. From the navigation, select Collect > My data.
  2. Open the Databases tab.
    Restriction: This tab is displayed only if you completed the previous steps to install the database service.
  3. Click Create a database.
  4. Select the database type and version. Click Next.
  5. Specify the number of cores to use.

    You are constrained by the total number of cores on the node. For example, if you created a node with 16 cores, you can specify up to 16 cores. If the database is deployed on multiple nodes, the resources must exist on each node.

  6. Specify the amount of memory to use.

    You are constrained by the total amount of memory on the node. For example, if you created a node with 512 GB of memory, you can specify up to 512 GB. If the database is deployed on multiple nodes, the resources must exist on each node.
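    Behind the scenes, the cores and memory that you specify typically become Kubernetes resource requests and limits on the database pods. The following fragment is only an illustrative sketch of that mapping; the container name and values are examples, not values taken from this procedure:

      spec:
        containers:
        - name: db                 # illustrative container name
          resources:
            requests:
              cpu: "4"             # the number of cores that you specified
              memory: 16Gi         # the amount of memory that you specified
            limits:
              cpu: "4"
              memory: 16Gi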

  7. Specify whether you want to deploy the database on dedicated nodes.

    For production workloads, it is recommended that you deploy the database on dedicated nodes to ensure that the database has sufficient resources. However, this option is available only if you used the --add-database-node option when you prepared your cluster.

    If you cannot select this option, or if you choose not to deploy the database on dedicated nodes, the nodes that host the database might also be used by other services.
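    Dedicated nodes are commonly implemented with node labels (and often taints). As an illustration only, a database pod that is pinned to labeled nodes might carry a node selector such as the following; the label key and value are hypothetical, not values that are used by this product:

      spec:
        nodeSelector:
          database-node: "true"    # illustrative label applied to the dedicated nodes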

  8. Select which existing namespace to use for the database deployment. Click Next.
    Namespaces divide cluster resources between multiple users, applications, or services to ensure a fair sharing of available resources. You can use a namespace to structure the resources that are assigned to the database. You can also use separate namespaces for extra security. For more information, see Namespaces in the Kubernetes documentation.
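    If the namespace that you want to use does not exist yet, a cluster administrator can create one ahead of time. A minimal manifest is shown here for illustration only; the name is an example, and on OpenShift namespaces are usually created as projects:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: db-deployments       # illustrative namespace name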
  9. Specify the storage to use for the database.
    The options that are available depend on:
    • The type of database you are deploying
    • The storage classes or persistent volume claims that are available on the cluster. For example, if you do not have a persistent volume claim, you cannot use existing storage.
    To create new storage by using a storage class template (NFS or GlusterFS persistent volumes):
      1. Select Create new storage.
      2. Select Use storage template and specify the following information:
        • The storage class to use.
        • The amount of storage to allocate to the persistent volume. You are constrained by the total amount of storage on the node. For example, if you created a 1 TB storage partition, you can specify up to 1000 GB.
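      Selecting a storage class template is essentially equivalent to creating a persistent volume claim against that class. A hedged sketch of such a claim follows; the claim name, class name, size, and access mode are illustrative:

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: db-data-claim            # illustrative claim name
        spec:
          storageClassName: managed-nfs  # the storage class that you selected
          accessModes:
          - ReadWriteMany
          resources:
            requests:
              storage: 500Gi             # the amount of storage that you allocated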
    To create new storage by specifying storage parameters:
      NFS persistent volume:
        1. Select Create new storage.
        2. Select Define storage parameters and specify the following information:
          • NFS as the storage volume type, if the database supports multiple types of storage.
          • The name to use for the persistent volume claim.
          • The amount of storage to allocate to the persistent volume. You are constrained by the total amount of storage on the node. For example, if you created a 1 TB storage partition, you can specify up to 1000 GB.
          • The reclaim policy to use for the persistent volume:
            • Retain means that the data remains on the storage volume and the volume cannot be used until an administrator reclaims it.
            • Recycle scrubs the data from the storage volume and makes the volume available for reuse.
            • Delete means that the data and the storage volume are deleted.
          • The IP address of your NFS server.
          • For HADR deployments, the IP address of a second NFS server for failover support.
          • The directory on the NFS server where you want to store the data.
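        For reference, these parameters correspond to the fields of an NFS-backed persistent volume. A minimal sketch with illustrative values follows (the second NFS server for HADR is not shown):

          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: db-nfs-pv                         # illustrative volume name
          spec:
            capacity:
              storage: 500Gi                        # the amount of storage to allocate
            accessModes:
            - ReadWriteMany
            persistentVolumeReclaimPolicy: Retain   # Retain, Recycle, or Delete
            nfs:
              server: 192.0.2.10                    # the IP address of your NFS server
              path: /exports/db-data                # the directory on the NFS server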
      GlusterFS persistent volume:
        1. Select Create new storage.
        2. Select Define storage parameters and specify the following information:
          • GlusterFS as the storage volume type, if the database supports multiple types of storage.
          • The name to use for the persistent volume claim.
          • The storage class to use for the persistent volume.
          • The amount of storage to allocate to the persistent volume. You are constrained by the total amount of storage on the node. For example, if you created a 1 TB storage partition, you can specify up to 1000 GB.
          • The reclaim policy to use for the persistent volume:
            • Retain means that the data remains on the storage volume and the volume cannot be used until an administrator reclaims it.
            • Recycle scrubs the data from the storage volume and makes the volume available for reuse.
            • Delete means that the data and the storage volume are deleted.
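        Similarly, a GlusterFS-backed persistent volume might look like the following sketch; the names are illustrative, and the endpoints field refers to an Endpoints object that lists the GlusterFS nodes:

          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: db-gluster-pv                     # illustrative volume name
          spec:
            capacity:
              storage: 500Gi
            accessModes:
            - ReadWriteMany
            persistentVolumeReclaimPolicy: Retain   # Retain, Recycle, or Delete
            glusterfs:
              endpoints: glusterfs-cluster          # Endpoints object for the GlusterFS nodes
              path: db_volume                       # name of the GlusterFS volume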
      IBM Cloud Object Storage bucket:
        1. Select Create new storage.
        2. Select Use cloud object storage and specify the following information:
          • The bucket where you want to store the data.
          • The URL of your Cloud Object Storage. Do not include the https prefix in the URL; for example, s3-api.dal-us-geo.objectstorage.softlayer.net.
          • Your access key and secret key credentials for authenticating to your cloud storage.
      hostPath persistent volume:
        1. Select Create new storage.
        2. Select Define storage parameters and specify the following information:
          • hostPath as the storage volume type, if the database supports multiple types of storage.
          • The name to use for the persistent volume claim.
          • The amount of storage to allocate to the persistent volume. You are constrained by the total amount of storage on the node. For example, if you created a 1 TB storage partition, you can specify up to 1000 GB.
          • The reclaim policy to use for the persistent volume:
            • Retain means that the data remains on the storage volume and the volume cannot be used until an administrator reclaims it.
            • Recycle scrubs the data from the storage volume and makes the volume available for reuse.
            • Delete means that the data and the storage volume are deleted.
          • The fully qualified path on the host where you want to store the data.
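        For reference, a hostPath-backed persistent volume might look like the following sketch; the volume name and path are illustrative:

          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: db-hostpath-pv                    # illustrative volume name
          spec:
            capacity:
              storage: 500Gi
            accessModes:
            - ReadWriteOnce
            persistentVolumeReclaimPolicy: Retain   # Retain, Recycle, or Delete
            hostPath:
              path: /mnt/db-data                    # the fully qualified path on the host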
    To use existing storage:
      NFS, GlusterFS, or hostPath persistent volume:
        1. Select Use existing storage.
        2. (IBM® Db2® Event Store only) Ensure that Cluster storage is selected.
        3. Select the claim that you want to use.
      IBM Cloud Object Storage bucket:
        1. Select Use existing storage.
        2. Select Use cloud object storage and specify the following information:
          • The bucket where you want to store the data.
          • The URL of your Cloud Object Storage. Do not include the https prefix in the URL; for example, s3-api.dal-us-geo.objectstorage.softlayer.net.
          • Your access key and secret key credentials for authenticating to your cloud storage.
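      When you use existing storage, the deployment mounts a persistent volume claim that already exists in the selected namespace. As an illustration only, a pod references such a claim in its volumes section; the names are examples, not values from this procedure:

        volumes:
        - name: db-data
          persistentVolumeClaim:
            claimName: db-data-claim    # the existing claim that you selected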
  10. Click Next.
  11. Optional: Specify a new display name for the database.
  12. When the database is ready, select Submit connection for approval from the action menu.
    Important: The connection to the database is not available in the catalog until the request is approved by a user with Manage Catalog permissions (for example, a Data Steward).

What to do next

Ensure that a user with Manage Catalog permissions approves the request. The request appears on the Publish to Catalog Requests tab on their home page.

After the request is approved, the database is available on the Data connections page. You can use the connection when you run automated discovery to import, analyze, and classify data from the database.