Deploying Db2 Warehouse using the Db2uInstance custom resource
After the Db2® Operator is installed, the Db2uInstance custom resource (CR) provides the interface that is required to deploy Db2 Warehouse. This CR is backed by a Red Hat OpenShift custom resource definition (CRD).
- Access the Db2uInstance custom resource
- Configure the Db2 Warehouse version
- Configure the database name
- Deploy on a dedicated node
- Configure memory and CPU consumption
- Configure storage
- Use existing persistent storage claims
- Enabling 4K support
- Internal LDAP
- Disable the Node Port service
- Deploy a Db2 Warehouse instance with limited privileges
- Deploy Db2 Warehouse with a custom service account
- Specifying a license certificate key
- Override the default database settings
- Override the Db2 Warehouse database configuration (dbConfig) settings
- Set the Db2 registry variable
- Example of a complete Db2uInstance CR
- Deploying a Db2 Warehouse MPP instance
Access the Db2uInstance custom resource
You can access the Db2uInstance CR in any of the following ways (example commands follow this list):
- Through the Red Hat® OpenShift® console.
- Through the Red Hat OpenShift command-line tool (oc).
- Through the command-line tool of a Kubernetes cluster (kubectl).
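For example, after saving the CR to a file, you can create and inspect the instance with the OpenShift CLI. The file name, namespace, and instance name in this sketch are placeholders:
oc apply -f db2wh-instance.yaml -n <namespace>
oc get db2uinstance -n <namespace>
oc describe db2uinstance <instance-name> -n <namespace>
On a Kubernetes cluster, you can run the same commands with kubectl instead of oc.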
The following sections cover CR options that can be included in the YAML file. An example of a completed Db2uInstance CR is also included.
Configure the Db2 Warehouse version
Specifies the version of Db2 Warehouse to deploy, including the container layer release number (for example, s11.5.9.0-cn2).
spec:
  version: "s11.5.9.0-cn<container layer release number>"
Configure the database name
Specifies the name of the desired Db2 Warehouse database.
spec:
  environment:
    dbType: db2wh
    databases:
    - name: BLUDB
Deploy on a dedicated node
Specifies node labels and tolerations that target specific nodes for a dedicated deployment. Deploying on dedicated nodes is a best practice for production environments. See Setting up dedicated nodes for your Db2 deployment.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: database
            operator: In
            values:
            - db2u-affinity
  tolerations:
  - key: "database"
    operator: "Equal"
    value: "db2u-affinity"
    effect: "NoSchedule"
Configure memory and CPU consumption
When you deploy Db2 Warehouse by using the Db2 Operator, you can assign a CPU and memory profile. The profile sets the CPU and memory values for the container that runs the Db2 Common SQL Engine.
spec:
  podTemplate:
    db2u:
      resource:
        db2u:
          limits:
            cpu: 5
            memory: 8Gi
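After the pods are running, one way to check actual consumption against these limits is the cluster metrics view, assuming metrics are available in your cluster. The namespace is a placeholder:
oc adm top pods -n <namespace>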
Configure storage
Db2 Warehouse deployments use the following storage volumes:
- meta: shared storage volume for Db2 metadata.
- data: non-shared storage volume for database storage.
- backup: shared storage volume for backing up the database (optional).
- activelogs: non-shared storage volume for transaction logs (optional). This volume is supported only for single-node (SMP) deployments, not MPP. For more information, see Creating separate storage for database transaction logs.
- tempts: non-shared storage volume for temporary table spaces (optional). For more information, see Creating separate storage for temporary table spaces.
- archivelogs: shared storage volume for archive logs (optional). For more information, see Creating separate storage for database archive logs.
The storage entries in your CR depend on whether you are creating new storage, using existing storage, or using template storage.
spec:
  storage:
  - name: meta
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: activelogs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: backup
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: tempts
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: archivelogs
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
If you are deploying Q Replication, note that these custom resource (CR) settings work only on a Db2uInstance for Db2 Warehouse SMP or MPP. They do not work with Q Replication deployed on a Db2uInstance for online transaction processing (OLTP).
See Certified storage options for Db2 for a full list of supported storage solutions.
Use existing persistent storage claims
Existing persistent volume claims (PVCs) can also be used for any of the storage categories.
storage:
- claimName: <meta-pvc-name>
  name: meta
  spec:
    resources: {}
  type: existing
- claimName: <data-pvc-name>
  name: data
  spec:
    resources: {}
  type: existing
- claimName: <backup-pvc-name>
  name: backup
  spec:
    resources: {}
  type: existing
- claimName: <activelogs-pvc-name>
  name: activelogs
  spec:
    resources: {}
  type: existing
- claimName: <tempts-pvc-name>
  name: tempts
  spec:
    resources: {}
  type: existing
- claimName: <archivelogs-pvc-name>
  name: archivelogs
  spec:
    resources: {}
  type: existing
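To find the claim names to reference, you can list the persistent volume claims in the target namespace. The namespace is a placeholder:
oc get pvc -n <namespace>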
Enabling 4K support
When your Db2 Warehouse on OpenShift deployment is configured to use either OpenShift Data Foundation (ODF) or Portworx container storage (PX), ensure that you have enabled 4K support.
spec:
  environment:
    ...
    instance:
      registry:
        DB2_4K_DEVICE_SUPPORT: "ON"
Internal LDAP
To disable the internal LDAP server, set enabled to false:
spec:
  environment:
    authentication:
      ldap:
        enabled: false
Note: If you are deploying a Db2uInstance on version s11.5.8.0-cn1 with LDAP disabled and have an existing instance with LDAP enabled, or you are trying to disable LDAP on s11.5.8.0-cn2, see this troubleshooting doc first: Recovering a Db2u deployment from failure due to missing user-mgmt secret.
External LDAP
To authenticate against an external LDAP server instead, provide the server details:
spec:
  environment:
    authentication:
      ldap:
        enabled: true
        admin: bluadmin
        externalLdap:
          server: "my-ldap-server.example.com"
          port: "389"
          userGroup: usergrp
          adminGroup: admingrp
Disable the Node Port service
To disable the NodePort service for the instance, add the following setting to the environment section:
environment:
  disableNodePortService: true
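To confirm the result after deployment, you can list the services in the namespace and check that none of the instance's services are of type NodePort. The namespace is a placeholder:
oc get svc -n <namespace>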
Deploy a Db2 Warehouse instance with limited privileges
The following example shows how to deploy an instance with limited privileges, without setting unsafe sysctls:
spec:
  account:
    securityConfig:
      privilegedSysctlInit: false
The following example shows how to set limited privileges by setting IPC kernel parameters on the nodes:
spec:
  account:
    securityConfig:
      privilegedSysctlInit: false
  advOpts:
    hostIPC: true
Deploy Db2 Warehouse with a custom service account
A service account is an OpenShift Container Platform account that allows a component to directly access the API. You can set parameters in your CR to create the Db2 Warehouse instance with a custom service account.
spec:
  account:
    serviceAccountName: ${SERVICE_ACCOUNT}
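The service account must already exist in the same namespace as the instance. As a minimal illustration, you could create one as shown below; the account name and namespace are placeholders, and any additional role bindings or security context constraints that your environment requires are not shown:
oc create serviceaccount <service-account-name> -n <namespace>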
Specifying a license certificate key
Specify a license certificate key for one of the following editions:
- Developer edition
- Enterprise edition
The license file for Db2 Warehouse Enterprise Edition is dashDB_c.lic.
1. Encode your Db2 Warehouse license to base64 by running the following commands:
LICENSE_KEY="./dashDB_c.lic"
cat ${LICENSE_KEY} | base64 | tr -d '\n'
2. The following example shows how to set the license key for your Db2 Warehouse instance:
spec:
  license:
    value: <ENCODED STRING FROM STEP 1 GOES HERE>
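If the instance already exists, one possible way to apply the encoded key is to patch the CR in place; the instance name, namespace, and encoded value are placeholders, and whether a running instance picks up the change depends on your operator version:
oc patch db2uinstance <instance-name> -n <namespace> --type merge -p '{"spec":{"license":{"value":"<encoded string>"}}}'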
Override the default database settings
The following example shows how to override the default settings that are used when the database is created:
spec:
  environment:
    databases:
    - name: BLUDB
      settings:
        dftTableOrg: "COLUMN"
        dftPageSize: "32768"
        encrypt: "NO"
        codeset: "UTF-8"
        territory: "US"
        collation: "IDENTITY"
Override the Db2 Warehouse database configuration (dbConfig) settings
The following example shows how to override individual database configuration (dbConfig) parameters:
spec:
  environment:
    databases:
    - name: BLUDB
      dbConfig:
        LOGPRIMARY: "50"
        LOGSECOND: "35"
        APPLHEAPSZ: "25600"
        STMTHEAP: "51200 AUTOMATIC"
Set the Db2 registry variable
The following example shows how to set Db2 registry variables for the instance:
spec:
  environment:
    instance:
      registry:
        DB2_ATS_ENABLE: "NO"
        DB2_OBJECT_STORAGE_SETTINGS: "OFF"
        DB2_DISPATCHER_PEEKTIMEOUT: "2"
        DB2_COMPATIBILITY_VECTOR: "ORA"
Example of a complete Db2uInstance CR
The following example CR deploys a Db2 Warehouse instance with:
- Database name: BLUDB.
- 4 CPUs.
- 16 GB of memory.
- 5 storage volumes (meta, data, backup, archivelogs, and tempts).
- Db2 4K device support enabled.
- LDAP disabled.
- Privileged instance.
apiVersion: db2u.databases.ibm.com/v1
kind: Db2uInstance
metadata:
  name: db2wh-example
spec:
  account:
    privileged: true
  environment:
    authentication:
      ldap:
        enabled: false
    databases:
    - dbConfig:
        APPLHEAPSZ: "25600"
        LOGPRIMARY: "50"
        LOGSECOND: "35"
        STMTHEAP: "51200 AUTOMATIC"
      name: BLUDB
    dbType: db2wh
    instance:
      dbmConfig:
        DIAGLEVEL: "2"
      registry:
        DB2_4K_DEVICE_SUPPORT: "ON"
        DB2_ATS_ENABLE: "NO"
        DB2_DISPATCHER_PEEKTIMEOUT: "2"
        DB2_OBJECT_STORAGE_SETTINGS: "OFF"
    partitionConfig:
      dataOnMln0: true
      total: 2
      volumePerPartition: true
  license:
    accept: true
  nodes: 1
  podTemplate:
    db2u:
      resource:
        db2u:
          limits:
            cpu: 4
            memory: 16Gi
  storage:
  - name: meta
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: backup
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: tempts
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: archivelogs
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  version: s11.5.9.0-cn2
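To deploy this example, you could save it to a file and apply it, then watch the instance until the operator reports it as ready. The file name and namespace are placeholders, and the exact status columns depend on the operator version:
oc apply -f db2wh-example.yaml -n <namespace>
oc get db2uinstance db2wh-example -n <namespace> -w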
Deploying a Db2 Warehouse MPP instance
Db2 Warehouse can be deployed in either a single-node (SMP) or multi-node deployment designed for massively parallel processing (MPP). In MPP deployments, Db2 Warehouse segments a query into smaller tasks that are then spread across multiple database partitions.
You control these parameters in your CR to create a Db2 Warehouse MPP instance. The nodes value specifies the number of nodes, and environment.partitionConfig.total specifies the total number of MLNs (multiple logical nodes). By default, a single-node (SMP) deployment has nodes: 1 with no value set for environment.partitionConfig.total.
The following example creates a Db2 Warehouse MPP instance of 3 nodes with 4 multiple logical nodes per node. The value in environment.partitionConfig.total must be evenly divisible by nodes. The volumePerPartition value can also be set to true so that each multiple logical node gets its own unique volume. If volumePerPartition is false, the partitions share either the data volume per pod or the meta volume across all Db2 Warehouse pods.
spec:
  version: s11.5.9.0
  nodes: 3
  environment:
    partitionConfig:
      total: 12
      volumePerPartition: true
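After the instance is created, a quick sanity check is to confirm that one Db2 engine pod is running per node. This assumes the default naming convention, in which the Db2 engine pod names contain db2u; the namespace is a placeholder:
oc get pods -n <namespace> | grep db2u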