Deploying Db2 Warehouse using the Db2uCluster custom resource
After the Db2 Operator is installed, the Db2uCluster custom resource (CR) provides the interface that is required to deploy Db2. The CR is backed by an OpenShift custom resource definition (CRD).
Accessing the Db2uCluster custom resource
You can access the Db2uCluster CR in any of the following ways:
- Through the Red Hat® OpenShift® console.
- Through the Red Hat OpenShift command-line tool.
- Through the command-line tool of a Kubernetes cluster.
The following sections cover the CR options that you can include in the YAML file. An example of a complete Db2uCluster CR is also included.
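For example, after you save the CR options to a YAML file, you can create the deployment and check its status with the OpenShift CLI (a minimal sketch; the file name and namespace are placeholders):
# Create the Db2uCluster CR from your YAML file
oc apply -f db2ucluster.yaml -n db2u

# Check the state of the Db2uCluster deployment
oc get db2ucluster -n db2u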
Configure the Db2 Warehouse version
Specifies the version of Db2 Warehouse to deploy, where <container layer release number> identifies the container layer release.
spec:
  version: "11.5.8.0<container layer release number>"
Configure the database name
Specifies the name of the desired Db2 Warehouse database.
spec:
  environment:
    database:
      name: bludb
Deploy on a dedicated node
Specifies the node affinity and tolerations that target labeled nodes for a dedicated deployment. Deploying on dedicated nodes is a best practice in production environments. See Setting up dedicated nodes for your Db2 deployment.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: database
            operator: In
            values:
            - db2u-affinity
  tolerations:
  - key: "database"
    operator: "Equal"
    value: "db2u-affinity"
    effect: "NoSchedule"
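The affinity and toleration in this example match nodes that are already labeled and tainted with database=db2u-affinity. Commands similar to the following prepare a node (a sketch; the node name is a placeholder, and Setting up dedicated nodes for your Db2 deployment remains the authoritative procedure):
# Label the dedicated node so that the node affinity rule selects it
oc label node <node-name> database=db2u-affinity

# Taint the node so that only pods with a matching toleration can be scheduled on it
oc adm taint nodes <node-name> database=db2u-affinity:NoSchedule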
Configure memory and CPU consumption
When you deploy Db2 Warehouse by using the Db2 Warehouse Operator, you can assign a CPU and memory profile. This profile sets the CPU and memory values for the container that runs the Db2 Warehouse Common SQL Engine.
spec:
  podConfig:
    db2u:
      resource:
        db2u:
          limits:
            cpu: 2
            memory: 4Gi
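Depending on your operator version, the same resource block can also include requests alongside limits so that the scheduler reserves the capacity up front. The following is a sketch; whether your CRD version honors requests is an assumption to verify:
spec:
  podConfig:
    db2u:
      resource:
        db2u:
          requests:
            cpu: 2
            memory: 4Gi
          limits:
            cpu: 2
            memory: 4Gi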
Configure storage
Specifies the storage volumes for the deployment:
- meta: shared storage volume for Db2 metadata.
- data: non-shared storage volume for database storage.
- backup: shared storage volume for backing up the database (optional).
- activelogs: non-shared storage volume for transaction logs (optional). For more information, see Creating separate storage for database transaction logs.
- tempts: non-shared storage volume for temporary table spaces (optional). For more information, see Creating separate storage for temporary table spaces.
spec:
  storage:
  - name: meta
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: activelogs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: backup
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: tempts
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
See Certified storage options for Db2 for a full list of supported storage solutions.
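After you apply the CR, you can confirm that the corresponding persistent volume claims were created and bound (the namespace is a placeholder):
oc get pvc -n db2u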
Use existing persistent storage claims
Existing persistent volume claims (PVCs) can also be used for any of the storage categories.
spec:
  storage:
  - claimName: <meta-pvc-name>
    name: meta
    spec:
      resources: {}
    type: existing
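The claim that you reference must already exist in the target namespace. A pre-created claim for the meta volume might look like the following (a generic Kubernetes sketch; the claim name, namespace, size, and storage class are assumptions):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <meta-pvc-name>
  namespace: db2u
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs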
Enabling 4K support
When your Db2 Warehouse on OpenShift deployment is configured to use either OpenShift Container Storage (OCS) or Portworx container storage (PX), ensure that you have enabled 4K support.
spec:
  environment:
    ...
    instance:
      registry:
        DB2_4K_DEVICE_SUPPORT: "ON"
Disabling LDAP
spec:
  environment:
    ldap:
      enabled: false
If you are deploying a Db2uCluster on version s11.5.8.0-cn1 with LDAP disabled and have an existing instance with LDAP enabled, or you are trying to disable LDAP on s11.5.8.0-cn2, see this troubleshooting doc first: Recovering a Db2u deployment from failure due to missing user-mgmt secret.
Disabling the Node Port service
spec:
  environment:
    database:
      disableNodePortService: true
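You can verify the result by listing the services in the deployment namespace; when the Node Port service is disabled, no NodePort-type engine service is created for the cluster (the namespace is a placeholder, and service names vary by release):
oc get svc -n db2u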
Deploying a Db2 instance with limited privileges
spec:
  account:
    privileged: false
The following example shows how to set limited privileges by setting IPC kernel parameters on the nodes:
spec:
  account:
    privileged: false
    advOpts:
      hostIPC: true
Deploying Db2 with a custom service account
A Service Account is an OpenShift Container Platform account that allows a component to directly access the API. You can set parameters in your CR to create the Db2 instance with a custom service account.
spec:
  account:
    serviceAccountName: ${SERVICE_ACCOUNT}
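The service account must already exist in the namespace where you deploy the CR. For example (a sketch; the account name and namespace are placeholders, and any additional role bindings or security context constraints that your environment requires are not shown):
oc create serviceaccount <custom-service-account> -n db2u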
Overriding the default database settings
spec:
  environment:
    database:
      settings:
        dftTableOrg: "COLUMN"
        dftPageSize: "32768"
        encrypt: "NO"
        codeset: "UTF-8"
        territory: "US"
        collation: "IDENTITY"
Overriding the Db2 database configuration (dbConfig) settings
spec:
  environment:
    database:
      dbConfig:
        LOGPRIMARY: "50"
        LOGSECOND: "35"
        APPLHEAPSZ: "25600"
        STMTHEAP: "51200 AUTOMATIC"
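After the database is created, you can confirm that the overrides were applied by querying the database configuration from inside the engine pod. This is a sketch; the pod name c-db2wh-test-db2u-0 and the instance owner db2inst1 are assumptions based on a typical deployment named db2wh-test:
# Show the log-related database configuration parameters for BLUDB
oc exec -it c-db2wh-test-db2u-0 -n db2u -- su - db2inst1 -c "db2 get db cfg for BLUDB | grep LOG"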
Setting the Db2 registry variable
spec:
  environment:
    instance:
      registry:
        DB2_ATS_ENABLE: "NO"
        DB2_OBJECT_STORAGE_SETTINGS: "OFF"
        DB2_DISPATCHER_PEEKTIMEOUT: "2"
        DB2_COMPATIBILITY_VECTOR: "ORA"
Example of a complete Db2uCluster CR
The following example shows a complete Db2uCluster CR with these characteristics:
- Database name: BLUDB.
- 4 CPUs.
- 16 GB of memory.
- 5 storage volumes (meta, data, backup, activelogs, and tempts).
- Db2 4K device support enabled.
- LDAP disabled.
- Privileged instance.
apiVersion: db2u.databases.ibm.com/v1
kind: Db2uCluster
metadata:
  name: db2wh-test
  namespace: db2u
spec:
  account:
    privileged: true
  environment:
    database:
      name: bludb
    dbType: db2wh
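    # The example description lists 4K device support as enabled; the following
    # registry setting (shown earlier in the Enabling 4K support section) applies it.
    instance:
      registry:
        DB2_4K_DEVICE_SUPPORT: "ON"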
    ldap:
      enabled: false
  license:
    accept: true
  podConfig:
    db2u:
      resource:
        db2u:
          limits:
            cpu: "4"
            memory: 16Gi
  size: 1
  storage:
  - name: meta
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: backup
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ocs-storagecluster-cephfs
    type: create
  - name: activelogs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  - name: tempts
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    type: template
  version: 11.5.8.0
Deploying a Db2 Warehouse MPP instance
Db2 Warehouse can be deployed as either a single-node (SMP) deployment or a multi-node deployment that is designed for massively parallel processing (MPP). In MPP deployments, Db2 Warehouse segments a query into smaller tasks that are then spread across multiple database partitions.
You control these parameters in your CR to create a Db2 Warehouse MPP instance: size specifies the number of nodes, and environment.mln.total specifies the total number of multiple logical nodes (MLNs). By default, a single-node (SMP) deployment has size: 1 and no environment.mln.total specification. The value of environment.mln.total must be evenly divisible by size. The following example creates a Db2 Warehouse MPP instance with 3 nodes and 4 MLNs per node.
spec:
  size: 3
  environment:
    mln:
      total: 12
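After the MPP deployment completes, you can confirm the partition layout by listing db2nodes.cfg from the first engine pod. This is a sketch; the pod name c-db2wh-test-db2u-0 and the instance owner db2inst1 are assumptions based on a deployment named db2wh-test:
# Each line in db2nodes.cfg represents one MLN; 12 lines are expected for this example
oc exec -it c-db2wh-test-db2u-0 -n db2u -- su - db2inst1 -c "cat ~/sqllib/db2nodes.cfg"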