Configuring an external EDB PostgreSQL database for IM
From foundational services version 4.6, the Identity Management (IM) service uses EDB PostgreSQL as the database that stores IM data. If you upgrade foundational services from version 3.19, 3.23, 4.0, or later to 4.6 or later, the IM operator migrates the IAM/IM data from MongoDB to the EDB PostgreSQL database. For more information, see Migrating foundational services version 3.x to 4.x.
Ensure that you configure IM with the embedded or external EDB PostgreSQL database before you install or upgrade the foundational services. You cannot migrate data from the embedded PostgreSQL database to an external EDB PostgreSQL database after you install or upgrade the foundational services.
For an isolated migration of a cluster that has two or more Cloud Paks in the same namespace, the MongoDB data is migrated when you upgrade the first Cloud Pak. To migrate the MongoDB data for the remaining Cloud Paks, run the following script:
#!/bin/bash
DB_POD="icp-mongodb-0"
# Clear the `migrated` flag from every IM collection so that the IM operator
# migrates the data again on the next Cloud Pak upgrade.
# ADMIN_USER and ADMIN_PASSWORD are resolved inside the MongoDB pod.
echo 'use samlDB
db.saml.updateMany({}, {$unset:{migrated: null}})
use platform-db
db.cloudpak_ibmid_v3.updateMany({}, {$unset:{migrated: null}})
db.cloudpak_ibmid_v2.updateMany({}, {$unset:{migrated: null}})
db.Directory.updateMany({}, {$unset:{migrated: null}})
db.Users.updateMany({}, {$unset:{migrated: null}})
db.UserPreferences.updateMany({}, {$unset:{migrated: null}})
db.ZenInstance.updateMany({}, {$unset:{migrated: null}})
db.ZenInstanceUsers.updateMany({}, {$unset:{migrated: null}})
db.ScimAttributes.updateMany({}, {$unset:{migrated: null}})
db.ScimAttributeMapping.updateMany({}, {$unset:{migrated: null}})
db.Groups.updateMany({}, {$unset:{migrated: null}})
db.ScimServerUsers.updateMany({}, {$unset:{migrated: null}})
db.ScimServerGroups.updateMany({}, {$unset:{migrated: null}})
use OAuthDBSchema
db.OauthClient.updateMany({}, {$unset:{migrated: null}})' | oc exec -i $DB_POD -- bash -ec 'mongo --host rs0/mongodb:27017 --username $ADMIN_USER --password $ADMIN_PASSWORD --authenticationDatabase admin --ssl --sslCAFile /data/configdb/tls.crt --sslPEMKeyFile /work-dir/mongo.pem'
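Optionally, before you upgrade the next Cloud Pak, confirm that the migrated flag was cleared. The following check is a sketch that reuses the same pod and credentials as the script and inspects one representative collection (countDocuments requires MongoDB 4.0 or later):

echo 'use platform-db
db.Users.countDocuments({migrated: {$exists: true}})' | oc exec -i $DB_POD -- bash -ec 'mongo --host rs0/mongodb:27017 --username $ADMIN_USER --password $ADMIN_PASSWORD --authenticationDatabase admin --ssl --sslCAFile /data/configdb/tls.crt --sslPEMKeyFile /work-dir/mongo.pem'
# A count of 0 means that the collection is queued for migration again.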
Prerequisites
Complete the following prerequisites to configure IM with the EDB PostgreSQL database:
- Set up the database server if you use a new external database server. For more information, see Setting up an external PostgreSQL database server. You can skip this step if you already completed the setup of the external database server with all the certificates.
- Configure Secure Sockets Layer (SSL) modes for IM. For more information, see Configuring SSL modes for IM.
- Generate the following key files in the database server so that you can configure IM to use EDB PostgreSQL as its database (see the sketch after this list):
  - client.crt
  - client.key
  - client_key.pem
  - client.pem
  - root.pem
  To generate the key files, see Generating key files for IM.
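The referenced topic has the full procedure. As an outline, the client key files can be produced with openssl. The following is a minimal sketch, assuming that the server CA key pair (root.key and root.crt) is available on the database server and that the certificate common name must match the database user (im_user in the later steps):

# Create a client key and a certificate signing request whose CN matches the database user.
openssl genrsa -out client.key 4096
openssl req -new -key client.key -subj "/CN=im_user" -out client.csr
# Sign the request with the server CA.
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial -days 365 -out client.crt
# Produce the PEM variants that the later steps expect (assumed naming).
openssl pkcs8 -topk8 -nocrypt -in client.key -out client_key.pem
cp client.crt client.pem
cp root.crt root.pem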
Supported database
IM supports embedded and external PostgreSQL databases. By default, IM is configured with the embedded EDB PostgreSQL database.
You can configure IM with PostgreSQL version 16 as the external database and use the SSL client-certificate authentication method.
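With SSL client-certificate authentication, the PostgreSQL server maps the client certificate to a database user. The following is a minimal pg_hba.conf entry as an illustration, assuming the im database and im_user user that appear in the later steps:

hostssl im im_user 0.0.0.0/0 cert

The cert method authenticates a client only when the common name (CN) in its certificate matches the database user name.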
Configuring SSL modes for IM
You can configure IM with the require, verify-ca, and verify-full SSL modes. By default, IM is configured with the require SSL mode.
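These modes correspond to the standard libpq sslmode settings. The following psql checks illustrate the difference; the host, database, and user values are placeholders:

psql "host=<db-host> port=5432 dbname=im user=im_user sslmode=require"
psql "host=<db-host> port=5432 dbname=im user=im_user sslmode=verify-ca sslrootcert=root.pem"
psql "host=<db-host> port=5432 dbname=im user=im_user sslmode=verify-full sslrootcert=root.pem"

The require mode encrypts the connection without validating the server certificate, verify-ca also validates the server certificate chain against root.pem, and verify-full additionally checks that the host name matches the certificate.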
You can change the default SSL mode for IM with one of the following methods:
- To change the SSL mode with the OpenShift Container Platform console, complete the following steps:
  1. Log in to the OpenShift Container Platform console.
  2. From the navigation menu, click Workloads > ConfigMaps.
  3. Search for platform-auth-idp.
  4. Click ... > Edit Config Map.
  5. Update the DB_SSL_MODE parameter with the verify-full or verify-ca SSL mode.

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: platform-auth-idp
       namespace: <your-foundational-services-namespace>
     data:
       DB_SSL_MODE: <ssl mode>
       DB_CONNECT_TIMEOUT: "60000"
       DB_IDLE_TIMEOUT: "10"
       DB_POOL_MAX_SIZE: "15"
       DB_POOL_MIN_SIZE: "5"
       DB_CONNECT_MAX_RETRIES: "5"
       SEQL_LOGGING: "false"

     Replace <ssl mode> with verify-full or verify-ca. ConfigMap values must be strings, so keep the quotation marks around the values.
  6. Click Save.
  7. From the navigation menu, click Workloads > Deployments.
  8. Locate platform-auth-service.
  9. Click ... > Edit Deployment. An edit window opens.
  10. Click Save without making any changes to reload the platform-auth-service pods with the updated configmap values.
  11. Click platform-auth-service.
  12. Wait for some time. Then, check the status of the platform-auth-service pods in the Pods pane. All the pods must show 4/4 in the Ready field.
  13. Repeat steps 8 through 12 for the platform-identity-provider and platform-identity-management deployments.
- To change the SSL mode with the CLI, complete the following steps. Replace <your-foundational-services-namespace> in the commands with the namespace where you deployed the foundational services.
  1. Log in to your cluster with the oc login command.
  2. Edit the platform-auth-idp configmap.

     oc -n <your-foundational-services-namespace> edit configmap platform-auth-idp

  3. Change the DB_SSL_MODE attribute value to verify-full or verify-ca, as required.
  4. Restart the platform-auth-service, platform-identity-management, and platform-identity-provider pods.
     1. Get the pod names.

        oc -n <your-foundational-services-namespace> get pods | grep platform-auth-service
        oc -n <your-foundational-services-namespace> get pods | grep platform-identity-management
        oc -n <your-foundational-services-namespace> get pods | grep platform-identity-provider

     2. Delete the pods.

        oc delete pod <platform-auth-service-pod-name> -n <your-foundational-services-namespace>
        oc delete pod <platform-identity-management-pod-name> -n <your-foundational-services-namespace>
        oc delete pod <platform-identity-provider-pod-name> -n <your-foundational-services-namespace>

  5. Wait for some time, and then check the status of the platform-auth-service, platform-identity-management, and platform-identity-provider pods. All the pods must show as Running.

     oc -n <your-foundational-services-namespace> get pods | grep platform-auth-service
     oc -n <your-foundational-services-namespace> get pods | grep platform-identity-management
     oc -n <your-foundational-services-namespace> get pods | grep platform-identity-provider
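If the three workloads run as Deployments, which is an assumption to verify in your cluster, you can restart them with one command instead of deleting the pods individually:

oc -n <your-foundational-services-namespace> rollout restart deployment platform-auth-service platform-identity-management platform-identity-provider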
Generating key files for IM
To configure IM with an external PostgreSQL database, prepare the key files from the database server and create the required resources with the following steps:
1. Log in to the external database server and copy the key files to your target cluster.

   scp client.crt client.key client_key.pem client.pem root.pem root@<your_ocp_IPaddress>:/root

   Replace <your_ocp_IPaddress> with the Red Hat OpenShift Container Platform IP address.
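   Note: libpq refuses to use a client key file that is readable by the group or by others, so the copied keys might need their permissions to be tightened. This safeguard reflects standard libpq behavior and is not part of the original procedure:

   chmod 0600 client.key client_key.pem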
2. Export the following environment variables.

   export PGSSLMODE=require
   export PGSSLCERT=client.crt
   export PGSSLKEY=client.key
3. Download and enable PostgreSQL 16 on your target cluster.

   sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
   sudo dnf -qy module disable postgresql
   sudo dnf install -y postgresql16-server
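   To confirm that the client tools are installed (assuming the standard PGDG layout, which places the binaries under /usr/pgsql-16/bin):

   /usr/pgsql-16/bin/psql --version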
4. Test the PostgreSQL server connection.

   psql --host=<yourFyreClusterName>1.fyre.ibm.com --port=5432 --dbname=im --username=im_user -c "SELECT VERSION()" -c "SHOW ssl" -c "SHOW max_connections" -c "\list" -c "\dn" -c "\du"

   Replace <yourFyreClusterName> with your Fyre cluster name to specify the host name of the PostgreSQL server. The SHOW ssl command must return on.
5. Create the im-datastore-edb-secret secret.

   oc create secret generic im-datastore-edb-secret \
     --type=kubernetes.io/tls \
     --from-file=ca.crt=root.pem \
     --from-file=tls.crt=client.pem \
     --from-file=tls.key=client_key.pem \
     -n <your foundational service operand namespace>

   Replace <your foundational service operand namespace> with the namespace where you deployed the foundational services. You can update the ca.crt, tls.crt, and tls.key parameters with the appropriate values.
6. Verify that the certificates are stored in the secret.

   oc extract secret/im-datastore-edb-secret -n <your foundational service operand namespace> --to=- --keys=ca.crt | openssl x509 -noout -subject -issuer -startdate -enddate
   oc extract secret/im-datastore-edb-secret -n <your foundational service operand namespace> --to=- --keys=tls.crt | openssl x509 -noout -subject -issuer -startdate -enddate

   Replace <your foundational service operand namespace> with the namespace where you deployed the foundational services.
7. Create the im-datastore-edb-cm configmap. Save the following definition in a file that is named im-datastore-edb-cm.yaml:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: im-datastore-edb-cm
     namespace: <your foundational service operand namespace>
   data:
     DATABASE_CA_CERT: ca.crt
     DATABASE_CLIENT_CERT: tls.crt
     DATABASE_CLIENT_KEY: tls.key
     DATABASE_NAME: im
     DATABASE_PORT: "5432"
     DATABASE_R_ENDPOINT: sertexternaledb16x1.fyre.ibm.com
     DATABASE_RW_ENDPOINT: sertexternaledb16x1.fyre.ibm.com
     DATABASE_SCHEMA: public
     DATABASE_USER: im_user
     IS_EMBEDDED: 'false'

   Replace <your foundational service operand namespace> with the namespace where you deployed the foundational services. You can update the DATABASE_R_ENDPOINT, DATABASE_RW_ENDPOINT, DATABASE_USER, and DATABASE_NAME parameters with the appropriate values.
8. Apply the im-datastore-edb-cm configmap.

   oc apply -f im-datastore-edb-cm.yaml
9. Create an operand-request.yaml file with the following definitions:

   apiVersion: operator.ibm.com/v1alpha1
   kind: OperandRequest
   metadata:
     name: common-service
     namespace: <your foundational service namespace>
   spec:
     requests:
       - operands:
           - name: ibm-im-operator
           - name: ibm-events-operator
           - name: ibm-platformui-operator
           - name: cloud-native-postgresql
         registry: common-service
         registryNamespace: <your foundational service namespace>

   Replace <your foundational service namespace> with the namespace where you deployed the foundational services.
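   Then, apply the file to create the OperandRequest (the apply command is shown here as an assumed step that follows the same pattern as the configmap step):

   oc apply -f operand-request.yaml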
Verify that the
platform-auth-service,platform-identity-management, andplatform-identity-providerpods are running. Ensure that thecommon-service-dbcluster is not created and its pods are not running.
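The following commands are a sketch for these checks; the CRD group for the EDB cluster resource follows the cloud-native-postgresql operator's conventions and is an assumption:

oc -n <your foundational service namespace> get pods | grep -E 'platform-(auth-service|identity-management|identity-provider)'
# Both of the following commands must return nothing:
oc -n <your foundational service namespace> get pods | grep common-service-db
oc -n <your foundational service namespace> get clusters.postgresql.k8s.enterprisedb.io 2>/dev/null | grep common-service-db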