To connect Db2® Big SQL to a Cloudera Hadoop cluster that is secured by Kerberos when the Kerberos configuration is not managed by Cloudera Manager, you must update the Db2 Big SQL secret after you provision a Db2 Big SQL instance.
Procedure
- Log in to your OpenShift® cluster as a project administrator:
oc login <OpenShift_URL>:<port>
- Change to the project where the Cloud Pak for Data control plane is installed:
oc project ${PROJECT_CPD_INSTANCE}
- Get the Db2 Big SQL secret:
export bigsqlSecret=$(oc get secret | grep bigsql-secret | awk '{print $1}')
- Convert the Kerberos configuration file (krb5.conf) to a base64-encoded value:
export secret=$(base64 krb5.conf | awk ' { secret = secret $0 }; END { print secret } ')
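The awk pipeline joins the wrapped lines of base64 output into a single string, which is the form the secret patch in the next step expects. A quick local sketch (using a hypothetical krb5.conf written to /tmp, illustrative content only) shows that the concatenated value decodes back to the original file:

```shell
# Write a minimal sample krb5.conf (hypothetical content for illustration)
printf '[libdefaults]\n  default_realm = IBM.COM\n' > /tmp/krb5.conf

# Same encoding as the step above: join the wrapped base64 lines into one string
secret=$(base64 /tmp/krb5.conf | awk '{ secret = secret $0 }; END { print secret }')

# Decoding the single-line value reproduces the original file
echo "$secret" | base64 -d
```

On systems with GNU coreutils, `base64 -w 0 krb5.conf` produces the same single-line value without the awk step.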
- Update the Db2 Big SQL secret with the encoded Kerberos configuration:
oc patch secret $bigsqlSecret --patch '{"data": {"krb5-conf": "'$secret'"}}'
- Update the Db2 Big SQL secret with the Kerberos admin principal (krbPrincipal).
Note: Adjust the Kerberos admin principal to match the principal in your environment.
oc patch secret $bigsqlSecret --patch '{"data": {"krbPrincipal": "'$(printf 'root/admin@IBM.COM' | base64)'"}}'
- Update the Db2 Big SQL secret with the Kerberos admin password (krbPassword).
Note: Adjust the Kerberos admin password to match the password in your environment.
oc patch secret $bigsqlSecret --patch '{"data": {"krbPassword": "'$(printf 'admin' | base64)'"}}'
- Get the Db2 Big SQL custom resource name:
export cr_name=$(oc get cm bigsql-db2-big-sql-1-cm -o custom-columns=CR_NAME:.metadata.labels.app --no-headers=true)
- Trigger the refresh of the Hadoop configuration:
oc patch bigsql $cr_name --patch '{"spec": {"hadoopCluster": {"generation": 2}}}' --type merge
- Wait until Db2 Big SQL is in a Ready state.
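One way to wait for that state is to watch the custom resource from the command line; the status field used here (`.status.state`) is an assumption about the bigsql custom resource and may differ in your environment:

```shell
# Block until the bigsql custom resource reports Ready (up to 30 minutes).
# The jsonpath assumes the state is exposed at .status.state; adjust if needed.
oc wait bigsql/$cr_name --for=jsonpath='{.status.state}'=Ready --timeout=30m
```

`oc wait --for=jsonpath=...` requires a recent oc client; on older clients, poll `oc get bigsql $cr_name` instead.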
- Restart the Db2 Big SQL instance:
oc exec -it c-$cr_name-db2u-0 -- su - db2inst1 -c 'bigsql stop; bigsql start'
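To confirm that the instance came back up after the restart, you can check the service state from inside the same head pod; this assumes the `bigsql status` command is available in the db2inst1 environment, as the stop/start command above suggests:

```shell
# Check the Big SQL service state inside the head pod (same pod as the restart step)
oc exec -it c-$cr_name-db2u-0 -- su - db2inst1 -c 'bigsql status'
```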