Troubleshooting
Problem
When upgrading IBM API Connect from version 10.0.5.x to 10.0.8.0, the database import job (dbimport) fails with the following error:
kubectl get pods -n <namespace where APIC is deployed> | grep db
NAME READY STATUS RESTARTS AGE
management-subin-db-1 0/1 Running 0 1d
kubectl logs management-subin-db-1 -c postgres
{"level":"info","ts":"<timestamp>","logger":"pg_dump","msg":"pg_dump: error: connection to server at \"management-subin-postgres\" (<ip hidden>), port 5432 failed: FATAL: database \"compliance\" does not exist","pipe":"stderr","logging_pod":"management-subin-db-1-import"}
{"level":"info","ts":"<timestamp>","logger":"pg_dump","msg":"pg_dump: error: connection to server at \"management-subin-postgres\" (<ip hidden>), port 5432 failed: FATAL: database \"compliance\" does not exist","pipe":"stderr","logging_pod":"management-subin-db-1-import"}
{"level":"info","ts":"<timestamp>","logger":"pg_ctl","msg":"pg_ctl: server is running (PID: 28)\n/usr/pgsql-15/bin/postgres \"-D\" \"/var/lib/postgresql/data/pgdata\" \"-c\" \"port=5432\" \"-c\" \"unix_socket_directories=/controller/run\" \"-c\" \"listen_addresses=127.0.0.1\"\n","pipe":"stdout","logging_pod":"management-subin-db-1-import"}
{"level":"info","ts":"<timestamp>","msg":"Shutting down instance","logging_pod":"management-subin-db-1-import","pgdata":"/var/lib/postgresql/data/pgdata","mode":"fast","timeout":null}
...
...
{"level":"info","ts":"<timestamp>","logger":"pg_ctl","msg":"waiting for server to shut down.... done","pipe":"stdout","logging_pod":"management-subin-db-1-import"}
{"level":"info","ts":"<timestamp>","logger":"pg_ctl","msg":"server stopped","pipe":"stdout","logging_pod":"management-subin-db-1-import"}
{"level":"info","ts":"<timestamp>","msg":"Exited log pipe","fileName":"/controller/log/postgres.csv","logging_pod":"management-subin-db-1-import"}
{"level":"error","ts":"<timestamp>","msg":"Error while bootstrapping data directory","logging_pod":"management-subin-db-1-import","error":"while executing logical import: error in pg_dump, exit status 1","stacktrace":"github.com/EnterpriseDB/cloud-native-postgres/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:125\ngithub.com/EnterpriseDB/cloud-native-postgres/pkg/management/log.Error\n\tpkg/management/log/log.go:163\ngithub.com/EnterpriseDB/cloud-native-postgres/internal/cmd/manager/instance/initdb.initSubCommand\n\tinternal/cmd/manager/instance/initdb/cmd.go:167\ngithub.com/EnterpriseDB/cloud-native-postgres/internal/cmd/manager/instance/initdb.NewCmd.func2\n\tinternal/cmd/manager/instance/initdb/cmd.go:118\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/cobra@v1.8.0/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1039\nmain.main\n\tcmd/manager/main.go:70\nruntime.main\n\t/opt/hostedtoolcache/go/1.22.2/x64/src/runtime/proc.go:271"}
Error: while executing logical import: error in pg_dump, exit status 1
Subsequent runs of the job then fail because the PGData directory created by the first attempt still exists:
{"level":"info","ts":"<timestamp>","msg":"EPAS specific features","logging_pod":"management-subin-db-1-import","enabled":false}
{"level":"info","ts":"<timestamp>","msg":"PGData already exists, can't overwrite","logging_pod":"management-subin-db-1-import"}
Error: PGData directories already exist
Symptom
- The upgrade process halts during the database import.
- The logs contain the messages database "compliance" does not exist and PGData already exists, can't overwrite.
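Before changing anything, you can check whether the governance feature is actually enabled in the Management CR, which points at the cause described in the next section. This is a minimal sketch, not part of the documented procedure: the namespace, CR name, and the `mgmt` short resource name are assumptions to verify on your own cluster, so the script only builds and prints the command for review.

```shell
# Sketch only: all names below are placeholders, not taken from this technote.
NS="apic"          # namespace where APIC is deployed
MGMT="management"  # Management CR name ('kubectl get mgmt -n <ns>' lists it)

# Build and print the command rather than running it, so the names can be
# checked first:
CMD="kubectl -n $NS get mgmt $MGMT -o jsonpath={.spec.governance.enabled}"
echo "$CMD"
```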
Cause
This issue occurs when the governance and discovery features are added to the Management CR YAML at the same time as the upgrade is performed. Here is a sample of such additions:
governance:
  enabled: true
discovery:
  enabled: true
  proxyCollectorEnabled: true
  tablespaceDbVolumeClaimTemplate:
    storageClassName: ceph-rbd-sc
    volumeSize: 25Gi
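If these sections were already added, one possible way to take them out again (as the resolution below requires) is a JSON patch instead of hand-editing the YAML. This is a sketch under assumptions: the namespace, the CR name, and `mgmt` as the short resource name are placeholders, and the script only prints the patch command so it can be reviewed before running.

```shell
# Sketch: remove the governance and discovery sections via a JSON patch.
# All names are placeholders; verify the resource kind and CR name first.
NS="apic"          # namespace where APIC is deployed
MGMT="management"  # Management CR name

PATCH='[{"op":"remove","path":"/spec/governance"},{"op":"remove","path":"/spec/discovery"}]'

# Print the command for review; run it manually once the names are confirmed:
echo kubectl -n "$NS" patch mgmt "$MGMT" --type=json -p "$PATCH"
```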
Resolving The Problem
To resolve the issue:
- Remove the governance and discovery sections from the Management CR. They can be added back after the upgrade completes.
- Identify and delete the EDB cluster:
kubectl -n <namespace where APIC is deployed> get cluster
kubectl -n <namespace where APIC is deployed> delete cluster <edb-cluster-name>
- Take a backup of the import job:
kubectl -n <namespace where APIC is deployed> get job <import job name> -o yaml > import_job.yaml
- Scale down the ibm-apiconnect operator deployment to stop the stuck job:
kubectl scale deploy <deployment-name> --replicas=0
- Scale the operator back up:
kubectl scale deploy <deployment-name> --replicas=1
The import job reruns automatically, and the upgrade continues.
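The steps above can be collected into a single runbook-style script. This is a sketch, not the documented procedure: every name (namespace, cluster, job, deployment) is a placeholder to replace with the values from your installation, and by default the script only prints each command; set APPLY=1 to actually execute them.

```shell
#!/bin/sh
# Recovery sketch for the stuck dbimport job. All names are placeholders:
NS="${NS:-apic}"                      # namespace where APIC is deployed
CLUSTER="${CLUSTER:-management-db}"   # EDB cluster from 'kubectl get cluster'
JOB="${JOB:-management-db-import}"    # stuck import job name
DEPLOY="${DEPLOY:-ibm-apiconnect}"    # operator deployment name

# Dry-run helper: prints each command; executes only when APPLY=1 is set.
run() {
  echo "+ $*"
  if [ -n "$APPLY" ]; then "$@"; fi
}

run kubectl -n "$NS" get job "$JOB" -o yaml      # back up (redirect to a file)
run kubectl -n "$NS" delete cluster "$CLUSTER"   # delete the EDB cluster
run kubectl -n "$NS" scale deploy "$DEPLOY" --replicas=0  # stop the stuck job
run kubectl -n "$NS" scale deploy "$DEPLOY" --replicas=1  # let the job rerun
```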
Document Location
Worldwide
Document Information
Modified date:
21 August 2025
UID
ibm17242756