Resolved issues in containers

Review the defects and fixes that are resolved in the release.

23 October 2025 (10.0.2509.1)

Case number Description
TS020545287 After upgrading the catalog image to the latest version, the Order Service deployment failed, causing the containers to restart continuously. This issue is resolved.

23 September 2025 (10.0.2509.0)

Case number Description
TS020252990 A certificate-related issue that occurs when upgrading the Operator by using the Operator utility script is now resolved.
TS019937598 When the XAPI pod starts, an error occurs in the AccessLogger, displaying a NullPointerException. This issue is resolved.

29 July 2025 (10.0.2506.1)

Case number Description
TS019128642 The Operator is now fixed to remove readOnlyRootFilesystem:false and supplementalGroups:0, ensuring that these settings no longer cause security violations.
TS019885617 The Operator is now fixed to resolve the "PVC already exists" error.
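For context on the first entry above: the two flagged settings live in a pod's securityContext, and restricted security profiles reject the values shown below. This is a generic Kubernetes sketch; only the two flagged fields come from the fix, and the other names are illustrative.

```yaml
# Generic pod spec fragment; everything except the two flagged fields
# is a placeholder.
spec:
  securityContext:
    supplementalGroups: [0]            # flagged: supplementalGroups:0
  containers:
    - name: app
      securityContext:
        readOnlyRootFilesystem: false  # flagged: readOnlyRootFilesystem:false
```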

20 June 2025 (10.0.2506.0)

Case number Description
TS019189367 Large logs are generated by the ibm-oms-controller-manager pod. This issue is resolved by enhancing the Operator to prevent annotation keys from exceeding 63 characters.
TS019380106 The container image JDK21: 10.0.2503.2-jdk21-amd64 is missing the /usr/share/zoneinfo folder, and the time zone is not set correctly. This issue is resolved by fixing the JDK21 image to honor the TZ environment variable.
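For the annotation entry above: the Kubernetes API rejects annotation keys whose name segment (the part after an optional "prefix/") exceeds 63 characters, which is why overly long generated keys must be clamped. A minimal, hypothetical sketch of such clamping; the prefix is made up, and this is not the Operator's actual code.

```python
# Kubernetes limits the name segment of an annotation key (the part
# after an optional "prefix/") to 63 characters.
MAX_NAME = 63

def clamp_annotation_key(key: str) -> str:
    # Split off an optional "prefix/" and clamp only the name segment.
    prefix, sep, name = key.rpartition("/")
    return prefix + sep + name[:MAX_NAME]

# Hypothetical prefix, overly long generated name segment:
print(clamp_annotation_key("apps.oms.ibm.com/" + "x" * 80))
```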
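For the time-zone entry above: a container honors TZ only when time zone data is available under /usr/share/zoneinfo, which is the folder the fixed image restores. The mechanism itself can be observed in any POSIX process:

```python
import os
import time

# Setting TZ and calling tzset() makes the process pick up the new
# time zone (POSIX-only; requires tzdata under /usr/share/zoneinfo).
os.environ["TZ"] = "UTC"
time.tzset()
print(time.tzname[0])  # prints: UTC
```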

17 April 2025 (10.0.2503.1)

Case number Description
TS018593366 You can now define a custom domain at the common level for all application servers. For more information, see the common parameter.
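A hypothetical sketch of what such a common-level setting could look like; the exact field names under `common` are assumptions, and only the concept of one domain shared by all application servers comes from the fix.

```yaml
# Illustrative only: field names are assumptions.
spec:
  common:
    customDomain: oms.example.com   # applied to all application servers
```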

21 March 2025 (10.0.2503.0)

Case number Description
TS018577688 The ibm-oms-controller-manager pod generates a long verbose log in Kubernetes for the operators namespace. This issue is resolved.

11 February 2025 (10.0.2409.2)

Case number Description
TS017483467 An issue occurred when setting up a Google Cloud Storage bucket as storage in the probe container.

This issue is resolved: the probe pod now includes the custom annotations from the common section to support Google Cloud Storage.

25 October 2024 (10.0.2409.1)

Case number Description
TS017468749 A custom build that uses Red Hat® OpenShift® buildConfig only clones the master branch and does not offer an option to directly clone a specific Git branch.

This issue is resolved by providing an option to clone the Git branch that is specified in the base image.

TS017438188 The replicaCount for OMServer cannot be set to 0 and defaults to 1, even in cases such as running CDT, db-verify, or debugging, where no running pod is needed.

This issue is resolved, allowing OMServer to be deployed with a replicaCount of 0. In this state, the pod status shows as 0/0 or 0/1, ensuring that no pod is running.

TS016785382 A reference implementation failed when installing Aurora for the first time in the container environment.

This issue is resolved.
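For the buildConfig entry above: in a plain OpenShift BuildConfig, the Git branch to clone is selected with spec.source.git.ref. A generic sketch; the repository URL, names, and branch are placeholders, not values from the fix.

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: oms-custom-build                      # placeholder name
spec:
  source:
    git:
      uri: https://example.com/oms-custom.git # placeholder repository
      ref: feature/my-branch                  # branch to clone instead of master
```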
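For the replicaCount entry above, a hypothetical OMEnvironment fragment; only replicaCount itself is named by the fix, and the surrounding field path and server name are assumptions.

```yaml
# Field path and server name are assumptions.
spec:
  servers:
    - name: cdt-server     # assumed server definition
      replicaCount: 0      # deploy without a running pod (CDT, db-verify, debugging)
```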

18 July 2024 (10.0.2406.1)

Case number Description
TS016565573 Agents and integration servers crashed in pre-production and production environments despite being allocated 400m CPU, with actual consumption stabilizing at 10-20m. The pods crashed before initialization, independent of load or data volume, resulting in resource wastage when higher limits were set.

This issue is resolved by increasing the startup probe check time to allow more time for pod initialization when resources are low.

TS016468988 When a Kubernetes server fails to start, the error message that is displayed lacks detailed information about the cause of the issue and how to resolve it.

This issue is resolved by enhancing the error message to include details about whether the domain is valid.
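For the startup-probe entry above: in standard Kubernetes, the time a pod is given to initialize is the product of periodSeconds and failureThreshold on its startupProbe, so raising either gives slow pods more headroom. A generic sketch; the endpoint and values are illustrative, not the ones the fix uses.

```yaml
# Illustrative values: up to periodSeconds * failureThreshold seconds
# before the pod is considered failed.
startupProbe:
  httpGet:
    path: /healthz       # illustrative endpoint
    port: 8080
  periodSeconds: 10
  failureThreshold: 60   # 10 * 60 = up to 600 s to initialize
```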

13 May 2024 (10.0.2403.2)

Case number Description
TS015996327 A setup file error occurred during customization of the om-base image. The error was a side effect of a process that reduces the size of the om-base image by compressing a few large directories in the runtime; these directories are decompressed when the container runs for the first time. As a result, starting the container and running the setup files simultaneously triggered an error.

This issue is resolved. The setupfiles.sh script is now modified to wait for the decompression to complete, preventing the error.

TS015957671 In the Order Hub and Call Center integrated mode, if a host was not configured in Ingress, an upgrade from Operator v1.0.10 to v1.0.13 caused the controller manager to crash after installation. Also, variations were found in the product behavior regarding customDomains usage between OMEnvironment and Order Hub across Operator versions 1.0.13 and 1.0.14. These issues are now resolved.

12 April 2024 (10.0.2403.1)

Case number Description
TS015889898 The Operator automatically deletes a cron job each time it is initiated during deployments. Additionally, the associated integration server does not launch as expected. The Operator is enhanced to resolve this issue.
TS015653977 You can now customize the Sterling™ Order Management System Software images in an airgap environment, where there is no Internet connectivity. For more information, see ../customization/c_OMRHOC_customizing_OMS_runtime.html#concept_ytb_vmv_rrb__cus_airgap.

8 March 2024 (10.0.2403.0)

Case number Description
TS015320240 When SSL is set to false, routes for Order Hub and Call Center are not created because the Operator defaults to https. This issue is resolved by supporting both http and https, ensuring seamless route creation.
TS015295743 When you deploy the agent with a database password that contains '&', the system_overrides.properties file fails to evaluate the password correctly, resulting in an invalid username/password error when connecting to the database. This issue is now resolved by introducing a special handling mechanism for such characters to ensure a successful connection to the database.
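For the password entry above: the underlying hazard is that '&' and '\' carry special meaning in sed-style replacement strings ('&' re-inserts the matched text), so a secret spliced into a properties file must be escaped first. A hypothetical sketch of such handling; this helper is illustrative, not the product's actual mechanism.

```python
def escape_replacement(value: str) -> str:
    # Hypothetical helper: escape '\' and '&' so a sed-style
    # substitution writes the password literally into
    # system_overrides.properties. Backslash must be escaped first.
    return value.replace("\\", "\\\\").replace("&", "\\&")

print(escape_replacement("p&ssw\\ord"))  # prints: p\&ssw\\ord
```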

25 January 2024 (10.0.2309.2)

Case number Description
TS014512882 The Call Center and Order Hub applications did not work with multiple pods in Red Hat OpenShift Container Platform. Both applications now use unique affinity cookies, and requests are routed to the correct pod when you use multiple pods.
TS014571869 When you run an agent as a job, the job never finishes and the pod keeps crashing. This issue is resolved.