Review the known issues for IBM Cloud Pak for Multicloud Management. Additionally, see IBM Cloud Pak for Multicloud Management troubleshooting for troubleshooting topics.
Users might only see clusters in Monitoring and not in other pages of the console. This can occur when users are not granted permissions to view clusters in both IBM Cloud Pak for Multicloud Management and Red Hat Advanced Cluster Management.
User access for Red Hat Advanced Cluster Management features and for cluster management is controlled through Red Hat Advanced Cluster Management. Access for other IBM Cloud Pak for Multicloud Management capabilities is granted and managed through IBM Cloud Pak for Multicloud Management Identity and Access Management (IAM).
For example, to grant a user access to view clusters in Monitoring, you need to assign the clusters namespaces to a user's associated team through IBM Cloud Pak for Multicloud Management Identity and Access Management (IAM). To grant a user access to view clusters on Cluster Management UI pages in the console, you need to use cluster role bindings within Red Hat Advanced Cluster Management. If a user does not have access granted through both methods, the user cannot view clusters across the entire console.
For more information about granting a user access through cluster role bindings within Red Hat Advanced Cluster Management, see Using RBAC to define and apply permissions.
For more information about adding permissions to a team and adding a user to a team within IBM Cloud Pak for Multicloud Management, see Managing teams.
When Red Hat Advanced Cluster Management is not integrated, and you are using only the default IBM Cloud Pak for Multicloud Management capabilities, you only need to grant access through IBM Cloud Pak for Multicloud Management Identity and Access Management (IAM).
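For example, a cluster role binding that grants a user view access to a managed cluster might look like the following sketch. The binding name, user name, and cluster role name are placeholders (assumptions); use the cluster role that your version of Red Hat Advanced Cluster Management defines for managed cluster access.

```yaml
# Hypothetical ClusterRoleBinding (sketch): grants user1 a view role for a
# managed cluster named my-cluster. All names here are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user1-view-my-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: open-cluster-management:view:my-cluster
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1
```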
Account Administrators are not able to edit or delete labels for managed clusters. When an Account Administrator attempts to edit a label from the Clusters dashboard options menu for a cluster and clicks to save their change, the change is not saved. Cluster Administrators continue to have the same level of access for managing clusters with the console.
When an Account Administrator attempts to delete a label, an error message can display that is similar to the following message:
403 - managedclusters.cluster.open-cluster-management.io "my-cluster" is forbidden: User "user1" cannot patch resource "managedclusters" in API group "cluster.open-cluster-management.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "extension" not found.
If a user must edit the labels for a managed cluster, the user must either be granted access or have the Cluster Administrator role. If you want to grant additional access to an Account Administrator, you must grant access through IBM Cloud Pak for Multicloud Management Identity and Access Management (IAM) and through cluster role bindings within Red Hat Advanced Cluster Management.
For more information about granting a user access through cluster role bindings within Red Hat Advanced Cluster Management, see Using RBAC to define and apply permissions.
For more information about adding permissions to a team and adding a user to a team within IBM Cloud Pak for Multicloud Management, see Managing teams.
When a user is working within the IBM Cloud Pak for Multicloud Management console and launches the Red Hat Advanced Cluster Management UI, single sign-on is used to log the user in to the UI. However, if the user then logs out of the IBM Cloud Pak for Multicloud Management console, the user is not logged out of the Red Hat Advanced Cluster Management UI. The OpenShift access token is not revoked when the IBM Cloud Pak for Multicloud Management token is revoked, so the user can continue to use the UI. These access tokens are not synchronized during logon; instead, a new OpenShift token is generated that is independent of the IBM Cloud Pak for Multicloud Management token.
A user needs to log out of Red Hat Advanced Cluster Management independently of logging out of IBM Cloud Pak for Multicloud Management.
You can encounter an issue when importing a Red Hat OpenShift Container Platform 3.11 cluster from Red Hat Advanced Cluster Management.
As an alternative, you can complete the following steps to download a Red Hat OpenShift Container Platform 4.x client binary that you can use to import the cluster:
Back up your current oc and kubectl CLI binaries by running the following commands:
cp /usr/bin/oc /usr/bin/oc-3
cp /usr/bin/kubectl /usr/bin/kubectl-111
Download the 4.x CLI binary. For more information, see Getting started with the CLI.
After you download the files, extract the files and copy the CLI binary to the folder where your current CLI binary exists:
cp ./oc /usr/bin/oc
cp ./oc /usr/bin/kubectl
Import your cluster.
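The backup-and-replace steps above can be sketched as a small shell helper. The helper and the demonstration paths are hypothetical; in practice you would pass the real /usr/bin paths and run it with sufficient privileges.

```shell
#!/bin/sh
# Hypothetical helper (sketch): keep a copy of the existing CLI binary
# before overwriting it with a newly downloaded one.
backup_and_replace() {
  new="$1"   # newly downloaded binary, for example ./oc
  dest="$2"  # installed binary, for example /usr/bin/oc
  bak="$3"   # backup location, for example /usr/bin/oc-3
  if [ -f "$dest" ]; then
    cp "$dest" "$bak"    # preserve the old client
  fi
  cp "$new" "$dest"      # install the new client
}

# Demonstration with temporary files standing in for /usr/bin paths:
tmp=$(mktemp -d)
printf 'old-client' > "$tmp/oc"
printf 'new-client' > "$tmp/oc-new"
backup_and_replace "$tmp/oc-new" "$tmp/oc" "$tmp/oc-3"
```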
Note: As you are importing a cluster, you might encounter browser-related issues. If you do encounter issues, try a different browser. For instance, you might encounter a truncated import script. When this issue occurs, you can see an error message similar to the following message:
error parsing STDIN: error converting YAML to JSON
If this issue occurs, the import process did not complete. Try a different browser and generate the import script again before running the script.
cs-ca-clusterissuer not ready error, causing installation failure
When IBM Cloud Pak for Multicloud Management is installed on a cluster that has Red Hat Advanced Cluster Management 2.1.x, the user needs to run a script from the IBM GitHub repository to avoid the cs-ca-clusterissuer not ready issue.
When you import a cluster to the Red Hat Advanced Cluster Management hub cluster, you might find that the import command does not run successfully.
To solve this problem, complete the following steps:
Create a local file, such as a file named import-command.
Copy the import commands that you get from the Red Hat Advanced Cluster Management console, and paste them into the import-command file. For more information about how to get the import commands, see Importing a target managed cluster to the Red Hat Advanced Cluster Management for Kubernetes hub cluster.
Change to the directory that contains the import-command file, and run the following command.
. import-command
Note: You might need to run the same command a few more times before it completes successfully.
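The note above, which says the command might need to be run several times, can be scripted as a bounded retry loop. This is a sketch: the helper name and attempt count are arbitrary choices, and in practice you would invoke it as `retry 5 sh ./import-command`.

```shell
#!/bin/sh
# Sketch of a bounded retry loop for a command that may fail intermittently.
retry() {
  max="$1"; shift
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@"; then
      return 0           # command succeeded
    fi
    i=$((i + 1))
    sleep 1              # short pause between attempts
  done
  return 1               # gave up after $max attempts
}

# Demonstration with a stand-in command that succeeds on its third attempt:
state=$(mktemp)
echo 0 > "$state"
succeed_third_try() {
  n=$(cat "$state"); n=$((n + 1)); echo "$n" > "$state"
  [ "$n" -ge 3 ]
}
retry 5 succeed_third_try && result=ok || result=failed
```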
Data retention and summarization for Monitoring is now configured as part of the Red Hat Advanced Cluster Management observability service (multicluster-observability-operator). The observability service is now a prerequisite for viewing agent metrics, and all previous agent metrics are lost after you upgrade to IBM Cloud Pak for Multicloud Management 2.2.
When the metric summarization support is enabled in the deployment, a blank page is returned when you attempt to display metrics for a resource.
Run the following command to resolve this issue:
oc expose deployment monitoring-metricsvc-summary-policy --name monitoring-metricsummarypolicy --type ClusterIP --protocol TCP --port 9080 --target-port 9080 -n management-monitoring
This is a temporary limitation resulting from limitations in the Red Hat Advanced Cluster Management observability service.
When you install Monitoring on Red Hat OpenShift V4.2 and V4.3, you can specify the Minimum Available Replicas for HPAs. The following stateless services do not automatically increase from their minimum count (based on Minimum Available Replicas for HPAs) to a larger number of replicas as CPU grows.
For more information about installing Monitoring, see Monitoring.
In the Resource types tab, in the Favorited column, when you click to deselect a single resource, multiple resources are incorrectly deselected.
When you navigate to certain pages in the IBM Cloud Pak® for Multicloud Management console, the menu items might shorten and the logo in the header might no longer show IBM Cloud Pak® for Multicloud Management.
To return to the IBM Cloud Pak® for Multicloud Management context, click the Back button in your browser until you see the IBM Cloud Pak® for Multicloud Management logo in the header, or replace the location in the browser with the following URL:
https://HOST:port/multicloud
You are unable to deploy Helm charts that contain images on a managed cluster. To fix this error, you must configure a ClusterImagePolicy. Create a ClusterImagePolicy resource that is similar to the following example:
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  annotations:
    helm.sh/hook: post-install
    helm.sh/hook-weight: "1"
  name: ibmcloud-default-cluster-image-policy
spec:
  repositories:
  - name: <repo_name>
Applications fail to install during deployment when the ClusterImagePolicy is not configured.
Note: Be sure to configure ClusterImagePolicy. View the Cannot create a Helm release on a remote cluster section for information about configuring the policy.
To fix this error, reinstall your application by completing the following tasks:
Verify the status of your application by running the following command:
helm list --tls
To delete your application, run the following command:
helm delete releaseName --purge
Locate the ClusterImagePolicy that you need to edit so that your application images can be pulled. Run the following command:
kubectl get clusterimagepolicy
Edit the ClusterImagePolicy by running the following command:
kubectl edit clusterimagepolicy <policyname>
Reinstall your application. Run the following command:
helm install chartName
For more details, see the Helm community issue.
If a deployable that was deployed to a managed cluster through a subscription is deleted from the source location where it was stored, the deployable is not removed from the managed cluster. For instance, if a Helm release is deleted from the Helm repository, the Helm release is not removed from the managed cluster and continues to work. The deleted deployable remains on the managed cluster until the associated subscription is deleted or updated to replace the deployable.
When you are including resources into the object store, do not include multiple resources in a single object. Object stores are used to store Kubernetes resource YAML files as objects. These files define the Kubernetes resource without wrapping the resource. To include these objects in a channel, each file can define only a single Kubernetes resource.
For a subscription that uses a secret to access a channel, the secret exists on the hub cluster. If the secret is updated, the subscription is not able to detect and retrieve the changes to the secret by default. This behavior can result in the secret becoming out of sync between the subscription and the actual secret resource. Likewise, when changes are made to the dependency resources for the subscribed channel, such as ConfigMaps or secrets, the subscription is not able to detect the changes by default.
To synchronize the subscription with the updated resources, you must edit the subscription. To update a subscription, you can add a label to the subscription, such as with the following command:
kubectl label subscription <subscription-name> -n <subscription-namespace> <label-name>=<any-content>
When the subscription is changed, the subscription controller is triggered to synchronize with the referenced secret and the subscribed channel resources.
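One common pattern is to use a timestamp as the label value so that every run changes the subscription and triggers a re-sync. The following sketch only builds and prints the command; the subscription name, namespace, and label key are placeholders (assumptions).

```shell
#!/bin/sh
# Sketch: build a kubectl label command with a timestamp value so that each
# run modifies the subscription and triggers the subscription controller.
sub_name="my-subscription"   # placeholder
sub_ns="my-namespace"        # placeholder
stamp=$(date +%s)
cmd="kubectl label subscription $sub_name -n $sub_ns resync-at=$stamp --overwrite"
echo "$cmd"
# Against a live hub cluster you would then run:  eval "$cmd"
```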
The Remediation field in the detail panel for security findings becomes empty for all of your policies that are associated with your cluster. The Remediation field becomes empty because there are communication issues with the managed cluster.
local-cluster in console search results
Search returns and lists each cluster with the resource that you search for. For resources in the hub cluster, the cluster name is displayed as local-cluster.
When you create a certificate policy without a certificate policy controller for a third-party cluster, you might receive the following violation message:
mapping error from raw object: no matches for kind "CertificatePolicy" in version "policies.ibm.com/v1alpha1"
You must unbind the certificate policy from your third-party cluster. Complete the following steps to unbind each of your certificate policies:
Log in to your IBM Cloud Pak for Multicloud Management hub cluster.
From the navigation menu, click Automate infrastructure > Clusters.
Create a unique label for each of your clusters that have IBM Cloud Pak for Multicloud Management services installed. Select the Options icon > Edit Labels.
Add a new label for each of your clusters with IBM Cloud Pak for Multicloud Management services installed by selecting the Add icon. For example, create the following label:
cloud = common-services
From the navigation menu, click Govern risk > Policies tab to view your policies.
Edit your certificate policy by updating the placement policy. Update the spec.clusterLabels parameter by removing and adding labels. Your placement policy might resemble the following content:
spec:
  clusterLabels:
    matchExpressions:
    - key: cloud
      operator: In
      values:
      - common-services
Your certificate policies are unbound from your third-party clusters.
The limit for the returned content of a command that is issued with the Visual Web Terminal is 200 KB. If the returned information exceeds 200 KB, an error is displayed. The workaround is to enter the command using a terminal window that is outside of the Visual Web Terminal.
The monitoring chart includes a standalone installation option that specifies whether monitoring is available on the IBM Cloud Pak for Multicloud Management hub cluster. Valid values for standalone are true or false. If the value is set to true, certain monitoring service features that are needed for the IBM Cloud Pak for Multicloud Management hub cluster are unavailable. For example, the Grafana dashboard list does not include the IBM Cloud Pak for Multicloud Management dashboard that is needed to view metrics for your managed clusters.
Use the following Helm command to check the value of the standalone option.
helm get values monitoring --tls|grep standalone
If standalone is set as true, set it to false to enable monitoring service features for the hub cluster.
To update the value, complete the following steps:
Open the monitoring release page and click Upgrade.
Select mgmt-charts in the Confirm Repository of Chart section.
Select 1.7.1 in the Version section.
Select Reuse values in the Using previous configured values section.
In the Parameters section, clear the checkbox for Standalone deployment.

Messages in the monitoring-metric pod indicate that a "net/http: HTTP/1.x transport connection" error occurred. To check the pod log, follow these steps:
Locate the monitoring-metric pod by running the following command.
oc get pods -n management-monitoring | grep "monitoring-metric-"
View the log by running the following command.
oc logs <monitoring-metric-pod-id> -n management-monitoring
Here are the sample log entries that are associated with this problem:
[2021-03-24 08-21-17.719] [ERROR] [restInterface.doPromQLQuery] [12757] [Error when attempting to execute promQL HTTP request. Query: https://observability-observatorium-observatorium-api.open-cluster-management-observability.svc.cluster.local:8080/api/metrics/v1/default/api/v1/query?query=esx_server_overallCpuUtil%7BresourceId%3D~%22a53c8218-fbd5-317f-8370-9724d07d8a50%22%2Cibm_type%3D%22%22%7D[14400s]+offset+1s, TenantID: id-mycluster-account]
[2021-03-24 08-21-17.719] [ERROR] [restInterface.doPromQLQuery] [12757] [Request execution error was: Get "https://observability-observatorium-observatorium-api.open-cluster-management-observability.svc.cluster.local:8080/api/metrics/v1/default/api/v1/query?query=esx_server_overallCpuUtil%7BresourceId%3D~%22a53c8218-fbd5-317f-8370-9724d07d8a50%22%2Cibm_type%3D%22%22%7D[14400s]+offset+1s": net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x00\x00\x18\x04\x00\x00\x00\x00\x00\x00\x05\x00\x10\x00\x00\x00\x03\x00\x00\x00\xfa\x00\x06\x00\x10\x01@\x00\x04\x00\x10\x00\x00"]
Messages in the Red Hat® Advanced Cluster Management for Kubernetes Observability pod observability-observatorium-observatorium-api indicate that http2 errors occurred in processing incoming requests. To check the pod log, follow these steps:
Locate the observability-observatorium-observatorium-api pods by running the following command.
oc get pods -n open-cluster-management-observability | grep observability-observatorium-observatorium-api
View the log by running the following command.
oc logs <pod_name> -n open-cluster-management-observability
Here are the sample log entries that are associated with this problem:
2021/03/24 08:21:17 http2: server: error reading preface from client 10.128.2.82:52316: bogus greeting "GET /api/metrics/v1/defa"
An internal error in the monitoring-metric pod prevents it from successfully communicating with the Red Hat® Advanced Cluster Management for Kubernetes Observability service. This error causes all metric retrieval requests to fail. This
issue will be fixed in a future release of IBM Cloud Pak® for Multicloud Management.
To resolve this issue, complete the following steps.
Locate the failing monitoring-metric pod by running the following command.
oc get pods -n management-monitoring | grep "monitoring-metric-"
Delete the pod by running the following command. A new instance of the pod is created automatically.
oc delete pod <monitoring-metric-pod-id> -n management-monitoring
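The two steps above can be combined into a short script. The following sketch shows the pod-name extraction against a sample line of `oc get pods` output (the pod name and column layout are stand-ins), with the live-cluster commands shown as comments.

```shell
#!/bin/sh
# Sketch: pick the monitoring-metric pod name out of `oc get pods` output.
# The sample line stands in for live cluster output (assumption).
sample='monitoring-metric-7f9c8b5d4-abcde   1/1   Running   0   4d'
pod=$(printf '%s\n' "$sample" | grep 'monitoring-metric-' | awk '{print $1}')
echo "$pod"
# Against a live cluster:
#   pod=$(oc get pods -n management-monitoring | grep 'monitoring-metric-' | awk '{print $1}')
#   oc delete pod "$pod" -n management-monitoring
```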
LDAP usernames are case-sensitive. You must use the name exactly the way it is configured in your LDAP directory.
In OpenShift clusters with multitenant isolation mode, each project is isolated by default. Network traffic is not allowed between pods or services in different projects.
To resolve the issue, disable network isolation in the ibm-common-services project.
Get the network-operator pod name.
oc get pods -n openshift-network-operator | grep network-operator
Get the default multitenant network settings information.
oc exec -n openshift-network-operator <network-operator-pod-name> -- cat /bindata/network/openshift-sdn/004-multitenant.yaml > 004-multitenant.yaml
Add the following content to the 004-multitenant.yaml file to set the ibm-common-services NETID to 0.
apiVersion: network.openshift.io/v1
kind: NetNamespace
metadata:
  name: ibm-common-services
netid: 0
netname: ibm-common-services
Generate a configmap by using the updated 004-multitenant.yaml file.
oc create configmap 004-multitenant -n openshift-network-operator --from-file=./004-multitenant.yaml
Open the network-operator deployment for editing.
oc edit deploy -n openshift-network-operator network-operator
Add the following content in the volumes section.
volumes:
- configMap:
    name: 004-multitenant
  name: 004-multitenant
Add the following content in the volumeMounts section.
volumeMounts:
- mountPath: /bindata/network/openshift-sdn/004-multitenant.yaml
  subPath: 004-multitenant.yaml
  name: 004-multitenant
  readOnly: true
Wait for the network-operator deployment to restart.
Check the ibm-common-services NETID. The NETID must be 0.
oc get netnamespaces ibm-common-services
When you view your resources from the Govern risk dashboard, the policies and VMs that are listed in the Profiles section might not appear. If the refresh frequency is too high, the load can overwhelm the Infrastructure Management pod and cause it to crash, which leads to an empty section.
You can mitigate the issue by adjusting the automatic refresh rate, or by manually refreshing the dashboard in your browser.
When you enable Vulnerability Advisor (VA) scanning in the ImagePolicy and ClusterImagePolicy specification, you are unable to create workloads in the associated namespaces. The VA scanning integration with image security enforcement only supports the built-in IBM Cloud Pak for Multicloud Management registry. For more information, see Scanning an image registry with the Vulnerability Advisor (VA).
Installing a Helm chart to the default namespace fails to launch the Helm release. To work around the problem, deploy the Helm chart to a namespace where the container image security policies are met.
For DEM on Liberty, when you select custom metrics to view, the displayed data is not consistent if you select different filters.
In the Browser console, in the Top operating systems widget, the same operating system version is displayed as two different versions.
For DEM on HTTP server and NGINX server, the memory saturation metric gives the memory usage of the whole host node, not of the pod itself, so it cannot correctly represent the real memory usage of the pod.
This problem happens when the Kubernetes cluster is not accessible or not responsive, and is resolved when the cluster can be accessed again. This is a known issue of Kubernetes. For more information, see leaderelection panic when failing to renew lease.
During an upgrade of IBM Cloud Pak for Multicloud Management from Version 2.0.0 to Version 2.2.0, a Helm-based operator continues to create secrets even when it encounters an ImagePullBackOff error.
Run these commands to identify such an issue in your cluster:
Check the upgrade jobs to see whether there is any ImagePullBackOff issue.
oc get pods -A | grep upgrade-job
If you see any failed pod due to an ImagePullBackOff error, check the secrets that are created for that pod.
oc get secrets | grep <pod-name>
You see multiple secrets created for the service account with type kubernetes.io/dockercfg and kubernetes.io/service-account-token.
Get the details of the failed job pod.
oc describe pod <pod-name-of-the-failed-job>
You might see an ImagePullBackOff error similar to the following error:
Warning Failed 34s (x524 over 120m) kubelet, worker2, Error: ImagePullBackOff
To resolve the issue, contact IBM Support. See the available support options.
For more information about the issue, see the operator-sdk issue.
While installing IBM Cloud Pak for Multicloud Management on Red Hat Advanced Cluster Management for Kubernetes 2.1 with common services 3.5.6, common services and other modules fail to install. A script is required to resolve this issue.
During IBM Cloud Pak for Multicloud Management installation, while the pods are starting up, check whether the secret cs-ca-certificate-secret is created in the ibm-common-services namespace. There are two ways to check
the secret:
Run the following command; the secret appears in the output once it is generated.
watch -n 30 'oc get secrets -n ibm-common-services | grep cs-ca-certificate-secret'
Or
Run the following command intermittently until you see the secret listed in the list of secrets for that namespace.
oc get secrets -n ibm-common-services | grep cs-ca-certificate-secret
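The intermittent check above can also be scripted as a bounded poll that waits until a command produces output. This is a sketch: the attempt count and delay are arbitrary choices, and the live-cluster usage is shown only as a comment.

```shell
#!/bin/sh
# Sketch: run a command repeatedly until it prints something, up to a limit.
wait_for_output() {
  max="$1"; delay="$2"; shift 2
  i=1
  while [ "$i" -le "$max" ]; do
    out=$("$@")
    if [ -n "$out" ]; then
      echo "$out"        # command produced output; report and stop
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1               # no output after $max attempts
}

# Against a live cluster you might run:
#   wait_for_output 60 30 sh -c \
#     'oc get secrets -n ibm-common-services | grep cs-ca-certificate-secret'
# Demonstration with a stand-in command:
found=$(wait_for_output 3 1 echo 'cs-ca-certificate-secret')
```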
Run the following command to create the necessary secret:
bash <(curl -s https://raw.githubusercontent.com/IBM/cp4mcm-samples/master/scripts/cp4mcm-rhacm21-cp-issuer-secret.sh)
Installation of IBM Cloud Pak for Multicloud Management fails on an OpenShift Container Platform cluster that has other IBM Cloud Paks installed with IBM Cloud Platform Common Services version 3.6.x.
To resolve this issue, you must install IBM Cloud Pak for Multicloud Management on a different OpenShift Container Platform cluster.