How To
Summary
This article includes the steps to uninstall Data Foundation in an internal mode deployment.
Objective
This article is not intended to remove only certain components of Data Foundation; it uninstalls all of Data Foundation's components.
Use the steps in this section to uninstall Data Foundation.
Environment
Steps
Uninstall Annotations
Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster:
uninstall.ocs.openshift.io/cleanup-policy: delete
uninstall.ocs.openshift.io/mode: graceful

These annotations should be fully understood before applying them to an OCS cluster. The following table provides information on the different values that can be used with these annotations:
Uninstall annotations descriptions
cleanup-policy = delete
Default value. Rook cleans up the physical drives and the DataDirHostPath
cleanup-policy = retain
Rook does not clean up the physical drives and the DataDirHostPath
mode = graceful
Rook and NooBaa pause the uninstall process until the administrator/user removes the Persistent Volume Claims (PVCs) and Object Bucket Claims (OBCs)
mode = forced
Rook and NooBaa proceed with uninstalling even if PVCs/OBCs provisioned using Rook and NooBaa, respectively, still exist
Edit the value of the annotation to change the cleanup policy or the uninstall mode.
$ oc annotate storagecluster -n openshift-storage ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy="<CLEANUP_POLICY>" --overwrite
$ oc annotate storagecluster -n openshift-storage ocs-storagecluster uninstall.ocs.openshift.io/mode="<MODE>" --overwrite
The expected output for both commands:
storagecluster.ocs.openshift.io/ocs-storagecluster annotated
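Before proceeding, you may want to confirm which values are currently set. The snippet below is a small, hypothetical helper that extracts the two uninstall annotations from the JSON returned by `oc get ... -o jsonpath='{.metadata.annotations}'`; the sample blob stands in for live cluster output.

```shell
# Hypothetical helper: print the uninstall annotations from the annotation
# JSON of the StorageCluster. On a live cluster you would feed it:
#   oc get storagecluster ocs-storagecluster -n openshift-storage \
#     -o jsonpath='{.metadata.annotations}'
uninstall_annotations() {
  # Split the JSON object on commas, keep only the uninstall keys,
  # and strip the JSON punctuation for readability.
  tr ',' '\n' | grep 'uninstall.ocs.openshift.io' | tr -d '{}"'
}

# Offline example with a sample annotation blob:
SAMPLE='{"uninstall.ocs.openshift.io/cleanup-policy":"delete","uninstall.ocs.openshift.io/mode":"graceful"}'
echo "$SAMPLE" | uninstall_annotations
```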
Prerequisites
- Ensure that the Data Foundation cluster is in a healthy state.
The uninstall process can fail if some of the pods are not terminated successfully due to insufficient resources or node availability.
If the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling Data Foundation.
- Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by Data Foundation.
If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources that consumed them.
Procedure
- Delete the volume snapshots that are using Data Foundation
List the volume snapshots from all the namespaces.
$ oc get volumesnapshot --all-namespaces

From the output of the previous command, identify and delete the volume snapshots that are using Data Foundation.
$ oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>

<VOLUME-SNAPSHOT-NAME> is the name of the volume snapshot.
<NAMESPACE> is the project namespace.
- Delete PVCs and OBCs that are using Data Foundation. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use Data Foundation are deleted. If you want to delete the Storage Cluster without deleting the PVCs beforehand, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphan PVCs and OBCs in the system.
- Delete OpenShift Container Platform monitoring stack PVCs using Data Foundation.
For more information, see the section Removing the monitoring stack from Data Foundation.
- Delete OpenShift Container Platform Registry PVCs using Data Foundation.
For more information, see the section Removing OpenShift Container Platform registry from Data Foundation.
- Delete OpenShift Container Platform logging PVCs using Data Foundation.
For more information, see the section Removing the cluster logging operator from Data Foundation.
- Delete other PVCs and OBCs provisioned using Data Foundation.
The following script is a sample script to identify the PVCs and OBCs provisioned using Data Foundation. The script ignores the PVCs that are used internally by Data Foundation.
#!/bin/bash
RBD_PROVISIONER="openshift-storage.rbd.csi.ceph.com"
CEPHFS_PROVISIONER="openshift-storage.cephfs.csi.ceph.com"
NOOBAA_PROVISIONER="openshift-storage.noobaa.io/obc"
RGW_PROVISIONER="openshift-storage.ceph.rook.io/bucket"
NOOBAA_DB_PVC="noobaa-db"
NOOBAA_BACKINGSTORE_PVC="noobaa-default-backing-store-noobaa-pvc"

# Find all the OCS StorageClasses
OCS_STORAGECLASSES=$(oc get storageclasses | grep -e "$RBD_PROVISIONER" -e "$CEPHFS_PROVISIONER" -e "$NOOBAA_PROVISIONER" -e "$RGW_PROVISIONER" | awk '{print $1}')

# List PVCs and OBCs in each of the StorageClasses
for SC in $OCS_STORAGECLASSES
do
    echo "======================================================================"
    echo "$SC StorageClass PVCs and OBCs"
    echo "======================================================================"
    oc get pvc --all-namespaces --no-headers 2>/dev/null | grep "$SC" | grep -v -e "$NOOBAA_DB_PVC" -e "$NOOBAA_BACKINGSTORE_PVC"
    oc get obc --all-namespaces --no-headers 2>/dev/null | grep "$SC"
    echo
done

Note: Omit RGW_PROVISIONER for cloud platforms.

Delete the OBCs.
$ oc delete obc <obc-name> -n <project-name>

<obc-name> is the name of the OBC.
<project-name> is the name of the project.

Delete the PVCs.

$ oc delete pvc <pvc-name> -n <project-name>

<pvc-name> is the name of the PVC.
<project-name> is the name of the project.

Note: Ensure that you have removed any custom backing stores, bucket classes, and so on, that were created in the cluster.
Delete the Storage Cluster object and wait for the removal of the associated resources:
For ODF 4.18 and lower versions:

$ oc delete -n openshift-storage storagesystem --all --wait=true

For ODF 4.19 and higher versions:

$ oc delete -n openshift-storage storagecluster --all --wait=true

IMPORTANT
If the resources are not deleted within 5 minutes, describe the object and determine the failure through the events or status:
$ oc describe -n openshift-storage <type> <name>

<type> specifies the type of resource. It can be an API resource or object.
<name> specifies the name of the resource.

Example:
$ oc describe -n openshift-storage storagecluster ocs-storagecluster

- Determine the required resolution for the failure and implement it.
Check for cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed.

$ oc get pods -n openshift-storage | grep -i cleanup
NAME                       READY   STATUS      RESTARTS   AGE
cluster-cleanup-job-<xx>   0/1     Completed   0          8m35s
cluster-cleanup-job-<yy>   0/1     Completed   0          8m35s
cluster-cleanup-job-<zz>   0/1     Completed   0          8m35s
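As a quick sanity check, the filter below reports any cleanup job that is not yet Completed. It is a sketch operating on the tabular `oc get pods` output; the sample rows stand in for live output.

```shell
# Sketch: print cleanup jobs whose STATUS column ($3) is not "Completed".
# Input is the output of: oc get pods -n openshift-storage | grep -i cleanup
incomplete_cleanup_jobs() {
  awk '$3 != "Completed" {print $1}'
}

# Offline example with sample pod rows (names are illustrative):
incomplete_cleanup_jobs <<'EOF'
cluster-cleanup-job-aaa   0/1   Completed   0   8m35s
cluster-cleanup-job-bbb   0/1   Running     0   8m35s
EOF
```

Empty output means every cleanup job has finished.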
Note: If the cluster-cleanup-jobs are not created, manually delete all the files under dataDirHostPath (default: /var/lib/rook) on each storage node in the cluster, where the Ceph daemons store their configuration.
Note: For ODF 4.18 and higher versions, Ceph replicates the OSD metadata across different locations on the OSD disk. Make sure that all the OSD metadata is deleted from the disk; otherwise, the reinstall of ODF will fail on those disks. Follow this document to clean up the OSD disks.
Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default).

$ for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o name); do oc debug ${i} -- chroot /host ls -l /var/lib/rook; done

- If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mappings from the OSD devices on all the Data Foundation nodes.

Create a debug pod and chroot to the host on the storage node.
$ oc debug node/<node-name>
$ chroot /host

<node-name> is the name of the node.

Get the device names and make a note of the OpenShift Container Storage devices.
$ dmsetup ls
ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)

Remove the mapped device.
$ cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt

Note: If the above command gets stuck due to insufficient privileges, run the following commands:
- Press CTRL+Z to exit the above command.
Find the PID of the process that was stuck.

$ ps -ef | grep crypt

Terminate the process using the kill command.

$ kill -9 <PID>

<PID> is the process ID.

Verify that the device name is removed.
$ dmsetup ls
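To double-check across all storage nodes, the filter below flags any leftover OSD dm-crypt mapping in `dmsetup ls` output. This is a hedged sketch; the sample lines stand in for output gathered via `oc debug node/<node-name> -- chroot /host dmsetup ls`.

```shell
# Sketch: print device-mapper entries that still look like OSD dm-crypt
# mappings. Empty output means the mappings were removed.
leftover_dmcrypt() {
  grep -- '-block-dmcrypt' || true
}

# Offline example: one leftover mapping among other DM devices.
leftover_dmcrypt <<'EOF'
ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)
rhel-root (253:0)
EOF
```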
Delete the namespace and wait till the deletion is complete. You need to switch to another project if openshift-storage is the active project.
For example:
$ oc project default
$ oc delete project openshift-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error.

$ oc get project openshift-storage

Note: While uninstalling Data Foundation, if the namespace is not deleted completely and remains in the Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated.
Note: Make sure to remove the disaster-protection finalizers in mon and mon-endpoints after this step. For more information, see Removing the disaster-protection finalizers in mon and mon endpoints.
- Delete the local storage operator configurations if you have deployed Data Foundation using local storage devices. See Removing local storage operator configurations.
Unlabel the storage nodes.
$ oc label nodes --all cluster.ocs.openshift.io/openshift-storage-
$ oc label nodes --all topology.rook.io/rack-
10. Remove the Data Foundation taint if the nodes were tainted.
$ oc adm taint nodes --all node.ocs.openshift.io/storage-
11. Confirm all PVs provisioned using Data Foundation are deleted. If there is any PV left in the Released state, delete it.
$ oc get pv
$ oc delete pv <pv-name>

<pv-name> is the name of the PV.
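Step 11 can be scripted. The sketch below filters the `oc get pv` table for PVs in the Released state (the STATUS column) so they can be passed to `oc delete pv`; the sample rows are illustrative.

```shell
# Sketch: print names of PVs whose STATUS column ($5) reads "Released".
released_pvs() {
  awk '$5 == "Released" {print $1}'
}

# Offline example with sample `oc get pv` rows (names are illustrative):
released_pvs <<'EOF'
pvc-aaa   40Gi   RWO   Delete   Bound      ns/app-pvc   ocs-storagecluster-ceph-rbd
pvc-bbb   40Gi   RWO   Delete   Released   ns/old-pvc   ocs-storagecluster-ceph-rbd
EOF
# On a live cluster:
#   oc get pv --no-headers | released_pvs | xargs -r oc delete pv
```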
12. Delete the Multicloud Object Gateway storageclass.
$ oc delete storageclass openshift-storage.noobaa.io --wait=true --timeout=5m
13. Remove CustomResourceDefinitions.
$ oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io cephblockpoolradosnamespaces.ceph.rook.io cephbucketnotifications.ceph.rook.io cephbuckettopics.ceph.rook.io cephcosidrivers.ceph.rook.io cephfilesystemmirrors.ceph.rook.io cephfilesystemsubvolumegroups.ceph.rook.io csiaddonsnodes.csiaddons.openshift.io networkfences.csiaddons.openshift.io reclaimspacecronjobs.csiaddons.openshift.io reclaimspacejobs.csiaddons.openshift.io storageclassrequests.ocs.openshift.io storageconsumers.ocs.openshift.io storageprofiles.ocs.openshift.io volumereplicationclasses.replication.storage.openshift.io volumereplications.replication.storage.openshift.io --wait=true --timeout=5m
14. For Fusion 2.11 (ODF v4.19+) clusters only: The storageclient resource does not have a namespace defined in the yaml. Therefore, when the project is deleted, this resource can be missed. This resource needs to be deleted and will block reinstallation if not removed. If you've arrived at this step, the resource has likely been marked for deletion and just needs a finalizer patch.
Validate via:
$ oc get storageclient
If it exists, delete it:

$ oc delete storageclient

15. (Optional) If cluster-wide encryption is enabled using the Kubernetes authentication method for Key Management System (KMS), you need to remove the clusterrolebinding.
$ oc delete clusterrolebinding vault-tokenreview-binding
16. (Optional) To ensure that the vault keys are deleted permanently, you need to manually delete the metadata associated with the vault key.
IMPORTANT
Execute this step only if the Vault Key/Value (KV) secret engine API, version 2 is used for cluster-wide encryption with KMS, since the vault keys are marked as deleted and not permanently deleted during the uninstallation of OpenShift Container Storage. You can always restore it later if required.
List the keys in the vault.
$ vault kv list <backend_path>

<backend_path> is the path in the vault where the encryption keys are stored.

For example:

$ vault kv list kv-v2

Example output:

Keys
-----
NOOBAA_ROOT_SECRET_PATH/
rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8
rook-ceph-osd-encryption-key-ocs-deviceset-thin-1-data-0sq227
rook-ceph-osd-encryption-key-ocs-deviceset-thin-2-data-0xzszb

List the metadata associated with the vault key.
$ vault kv get kv-v2/<key>

For the Multicloud Object Gateway (MCG) key:

$ vault kv get kv-v2/NOOBAA_ROOT_SECRET_PATH/<key>

<key> is the encryption key.

For example:

$ vault kv get kv-v2/rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8

Example output:
====== Metadata ======
Key              Value
---              -----
created_time     2021-06-23T10:06:30.650103555Z
deletion_time    2021-06-23T11:46:35.045328495Z
destroyed        false
version          1

Delete the metadata.
$ vault kv metadata delete kv-v2/<key>

For the MCG key:

$ vault kv metadata delete kv-v2/NOOBAA_ROOT_SECRET_PATH/<key>

<key> is the encryption key.

For example:

$ vault kv metadata delete kv-v2/rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8

Example output:

Success! Data deleted (if it existed) at: kv-v2/metadata/rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8

- Repeat these steps to delete the metadata associated with all the vault keys.
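Repeating the metadata deletion for every key can be scripted. The filter below pulls the OSD encryption key names out of the `vault kv list` output; the commented loop shows how it might drive `vault kv metadata delete`. Treat this as a sketch and verify the key list before deleting anything.

```shell
# Sketch: extract OSD encryption key names from `vault kv list` output.
osd_encryption_keys() {
  grep '^rook-ceph-osd-encryption-key'
}

# Offline example using a listing like the one above:
osd_encryption_keys <<'EOF'
Keys
-----
NOOBAA_ROOT_SECRET_PATH/
rook-ceph-osd-encryption-key-ocs-deviceset-thin-0-data-0m27q8
rook-ceph-osd-encryption-key-ocs-deviceset-thin-1-data-0sq227
EOF
# On a live system, something like:
#   for KEY in $(vault kv list kv-v2 | osd_encryption_keys); do
#     vault kv metadata delete "kv-v2/${KEY}"
#   done
```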
17. To ensure that Data Foundation is uninstalled completely, verify on the OpenShift Container Platform Web Console that Data Foundation no longer appears under Storage.
Removing local storage operator configurations
Use the instructions in this section only if you have deployed Data Foundation using local storage devices.
Note: For Data Foundation deployments only using localvolume resources, go directly to step 8.
Procedure
- Identify the LocalVolumeSet and the corresponding StorageClassName being used by Data Foundation.

Set the variable SC to the StorageClass providing the LocalVolumeSet.

$ export SC="<StorageClassName>"

Delete the LocalVolumeSet.

$ oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage

Delete the local storage PVs for the given StorageClassName.

$ oc get pv | grep $SC | awk '{print $1}' | xargs oc delete pv

Delete the StorageClassName.

$ oc delete sc $SC

Delete the symlinks created by the LocalVolumeSet.

$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

Delete the LocalVolumeDiscovery.

$ oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage

Remove the LocalVolume resources (if any).
Use the following steps to remove the LocalVolume resources that were used to provision the PVs in the current or previous Data Foundation version. Also, ensure that these resources are not being used by other tenants on the cluster.
For each of the local volumes, do the following:
- Identify the LocalVolume and the corresponding StorageClassName being used by Data Foundation.

Set the variable LV to the name of the LocalVolume and the variable SC to the name of the StorageClass.

For example:

$ LV=localblock
$ SC=localblock

Delete the local volume resource.

$ oc delete localvolume -n openshift-local-storage --wait=true $LV

Delete the remaining PVs and StorageClasses if they exist.

$ oc delete pv -l storage.openshift.com/local-volume-owner-name=${LV} --wait --timeout=5m
$ oc delete storageclass $SC --wait --timeout=5m

Clean up the artifacts from the storage nodes for that resource.

$ [[ ! -z $SC ]] && for i in $(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/${i} -- chroot /host rm -rfv /mnt/local-storage/${SC}/; done

Example output:

Starting pod/node-xxx-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Starting pod/node-yyy-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Starting pod/node-zzz-debug ...
To use host binaries, run `chroot /host`
removed '/mnt/local-storage/localblock/nvme2n1'
removed directory '/mnt/local-storage/localblock'
Removing debug pod ...
Delete the openshift-local-storage namespace and wait till the deletion is complete. You will need to switch to another project if the openshift-local-storage namespace is the active project.

For example:

$ oc project default
$ oc delete project openshift-local-storage --wait=true --timeout=5m

The project is deleted if the following command returns a NotFound error.

$ oc get project openshift-local-storage
Removing the monitoring stack from Data Foundation
Use this section to clean up the monitoring stack from Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace.
Prerequisites
- PVCs are configured to use the OpenShift Container Platform monitoring stack.
For more information, see configuring monitoring stack.
Procedure
List the pods and PVCs that are currently running in the openshift-monitoring namespace.

$ oc get pod,pvc -n openshift-monitoring

Example output:

NAME                                              READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                           3/3     Running   0          8d
pod/alertmanager-main-1                           3/3     Running   0          8d
pod/alertmanager-main-2                           3/3     Running   0          8d
pod/cluster-monitoring-operator-84457656d-pkrxm   1/1     Running   0          8d
pod/grafana-79ccf6689f-2ll28                      2/2     Running   0          8d
pod/kube-state-metrics-7d86fb966-rvd9w            3/3     Running   0          8d
pod/node-exporter-25894                           2/2     Running   0          8d
pod/node-exporter-4dsd7                           2/2     Running   0          8d
pod/node-exporter-6p4zc                           2/2     Running   0          8d
pod/node-exporter-jbjvg                           2/2     Running   0          8d
pod/node-exporter-jj4t5                           2/2     Running   0          6d18h
pod/node-exporter-k856s                           2/2     Running   0          6d18h
pod/node-exporter-rf8gn                           2/2     Running   0          8d
pod/node-exporter-rmb5m                           2/2     Running   0          6d18h
pod/node-exporter-zj7kx                           2/2     Running   0          8d
pod/openshift-state-metrics-59dbd4f654-4clng      3/3     Running   0          8d
pod/prometheus-adapter-5df5865596-k8dzn           1/1     Running   0          7d23h
pod/prometheus-adapter-5df5865596-n2gj9           1/1     Running   0          7d23h
pod/prometheus-k8s-0                              6/6     Running   1          8d
pod/prometheus-k8s-1                              6/6     Running   1          8d
pod/prometheus-operator-55cfb858c9-c4zd9          1/1     Running   0          6d21h
pod/telemeter-client-78fc8fc97d-2rgfp             3/3     Running   0          8d

NAME                                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0   Bound   pvc-0d519c4f-15a5-11ea-baa0-026d231574aa   40Gi   RWO   ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1   Bound   pvc-0d5a9825-15a5-11ea-baa0-026d231574aa   40Gi   RWO   ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2   Bound   pvc-0d6413dc-15a5-11ea-baa0-026d231574aa   40Gi   RWO   ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0        Bound   pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa   40Gi   RWO   ocs-storagecluster-ceph-rbd   8d
persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1        Bound   pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa   40Gi   RWO   ocs-storagecluster-ceph-rbd   8d

Edit the monitoring configmap.

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

Remove any config sections that reference the Data Foundation storage classes as shown in the following example and save it.

Before editing:

[...]
apiVersion: v1
data:
  config.yaml: |
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: my-alertmanager-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
    prometheusK8s:
      volumeClaimTemplate:
        metadata:
          name: my-prometheus-claim
        spec:
          resources:
            requests:
              storage: 40Gi
          storageClassName: ocs-storagecluster-ceph-rbd
kind: ConfigMap
metadata:
  creationTimestamp: "2019-12-02T07:47:29Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "22110"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: fd6d988b-14d7-11ea-84ff-066035b9efa8
[...]

After editing:

[...]
apiVersion: v1
data:
  config.yaml: |
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-21T13:07:05Z"
  name: cluster-monitoring-config
  namespace: openshift-monitoring
  resourceVersion: "404352"
  selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config
  uid: d12c796a-0c5f-11ea-9832-063cd735b81c
[...]

In this example, the alertmanagerMain and prometheusK8s monitoring components are using the Data Foundation PVCs.

Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes.

$ oc delete -n openshift-monitoring pvc <pvc-name> --wait=true

<pvc-name> is the name of the PVC.
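If several monitoring PVCs need to be removed, they can be identified in bulk. The sketch below filters the `oc get pvc` table for claims backed by an `ocs-storagecluster-*` StorageClass (column 6); the sample rows stand in for live output.

```shell
# Sketch: print PVC names whose StorageClass column ($6) starts with
# "ocs-storagecluster". Input: `oc get pvc -n openshift-monitoring --no-headers`.
ocs_backed_pvcs() {
  awk '$6 ~ /^ocs-storagecluster/ {print $1}'
}

# Offline example (rows are illustrative):
ocs_backed_pvcs <<'EOF'
my-alertmanager-claim-alertmanager-main-0   Bound   pvc-0d519c4f   40Gi   RWO   ocs-storagecluster-ceph-rbd   8d
some-other-claim                            Bound   pvc-12345678   10Gi   RWO   gp2                           8d
EOF
# Live: oc get pvc -n openshift-monitoring --no-headers | ocs_backed_pvcs \
#         | xargs -r -n1 oc delete -n openshift-monitoring pvc --wait=true
```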
Removing the OpenShift Container Platform registry from Data Foundation
Use this section to clean up the OpenShift Container Platform registry from Data Foundation. If you want to configure alternative storage, see Image registry.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace.
Prerequisites
- The image registry should have been configured to use a Data Foundation PVC.
Procedure
Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section.

$ oc edit configs.imageregistry.operator.openshift.io

Before editing:

[...]
storage:
  pvc:
    claim: registry-cephfs-rwx-pvc
[...]

After editing:

[...]
storage:
  emptyDir: {}
[...]

In this example, the PVC is called registry-cephfs-rwx-pvc, which is now safe to delete.

Delete the PVC.

$ oc delete pvc <pvc-name> -n openshift-image-registry --wait=true

<pvc-name> is the name of the PVC.
Removing the cluster logging operator from Data Foundation
Use this section to clean up the cluster logging operator from Data Foundation.
The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace.
Prerequisites
- The cluster logging instance should have been configured to use the Data Foundation PVCs.
Procedure
Remove the ClusterLogging instance in the namespace.

$ oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m

The PVCs in the openshift-logging namespace are now safe to delete.

Delete the PVCs.

$ oc delete pvc <pvc-name> -n openshift-logging --wait=true

<pvc-name> is the name of the PVC.
Removing RADOS Gateway (RGW) component
Use these instructions to disable only the RGW component.
Set ignore for RGW in the storage cluster.

$ oc patch -n openshift-storage storagecluster ocs-storagecluster \
    --type merge \
    --patch '{"spec": {"managedResources": {"cephObjectStores": {"reconcileStrategy": "ignore"}}}}'
$ oc patch -n openshift-storage storagecluster ocs-storagecluster \
    --type merge \
    --patch '{"spec": {"managedResources": {"cephObjectStoreUsers": {"reconcileStrategy": "ignore"}}}}'

Delete all RGW OBCs.

$ oc get obc --all-namespaces --no-headers | grep ceph
# delete all OBCs from the list above
$ oc delete obc <obc-name> -n <project-name>

Delete RGW.

$ oc delete sc ocs-storagecluster-ceph-rgw
$ oc delete route s3-rgw
$ oc delete service rook-ceph-rgw-ocs-storagecluster-cephobjectstore
$ oc delete CephObjectStore ocs-storagecluster-cephobjectstore
Removing the Multicloud Object Gateway component
Use these instructions to disable only the MCG component.
Set ignore for Multicloud Object Gateway by editing the storage cluster.

$ oc edit storagecluster

Add the two lines marked with arrows.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  multiCloudGateway:           #<---
    reconcileStrategy: ignore  #<---

Delete all OBCs.

$ oc get obc --all-namespaces --no-headers | grep noobaa
# delete all OBCs from the list above
$ oc delete obc <obc-name> -n <project-name>

Delete NooBaa.

$ oc delete noobaa noobaa

- Remove the default bucket as described below.
Removing the default bucket created by the Multicloud Object Gateway
The Multicloud Object Gateway (MCG) creates a default bucket in the cloud. You need to remove this default bucket.
NOTE: If MCG is not deployed, you can skip this section.
Prerequisites
- A running Data Foundation Platform.
- Download the MCG command-line interface for easier management:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms
# yum install mcg
IMPORTANT
Specify the appropriate architecture for enabling the repositories using the subscription manager.
- For IBM Power, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms
- For IBM Z infrastructure, use the following command:
# subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms
- Alternatively, you can install the MCG package from the Data Foundation RPMs found at the Download Data Foundation page.
NOTE: Choose the correct Product Variant according to your architecture.
Procedure
- Identify the target bucket that you need to remove:
$ noobaa backingstore status noobaa-default-backing-store
Example output:
[...]
# BackingStore spec:
s3Compatible:
endpoint: https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443
secret:
name: rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user
namespace: openshift-storage
signatureVersion: v4
targetBucket: nb.1648032196977.apps.cluster1.vmware.ocp.team
type: s3-compatible
# Secret data:
[...]
In this example, rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443 is the endpoint which indicates the service that this backing store is using, for example, rook-ceph, Amazon Web Service (AWS) S3, Azure, and so on. The value in the targetBucket field is the name of the bucket that you need to delete.
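The targetBucket value can also be pulled out of the backingstore status programmatically. The sketch below operates on the YAML spec shown above; on a live cluster its input would come from `noobaa backingstore status noobaa-default-backing-store`.

```shell
# Sketch: extract the targetBucket field from a backingstore spec.
target_bucket() {
  awk '$1 == "targetBucket:" {print $2}'
}

# Offline example using the spec shown above:
target_bucket <<'EOF'
s3Compatible:
  endpoint: https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:443
  targetBucket: nb.1648032196977.apps.cluster1.vmware.ocp.team
  type: s3-compatible
EOF
```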
Removing the disaster-protection finalizers in mon and mon endpoints
Starting with version 4.12, if you have configured disaster recovery for the clusters, you need to manually remove the ceph.rook.io/disaster-protection finalizer from mon and the mon endpoints:
- Edit the configmap and secrets to remove the ceph.rook.io/disaster-protection finalizer from mon and mon endpoints.
$ oc edit configmap -n openshift-storage rook-ceph-mon-endpoints
$ oc edit secrets -n openshift-storage rook-ceph-mon
Removing finalizer on the `ocs-client-operator-config` ConfigMap
If Data Foundation version 4.16.0 or 4.16.1 is the initial deployment, remove the finalizer on the configmap as follows:
$ oc patch configmap ocs-client-operator-config -n openshift-storage -p '{"metadata":{"finalizers":null}}' --type=merge
Document Information
Modified date:
25 November 2025
UID
ibm17234743