Using ODF CLI command
The Red Hat OpenShift Data Foundation command-line interface (CLI) and its subcommands reduce repetitive tasks and provide a better experience. To download the ODF CLI tool, see the customer portal.
Subcommands of odf get command
odf get recovery-profile
This command displays the recovery profile value set for the Object Storage Daemon (OSD). If the value has not been set with the odf set recovery-profile command, an empty value is displayed by default. When a value is set, that value is displayed.
Example:
odf get recovery-profile
# high_recovery_ops
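If no profile has been set yet, you can set one with the odf set recovery-profile subcommand mentioned above and read it back; a short sketch, where high_recovery_ops is an illustrative value taken from the example output:

```shell
# Set a recovery profile (high_recovery_ops is an illustrative value),
# then read it back with the get subcommand
odf set recovery-profile high_recovery_ops
odf get recovery-profile
```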
odf get health
This command checks the health of the Ceph cluster and common configuration issues. It checks for the following:
- At least three monitor (mon) pods are running on different nodes
- Mon quorum and Ceph health details
- At least three OSD pods are running on different nodes
- The running status of all pods
- Placement group status
- At least one MGR pod is running
Example:
odf get health
Info: Checking if at least three mon pods are running on different nodes
rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal
rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal
rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal
Info: Checking mon quorum and ceph health details
Info: HEALTH_OK
[...]
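To cross-check the pod placement that odf get health reports, the standard oc client can list the mon and OSD pods together with the nodes they run on; a sketch assuming the default openshift-storage namespace and the app labels that Rook applies to these pods:

```shell
# List mon and OSD pods with the nodes they are scheduled on
# (assumes the openshift-storage namespace and Rook's default app labels)
oc get pods -n openshift-storage -l app=rook-ceph-mon -o wide
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide
```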
odf get dr-health
This command fetches the connection status of one cluster from another cluster in mirroring-enabled clusters. It fetches the Ceph block pools that have mirroring enabled. If the information is not found, the command produces output with the relevant logs.
Example:
odf get dr-health
Info: fetching the cephblockpools with mirroring enabled
Info: found "ocs-storagecluster-cephblockpool" cephblockpool with mirroring enabled
Info: running ceph status from peer cluster
Info: cluster:
id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f
health: HEALTH_OK
[...]
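The mirroring state that this command summarizes is also recorded in the status of the CephBlockPool resource itself; as a sketch, using the pool name shown in the example output above and assuming the default openshift-storage namespace:

```shell
# Inspect the mirroring summary recorded in the CephBlockPool status
# (pool name taken from the example output above)
oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage \
  -o jsonpath='{.status.mirroringStatus.summary}'
```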
odf get dr-prereq
This command checks and fetches the status of all the prerequisites to enable disaster recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare the current cluster configuration with the peer cluster configuration. Based on the comparison results, it displays the status of the prerequisites.
Example:
odf get dr-prereq peer-cluster-1
Info: Submariner is installed.
Info: Globalnet is required.
Info: Globalnet is enabled.
odf get mon-endpoints
This command displays the mon endpoints.
Example:
odf get mon-endpoints
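Rook also stores the mon endpoints in the rook-ceph-mon-endpoints ConfigMap, so a rough cross-check with oc is possible; a sketch assuming the default openshift-storage namespace:

```shell
# The mon endpoints are kept under the "data" key of this ConfigMap
oc get configmap rook-ceph-mon-endpoints -n openshift-storage \
  -o jsonpath='{.data.data}'
```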
Subcommands of odf operator command
odf operator rook set
This command sets the provided property value in the Rook Ceph operator configuration.
Example:
odf operator rook set ROOK_LOG_LEVEL DEBUG
configmap/rook-ceph-operator-config patched
The value of ROOK_LOG_LEVEL can be DEBUG, INFO, or WARNING.
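Because the command patches the rook-ceph-operator-config ConfigMap, you can read the value back with oc to verify it; a sketch assuming the default openshift-storage namespace:

```shell
# Read back the log level from the patched operator ConfigMap
oc get configmap rook-ceph-operator-config -n openshift-storage \
  -o jsonpath='{.data.ROOK_LOG_LEVEL}'
```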
odf operator rook restart
This command restarts the Rook Ceph operator.
Example:
odf operator rook restart
deployment.apps/rook-ceph-operator restarted
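To confirm that the operator came back up after the restart, you can watch the deployment roll out with oc; a sketch assuming the default openshift-storage namespace:

```shell
# Wait for the restarted operator deployment to become available again
oc rollout status deployment/rook-ceph-operator -n openshift-storage
```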
odf restore mon-quorum
This command restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum must be restored to a remaining good mon to bring the Ceph cluster up again.
Example:
odf restore mon-quorum c
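To identify which mon is still good before restoring quorum to it, listing the mon deployments is one option; the app label below is the one Rook applies by default, so verify it on your cluster:

```shell
# List mon deployments to pick the healthy mon (for example, mon "c")
oc get deployment -n openshift-storage -l app=rook-ceph-mon
```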
odf restore deleted <crd>
This command restores a deleted Rook Custom Resource (CR) when data remains for the components: Ceph clusters, Ceph file systems, and Ceph block pools. When a Rook CR is deleted but leftover data exists, the Rook operator does not remove the finalizer on the CR, to ensure the data is not lost. As a result, the CR is stuck in the deleting state, cluster health is not ensured, and upgrades are blocked. This command helps repair the CR without cluster downtime.
Example:
odf restore deleted cephclusters
Info: Detecting which resources to restore for crd "cephclusters"
Info: Restoring CR my-cluster
Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no
[...]