You can create a storage class with a single replica to be used by your applications.
This avoids redundant data copies and allows resiliency to be managed at the application
level.
About this task
Warning: Enabling this feature creates a single replica pool without data replication,
increasing the risk of data loss, data corruption, and potential system instability if your
application does not have its own replication. If any OSDs are lost, recovery requires highly
disruptive steps. All applications can lose their data, and must be recreated in case of
a failed OSD.
Procedure
- Enable the single replica feature using the following command:
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
- Verify that the storagecluster is in Ready state:
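For example, assuming the default openshift-storage namespace:
$ oc get storagecluster -n openshift-storage
Example output: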
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   10m   Ready              2024-02-05T13:56:15Z   4.15.0
New cephblockpools are created for each failure domain.
- Verify that the cephblockpools are in Ready state:
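For example, again assuming the default openshift-storage namespace:
$ oc get cephblockpools -n openshift-storage
Example output: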
NAME                                          PHASE
ocs-storagecluster-cephblockpool              Ready
ocs-storagecluster-cephblockpool-us-east-1a   Ready
ocs-storagecluster-cephblockpool-us-east-1b   Ready
ocs-storagecluster-cephblockpool-us-east-1c   Ready
- Verify new storage classes have been created:
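For example, list the storage classes cluster-wide:
$ oc get storageclass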
Example output:
NAME                                        PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                               kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   104m
gp2-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
gp3-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
ocs-storagecluster-ceph-non-resilient-rbd   openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   46m
ocs-storagecluster-ceph-rbd                 openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   52m
ocs-storagecluster-cephfs                   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   52m
openshift-storage.noobaa.io                 openshift-storage.noobaa.io/obc         Delete          Immediate              false                  50m
New OSD pods are created; 3 osd-prepare pods and 3 additional OSD pods.
- Verify that the new OSD pods are in Running state:
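One way to list them, assuming the default openshift-storage namespace:
$ oc get pods -n openshift-storage | grep rook-ceph-osd
Example output: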
rook-ceph-osd-0-6dc76777bc-snhnm                              2/2   Running     0   9m50s
rook-ceph-osd-1-768bdfdc4-h5n7k                               2/2   Running     0   9m48s
rook-ceph-osd-2-69878645c4-bkdlq                              2/2   Running     0   9m37s
rook-ceph-osd-3-64c44d7d76-zfxq9                              2/2   Running     0   5m23s
rook-ceph-osd-4-654445b78f-nsgjb                              2/2   Running     0   5m23s
rook-ceph-osd-5-5775949f57-vz6jp                              2/2   Running     0   5m22s
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf   0/1   Completed   0   10m
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t   0/1   Completed   0   10m
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv   0/1   Completed   0   10m
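To consume the single replica storage class, reference it from a PersistentVolumeClaim. The following is a minimal sketch; the claim name and requested size are illustrative:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-replica-pvc        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-ceph-non-resilient-rbd
  resources:
    requests:
      storage: 10Gi               # illustrative size
Because the backing pool keeps only one replica, any data stored through this claim is lost if its OSD fails.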