New features
This section describes new features introduced in IBM Storage Fusion Data Foundation 4.16.
- Disaster recovery solution
- Weekly cluster-wide encryption key rotation
- Support custom taints
- Support for SELinux mount feature with ReadWriteOncePod access mode
- Support for ReadWriteOncePod access mode
- Faster client IO or recovery IO during OSD backfill
- Support for generic ephemeral storage for pods
- Cross storage class clone
- Overprovision Level Policy Control
Disaster recovery solution
The Fusion Data Foundation Disaster Recovery solution now extends protection to applications that are not deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM), known as discovered applications, with a new user experience for failover and failback operations that are managed through RHACM.
The Fusion Data Foundation Disaster Recovery solution supports applications that are developed or deployed using an imperative model. The cluster resources for these discovered applications are protected and restored at the secondary cluster using Red Hat OpenShift APIs for Data Protection (OADP).
The Fusion Data Foundation Disaster Recovery solution now extends protection to discovered applications that span across multiple namespaces.
The Regional disaster recovery (Regional-DR) solution can now be easily set up for Red Hat OpenShift Virtualization workloads using Fusion Data Foundation.
Disaster recovery with stretch clusters can now also be easily set up for workloads based on Red Hat OpenShift Virtualization technology using Fusion Data Foundation.
When a primary or a secondary cluster of Regional-DR fails, you can either repair the cluster, wait for its recovery, or replace it entirely if it is beyond repair. Fusion Data Foundation provides the ability to replace a failed primary or secondary cluster with a new cluster and to fail over (relocate) to the new cluster.
The disaster recovery dashboard on the Red Hat Advanced Cluster Management (RHACM) console is extended to display monitoring data for Subscription type applications in addition to ApplicationSet type applications.
Data such as the following can be monitored:
- Volume replication delays
- Count of protected Subscription type applications with or without replication issues
- Number of persistent volumes with replication healthy and unhealthy
- Application-wise data, such as the following:
- Recovery Point Objective (RPO)
- Last sync time
- Current DR activity status (Relocating, Failing over, Deployed, Relocated, Failed Over)
- Application-wise persistent volume count with replication healthy and unhealthy
The Regional disaster recovery solutions of Fusion Data Foundation now support neutral site deployments and hub recovery of co-situated managed clusters using Red Hat Advanced Cluster Management. Configuring hub recovery requires a fourth cluster, which acts as the passive hub. The passive hub cluster can be set up in either of the following ways:
- The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2).
- The active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2.
Weekly cluster-wide encryption key rotation
Common security practices require periodic rotation of encryption keys. Fusion Data Foundation automatically rotates the encryption keys that are stored in Kubernetes secrets (non-KMS) on a weekly basis.
Support custom taints
Custom taints can be configured by adding tolerations directly under the placement section of the storage cluster custom resource (CR). This simplifies the process of adding custom taints.
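As a minimal sketch of this configuration, the following StorageCluster fragment adds a toleration under the placement section; the taint key, value, and the `all` placement scope shown here are illustrative assumptions, not values mandated by the product:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  placement:
    all:
      tolerations:
      # Hypothetical custom taint applied to dedicated storage nodes
      - key: nodetype
        operator: Equal
        value: storage
        effect: NoSchedule
```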
Support for SELinux mount feature with ReadWriteOncePod access mode
Fusion Data Foundation now supports the SELinux mount feature with the ReadWriteOncePod access mode. This feature reduces the time taken to change the SELinux labels of the files and folders in a volume, especially when the volume contains many files or is on a remote file system such as CephFS.
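With the SELinux mount feature, a volume can be mounted directly with the pod's SELinux context instead of being recursively relabeled. A sketch of a pod that sets an explicit SELinux level, assuming a PVC named `app-data` that uses the ReadWriteOncePod access mode (the pod name, image, and level are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # explicit level lets the volume be mounted with this context
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data   # assumed PVC with accessModes: [ReadWriteOncePod]
```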
Support for ReadWriteOncePod access mode
Fusion Data Foundation provides the ReadWriteOncePod (RWOP) access mode to ensure that only one pod across the whole cluster can read from or write to the persistent volume claim (PVC).
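A minimal PVC using this access mode might look as follows; the PVC name, size, and storage class name are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-pvc
spec:
  accessModes:
  - ReadWriteOncePod   # only one pod in the cluster may use this volume
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed block storage class
```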
Faster client IO or recovery IO during OSD backfill
Either client IO or recovery IO can be configured to be favored during a maintenance window. Favoring recovery IO over client IO significantly reduces OSD recovery time.
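Under the hood this behavior maps to Ceph's mClock QoS profiles. As a sketch, assuming access to the Ceph CLI (for example, through a toolbox pod), the profile could be switched as follows; the exact procedure for your deployment may differ:

```shell
# Favor recovery IO over client IO during a maintenance window
ceph config set osd osd_mclock_profile high_recovery_ops

# Restore the default balanced behavior afterwards
ceph config set osd osd_mclock_profile balanced
```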
Support for generic ephemeral storage for pods
Fusion Data Foundation provides support for generic ephemeral volumes. This support enables you to specify generic ephemeral volumes in the pod specification and to tie the lifecycle of the PVC to the lifecycle of the pod.
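A sketch of a pod requesting a generic ephemeral volume; the PVC is created from the inline template when the pod starts and is deleted with the pod (the pod name, image, size, and storage class name are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:   # PVC is created with the pod and deleted with it
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: ocs-storagecluster-ceph-rbd   # assumed storage class
          resources:
            requests:
              storage: 5Gi
```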
Cross storage class clone
Fusion Data Foundation provides the ability to move from a storage class with replica 3 to a storage class with replica 2 or replica 1 while cloning. This helps to reduce the storage footprint.
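A cross storage class clone can be sketched as a PVC that names an existing PVC as its data source while requesting a different storage class; the PVC names and the replica-1 class name here are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: replica-1-storageclass   # assumed class with a lower replica count
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc   # existing PVC on the replica-3 class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # must be at least the size of the source PVC
```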
Overprovision Level Policy Control
The overprovision control mechanism enables you to define a quota on the amount of storage that persistent volume claims (PVCs) can consume from a storage cluster, based on the specific application namespace.
When the overprovision control mechanism is enabled, overprovisioning of the PVCs consumed from the storage cluster is prevented.
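As a sketch of such a quota, the following StorageCluster fragment caps PVC consumption for namespaces matching a label selector; the capacity, quota name, storage class name, and label are illustrative assumptions:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  overprovisionControl:
  - capacity: 100Gi                                 # total PVC capacity allowed
    storageClassName: ocs-storagecluster-ceph-rbd   # assumed storage class
    quotaName: app-quota                            # hypothetical quota name
    selector:
      labels:
        matchLabels:
          storagequota: quota1   # namespaces carrying this label are limited
```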