Limitations for storage data caching

Ensure that you understand the limitations and additional configuration requirements before you use the caching feature. You must also consider the application restrictions for the target devices that you want to cache.

Consider the following limitations for caching the storage data:

  • The caching software is configured as a read-only cache, which means that only read requests are processed from the flash solid-state drive (SSD). All write requests are processed by the original storage device.
  • Data that is written to the storage device is not populated in the cache automatically. If the write operation is performed on a block that is in the cache, the existing data in the cache is marked as invalid. The same block reappears in the cache based on the frequency and recency of the access to the block.
  • The caching software loads data into the cache based on local read patterns and invalidates cache entries locally. The target devices must not be shared by more than one LPAR concurrently, and they cannot be part of any clustered storage, such as Oracle Real Application Clusters (RAC), DB2® pureScale®, or General Parallel File System (GPFS). Target devices that are part of a high-availability cluster can be cached only if a single host reads or writes data on the target device at a time and caching is enabled only on the active node.
  • The cache disk can be provisioned either to an AIX® LPAR or to a Virtual I/O Server (VIOS) LPAR. Cache devices cannot be shared.
  • The caching software must open the target devices to intercept any I/O requests to the target devices. If a workload needs to open the target device exclusively after caching is started, the exclusive open operation fails. In these instances, caching must be stopped and restarted after the workload starts. An example for exclusive open operation is setting the physical volume identifier (PVID) for target disks.
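    As a sketch of that stop-and-restart sequence, assuming a target disk named hdisk2 and that caching is managed with the AIX cache_mgt command (the device name and exact subcommand syntax are assumptions; verify them against your system's documentation):

    ```shell
    # Stop caching so that the exclusive open (setting the PVID) can succeed.
    cache_mgt cache stop -t hdisk2

    # Perform the exclusive open operation, in this case setting the PVID.
    chdev -l hdisk2 -a pv=yes

    # Restart caching on the target disk after the workload starts.
    cache_mgt cache start -t hdisk2
    ```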
  • If the disks are used as target devices, the reserve_policy attribute of the disk must not be set to single_path.
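    For example, you can verify and, if necessary, change the attribute with the lsattr and chdev commands. The disk name hdisk3 is a placeholder, and no_reserve is one common alternative policy; the disk must not be in use when you change the attribute:

    ```shell
    # Display the current reserve policy of the disk.
    lsattr -E -l hdisk3 -a reserve_policy

    # If the policy is single_path, change it before using the disk as a target.
    chdev -l hdisk3 -a reserve_policy=no_reserve
    ```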
  • When the caching operation is started for a target device, the cache engine delays the promotion of data into the cache. This delay ensures that all I/O operations that were issued to the target device before the caching operation started are completed before data promotion begins. The delay is calculated internally from the number of available paths and the rw_timeout attribute (if any) of the target disk. To override the internally calculated delay, set the DEFAULT_IO_DRAIN_TIMEOUT_PD environment variable in the /etc/environment file to a custom timeout value, in seconds.
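    For example, to set a 120-second drain timeout (the value is illustrative only), add a line such as the following to the /etc/environment file:

    ```shell
    DEFAULT_IO_DRAIN_TIMEOUT_PD=120
    ```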
  • NVMe devices cannot be used as targets.
  • Additional memory is required on each AIX logical partition (LPAR) because the caching software manages metadata on each cache block. Reducing the number of storage protect keys might also be required to ensure that the caching software can allocate a sufficient contiguous block of memory.
    Table 1. Minimum required memory and maximum allowed storage protect keys to start a flash cache partition of the indicated size

    Cache partition size   Storage protect keys   Required memory
    <= 25 TB               <= 31                  > 2 GB
    26 - 50 TB             <= 8                   > 4 GB
    51 - 100 TB            <= 4                   > 8 GB
    101 - 200 TB           <= 2                   > 16 GB
    201 - 400 TB           <= 1                   > 32 GB