Question & Answer
Question
How do I set up Flash Cache?
Answer
Flash Cache is server-side caching of data. It allows an LPAR to use SSDs or flash storage as a read-only cache to improve read performance for spinning disks. The cache can be significantly smaller than the data it is caching and can be direct-attached or SAN-based.
Once flash cache is set up, AIX decides what data is hot and stores a copy in the flash cache.
The caching function can be enabled and disabled dynamically and is transparent to the workloads taking advantage of it. When caching is enabled, read requests for the target devices are sent to the caching software, which checks whether the block is in the cache. If it is, then the disk block is provided from the cache. All other reads and all writes will be sent through to the original disk.
Flash Cache requires AIX version 7.2, which in turn requires POWER7 or later servers. The LPAR must have a minimum of 4 GB of memory, since the cache uses memory to keep track of read access and other metadata.
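A quick way to sanity-check these prerequisites is with standard AIX commands (oslevel reports the OS level; prtconf reports the processor type and memory size):
# oslevel -s
# prtconf | grep -E "Processor Type|Memory Size"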
■ There are several terms that we need to understand before using flash cache:
- Cache device: The SSD or flash device that will be used for caching
- Cache pool: The group of cache devices set up to be used for caching
- Cache partition: A logical cache device that exists in the cache pool
- Target device: The storage device that is being cached
■ In addition, there are two components we need to be aware of:
- Cache Management: There’s a new command that is used to create, assign, destroy and report on the flash cache (/usr/sbin/cache_mgt).
- Cache engine: This is the algorithm that determines what will be cached and that retrieves the data from the cache.
■ Setting up flash cache:
There are several ways to implement Flash Cache: the cache devices can be direct-attached to an AIX LPAR or virtualized through a VIO server. Using a VIO server provides support for Live Partition Mobility (LPM), but it is always possible to turn off caching prior to an LPM move, especially if the target hardware does not have flash cache devices.
■ The first step is to ensure the correct LPPs are installed. These consist of bos.pfcdd and cache.mgt and can be found on the AIX 7.2 base install DVD (an installp example is shown after the output below if they are missing). After installation you should see something like:
# lslpp -l | grep Cache
  bos.pfcdd.rte              7.2.1.0  COMMITTED  Power Flash Cache
  cache.mgt.rte              7.2.1.0  COMMITTED  AIX SSD Cache Device
  bos.pfcdd.rte              7.2.1.0  COMMITTED  Power Flash Cache
  cache.mgt.rte              7.2.1.0  COMMITTED  AIX SSD Cache Device
(Each fileset appears twice because lslpp reports both the usr and root parts.)
# lslpp -l | grep lash
  bos.pfcdd.rte              7.2.1.0  COMMITTED  Power Flash Cache
                             7.2.1.0  COMMITTED  Common CAPI Flash Adapter
                             7.2.0.0  COMMITTED  CAPI Flash Adapter Diagnostics
                             7.2.0.0  COMMITTED  CAPI Flash Adapter Device
  devices.common.IBM.cflash.rte
                             7.2.1.0  COMMITTED  Common CAPI Flash Device
  bos.pfcdd.rte              7.2.1.0  COMMITTED  Power Flash Cache
                             7.2.1.0  COMMITTED  Common CAPI Flash Adapter
  devices.common.IBM.cflash.rte
                             7.2.0.0  COMMITTED  Common CAPI Flash Device
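- If the filesets are missing, they can be installed from the AIX 7.2 base install media with installp. The device name /dev/cd0 below is only an example; substitute the actual location of your install media:
# installp -agXYd /dev/cd0 bos.pfcdd cache.mgt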
- The cache_mgt command will be present even if bos.pfcdd was not installed, but no caching engine will be active.
- Setting up the cache pool of SSDs.
- We have three disks, hdisk1, hdisk2 and hdisk3 (they are not in any VG):
# cache_mgt pool create -d hdisk1,hdisk2,hdisk3 -p mashpool0
This creates a volume group called mashpool0 as well as a pool called mashpool0.
Note: Do not create the volume group directly; let the cache_mgt command create it.
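- To double-check that the cache devices were pulled into the new volume group, lspv can be used (the disk and pool names here match the example above):
# lspv | grep mashpool0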
- The next step is to create the cache partition:
# cache_mgt partition create -p mashpool0 -s 3072M -P mashpart1
# lsvg -l mashpool0
mashpool0:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
mashpart1 jfs 60 60 3 closed/syncd N/A
- The logical volume it creates shows a type of jfs, not jfs2. This does not matter, as the type is just a property string and is not used to enforce how the disks are actually accessed.
# lsvg -p mashpool0
mashpool0:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 20 0 00..00..00..00..00
hdisk2 active 20 0 00..00..00..00..00
hdisk3 active 20 0 00..00..00..00..00
Above we see how the PPs are spread among the SSDs that were assigned.
- Below is the list of disks that could be candidates for the cache pool; mashpool0 is the pool we are using.
# cache_mgt device list -l
hdisk0,rootvg
hdisk1,mashpool0
hdisk2,mashpool0
hdisk3,mashpool0
- We can also check the pool allocations as follows:
# cache_mgt pool list -l
mashpool0,hdisk1,hdisk2,hdisk3
■ At this point we can assign hdisks to be targets for caching:
Assign hdisk4, hdisk5 and hdisk6 as the target devices (the disks whose reads will be cached):
# cache_mgt partition assign -t hdisk4 -P mashpart1
# cache_mgt partition assign -t hdisk5 -P mashpart1
# cache_mgt partition assign -t hdisk6 -P mashpart1
■ Checking the partition and assignments:
# cache_mgt partition list -l
mashpart1,3072M,mashpool0,hdisk4,hdisk5,hdisk6
# cache_mgt cache list
hdisk4,mashpart1,inactive
hdisk5,mashpart1,inactive
hdisk6,mashpart1,inactive
INACTIVE means cache is not started!
■ Checking the engine:
# cache_mgt engine list -l
/usr/ccs/lib/libcehandler.a,max_pool=1,max_partition=1,tgt_per_cache=unlimited,cache_per_tgt=1
■ Caching can be started in two ways:
- Either to start it for all assigned disks:
# cache_mgt cache start -t all
- Or to start it disk by disk:
# cache_mgt cache start -t hdisk4
# cache_mgt cache start -t hdisk5
# cache_mgt cache start -t hdisk6
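- After starting, the cache list subcommand from earlier can be run again; the targets should now be reported as active rather than inactive:
# cache_mgt cache list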
■ Similarly, caching can be stopped in the same two ways:
- Either to stop it for all assigned disks:
# cache_mgt cache stop -t all
- Or to stop it disk by disk:
# cache_mgt cache stop -t hdisk4
# cache_mgt cache stop -t hdisk5
# cache_mgt cache stop -t hdisk6
■ Removing a target disk requires stopping caching on that target prior to unassigning it:
# cache_mgt cache stop -t hdisk4
# cache_mgt partition unassign -t hdisk4
■ We can get caching statistics using the cache_mgt monitor command, which is started by default. The commands below start it, stop it, and retrieve the collected statistics:
# cache_mgt monitor start
# cache_mgt monitor stop
# cache_mgt monitor get -h -s
Thank you very much for taking the time to read through this document.
I hope it has been helpful. If you feel you have found any inconsistencies,
please don't hesitate to email me at ahdmashr@eg.ibm.com
Ahmed Mashhour
Document Information
Modified date:
17 June 2018
UID
isg3T1026004