Abstract
By Varun Kumar, IBM Systems Lab Services
FlashCopy is a point-in-time copy of data that enables IBM Storage clients to support testing, development, user acceptance testing (UAT), management information system (MIS) and backup requirements. Different vendors provide different implementation approaches to create point-in-time copy. IBM Spectrum Virtualize offers virtualization of different tiers of storage and leverages a copy-on-write approach so that the FlashCopy target can be designated on a slower tier or a tier of choice, wherein a point-in-time copy of a volume is created using the pre-designated space for the FlashCopy.
Content
FlashCopy is a point-in-time copy of data that enables IBM Storage clients to support testing, development, user acceptance testing (UAT), management information system (MIS) and backup requirements. Different vendors take different implementation approaches to creating point-in-time copies. IBM Spectrum Virtualize offers virtualization of different tiers of storage and leverages a copy-on-write approach, so the FlashCopy target can be placed on a slower tier or a tier of choice, and a point-in-time copy of a volume is created using the pre-designated space for the FlashCopy. If the mapping type is set to clone, all grains of the source volume are copied to the FlashCopy target in the background. If a write occurs on the source before the corresponding point-in-time data has been copied to the target, the original source data must be copied to the target before the write to the source is permitted, hence the name copy-on-write.

Because one size doesn’t fit all, IBM Spectrum Virtualize offers many flexible parameters that can be leveraged to optimize FlashCopy based on your specific environment and workload requirements. In the remainder of the post, I’m going to discuss some key parameters and other factors that play a significant role when you’re planning for an implementation of FlashCopy.
Key parameters for planning FlashCopy on Spectrum Virtualize
Grain Size
When data is copied between a source and a target, it’s copied in chunks known as “grains.” Grain size is set when the FlashCopy mapping is created and can’t be changed later. Supported values are 256KB and 64KB. The 256KB grain size is the default and suits most situations; however, in some cases you should evaluate which option is best for you. In an environment such as online transaction processing (OLTP), where heavy write activity is expected during FlashCopy, a 256KB grain size results in a synchronous copy-on-write to the FlashCopy target volume unless that grain has already been copied to the target. In such cases, a 64KB grain size might be the better option: OLTP block sizes are generally small, and the smaller grain avoids copying a whole 256KB chunk when only a few KB have changed on the source.
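To see why grain size matters for small-block workloads, here is a minimal, illustrative Python sketch (not IBM code; the function name is hypothetical) of worst-case copy-on-write amplification:

```python
KB = 1024

def cow_bytes_copied(num_writes, grain_size_kb):
    """Worst-case copy-on-write traffic: each small host write lands in a
    distinct grain that has not yet been copied, so the whole grain must be
    copied to the target before the write to the source is permitted."""
    assert grain_size_kb in (64, 256)  # the two supported grain sizes
    return num_writes * grain_size_kb * KB

# 1,000 random 8KB OLTP-style writes change only ~8MB on the source, but:
print(cow_bytes_copied(1000, 256) // KB // KB)  # 250 (MB copied with 256KB grains)
print(cow_bytes_copied(1000, 64) // KB // KB)   # 62  (MB copied with 64KB grains)
```

The gap narrows as grains fill, since later writes to an already-copied grain trigger no further copy-on-write.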
Another case is thin-provisioned FlashCopy targets, where it’s recommended to match the thin volume’s grain size to the grain size of the FlashCopy mapping. However, if you have a large number of FlashCopy mappings in your environment, you might need to stay with the 256KB grain size, because 64KB grains require four times as much bitmap space in memory.
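The bitmap cost is easy to quantify: a mapping needs roughly one bit of bitmap memory per source grain, so quartering the grain size quadruples the bitmap. A hypothetical sketch (illustrative only, not the product’s exact accounting):

```python
KB = 1024
GB = 1024 ** 3

def bitmap_bytes(volume_bytes, grain_size_kb):
    """Approximate bitmap memory for one FlashCopy mapping: one bit per grain,
    rounded up to whole bytes (the system may reserve more internally)."""
    grains = -(-volume_bytes // (grain_size_kb * KB))  # ceiling division
    return -(-grains // 8)

vol = 2048 * GB  # a 2TB source volume
print(bitmap_bytes(vol, 256) // KB)  # 1024 (about 1MB of bitmap)
print(bitmap_bytes(vol, 64) // KB)   # 4096 (4x as much with 64KB grains)
```

Multiplied across hundreds of mappings, that 4x difference is what can push you back to the 256KB default.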
Copy rate
Copy rate determines the data rate (in grains per second, or MB/s) at which background copy from the source to the target is attempted. Supported values range from 128KB/s to 2GB/s.
The default background copy rate is 2MB/s. This value minimizes the impact on host I/O response time when a copy-on-write occurs, but it may not complete background copies as quickly as desired. Increasing the background copy rate completes the background copy sooner; however, it can significantly affect response times for host I/O, because it reduces the system resources available for foreground I/Os.
If there’s no SLA or dependency tied to background copy completion, it’s advisable to keep the copy rate at the default. If you do need to increase the background copy rate, carefully study the system load and system resource utilization first.
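As a planning aid: the copyrate attribute on a Spectrum Virtualize mapping is an integer from 1 to 150, where each band of 10 doubles the rate, from 128KB/s up to 2GB/s (50, the default, corresponds to 2MB/s). A rough sketch assuming that documented mapping:

```python
import math

def background_rate_kbps(copyrate):
    """Map the mapping's copyrate attribute (1-150) to KB/s: each band of 10
    doubles the rate, from 128KB/s for 1-10 up to 2GB/s for 141-150."""
    assert 1 <= copyrate <= 150
    band = math.ceil(copyrate / 10)
    return 128 * 2 ** (band - 1)

def hours_to_complete(volume_gb, copyrate):
    """Rough background-copy completion time, ignoring foreground contention."""
    kb = volume_gb * 1024 * 1024
    return kb / background_rate_kbps(copyrate) / 3600

print(background_rate_kbps(50))               # 2048 KB/s, i.e. the 2MB/s default
print(round(hours_to_complete(1024, 50), 1))  # 145.6 hours for a 1TB volume
print(round(hours_to_complete(1024, 100), 1)) # 4.6 hours at 64MB/s instead
```

Numbers like these make SLA conversations concrete: they show how long a copy takes at the default rate before you decide whether raising it is worth the foreground I/O cost.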
Target sizing
A slow target can impact the performance of source volumes. Yes, you read that right. If a target is too slow to handle the copy-on-write rate, data is held in cache until the source volume’s grain has been copied to the target. This eventually overloads the cache and leads to cache misses, affecting overall host I/O response time. Therefore, irrespective of any SLAs around FlashCopy completion time, it’s important to size the target for performance so it can service copy-on-write I/Os fast enough to minimize the impact on the source. If there are SLAs around background copy, evaluate the workload during the FlashCopy window and consider grain size and copy rate together as inputs for target sizing.
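One way to reason about target sizing: during the copy window, the target must absorb copy-on-write traffic (driven by source write IOPS and grain size) on top of the configured background copy rate. A hypothetical estimator (the function and its parameters are illustrative, not an IBM sizing formula):

```python
def target_write_mbps(source_write_iops, grain_size_kb, background_copy_mbps,
                      cow_fraction=1.0):
    """Sustained write bandwidth the target must absorb (illustrative only).
    cow_fraction is the share of source writes hitting not-yet-copied grains;
    it is close to 1.0 early in the mapping's life and falls as grains copy."""
    cow_mbps = source_write_iops * cow_fraction * grain_size_kb / 1024
    return cow_mbps + background_copy_mbps

# 2,000 write IOPS on the source, 64KB grains, default 2MB/s background copy:
print(target_write_mbps(2000, 64, 2))  # 127.0 MB/s early in the copy window
```

A target tier that cannot sustain that write bandwidth will throttle copy-on-write and, through cache pressure, the source itself.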
Target placement
Data flows between the source and the target volume’s preferred nodes over node-to-node connectivity. Therefore, it’s advised to keep the source and target volumes on the same preferred node to avoid traffic on the inter-node links. Also, always place a target volume in a different pool from its source to avoid any potential issues on the source due to I/Os on the target.
Conclusion
My team has observed significant benefits and improvements when FlashCopy is planned based on the parameters outlined above. Make FlashCopy an important factor when discussing your requirements with IBM so that you get the most out of it.
If you need any assistance in planning a FlashCopy implementation, reach out to IBM Systems Lab Services.
Document Information
Modified date:
10 June 2021
UID
ibm11125465