Q&A Session for FlashSystem Family Technical Update and Hot Topics
Session Number: 926159204
Date: 2020-04-08
Starting time: 10:30
________________________________________________________________
Q: What is the performance difference between the NVMe-based FCM Gen1 and Gen2, and the other parameters?
A: The next-generation FCMs improve many performance parameters, and what you see is somewhat dependent on workload, but there is a chart that shows the primary change, which is latency.
A: For SVC, there is no NVMe consideration, as it only supports virtualized storage. For the FlashSystem controllers, you can see that latency comparison on the charts now.
________________________________________________________________
Q: With only one internal battery, does battery replacement require a node outage on the SV2?
A: The node would need to go down for a battery replacement.
A: This is a node-offline operation on the SA2 and SV2; however, in a fault case, the node would be in service anyway.
________________________________________________________________
Q: Are these FlashSystems for mainframe as well? How do they compare with other mainframe high-end storage systems?
A: No, they are for Open Systems or zLinux only (no CKD support).
________________________________________________________________
Q: On the new SVC node model, is the bitmap memory allowed per I/O group still the same, or has this been changed to allow more to be used for FlashCopy and mirroring?
A: The bitmap memory is a consequence of the Spectrum Virtualize software, which has not increased these values in this release. This limit is shared with the FlashSystem 7200 and 9200 as well as the 9110 and 9150.
________________________________________________________________
Q: Can you share the PCIe switch generation and count in the SV2 and SV1? i.e., PCIe 3 or 4, how many PCIe switches/buses, and, if possible, where the various adapters connect through the PCIe infrastructure. Thanks!
A: PCIe Gen3 switch. Two adapters flow through one controller CPU and the other flows through the other controller CPU; however, we have 16 lanes of PCIe Gen3 per slot, so we get full bandwidth on a 4-port 32Gb FC card.
A: The I/O adapters do not go through a PCIe switch on the SV2. SV1 has x8 PCIe Gen3 buses; SV2 has x16 PCIe Gen3 buses. On SV2/SA2 we also use some of the PCIe lanes coming out of the CPU for the onboard compression assist, which is 2-5x the bandwidth of SV1.
________________________________________________________________
Q: The warranty of FlashSystem 7200 was announced as 3 years; you stated 1 year. Which is correct?
A: It is a 3-year warranty.
________________________________________________________________
Q: What does SLC stand for?
A: Single-level cell.
________________________________________________________________
Q: How do the Gen1 4.8 and 9.6 TiB FCMs both have an effective capacity of 22 TiB?
A: This is limited by the ability of the FCMs to store the metadata, so there is a cap on the internal tables that physically fit. The more flash chips there are on the device, the less room there is for metadata storage.
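As a rough illustration of that cap (assumed numbers, not from the session): the usable effective capacity is the smaller of what the data's compressibility could yield and what the on-module metadata tables can address, so both drive sizes top out at the same figure. A minimal Python sketch:

    # Illustrative only: effective capacity is bounded by the FCM's internal
    # metadata tables, no matter how compressible the data is.
    def effective_capacity_tib(physical_tib, compression_ratio, metadata_cap_tib=22.0):
        return min(physical_tib * compression_ratio, metadata_cap_tib)

    # Assuming highly compressible (5:1) data, both module sizes hit the cap.
    for size_tib in (4.8, 9.6):
        print(size_tib, "->", effective_capacity_tib(size_tib, 5.0), "TiB effective")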
________________________________________________________________
Q: Do these SCMs wear out?
A: Eventually, yes, but SCM has much higher endurance than NAND.
________________________________________________________________
Q: Is NVMe over Fabrics supported over any transport other than FC at this point? TCP/RoCE/IB?
A: 8.3.1 has support for NVMe/Fibre Channel and iSER (iSCSI/RDMA), which, when combined with scsi-mq, will give a good and easy path to NVMe/RDMA.
________________________________________________________________
Q: Does the SCM price need to be less than DRAM for its selling point?
A: SCM is less expensive than DRAM per GB, but much more expensive than NAND.
________________________________________________________________
Q: In previous slides, I observed that the SV2 SVC supports only DRP. What would be the suggested upgrade approach from DH-8 SVC RtC compressed volumes to SV2 SVC DRP?
A: You are going to need to upgrade the software to 8.3.1 and then either uncompress your volumes or convert them to DRP BEFORE you swap out your nodes. You can convert on an I/O group by I/O group basis if you don't have the storage to do everything at once.
________________________________________________________________
Q: Is there any performance impact after adding disk(s)?
A: The goal is to run in the background without impacting host I/O performance.
________________________________________________________________
Q: But it does help with rebuild time, right?
A: DRAID (Distributed RAID) helps with rebuild time, and the new changes help with rebuild time as well. The target for an FCM module is 2.5 TB/hr.
A: Expansion does not in and of itself improve rebuild times; however, a larger DRAID has more bandwidth, which may improve rebuilds.
________________________________________________________________
Q: Is it possible to remove drives from a DRAID?
A: No. Expand only.
________________________________________________________________
Q: Does DRAID re-create the RAID parity when adding or removing disks?
Q: So, if I add a disk, how does parity cover that newly added disk?
Q: I am trying to understand the underlying mechanics of RAID protection when adding one (or more) disks to an existing RAID set.
A: Distributed RAID takes a stripe (a set of data segments plus parity segments) and stripes them across a subset of drives to provide the RAID protection. DRAID expansion adds capacity to the array by redistributing the data and parity strips across the new capacity.
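A toy illustration of that redistribution (assumed round-robin placement purely for illustration, not IBM's actual DRAID layout algorithm): after drives are added, the same data and parity strips are spread across more members, so every drive, old and new, carries a share of each stripe.

    # Toy model: spread each stripe's data+parity strips across the member drives.
    def distribute_strips(num_stripes, strips_per_stripe, num_drives):
        layout = {d: [] for d in range(num_drives)}
        for stripe in range(num_stripes):
            for strip in range(strips_per_stripe):
                drive = (stripe * strips_per_stripe + strip) % num_drives
                layout[drive].append((stripe, strip))
        return layout

    before = distribute_strips(num_stripes=100, strips_per_stripe=10, num_drives=16)
    after = distribute_strips(num_stripes=100, strips_per_stripe=10, num_drives=20)  # 4 drives added
    print(max(len(s) for s in before.values()), "strips on the busiest drive before expansion")
    print(max(len(s) for s in after.values()), "strips on the busiest drive after expansion")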
________________________________________________________________
Q: Is there any chance of changing FlashCopy to ROW (redirect-on-write) instead of COW (copy-on-write)?
A: We cannot discuss roadmap/future directions or NDA topics on this call. Please contact your IBM seller or Business Partner to get an NDA discussion set up.
________________________________________________________________
Q: Can DRAID expansion be used on Storwize V7000 Gen3/V5030E with V8.3.1, or only on the new FlashSystems running V8.3.1?
A: Any Spectrum Virtualize system running 8.3.1.
________________________________________________________________
Q: Is v8.3.1 a good choice for a V9000 with 256GB memory? In this case, a large 52TiB DRP pool using compression for VMware datastores. I assume the answer is yes?
A: Yes. There are some very good performance improvements for DRP in 8.3.1, and the memory footprint really didn't go up. So if you are already using DRP, 8.3.1 is an excellent choice.
________________________________________________________________
Q: Do the cache changes that allow a single object to use all CPU cores also apply to traditional pool vdisks?
A: The changes apply to all pools.
________________________________________________________________
Q: What is the extent size for Easy Tier?
A: Easy Tier uses the extent size of the pool, so whatever the pool was configured with on creation. The default in the GUI is currently 1GB for traditional pools and 4GB for DRP.
________________________________________________________________
Q: The Easy Tier statistics are a nice improvement over the Excel spreadsheet. Is this only available from 8.3.x?
A: In Spectrum Virtualize 8.3.1+.
________________________________________________________________
Q: Do the Easy Tier reports include analysis at the volume level?
A: They do, but I'm pretty sure the GUI does not include volume-level detail. You could download the reports and get to that information, however.
________________________________________________________________
Q: What is the RPO at the tertiary (DR) site?
Q: What is the RPO for the star configuration?
A: The minimum cycle period for far-site replication is 5 minutes, and the RPO is 10 minutes.
A: Your actual RPO will depend on your link bandwidth and host data rate.
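As a back-of-the-envelope illustration of that dependency (assumed model and numbers, not from the session): with cycling replication, a worst-case write waits up to one cycle period to be captured and then needs time to cross the link, so the RPO stays near twice the minimum cycle only while the link can drain each cycle's changes.

    # Rough worst-case RPO estimate for cycling (asynchronous) far-site replication.
    def estimated_rpo_minutes(change_rate_mb_s, link_mb_s, cycle_min=5.0):
        changed_mb = change_rate_mb_s * cycle_min * 60     # data written during one cycle
        transfer_min = changed_mb / link_mb_s / 60         # time to ship it to the far site
        if transfer_min > cycle_min:
            return None                                    # link can't keep up; RPO keeps growing
        return cycle_min + transfer_min                    # <= 10 min with 5-minute cycles

    # Example: 50 MB/s of changed data over a link that sustains 100 MB/s.
    print(estimated_rpo_minutes(50, 100), "minutes worst-case RPO (estimate)")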
________________________________________________________________
Q: Can a DRP pool be configured across two pools residing on two scaled-out FlashSystem 9200Rs?
A: A DRP is a single pool; we don't have pools inside of pools. You can create it using multiple arrays on two I/O groups, but that is not following best practices for availability and performance.
A: Best practice is to create one pool per control enclosure, with one DRAID per drive type.
________________________________________________________________
Q: Can 3-site replication mix FC and IP replication? Is it available for the whole family?
A: No. At this time it is FC-based replication only.
________________________________________________________________
Q: Can A and B both take host load and Metro Mirror bidirectionally?
A: It is not an active-active solution, but you can have different consistency groups going in different directions.
________________________________________________________________
Q: Is 3-site replication available through the use of SVC only, or is it also available on FlashSystem?
A: All of Spectrum Virtualize (SVC, FlashSystem, and Storwize running 8.3.1).
________________________________________________________________
Q: With 3-site replication, is it not possible to have all 3 replication links using async? By the way, very happy to see this differentiating feature.
A: This implementation requires 2 sites with synchronous replication and 1 with asynchronous (this was the specific use case that was targeted).
A: Thank you for the feedback; I will make sure it gets to our designers, although if you could put this on the RFE site as a request, that would be very helpful for visibility and for the rationale/use case description.
________________________________________________________________
Q: Does 3-site support iSCSI?
A: The replication links must be Fibre Channel. Hosts or virtualized storage can be attached via iSCSI.
________________________________________________________________
Q: In 3-site replication, where does the quorum disk come in? Is it still a requirement?
A: Since this is not a HyperSwap environment, these are 3 separate systems, so an IP quorum is not required. Each cluster has its own quorum devices.
________________________________________________________________
Q: With SCSI LUN ID enforcement, when doing an NDVM will the SCSI IDs stay the same, as opposed to the old firmware?
A: Pre-8.3.1, nothing enforces the same SCSI ID and mismatches can be common. The 8.3.1 feature enforces matching SCSI IDs unless overridden.
________________________________________________________________
Q: Are the new disk expansion enclosures NVMe or still SAS?
A: SAS.
________________________________________________________________
Q: Do we have the option in 8.3.1 to pin HyperSwap volumes to an MDisk in a pool?
A: It depends on the type of pool. In a standard pool, you could in theory (though it is not easy to do); in DRP it is less easy to do. In many newer systems, people only have one MDisk in the pool anyway. But nothing was added in 8.3.1 for that.
A: It would make more sense not to tier if that is the requirement. There are a large number of ways to solve this kind of problem; if the solution is getting complicated or hairy, chances are there is a better path to the end goal.
A: In particular, focus on the problem being solved, rather than assuming a solution. For example, do you need to set up volume throttling to prevent workloads from impacting your high-performance volumes?
________________________________________________________________
Q: With regard to NPIV: what if a host ONLY uses its physical WWPN?
A: That doesn't matter; you use the physical WWPN on the host and map it to the virtual port on the Spectrum Virtualize system.
________________________________________________________________
Q: With "IP Quorum Preferred site and Winner site" in SVC 8.3.0, if there is a SAN fabric failure, will the nodes continue to lease expire?
A: The preferred site/winner site means that in the event of a split brain (two halves of the cluster), one of the halves will be set to win, so the other nodes will be ejected and lose the quorum race until the connection is restored.
________________________________________________________________
Q: Is there some info on Spectrum Virtualize for OCP on POWER?
A: Spectrum Virtualize does not support OCP on Power.
________________________________________________________________
Q: With Storage Insights, is it possible to monitor IOPS and latency from customers' environments?
A: With Storage Insights Pro you can get a complete performance analysis.
________________________________________________________________
Q: What is the difference in processors between an FS5010 and an FS5030? Not the number of processors, but their characteristics.
Q: Why was there an issue in the FS5010 with the processors and not in the FS5030? Was it a microcode bug?
A: The FlashSystem 5010 has a 2-core Broadwell-DE processor; the FS5030 has a 6-core Broadwell-DE processor. These machines don't run microcode; they run the Spectrum Virtualize software stack (with an identical binary executable).
A: So without knowing anything more about the specific issue, I cannot comment on why or whether it would impact the 5010 and not the 5030.
________________________________________________________________
Q: What about the flash on expanding distributed RAID arrays (https://www.ibm.com/support/pages/node/6125061)? Do you think we have to wait to use this feature?
A: If you are using FCMs, this flash applies to you and you should wait. Otherwise, go ahead.
________________________________________________________________
Q: Do you recommend using both dedup and compression for database volumes?
Q: With database copies used for non-production support and development, dedup could be important.
A: You should model the data set to determine compressibility and deduplication ratios if this is a concern. Tools are available at https://www.ibm.com/support/knowledgecenter/STSLR9_8.2.1/com.ibm.fs9100_821.doc/dret_overview.html
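Not from the session, but as a rough way to reason about the combined effect once you have measured ratios with a tool such as the one above (the sample ratios here are assumptions): deduplication removes duplicate blocks first, then compression is applied to the remaining unique data.

    # Rough combined data-reduction estimate (assumed model and sample ratios).
    def physical_tb_needed(logical_tb, dedup_ratio, compression_ratio):
        unique_tb = logical_tb / dedup_ratio          # after removing duplicate blocks
        return unique_tb / compression_ratio          # after compressing the unique data

    # Example: 100 TB of database copies, assuming 3:1 dedup across the copies
    # and 2:1 compression on the unique data -> roughly 16.7 TB stored.
    print(physical_tb_needed(100, dedup_ratio=3.0, compression_ratio=2.0))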
________________________________________________________________
Q: How do we know that the 9200 is an active-active controller architecture, regardless of which multipathing mechanism the hosts use (persistent, round-robin, or active-active), and that traffic flows through all controllers?
A: The 9200 is an active/active architecture. You can send reads and writes to either controller. The "preferred" controller is for improved read cache efficiency.
A: The 9200 will accept reads and writes from any mapped host on both nodes of the I/O caching pair and can post reads and writes from both canisters to the drives simultaneously. I'm not sure I understand your question beyond that.
________________________________________________________________
Q: What is the recommended cache for a FlashSystem 5030 if we are using DRP?
Q: Or in a V5010E or 5030E?
A: Max. Please note that the 5010E cannot compress or deduplicate, but it can use DRP for thin provisioning and unmap.
________________________________________________________________