News
Abstract
By Vijay Dharmaraj, Technical Solution Architect, IBM Systems Lab Services
IBM Spectrum Virtualize has come a long way in offering data virtualization capabilities. One more addition to these capabilities is the N_Port ID Virtualization (NPIV) feature, available from version V7.7.0 onward on all IBM Spectrum Virtualize products, such as IBM SAN Volume Controller, IBM Storwize V7000 and IBM Storwize V5000. The NPIV feature adds another feather to the cap of Spectrum Virtualize and continues to drive its leadership in the storage virtualization domain. In this blog post, I cover the details of the NPIV feature, the steps to migrate to NPIV and its benefits, based on real-world experience deploying the feature for a banking industry client.
Content
Why NPIV in IBM Spectrum Virtualize?
IBM Spectrum Virtualize provides a highly available, redundant configuration in all its offerings (SAN Volume Controller, Storwize V7000, Storwize V5000 and so forth). This capability allows the partner node or controller to take over the responsibilities of a failed node or controller so that operations continue seamlessly without any external intervention.
Without NPIV, a node or controller failure means the loss of the paths served from that particular node or controller, and hosts must fail over to their remaining paths. With NPIV, the worldwide port names (WWPNs) presented by the failing node or controller migrate seamlessly to the partner node, resulting in an even faster transition that is transparent to host multipathing and adds no overhead to host performance.
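As a hedged illustration of this behavior (not taken from the client deployment): while a node is offline, the "lstargetportfc" view shown later in this post should list that node's virtualized ports with a "current_node_id" that differs from the "owning_node_id", because the partner node is temporarily hosting those WWPNs.
superuser>lstargetportfc -delim :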
More details on the NPIV capability are available in the IBM Redpaper publication Hot-Spare Node and NPIV Target Ports.
NVMe over Fibre Channel (FC-NVMe) is the next big thing in storage input/output (I/O) access. FC-NVMe host attachment is supported in IBM Spectrum Virtualize from V8.2.1.0 onward and runs on existing Gen5/Gen6 SAN fabrics, so NVMe and Small Computer System Interface (SCSI) traffic can coexist on the same FC SAN connections. Spectrum Virtualize relies on the NPIV feature for FC-NVMe host attachment.
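As a rough sketch only (the host name and NVMe Qualified Name below are hypothetical placeholders, and parameter names should be confirmed for your code level), an FC-NVMe host object is defined by its NQN rather than by its WWPNs:
superuser>mkhost -name aixnvmehost01 -nqn nqn.2014-08.com.example:nvme:host01 -protocol nvme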
To conclude, NPIV-based implementations will become the norm.
NPIV deployment
The following steps were performed on a setup running DH8 SAN Volume Controller nodes and supporting a critical, performance-sensitive workload. The firmware was first upgraded to a recommended version (8.1.1.1) to meet the NPIV prerequisite. After the upgrade, the NPIV ports were identified using the "lstargetportfc" command, as shown in the following sample.
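Before starting the steps below, the current NPIV state of each I/O group can be confirmed; a minimal check, assuming the default I/O group ID of 0:
superuser>lsiogrp 0
The fctargetportmode field in the detailed output should report whether NPIV is disabled, transitional or enabled.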
Step 1: NPIV port identification
superuser>lstargetportfc -delim :
id:WWPN:WWNN:port_id:owning_node_id:current_node_id:nportid:host_io_permitted:virtualized
1:500507680C11094E:500507680C00094E:1:20:20:650000:yes:no
2:500507680C15094E:500507680C00094E:1:20:20:650001:no:yes
3:500507680C12094E:500507680C00094E:2:20:20:660000:yes:no
4:500507680C16094E:500507680C00094E:2:20:20:660001:no:yes
5:500507680C13094E:500507680C00094E:3:20:20:650200:yes:no
6:500507680C17094E:500507680C00094E:3:20:20:650201:no:yes
7:500507680C14094E:500507680C00094E:4:20:20:660200:yes:no
8:500507680C18094E:500507680C00094E:4:20:20:660201:no:yes
In the preceding sample, all the ports listed as “virtualized=yes” are the NPIV target ports. The other ports are called primary ports.
An additional advantage of the NPIV feature is that it provides WWPNs dedicated to host access, separate from the ports used for inter-node communication and back-end storage access. At this stage, the NPIV ports are still disabled for host communication, as indicated by "host_io_permitted=no".
Step 2: Host zoning with NPIV ports
As the next step of the migration, new host zones were created using the virtualized (NPIV) WWPNs.
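To collect these WWPNs quickly, the lstargetportfc output can be filtered from an administrative workstation. This is only a sketch: it assumes SSH access to the cluster (the cluster IP is a placeholder) and the same nine-column output shown in Step 1.
ssh superuser@<cluster_ip> "lstargetportfc -delim :" | awk -F: '$9 == "yes" {print $2}'
The ninth column is the virtualized flag and the second column is the WWPN, so the command prints only the NPIV target WWPNs to be zoned to the hosts.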
Step 3: NPIV port “transitional” state setting
The NPIV ports are set to a transitional state using the “chiogrp” command as shown in the following sample.
superuser>chiogrp -fctargetportmode transitional 0
In the NPIV transitional state, the NPIV ports are enabled for host communication. We can observe in the following sample that "host_io_permitted=yes" is now set for all the NPIV ports.
superuser>lstargetportfc -delim :
id:WWPN:WWNN:port_id:owning_node_id:current_node_id:nportid:host_io_permitted:virtualized
1:500507680C11094E:500507680C00094E:1:20:20:650000:yes:no
2:500507680C15094E:500507680C00094E:1:20:20:650001:yes:yes
3:500507680C12094E:500507680C00094E:2:20:20:660000:yes:no
4:500507680C16094E:500507680C00094E:2:20:20:660001:yes:yes
5:500507680C13094E:500507680C00094E:3:20:20:650200:yes:no
6:500507680C17094E:500507680C00094E:3:20:20:650201:yes:yes
7:500507680C14094E:500507680C00094E:4:20:20:660200:yes:no
8:500507680C18094E:500507680C00094E:4:20:20:660201:yes:yes
After running "cfgmgr" to discover the new paths, they can be confirmed from the AIX host side using the following command sample:
# odmget -q "path_status=1" CuPath | grep -p hdisk13
After the new paths are confirmed, the host zones containing the physical (primary) ports need to be removed. Execute "cfgmgr" once again and confirm that the paths through the physical ports go to the Failed or Missing state, as expected.
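A hedged host-side sequence for this check, assuming the hdisk name from the earlier example and standard AIX MPIO commands:
# cfgmgr
# lspath -l hdisk13
Only the paths that point at the primary (non-virtualized) WWPNs should show Failed or Missing; the paths through the NPIV WWPNs must stay Enabled. Once this is confirmed, the stale path entries can be cleaned up later with the "rmpath" command.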
Step 4: NPIV port enablement
The final stage of the NPIV migration is disabling the host I/O access through the physical ports.
During this client's deployment, we received the following warning when we executed the command without the "force" option:
superuser> chiogrp -fctargetportmode enabled 0
WARNING: CMMVC8019E Task could interrupt IO and force flag not set
We executed the same command with the “force” option.
superuser> chiogrp -fctargetportmode enabled -force 0
In the NPIV enabled state, the physical ports are disabled for host communication. We can observe in the following sample that "host_io_permitted=no" is now set for all the physical ports.
superuser>lstargetportfc -delim :
id:WWPN:WWNN:port_id:owning_node_id:current_node_id:nportid:host_io_permitted:virtualized
1:500507680C11094E:500507680C00094E:1:20:20:650000:no:no
2:500507680C15094E:500507680C00094E:1:20:20:650001:yes:yes
3:500507680C12094E:500507680C00094E:2:20:20:660000:no:no
4:500507680C16094E:500507680C00094E:2:20:20:660001:yes:yes
5:500507680C13094E:500507680C00094E:3:20:20:650200:no:no
6:500507680C17094E:500507680C00094E:3:20:20:650201:yes:yes
7:500507680C14094E:500507680C00094E:4:20:20:660200:no:no
8:500507680C18094E:500507680C00094E:4:20:20:660201:yes:yes
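As a final verification (again a minimal check against I/O group 0), the I/O group itself can be queried:
superuser>lsiogrp 0
The fctargetportmode field should now show "enabled".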
Precaution
Immediately after executing the "chiogrp" command to enable NPIV, the database host experienced very high latency, and the core banking software (CBS) application appeared to hang for almost 30 seconds with an increase in the transaction queue. The CBS application returned to normal after this period. Because this action was executed on a holiday with a light workload, the impact was negligible.
Conclusion and NPIV advantages
The migration of SAN Volume Controller storage from NPIV "disabled" to NPIV "enabled" can be executed completely online with appropriate planning and completion of the prerequisites. Based on our field experience, we strongly recommend executing the NPIV migration during a low-I/O period to mitigate the impact of the delayed I/O seen while enabling the NPIV ports.
Prior to enabling NPIV ports on the SAN Volume Controller storage, the banking client was facing significant performance issues during node failures and code upgrades because of multiple SCSI I/O errors seen at the host side while host multipathing failed over to the alternate paths on the partner node. NPIV addressed this issue by making the failed node's port WWPNs available on the partner node seamlessly and transparently to host multipathing, which reduced the host I/O disruption during failover from about 30 seconds to less than 4 seconds.
In addition, the client setup is already future-ready to utilize the following availability and performance benefits:
- The NPIV ports can be used to configure "hot spare" nodes, improving availability by introducing node-level redundancy into the SAN Volume Controller cluster (see the sketch after this list).
- With NPIV ports in place, NVMe and SCSI traffic can run simultaneously over the same physical Fibre Channel connections, and adopting NVMe will greatly improve the overall performance of the SAN Volume Controller storage.
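For reference, and only as a rough sketch (the panel name below is a hypothetical placeholder, and the exact syntax should be confirmed against the Redpaper and your code level), a hot-spare node is introduced with the addnode command:
superuser>addnode -spare -panelname 78XXXXX
Because the spare takes over the NPIV target WWPNs of a failed node, hosts continue to see the same target ports while the spare is swapped in.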
IBM Systems Lab Services has proven expertise in designing and implementing such niche features within complex solutions. For more support on storage solutions, reach out to IBM Systems Lab Services today.
Document Information
Modified date:
10 June 2021
UID
ibm11126017