Spectrum Virtualize Update 7.7.0
bwhyte
Introducing Spectrum Virtualize Version 7.7.0 Software
This week I have been travelling in the Nordics. Despite now living in New Zealand, I can't escape the great User Group events that the Swedish team have been running for many years now, and it's great to actually visit Stockholm in the 'almost' summer, rather than on my usual visit in chilly December. On Tuesday we also held our first Spectrum Storage User Group in Denmark, and it was great to see over 50 new and familiar faces at this inaugural event, which everyone agreed was a great success.
Coincidentally, on Tuesday we also announced the latest version of Spectrum Virtualize (7.7.0), which will be available for download next month for SVC, Storwize, FlashSystem V9000 and VersaStack.
Highlights of 7.7.0
The usage model for all Spectrum Virtualize products is based around 2-way active/active node pairs: two distinct control modules that share active/active access for a given volume. Each of these nodes has its own Fibre Channel WWNN, so all ports presented from each node have a set of WWPNs that are presented to the fabric.
Traditionally, should one node fail or be removed for some reason, the paths presented for volumes from that node would go offline, so we rely on the native OS multipathing software to fail over from using both sets of WWPNs to just those that remain online. While this is exactly what multipathing software is designed to do, it can be problematic, particularly if paths are not seen as coming back online for some reason. (Linux used to be terrible for this!) More importantly, we are at the mercy of software outside our control.
Starting with 7.7.0, we can switch into a mode where we take care of this at the node level using NPIV (N_Port ID Virtualization). Essentially, when enabled (and there is a transitional mode for backwards compatibility during the transition period), each physical WWPN will report up to 3 virtual WWPNs.
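As a rough sketch of the idea, here is a toy model of one physical port and its virtual WWPNs. All class names, field names and WWPN values are my own illustration, not anything from the product itself:

```python
from dataclasses import dataclass

@dataclass
class FCPort:
    """Illustrative model of one physical FC port on a node.

    The field names and example WWPNs are invented for this sketch;
    they do not come from the Spectrum Virtualize CLI or documentation.
    """
    physical_wwpn: str        # the physical port's own WWPN
    primary_host_wwpn: str    # NPIV virtual port that hosts normally log in to
    failover_host_wwpn: str   # NPIV virtual port, normally inactive; on partner
                              # failure it activates with the partner's primary WWPN

# One port reporting its set of virtual WWPNs alongside the physical one.
port = FCPort(
    physical_wwpn="50:05:07:68:01:40:00:01",
    primary_host_wwpn="50:05:07:68:01:41:00:01",
    failover_host_wwpn="50:05:07:68:01:42:00:01",
)
```

The key point the model captures is that the WWPNs hosts log in to are decoupled from the physical port, which is what lets them move between nodes.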
That's a lot of words, and hopefully the pictures below will make things much clearer!
This shows the 3 virtual ports; note that the failover host attach port (in pink) is not active at this time. The second example shows what happens when the second node has failed. Now the failover host attach ports on the remaining node are active and have taken on the WWPN of the failed node's primary host attach port.
In release 7.7.0, this all happens automatically when you have enabled NPIV at a system level. At this time, the failover only happens automatically between the two nodes in an I/O Group. However, a new command 'swapnode' can be used to take a spare node and swap that node into the cluster in place of any given node. This can be used to regain redundancy if the node failure is going to take some time to recover. You can expect further enhancements to the feature set that will make use of NPIV in the not too distant future.
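To make the failover behaviour within an I/O Group concrete, here is a minimal simulation of the idea. This is purely my own sketch, assuming two nodes whose failover host-attach ports take on the partner's primary WWPNs; none of these names come from the product:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Node:
    """One node in the I/O Group (illustrative, not the product's model)."""
    name: str
    host_wwpns: List[str]   # primary host-attach virtual WWPNs
    online: bool = True

class IOGroup:
    """Toy model of automatic NPIV target-port failover inside one I/O Group."""

    def __init__(self, node_a: Node, node_b: Node) -> None:
        self.nodes = {node_a.name: node_a, node_b.name: node_b}
        self.partner = {node_a.name: node_b.name, node_b.name: node_a.name}

    def fail_node(self, name: str) -> None:
        self.nodes[name].online = False

    def presented_wwpns(self) -> Dict[str, List[str]]:
        """WWPNs each surviving node currently presents to the fabric."""
        presented: Dict[str, List[str]] = {}
        for name, node in self.nodes.items():
            if not node.online:
                continue
            wwpns = list(node.host_wwpns)
            peer = self.nodes[self.partner[name]]
            if not peer.online:
                # Failover host-attach ports activate and take on the failed
                # partner's primary WWPNs, so host paths stay online without
                # relying on OS multipathing to notice anything.
                wwpns.extend(peer.host_wwpns)
            presented[name] = wwpns
        return presented

# Two nodes, then one fails: the survivor presents both sets of WWPNs.
group = IOGroup(
    Node("node1", ["50:05:07:68:01:41:00:01"]),
    Node("node2", ["50:05:07:68:01:41:00:02"]),
)
group.fail_node("node2")
```

From the host's point of view nothing has moved: the same target WWPNs are still logged in to the fabric, which is exactly why the OS multipathing layer is taken out of the failover path.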
I will cover the other features in a few more posts over the next week or so; for now, I have the small matter of travelling back to New Zealand from Sweden... see you on the other side.