
Upgrading IBM SAN Volume Controller from CG8 to SV1 model

Technical Blog Post


Abstract

Upgrading IBM SAN Volume Controller from CG8 to SV1 model

Body

This post was coauthored by Ashutosh Pathak and Gauurav Sabharwal, Technical Solution Architects for IBM Systems Lab Services Storage and Software Defined Infrastructure.

 

In this digital era, business agility depends on IT services and the IT infrastructure foundation beneath them. Availability has been a key challenge of IT management almost since the start of technology as we know it, and providing 24x7 IT service operation is a critical business need.

The real causes of planned and unplanned downtime are not just power outages and network issues in your IT infrastructure; there are many other challenges:

  1. Outdated software and hardware
  2. Planned maintenance
  3. Hurricanes and floods
  4. Human error

Many methodologies and technologies help make software and firmware upgrades or migrations manageable, but the real-world challenge is replacing existing IT infrastructure hardware with no downtime while your applications are running on that hardware. This is particularly difficult in the case of data storage.

Built with IBM Spectrum Virtualize software (part of the IBM Spectrum Storage family), IBM SAN Volume Controller (SVC) is an enterprise-class storage system that helps organizations achieve better data economics by supporting the new large-scale workloads that are critical to success. The upgrade from previous models of Spectrum Virtualize to new hardware can be easy and seamless, without any downtime.

IBM SVC has proven its mettle in the market for the last 15 years and continues to serve enterprise clients successfully. The IBM SVC-SV1 model was launched in 2016 as an upgrade to the existing models, to meet ever-growing workload and performance requirements.
 

This blog post provides a tutorial for an in-place upgrade from SVC CG8 nodes to SV1 nodes. The same procedure applies to upgrading DH8 nodes to SV1 nodes; a key difference is that CG8 nodes can contain internal solid-state drives (SSDs), whereas DH8 nodes do not.

 

Before you begin, here are some key points to remember for a seamless upgrade from CG8 to SV1 without downtime:

 

Planning the upgrade:

 

  • The new SV1 hardware should have a minimum of four FC adapter ports available for the upgrade.
  • Collect the support logs from the existing SVC cluster, which will help you understand the configuration.
  • Capture the output of the following commands, which will help prepare the node replacement commands (a consolidated capture sketch follows this planning list).
    • lsnode -delim : (checks the status of the nodes and of the config node, and is needed for the FC port mapping)
    • lsnodevpd -delim : (captures the front_panel_id of each node, which identifies the node's physical location)
    • sainfo lsservicestatus -node <node_id> -delim : (captures the WWPNs of the node's FC ports)
    • lsportip -delim : (captures the IP details, if IP replication is configured)
    • lssystem -delim :
    • lsportfc -delim :
    • lsfabric -delim : (helps identify the FC ports used for host zoning, internode communication and replication)
  • If there are any SSDs in the existing CG8 nodes, check their placement in the existing storage pool configuration.
    • Remove all the SSDs from the existing configuration.
      • Check that the storage pool has enough free space to migrate the data from the SSD mdisk to the other mdisks, then remove the SSD mdisk from the storage pool.
      • Change the removed SSD from candidate to unused state.
  • To prepare the FC port mapping commands, use the following tool, which creates the exact mapping and the command to run on each node. The tool asks for the output of the lsnode, lssystem and lsportfc commands.
    • http://ports.eu-gb.mybluemix.net/
    • Sample command: satask chvpd -wwnn 500507680100E83A -fcportmap 34-11,33-12,31-13,32-14,41-21,42-22,43-23,44-24,61-31,62-32,63-33,64-34,71-51,72-52,73-53,74-54
  • Firmware of the existing cluster: The minimum supported firmware is 7.7.1 for the upgrade to SVC-SV1 nodes.
  • All the hosts accessing SVC volumes should be running with correct multipathing. Check every host's multipath status before the upgrade.
  • All the events listed under recommended actions should be fixed before the upgrade.
  • Capture the existing zoning information and SAN design from the client. This information will help you understand the SVC zones in the SAN infrastructure and how the upgrade may affect overall production.
    • Capture the switch port location for each worldwide port name (WWPN) of the CG8 node1 and node2 FC ports.
    • Check which FC ports are part of the existing node-to-node zoning.
    • Check the node-to-host zoning.
    • Check which FC ports are part of the replication zoning.
    • The new SV1 nodes may come with a greater number of FC ports.
    • Plan the new zoning configuration based on the new ports. Follow the port designation recommendations for isolating traffic on 2145-SV1 nodes.
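
The command captures listed at the start of this planning section can be scripted so that nothing is missed. Below is a minimal sketch, assuming SSH access to the cluster management IP as the superuser; the IP address and output file names are placeholders, and the lsnodevpd and lsservicestatus captures should be repeated per node. The lsnode, lssystem and lsportfc files are exactly what the port-mapping tool asks for.

  # Capture the pre-upgrade configuration (192.0.2.10 is a placeholder cluster IP)
  ssh superuser@192.0.2.10 'lsnode -delim :'                                  > lsnode.out
  ssh superuser@192.0.2.10 'lsnodevpd -delim : <node_id>'                     > lsnodevpd_node1.out
  ssh superuser@192.0.2.10 'lssystem -delim :'                                > lssystem.out
  ssh superuser@192.0.2.10 'lsportfc -delim :'                                > lsportfc.out
  ssh superuser@192.0.2.10 'lsportip -delim :'                                > lsportip.out
  ssh superuser@192.0.2.10 'lsfabric -delim :'                                > lsfabric.out
  ssh superuser@192.0.2.10 'sainfo lsservicestatus -node <node_id> -delim :'  > lsservicestatus_node1.out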

 

  • Prepare the zoning commands based on the existing configuration and the new SV1 FC ports, and create all the zones (see the sample zone commands after this list).
  • Take all backups before the upgrade: the SVC configuration backup and the SAN switch configuration backup.
  • Validate that all hosts are running the correct multipathing configuration, so that paths can fail over during the replacement.
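
Because the satask chvpd step described later reuses the old WWPNs, the existing host, internode and replication zones generally keep working; new zones are mainly needed for any additional SV1 FC ports. As an illustration only, on a Brocade FOS switch the new aliases and zones might be prepared as follows; the alias names, WWPN and zone configuration name are hypothetical, and a Cisco MDS fabric would use different commands.

  # Alias and zone one additional SV1 port to a host HBA (hypothetical names and WWPN)
  alicreate "SVC_SV1_N1_P5", "50:05:07:68:01:15:e8:3a"
  zonecreate "z_HOSTA_SVC_SV1_N1_P5", "HOSTA_HBA0; SVC_SV1_N1_P5"
  cfgadd "PROD_CFG", "z_HOSTA_SVC_SV1_N1_P5"
  cfgsave
  cfgenable "PROD_CFG"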

 

Upgrade of the nodes:

 

  • Power on the new SVC-SV1 nodes before execution day so that any hardware issues are found early and do not surface on execution day.
  • Do not connect the SV1 nodes to the network or SAN switches yet.
  • Check all the zoning configurations before the node upgrade and ensure redundant paths are available, so that while a node is being replaced the host initiators can still reach storage through the remaining online paths.
  • Remove the CG8 SSDs, if any, from the logical configuration and change the disk use type from Candidate to Unused.
  • Replace the nodes one by one, allowing a 30-minute cool-down after each node so that all host paths can come back online (a consolidated command sketch follows this list).
    • Connect your laptop to the SV1 node's technician port using a standard RJ45 network cable.
    • The technician port automatically assigns a DHCP IP address to your laptop if DHCP is enabled on the laptop's network port.
      • The technician port's default IP address is 192.168.0.1; if DHCP is not enabled on the laptop, assign an IP address from the same range to the laptop's network port.
    • Run sainfo lsservicestatus -delim : to capture the WWPNs of the SV1 node. These WWPNs will be different from the old SVC node's, because this is a new node.
    • Assign the worldwide node name (WWNN) and a hardware location in the new 2145-SV1 node for each FC port that is defined on the node you are replacing.
      • satask chvpd -wwnn <WWNN of node> -fcportmap <> command
    • Check the updated information on the SV1 node; the WWPNs of the SV1 node's FC ports should now be the same as the old node's.
      • lsnodecandidate and sainfo lsservicestatus
    • Issue the rmnode command to delete the old node from the system and input/output (I/O) group.
      • rmnode <node_ID> or <name>
    • Enter the lsnode command to ensure that the node is no longer a member of the system.
      • lsnode -delim :
    • Connect the FC cables on SV1 node adapter 3 to the same SAN switch ports where the CG8 FC ports were connected.
    • CG8 node WWPNs are numbered differently, as shown in the figure below: the mapping of physical FC port locations to logical FC ports differs from that of SV1 nodes.

      [Figure: CG8 physical-to-logical FC port numbering]

    • On SV1 nodes, the physical and logical FC port locations have the same mapping.
    • Add the new 2145-SV1 replacement node to the system.
      • addnode -wwnodename WWNN -iogrp <iogroup_name> or <id>
    • Check the status of the node using the following command:
      • lsnode -delim :
    • If the node was added successfully, the lsnode output will show the SV1 node as part of the cluster.
    • After one node has been upgraded, the SVC GUI System view will show one CG8 node and one SV1 node in the graphical representation of the cluster.
    • Rescan all hosts and verify that all the paths are online.
    • Once both nodes are replaced successfully, if the new nodes have more than 64 GB of memory, run the following command to enable the additional memory.
      • chnodehw <nodeid>
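
For reference, the per-node replacement steps above can be condensed into the following command sequence. This is a sketch, not a script: the node ID and WWNN are illustrative values reused from the examples in this post, and the FC port map must come from the port-mapping tool.

  # On the new SV1 node, through the technician port (service assistant CLI):
  sainfo lsservicestatus -delim :                    # record the factory WWPNs of the new node
  satask chvpd -wwnn 500507680100E83A -fcportmap <map_from_tool>   # assign the old node's identity

  # On the cluster CLI:
  rmnode 2                                           # remove the old node (node ID 2 assumed)
  lsnode -delim :                                    # confirm the node has left the cluster

  # Move the FC cables to the SV1 node, then on the cluster CLI:
  lsnodecandidate                                    # the SV1 node should appear as a candidate
  addnode -wwnodename 500507680100E83A -iogrp io_grp0
  lsnode -delim :                                    # the SV1 node joins with the old identity

  # After both nodes are replaced and host paths have recovered:
  chnodehw 1                                         # enable the new hardware, such as memory above 64 GB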

 

Each SVC node is replaced one at a time, without downtime, because during the upgrade the hosts continue to access data through the paths of the surviving node. Once the nodes are upgraded successfully, also plan a firmware upgrade.
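
For example, on a Linux host that uses native device-mapper multipathing, you can confirm after each node replacement that paths through both nodes of the I/O group are back online before touching the next node; hosts on other operating systems or multipath drivers will use different commands.

  # Rescan SCSI paths, then inspect the path state (Linux dm-multipath)
  rescan-scsi-bus.sh           # from the sg3_utils package
  multipath -ll                # each SVC volume should again show active paths through both nodes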

 

After completing this process, system admins and field engineers can perform regular health checks to ensure the hardware is running smoothly.

 

If you’re looking for any support on IBM software-defined storage and the IBM Spectrum Storage Suite, contact IBM Systems Lab Services. The Lab Services Storage and Software Defined Infrastructure team has helped clients around the world efficiently capture, deliver, manage and protect data with superior performance.

[{"Business Unit":{"code":"BU054","label":"Systems w\/TPS"},"Product":{"code":"HW206","label":"Storage Systems"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]
