Here is a little test. Check your documentation: Do you know how to power down and power up the equipment in your computer room? If you had to power off your site in a hurry would you know how? If you wanted to script a shutdown, could you do it?
Here are some hints and tips that might help you with some of my favourite products:
The process to power up or down your DS8000 is documented in the Information Center here.
If you want to script powering off a DS8000 storage unit, you can use the chsu -pwroff DSCLI command. This command will shut down and power off the specified DS8000 unit. Before powering off the unit, be careful to ensure that all I/O activity has been stopped. An example of the command is shown below. Your machine will have a different IP address, password and serial number to mine. Note the serial number always ends in zero because we send the command to the storage unit.
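If you wanted to wrap that in a script, a minimal sketch might look like the line below. The HMC address, credentials and serial number are placeholders; substitute your own values and test carefully:

    # Sketch only: placeholder HMC address, credentials and serial number
    # Make sure all host I/O has been quiesced before running this
    dscli -hmc1 10.0.0.50 -user admin -passwd mypassword chsu -pwroff IBM.2107-75ABCD0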
Last week I talked about the differences between the XIV Generation 2 and XIV Gen3 by just looking at the rack. This week we open the front door to see if we can spot any more differences...
First up you notice that it looks almost exactly the same..... but appearances can be deceiving.
So what actually is different? From the front there are three obvious visible differences, two of which are not that interesting....
The XIV Generation 2 has a storage grid that uses two 48-port Gigabit Ethernet network switches for interconnection between the modules (these are only visible from around the back of the rack). These switches get redundant power via an RPS-600 Redundant Power Supply (RPS), which sits at the front of the rack (directly above module 6). The XIV Gen3, on the other hand, uses two 36-port InfiniBand switches that have redundant power supplies built in, so the Gen3 does not need the RPS and it is gone from the front of the rack. But its spot has not remained empty....
The XIV Generation 2 has a special server called a Maintenance Module located at the rear of the rack. You may notice the USB modem plugged into it. The XIV Gen3 instead uses an IBM System x3250 M3 mounted at the front of the rack. This server is used for maintenance, upgrades and remote access (if necessary, via modem). You can spot it here, directly below the nameplate, where the RPS used to be:
If you look closely at the disks in a Gen3 you will notice they are marked as SAS drives, not SATA. This gives us a performance boost even though the rotation speed remains the same. If you want to see this close up for yourself, check out this Kaon 3D model of the XIV Gen3.
This got me wondering why SAS drives, which have the same rotational speed and seek time as SATA drives, could perform better. The two main reasons are that SAS is full duplex and that SAS supports tagged command queuing. There is a great article regarding the differences here that references SPC testing that Seagate performed. A quote from the article:
SAS drives offer a significant improvement in performance over SATA drives in both throughput and IOPs primarily due to their full duplex, bi-directional I/O capabilities. Published Storage Performance Council (SPC) benchmark results demonstrate this feature with up to 64 percent improvement in the SPC-2 benchmark (based on multiple workload testing).
So is that it for differences between XIV Generation 2 and Gen3? Well visibly from the front... yes it is. The big changes are around the back and inside the modules, which I will cover in a future blog post.
In the meantime, check out my new Visio stencils for XIV. I have added three new stencils (which I am still working on). Check them out and let me know what you think. You will find them here. If and when I update them, you will get a notification so you can keep up to date.
The IBM Storage Management Console for VMware vCenter version 2.5.1 is now available for download and installation. This version supports XIV, SVC and Storwize V7000 as per the versions in the following table (the big change being support for version 6.2):
If you want to see a video showing the capabilities of the new console, check out this link.
After installing the console, you will get this lovely new icon:
Start it up and select the option to add new storage; you now get three choices:
If you're using SVC or Storwize V7000 you need to specify an SSH private key. This key MUST be in OpenSSH format. This caused me a problem, as I kept getting this message when trying to add my Storwize V7000 to the plug-in:
Unable to connect to 10.1.60.107. Please check your network connection, user name, and other credentials.
I could use the same IP address, user ID and SSH private key to log on to the Storwize V7000 using PuTTY, so I knew none of these things were wrong.
I reread the Installation Instructions closely and realized my mistake. It clearly states:
Important: The private SSH key must be in the OpenSSH format. If your key is not in the OpenSSH format, you can use a certified OpenSSH conversion utility.
I pondered what conversion utility I could use when I realized I had the utility all along: PuTTYgen. I opened PuTTYgen, imported my private key (the .ppk file) and exported my SSH private key in OpenSSH format. You don't need to do anything with the public key.
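As an aside, if you have the command-line build of PuTTYgen (shipped with PuTTY packages on Linux), the same conversion can be scripted. A minimal sketch, with placeholder file names:

    # Convert a PuTTY .ppk private key to OpenSSH format
    puttygen mykey.ppk -O private-openssh -o mykey_openssh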
I was then able to add the Storwize V7000 by specifying the private SSH key exported using OpenSSH format.
Now I have both IBM XIV and Storwize V7000 in the vCenter plug-in and can get detailed information about, and manipulate, both. In this example I have highlighted the Storwize V7000, revealing the firmware level it is running.
I was tempted to detail all the many things you can do with the plug-in, but you're better off watching the video via this link.
So are you using the plug-in? Have you upgraded to version 2.5.1 yet? Comments very welcome!
Hopefully, if you were in Melbourne last week, you made it to the IBM Pulse 2011 conference at the Crown Promenade. It was a great success, and with 850 attendees the facilities were packed, especially the main hall.
My highlights? Well apart from visiting the IBM developerWorks stand and getting a free IBM floppy disk T-Shirt...
... it was listening to customers. There were 14 customer case study presentations where attendees could hear real-world experiences from real customers. For the storage track we were lucky to have Angus Griffin from Edith Cowan University talking about how they use IBM solutions, including IBM SVC with VMware SRM, to build their Disaster Recovery solution. Angus is a great presenter who used a sort of Takahashi Method PowerPoint deck where each slide was just one sentence. Below is an example. Can you guess what he was talking about?
It was of course why clients sometimes do not have a comprehensive disaster recovery strategy.
I presented on Storage Virtualization and the Storwize V7000. You can check out my presentation on Slideshare. I have struggled for some time to match my presentation style to the sort of material that IBM produces, so I am working towards a more pared-back approach. If you view this presentation on my Slideshare channel you will also get some speaker notes.
If you want a copy of the presentation and you're an IBMer, you can find it on Cattail. For everyone else, please send me an email or leave a comment.
The other client who presented in the storage track was Richard Whybrow from Hertz Australia. Richard's presentation on how Hertz use IBM solutions to manage their backup and encryption requirements was short and to the point. But the highlight was Richard's movies. I want to point you to two of them, which you can find on his YouTube channel. The first one is hilarious.... here is the SAL 9000 restoring 1.6 TB of data in seconds!
If you're looking for something slightly more serious, here is Richard's winning entry to the IBM Tivoli Software Products Rock competition. Richard is sitting at Southbank, close to the IBM building here in Melbourne. There is also a great shot of Melbourne's Flinders Street Station at the end (as well as a tribute to the film Minority Report).
Rob Jackard from the ATS Group does a great job amalgamating IBM storage site updates, so I am sharing them here with you. Here is my high-level view:
AIX users: Review the service dates for your technology level.
DS3500 users: Upgrade your firmware to 7.70.45.00 or 7.77.19.00.
DS8000 users: Take note of the limitation on resizing a space efficient repository. I dealt with this recently at a client by writing a script to delete the FlashCopy targets, delete and recreate the repositories, and then create the FlashCopy targets again (I sketch the idea below, after the bulletins).
SVC and Storwize V7000 users: Upgrade to the fixed PTF levels noted in the bulletins below. Be aware of the limitations on split cluster and Global Mirror intra-cluster mirroring.
(2011.06.28) Technical Bulletin: AIX 5.3 Support Lifecycle Notice: NOTE-1: After October 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-11. NOTE-2: End of Support for AIX 5.3 has been announced as 04/30/2012. NOTE-3: IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 53-TL06, -TL07, -TL08, -TL09, -TL10. http://www14.software.ibm.com/webapp/set2/subscriptions/onvdq?mode=18&ID=2110&myns=pwraix53
(2011.06.28) Technical Bulletin: AIX 6.1 Support Lifecycle Notice: NOTE-1: After October 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-04. NOTE-2: Sometime after May 1, 2012, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-05. NOTE-3: IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 61-TL00, -TL01, -TL02, -TL03. http://www14.software.ibm.com/webapp/set2/subscriptions/pqvcmjd?mode=18&ID=5488&myns=paix61
(2011.06.29) Space Efficient FlashCopy Repository size should not be changed for an existing repository. NOTE: The code fix, to fail the chsestg command if the size of the repository is changed, is available for Release 4.3 and Release 5.1.5 bundles (see the linked flash for the exact bundle levels). https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003793
(2011.07.11) Storwize V7000 and SAN Volume Controller Software Upgrades May Stall if Performance Monitoring Activities are Performed During the Upgrade Process. NOTE: This issue has been fixed by APAR IC77000 in a subsequent PTF release (see the linked flash for the exact level). https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003846
(2011.06.10) Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2 TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume. NOTE: This issue is fixed by APAR IC76806 (see the linked flash for the fixed PTF levels). https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003840
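Regarding the space-efficient repository note above, the workaround script I mentioned follows this outline in DSCLI. All volume IDs, pool names and capacities here are illustrative only; verify each command and its parameters against the DSCLI reference for your code level before trying this:

    # Outline only: illustrative IDs and simplified parameters
    rmflash 0100:0200                     # remove the FlashCopy relationship(s)
    rmsestg P1                            # delete the space-efficient repository in pool P1
    mksestg -repcap 500 -vircap 2000 P1   # recreate the repository at the new size
    mkflash -tgtse 0100:0200              # recreate the space-efficient FlashCopy relationship(s)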
As a child I used to love the spot the difference cartoon in the Sunday paper. You usually had 10 differences to circle... and I could never find the last one. Look carefully at the two machines below. Can you spot the differences?
It's an XIV Generation 2 on the left and an XIV Gen3 on the right. For me it's the side panels that give it away (of course the Gen3 printed on the front panel helps).
The big change is the rack that the product uses:
The Generation 2 IBM XIV uses an APC AR3100 (also called a NetShelter). The Gen3 IBM XIV uses an IBM T42 rack.
So why the change?
Three good reasons:
Using the T42 lets us offer an optional Ruggedized Rack Feature, providing additional hardware that reinforces the rack and anchors it to the floor. This hardware is designed primarily for use in locations where earthquakes are a concern. As you may be aware there have been some major earthquakes around the world recently (with tragic results). Clearly our clients in earthquake prone areas need us to provide a model that can be hardened for use in earthquake zones.
Using the IBM T42 rack lets us offer an optional IBM Rear Door Heat Exchanger, which is an effective way to assist your air conditioning system in keeping your datacenter cool. It removes heat generated by the modules in the XIV before the heat enters the room. Inside the door of the heat exchanger are sealed tubes filled with circulating chilled water. Its unique design uses standard fittings and couplings, and because there are no moving or electrical parts, it helps increase reliability. It can be opened like any rear cover, so serviceability of an XIV Gen3 fitted with a heat exchanger is as easy as the standard air-cooled version.
Using the T42 lets us offer a rack which matches our standard rack offering. It's a sturdier rack and travels far better over both short and long distances. To put it simply: it's a more substantial rack.
One nice feature that both products offer is feature code 0200 (weight reduction for shipping). When ordered, it tells the plant to ship the XIV in a weight-reduced format. For Generation 2 this means the rack that IBM ships will weigh around 300 kg (unpacked from the shipping crate). The rest of the machine (the modules and the UPSs) is shipped in separate boxes. The XIV Gen3 will weigh more as less hardware is removed, although I am still confirming what that will be. The advantage is that you can use lower-rated goods lifts and move the XIV across floors that are not rated for the maximum weight. You just have to ensure that the planned location of the XIV can support the final weight. And the really nice thing? This feature is available at no extra cost.
(edited 27/7/11 to clarify feature code 0200 will be different for XIV Gen3).
Tiny toast or giant hand? It's not an optical illusion.
When IBM offered 2 TB drives on the XIV, I thought I was seeing an illusion: the overall power consumption had dropped (not risen). Guess what? With XIV Gen3, power consumption drops yet again.
Using worst-case power consumption numbers, I can see the following maximums:
180 drive XIV Generation 2 with 1 TB drives (79 TB usable): 8.4 kVA
180 drive XIV Generation 2 with 2 TB drives (161 TB usable): 7.1 kVA
180 drive XIV Gen3 with 2 TB drives (161 TB usable): 6.7 kVA
So with every update to the product, power consumption has kept dropping. Let's compare the first Generation 2 to the XIV Gen3:
Usable capacity up by 103%
Power consumption down by 20%
Heat output down by 20%
Noise level down by 33%
Performance up by up to 400%
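If you want to sanity-check the capacity and power figures, the arithmetic is easy to reproduce from a shell using the numbers quoted above:

    echo "scale=2; (161-79)/79*100" | bc    # usable capacity: roughly 103% increase
    echo "scale=2; (8.4-6.7)/8.4*100" | bc  # power and heat: roughly 20% decrease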
How about this as a measure: let's compare the Microsoft Exchange ESRP report for XIV Generation 2 found here with the newly released report for XIV Gen3 found here. While the ESRP program is not a benchmarking program, the results are truly impressive.
For more information on Power Consumption and the cost of running XIV, check out these white papers:
If you're an existing XIV user and you're interested in measuring your current power consumption, check out my tutorial here. If you want the spreadsheet shown in the video, drop me a comment (your email address will appear in my comments dashboard but will not be visible to anyone else).
IBM has been selling IBM-branded Brocade switches since 2001, when we announced the 8-port 2109-S08 and 16-port 2109-S16. These were classic switches that ran at 1 Gbps. They had a front operator panel with a small keypad (a feature which, in the rush to fit in more SFPs, did not appear in later models). Since then IBM has gone on to sell many of Brocade's switches and directors.
Sometimes you need to convert a Brocade model name to an IBM model name (or the other way around). One way to determine with scientific accuracy which type of switch you are working on is to telnet or SSH to the switch and issue a switchshow command. You will get a switchType value. In this example, my switch is a switchType 27.2.
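If you want to script this check across several switches, FOS will also run commands over SSH. A quick sketch, with a placeholder address and user ID:

    # Query the switch type remotely
    ssh admin@10.0.0.10 switchshow | grep switchType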
Or if you are using the Web GUI, you can also see the switch type on the opening screen. In this example the switch is a type 34.0.
Having scientifically determined the type of switch, we can now use my decoder ring to determine the IBM machine type, IBM model name and the Brocade model name. I have ordered the switches by Type number. There are three things to note:
Brocade have dropped the Silkworm branding, so I have dropped it too.
Each switch type has sub-types, for example 34.0 and 34.1. The difference is a sub-version number which is normally not published or documented.
IBM announced 16 Gbps SAN switches on August 16, 2011 so I updated the chart on that date.
If you use Data Center Fabric Manager (DCFM), it actually displays the Switch Type using Brocade model names. Here is an example report from the DCFM we are running in my lab. This level of information is very helpful.
If you have Brocade fibre channel switches in your SAN, you need to be aware of the method which Brocade use to manage firmware releases. All 4 and 8 Gbps Brocade SAN switches use a Linux based firmware which Brocade call Fabric Operating System or FOS. Updates to this firmware are released in families. This started with version 4, then version 5 and then version 6. Each family has had a series of updates. Version 5.0.x went to 5.1.x, 5.2.x and 5.3.x. Version 6.0.x went to 6.1.x, 6.2.x, 6.3.x and currently 6.4.x.
The good news is that you can non-disruptively update firmware on Brocade switches, so you can move to higher releases without an outage (note there may be exceptions to this; always read your release notes to be sure). However, you need to be aware of a rule regarding the from and to versions. Since FOS 6.0.0, Brocade have had a one-release migration policy. This allows more reliable and robust migrations for customers. By having fewer major changes in internal databases, configurations, and subsystems, the system is able to perform the upgrade more efficiently, taking less time and ensuring a truly seamless and non-disruptive process for the fabric. The one-release migration policy also reduces the large number of upgrade/downgrade permutations that must be tested, allowing Brocade to spend more effort ensuring the supported migration paths are thoroughly and completely verified.
Disruptive upgrades are allowed, but only for a two-level migration (for instance from 6.2 to 6.4, skipping 6.3).
So why should you care?
Well, your upgrade philosophy may be: if it ain't broke, don't fix it. Or you may have the policy: we do fix-on-fail; apart from that, we don't update firmware. Much as I can understand the attraction of this, when you finally do perform an update you may find yourself having to do many upgrades, or kangaroo hops as I call them.
Let's document some possible kangaroo hops from an old release to what is currently the newest release:
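As an illustration only (applying the one-release rule to the family list above), a switch still running FOS 5.3.x would need a sequence something like this to reach the current release:

    # One family at a time, non-disruptively:
    #   5.3.x -> 6.0.x -> 6.1.x -> 6.2.x -> 6.3.x -> 6.4.x
    # Each arrow is a separate firmwaredownload, with its own release note check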
As you can see from the steps above, you may have a very long change window if you are choosing to not perform updates on a regular basis. There are also lots of caveats and restrictions based on the hardware of the switch you are running. It is very important you consult the release notes that can be found at the following links:
If you're looking for a recommended version to install, the Version 6 release notes page above gives advice on this. Currently it says: IBM recommends that Open System customers that currently use FOS 6.1 or earlier limit migration to FOS 6.2.2b, 6.3.0d, 6.3.1a, or 6.4.0c or later only.
Please note that the release notes published on IBM's website are Brocade documents. While this is good (since it means they come straight from the manufacturer), you need to decode which Brocade hardware model is which IBM machine type. The version 6 URL above also contains a product cross-reference which lets you convert Brocade product names to IBM product names. I am also working on a post which will help you with this, so watch this space.
Edit July 19, 2011: The original post suggested 6.4.0a as a go-to level; this has since been removed. It has also been pointed out to me that some blade type switches may not be capable of hot-code-load (HCL). You need to check your vendor release notes to be certain.
Just a quick post as I leave Singapore to return to Melbourne: I thought I would share two more photos with you.
No trip to Singapore is complete without a visit to the Merlion, the mythical creature who acts as a mascot for Singapore. The fish body represents Singapore's origin as a fishing village when it was called Temasek, which means "sea town" in Javanese. The lion head represents Singapore's original name — Singapura — meaning "lion city" or "kota singa" (thanks to Wikipedia for the text).
Here is a night view across to the Marina Bay Sands, the integrated resort fronting Marina Bay in Singapore. Developed by Las Vegas Sands, it is billed as the world's most expensive standalone casino property at S$8 billion, including the cost of the prime land. The remarkable building on the left hand side is the ArtScience Museum. The architecture is said to be a form reminiscent of a lotus flower (again, thanks to Wikipedia for the information).
I would love to return to Singapore soon in the form of a tourist, it is an amazing city full of vibrant energy. With Singapore National Day coming up on August 9 and the Formula One in September, there will certainly be plenty for visitors to see and do.
If you want to see more of my photographs, feel free to visit my Flickr account.
I am in Singapore this week running a teach the teacher seminar on Storwize V7000. We are creating more instructors as demand for courses on the product continues to increase.
I am fortunate to be staying at the Marina Bay Sands Resort, which is one of the most mind blowing facilities I have ever seen. Check out this view of the Infinity Pool from the Skydeck up on the 57th floor.
Out my hotel window they are building a huge new facility known as Gardens by the Bay, but frankly I think it looks more like a spaceport!
According to Wikipedia these are Supertrees: tree-like structures that dominate the Gardens landscape with heights that range between 25 and 50 metres. They are vertical gardens that perform a multitude of functions, which include planting, shading and working as environmental engines for the gardens.
But doesn't this look like two huge crashed spaceships?
Actually they will be giant conservatories, again according to Wikipedia they are the Flower Dome and the Cloud Forest.
They certainly know how to think big in Singapore!
XIV Gen 3 modules are built on a new generation of Intel microprocessors based on the Nehalem micro-architecture. Nehalem is the most profound architecture change that Intel has introduced in the 21st century. Some of the key changes and their benefits are:
Integrated memory controller: The memory controller now sits on the same silicon die as the processors. It runs at the same clock-speed as the processors instead of at the lower speed of an external front-side bus. This dramatically improves memory performance and therefore overall system performance.
No need for buffered memory: Previously, buffered memory was required to improve the performance of the memory sub-system. Buffered memory is relatively expensive and energy hungry. With the faster Nehalem integrated memory controller, the system can deliver improved performance without needing buffered memory, saving cost as well as energy. XIV Gen 3 will be faster and cooler at the same time using unbuffered DDR3 RAM. And since the memory is cheaper, we can put more in.
Increased memory capacity: Nehalem supports more memory chips at higher speeds. In XIV Gen 3 this translates into a 50 to 200% increase in system cache, significantly lifting the performance headroom of an already stellar performer.
No more front-side bus: Memory, second CPU package and peripherals no longer have to share and wait on a single bus to communicate. The connections are now direct or switched, enabling increased parallelism and the ability to do more work simultaneously.
PCI Express Generation 2: The I/O sub-system doubles in speed with the introduction of PCI Gen-2. This enables faster network and I/O adapters for XIV Gen 3:
- 8 Gbps fibre-channel host connections.
- More iSCSI host connections (including at the entry configuration of 6 modules).
- Multi-channel, low latency InfiniBand as the inter-module connection.
- A slot for solid state disk (SSD).
Better systems management instrumentation: The system supports increased monitors for sub-systems for more sophisticated self diagnostics and healing. Remote management capability has also been improved.
Furthermore, the new motherboards have additional expansion capacity (more processors, memory and I/O) that can be utilized to deliver future improvements in performance and increased software functionality.
XIV Gen 3 is not the first storage sub-system to adopt the Nehalem architecture. Some of our competitors (EMC and NetApp for example) have already done so with their dual-controller arrays. XIV Gen 3 takes the Nehalem architecture advantage forward, not twice, but six to fifteen times.
Many thanks to Patrick Lee for writing up this great summation.
Today IBM is announcing a new member of the XIV family, which we are calling XIV Gen3. I thought I would give a brief history of how we got here before I get too carried away with details.
What was Generation 1 of the XIV?
In 2002 an Israeli startup began work on a revolutionary new grid storage architecture. They devoted three years to developing this unique architecture that they called XIV. They delivered their first system to a customer in 2005. Their product was called Nextra (does it look familiar?).
What was Generation 2 of the XIV?
In December 2007, the IBM Corporation acquired XIV, renaming the product the IBM XIV Storage System. The first IBM version of the product was launched publicly on September 8, 2008. Unofficially within IBM we refer to this as Generation 2 of the XIV.
The differences between Gen1 and Gen2 were not architectural; they were mainly physical. We introduced new disks, new controllers, new interconnects, improved management and additional software functions.
As anyone who has read my blog knows, I have been working on the Generation 2 XIV since the day IBM began planning to release it as an IBM product. So it is very exciting to be able to share with you that we are now releasing Generation 3 of the IBM XIV Storage System.
What is Generation 3 of the XIV?
Generation 3 of the XIV is a new member of the XIV family that will be an alternative to the Generation 2 XIVs we currently offer. It does not change the fundamental architecture; that remains the same. What it does do is bring significant updates to almost every part of the XIV, including:
Introducing InfiniBand interconnections between the modules.
Upgrading the modules to add 2.4 GHz quad-core Nehalem CPUs, new DDR3 RAM and PCI Gen 2 (using 8x slots that can operate at 40 Gbps).
Upgrading the host HBAs to operate at 8 Gbps.
Upgrading the SAS adapter.
Upgrading the disks to native SAS.
A new rack.
A new dedicated SSD slot (per module) for future SSD upgrades.
Enhancements to the GUI plus a native Mac OS version.
I will be blogging about each of these changes over the coming days and weeks as we move to general availability date, so watch this space. In the meantime, why not visit the official XIV page here and check out the ITG Report linked there.
I have received this question several times, so it's clearly something people are interested in.
The Storwize V7000 has two controllers known as node canisters. It's an active/active storage controller, in that both node canisters are processing I/O at any time and any volume can be happily accessed via either node canister.
The question then gets asked: what happens if a node canister fails, and can I test this? The answer to the question of failure is that the second node canister will handle all the I/O on its own. Your host multipathing driver will switch to the remaining paths and life will go on. We know this works because a firmware upgrade takes one node canister offline at a time, so if you have already done a firmware update, then you have already tested node canister failover. But what if you want to test this discretely? There are four ways:
Walk up to the machine and physically pull out a node canister. This is a bit extreme and is NOT recommended.
Power off a node canister using the CLI (using the satask stopnode command). This will work for the purposes of testing node failure, but the only way to power on the node canister is to pull it out and reinsert it. This is again a bit extreme and is not recommended. This is also different to an SVC, since each SVC node has its own power on/off button.
Use the CLI to remove one node from the I/O group (using the svctask rmnode command). This works on an SVC because the nodes are physically separate. On a Storwize V7000 the nodes live in the same enclosure and a candidate node will immediately be added back to the cluster, so as a test this is not that helpful.
Place one node into service state and leave it there while you check all your hosts. This is my recommended method.
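As an aside, the service assistant actions have CLI equivalents (the sainfo and satask commands, issued over SSH to the service IP address). Treat the following as a sketch only; the panel name is illustrative and you should verify the exact syntax in the command reference for your code level:

    sainfo lsservicenodes          # check the state of both node canisters first
    satask startservice node2      # place the chosen node into service state
    satask stopservice node2       # bring it back out again when testing is done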
First up, this test assumes there is NOTHING else wrong with your Storwize V7000. We are not testing multiple failures here. You need to confirm that the Recommended Actions panel, as shown below, contains no items. If there are errors listed, fix them first.
Once we are certain our Storwize V7000 is clean and ready for test, we need to connect via the Service Assistant Web GUI. If you have not set up access to the service assistant, please read this blog post first.
So what's the process?
First, log on to the service assistant on node 1 and place node 2 into service state. I chose node 2 because normally node 1 is the configuration node (the node that owns the cluster IP address). You need to confirm you're connected to node 1 (check the top right), select node 2 (from the Change Node menu), then choose Enter Service State from the drop-down and hit GO.
You will get this message confirming you're placing node 2 into service state. If it looks correct, select OK.
The GUI will pause on this screen for a short period. Wait for the OK button to un-grey.
You will eventually get to this with Node 1 Active and Node 2 in Service.
Node 2 is now offline. Go and confirm that everything is working as desired on your hosts (half your paths will be offline but your hosts should still be able to access the Storwize V7000 via the other node canister).
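What "working as desired" looks like depends on your multipathing driver. As an example sketch, on a Linux host running device-mapper multipath you could check with:

    multipath -ll    # roughly half the paths will show as failed
    # paths through the surviving node canister should still show as active ready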
When your host checking is complete, you can use the same drop-down to Exit Service State on node 2 and select GO.
You will get a pop up window to confirm your selection. If the window looks correct, select OK.
You will get the following panel. You will need to wait for the OK button to become available (to un-grey).
Provided both nodes now show as Active, your test is complete.