Here is a little test. Check your documentation: Do you know how to power down and power up the equipment in your computer room? If you had to power off your site in a hurry would you know how? If you wanted to script a shutdown, could you do it?
Here are some hints and tips that might help you with some of my favourite products:
The process to power up or down your DS8000 is documented in the Information Center here.
If you want to script powering off a DS8000 storage unit you can use the chsu -pwroff DSCLI command. This command will shut down and power off the specified DS8000 unit. Before powering off the unit, be careful to ensure that all I/O activity has been stopped. An example of the command is shown below. Your machine will have a different IP address, password and serial number to mine. Note the serial number always ends in zero because we send the command to the storage unit.
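As a sketch only (the HMC address, credentials and serial number below are made-up placeholders; verify the exact chsu syntax against your DSCLI version):

```shell
# Hypothetical example: power off a DS8000 storage unit via DSCLI.
# Replace the HMC address, credentials and serial with your own values.
# Make sure ALL host I/O to the unit has been stopped first.
dscli -hmc1 10.1.1.10 -user admin -passwd mypassword \
      chsu -pwroff IBM.2107-75ABCD0
```

Note the serial in the example ends in zero, since the command targets the storage unit itself.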
Last week I talked about the differences between the XIV Generation 2 and XIV Gen3 by just looking at the rack. This week we open the front door to see if we can spot any more differences...
First up you notice that it looks almost exactly the same..... but appearances can be deceiving.
So what actually is different? From the front there are three obvious visible differences, two of which are not that interesting....
The XIV Generation 2 has a storage grid that uses two 48 port Gigabit Ethernet network switches for interconnection between the modules (these are only visible from the back of the rack). However these switches get redundant power via an RPS-600 Redundant Power Supply (RPS) which sits at the front of the rack (directly above module 6). The XIV Gen3 on the other hand uses two 36 port InfiniBand switches that have redundant power supplies built in, so the Gen3 does not need the RPS and it is no longer there. But its spot has not remained empty....
The XIV Generation 2 has a special server called a Maintenance Module located at the rear of the rack. You may notice the USB modem plugged into it. The XIV Gen3 uses an IBM System x3250 M3 mounted at the front of the rack. This server is used for maintenance, upgrades and remote access (if necessary, via modem). You can spot it here directly below the nameplate where the RPS used to be:
If you look closely at the disks in a Gen3 you will notice they are marked as SAS drives, not SATA. This gives us a performance boost even though the rotation speed remains the same. If you want to see this close-up yourself, check out this Kaon 3D model of the XIV Gen3.
This got me wondering how SAS drives with the same rotational speed and seek time as SATA drives could perform better. The two main reasons are that SAS is full duplex and that SAS supports tagged command queuing. There is a great article regarding the differences here that references SPC testing that Seagate performed. A quote from the article:
SAS drives offer a significant improvement in performance over SATA drives in both throughput and IOPs primarily due to their full duplex, bi-directional I/O capabilities. Published Storage Performance Council (SPC) benchmark results demonstrate this feature with up to 64 percent improvement in the SPC-2 benchmark (based on multiple workload testing).
So is that it for differences between XIV Generation 2 and Gen3? Well visibly from the front... yes it is. The big changes are around the back and inside the modules, which I will cover in a future blog post.
In the meantime, check out my new Visio stencils for XIV. I have added three new stencils (which I am still working on). Check them out and let me know what you think. You will find them here. If and when I update them, you will get a notification so you can keep up to date.
The IBM Storage Management Console for VMware vCenter version 2.5.1 is now available for download and install. This version supports XIV, SVC and Storwize V7000 as per the versions on the following table (the big change being support for version 6.2):
If you want to see a video showing the capabilities of the new console, check out this link.
After installing the console, you will get this lovely new icon:
Start it up and select the option to add new storage; you now get three choices:
If you're using SVC or Storwize V7000 you need to specify an SSH private key. This key MUST be in OpenSSH format. This caused me a problem as I kept getting this message when trying to add my Storwize V7000 to the plug-in:
Unable to connect to 10.1.60.107. Please check your network connection, user name, and other credentials.
I could use the same IP address, userid and SSH private key to log on to the Storwize V7000 using PuTTY, so I knew none of these things were wrong.
I reread the Installation Instructions closely and realized my mistake. It clearly states:
Important: The private SSH key must be in the OpenSSH format.
If your key is not in the OpenSSH format, you can use a certified
OpenSSH conversion utility.
I pondered what conversion utility I could use when I realized I had the utility all the time: PuTTYgen. I opened PuTTYgen, imported my private key (the .ppk file) and exported my SSH private key using OpenSSH format. You don't need to do anything with the public key.
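If you prefer the command line, the same conversion can be done with the puttygen CLI (shipped as part of the putty-tools package on most Linux distributions). The filenames here are just examples:

```shell
# Convert a PuTTY-format private key (.ppk) to OpenSSH format.
# "storwize_key.ppk" is a hypothetical filename; use your own key file.
puttygen storwize_key.ppk -O private-openssh -o storwize_key.openssh
```

You then point the plug-in at the exported OpenSSH-format file.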
I was then able to add the Storwize V7000 by specifying the private SSH key exported using OpenSSH format.
Now I have both IBM XIV and Storwize V7000 in the vCenter plug-in and can get detailed information about and manipulate both. In this example I have highlighted the Storwize V7000, revealing it is on 18.104.22.168 firmware.
I was tempted to detail all the many things you can do with the plug-in, but you're better off watching the video via this link.
So are you using the plug-in? Have you upgraded to version 2.5.1 yet? Comments very welcome!
Hopefully, if you were in Melbourne last week, you made it to the IBM Pulse 2011 conference at the Crown Promenade. It was a great success: with 850 attendees, the facilities were packed, especially the main hall.
My highlights? Well apart from visiting the IBM developerWorks stand and getting a free IBM floppy disk T-Shirt...
... it was listening to customers. There were 14 customer case study presentations where attendees could hear real world experiences from real world customers. For the storage track we were lucky to have Angus Griffin from Edith Cowan University talking about how they use IBM solutions, including IBM SVC with VMware SRM, to build their Disaster Recovery solution. Angus is a great presenter who used a sort of Takahashi Method PowerPoint deck where each slide was just one sentence. Below is an example. Can you guess what he was talking about?
It was of course why clients sometimes do not have a comprehensive disaster recovery strategy.
I presented on Storage Virtualization and the Storwize V7000. You can check out my presentation on Slideshare. I have struggled for some time to match my presentation style to the sort of material that IBM produces. I am moving to a more pared-back approach. If you view this presentation on my Slideshare channel you will also get some speaker notes.
If you want a copy of the presentation and you're an IBMer, you can find it on Cattail. For everyone else, please send me an email or leave a comment.
The other client who presented in the storage track was Richard Whybrow from Hertz Australia. Richard's presentation on how Hertz use IBM solutions to manage their backup and encryption requirements was short and to the point. But the highlight was Richard's movies. I want to point you to two of them, which you can find on his YouTube channel. The first one is hilarious.... here is the SAL 9000 restoring 1.6 TB of data in seconds!
If you're looking for something slightly more serious, here is Richard's winning entry to the IBM Tivoli Software Products Rock competition. Richard is sitting at Southbank, close to the IBM Building here in Melbourne. There is also a great shot of Melbourne's Flinders Street Station at the end (as well as a tribute to the film Minority Report).
Rob Jackard from the ATS Group does a great job amalgamating IBM storage site updates so I am sharing them here with you. Here is my high level view:
AIX users: Review the service dates for your technology level.
DS3500 users: Upgrade your firmware to 7.70.45.00 or 7.77.19.00.
DS8000 users: Take note of the limitation on resizing a space efficient repository. I dealt with this recently at a client by writing a script to delete the FlashCopy targets, delete and recreate the repositories and then create the FlashCopy targets again.
SVC and Storwize V7000 users: Upgrade to 22.214.171.124 or 126.96.36.199. Be aware of the limitations on split cluster and Global Mirror intra-cluster mirroring.
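For what it's worth, the repository rebuild I scripted followed roughly this shape. The volume IDs, extent pool and sizes below are made-up placeholders, and you should verify the exact mkflash/rmflash/mksestg/rmsestg syntax against the DSCLI reference for your code level before trying anything like it:

```shell
# Hypothetical sketch: rebuild a space efficient repository at a new size.
# All IDs and sizes are invented examples.

# 1. Remove the FlashCopy relationships that use the repository
dscli rmflash 0100:0200

# 2. Delete and recreate the space efficient repository at the new size
dscli rmsestg P4
dscli mksestg -repcap 500 -vircap 2000 P4

# 3. Recreate the FlashCopy relationships
dscli mkflash -nocp 0100:0200
```

Remember this destroys the existing FlashCopy targets, so plan it around your copy schedule.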
(2011.06.28) Technical Bulletin: AIX 5.3 Support Lifecycle Notice: NOTE-1: After October, 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-11. NOTE-2: End of Support for AIX 5.3 has been announced as 04/30/2012. NOTE-3: IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 53-TL06, -TL07, -TL08, -TL09, -TL10. http://www14.software.ibm.com/webapp/set2/subscriptions/onvdq?mode=18&ID=2110&myns=pwraix53
(2011.06.28) Technical Bulletin: AIX 6.1 Support Lifecycle Notice: NOTE-1: After October, 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-04. NOTE-2: Sometime after May 1, 2012, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-05. NOTE-3: IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 61-TL00, -TL01, -TL02, -TL03. http://www14.software.ibm.com/webapp/set2/subscriptions/pqvcmjd?mode=18&ID=5488&myns=paix61
(2011.06.29) Space Efficient Flash Copy Repository size should not be changed for an existing repository. NOTE: The code fix, to fail the chsestg command if the size of the repository is changed, is available for Release 4.3 (Bundle 188.8.131.52) and Release 5.1.5 (Bundle 184.108.40.206). https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003793
(2011.07.11) Storwize V7000 and SAN Volume Controller Software Upgrades to V220.127.116.11 May Stall if Performance Monitoring Activities are Performed During the Upgrade Process. NOTE: This issue has been fixed by APAR IC77000 in the V18.104.22.168 PTF release. https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003846
(2011.06.10) Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2 TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume. NOTE: This issue is fixed by APAR IC76806 in the 22.214.171.124 and 126.96.36.199 PTF releases. https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003840
As a child I used to love the spot the difference cartoon in the Sunday paper. You usually had 10 differences to circle... and I could never find the last one. Look carefully at the two machines below. Can you spot the differences?
It's an XIV Generation 2 on the left and an XIV Gen3 on the right. For me it's the side panels that give it away (of course the Gen3 printed on the front panel helps).
The big change is the rack that the product uses:
The Generation 2 IBM XIV uses an APC AR3100 rack (also called a NetShelter). The Gen3 IBM XIV uses an IBM T42 rack.
So why the change?
Three good reasons:
Using the T42 lets us offer an optional Ruggedized Rack Feature, providing additional hardware that reinforces the rack and anchors it to the floor. This hardware is designed primarily for use in locations where earthquakes are a concern. As you may be aware there have been some major earthquakes around the world recently (with tragic results). Clearly our clients in earthquake prone areas need us to provide a model that can be hardened for use in earthquake zones.
Using the IBM T42 rack lets us offer an optional IBM Rear Door Heat Exchanger, which is an effective way to assist your Air Conditioning system in keeping your datacenter cool. It removes heat generated by the modules in the XIV before the heat enters the room. Inside the door of the heat exchanger are sealed tubes filled with circulating chilled water. Its unique design uses standard fittings and couplings and because there are no moving or electrical parts, helps increase reliability. It can be opened like any rear cover, so serviceability of an XIV Gen3 fitted with a heat exchanger is as easy as the standard air cooled version.
Using the T42 lets us offer a rack which matches our standard rack offering. It's a sturdier rack and travels far better over both short and long distances. To put it simply: it's a more substantial rack.
One nice feature that both products offer is feature code 0200 (weight reduction for shipping). When ordered, it tells the plant to ship the XIV in a weight reduced format. For Generation 2 this means the rack that IBM ships will weigh around 300 kg (unpacked from the shipping crate). The rest of the machine (the modules and the UPSs) is shipped in separate boxes. The XIV Gen3 will weigh more as less hardware is removed, although I am still confirming what that weight will be. The advantage is that you can use lower-rated goods lifts and move the XIV across floors that are not rated for the maximum weight. You just have to ensure that the planned location of the XIV can support the final weight. And the really nice thing? This feature is available at no extra cost.
(edited 27/7/11 to clarify feature code 0200 will be different for XIV Gen3).
Tiny toast or giant hand? It's not an optical illusion.
When IBM offered 2 TB drives on the XIV, I thought I was seeing an illusion: the overall power consumption had dropped (not risen). Guess what? With XIV Gen3, power consumption drops yet again.
Using worst case power consumption numbers I can see the following maximums:
180 drive XIV Generation 2 with 1 TB drives (79 TB usable): 8.4 kVA
180 drive XIV Generation 2 with 2 TB drives (161 TB usable): 7.1 kVA
180 drive XIV Gen3 with 2 TB drives (161 TB usable): 6.7 kVA
So with every update to the product, power consumption has kept dropping. Let's compare the first Generation 2 to the XIV Gen3:
Usable capacity up by 103%
Power consumption down by 20%
Heat output down by 20%
Noise level down by 33%
Performance up by up to 400%
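The capacity and power figures are easy to sanity check from the kVA numbers quoted earlier (79 TB to 161 TB usable, 8.4 kVA down to 6.7 kVA):

```shell
# Verify the capacity and power deltas from the figures above.
awk 'BEGIN {
  printf "usable capacity up %.1f%%\n", (161 - 79) / 79 * 100
  printf "power consumption down %.1f%%\n", (8.4 - 6.7) / 8.4 * 100
}'
```

Which works out to roughly 103.8% more usable capacity for about 20.2% less power.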
How about this as a measure: let's compare the Microsoft Exchange ESRP report for XIV Generation 2 found here with the newly released report for XIV Gen3 found here. While the ESRP program is not a benchmarking program, the results are truly impressive.
For more information on Power Consumption and the cost of running XIV, check out these white papers:
If you're an existing XIV user and you're interested in measuring your current power consumption, check out my tutorial here. If you want the spreadsheet shown in the video, drop me a comment (your email address will appear in my comments dashboard but will not be visible to anyone else).
IBM has been selling IBM branded Brocade switches since 2001 when we announced the 8-port 2109-S08 and 16-port 2109-S16. These were classic switches that ran at 1 Gbps. They had a front operator panel with a small keypad (a feature which, in the rush to fit in more SFPs, did not appear in future models). Since then IBM has gone on to sell many of Brocade's switches and directors.
Sometimes you need to convert a Brocade model name to an IBM model name (or the other way around). One way to determine with certainty which type of switch you are working on is to telnet or SSH to the switch and issue a switchshow command. You will get a switchType value. In this example, my switch is a switchType 27.2.
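Since switchshow prints quite a lot of output, you can filter for just the field you want (the address and login below are made-up examples):

```shell
# Query a Brocade switch for its switchType over SSH and keep only
# the line we care about. Address and userid are hypothetical.
ssh admin@10.1.60.20 switchshow | grep switchType
```

The number before the decimal point is what you look up in the decoder table; the digit after it is the sub-type.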
Or if you are using the Web GUI, you can also see the switch type on the opening screen. In this example the switch is a type 34.0.
Having scientifically determined the type of switch, we can now use my decoder ring to determine the IBM machine type, IBM model name and the Brocade model name. I have ordered the switches by Type number. There are three things to note:
Brocade have dropped the Silkworm branding, so I have dropped it too.
Each switch type has sub-types, for example 34.0 and 34.1. The difference is a sub-version number which is normally not published or documented.
IBM announced 16 Gbps SAN switches on August 16, 2011 so I updated the chart on that date.
If you use Data Center Fabric Manager (DCFM), it actually displays the Switch Type using Brocade model names. Here is an example report from the DCFM we are running in my lab. This level of information is very helpful.
If you have Brocade fibre channel switches in your SAN, you need to be aware of the method which Brocade use to manage firmware releases. All 4 and 8 Gbps Brocade SAN switches use a Linux based firmware which Brocade call Fabric Operating System or FOS. Updates to this firmware are released in families. This started with version 4, then version 5 and then version 6. Each family has had a series of updates. Version 5.0.x went to 5.1.x, 5.2.x and 5.3.x. Version 6.0.x went to 6.1.x, 6.2.x, 6.3.x and currently 6.4.x.
The good news is that you can non-disruptively update firmware on Brocade switches, so you can move to higher releases without an outage (note there may be exceptions to this; always read your release notes to be sure). However you need to be aware of a rule regarding the from and to versions. Since FOS 6.0.0 Brocade have had a one-release migration policy. This allows more reliable and robust migrations for customers. By having fewer major changes in internal databases, configurations, and subsystems, the system is able to perform the upgrade more efficiently, taking less time and ensuring a truly seamless and non-disruptive process for the fabric. The one-release migration policy also reduces the large number of upgrade/downgrade permutations that must be tested, allowing Brocade to spend more effort ensuring the supported migration paths are thoroughly and completely verified.
Disruptive upgrades are allowed, but only for a two-level migration (for instance from 6.2 to 6.4, skipping 6.3).
So why should you care?
Well, your upgrade philosophy may be: if it ain't broke, don't fix it. Or you may have the policy: we do fix on fail; apart from that, we don't update firmware. Much as I can understand the attraction of this, when you finally do perform an update, you may find yourself having to do many upgrades, or kangaroo hops as I call them.
Let's document some possible kangaroo hops from an old release to what is currently the newest release:
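Under the one-release policy, a hop sequence from the version 5 family might look like the sketch below. The intermediate levels shown are illustrative only; always pick the exact levels from the release notes for your hardware:

```shell
# Illustrative kangaroo hops from FOS 5.3.x up to 6.4.x under the
# one-release policy (each hop is a separate non-disruptive upgrade):
#
#   5.3.x -> 6.0.x -> 6.1.x -> 6.2.x -> 6.3.x -> 6.4.x
#
# At each hop, run firmwaredownload on the switch, then confirm the
# new active level before starting the next hop:
firmwaredownload    # prompts for file server, userid, path and protocol
firmwareshow        # confirm the active firmware level after the hop
```

Five separate upgrades, each with its own verification step, which is why the change window gets so long.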
As you can see from the steps above, you may have a very long change window if you are choosing to not perform updates on a regular basis. There are also lots of caveats and restrictions based on the hardware of the switch you are running. It is very important you consult the release notes that can be found at the following links:
If you're looking for a recommended version to install, the Version 6 release notes page above gives advice on this. Currently it says: IBM recommends that Open System customers that currently use FOS 6.1 or earlier limit migration to FOS 6.2.2b, 6.3.0d, 6.3.1a, or 6.4.0c or later only.
Please note that the release notes published on IBM's website are Brocade documents. While this is good (since it means they come straight from the manufacturer), you need to decode which Brocade hardware model is which IBM machine type. The version 6 URL above also contains a product cross-reference which lets you convert Brocade product names to IBM product names. I am also working on a post which will help you with this, so watch this space.
Edit July 19, 2011. The original post suggests 6.4.0a as a go-to level, this has since been removed. It has also been pointed out to me that some blade type switches may not be capable of hot-code-load (HCL). You need to check your vendor release notes to be certain.
Just a quick post as I am leaving Singapore to return to Melbourne, I thought I would share two more photos with you.
No trip to Singapore is complete without a visit to the Merlion, the mythical creature who acts as a mascot for Singapore. The fish body represents Singapore's origin as a fishing village when it was called Temasek, which means "sea town" in Javanese. The lion head represents Singapore's original name — Singapura — meaning "lion city" or "kota singa" (thanks to Wikipedia for the text).
Here is a night view across to the Marina Bay Sands, the integrated resort fronting Marina Bay in Singapore. Developed by Las Vegas Sands, it is billed as the world's most expensive standalone casino property at S$8 billion, including cost of the prime land. The remarkable building on the left hand side is the ArtScience Museum. The architecture is said be a form reminiscent of a lotus flower (again, thanks to Wikipedia for the information).
I would love to return to Singapore soon in the form of a tourist, it is an amazing city full of vibrant energy. With Singapore National Day coming up on August 9 and the Formula One in September, there will certainly be plenty for visitors to see and do.
If you want to see more of my photographs, feel free to visit my Flickr account.
I am in Singapore this week running a teach the teacher seminar on Storwize V7000. We are creating more instructors as demand for courses on the product continues to increase.
I am fortunate to be staying at the Marina Bay Sands Resort, which is one of the most mind blowing facilities I have ever seen. Check out this view of the Infinity Pool from the Skydeck up on the 57th floor.
Out my hotel window they are building a huge new facility known as Gardens by the Bay, but frankly I think it looks more like a spaceport!
According to Wikipedia these are Supertrees: tree-like structures that dominate the Gardens landscape with heights that range between 25 and 50 metres. They are vertical gardens that perform a multitude of functions, which include planting, shading and working as environmental engines for the gardens.
But doesn't this look like two huge crashed spaceships?
Actually they will be giant conservatories, again according to Wikipedia they are the Flower Dome and the Cloud Forest.
They certainly know how to think big in Singapore!
XIV Gen 3 modules are built on a new generation of Intel microprocessors based on the Nehalem micro-architecture. Nehalem is the most profound architecture change that Intel has introduced in the 21st century. Some of the key changes and their benefits are:
Integrated memory controller: The memory controller now sits on the same silicon die as the processors. It runs at the same clock-speed as the processors instead of at the lower speed of an external front-side bus. This dramatically improves memory performance and therefore overall system performance.
No need for buffered memory: Previously, buffered memory was required to improve the performance of the memory sub-system. Buffered memory is relatively expensive and energy hungry. With the faster Nehalem integrated memory controller, the system can deliver improved performance without needing buffered memory, saving cost as well as energy. XIV Gen 3 will be faster and cooler at the same time using unbuffered DDR3 RAM. And since the memory is cheaper, we can put more in.
Increased memory capacity: Nehalem supports more memory chips at higher speeds. In XIV Gen 3 this translates into a 50 to 200% increase in system cache, significantly lifting the performance headroom of an already stellar performer.
No more front-side bus: Memory, second CPU package and peripherals no longer have to share and wait on a single bus to communicate. The connections are now direct or switched, enabling increased parallelism and the ability to do more work simultaneously.
PCI Express Generation 2: The I/O sub-system doubles in speed with the introduction of PCI Gen-2. This enables faster network and I/O adapters for XIV Gen 3:
- 8 Gbps Fibre Channel host connections
- More iSCSI host connections (including at the entry configuration of 6 modules)
- Multi-channel, low latency InfiniBand as the inter-module connection
- A slot for solid state disk (SSD)
Better systems management instrumentation: The system supports increased monitors for sub-systems for more sophisticated self diagnostics and healing. Remote management capability has also been improved.
Furthermore, the new motherboards have additional expansion capacity (more processors, memory and I/O) that can be utilized to deliver future improvements in performance and increased software functionality.
XIV Gen 3 is not the first storage sub-system to adopt the Nehalem architecture. Some of our competitors (EMC and NetApp for example) have already done so with their dual-controller arrays. XIV Gen 3 takes the Nehalem architecture advantage forward, not twice, but six to fifteen times.
Many thanks to Patrick Lee for writing up this great summation.
Today IBM is announcing a new member of the XIV family, which we are calling XIV Gen3. I thought I would give a brief history of how we got here before I get too carried away with details.
What was Generation 1 of the XIV?
In 2002 an Israeli startup began work on a revolutionary new grid storage architecture. They devoted three years to developing this unique architecture that they called XIV. They delivered their first system to a customer in 2005. Their product was called Nextra (does it look familiar?).
What was Generation 2 of the XIV?
In December 2007, the IBM Corporation acquired XIV, renaming the product the IBM XIV Storage System. The first IBM version of the product was launched publicly on September 8, 2008. Unofficially within IBM we refer to this as Generation 2 of the XIV.
The differences between Gen1 and Gen2 were not architectural, they were mainly physical. We introduced new disks, new controllers, new interconnects, improved management and additional software functions.
As anyone who has read my blog knows, I have been working on the Generation 2 XIV since the day IBM began planning to release it as an IBM product. So it is very exciting to be able to share with you that we are now releasing Generation 3 of the IBM XIV Storage System.
What is Generation 3 of the XIV?
Generation 3 of the XIV is a new member of the XIV family that will be an alternative to the Generation 2 XIVs we currently offer. It does not change the fundamental architecture; that remains the same. What it does do is bring significant updates to almost every part of the XIV, including:
Introducing InfiniBand interconnections between the modules.
Upgrading the modules to add 2.4 GHz quad core Nehalem CPUs, new DDR3 RAM and PCI Gen 2 (using 8x slots that can operate at 40 Gbps).
Upgrading the host HBAs to operate at 8 Gbps.
Upgrading the SAS adapter.
Upgrading the disks to native SAS.
A new rack.
A new dedicated SSD slot (per module) for future SSD upgrades.
Enhancements to the GUI plus a native Mac OS version.
I will be blogging about each of these changes over the coming days and weeks as we move to general availability date, so watch this space. In the meantime, why not visit the official XIV page here and check out the ITG Report linked there.
I have received this question several times, so it's clearly something people are interested in.
The Storwize V7000 has two controllers known as node canisters. It's an active/active storage controller, in that both node canisters are processing I/O at any time and any volume can be happily accessed via either node canister.
The question then gets asked: what happens if a node canister fails, and can I test this? The answer to the failure question is that the second node canister will handle all the I/O on its own. Your host multipathing driver will switch to the remaining paths and life will go on. We know this works because a firmware upgrade takes one node canister offline at a time, so if you have already done a firmware update, then you have already tested node canister failover. But what if you want to test this discretely? There are four ways:
Walk up to the machine and physically pull out a node canister. This is a bit extreme and is NOT recommended.
Power off a node canister using the CLI (using the satask stopnode command). This will work for the purposes of testing node failure, but the only way to power on the node canister is to pull it out and reinsert it. This is again a bit extreme and is not recommended. This is also different to an SVC, since each SVC node has its own power on/off button.
Use the CLI to remove one node from the I/O group (using the svctask rmnode command). This works on an SVC because the nodes are physically separate. On a Storwize V7000 the nodes live in the same enclosure and a candidate node will immediately be added back to the cluster, so as a test this is not that helpful.
Place one node into service state and leave it there while you check all your hosts. This is my recommended method.
First up, this test assumes there is NOTHING else wrong with your Storwize V7000. We are not testing multiple failures here. You need to confirm that the Recommended Actions panel, as shown below, contains no items. If there are errors listed, fix them first.
Once we are certain our Storwize V7000 is clean and ready for test, we need to connect via the Service Assistant Web GUI. If you have not set up access to the service assistant, please read this blog post first.
So what's the process?
Firstly, log on to the service assistant on node 1 and place node 2 into service state. I chose node 2 because normally node 1 is the configuration node (the node that owns the cluster IP address). You need to confirm you're connected to node 1 (check at top right), select node 2 (from the Change Node menu), then choose Enter Service State from the drop down and hit GO.
You will get this message confirming you're placing node 2 into service state. If it looks correct, select OK.
The GUI will pause on this screen for a short period. Wait for the OK button to un-grey.
You will eventually get to this with Node 1 Active and Node 2 in Service.
Node 2 is now offline. Go and confirm that everything is working as desired on your hosts (half your paths will be offline but your hosts should still be able to access the Storwize V7000 via the other node canister).
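On a Linux host running device-mapper multipath, for example, you could compare the path state before and during the test (the exact device names and path counts will depend on your setup):

```shell
# List multipath devices and their path states. While node 2 is in
# service state, half the paths to each Storwize V7000 volume should
# show as failed/faulty, and the remainder should still be active.
multipath -ll
```

Other operating systems have their own equivalents (for example, datapath or pcmpath query on AIX with SDD-based drivers).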
When your host checking is complete, you can use the same drop down to Exit Service State on node 2 and select GO.
You will get a pop-up window to confirm your selection. If the window looks correct, select OK.
You will get the following panel. You will need to wait for the OK button to become available (to un-grey).
Provided both nodes now show as Active, your test is now complete.
Over at SearchStorage.com.AU they recently published an article entitled Six reasons to adopt storage virtualisation. You can find the article here. The six given reasons are:
Storage virtualisation reduces complexity
Storage virtualisation makes it easier to allocate storage
Better disaster recovery
Better tiered storage
Virtual storage improves server virtualisation
Virtual storage lets you take advantage of advanced virtualisation features
It's a well-written article and I agree with every point. But one could be forgiven for reading the article and thinking that either storage virtualisation is new, or that storage virtualisation is something you might consider AFTER doing server virtualisation. Neither is true.
IBM embraced storage virtualisation in June 2003 when we announced our SAN Volume Controller (the IBM SVC). I even found a CNET.com article from way back then. You can find it here (the image below is a screen capture of that CNET website).
IBM's SVC product has been enhanced repeatedly since 2003 with an enormous list of supported host servers and backend storage controllers. We have added new functions every year including Easy Tier, split cluster, VAAI, an enhanced GUI and a new form factor for the SVC code in the form of the Storwize V7000.
So let me give you a seventh reason for adopting storage virtualisation: A vendor who has shown genuine support for this technology. No vendor has embraced storage virtualisation with more enthusiasm than IBM. We have an industry-leading solution with phenomenal SPC benchmarks, an enormous number of case studies and an architecture that does not lock you in. Indeed it is an architecture that can grow as you grow and that can be upgraded without disruption.
So please consider storage virtualisation from IBM, using either the SVC or the Storwize V7000. If you're in Australia, we have demo centers dotted around the country. Many of our Business Partners can also demonstrate IBM storage virtualisation using their own Storwize V7000s. If you're in Melbourne, feel free to give me a call and schedule a time to drop into Southgate.
Bob Leah is one of our leading lights in the developerWorks team. His blog (found here) is a great resource for Web designers. He recently created a new set of templates to enable a mobile page for developerWorks blogs. You can read his article about the new template here.
This morning I boldly went and installed the new templates and so far I think it looks fantastic, not only on the iPhone, but also the iPad and on regular browsers. My only complaint is that I lost the banner image of my Golden Retriever (my loyal hound Suzie). Bob assures me she will reappear soon. In the meantime, I would love to hear feedback about the new template. This is what it looks like on my iPhone:
Tivoli Pulse is coming to the Crown Promenade in Melbourne on July 27 and 28, 2011.
How many chances do you get to listen to the following speakers in one place?
Nigel Phair, Director, Centre for Internet Safety, University of Canberra
Steve Van Aperen, Human Lie Detector and Director of SVA Training and Australian Polygraph Services
Laura Guio, Vice President, Storage Sales, STG Growth Markets, IBM
Joao Perez, Vice President of Worldwide Tivoli Software Sales, IBM USA
Jamie Thomas, Vice President Tivoli Strategy and Development, IBM USA
Glenn Wightwick, Director, IBM R&D Australia
There are 12 case studies, presented mainly by the customers themselves. There are over 70 sessions in eight streams, with expert keynote speakers, case studies and presentations. Pulse 2011 provides the tools to help advance your infrastructure goals.
Registration is free so what excuse do you have? Find out more here. You can enroll here. The agenda front page is here. The detailed agenda is here.
I will be presenting on day two, talking about Virtualization and Storwize V7000, so maybe I will see you there!
I have an admission: I am a bit of an Apple fanboy. Well, actually not a full-on Apple fanboy: I have an iPhone and an iPad but I don't have a MacBook (although if IBM start offering a cash payment instead of giving laptops to mobile employees, that might change). But not everything is perfect in the land of Apple. Let me give you an example, one that I routinely find people are not aware of (apologies if you learnt all of this months ago).
The picture below appears to show three identical Apple charger packs (with Australian pins). You may have a similar collection. But are they all identical? Sadly not.
Only those with very good eyes can spot the difference by reading the rather pale decal on the bottom section of each charger. The text is so small and faint, I struggled to take a decent picture, but here is my sad attempt for one of them (they are all different):
So how are my three power adapters different?
The first is marked as a 10 Watt USB Power Adapter (it came with an iPad). Its output is amusingly marked as 5.1 Volts DC at 2.1 Amps, which suggests 10.7 Watts.
The second one is marked as a 5 Watt USB Power Adapter (it came with my iPhone). Its output is marked as 5 Volts DC at 1 Amp, which is indeed 5 Watts.
The third is marked as an iPod USB Power Adapter. No stated wattage, but its output is marked as 5 Volts DC at 1 Amp, which again suggests 5 Watts. So perhaps my 5 Watt adapter and my iPod adapter are actually the same.
The big question that comes up: Are they interchangeable? The answer: Yes but with caveats.
If you have an iPad you should use the 10W adapter. If you use the 5W adapter it will still charge but at a much slower rate. Apple confirm this here where they state: iPad will also charge, although more slowly, when attached to an iPhone Power Adapter (by which they mean a 5 Watt adapter).
If you have an iPhone or an iPod can you use the 10W adapter? The answer is yes! It will recharge with no ill effects. Apple confirm this here, where they state: While designed for use with the iPad, you can use the iPad 10W USB Power Adapter to charge all iPhone and iPod models.
So I am putting my 5 Watt and iPod adapters in the cupboard and using the 10 Watt adapter exclusively. If you have an iPad and find it's recharging slowly, you may be using an older 5 Watt adapter (but you may need a magnifying glass to spot the difference!).
My suggestion to Apple? A few more cents worth of ink please, to make things more obvious.
To close, on my first Apple-focused blog entry, let me pose a question:
I read a great blog post recently on Written Impact that talked about how to create effective presentations. It's well worth reading and can be found here. They describe several different formats that will help you develop interesting presentations, ones that don't put your subjects to sleep.
Talking of presenting, I recently presented at the IBM Power and Storage Symposium in Manila. It was a great event and was very well attended. We even had cake to celebrate IBM's 100th birthday.
There are two IBM Symposiums coming up in Australia that I would love for you to attend:
The next IBM Power Systems Symposium will be held in Sydney running from August 16 to 19, 2011. We are currently finalizing the agenda on this one and while this symposium is dedicated mainly to IBM Power Systems... I will be attending and presenting on storage related topics. To check out the details and enroll, please head over to here.
An IBM Storage Symposium will be held in Melbourne running from November 15 to 17, 2011. The agenda is still being set, so if you have ideas about what you would like to see, please let me know. To check out the details and enroll, please head over to here. And yes! I will be attending and I will be presenting.
The Storwize V7000 and SVC release 6.1 introduced a new web GUI to assist with service issues, known as the Service Assistant. The Service Assistant is a browser-based GUI that is used to service your nodes. Much of what you traditionally did with the SVC front panel can now be done using the Service Assistant GUI. You can see a screen capture of the Service Assistant below:
While I would like to be optimistic and hope that you will never have to use the Service Assistant, you should always ensure your toolkit is equipped with every possible tool. I say this because one thing I have noted is that the majority of installs do not configure the Service Assistant IP addresses. This is particularly apparent as clients upgrade their SVC clusters to release 6.1.
By default on Storwize V7000, the Service Assistant is accessible on IP addresses https://192.168.70.121 for node 1 and https://192.168.70.122 for node 2 (don't try to point your browser at them right now, as your network routing won't work - you would need to set your laptop IP address to the same subnet and be on the same switch. Details to do that are here). For SVC there are no default IP addresses, although we traditionally asked the client to configure one service address per cluster. The best thing for you to do is approach your network admin and ask for two more IP addresses for each Storwize V7000 and/or SVC I/O group. Once you have these two extra IP addresses, record them somewhere and then set them using the normal GUI.
It's an easy five-step process as shown in the screen capture below. Go to the Configuration group and then choose Network (step 1). From there select Service IP addresses (step 2) and the relevant node canister (step 3). Choose port one or port two (step 4) and then set the IP address, mask and gateway (step 5).
You can also set them using CLI (replace the word panelname with the panel name of each node, which you can get using the svcinfo lsnode command).
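As a sketch of the CLI version (the address, gateway and mask below are placeholders, and you should confirm the exact satask chserviceip syntax against the documentation for your code level):

```
# List the nodes to find the panel name of each one
svcinfo lsnode -delim :

# Set the service IP address for one node, identified by its panel name
# (placeholder address, gateway and mask - use the addresses your
# network admin allocated)
satask chserviceip -serviceip 10.10.10.121 -gw 10.10.10.1 -mask 255.255.255.0 panelname
```

Repeat the satask command for the second node, using the second IP address.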
If you forget these IP addresses, you can reset them using the same CLI commands or using the Initialization tool as documented here.
Finally, having set the IP addresses, visit the Service Assistant by pointing your browser at each address. This is just to confirm you can access it. You log on with your superuser password. With the process complete, ensure the IP addresses are clearly documented and filed away. So now, if requested, you will be able to perform recovery tasks (in the unlikely chance they are needed). If for some reason your browser keeps bringing you to the normal GUI rather than the Service Assistant GUI, just add /service to the URL, e.g. browse to https://10.10.10.10/service rather than https://10.10.10.10.
So what should you do now?
If you're an SVC customer on SVC code version 5 or below, please get two IP addresses allocated for each SVC I/O group now, so you can set them the moment your upgrade to version 6 is complete.
If you're an existing Storwize V7000 client or an SVC client already on V6.1 or V6.2 code, then hopefully you have already set the service IP addresses. If not, please do so and test them.
I thought I would write a quick post about an issue that's not new, but is certainly worth being aware of....
One of the interesting tricks with the change to 8 Gbps Fibre Channel is that it required a change to the way the switch handles its idle time... the quiet time when no one is speaking and nothing is said. In these periods of quiet contemplation, a Fibre Channel switch will send idles. When the speed of the link increased from 4 Gbps to 8 Gbps, the bit pattern used in these idles proved not always suitable, so a different fill pattern was adopted, known as an ARB. All of this came to intrude on our lives when it became apparent that some 8 Gbps storage devices were having trouble connecting to IBM branded 8 Gbps capable Brocade switches because of this change. This led to two things:
IBM released several alerts regarding how to handle the connection of 8 Gbps capable devices to 8 Gbps capable fibre channel switches.
Brocade changed their firmware to better handle this situation.
An example of what was said?
"Starting with FOS levels v6.2.0, v6.2.0a & v6.2.0b, Brocade introduced arbff-arbff as the new default fillword setting. This caused problems with any connected 8Gb SVC ports and these levels are unsupported for use with SVC or Storwize V7000.
In 6.2.0c Brocade reintroduced idle-idle as the default fillword and they also added the ability to change the fillword setting from the default of idle-idle to arbff-arbff using the portcfgfillword command. For levels between 6.2.0c and 6.3.1 the setting for SVC and Storwize V7000 should remain at default mode 0.
From FOS v6.3.1a onwards Brocade added two new fillword modes with mode 3 being the new preferred mode which works with all 8Gb devices. This is the recommended setting for SVC and Storwize V7000"
So there are several tips that I will point you to, depending on your product of interest:
Brocade Release Notes
For most environments, Brocade recommends using Mode 3, as it provides more flexibility and compatibility with a wide range of devices. In the event that the default setting or Mode 3 does not work with a particular device, contact your switch vendor for further assistance. IBM publishes all the release notes for Brocade Fabric OS here.
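As a sketch of how that is done on the switch itself (the slot and port numbers are placeholders, and you should confirm the portcfgfillword syntax for your FOS level):

```
# Display the current configuration of a port (slot 2, port 15 as an example)
portcfgshow 2/15

# Set the fill word on that port to mode 3
portcfgfillword 2/15, 3
```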
The XIV Gen3 comes with 8 Gbps capable Fibre Channel connections. It does not support idle fill words, meaning that the portCfgFillWord value should not be set to 0.
When an IBM System z server attaches an 8 Gbps capable FICON Express-8 CHPID to a Brocade switch with 8 Gbps capable SFPs, you should upgrade your switches or directors to Fabric OS (FOS) 6.4.0c or 6.4.2a and set the fill word to 3 (ARBff).
LTO5 and TS1140
IBM have two tape drives that are capable of 8 Gbps, the LTO-5 drive and the TS1140. Setting the fill word to 3 can actually cause issues with these drives. To avoid issues do one of the following (you only have to do one of these, not all three):
Load the tape drive with firmware that has the access fairness algorithm fix for loop:
- LTO5 drives should be on BBN0 and beyond (you may need to contact IBM support to get this code).
- TS1140 drives should be on drive firmware 5CD or beyond.
Change the Fibre Channel topology to point-to-point (N port) (as opposed to L or NL). This is my preferred option.
Change the Fibre Channel speed to 4Gbps. This sounds slightly retrograde, but it is very rare for an individual drive to sustain a speed above 400 MBps (unless your data is very very compressible).
**** UPDATED 28 Feb 2012 - Added System z FICON and Tape info ****
I recently got a great email from an IBMer in the Netherlands by the name of Jack Tedjai. He sent me two screen shots, taken with the new performance monitor panel (that comes with the SVC and Storwize V7000 6.2 code). He wrote:
I am working on a project to migrate VMware/SRM/DS5100 to SVC Stretch Cluster and one of the goals is to prevent using ISL (4Gbps) and VMware Hypervisor/HBA load during the migration. For the migration we are using VMware Storage vMotion. To minimize the impact of the migration on production, we tested VAAI for Storage vMotion and template deployment and it worked perfectly.
So what's this all about? Well, one of the improvements provided with VAAI support is the ability to dramatically offload the I/O processing generated by performing a storage vMotion. Normally a storage vMotion requires an ESX server to issue lots of reads from the source datastore and lots of writes to the target datastore. So there is a lot of I/O flowing from ESX to the SVC, and then from the SVC to its backend disk. What you get is something that looks like the image below. In the top right graph we have traffic from SVC to ESX (host to volume traffic). In the bottom right graph we have traffic from the SVC to its backend disk controllers (DS5100 in this case). This is SVC to MDisk traffic.
When we add VAAI support to the SVC, we suddenly change the picture. Suddenly VMware does not need to do any of the heavy lifting. There is almost no I/O between VMware and the SVC (no host to SVC volume traffic) related to the vMotion. The SVC is still doing the work, but it is happening in the background without burning VMware CPU cycles or HBA ports (in that there is still SVC to MDisk traffic).
This difference translates to: Faster vMotion times, far less SAN I/O and far less VMware CPU being used on this process.
So do VMware support this? They sure do! Check this link here. It currently shows something like this (taken on June 23, 2011):
So what are your next steps?
Upgrade your Storwize V7000 or SVC to version 6.2 code. Download details are here.
Download and install the VAAI driver onto your ESX servers. You can get it from here. If you're already using the XIV VAAI driver, you need to upgrade it to version 1.2. There is an installation guide at the same link.
And the blog title? It means friendly greetings in Dutch. So to Jack (and to all of you), vriendelijke groeten and please keep sending me those screen captures.
If you're a user of XIV, or you're considering purchasing an XIV, then there is one tool that you will truly love. It's called XIVTop. The XIVTop application comes packaged with the XIV GUI and is one of the handiest add-ons I have ever seen. It lets you monitor your XIV in real time, seeing exactly how much IO or throughput is being achieved and at what response time (in milliseconds). You can immediately answer questions like:
Is poor application response time being caused by poor storage response time?
What application is currently generating so much traffic on the SAN?
What effect has performing file de-fragmentation had on performance?
Are the backups running and how much traffic are they generating?
What happens when I run multiple application batch jobs at the same time?
The ability to get this information in real time is what makes XIVTop so invaluable.
So in the tradition of always pushing my boundaries, I thought I would create a narrated video about XIVTop. What I discovered is just how terribly hard doing narrated videos is: You need to write a script... you need to stick to the script... you need to not fluff any words.... you need to speak slowly and clearly and not start talking in a strange accent. I had trouble with all of these, so I made take after take after take after take, until I was heartily sick of the process. I have now got a much greater respect for newsreaders and film actors. This narration stuff is hard!
So please check out my final take. It's still far from perfect, but all feedback is very welcome. The only other thing that is quite strange is YouTube's choice of videos to watch after mine. It's worth watching just to see the list. I think the term performance confuses the algorithm.
I joined IBM on June 26 1989, so this Sunday brings up my 22 year anniversary with the company. No small achievement, but I am still three years away from the mystical IBM Quarter Century Club. Of course for some: 22 years is nothing! I recently learned that Robert Neidig, who has been (and remains) a leading light in promoting IBM's Mainframe products, joined IBM on June 21, 1961. So this year brings up his 50th anniversary with the company!
For those with long memories, Bob has worked with the following IBM systems: 1401, 1410, S/360, S/370, 3031, 3032, 3033, 3081, 3083, 3084, 3090, ES9000, S/390, eServer zSeries, and System z. They have all been enhanced by Bob's contributions.
If you want to check out the history of some of these world-changing products, visit The IBM Mainframe Room. I particularly loved the Photo Album. There are some truly classic images of IBM products of old. If you're forward-looking, feel free to also visit the System z homepage.
So thanks Bob for your commitment and leadership on your half-centennial, truly a remarkable achievement!
I wrote a blog post recently about my favourite podcasts. One of those I listed was Background Briefing, a radio program broadcast by the Australian Broadcasting Corporation (Australia's ABC). A recent episode entitled Fatigue Factor really sparked my interest. It talked about the effects of fatigue on professions such as:
Air Traffic Controllers
It contained some alarming facts about the potential effects of fatigue and is well worth taking the time to listen to. However in my opinion there was one major omission: it did not mention workers in the IT industry.
For many years I worked as the Account Engineer for several of IBM's System z customers, mainly banks. Most weekends I skipped Saturday night as a sleep night. If I was lucky I might get to sleep from 10pm to 1am and would then head off to vast, noisy, dehydrating air conditioned computer rooms to perform various system changes. If I did my job well, had no hardware issues and the client confirmed everything was running as expected, I got to head home about 7am on Sunday. So that night I would have slept somewhere between zero and three hours. I would then spend the rest of the week recovering, before doing it all over again the following weekend at a different customer.
I mention all of this because fatigue was something I learnt to live with. Even when I moved to a support role, I still occasionally worked through the night on critical situations (something IBM calls Crit Sits). I also worked on a support roster which could involve 3am callouts to assist my fellow IBMers across the Asia Pacific region. So when I later moved to a Pre-Sales role, it certainly did wonders in helping me re-establish normal sleep patterns.
Listening to this podcast really brought home to me that the IT industry is just as guilty of failing to deal with fatigue as the other industries that the podcast discusses. Now if you're thinking this means it's an IBM problem, think again. Most weekends I was working alongside representatives from EMC, HDS, Storagetek, etc. Plus of course there were the clients themselves, many of whom were also missing a night's sleep to satisfy their change and business requirements.
One of the major issues raised in the podcast is that there is no accepted way to measure how fatigued an employee actually is. This is a major problem. There are established tests to confirm how affected someone is by alcohol, or by drugs. But we cannot easily confirm how badly fatigued a worker is; plus many people are unwilling or unable to admit that they are suffering from fatigue.
If we think about many of the major IT related outages that have occurred recently, I ponder what role fatigue played in each one. Even if it didn't cause the initial issue, did making your employees work around the clock to resolve an issue actually extend the outage time? For example, have a read of Amazon's explanation of its recent Service Disruption. Just picking on some of the lines in the report:
At 12:47 AM PDT on April 21st, a network change was performed... At 2:40 AM PDT on April 21st, the team deployed a change... By 5:30 AM PDT, error rates and latencies again increased .... At 11:30AM PDT, the team developed a way to prevent....
Was the person doing the change working out of their usual sleep pattern? Was the team working to resolve the issue working out of their normal sleep pattern? Did fatigue compound the outage? It's an interesting idea. Now it may well be that fatigue had NOTHING to do with this outage. It is pure speculation on my part. But I am certain that the root causes of many of the recent IT meltdowns and their extended after-effects (such as Sony's ongoing issues) MUST include the debilitating effects of fatigue.
Plus here is another rather disturbing fact. To quote from the podcast:
... if you're sleep deprived, you're more likely to crave chips over lettuce, and feel less like climbing the stairs. And that can become a vicious cycle, because many people who are overweight are even more prone to sleep disorders....
So please take the time to listen to the podcast. You will find it here and in places like iTunes.
If you're reading my blog, you're probably interested in IBM Storage hardware (since apart from Bow Ties, that's all I talk about). So I would hope you're already subscribed to IBM's notification service that you will find here. Rob Jackard from the ATS Group (an IBM Business Partner based in the USA) puts together a summary of these notifications which he sends to me on a regular basis. So I am bringing them to you here. Now hopefully none of these alerts are news to you... but please, have a read and if you have not done so already.... SUBSCRIBE!
DS3000 / DS4000 / DS5000:
(2011.06.09) IBM Retain Tip# H202771 – Expanding Dynamic Capacity Expansion (DCE) large arrays may fail due to out of memory conditions.
NOTE: The 7.xx firmware for the DS Storage Controller is affected. This is a permanent restriction. Possible workarounds are available.
(2011.06.10) Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2 TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume.
NOTE: This issue is fixed by APAR IC76806 in subsequent PTF releases.
(2011.05.27) Storwize V7000 Systems Running Affected Code Levels May Shut Down Unexpectedly During Normal Operation, Resulting in a Loss of Host Access and Potential Loss of Fast-Write Cache Data.
NOTE: If a single node shutdown event does occur when running affected code, this node will automatically recover and resume normal operation without requiring any manual intervention. IBM Development is continuing to work on a complete fix for this issue, to be released in a future PTF, however customers should upgrade to the latest PTF level to avoid an outage.
(2011.05.24) Potential problem on XIV Storage Systems running microcode versions 10.2.2 through 10.2.4a that can be caused by changing the system time via Network Time Protocol (NTP) or when changing the clock via XCLI.
I thought I would quickly check out two of the announced features of the 6.2 release: the new Performance Monitor panel and support for greater than 2 TiB MDisks. So on Sunday I got busy and upgraded my lab Storwize V7000 to version 6.2.
Remember that in nearly every aspect the firmware for the SVC and Storwize V7000 are functionally identical, so while I am showing you a Storwize V7000, it equally applies to an SVC.
Firstly I tried the performance monitor panel, and what better way to show you what I saw than on YouTube? This is my first YouTube video so please forgive me if it's not slick. I started the performance monitor and captured two minutes of performance data using Camtasia Recorder. Because it is fairly boring to stare at graphs slowly moving right to left, I then sped it up eight times, and this is the result:
The video is shot in HD, so if what you're seeing is grainy or hard to read, change the display to 720p or 1080p. Now if you want to see the performance monitor at its actual speed, here is the original normal speed video. Remember this is the same video as above, just slower. It can also be viewed in 720p.
The top right hand quadrant is volume throughput in MBps as well as current volume latency and current IOPS.
The bottom left hand quadrant is Interface throughput (FC, SAS and iSCSI).
The bottom right hand quadrant is MDisk throughput in MBps as well as current MDisk latency and current IOPS.
You will note that each metric has a large number (which is the current metric in real time) and a historical graph showing the previous five minutes. You can also change the display to show either node in the I/O group.
I found the monitor to be genuinely real time: the moment I changed something in the SAN (such as starting or stopping IOMeter or starting or stopping a Volume Mirror), I immediately saw a change.
Greater than 2 TB MDisk support
Next I logged onto my lab DS4800 and created two 3.3 TiB volumes to present to the Storwize V7000. I chose this size because I had exactly 6.6 TiB worth of available free space on the DS4800 and I wanted to demonstrate multiple large MDisks. On versions 6.1 and below, the reported size of the MDisks would have been 2 TiB (as I discussed here). Now that I am on release 6.2 with a supported backend controller, I can present larger MDisks. In the example below you can clearly see that the detected (and usable) size is 3.3 TiB per MDisk.
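If you prefer to confirm the detected size from the CLI rather than the GUI, a quick sketch looks like this (the MDisk name mdisk10 is a placeholder for one of your own MDisks):

```
# List all MDisks - the capacity column shows the detected size of each one
svcinfo lsmdisk -delim :

# Show the detailed view of a single MDisk, including its capacity
svcinfo lsmdisk mdisk10
```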
What controllers are supported for huge MDisks?
The supported controller list for large MDisks has been updated. The links for Storwize V7000 6.2 are here and for SVC here. If your backend controller is not on the list, then talk to your IBM Sales Representative about submitting a support request (known as an RPQ).
I recently created a post about the XIV Host Attachment Kit (amusingly called the HAK). IBM has released an update to the HAK, taking us from version 1.5 to version 1.6. The updated versions, along with release notes and installation instructions can be found at the following links:
What's changed, you ask? Great question! Checking the Release Notes for each Operating System (which can be found in the links above), I found some common improvements to the HAK for every OS:
The xiv_diag command now provides the HAK version number when used with the --version argument. This is handy to confirm what version of HAK you are currently running.
More information is collected with the xiv_diag command.
The xiv_devlist command can now display LUN sizes in different capacity units, by using the -u or --size-unit argument. I give an example below.
Usage: -u SIZE_UNIT, --size-unit=SIZE_UNIT
Valid SIZE_UNIT values: MB, GB, TB, MiB, GiB, TiB
The xiv_devlist output can be saved to a file in CSV or XML format, by adding the -f or --file argument. I give an example below.
There are also several other fixes which are mainly common between Operating Systems. Given that a major part of the HAK is its Python scripts, such as xiv_attach, xiv_devlist and xiv_diag, and given that the output and behavior of these scripts are very similar for each OS, this is not surprising.
I installed the new version 1.6 HAK onto my 64-bit Windows 2008 server and found another pleasant surprise: When I ran the xiv_attach command it detected that my Qlogic driver was downlevel. In this example it detected I am running a Qlogic QLE2462 on driver version 9.18.25 and suggested I should instead run driver version 9.19.25.
I then tried out the xiv_devlist command, displaying volume sizes in both decimal (GB) and binary (GiB). Note the syntax I used to get the GiB output: xiv_devlist -u GiB
Finally I offloaded the output of the xiv_devlist command to a CSV file. Again please note the syntax as you may find it useful:
xiv_devlist -t csv -f devlist.csv -u GiB
You could use -t xml instead if you prefer XML output to CSV. Clearly you could also change the file name devlist.csv to any filename you like.
You do not need to worry about which version of firmware your XIV is running. The release notes confirm HAK version 1.6 will work with XIV firmware 10.1.0, 10.2.0, 10.2.2, 10.2.4 and 10.2.4a, which should cover pretty well every machine in the world.
One final note: Under Known Limitations the release notes state that you should not map a LUN0 volume. This simply means leaving LUN0 disabled (which is the default). In the example below I start mapping volumes from LUN1 and have NOT clicked to enable mapping of volumes to LUN0. This should be the norm.
Any confusion or questions? You know where to find me.
Many months ago I set up my WordPress blog (this is not the one you're reading now, but the mirror of this blog I maintain over at WordPress). One of the configuration choices I had was to enable a mobile version of the site. This setting changes the user experience when using a mobile device. It was a very easy thing to set up:
The difference between the mobile version and the non-mobile version is fairly stunning, as can be seen below; both views are from an iPhone 3GS. The mobile version is on the left and the non-mobile version is on the right. Note that there is no difference in the selected URL:
In March, WordPress added a new feature from Onswipe to allow Apple iPad users to have a more iPad-friendly user experience. You can read the announcement on Onswipe's blog. Again for the content creator (me), the work to set this up was practically non-existent; in fact I don't even recall having to turn it on.
And the result? If you visit my blog on an iPad, the look and feel is amazing. It grabs the first image from each blog post to build a really nice front page. It means I will have to take more care with my opening images!
Now the obvious question is: What about Android? If I check the WordPress FAQ found here, it says that support is coming.
So if you like the look of the mobile version, feel free to switch to using my Wordpress blog. It contains all the same posts and is found here: