Anthony's Blog: Using System Storage - An Aussie Storage Blog
The XIV GUI impressed me the very first time I used it.
It's truly a user-friendly GUI for the 21st century.
Dare I say... it's a standard-setter for expectations of how a GUI should work.
The latest version (2.4.3) has just been released, and it continues the tradition of adding new features and widgets.
It also solves a problem I have had since I started testing Windows 7.
While the XIV GUI logs every command you execute to a text file (handy when creating a script), Windows 7 seems to place that file in a less-than-memorable spot
(for the record on my machine it is: C:\Users\Anthonyv\AppData\Local\XIV\GUI10\logs ok so I remembered.... but it seemed easier on XP)
So I was pleasantly surprised to find it now appears in the Tools menu.
What we get is a popup window that is dynamically updated as commands are executed.
In the example below, I used the GUI to create a volume, then snapshot it, then unlock it, then map it to a host.
Each command appears in the command log as a CLI command that I can run again in an xCLI window or an xCLI script.
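To give you a flavour, here is a hypothetical sketch of what that command log might contain for the sequence above (the volume, pool, host and snapshot names are invented for illustration, and the exact parameters will reflect whatever you clicked in the GUI):

vol_create vol=testvol size=17 pool=testpool
snapshot_create vol=testvol
vol_unlock vol=testvol.snapshot_00001
map_vol host=testhost vol=testvol.snapshot_00001 lun=1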
The current XIV GUI is version 2.4.3, which can be downloaded from here:
I received a question the other day about how the XIV interacts with a customer's building UPS.
So I thought I would share my answer with you.
The XIV has two separate line cords (there is an option to have four line cords but I am trying not to complicate this). This means the client's building power provides the XIV with two separate power sources.
As long as one of those two line cords provides input power, then the XIV will continue to operate normally.
If both power sources stop supplying input power then the client is not providing any electricity to the XIV (none at all).
This would suggest the client's computer room has suffered a severe building facility failure and that all of their other equipment has lost power too.
In this situation the XIV will continue to operate normally for 30 seconds on battery power, waiting in the hope that the client's power will come back on at least one of the two line cords.
If after 30 seconds the XIV has not detected the return of any input power, it must take action to ensure it does not flatten its internal UPS batteries. So it performs a graceful shutdown and powers itself off. Why wait only 30 seconds? The main reason is brown-out protection. If the client loses power for 20 seconds, then returns power, and does this repeatedly, they could progressively flatten the batteries to the point where the XIV may not be able to gracefully shut down. This is not desirable, so the 30-second timer is a good compromise.
Overall this design allows the client the greatest levels of availability and data protection.
In terms of site EPO, the XIV does not have an EPO switch or interface, because the XIV design has a strict requirement to perform a graceful shutdown prior to power off.
If the client wants to manually power the machine off, they could instead issue a CLI or GUI command to the machine to request shutdown.
Shutdown takes about 30 seconds to complete because the machine needs time to destage cache and metadata to disk prior to shutting down the Linux OS that runs on each module.
So how do you power the XIV back on?
Just press the On switch on each of the three UPS modules (preferably all at once).
So how do you manually power the XIV off?
Always use the xCLI or XIV GUI to shut the XIV down. There are power off buttons on each XIV UPS, but these should be covered by a plate and never used (if they are not covered up, please contact IBM to have this done). We don't use these buttons as they don't let the modules shut down gracefully.
If you launch an xCLI session from the XIV GUI, issue the following command and then respond to the prompts:
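The command in question is simply the shutdown command, entered on its own:

shutdown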
If you want to script the command then you need a script that looks like this:
xcli -m 192.168.30.91 -u admin -p adminadmin shutdown -y
If you choose to use emergency=yes then you may cause data loss, which is clearly not a good idea.
We add the -y parameter because the shutdown command is normally interactive. Clearly this assumes you have not changed the default password (which is also not a good idea).
The GUI of course also has a shutdown option (that will give you some warning prompts as well):
I know I have already blogged about the XIV GUI and how it just keeps getting better and better....
But I feel the need to share yet another improvement to the way the GUI presents information.
In this case it's an additional widget to show the health of each module in the XIV.
A fully configured XIV consists of 15 modules, each module containing components like fans, power supplies and disk drives.
In addition it is of course running firmware (the secret sauce of the XIV).
Another new feature is that now when you hover over the left hand side of each module, a small magnifying glass appears.
When it does, just left click on your mouse and the module slides out (virtually of course!).
If you then hover over a component within the module, health information is displayed. How nice is that?
Don't forget, you can download the GUI and just log on as p10demomode to check it out for yourself (without an XIV).
If you have an existing XIV, it's time to upgrade your XIV GUI to version 2.4.3 Build 11.
Rob Jackard from the ATS Group has kindly supplied me with this great summation of recent updates to various parts of the IBM Support sites.
It's worth just running your eyes down the list to see if there is anything that might apply to you and your environment.
For XIV users, please update to the latest GUI release, version 2.4.3a (see the link below).
The release notes (which you can also find at the link below) detail an important fix that could prevent an outage when mapping new volumes
to a clustered group of hosts that uses private mappings (which you might do if you have separate dump or boot drives for clustered hosts).
DS3000 / DS4000 / DS5000:
(2010.06.21) Best Practices for Running an Oracle Database on an IBM Midrange Storage Subsystem.
(2010.06.15) IBM DS3500 ESM and firmware bundle version 3.16.
(2010.06.15) IBM DS3000 HDD firmware package version 4.7.
(2010.06.14) IBM DS3500 Controller Firmware bundle version 7.70.16.01.
(2010.06.03) Retain Tip# H196875: Upgrade from 6.xx firmware to 7.60.28.00 may fail - IBM System Storage.
NOTE: Affected configurations- DS4200 (Type: 1814), DS4700 (Type: 1814), DS4800 (Type: 1815). Release scheduled 2nd QTR 2010.
DS6000 / DS8000:
(2010.06.30) A quick capacity (physical/effective) table for DS6000 and DS8000.
(2010.06.30) Accelerate with ATS: DS8700 Easy Tier Webinar.
(2010.06.23) Potential Data Error on FlashCopy targets using Space Efficient FlashCopy (SEFLC) volumes after a DS8000 LPAR Failover or Failback.
(2010.06.21) IBM DS8000 Storage Virtualization Overview Including Storage Pool Striping, Thin Provisioning, Easy Tier.
(2010.06.02) Support for VMware Site Recovery Manager.
(2010.05.27) Enabling multipath SAN booting with DS8000 and DMMP.
N series:
(2010.07.02) Hard Disk Drive (HDD) Firmware for N series Publication Matrix.
(2010.06.29) Firmware release NA01 available for the Seagate SAS disk drive identifiers: X286_S15K6146A15, X287_S15K6288A15, and X289_S15K6420A15.
(2010.06.29) Firmware release NA01 available for the Seagate FC disk drive identifiers: X278_S15K6146F15, X279_S15K6288F15, and X291_S15K6420F15.
(2010.06.07) IBM System Storage N series FRU (Field Replaceable Unit) lists.
(2010.06.01) NEWS: Recommended Release for IBM System Storage N series Data ONTAP.
(2010.06.01) Data ONTAP 7.3.3 recommended for IBM System Storage N series.
SAN:
(2010.06.14) IBM SAN b-type Firmware Version 6.x Release Notes.
(2010.06.01) Data Center Fabric Migration Guide.
SVC:
(2010.06.16) IBM System Storage SVC Code V184.108.40.206.
(2010.06.16) 2145-CF8 Nodes May Stall on Error 231 During Upgrade to V220.127.116.11.
(2010.06.16) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.06.02) Support for VMware Site Recovery Manager.
(2010.06.02) SVC V5.1.0.x Cluster Nodes May Repeatedly Reboot When Performing Multiple Image Mode Migration Commands on the Same VDisk.
NOTE: This problem will be resolved in a future SVC release.
(2010.05.14) IBM System Storage SVC Code V18.104.22.168.
SSPC / TPC / TPC-R:
(2010.06.24) Accelerate with ATS: Working with TPC Disk- Midrange Edition.
(2010.06.24) Accelerate with ATS: TPC Disk Midrange Edition- Installation and Tailoring.
(2010.06.22) (IBM Internal/BP) Managing Virtualized Storage Environments with IBM Tivoli Storage Productivity Center.
(2010.06.21) TPC 4.1.x – Supported Storage Product List.
(2010.06.09) Latest Downloads for Tivoli Storage Productivity Center.
(2010.06.06) Ten things for the new TPC-SE Administrator to do to make TPC 4.1.1 more valuable.
(2010.06.04) Q2, 2010- IBM Tivoli TotalStorage Productivity Center Suite Customer Support Technical Information Update.
(2010.05.12) Tivoli Storage Productivity Center v4.1.x fix pack history technote (FAQ).
NOTE: Latest includes TPC 22.214.171.124 and TPC-R 126.96.36.199.
XIV:
(2010.07.01) IBM XIV Host Attachment Kit for AIX version 1.5.2.
(2010.07.01) IBM XIV Remote Support Proxy version 1.0.0.
(2010.06.23) IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3a for all platforms.
(2010.06.23) IBM XIV XCLI (only) for Linux/AIX/Solaris/HPUX, version 2.4.3.
(2010.06.21) ATS XIV- Asynchronous Mirror Webinar.
(2010.06.10) IBM XIV Storage System Application Programming Interface Reference.
AIX:
(2010.07.04) Oracle Architecture and Tuning on AIX v2.0.
(2010.06.30) Basic Monitoring of I/O on AIX.
(2010.06.11) AIX SDDPCM System Crash During a Dynamic Hardware Replacement of Controller A or B on a DS4K/DS5K Storage Subsystem
(2010.06.07) All hdisks and vpath devices must be removed from host system before upgrading to SDD host attachment script 188.8.131.52 and above. All MPIO hdisks must be removed from host system before upgrading to SDDPCM host attachment script 184.108.40.206.
(2010.06.04) SDDPCM open path failures and VIOS VTD configuration failures.
(2010.06.03) SDD User’s Guide correction for pcmquerypr tool.
I spent the first week of my recent vacation in Sydney (I live in Melbourne).
For someone who has spent many weeks in Sydney on business, this was the first time I actually went there as a real tourist.
First up, it was a great break and my family and I can heartily recommend Sydney to anyone (from any part of the world) who is looking for a "holiday with the lot".
As part of our planning, we clearly had a budget to work to as well as many ideas about things to do.
Of course one of the big considerations was where to stay. I did a great deal of Googling around but relied on TripAdvisor heavily to help make our final decision.
I really love TripAdvisor for two reasons:
So what has that got to do with IT? Well, those who follow storage blogs will have seen a spike recently in discussions on two subjects:
I hope it's obvious that I work for IBM, so I accept that you may choose to view anything I say as potentially coloured by my relationship with my employer.
It has left me pondering where clients should go to get a good handle on what really matters most to them.
To choose the hotel that best matched my holiday requirements, I used TripAdvisor.
But what can clients do?
I personally read many vendor and 'independent' blogs to try and ensure my 'world view' is as realistic and informed as possible.
There are some great blogs out there, but I do not know many of these people or their organizations personally. So I always read them through a haze of mild cynicism.
So is there a TripAdvisor for Storage IT? A place for genuine end-users to share their experience with specific solutions?
I have to say that XIV is one product that is crying out for such a beast. My experience is that client satisfaction levels with their XIVs are remarkably high, but is that always reflected on the Web?
So far the best place I have seen for shared experiences from 'real' people is the XIV group on LinkedIn.
It would be great to see more actual end users head over there and contribute (so please do!).
Finally I should also point out that if there is a business relationship between IBM (my employer) and TripAdvisor or LinkedIn, I am not aware of it.
My opinions on these organizations are totally my own.
And the hotel we stayed in? The Quay Grand on Circular Quay. The view from our balcony was priceless. Check it out:
I recently listened to a great podcast on VAAI from Greg Knieriemen over at the Storage Monkeys website. You can find the podcast here (the whole site is well worth a visit).
He was talking with Marc Farley (his co-host, from 3PAR), Chad Sakac (from EMC) and Chris Evans (the 'Storage Architect'). The topic was VMware's newly announced VAAI.
To quote from the podcast, VAAI is a set of APIs focused on the VMware kernel to off-load various functions onto the storage.
To get the newly announced VAAI functions you need to upgrade to the newly announced vSphere 4.1, and you will need to upgrade your storage hardware firmware to a version that supports it (when such a version comes out).
Some of the major new functions are:
Hardware accelerated locking (to avoid the need for ESX to use SCSI reserves when doing metadata updates)
Hardware accelerated full copy (to help VMware clone data without having to do lots of reads and writes)
Hardware accelerated zero (to avoid the need to send vast numbers of 'empty' SCSI write I/Os to zero out blocks)
Given who was on the call, the conversation focused mainly on what EMC, 3PAR and, to some extent, NetApp are doing in regards to this development.
Apart from some (good natured?) digging at HP, no other vendor was really mentioned.
One thing that was mentioned was that some storage hardware architectures will lend themselves far better to VAAI than others.
In particular Chad mentioned that he would expect full copy and hardware accelerated zero to work better on V-Max than CLARiiON, due to hardware architecture differences that also benefit 3PAR.
I found that a really interesting observation.
What wasn't mentioned on the podcast was XIV.
So to be clear, the architecture of XIV lends itself very well to the changes required to support VAAI.
To give an example of how we have done this with other vendors, XIV firmware 10.2.0a brought in support for Symantec Storage Foundation Thin Reclamation.
XIV support for Symantec's Storage Foundation and Thin Reclamation API means that when a user deletes data on the thin-provisioning-aware Veritas File System (VxFS), XIV immediately frees and reclaims the unused blocks, rather than leaving them holding 'garbage' data that wastes space.
So have no doubts, XIV's architecture is very 'friendly' to the sorts of things VMware is trying to achieve with VAAI. To underscore this, Chad also said that a VMware goal was that VMware admins should need "no requisite knowledge of the underlying infrastructure for any task". The goal is to use policies instead. Given this goal, XIV is also a perfect match. With XIV there is no need to think about RAID types, RAID sizes, disk types, disk sizes, LUN allegiances and trespassing, controller workload balancing, or hot-spot detection, prevention or correction. All of these concerns simply don't exist.
The good news is that XIV is working with the VMware Reference Architecture Lab and the statement of direction is that we will announce VAAI support for XIV later this year. XIV continues to be an excellent choice for VMware environments and when VAAI support is added to XIV, this will only improve.
Finally, Chad made a great quote on the podcast. He said: "Never trust any vendor when they talk about what other vendors are doing."
I think this is a really great statement and one that everyone should take to heart.
So something I truly hate to see when visiting computer rooms is fibre cables hanging in the breeze with no dust covers, their precious glass connectors exposed to the world.
Even worse are fibre patch panels and HBAs without dust covers.
When new equipment arrives, every HBA, every patch panel port and every fibre optic cable will have a dust cover.
So what to do with these little guys once you remove them?
When you unplug a cable later you need to immediately re-install those precious covers both onto the cable and into the HBA or patch panel port, to protect the fibre optics from contamination.
I recommend storing dust covers in sealed plastic bags, preferably kept in the relevant rack so they are close to hand.
The picture below (taken from the rear of an XIV) is cute in that it shows a clever re-use of the XIV power cable covers,
but the dust covers are now exposed to contamination from the open air.
Since we are on the subject, take note of the colour coded power cables in the XIV.
It's another example of clever design to make power redundancy visually obvious.
The ends of the power cords are red, yellow and green to indicate which of the three UPSs these cables come from.
The unused cords at the top of the image are free because this machine is not fully populated with modules.
So I had the pleasure last night of observing a capacity upgrade on a client's XIV (so yes this is all real world).
The client in question had an 11 module XIV with 1 TB drives. This meant they had 54,649 GB of useable capacity (approximately 54 TB).
The client had ordered one new XIV module as an incremental capacity upgrade.
All hardware upgrades are performed by IBM, so the upgrade work was all done by an IBM System Service Representative (SSR).
Task one for the IBM SSR was to remove the blanking plate at the front of the machine and slide the new module into place.
The module was then secured into place with its two captive screws (the only time a tool was needed).
The next task was delegated to me... attaching the sticker (decal) that shows the relevant module number.
In our case the new module was module number 12.
Here you can see the new module with all the available decals (I think I did a good job).
Note the cables have not been plugged in yet.
Because the XIV is pre-cabled, all that the IBM Service Representative needed to do was plug the ethernet and power cables into the new module.
You can see all the cables plugged in and the lights are now on:
Once this was done, the module booted up and became available in the XIV GUI.
The IBM Service Representative needed to issue two commands to complete the upgrade (literally two mouse clicks in the GUI).
The first command, called an equip, introduces the module to the XIV and places it into the Ready state, as shown below:
The second command issued by the IBM SSR starts a process known as redistribution.
At bottom right the message changed from 'Full redundancy' to 'Redistributing'.
What does this mean?
It means the XIV is automatically spreading existing data across the new disks, so that every disk holds an even share of the data.
The machine will then be in a state of workload balance without any user intervention or host interruption.
This process runs at low priority, which meant the predicted end time kept jumping around as the host I/O workload rose and fell.
To monitor the process, we used the XCLI command, monitor_redist as shown below:
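If you prefer to run it from a workstation rather than from a GUI-launched session, the invocation looks like this (reusing the illustrative address and credentials from the shutdown example earlier):

xcli -m 192.168.30.91 -u admin -p adminadmin monitor_redist

The command reports the progress of the redistribution, including an estimated time to completion (the exact columns will depend on your XCLI version).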
The process actually ran all night, moving 8 TB of data around the machine to achieve the ideal data layout.
In the morning the redistribution had finished.
The machine now reported 'Full redundancy' and the useable capacity had risen from 54,649 GB to 61,744 GB
(the increase varies according to which module is being added).
Two tasks remained for the customer to complete:
1) Increase the size of the relevant storage pool(s), which takes a couple of mouse clicks, perhaps 20 seconds of work.
2) Create new volumes in those pools, which again takes just seconds to perform (a scripted equivalent is sketched below).
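For CLI fans, here is a sketch of how those two steps could be scripted with the XCLI pool_resize and vol_create commands (the pool name, volume name and sizes are invented for illustration, so check them against your own configuration):

pool_resize pool=PROD_Pool size=20000
vol_create vol=PROD-3052 size=206 pool=PROD_Pool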
The XIV has two concepts when it comes to space:
Hard limit --> How much useable capacity you ACTUALLY have
Soft limit --> How much space you can allocate to volumes.
The ability to set the Soft Limit above the Hard Limit means you can over-allocate the hard capacity (if you so choose!). For example, with a hard limit of 61,744 GB you could set a higher soft limit and define, say, 80,000 GB of volumes, on the bet that your hosts will never actually write that much data.
In the screen captures, the Hard and Soft Limits went up because the amount of useable capacity went up.
So to sum up..... to increase the capacity of the XIV while maintaining perfect load balance, the IBM Service representative performed about
2-3 minutes of physical work and then used two mouse clicks. Redistribution began and ran as a background task.
In the morning the client then increased the size of a storage pool (two mouse clicks) and the space was now ready for new volumes.
Easy as..... XIV.
It's been an interesting week in IT retractions.
Microsoft seriously went off the rails with their Meter Maid Booth Babes on the Gold Coast.
Check out the story here or here.
I mention this story not because I want to embarrass Microsoft (who I don't think quite realised what they had signed up for).
To their credit they quickly apologised and moved to correct their mistake.
Instead I mention this because several Microsoft people were more than willing to (quite rightly) publicly express their opinions on the subject.
I thought this was fantastic.
But with great power comes great responsibility.
As an IBMer I have never been told what I can or cannot blog.
However I do of course follow IBM Business Conduct Guidelines as well as IBM Social Networking Guidelines.
So I have to say that I viewed with dismay HDS blogger Pete Gerr's extraordinary attack on Moshe Yanai and the IBM XIV.
He has since rather gracelessly withdrawn the blog entry, but his follow-on comments need some response.
The XIV has been (and continues to be) a fantastic product for IBM.
Not only is it a great sales success, it has also allowed us to talk to clients who would not normally purchase IBM storage.
Far from damaging IBM's existing product line, it has resulted in those lines growing stronger (just wait and see).
We have a new focus on usability and simplicity, on making the experience of using and managing storage easier and smarter.
To some part, XIV has brought that focus. I personally think we needed it and that we are stronger for it.
As the year comes to a close you will see the benefits of this reinvigoration with some truly fantastic storage product announcements (across the board).
So while hopefully Pete can take some lessons from his very creditable and measured fellow blogger Hu Yoshida,
I will patiently wait for Barry Burke to post that he was wrong about DS8000.
And I will keep trying to get it right the first time.
I have been asked this question a few times now, so it's worth a blog entry.
Clients love being able to easily view XIV performance statistics.
There is a simple panel that lets you display IOPS, throughput and response times for each host or volume or for the entire machine.
When viewing XIV performance statistics using the built-in GUI panels, write I/Os are broken into two types: write hits and write misses.
The question that comes up is... what is the difference? And should I be worried about misses?
The use of the term miss can have negative connotations. To explain why:
So what about a write miss? Does it mean that the write I/O 'missed' the cache?
The answer is.... no!
To explain the difference:
A write hit is the situation where a host write generates fewer back-end disk operations. This is because:
So it's that time of the month again. Rob Jackard from the ATS Group does a fantastic job summarizing changes to the IBM Storage Support site
and you get all the benefit of his hard work (via me!).
So cast your eyes down the list and look for issues that may affect you....
AIX:
(2010.08.21) AIX Support Lifecycle Notice- AIX 5.3 TL9 & TL10.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-09 (applies to all Service Packs within TL9). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-10 (applies to all Service Packs within TL10).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 5300-06, AIX 5300-07 or AIX 5300-08.
(2010.08.21) AIX Support Lifecycle Notice- AIX 6.1 TL2 & TL3.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-02 (applies to all Service Packs within TL2). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-03 (applies to all Service Packs within TL3).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 6100-00 or AIX 6100-01.
(2010.07.29) Disable the TCP/IP port for SDDSRV/PCMSRV if the port is enabled.
(2010.07.23) SDDSRV respawn incorrectly causes system log filesystem overflow with AIX SDD version 220.127.116.11.
NOTE- Contact IBM Support Center to obtain the required temporary fix 18.104.22.168, which is now available.
(2010.07.23) AIX SDD Version 22.214.171.124 cannot be deinstalled due to sddsrv respawning incorrectly.
NOTE: Contact IBM Support Center to obtain the required temporary fix 126.96.36.199, which is now available.
(2010.07.19) IBM TechDoc- Diagnosing Oracle DB performance on AIX using IBM NMON and Oracle Statspack Reports.
(2010.07.19) IBM TechDoc- Maintaining two switch fabrics on NPIV migrations.
(2010.07.07) AIX 5.3 HIPER Alert- devices.common.IBM.mpio.rte 188.8.131.52.
NOTE: Users of MPIO storage running the 5300-12 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77907.
(2010.07.07) AIX 6.1 HIPER Alert- devices.common.IBM.mpio.rte 184.108.40.206.
NOTE: Users of MPIO storage running the 6100-02 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77908.
DS3000 / DS4000 / DS5000:
(2010.08.23) Updated ESM and HDSS Firmware v1.69 package.
(2010.08.20) Updated Disk Controller Firmware v7.70.23.00 code package.
NOTE: Code for DS3950, DS5020, DS5100, DS5300 subsystems.
(2010.08.17) RETAIN Tip# H197049- Issues on full synchronization of RVM LUNS > 2 TB.
(2010.08.17) SAS connectivity to IBM System Storage DS3200 is not supported- IBM BladeCenter JS23, JS43.
(2010.08.16) RETAIN Tip# H197402- Multi-node server ports require same host group for failover- IBM Disk Systems.
(2010.08.09) Updated Disk Controller Firmware v7.60.40.00 code package.
NOTE: Code for DS3950, DS4200 Express, DS4700 Express, DS4800, DS5020, DS5100, DS5300 subsystems.
(2010.07.23) IBM TechDoc- Power Cord Technical Guide for DS5000 Systems.
(2010.07.20) RETAIN Tip# H197172- UEFI systems with HBA and >2 TB LUN may have data errors.
(2010.07.07) RETAIN Tip# H196538: Controller reboots if multiple CASDs attempted.
DS6000 / DS8000:
(2010.08.24) Potential Data Error using Fast Reverse Restore following Establish FlashCopy without Change Recording.
(2010.08.21) DS8000 Code Bundle Information.
(2010.08.20) DS8700 Code Bundle Information.
(2010.08.04) IBM TechDoc- Effective Capacity for IBM DS8700 R5.1.
(2010.07.29) IBM Whitepaper- IBM DS8000 Metro Mirror DR within a Remote Cluster.
(2010.07.28) IBM TechDoc- DS6000 and DS8000 Data Replication.
(2010.07.15) IBM Whitepaper- IBM Handbook using DS8000 Data Replication for Data Migration.
(2010.07.14) DS6000 Microcode Release 220.127.116.11.
N series:
(2010.08.26) Important information for N series support.
(2010.08.18) IBM System Storage N series FRU lists.
(2010.08.17) Data ONTAP 7.3.4 Filer Publication Matrix.
(2010.08.17) Data ONTAP 7.3.4 Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Filer Publication Matrix.
(2010.07.20) Data Fabric Manager (DFM) 4.0 Publications Matrix.
(2010.07.17) RLM Update to Firmware Version 4.0 Fails with “Error Flashing linux“.
(2010.07.12) IBM System Storage N series FRU (Field Replaceable Unit) lists.
SAN:
(2010.08.24) IBM SAN b-type Firmware Version 6.x Release Notes.
(2010.08.20) Cisco MDS Supervisor 1-to-2 Upgrade Process.
(2010.07.15) Cisco MDS9000 Field Notice: FN-63132. Potential DIMM Memory Issue in a Small Number of DS-X9530-SF2-K9 Supervisor Cards Manufactured between September 2007 and February 2008.
SVC:
(2010.08.17) NPIV clients of SDDPCM hosts may experience permanent application errors during SVC concurrent code upgrade or node reset with certain APARs and SDDPCM versions. The risk, although rare, exists in any AIX SDDPCM host or client.
NOTE: The changes made for VIOS client hangs in Technote SSG1S1003579 require additional AIX driver and SDDPCM code updates for a specific SVC error condition.
(2010.08.11) SVC V4.3.x and V5.1.x Cluster Nodes May Repeatedly Reboot and CLI/GUI Access Loss May Occur When Shrinking Space Efficient VDisks.
(2010.08.05) SVC Console (GUI) Requirements for using IPv6.
(2010.08.05) Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates.
(2010.08.05) Offline or Degraded Disks May Result in Loss of I/O Access During Code Upgrade.
(2010.08.05) IBM System Storage SAN Volume Controller V5.1.0- Software Installation and Configuration Guide (English Version).
(2010.08.05) IBM System Storage SVC Code V18.104.22.168.
(2010.08.05) IBM System Storage SVC Console (SVCC) V22.214.171.1246.
(2010.08.02) IBM System Storage SVC Code V126.96.36.199.
(2010.08.02) IBM System Storage SVC Console (SVCC) V188.8.131.523.
(2010.08.02) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.07.23) SNMP MIB file for SVC V5.1.0.
(2010.07.21) 2145-CF8 Nodes May Repeatedly Loop Between Boot Codes 100 and 137 When Upgrading to SVC V184.108.40.206 or Later.
NOTE: This issue is resolved in SVC v220.127.116.11.
(2010.07.17) Changes in handling of SSH keys in SVC V5.1.
(2010.07.16) Incorrect 2145-8G4 Node Hardware Shutdown Temperature Setting in V18.104.22.168 – V22.214.171.124.
NOTE: This issue is resolved by APAR IC60083 in SVC V126.96.36.199.
(2010.07.16) Incorrect 2145-8A4 Node Hardware Shutdown Temperature Setting in V188.8.131.52 – V184.108.40.206.
NOTE: This issue is resolved by APAR IC68234 in the SVC V220.127.116.11 release.
(2009.11.19) 20091015 Drive Microcode Package for Solid State Drive.
SSPC / TPC / TPC-R:
(2010.08.30) Open HyperSwap status may report incorrectly via the Tivoli Productivity Center for Replication GUI.
(2010.08.25) IBM TechDoc- Basic Automation of TPC Performance Graphs.
(2010.08.04) TPC 4.1.x – Platform Support: Agents, Servers and GUI.
(2010.08.04) Q3, 2010- IBM Tivoli TotalStorage Productivity Center Suite Customer Support Technical Information Update.
XIV:
(2010.08.25) IBM TechDoc- Utilizing IBM XIV Storage System snapshot technology in SAP environments.
(2010.08.24) How to Avoid Potential Problems During a Data Migration to an XIV Storage System.
(2010.08.02) IBM XIV Remote Support Proxy version 1.1.0.
(2010.07.19) XIV Volume Sizing Spreadsheet Tool.
(2010.07.08) Potential to inadvertently overwrite volumes using IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3.
NOTE: This issue is resolved with release 2.4.3a.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Technical Solutions Version 2.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Replication and Migration Services Version 1.
The XIV GUI is all about simplicity. It's about taking tasks which on other products are difficult or time consuming, and either eliminating them or making them as simple as possible.
But for those who like to issue commands via a command line interface (a CLI), the XIV also has a very rich CLI called XCLI.
If you're familiar with the XCLI, you're hopefully aware that list commands can produce much more detailed output if the -x option is used (-x requests XML output).
So here is something you can try out.
If your XIV is on 10.2.1 firmware you can use the module_list -x command to display how much server memory each XIV module has.
If your XIV has 2 TB disks, you should find that you have 16 GB of server memory per module.
This means a 15 module machine has a whopping 240 GB of server RAM.
To be clear, I am not referring to this as 'cache' because a small portion (around 2.5 GB) of the RAM in each module is used by each module's internal Linux operating system.
This means that a 15 module XIV with 2 TB drives and 16 GB of server memory per module has over 200 GB of cache (15 x 13.5 GB is roughly 202 GB).
As former Australian Prime Minister Paul Keating once said: "it's a beautiful set of numbers".
<XCLIRETURN STATUS="SUCCESS" COMMAND_LINE="module_list -x">
<sdr_version value="SDR Package 46"/>
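If you just want the memory numbers without wading through raw XML, a quick and dirty filter does the job (a sketch: I am assuming the relevant field name contains the string 'mem', so adjust the grep to suit the actual output of your firmware level):

xcli -m 192.168.30.91 -u admin -p adminadmin module_list -x | grep -i mem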
I have no idea what this website is all about, but you have to love what they have done with an XIV.
My favorite is the U2 model. With 2 TB drives it can hold around 161 million minutes of music!
Plug that sucker into your iPod and put it on shuffle!!
IBM has announced some new XIV power features while withdrawing others.
The changes are being made to simplify the ordering process while making the power choices more robust and better suited to client requirements.
So what changed?
This is pretty well an industry standard for Enterprise class disk.
The USA Announcement letter is here.
The Asia Pacific Announcement letter is here.
The European Announcement letter is here.
So you're planning to attach your XIV to an AIX host?
Here are some best practices for you to follow.
1) Native XIV detection
The XIV uses a path control module (PCM) that plugs into AIX MPIO. Depending on your AIX level the XIV will be recognised natively by AIX without additional software.
This is nice because it means you can simply run cfgmgr and detect the XIV hdisks without doing any system changes.
If you're on the following AIX levels (with TL and SP) then your AIX system will detect the XIV natively. Frankly, it's a good excuse to perform a system update.
AIX Release      APAR      Bundled in
AIX 5.3 TL 10    IZ69239   SP 3
AIX 5.3 TL 11    IZ59765   SP 0
AIX 6.1 TL 3     IZ63292   SP 3
AIX 6.1 TL 4     IZ59789   SP 0
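Not sure which level you are on? The standard AIX oslevel command reports your release, TL and Service Pack (the output shown below is just an example string in the release-TL-SP-build format):

oslevel -s
5300-10-03-1036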
If you're running VIOS, you need to be on VIOS v2.1.2 FP22 to recognise the XIV natively.
Natively detected XIV devices will look like this when displayed using the command:
lsdev -Cc disk
hdisk2 Available 03-00-02 MPIO 2810 XIV Disk
2) XIV Host attachment kit
If you are not on the levels listed above, you can install the XIV Host Attachment Kit to get XIV support.
However, at lower AIX and VIOS levels there are issues with queue depth and round-robin (the queue depth is limited to 1).
The following releases do not have the queue depth issue, so they are better levels to be on:
AIX 5.3 TL 10 SP 0,1 and 2
AIX 6.1 TL 4 SP 0,1 and 2
VIOS v2.1.1.x FP-21.x
If you're on a level lower than those, you can still install the Host Attachment Kit to get XIV device support.
To detect XIV volumes when using the XIV Host Attachment Kit, you use the command xiv_attach.
The very first time you run xiv_attach you will need to reboot the host. After that you can use xiv_attach or cfgmgr (without reboot).
XIV devices detected by the xiv_attach command will look like this when displayed using the command:
lsdev -Cc disk
hdisk3 Available 02-01-02 IBM 2810XIV Fibre Channel Disk
3) The xiv_devlist command
Regardless of what level of AIX you're running, you should install the Host Attachment Kit (HAK) to get the wonderful xiv_devlist command.
The HAK uses a specially packaged version of Python which is renamed XPYV (to not get in the way of any system Python already installed).
Just installing the kit does not require a reboot.
The xiv_devlist command is the equivalent of what SDD gave you with datapath query device.
It lets you map an AIX device (an hdisk) to an XIV volume. It's a tool you don't want to live without.
In the example below you can see the hdisk number on the left,
but all the other information (volume size, number of paths, volume name, XIV host) comes from the XIV itself.
This is really useful information.
[root@system] # xiv_devlist
Device         Size     Paths  Vol Name   Vol Id  XIV Id   XIV Host
/dev/hdisk26   204.0GB  6/6    PROD-3050  188     7802844  PROD-prd
/dev/hdisk27   42.9GB   6/6    PROD-3051  189     7802844  PROD-prd
In my next blog entries I will tell you about zoning and which fcs, fscsi and hdisk attributes work best with XIV.
I will also share a great way to update them.