Anthony's Blog: Using System Storage - An Aussie Storage Blog
The XIV GUI is all about simplicity. It's about taking tasks that on other products are difficult or time consuming, and either eliminating them or making them as simple as possible.
But for those who like to issue commands via a command line interface (a CLI), the XIV also has a very rich CLI called XCLI.
If you're familiar with the XCLI, you're hopefully aware that list commands can produce much more detailed output if the -x option is used (-x requests XML output).
So here is something you can try out.
If your XIV is on 10.2.1 firmware you can use the module_list -x command to display how much server memory each XIV module has.
If your XIV has 2 TB disks, you should find that you have 16 GB of server memory per module.
This means a 15 module machine has a whopping 240 GB of server RAM.
To be clear, I am not referring to this as 'cache' because a small portion (around 2.5 GB) of the RAM in each module is used by the module's internal Linux operating system.
This means that a 15 module XIV with 2 TB drives and 16 GB of server memory per module, has over 200 GB of cache.
As former Australian Prime Minister Paul Keating once said: "it's a beautiful set of numbers".
<XCLIRETURN STATUS="SUCCESS" COMMAND_LINE="module_list -x">
<sdr_version value="SDR Package 46"/>
...
</XCLIRETURN>
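If you want to do something with that XML beyond eyeballing it, here is a minimal parsing sketch. Note that the element and attribute names below (module, mem_gb) are assumptions for illustration only; check the real module_list -x output on your own machine for the actual field names.

```python
import xml.etree.ElementTree as ET

# Hypothetical module_list -x style output (field names are made up for
# this sketch; a real XIV will use its own element and attribute names).
SAMPLE = """<XCLIRETURN STATUS="SUCCESS" COMMAND_LINE="module_list -x">
<module id="1:Module:1"><mem_gb value="16"/></module>
<module id="1:Module:2"><mem_gb value="16"/></module>
<module id="1:Module:3"><mem_gb value="16"/></module>
</XCLIRETURN>"""

root = ET.fromstring(SAMPLE)
per_module = [int(m.find("mem_gb").get("value")) for m in root.findall("module")]
total = sum(per_module)

# Roughly 2.5 GB per module goes to the module's internal Linux OS,
# so usable cache is a little less than raw server memory.
usable_cache = sum(gb - 2.5 for gb in per_module)
print(total, usable_cache)
```

Scale the same arithmetic up to 15 modules at 16 GB each and you get the 240 GB raw / just over 200 GB usable figures mentioned above.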
So it's that time of the month again. Rob Jackard from the ATS group does a fantastic job summarizing changes to the IBM Storage Support site, and you get all the benefit of his hard work (via me!).
So cast your eyes down the list and look for issues that may affect you....
(2010.08.21) AIX Support Lifecycle Notice- AIX 5.3 TL9 & TL10.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-09 (applies to all Service Packs within TL9). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-10 (applies to all Service Packs within TL10).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 5300-06, AIX 5300-07 or AIX 5300-08.
(2010.08.21) AIX Support Lifecycle Notice- AIX 6.1 TL2 & TL3.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-02 (applies to all Service Packs within TL2). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-03 (applies to all Service Packs within TL3).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 6100-00 or AIX 6100-01.
(2010.07.29) Disable the TCP/IP port for SDDSRV/PCMSRV if the port is enabled.
(2010.07.23) SDDSRV respawn incorrectly causes system log filesystem overflow with AIX SDD version 126.96.36.199.
NOTE- Contact IBM Support Center to obtain the required temporary fix 188.8.131.52, which is now available.
(2010.07.23) AIX SDD Version 184.108.40.206 cannot be deinstalled due to sddsrv respawning incorrectly.
NOTE: Contact IBM Support Center to obtain the required temporary fix 220.127.116.11, which is now available.
(2010.07.19) IBM TechDoc- Diagnosing Oracle DB performance on AIX using IBM NMON and Oracle Statspack Reports.
(2010.07.19) IBM TechDoc- Maintaining two switch fabrics on NPIV migrations.
(2010.07.07) AIX 5.3 HIPER Alert- devices.common.IBM.mpio.rte 18.104.22.168.
NOTE: Users of MPIO storage running the 5300-12 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77907.
(2010.07.07) AIX 6.1 HIPER Alert- devices.common.IBM.mpio.rte 22.214.171.124.
NOTE: Users of MPIO storage running the 6100-02 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77908.
DS3000 / DS4000 / DS5000:
(2010.08.23) Updated ESM and HDSS Firmware v1.69 package.
(2010.08.20) Updated Disk Controller Firmware v7.70.23.00 code package.
NOTE: Code for DS3950, DS5020, DS5100, DS5300 subsystems.
(2010.08.17) RETAIN Tip# H197049- Issues on full synchronization of RVM LUNs > 2 TB.
(2010.08.17) SAS connectivity to IBM System Storage DS3200 is not supported- IBM BladeCenter JS23, JS43.
(2010.08.16) RETAIN Tip# H197402- Multi-node server ports require same host group for failover- IBM Disk Systems.
(2010.08.09) Updated Disk Controller Firmware v7.60.40.00 code package.
NOTE: Code for DS3950, DS4200 Express, DS4700 Express, DS4800, DS5020, DS5100, DS5300 subsystems.
(2010.07.23) IBM TechDoc- Power Cord Technical Guide for DS5000 Systems.
(2010.07.20) RETAIN Tip# H197172- UEFI systems with HBA and >2 TB LUN may have data errors.
(2010.07.07) RETAIN Tip# H196538: Controller reboots if multiple CASDs attempted.
DS6000 / DS8000:
(2010.08.24) Potential Data Error using Fast Reverse Restore following Establish FlashCopy without Change Recording.
(2010.08.21) DS8000 Code Bundle Information.
(2010.08.20) DS8700 Code Bundle Information.
(2010.08.04) IBM TechDoc- Effective Capacity for IBM DS8700 R5.1.
(2010.07.29) IBM Whitepaper- IBM DS8000 Metro Mirror DR within a Remote Cluster.
(2010.07.28) IBM TechDoc- DS6000 and DS8000 Data Replication.
(2010.07.15) IBM Whitepaper- IBM Handbook using DS8000 Data Replication for Data Migration.
(2010.07.14) DS6000 Microcode Release 126.96.36.199.
(2010.08.26) Important information for N series support.
(2010.08.18) IBM System Storage N series FRU lists.
(2010.08.17) Data ONTAP 7.3.4 Filer Publication Matrix.
(2010.08.17) Data ONTAP 7.3.4 Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Filer Publication Matrix.
(2010.07.20) Data Fabric Manager (DFM) 4.0 Publications Matrix.
(2010.07.17) RLM Update to Firmware Version 4.0 Fails with "Error Flashing linux".
(2010.07.12) IBM System Storage N series FRU (Field Replaceable Unit) lists.
(2010.08.24) IBM SAN b-type Firmware Version 6.x Release Notes.
(2010.08.20) Cisco MDS Supervisor 1-to-2 Upgrade Process.
(2010.07.15) Cisco MDS9000 Field Notice: FN-63132. Potential DIMM Memory Issue in a Small Number of DS-X9530-SF2-K9 Supervisor Cards Manufactured between September 2007 and February 2008.
(2010.08.17) NPIV clients of SDDPCM hosts may experience permanent application errors during SVC concurrent code upgrade or node reset with certain APARs and SDDPCM versions. The risk, although rare, exists in any AIX SDDPCM host or client.
NOTE: The changes made for VIOS client hangs in Technote SSG1S1003579 require additional AIX driver and SDDPCM code updates for a specific SVC error condition.
(2010.08.11) SVC V4.3.x and V5.1.x Cluster Nodes May Repeatedly Reboot and CLI/GUI Access Loss May Occur When Shrinking Space Efficient VDisks.
(2010.08.05) SVC Console (GUI) Requirements for using IPv6.
(2010.08.05) Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates.
(2010.08.05) Offline or Degraded Disks May Result in Loss of I/O Access During Code Upgrade.
(2010.08.05) IBM System Storage SAN Volume Controller V5.1.0- Software Installation and Configuration Guide (English Version).
(2010.08.05) IBM System Storage SVC Code V188.8.131.52.
(2010.08.05) IBM System Storage SVC Console (SVCC) V184.108.40.2066.
(2010.08.02) IBM System Storage SVC Code V220.127.116.11.
(2010.08.02) IBM System Storage SVC Console (SVCC) V18.104.22.1683.
(2010.08.02) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.07.23) SNMP MIB file for SVC V5.1.0.
(2010.07.21) 2145-CF8 Nodes May Repeatedly Loop Between Boot Codes 100 and 137 When Upgrading to SVC V22.214.171.124 or Later.
NOTE: This issue is resolved in SVC v126.96.36.199.
(2010.07.17) Changes in handling of SSH keys in SVC V5.1.
(2010.07.16) Incorrect 2145-8G4 Node Hardware Shutdown Temperature Setting in V188.8.131.52 – V184.108.40.206.
NOTE: This issue is resolved by APAR IC60083 in SVC V220.127.116.11.
(2010.07.16) Incorrect 2145-8A4 Node Hardware Shutdown Temperature Setting in V18.104.22.168 – V22.214.171.124.
NOTE: This issue is resolved by APAR IC68234 in the SVC V126.96.36.199 release.
(2009.11.19) 20091015 Drive Microcode Package for Solid State Drive.
SSPC / TPC / TPC-R:
(2010.08.30) Open HyperSwap status may report incorrectly via the Tivoli Productivity Center for Replication GUI.
(2010.08.25) IBM TechDoc- Basic Automation of TPC Performance Graphs.
(2010.08.04) TPC 4.1.x – Platform Support: Agents, Servers and GUI.
(2010.08.04) Q3, 2010- IBM Tivoli TotalStorage Productivity Center Suite Customer Support Technical Information Update.
(2010.08.25) IBM TechDoc- Utilizing IBM XIV Storage System snapshot technology in SAP environments.
(2010.08.24) How to Avoid Potential Problems During a Data Migration to an XIV Storage System.
(2010.08.02) IBM XIV Remote Support Proxy version 1.1.0.
(2010.07.19) XIV Volume Sizing Spreadsheet Tool.
(2010.07.08) Potential to inadvertently overwrite volumes using IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3.
NOTE: This issue is resolved with release 2.4.3.a.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Technical Solutions Version 2.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Replication and Migration Services Version 1.
I know I have already blogged about the XIV GUI and how it just keeps getting better and better....
But I feel the need to share yet another improvement to the way the GUI presents information.
In this case it's an additional widget to show the health of each module in the XIV.
A fully configured XIV consists of 15 modules, each module containing components like fans, power supplies and disk drives.
In addition it is of course running firmware (the secret sauce of the XIV).
Another new feature is that now when you hover over the left hand side of each module, a small magnifying glass appears.
When it does, just left click on your mouse and the module slides out (virtually of course!).
If you then hover over a component within the module, health information is displayed. How nice is that?
Don't forget, you can download the GUI and just log on as p10demomode to check it out for yourself (without an XIV).
If you have an existing XIV, it's time to upgrade your XIV GUI to version 2.4.3 Build 11.
IBM recently announced the new System Storage DS3500 Express. The DS3500 is an entry level storage system that can be easily serviced and managed by an end-user. It is a very worthy successor to the DS3200/DS3300/DS3400 product line. So I thought I would share with you 10 things I really like about the new IBM DS3500 (in no particular order).
1) It's small
The base unit is only 2U in size and can hold either 12 of the 3.5" disks or 24 of the smaller 2.5" disks (depending on model). Each expansion drawer can also hold 12 of the 3.5" or 24 of the 2.5" disks (depending on model) and you can have 3 of them. So that's a potential 96 disks in 8U of rack space.
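The rack-space arithmetic above works out like this (using the unit sizes and drawer counts as stated in the post):

```python
# DS3500: one 2U base unit plus up to three 2U expansion drawers,
# each holding 24 x 2.5" disks (or 12 x 3.5" disks, depending on model).
units = 1 + 3                  # base unit + three expansion drawers
disks_small = units * 24       # 2.5" form factor: the 96-disk maximum
disks_large = units * 12       # 3.5" form factor
rack_space_u = units * 2       # each enclosure is 2U, so 8U total

print(disks_small, disks_large, rack_space_u)
```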
2) It's all SAS
In my opinion, Serial Attached SCSI (SAS) is the future of disk attachment. Traditional parallel SCSI is so 20th century and FATA didn't work out too well. I think SATA and FCAL attached disk will eventually be replaced by SAS and the DS3500 is all SAS at the disk back end and SAS by default at the host front end as well.
3) It's got flashcopy
The DS3500 can create two flashcopies without any extra licenses. I really like the fact that if you're doing an OS or application upgrade, you can give yourself a quick roll-back point by just reserving some space for a flashcopy repository. This is also a great way to test whether flashcopy is right for your business and, if so, to buy the license to create more than 2 copies at a time.
4) It's got remote mirror
The DS3000 range up until now did not offer remote mirror capability. This meant that if you wanted a DR solution, you needed to buy something to go over the top, such as IBM SVC or Softek Replicator. The DS3500 now offers its own native replication, which not only fills that gap but is also compatible with existing DS4000s and DS5000s that you may already have in your business.
5) It's got nearline
So FATA disk may not have worked out, but nearline SAS is a far better alternative. The 2.5" model offers a 500 GB 7.2K RPM nearline SAS drive. Or how about a 2 TB drive in the 3.5" form factor? Want some archive disk using nearline where the spindle count will still deliver good performance? Here's the solution.
6) It's green
If we accept that MAID was not the solution for the masses, the better thing is to simply do more with less, which is exactly what the DS3500 does. We are talking around 500W of power usage for a 48 disk, two-drawer solution (with 2.5" disks). That's around half the power consumption of the equivalent model with 3.5" disks. This means less power drawn in and less hot air blown out.
7) One model to rule them all
The DS3500 comes in one model: SAS. You want fibre channel? No problem, just add the card. You instead want iSCSI? Same deal, just add the card. All models retain the SAS adapters which are proving so popular in the rack and blade server space.
8) It's got encryption
You need a point solution to provide data-at-rest encryption? Here it is, with 300 GB and 600 GB Self Encrypting drives that protect your data with no performance impact. Even better, the software to manage encryption is rolled into the DS Storage Manager. Talking of which...
9) Easy Management
The DS3500 continues to use an intuitive and easy-to-use GUI, which now includes all the dynamic volume management functions. This is an improvement over previous models, where these had to be done via the command line.
10) It's cheap
Being entry level it is priced for that market. You could also place it behind the SVC for a quick encryption solution or as a VDisk mirror repository.
Want to know more? Go talk to your IBM Business Partner and check out the product page here:
I received a question the other day about how the XIV interacts with a customer's building UPS.
So I thought I would share my answer with you.
The XIV has two separate line cords (there is an option to have four line cords, but I am trying not to complicate this). This means the client's building power provides the XIV with two separate power sources.
As long as one of those two line cords provides input power, then the XIV will continue to operate normally.
If both power sources stop supplying input power then the client is not providing any electricity to the XIV (none at all).
This would suggest the client's computer room has suffered a severe building facility failure and that all of their other equipment has lost power too.
In this situation the XIV will continue to operate normally for 30 seconds on battery power, waiting in the hope that the client's power will come back on at least one of the two line cords.
If after 30 seconds the XIV has not detected the return of any input power, it must take action to ensure it does not flatten its internal UPS batteries, so it performs a graceful shutdown and powers itself off. Why wait only 30 seconds? The main reason is brown-out protection. If the client loses power for 20 seconds and then regains it, and this happens repeatedly, they could progressively flatten the batteries to the point where the XIV may not be able to shut down gracefully. This is not desirable, so the 30 second timer is a good compromise.
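As a thought experiment only (this is a sketch of the behaviour described above, not the XIV's actual firmware logic), the 30-second decision looks like a simple timer that resets whenever input power returns on either line cord:

```python
# Sketch of the described behaviour: after 30 continuous seconds with no
# input power on either line cord, perform a graceful shutdown.
GRACE_SECONDS = 30

def action_after(power_samples):
    """power_samples: one boolean per second, True meaning at least one
    line cord has input power. Returns the action the system would take."""
    outage = 0
    for has_power in power_samples:
        if has_power:
            outage = 0              # power came back: reset the timer
        else:
            outage += 1
            if outage >= GRACE_SECONDS:
                return "graceful_shutdown"
    return "keep_running"

# A 20-second brown-out followed by power returning: keep running.
print(action_after([False] * 20 + [True]))
# A sustained outage of 30 seconds or more: shut down gracefully.
print(action_after([False] * 30))
```

The reset on every good sample is the brown-out protection: repeated short outages never accumulate toward a shutdown, but they also never silently drain the batteries past the point of a safe shutdown.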
Overall this design allows the client the greatest levels of availability and data protection.
In terms of site EPO, the XIV does not have an EPO switch or interface, because the XIV design has a strict requirement to perform a graceful shutdown prior to power off.
If the client wants to manually power the machine off, they could instead issue a CLI or GUI command to the machine to request shutdown.
Shutdown takes about 30 seconds to complete because the machine needs time to destage cache and metadata to disk prior to shutting down the Linux OS that runs on each module.
So how do you power the XIV back on?
Just press the On switch on each of the three UPS modules (preferably all at once).
So how do you manually power the XIV off?
Always use the XCLI or XIV GUI to shut the XIV down. There are power-off buttons on each XIV UPS, but these should be covered by a plate and never used (if they are not covered up, please contact IBM to have this done). We don't use these buttons because they don't let the modules shut down gracefully.
If you launch an XCLI session from the XIV GUI, issue the shutdown command and then respond to the prompts.
If you want to script the command then you need a script that looks like this:
xcli -m 192.168.30.91 -u admin -p adminadmin shutdown -y
If you choose to use emergency=yes then you may cause data loss, which is clearly not a good idea.
We add the -y parameter because the shutdown command is normally interactive. Clearly this assumes you have not changed the default password (which is also not a good idea).
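A minimal scripting sketch along those lines: a helper that just assembles the shutdown command line shown above, printing it for review rather than executing it (the IP address and credentials are the example values from the post, not anything you should use in production):

```shell
#!/bin/sh
# Build (but do not run) the XCLI shutdown command for review.
# Once you are happy with it, call xcli directly with the same arguments.
build_shutdown_cmd() {
    ip="$1"
    printf 'xcli -m %s -u admin -p adminadmin shutdown -y\n' "$ip"
}

build_shutdown_cmd 192.168.30.91
```

Printing first and executing second is a cheap safety net for a command this destructive.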
The GUI of course also has a shutdown option (that will give you some warning prompts as well).
Check out the details here: http://www-03.ibm.com/systems/storage/san/ctype/9148/index.html
In my current job I commonly get asked about interoperability. The fear of finding your vendor is not willing to support your setup... that's a serious source of anxiety.
After all.... what every IT professional wants to deliver is certainty. I think it is the number one deliverable that a good IT company should provide its clients.
So how does IBM deliver this certainty? The answer is the SSIC.
SSIC stands for "System Storage Interoperation Center" and it can be found here:
For those who suffer from Google-Induced-Laziness-Syndrome, or GILS as I like to call it, you can just google for SSIC. For me it's usually the first link.
So let me give you an example of how it's done (it's even a real one!).
I got asked last week if the following configuration would be supported by IBM:
So I visited the SSIC and plugged in the variables. How is it done? All we need to do is follow our noses and select each of the boxes. Take a look at the screen capture below. We started with Enterprise Disk and got 87,354,141 possible configuration results... 87 million! As you can see from the red boxes, we then just follow the selections to cut down the configuration results:
So how many configuration results exist when we hit the end?
I think we have a winner.... we select the Submit button and the summary page opens up.
We can then confirm our selections as per the screen capture below and then export the data as an XLS file.
We now not only have confidence that our configuration is supported.... we have a document to keep as a permanent record.
If your environment has multiple selections you need to work with, you do not need to get so granular in your selections. As long as the Configuration result count is 100 or less, you can export the results.
This will save you running through the SSIC multiple times.