Anthony's Blog: Using System Storage - An Aussie Storage Blog
Just a quick note about using Oracle Solaris with IBM XIV (I so want to say "SUN Solaris", I need to retrain my brain).
When using IBM XIV with Solaris, you need to install IBM's XIV Host Attachment Kit (delightfully called a HAK).
This is to ensure multi-pathing is correctly configured (regardless of whether you're using DMP or MPxIO).
The relevant software, release notes and instruction guides are found here.
Anyway... the whole point of this blog entry is to correct a shortcoming in the release notes.
They currently fail to mention some minimum system requirements.
I am getting this corrected, but until then... please note the following:
1. The HAK for Solaris 10 supports only Solaris 10 U4 and greater (this is also referred to as Solaris 10 - 08/07).
This means that if, for instance, you're on 11/06 (update 3) or 03/05, you will need to perform a Solaris update first.
2. The following patch is mandatory for Solaris 9/SPARC: 118462-03 (it's a prerequisite for HAK installation).
3. Solaris 9 is supported for SPARC only.
4. Solaris 8 is not supported.
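A quick way to check whether a Solaris 10 host is already at a suitable level is to look at the release string. The sketch below uses a sample string rather than reading /etc/release (the exact wording of that file varies between updates, so treat the pattern matching as illustrative):

```shell
# Sample first line of /etc/release from a Solaris 10 U4 host.
# On a live system you would use: release=$(head -1 /etc/release)
release="                    Solaris 10 8/07 s10s_u4wos_12b SPARC"

case "$release" in
  *"Solaris 10"*_u[4-9]wos*|*"Solaris 10"*_u[1-9][0-9]wos*)
      verdict="U4 or later - HAK supported" ;;
  *"Solaris 10"*)
      verdict="pre-U4 - update Solaris first" ;;
  *)
      verdict="not Solaris 10" ;;
esac
echo "$verdict"
```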
There has been a lot of chatter in the blogosphere about so-called vendor blockers (I think you can shorten that to vblock).
This is the idea that a pre-blessed solution is the safest way to build infrastructure in the data centre.
I can see the attraction, but I suggest the best way to get there is with the vendor who is most willing
to work with the widest range of their competitors.
And you cannot get much wider than IBM SVC.
IBM has been supporting SVC in a mixed OS and hardware environment since 2003.
The IBM Storwize V7000 inherits all of the interoperability testing done over 7 years with IBM SVC.
This is an astonishing way to bring a new product into the marketplace.
I cannot think of a new midrange entrant that has somehow managed to get the same level of
interoperability testing and ISV integration on the very day of its birth.
When you look at the picture below you can see the depth of IBM's support matrix for SVC and its
cousin, the Storwize V7000.
I have on occasion raised a laugh from an audience when I describe IBM SVC as a vendor independent virtualization layer.
But the picture doesn't lie... with the IBM Storwize V7000 or IBM SVC... there is no vendor blocking.
Plus, with the ability to move your data out of the IBM virtualization layer (using the migrate to image command), you can
remove IBM from the picture at any time... meaning no vendor blocking... and no vendor locking.
It's that time again! Rob Jackard from the ATS Group has shared with me his list of IBM Storage related updates.
Storwize V7000 already has a huge amount of great material out there, links are below.
Have a quick look... you may see links that are relevant to you!
AIX / Misc:
(2010.10.13) Support Matrix for Subsystem Device Driver, Subsystem Device Driver Path Control Module, and Subsystem Device Driver Device Specific Module.
(2010.10.12) IBM TechDoc- WWPN Determination for IBM Storage v6.0.
(2010.10.07) IBM TechDoc- Typical VIOS Network Configuration in Production Environment.
(2010.10.05) IBM TechDoc- Power Systems SAN Multipath Configuration Using NPIV.
(2010.09.06) IBM SDDPCM- Open HyperSwap status may report incorrectly via the Tivoli Productivity Center for Replication GUI. If a HyperSwap was incomplete and then another unplanned HyperSwap occurs, both copies of the data will be corrupted.
DS3000 / DS4000 / DS5000:
(2010.10.13) RETAIN Tip# H194697- SATA drive hangs or is not ready after power cycle.
(2010.10.06) RETAIN Tip# H196488- DS5000 systems not working with Brocade on 8 Gbps host ports.
(2010.09.29) RETAIN Tip# H197680- VIOS 2.1.3 support for BladeCenter hosts removed from System Storage Interoperation Center (SSIC) web site - IBM System Storage DS3512 (Type 7146), IBM System Storage DS3524 (Type 7146).
(2010.09.23) DS5100, DS5300 customer responsibilities for code installation.
DS8000 / DS6000:
(2010.10.18) IBM TechDoc- IBM Handbook Using DS8000 Data Replication for Data Migration.
N series:
(2010.10.16) System firmware upgrade may be required for N7600 / N7800 storage systems with Data ONTAP 7.2 preloaded.
(2010.10.15) N3300/N3600 BMC (Baseboard Management Controller) firmware upgrade does not occur automatically during Data ONTAP 126.96.36.199 upgrade.
(2010.10.14) NEWS: Recommended Releases for IBM System Storage N series Data ONTAP.
(2010.10.13) Potential Issues when Operating and/or Upgrading Code with 7 or more GPFS filesystems, when Operating with SoNAS R188.8.131.52-7a Code and Prior Levels.
(2010.10.04) Risk minimization recommendations for EXN1000 AT-FCX modules running disk shelf firmware prior to revision 34.
(2010.09.10) Data ONTAP 8.0 7-Mode Gateway Publication Matrix.
(2010.09.01) IBM System Storage N series Data ONTAP Matrix.
(2010.08.31) Service Image 30801483 (BIOS 1.7 and Diagnostics 5.3.8) for N7000 Series Publication Matrix.
(2010.09.27) Brocade: Features removed from Web Tools effective with version 6.1.1.
(2010.09.24) Fabric Manager upgrade required to manage switches with new Cisco address prefix.
NOTE: Cisco’s Field Notice #63302 provides additional explanation.
(2010.09.08) IBM SAN b-type Firmware Version 6.x Release Notes.
NOTE: IBM recommends that Open System customers that currently use FOS 6.1 or earlier limit migration to FOS 6.2.2b, 6.3.0d, 6.3.1a, or 6.4.0a only.
NOTE: Customers that have already migrated or that own products with FOS 6.2 or 6.3 versions preinstalled are supported.
SVC / Storwize V7000:
(2010.10.19) IBM System Storage SAN Volume Controller 6.1.0 Configuration Limits and Restrictions.
(2010.10.19) IBM Storwize V7000 6.1.0 Configuration Limits and Restrictions.
(2010.10.18) IBM Storwize V7000 Product Manuals.
(2010.10.14) IBM Storwize V7000 Information Center.
(2010.10.07) SAN Volume Controller and related software- Support Statement.
(2010.09.17) Incorrect 2145-8G4 System Board Part Number on SVC.
NOTE: This problem is fixed in PTF V184.108.40.206 and later versions.
(2010.09.17) Limit on Size of Space-Efficient VDisk.
NOTE: This issue has been addressed by APAR IC61106 in the V220.127.116.11 PTF.
(2010.09.17) Potential Issue when Modifying Remote Copy Configuration After Upgrading to SVC V4.3.1.
NOTE: This issue has been addressed by APAR IC60186 in the V18.104.22.168 PTF.
(2010.09.17) SVC Embedded CIMOM Process Consuming Excessive CPU Resources.
NOTE: All SVC V4.3.1.x customers are advised to upgrade to V22.214.171.124 or later to address this problem.
(2010.09.17) Incorrect 2145-8G4 Node Hardware Shutdown Temperature Setting in V126.96.36.199-V188.8.131.52.
NOTE: This issue was resolved by APAR IC60083 in SVC V184.108.40.206.
(2010.09.17) Potential Issue during SVC Code Upgrade to V220.127.116.11 when Running Global Mirror.
NOTE: This issue is resolved in the V18.104.22.168 release.
(2010.09.17) Space-Efficient VDisk may be taken offline when used capacity exceeds 1022 GB.
NOTE: This issue was resolved by APAR IC58563 in SVC V22.214.171.124.
(2010.09.17) Management Information Base (MIB) file for SNMP.
(2010.09.17) SAN Volume Controller Software Upgrade Test Utility.
NOTE: The utility release level is v4.8.
(2010.09.16) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.09.15) IBM System Storage SVC Code V126.96.36.199.
(2010.09.15) SVC V4.2.x End of Service: September 30, 2010.
NOTE: As previously announced on Sept. 8, 2009, IBM has withdrawn support for SAN Volume Controller V4.2.x on Sept. 30, 2010.
SSPC / TPC / TPC-R:
(2010.10.12) Collecting Data for: SSPC Problems.
(2010.09.29) TPC basic database maintenance steps- tutorial.
(2010.09.21) IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication: Flash for Version 4.2.
(2010.09.10) System Storage Productivity Center Flash for Version 188.8.131.52.
(2010.09.10) System Storage Productivity Center Flash for Version 1.4.
XIV:
(2010.10.07) IBM TechDoc- Recommended Best Practices Considerations for High Availability on IBM XIV Storage System.
(2010.09.13) IBM XIV Storage System Planning Guide.
(2010.09.06) IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3.b for all platforms.
(2010.09.06) IBM XIV XCLI (only) for Linux/AIX/Solaris/HPUX, version 2.4.3.b.
IBM Interoperability Matrices:
NOTE: Many of the traditional Storage Interoperability matrix files for each specific storage system will be sunset; please begin to use and
familiarize yourself with the System Storage Interoperation Center (SSIC):
IBM System Storage DS4000 series- Interoperability Matrix [Last Updated:06/15/2009]
IBM System Storage DS5000 series- Interoperability Matrix [Last Updated:06/15/2009]
IBM SAN Volume Controller (SVC):
-V6.1.x Recommended Software Levels [Last Updated:10/07/2010]*
-V6.1.x Supported Hardware List [Last Updated:10/07/2010]*
-V6.1.x SVC Restrictions [Last Updated:10/19/2010]*
-V5.1.x Recommended Software Levels [Last Updated:09/15/2010]*
-V5.1.x Supported Hardware List [Last Updated:09/03/2010]*
-V5.1.x SVC Restrictions [Last Updated:09/23/2010]*
-V4.3.x Recommended Software Levels [Last Updated:08/05/2010]
-V4.3.x Supported Hardware List [Last Updated:09/03/2010]*
-V4.3.x SVC Restrictions [Last Updated:11/17/2009]
Cisco SAN: [Last Updated:08/31/2010]
NOTE-1: Latest NX-OS support for 4.2(7a) and 5.0(1a). Updated BladeCenter Cisco FCSM support for NX-OS 4.2(3).
NOTE-2: Customers that do not require an upgrade to NX-OS, but require field updates may install SAN-OS 3.3(5).
IBM SAN (Brocade): [Last Updated:11/23/2009]
IBM SAN (McData): [Last Updated:11/06/2008]
I just got an email that I thought I would share with you.
IBM Storage have a new iPhone and Blackberry app.
I am test driving it as I type this (which is never wise... don't drive and type!).
You can download the IBM Storage iPhone and Blackberry apps from here:
My first impression is that the product information is going to be really handy, because not only is there disk,
there are also switches and tape.
So please... download... supply feedback and enjoy!
The SAN Volume Controller (SVC) has truly come of age in 2010.
Whyte is planning a detailed set of blog posts about the improvements from both a hardware and software perspective:
· New SVC Graphical User Interface (GUI)
· Easy Tier & SSDs in SVC v6.1
· Provisioning new SVC
· SVC Code Upgrades
· SVC to Backend Controller
· SVC and Storwize
· Storage Controller / Host / Tool Interoperability
For many years I have been working on a document that lets you translate a World Wide Port Name (WWPN) into a physical location on an IBM Storage System.
I blogged recently about how SVC and Storwize V7000 WWPNs have a slightly different layout.
The contents of that blog entry come straight from that document.
I have now pushed that document out to IBM Techdocs.
You can download it from here:
Feel free to share any feedback you have and share it with your colleagues.
For those of you who have worked with SVC, you will be aware of a quirk when determining the WWPN of an SVC fibre channel port.
The WWPN of each fibre channel port is based on: 50:05:07:68:01:Yz:zz:zz where z:zz:zz is unique for each machine and the Y value is taken from the port position (the red numbers in the boxes shown below).
Port numbers (used for servicing) are always sequentially numbered left to right (viewed from the rear of the node). So the port numbers make sense.
The Storwize V7000 consists of two node canisters. The WWNN of each node canister will be based on 50:05:07:68:02:0z:zz:zz where z:zz:zz is unique for each node canister. It is unrelated to the WWNN of the other node canister (they may be sequential numbers, they may not).
The WWPN of each Storwize V7000 fibre port is based on: 50:05:07:68:02:Yz:zz:zz where z:zz:zz is unique for each node canister and the Y value is taken from the port position. Note that the lower canister is upside down in relation to the upper canister (which you can see from the image below).
The number in each black box (which represents a fibre channel port) is the Y value. It is also the port number, meaning the Y value and the port number are the same. So, for example, port 1 (the upper left-most port) contains a 1, so a WWPN presented by this port would look like: 50:05:07:68:02:1z:zz:zz.
This makes more sense and fixes a 'feature' introduced with the 4F2 model of the SVC. Remember, that the z:zz:zz values for each node canister will be different (just like two SVC nodes had different z:zz:zz values for each node).
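Because the port position is encoded directly in the WWPN, you can pull the Y value out of any WWPN string with standard text tools. A quick sketch (the WWPN below is invented for the example):

```shell
# Y is the first hex digit of the sixth colon-separated byte
# of a WWPN in the form 50:05:07:68:02:Yz:zz:zz
wwpn="50:05:07:68:02:1A:BC:DE"   # made-up sample WWPN
port=$(echo "$wwpn" | cut -d: -f6 | cut -c1)
echo "port $port"
```

For the sample WWPN above this reports port 1, which on a Storwize V7000 would be the upper left-most port.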
The DS8800 is the next step in the evolution of the DS8000 family.
It's an evolution that began in the 1990s with the Enterprise Storage Server (the Shark).
In 2004 we announced the DS8000 and we have progressively brought out more and more powerful models since then.
The DS8800 is the next stage in this progress and brings in 2.5" SAS drives.
So I thought I would get excited by listing 10 things that make me very proud of our new DS8800 announcement:
IBM is announcing a set of remarkable new storage products and enhancements.
· Storage Efficiency
· Ease of use
· Smart technology
The announcements show not only IBM's significant investment in storage but also IBM's tremendous depth of knowledge and experience.
You will rapidly see that the focus is on our new Midrange Storage product, the Storwize V7000. However this is only one of four major releases that you will see (plus many more other incremental releases). From a product perspective the big new announcements are:
XIV. The XIV will support the VMWare VAAI API by updating the firmware to version 10.2.4. To remind you what I am talking about, check out my earlier blog on this subject here.
Storwize V7000. This is a major new product offering in the midrange space. It takes the intelligence and history of the SVC; brings in some disk controller technology from the DS8000; adds SAS version 2 disk enclosures; provides the sub-LUN performance benefits delivered by Easy Tier; uses a simplified GUI influenced strongly by XIV and has a simplified licensing structure. This is all put into a 2U modular form factor. Because the Storwize V7000 uses the same code base as the SAN Volume Controller (SVC), it brings all the smarts of SVC including virtualized disk (using both internal SAS disks and external storage controllers), thin provisioning, transparent data migration and mirroring (including Metro and Global Mirror). Right now there is no RACE technology in the Storwize V7000 (despite IBM using the Storwize brand). But I think you can take the name as a hint of things to come.
DS8800. This is a fantastic incremental new development in the DS8000 family. It takes the long history of DS8000 development and combines it with small form factor (2.5”) SAS version 2 disks connected via 8 Gbps host adapters and 8 Gbps device adapters. The performance numbers, the environmental and floor-space requirements are all improved by a significant factor. It positions the DS8000 for many years of new functions and features.
SVC. For SVC we are releasing SVC version 6.1. This is a major software update to the SVC code. It delivers a remarkable new GUI with Easy Tier and a whole raft of functional improvements.
Other announcements will include enhancements to TPC, IBM Director, the DS3500 and Softek TDMF.
As soon as I have the announcement letter URLs, I will post them. There is clearly plenty more to come.
In part three of my series on AIX and XIV, I will explore the recommended configuration changes you should make to AIX when attaching XIV disk.
So let's get started:
lsattr -El fcs0
Two of the attributes will look like this:
max_xfer_size 0x100000 Maximum Transfer Size True
num_cmd_elems 200 Maximum number of COMMANDS to queue to the adapter True
lsattr -El fscsi0
Two of the attributes will look like this:
dyntrk no Dynamic Tracking of FC Devices True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
I suggest you change these values as follows: max_xfer_size to 0x200000, num_cmd_elems to 2048, fc_err_recov to fast_fail and dyntrk to yes.
lsattr -El hdisk26
Two of the attributes will look like this:
max_transfer 0x40000 Maximum TRANSFER Size True
queue_depth 32 Queue DEPTH True
I suggest you change these values as follows: max_transfer to 0x100000 and queue_depth to 64.
By increasing the max_transfer size, we allow the maximum LTG size on each volume group (VG) to be larger. The LTG size of a VG cannot be larger than the smallest max_transfer size of all the hdisks that make up that VG. When the LVM receives a request for an I/O, it breaks the I/O down into what is called logical track group (LTG) sizes before it passes the request down to the device driver of the underlying disks. The LTG is the maximum transfer size of an LV and is common to all the LVs in the VG.
You can display the LTG size by using the lsvg command against the relevant VG.
AIX XIV Utils
root@testserver [/home/anthonyv/aix_xiv_utils-2.0/bin] # ./lshba -x
We use the chhba command to change the fcs and fscsi attributes. We issue a single command to change all the HBAs at once. This command only changes the ODM, so we need to reboot for the changes to take effect. Note that we need to type yes when prompted for the script to run.
root@testserver [/home/anthonyv/aix_xiv_utils-2.0/bin] # ./chhba -d yes -f fast_fail -m 0x200000 -n 2048 -P
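If you would rather not use the HAK script, the same attribute changes can be made per adapter with stock AIX chdev commands. This is a sketch assuming adapters fcs0/fscsi0 and the values used above; as with chhba, the -P flag changes only the ODM, so a reboot is still required:

```shell
# ODM-only changes, applied at next reboot; repeat for each adapter pair
chdev -l fcs0   -a max_xfer_size=0x200000 -a num_cmd_elems=2048 -P
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P
```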
In this example we display the relevant attributes of all the XIV hdisks. The settings in this example are NOT correct (queue depth still 32 and max transfer size still 0x40000) so we need to use the chxiv command to correct them.
We use the chxiv command to change the attributes of every XIV hdisk. This command only changes the ODM for XIV disks. We need to reboot for the changes to take effect. Note we need to type yes when prompted, for the script to run.
[/home/anthonyv/aix_xiv_utils-2.0/bin] # ./chxiv -r 64 -m 0x100000 -P
Change algorithm to round_robin with a queue depth of 64 for these disks? yes
Getting new XIV disk information...
AIX_SIZE(MB) ALGORITHM Q_DEPTH SERIAL
Conclusions and gotchas
So at the conclusion of this process, you should have an AIX system with settings much better suited to XIV. There are a couple of gotchas.
1) HBAs being used for tape. The chhba command will not change HBAs in private loop mode. This is to prevent errors like this:
Date/Time: Thu Feb 4 11:59:08 2010
Sequence Number: 250846
Machine Id: 00CB6AC44C00
Node Id: us04od03
Resource Name: fscsi3
SOFTWARE DEVICE DRIVER
SOFTWARE DEVICE DRIVER
INCORRECT HARDWARE CONFIGURATION.
IDENTIFY OFFENDING SOFTWARE COMPONENT
VERIFY SYSTEM CONFIGURATION IS VALID
REFER TO PRODUCT DOCUMENTATION FOR ADDITIONAL INFORMATION
2) Your queue depth settings may still not be deep enough. Periodically run iostat -D 5 and if you notice avgwqsz or sqfull consistently non-zero, then increase the queue depth (you can go up to 256). Don't be tempted to start at 256 and work down: you may flood the XIV with commands. For the vast majority of clients, 64 is a good number.
3) Do you need to use these scripts? No you don’t. You can use smit or command line to change attributes.
Do you always need to reboot? No you don't. But you will need to change the relevant devices to a defined state to change them. For instance, you could change the queue depth on an hdisk with the commands below, but only if the hdisk is not part of an online volume group. It remains easier to just change the ODM and reboot for the changes to take effect.
rmdev -l hdisk25
chdev -l hdisk25 -a queue_depth=64
mkdev -l hdisk25
So the next challenge when connecting your XIV to your AIX LPAR is how to zone the SAN (or hopefully, each SAN fabric).
The XIV consists of a number of modules (from 6 to 15), of which a subset are Interface Modules (meaning they have fibre channel and iSCSI interfaces).
Zone the SAN so that each HBA in an LPAR has 3 paths to the XIV.
If an LPAR has two HBAs, then zone the first HBA to modules 4, 6 and 8 and the second HBA to modules 5, 7 and 9.
For the next LPAR, do the reverse and zone the first HBA to modules 5, 7 and 9 and the second HBA to modules 4, 6 and 8.
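On a Brocade fabric, that scheme could be expressed with zoning commands along these lines. This is only a sketch: the alias names and the host WWPN are invented, and it assumes aliases already exist for XIV module ports 4, 6 and 8 on fabric 1:

```shell
# Zone the first HBA of LPAR 1 to interface modules 4, 6 and 8 (fabric 1)
alicreate "lpar1_hba1", "10:00:00:00:c9:11:22:33"
zonecreate "lpar1_hba1_xiv", "lpar1_hba1; xiv_m4_p1; xiv_m6_p1; xiv_m8_p1"
cfgadd "fabric1_cfg", "lpar1_hba1_xiv"
cfgenable "fabric1_cfg"
```

The second HBA would get a matching zone on fabric 2 to modules 5, 7 and 9, and the next LPAR would swap the module sets.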
If we look at a typical dual fabric 15 module XIV, it would be cabled as pictured below.
Port 1 on each XIV module attaches to Fabric 1
Port 3 on each XIV module attaches to Fabric 2.
Ports 2 and 4 are reserved for replication and mirroring.
But look closely at the use of colours in this lovely Visio diagram I created.
Host 1 is zoned using the links in blue.
Host 2 is zoned using the links in red.
Notice how they round robin between the odd and even numbered interface modules.
Rules of thumb
There is some evidence that having excessive paths can slightly raise CPU utilisation.
The other reality is that having multiple duplicate paths won't make your system more reliable.
So you're planning to attach your XIV to an AIX host?
Here are some best practices for you to follow.
1) Native XIV detection
The XIV uses a path control module (PCM) that plugs into AIX MPIO. Depending on your AIX level the XIV will be recognised natively by AIX without additional software.
This is nice because it means you can simply run cfgmgr and detect the XIV hdisks without doing any system changes.
If you're on the following AIX levels (with TL and SP) then your AIX system will detect the XIV natively. Frankly it's a good excuse to perform a system update.
AIX Release APAR Bundled in
AIX 5.3 TL 10 IZ69239 SP 3
AIX 5.3 TL 11 IZ59765 SP 0
AIX 6.1 TL3 IZ63292 SP 3
AIX 6.1 TL 4 IZ59789 SP 0
If you're running VIOS, you need to be on VIOS v2.1.2 FP22 to recognise the XIV natively.
Natively detected XIV devices will look like this when displayed using the command:
lsdev -Cc disk
hdisk2 Available 03-00-02 MPIO 2810 XIV Disk
2) XIV Host attachment kit
If you are not on the levels listed above, you can install the XIV Host Attachment Kit to get XIV support.
However, at lower AIX and VIOS levels there are issues with queue depth and round robin (it's limited to 1).
The following releases do not have the queue depth issue, so they are better levels to be on:
AIX 5.3 TL 10 SP 0,1 and 2
AIX 6.1 TL 4 SP 0,1 and 2
VIOS v2.1.1.x FP-21.x
If you're on a level lower than those, you can still install the Host Attachment Kit to get XIV device support.
To detect XIV volumes when using the XIV Host Attachment Kit, you use the command xiv_attach.
The very first time you run xiv_attach you will need to reboot the host. After that you can use xiv_attach or cfgmgr (without reboot).
XIV devices detected by the xiv_attach command will look like this when displayed using the command:
lsdev -Cc disk
hdisk3 Available 02-01-02 IBM 2810XIV Fibre Channel Disk
3) The xiv_devlist command
Regardless of what level of AIX you're running, you should install the Host Attachment Kit (HAK) to get the wonderful xiv_devlist command.
The HAK uses a specially packaged version of Python which is renamed XPYV (so it does not get in the way of any system Python already installed).
Just installing the kit does not require a reboot.
The xiv_devlist command is the equivalent of what SDD gave you with datapath query device.
It lets you map an AIX device (an hdisk) to an XIV volume. It's a tool you don't want to live without.
In the example below you can see the hdisk number on the left,
but all the other information (volume size, number of paths, volume name, XIV host) all come from the XIV itself.
This is really useful information.
root@system] # xiv_devlist
Device Size Paths Vol Name Vol Id XIV Id XIV Host
/dev/hdisk26 204.0GB 6/6 PROD-3050 188 7802844 PROD-prd
/dev/hdisk27 42.9GB 6/6 PROD-3051 189 7802844 PROD-prd
In my next blog entries I will tell you about zoning and what fcs, fscsi and hdisk settings work best with XIV.
I will also share a great way to update them.
IBM has announced some new XIV power features while withdrawing others.
The changes are being made to simplify the ordering process while making the power choices more robust and better suited to client requirements.
So what changed?
This is pretty well an industry standard for Enterprise class disk.
The USA Announcement letter is here.
The Asia Pacific Announcement letter is here
The European Announcement letter is here.
I am pleased to announce IBM will have a Symposium dedicated just to IBM System Storage.
It will be held at:
Crowne Plaza Darling Harbour
150 Day Street
DURATION: 2.5 days
DATE: 6-8 December 2010 (that's a Monday to Wednesday).
I am personally organising the session list in conjunction with our USA based Symposium organisation team. I am also helping to line up the speakers.
The focus is definitely on helping clients see the IBM vision while getting practical information on how to get the best use of IBM System Storage.
The session list is still being finalised; once it is, I will post it.
Jenny Morris and Danielle Lofting will be organising the venue. They have a fantastic, enthusiastic and very experienced team, so the organisation will be at a very high level.
I have posted the current symposium description below.
As I said... more news to come, so stay tuned.
Come away from our first Annual System Storage® Technical Symposium in Asia Pacific with strategic and tactical knowledge of how to optimise every aspect of your customers' server/storage network. Let IBM® executives, developers and industry expert speakers from all around the world help you realise what the very latest on IBM Storage technology means to your bottom line.
The IBM System Storage Technical Symposium in Sydney will offer sessions and tracks in Archiving & Compliance, Business Continuity, Disk Systems, Open Systems & System z® Storage Management, Security Storage Networking (Data Center Networking (DCN) - Network Attached Storage (NAS) - Storage Area Network (SAN)), Tape Systems, SONAS and Virtualization solutions.
Test-drive the latest technologies through hands-on labs. Roll up your sleeves and get involved with the latest systems and software.
Want to enrol? Head here.
This year is the year IBM brought Easy Tier to the market place.
Easy Tier is not a product.... or a piece of hardware: Easy Tier is a software technology... and it is smart technology.
It comes out of IBM's Almaden Research Center and is designed to fit into any storage product that has a modular software architecture (think internal Unix operating systems like AIX or Linux).
The first product to use and deliver Easy Tier is the DS8700. It is real technology and you can buy it and start using it right now.
Here are some salient points which differentiate Easy Tier from other vendors' offerings:
Want to learn more?
Check out the IBM Redpaper.
Or the Youtube video.
Or the Product Page
Or this commentary from the Mesabi Group
And remember... right now you can get it in DS8700.... but the Easy Tier design is well suited to a wider range of products. Watch this space.
I have no idea what this website is all about, but you have to love what they have done with an XIV.
My favorite is the U2 model. With 2TB drives it can hold around 161 million minutes of music!
Plug that sucker into your iPod and put it on shuffle!!
The XIV GUI is all about simplicity. It's about taking tasks which on other products are difficult or time consuming, and either eliminating them, or making them as simple as possible.
But for those who like to issue commands via a command line interface (a CLI), the XIV also has a very rich CLI called XCLI.
If you're familiar with the XCLI, you're hopefully aware that list commands can produce much more detailed output if the -x option is used (-x requests XML output).
So here is something you can try out.
If your XIV is on 10.2.1 firmware you can use the module_list -x command to display how much server memory each XIV module has.
If your XIV has 2 TB disks, you should find that you have 16 GB of server memory per module.
This means a 15 module machine has a whopping 240 GB of server RAM.
To be clear, I am not referring to this as 'cache' because a small portion (around 2.5 GB) of the RAM in each module is used by the module's internal Linux operating system.
This means that a 15 module XIV with 2 TB drives and 16 GB of server memory per module, has over 200 GB of cache.
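The arithmetic behind that claim is straightforward (using the roughly 2.5 GB per-module overhead quoted above):

```shell
# 15 modules x 16 GB of RAM, less ~2.5 GB of Linux overhead per module
total=$(awk -v m=15 -v r=16 'BEGIN{print m*r}')
cache=$(awk -v m=15 -v r=16 -v o=2.5 'BEGIN{print m*(r-o)}')
echo "total RAM ${total} GB, usable cache ~${cache} GB"
```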
As former Australian Prime Minister Paul Keating once said: "it's a beautiful set of numbers".
<XCLIRETURN STATUS="SUCCESS" COMMAND_LINE="module_list -x">
<sdr_version value="SDR Package 46"/>
So it's that time of the month again. Rob Jackard from the ATS group does a fantastic job summarizing changes to the IBM Storage Support site
and you get all the benefit of his hard work (via me!).
So cast your eyes down the list and look for issues that may affect you....
(2010.08.21) AIX Support Lifecycle Notice- AIX 5.3 TL9 & TL10.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-09 (applies to all Service Packs within TL9). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-10 (applies to all Service Packs within TL10).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 5300-06, AIX 5300-07 or AIX 5300-08.
(2010.08.21) AIX Support Lifecycle Notice- AIX 6.1 TL2 & TL3.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-02 (applies to all Service Packs within TL2). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-03 (applies to all Service Packs within TL3).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 6100-00 or AIX 6100-01.
(2010.07.29) Disable the TCP/IP port for SDDSRV/PCMSRV if the port is enabled.
(2010.07.23) SDDSRV respawn incorrectly causes system log filesystem overflow with AIX SDD version 184.108.40.206.
NOTE- Contact IBM Support Center to obtain the required temporary fix 220.127.116.11, which is now available.
(2010.07.23) AIX SDD Version 18.104.22.168 cannot be deinstalled due to sddsrv respawning incorrectly.
NOTE: Contact IBM Support Center to obtain the required temporary fix 22.214.171.124, which is now available.
(2010.07.19) IBM TechDoc- Diagnosing Oracle DB performance on AIX using IBM NMON and Oracle Statspack Reports.
(2010.07.19) IBM TechDoc- Maintaining two switch fabrics on NPIV migrations.
(2010.07.07) AIX 5.3 HIPER Alert- devices.common.IBM.mpio.rte 126.96.36.199.
NOTE: Users of MPIO storage running the 5300-12 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77907.
(2010.07.07) AIX 6.1 HIPER Alert- devices.common.IBM.mpio.rte 188.8.131.52.
NOTE: Users of MPIO storage running the 6100-02 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77908.
DS3000 / DS4000 / DS5000:
(2010.08.23) Updated ESM and HDSS Firmware v1.69 package.
(2010.08.20) Updated Disk Controller Firmware v7.70.23.00 code package.
NOTE: Code for DS3950, DS5020, DS5100, DS5300 subsystems.
(2010.08.17) RETAIN Tip# H197049- Issues on full synchronization of RVM LUNS > 2 TB.
(2010.08.17) SAS connectivity to IBM System Storage DS3200 is not supported- IBM BladeCenter JS23, JS43.
(2010.08.16) RETAIN Tip# H197402- Multi-node server ports require same host group for failover- IBM Disk Systems.
(2010.08.09) Updated Disk Controller Firmware v7.60.40.00 code package.
NOTE: Code for DS3950, DS4200 Express, DS4700 Express, DS4800, DS5020, DS5100, DS5300 subsystems.
(2010.07.23) IBM TechDoc- Power Cord Technical Guide for DS5000 Systems.
(2010.07.20) RETAIN Tip# H197172- UEFI systems with HBA and >2 TB LUN may have data errors.
(2010.07.07) RETAIN Tip# H196538: Controller reboots if multiple CASDs attempted.
DS6000 / DS8000:
(2010.08.24) Potential Data Error using Fast Reverse Restore following Establish FlashCopy without Change Recording.
(2010.08.21) DS8000 Code Bundle Information.
(2010.08.20) DS8700 Code Bundle Information.
(2010.08.04) IBM TechDoc- Effective Capacity for IBM DS8700 R5.1.
(2010.07.29) IBM Whitepaper- IBM DS8000 Metro Mirror DR within a Remote Cluster.
(2010.07.28) IBM TechDoc- DS6000 and DS8000 Data Replication.
(2010.07.15) IBM Whitepaper- IBM Handbook using DS8000 Data Replication for Data Migration.
(2010.07.14) DS6000 Microcode Release 184.108.40.206.
N series:
(2010.08.26) Important information for N series support.
(2010.08.18) IBM System Storage N series FRU lists.
(2010.08.17) Data ONTAP 7.3.4 Filer Publication Matrix.
(2010.08.17) Data ONTAP 7.3.4 Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Filer Publication Matrix.
(2010.07.20) Data Fabric Manager (DFM) 4.0 Publications Matrix.
(2010.07.17) RLM Update to Firmware Version 4.0 Fails with "Error Flashing linux".
(2010.07.12) IBM System Storage N series FRU (Field Replaceable Unit) lists.
SAN b-type / Cisco MDS:
(2010.08.24) IBM SAN b-type Firmware Version 6.x Release Notes.
(2010.08.20) Cisco MDS Supervisor 1-to-2 Upgrade Process.
(2010.07.15) Cisco MDS9000 Field Notice: FN-63132. Potential DIMM Memory Issue in a Small Number of DS-X9530-SF2-K9 Supervisor Cards Manufactured between September 2007 and February 2008.
SVC:
(2010.08.17) NPIV clients of SDDPCM hosts may experience permanent application errors during SVC concurrent code upgrade or node reset with certain APARs and SDDPCM versions. The risk, although rare, exists in any AIX SDDPCM host or client.
NOTE: The changes made for VIOS client hangs in Technote SSG1S1003579 require additional AIX driver and SDDPCM code updates for a specific SVC error condition.
(2010.08.11) SVC V4.3.x and V5.1.x Cluster Nodes May Repeatedly Reboot and CLI/GUI Access Loss May Occur When Shrinking Space Efficient VDisks.
(2010.08.05) SVC Console (GUI) Requirements for using IPv6.
(2010.08.05) Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates.
(2010.08.05) Offline or Degraded Disks May Result in Loss of I/O Access During Code Upgrade.
(2010.08.05) IBM System Storage SAN Volume Controller V5.1.0- Software Installation and Configuration Guide (English Version).
(2010.08.05) IBM System Storage SVC Code V220.127.116.11.
(2010.08.05) IBM System Storage SVC Console (SVCC) V18.104.22.1686.
(2010.08.02) IBM System Storage SVC Code V22.214.171.124.
(2010.08.02) IBM System Storage SVC Console (SVCC) V126.96.36.1993.
(2010.08.02) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.07.23) SNMP MIB file for SVC V5.1.0.
(2010.07.21) 2145-CF8 Nodes May Repeatedly Loop Between Boot Codes 100 and 137 When Upgrading to SVC V188.8.131.52 or Later.
NOTE: This issue is resolved in SVC v184.108.40.206.
(2010.07.17) Changes in handling of SSH keys in SVC V5.1.
(2010.07.16) Incorrect 2145-8G4 Node Hardware Shutdown Temperature Setting in V220.127.116.11 – V18.104.22.168.
NOTE: This issue is resolved by APAR IC60083 in SVC V22.214.171.124.
(2010.07.16) Incorrect 2145-8A4 Node Hardware Shutdown Temperature Setting in V126.96.36.199 – V188.8.131.52.
NOTE: This issue is resolved by APAR IC68234 in the SVC V184.108.40.206 release.
(2009.11.19) 20091015 Drive Microcode Package for Solid State Drive.
SSPC / TPC / TPC-R:
(2010.08.30) Open HyperSwap status may report incorrectly via the Tivoli Productivity Center for Replication GUI.
(2010.08.25) IBM TechDoc- Basic Automation of TPC Performance Graphs.
(2010.08.04) TPC 4.1.x – Platform Support: Agents, Servers and GUI.
(2010.08.04) Q3, 2010- IBM Tivoli TotalStorage Productivity Center Suite Customer Support Technical Information Update.
XIV:
(2010.08.25) IBM TechDoc- Utilizing IBM XIV Storage System snapshot technology in SAP environments.
(2010.08.24) How to Avoid Potential Problems During a Data Migration to an XIV Storage System.
(2010.08.02) IBM XIV Remote Support Proxy version 1.1.0.
(2010.07.19) XIV Volume Sizing Spreadsheet Tool.
(2010.07.08) Potential to inadvertently overwrite volumes using IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3.
NOTE: This issue is resolved with release 2.4.3.a.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Technical Solutions Version 2.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Replication and Migration Services Version 1.
I have been asked this question a few times now, so it's worth a blog entry.
Clients love being able to easily view XIV performance statistics.
There is a simple panel that lets you display IOPS, throughput and response times for each host or volume or for the entire machine.
When viewing XIV performance statistics using the built in GUI panels, write I/Os are broken into two types: write hits and write misses.
The question that comes up is... what is the difference? And should I be worried about misses?
The term 'miss' carries negative connotations, and for good reason: a read miss means the requested data was not in cache and had to be fetched from disk, adding latency to that I/O.
So what about a write miss? Does it mean that the write I/O 'missed' the cache?
The answer is... no! Every host write lands in cache and is acknowledged straight away.
To explain the difference:
A write hit is the situation where a host write generates fewer back-end disk operations. This is because:
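The idea can be sketched with a toy write-back cache. This is a generic illustration of hit-versus-miss accounting, not XIV's actual caching algorithm: every host write is absorbed by cache either way, and a "hit" simply means the target block already had a dirty cache slot, so the eventual destage to disk is coalesced rather than adding a new back-end operation.

```python
# Toy write-back cache: illustrates why a write "miss" still lands
# in cache, and why hits reduce back-end disk operations.
# (Generic sketch for illustration only, NOT XIV internals.)

class WriteBackCache:
    def __init__(self):
        self.dirty = {}           # block address -> pending data
        self.write_hits = 0
        self.write_misses = 0

    def write(self, address, data):
        if address in self.dirty:
            self.write_hits += 1    # slot already dirty: overwrite in place
        else:
            self.write_misses += 1  # new slot: one more destage needed later
        self.dirty[address] = data  # either way, the host write lands in cache

    def destage(self):
        """Flush dirty blocks to 'disk'; return the back-end I/O count."""
        ops = len(self.dirty)
        self.dirty.clear()
        return ops

cache = WriteBackCache()
for addr in [1, 2, 3, 2, 2, 3]:    # host rewrites blocks 2 and 3
    cache.write(addr, b"data")

print(cache.write_hits, cache.write_misses)  # 3 hits, 3 misses
print(cache.destage())  # only 3 back-end writes for 6 host writes
```

Six host writes produce only three back-end operations, because the three write hits were folded into destages that had to happen anyway. That is why a high write-hit ratio is good news, and a write miss is nothing to worry about.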
It's been an interesting week in IT retractions.
Microsoft seriously went off the rails with their Meter Maid Booth Babes on the Gold Coast.
Check out the story here or here.
I mention this story not because I want to embarrass Microsoft (who I don't think quite realised what they had signed up for).
To their credit they quickly apologised and moved to correct their mistake.
Instead I mention this because several Microsoft people were more than willing to (quite rightly) publicly express their opinions on the subject.
I thought this was fantastic.
But with great power comes great responsibility.
As an IBMer I have never been told what I can or cannot blog.
However I do of course follow IBM Business Conduct Guidelines as well as IBM Social Networking Guidelines.
So I have to say that I viewed with dismay HDS blogger Pete Gerr's extraordinary attack on Moshe Yanai and the IBM XIV.
He has since rather gracelessly withdrawn the blog entry, but his follow-on comments need some response.
The XIV has been (and continues to be) a fantastic product for IBM.
Not only is it a great sales success, it has also allowed us to talk to clients who would not normally purchase IBM storage.
Far from damaging IBM's existing product line, it has resulted in those lines growing stronger (just wait and see).
We have a new focus on usability and simplicity, on making the experience of using and managing storage easier and smarter.
To some part, XIV has brought that focus. I personally think we needed it and that we are stronger for it.
As the year comes to a close you will see the benefits of this reinvigoration with some truly fantastic storage product announcements (across the board).
So while I hope Pete can take some lessons from his very creditable and measured fellow blogger Hu Yoshida,
I will patiently wait for Barry Burke to post that he was wrong about DS8000.
And I will keep trying to get it right the first time.