Anthony's Blog: Using System Storage - An Aussie Storage Blog
I have no idea what this website is all about, but you have to love what they have done with an XIV.
My favorite is the U2 model. With 2TB drives it can hold around 161 million minutes of music!
Plug that sucker into your iPod and put it on shuffle!!
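(If you want to sanity-check that figure with some rough arithmetic of my own: a fully populated XIV with 2 TB drives gives you around 161 TB of usable capacity, and at a typical 128 kbps MP3 bit rate a minute of music is roughly 1 MB, so 161 TB works out to roughly 161 million minutes. Close enough!)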
The XIV GUI is all about simplicity. It's about taking tasks that on other products are difficult or time-consuming, and either eliminating them or making them as simple as possible.
But for those who like to issue commands via a command line interface (a CLI), the XIV also has a very rich CLI called XCLI.
If you're familiar with the XCLI, you're hopefully aware that list commands can produce much more detailed output if the -x option is used (-x requests XML output).
So here is something you can try out.
If your XIV is on 10.2.1 firmware you can use the module_list -x command to display how much server memory each XIV module has.
If your XIV has 2 TB disks, you should find that you have 16 GB of server memory per module.
This means a 15 module machine has a whopping 240 GB of server RAM.
To be clear, I am not referring to this as 'cache' because a small portion (around 2.5 GB) of the RAM in each module is used by the module's internal Linux operating system.
This means that a 15 module XIV with 2 TB drives and 16 GB of server memory per module, has over 200 GB of cache.
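For the record, the rough arithmetic behind that claim (my back-of-envelope numbers, not an official IBM figure) is simply:
15 modules x (16 GB - ~2.5 GB used by the module's operating system) = approximately 202 GB of cache.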
As former Australian Prime Minister Paul Keating once said: "it's a beautiful set of numbers".
<XCLIRETURN STATUS="SUCCESS" COMMAND_LINE="module_list -x">
<sdr_version value="SDR Package 46"/>
So it's that time of the month again. Rob Jackard from the ATS group does a fantastic job summarizing changes to the IBM Storage Support site
and you get all the benefit of his hard work (via me!).
So cast your eyes down the list and look for issues that may affect you....
AIX:
(2010.08.21) AIX Support Lifecycle Notice- AIX 5.3 TL9 & TL10.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-09 (applies to all Service Packs within TL9). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-10 (applies to all Service Packs within TL10).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 5300-06, AIX 5300-07 or AIX 5300-08.
(2010.08.21) AIX Support Lifecycle Notice- AIX 6.1 TL2 & TL3.
NOTE-1: After November 2010, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-02 (applies to all Service Packs within TL2). Sometime after May 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-03 (applies to all Service Packs within TL3).
NOTE-2: As a reminder, IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 6100-00 or AIX 6100-01.
(2010.07.29) Disable the TCP/IP port for SDDSRV/PCMSRV if the port is enabled.
(2010.07.23) SDDSRV respawn incorrectly causes system log filesystem overflow with AIX SDD version 188.8.131.52.
NOTE- Contact IBM Support Center to obtain the required temporary fix 184.108.40.206, which is now available.
(2010.07.23) AIX SDD Version 220.127.116.11 cannot be deinstalled due to sddsrv respawning incorrectly.
NOTE: Contact IBM Support Center to obtain the required temporary fix 18.104.22.168, which is now available.
(2010.07.19) IBM TechDoc- Diagnosing Oracle DB performance on AIX using IBM NMON and Oracle Statspack Reports.
(2010.07.19) IBM TechDoc- Maintaining two switch fabrics on NPIV migrations.
(2010.07.07) AIX 5.3 HIPER Alert- devices.common.IBM.mpio.rte 22.214.171.124.
NOTE: Users of MPIO storage running the 5300-12 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77907.
(2010.07.07) AIX 6.1 HIPER Alert- devices.common.IBM.mpio.rte 126.96.36.199.
NOTE: Users of MPIO storage running the 6100-02 TL- An operation to change the preferred path of a LUN could hang. A similar hang could be experienced during LPAR migration where it will try to switch the preferred paths. Install APAR IZ77908.
DS3000 / DS4000 / DS5000:
(2010.08.23) Updated ESM and HDSS Firmware v1.69 package.
(2010.08.20) Updated Disk Controller Firmware v7.70.23.00 code package.
NOTE: Code for DS3950, DS5020, DS5100, DS5300 subsystems.
(2010.08.17) RETAIN Tip# H197049- Issues on full synchronization of RVM LUNS > 2 TB.
(2010.08.17) SAS connectivity to IBM System Storage DS3200 is not supported- IBM BladeCenter JS23, JS43.
(2010.08.16) RETAIN Tip# H197402- Multi-node server ports require same host group for failover- IBM Disk Systems.
(2010.08.09) Updated Disk Controller Firmware v7.60.40.00 code package.
NOTE: Code for DS3950, DS4200 Express, DS4700 Express, DS4800, DS5020, DS5100, DS5300 subsystems.
(2010.07.23) IBM TechDoc- Power Cord Technical Guide for DS5000 Systems.
(2010.07.20) RETAIN Tip# H197172- UEFI systems with HBA and >2 TB LUN may have data errors.
(2010.07.07) RETAIN Tip# H196538: Controller reboots if multiple CASDs attempted.
DS6000 / DS8000:
(2010.08.24) Potential Data Error using Fast Reverse Restore following Establish FlashCopy without Change Recording.
(2010.08.21) DS8000 Code Bundle Information.
(2010.08.20) DS8700 Code Bundle Information.
(2010.08.04) IBM TechDoc- Effective Capacity for IBM DS8700 R5.1.
(2010.07.29) IBM Whitepaper- IBM DS8000 Metro Mirror DR within a Remote Cluster.
(2010.07.28) IBM TechDoc- DS6000 and DS8000 Data Replication.
(2010.07.15) IBM Whitepaper- IBM Handbook using DS8000 Data Replication for Data Migration.
(2010.07.14) DS6000 Microcode Release 188.8.131.52.
N series:
(2010.08.26) Important information for N series support.
(2010.08.18) IBM System Storage N series FRU lists.
(2010.08.17) Data ONTAP 7.3.4 Filer Publication Matrix.
(2010.08.17) Data ONTAP 7.3.4 Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Gateway Publication Matrix.
(2010.07.29) Data ONTAP 8.0 7-Mode Filer Publication Matrix.
(2010.07.20) Data Fabric Manager (DFM) 4.0 Publications Matrix.
(2010.07.17) RLM Update to Firmware Version 4.0 Fails with "Error Flashing linux".
(2010.07.12) IBM System Storage N series FRU (Field Replaceable Unit) lists.
SAN:
(2010.08.24) IBM SAN b-type Firmware Version 6.x Release Notes.
(2010.08.20) Cisco MDS Supervisor 1-to-2 Upgrade Process.
(2010.07.15) Cisco MDS9000 Field Notice: FN-63132. Potential DIMM Memory Issue in a Small Number of DS-X9530-SF2-K9 Supervisor Cards Manufactured between September 2007 and February 2008.
SVC:
(2010.08.17) NPIV clients of SDDPCM hosts may experience permanent application errors during SVC concurrent code upgrade or node reset with certain APARs and SDDPCM versions. The risk, although rare, exists in any AIX SDDPCM host or client.
NOTE: The changes made for VIOS client hangs in Technote SSG1S1003579 require additional AIX driver and SDDPCM code updates for a specific SVC error condition.
(2010.08.11) SVC V4.3.x and V5.1.x Cluster Nodes May Repeatedly Reboot and CLI/GUI Access Loss May Occur When Shrinking Space Efficient VDisks.
(2010.08.05) SVC Console (GUI) Requirements for using IPv6.
(2010.08.05) Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates.
(2010.08.05) Offline or Degraded Disks May Result in Loss of I/O Access During Code Upgrade.
(2010.08.05) IBM System Storage SAN Volume Controller V5.1.0- Software Installation and Configuration Guide (English Version).
(2010.08.05) IBM System Storage SVC Code V184.108.40.206.
(2010.08.05) IBM System Storage SVC Console (SVCC) V220.127.116.116.
(2010.08.02) IBM System Storage SVC Code V18.104.22.168.
(2010.08.02) IBM System Storage SVC Console (SVCC) V22.214.171.1243.
(2010.08.02) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.07.23) SNMP MIB file for SVC V5.1.0.
(2010.07.21) 2145-CF8 Nodes May Repeatedly Loop Between Boot Codes 100 and 137 When Upgrading to SVC V126.96.36.199 or Later.
NOTE: This issue is resolved in SVC v188.8.131.52.
(2010.07.17) Changes in handling of SSH keys in SVC V5.1.
(2010.07.16) Incorrect 2145-8G4 Node Hardware Shutdown Temperature Setting in V184.108.40.206 – V220.127.116.11.
NOTE: This issue is resolved by APAR IC60083 in SVC V18.104.22.168.
(2010.07.16) Incorrect 2145-8A4 Node Hardware Shutdown Temperature Setting in V22.214.171.124 – V126.96.36.199.
NOTE: This issue is resolved by APAR IC68234 in the SVC V188.8.131.52 release.
(2009.11.19) 20091015 Drive Microcode Package for Solid State Drive.
SSPC / TPC / TPC-R:
(2010.08.30) Open HyperSwap status may report incorrectly via the Tivoli Productivity Center for Replication GUI.
(2010.08.25) IBM TechDoc- Basic Automation of TPC Performance Graphs.
(2010.08.04) TPC 4.1.x – Platform Support: Agents, Servers and GUI.
(2010.08.04) Q3, 2010- IBM Tivoli TotalStorage Productivity Center Suite Customer Support Technical Information Update.
XIV:
(2010.08.25) IBM TechDoc- Utilizing IBM XIV Storage System snapshot technology in SAP environments.
(2010.08.24) How to Avoid Potential Problems During a Data Migration to an XIV Storage System.
(2010.08.02) IBM XIV Remote Support Proxy version 1.1.0.
(2010.07.19) XIV Volume Sizing Spreadsheet Tool.
(2010.07.08) Potential to inadvertently overwrite volumes using IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3.
NOTE: This issue is resolved with release 2.4.3.a.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Technical Solutions Version 2.
(2010.07.01) IBM Certification: IBM Certified Specialist – XIV Storage System Replication and Migration Services Version 1.
I have been asked this question a few times now, so it's worth a blog entry.
Clients love being able to easily view XIV performance statistics.
There is a simple panel that lets you display IOPS, throughput and response times for each host or volume or for the entire machine.
When viewing XIV performance statistics using the built in GUI panels, write I/Os are broken into two types: write hits and write misses.
The question that comes up is... what is the difference? And should I be worried about misses?
The use of the term miss can have negative connotations. To explain why:
So what about a write miss? Does it mean that the write I/O 'missed' the cache?
The answer is.... no!
To explain the difference:
A write hit is the situation where a host write generates fewer back-end disk operations. This is because:
It's been an interesting week in IT retractions.
Microsoft seriously went off the rails with their Meter Maid Booth Babes on the Gold Coast.
Check out the story here or here.
I mention this story not because I want to embarrass Microsoft (who I don't think quite realised what they had signed up for).
To their credit they quickly apologised and moved to correct their mistake.
Instead I mention this because several Microsoft people were more than willing to (quite rightly) publicly express their opinions on the subject.
I thought this was fantastic.
But with great power comes great responsibility.
As an IBMer I have never been told what I can or cannot blog.
However I do of course follow IBM Business Conduct Guidelines as well as IBM Social Networking Guidelines.
So I have to say that I viewed with dismay HDS blogger Pete Gerr's extraordinary attack on Moshe Yanai and the IBM XIV.
He has since rather gracelessly withdrawn the blog entry, but his follow-on comments need some response.
The XIV has been (and continues to be) a fantastic product for IBM.
Not only is it a great sales success, it has also allowed us to talk to clients who would not normally purchase IBM storage.
Far from damaging IBM's existing product lines, it has resulted in those lines growing stronger (just wait and see).
We have a new focus on usability and simplicity, on making the experience of using and managing storage easier and smarter.
To some part, XIV has brought that focus. I personally think we needed it and that we are stronger for it.
As the year comes to a close you will see the benefits of this reinvigoration with some truly fantastic storage product announcements (across the board).
So while hopefully Pete can take some lessons from his very creditable and measured fellow blogger Hu Yoshida,
I will patiently wait for Barry Burke to post that he was wrong about DS8000.
And I will keep trying to get it right the first time.
So I had the pleasure last night of observing a capacity upgrade on a client's XIV (so yes this is all real world).
The client in question had an 11 module XIV with 1 TB drives. This meant they had 54,649 GB of useable capacity (approx 54 TB).
The client had ordered one new XIV module as an incremental capacity upgrade.
All hardware upgrades are performed by IBM, so the upgrade work was all done by an IBM System Service Representative (SSR).
Task one for the IBM SSR was to remove the blanking plate at the front of the machine and slide the new module into place.
The module was then secured into place with its two captive screws (the only time a tool was needed).
The next task was delegated to me.... attaching the sticker (decal) that showed the relevant module number.
In our case the new module was module number 12.
Here you can see the new module with all the available decals (I think I did a good job).
Note the cables have not been plugged in yet.
Because the XIV is pre-cabled, all that the IBM Service Representative needed to do was plug the ethernet and power cables into the new module.
You can see all the cables plugged in and the lights are now on:
Once this was done, the module booted up and became available in the XIV GUI.
The IBM Service Representative needed to issue two commands to complete the upgrade (literally two mouse clicks in the GUI).
The first command, called an equip, introduces the module to the XIV and places it into the Ready state, as shown below:
The second command issued by the IBM SSR starts a process known as redistribution.
At bottom right the message changed from 'Full redundancy' to 'Redistributing'.
What does this mean?
It means the XIV is automatically spreading existing data across the new disks to load level the amount of data on each disk.
The machine will then be in a state of workload balance without any user intervention or host interruption.
This process runs at low priority, which meant the predicted end time kept jumping around as host I/O workload rose and fell.
To monitor the process, we used the XCLI command monitor_redist, as shown below:
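For anyone wanting to watch a redistribution on their own machine, the invocation is simply the bare command from an XCLI session (the exact output columns vary a little by firmware level, so treat this as indicative):

# show progress of the current redistribution
monitor_redist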
The process actually ran all night, moving 8TB of data around the machine to ensure the most ideal data layout.
In the morning the redistribution had finished.
The machine now reported 'Full redundancy' and the useable capacity had risen from 54,649 GB to 61,744 GB
(the increase varies according to which module is being added).
Two tasks remained for the customer to complete:
1) Increase the size of the relevant storage pool(s) - takes a couple of mouse clicks, perhaps 20 seconds work.
2) Create new volumes in those pools, which again takes just seconds to perform (CLI equivalents for both tasks are sketched below).
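For those who prefer the XCLI over the GUI, the equivalent commands look roughly like this. The pool and volume names are invented for illustration, and the parameter syntax may differ slightly by XCLI version, so check the XCLI reference before relying on it:

# grow an existing storage pool to use the newly added capacity (size in GB)
pool_resize pool=Prod_Pool size=20000

# create a new volume in that pool (size in GB)
vol_create vol=Prod_Vol_01 size=1000 pool=Prod_Pool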
The XIV has two concepts when it comes to space:
Hard limit --> How much useable capacity you ACTUALLY have
Soft limit --> How much space you can allocate to volumes.
The ability to set the Soft Limit above the Hard Limit means you can over allocate the hard capacity (if you so choose!).
In the screen captures, the Hard and Soft Limits went up because the amount of useable capacity went up.
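To give a made-up example of how that plays out: a pool with a Hard Size of 10,000 GB and a Soft Size of 15,000 GB lets you define up to 15,000 GB worth of volumes, even though only 10,000 GB of real capacity sits behind them, so you do need to keep an eye on actual usage if you go down that path.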
So to sum up..... to increase the capacity of the XIV while maintaining perfect load balance, the IBM Service representative performed about
2-3 minutes of physical work and then used two mouse clicks. Redistribution began and ran as a background task.
In the morning the client then increased the size of a storage pool (two mouse clicks) and the space was now ready for new volumes.
Easy as..... XIV.
So something I truly hate to see when visiting computer rooms is fibre cables hanging in the breeze with no dust covers, their precious glass connectors exposed to the world.
Even worse are fibre patch panels and HBAs without dust covers.
When new equipment arrives, every HBA, every patch panel port and every fibre optic cable will have a dust cover.
So what to do with these little guys once you remove them?
When you unplug a cable later you need to immediately re-install those precious covers both onto the cable and into the HBA or patch panel port, to protect the fibre optics from contamination.
I recommend storing dust covers in sealed plastic bags, preferably kept in the relevant rack so they are close to hand.
The picture below (taken from the rear of an XIV) is cute in that it shows a clever re-use of the XIV power cable covers,
but the dust covers are now exposed to contamination from the open air.
Since we are on the subject, take note of the colour coded power cables in the XIV.
It's another example of clever design to make power redundancy visually obvious.
The ends of the power cords are red, yellow and green to indicate which of the three UPSs these cables come from.
The unused cords at the top of the image are free because this machine is not fully populated with modules.
It's a given today that SAN implementations have redundancy by design.
When we sell SAN switches we always urge clients to use dual fabrics. It's accepted industry best practice.
By having hosts with dual HBAs, each host can attach to both fabrics and survive the failure of a SAN switch.
Equally when we sell SAN Volume Controller (SVC) we always sell the SVC nodes in pairs.
Each SVC node can run an SVC I/O group stand-alone, which again allows us to survive a node failure and do things like concurrent firmware updates.
So far so good.
But common practice appears to be that when installing redundant hardware, we place all the hardware into one rack.
In fact I routinely look in a rack and see Switch One directly mounted above Switch Two.
I see the same with SVC clusters, all the nodes often jammed into the same rack, mounted one on top of the other.
Is this a good idea?
I suspect 99.9% of the time it makes no major difference. But it's the 0.1% that can cause great pain.
Imagine going to work on a failed switch and then accidentally powering down the working one.
If each switch were in a separate rack, the likelihood of doing this would be significantly reduced.
I recently visited a major Australian bank where their Fibre Channel directors were located at opposite ends of a long row of racks, several metres apart.
The separation of distance guaranteed that any event in one rack could not possibly affect the other rack.
Good design... by design..... I liked it.
There is no reason you cannot install SVC nodes in the same way, each node in a separate rack.
Just don't separate them too far: when servicing a node you want to be able to see what the front panel of the other node is displaying.
Having said that, SVC split cluster (where we separate the nodes across or between buildings) truly separates the nodes for the highest levels of availability.
In many years of providing storage support, my greatest bugbear was time co-ordination.
I am not talking about work-life balance or finding the perfect time slot to schedule a meeting.
I am talking about getting every device in the data center to show log entries that are time-synchronized.
As an example: When investigating a SAN pathing issue with SVC, we can have time stamps from:
Correlating these time stamps is a manual process that can lead to bad conclusions about which device was responsible for a particular issue.
Getting the chicken mixed up with the egg benefits no one.
The solution to prevent this mixup is easy: use NTP.
Practically every device in the marketplace offers NTP support. For IBM this includes DS8700, XIV and SVC, plus Brocade and Cisco SAN switches.
So consult your operating manuals and get NTP set up without delay both on your storage kit and on everything else in the data center.
While you're at it, also ensure each device is set to the correct timezone.
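If you want a quick sanity check that a Unix host in the chain is actually syncing (assuming the standard NTP daemon is what's in use), something along these lines will do:

# query the local NTP daemon and list its peers, offsets and jitter
ntpq -p

# on AIX, confirm the xntpd subsystem is actually active
lssrc -s xntpd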
Your service providers will thank you and your problem managers will see faster and more accurate issue resolution.
It's been two weeks since I came back to work from a truly fantastic vacation with my family.
And boy what a busy two weeks it has been....
IBM had a great Q2 and since one of my roles is to deliver professional services, this means I am busy helping IBM clients implement their new IBM Storage Solutions.
My main focus at the moment is on DS8700 and XIV and since we had some great successes with both products, I have plenty to work on.
On top of that I am presenting at the Power and Storage Symposium in Melbourne, August 10-13. My two (separate) topics are on XIV Implementation and DS8700 Storage Configuration.
Which means even more things to keep me occupied.
In addition I have just received some exciting news that my application to do an IBM Redbook residency has been accepted.
IBM Redbooks are a great resource and help IBM truly stand out in the market place as an open and honest company with a real commitment to our clients and our products.
IBM Redbooks are the ultimate resource in answering the what, where, why and how questions you need answered when installing and using IBM Products (both software and hardware).
I am not aware of any other vendor who produces documentation of this standard.
Residencies are run in locations where we have large quantities of demonstration and lab gear. This includes Tucson, San Jose, Raleigh, Almaden, Hursley in the UK, Mainz in Germany....
The list goes on...
The residents are normally IBMers, but this is not always the case (so why not apply?). More importantly the residents bring their real world experience to these books.
This particular residency is on DS8000, focusing on this years enhancements to the DS8000 product.
The residency is not listed on the IBM Redbooks residency list because it is for IBMers only. So watch this space (it's going to be VERY exciting).
It may not surprise you, but residencies are actually quite expensive to run. IBM needs to get a return on investment, so an IBM Redbook residency is only run for products that IBM is strongly investing in.
For products that have a future.
So imagine my surprise when at the same time as this, I read Barry Burke's latest offering to the world.
I am certainly not scared to point anyone at his blog. As Tony Pearson rightly commented in Barry's (moderated) comments section, it is a "
It seems Barry and I live in alternate realities. It just surprises me that his employer chooses to do business this way.
I also left a comment on his blog. His response to me was interesting, he said: "
That at least I can agree with.
As for the accuracy of what he says? For IBM customers (especially the many who have purchased DS8700), I encourage you to ask your local IBM Sales Rep to share with you the DS8000 roadmap.
I think you will be quite impressed.
In the mean time.... I am going to be busy for the foreseeable future.....
(edited 27/07 to add more details on Redbooks)
I recently listened to a great podcast on VAAI from Greg Knierieman over at the Storage Monkeys website. You can find the podcast here (the whole site is well worth a visit).
He was talking with Marc Farley (his co-host, from 3PAR), Chad Sakac (from EMC) and Chris Evans (the 'Storage Architect'). The topic was VMware's newly announced VAAI.
To quote from the podcast, VAAI is a set of APIs focused on the VMware kernel to off-load various functions onto the storage.
To get the new VAAI functions you need to upgrade to the newly announced vSphere 4.1, and you will also need to upgrade your storage hardware firmware to a version that supports them (when such a version comes out).
Some of the major new functions are:
Hardware accelerated locking (to avoid the need for ESX to use SCSI reserves when doing meta-data updates)
Hardware accelerated full copy (to help VMWare clone data without having to do lots of read and writes)
Hardware accelerated zero (to avoid the need to send vast numbers of 'empty' SCSI write I/Os to zero out blocks)
Given who was on the call, the conversation focused mainly on what EMC, 3PAR and, to some extent, NetApp are doing in regard to this development.
Apart from some (good-natured?) digging at HP, no other vendor was really mentioned.
One thing that was mentioned was that some storage hardware architectures will lend themselves far better to VAAI than others.
In particular Chad mentioned that he would expect fullcopy and hardware accelerated zero would work better on V-Max than CLARiiON, due to hardware architecture differences that also benefit 3PAR.
I found that a really interesting observation.
What wasn't mentioned on the podcast was XIV.
So to be clear, the architecture of XIV lends itself very very well to the changes required to support VAAI.
To give an example of how we have done this with other vendors, XIV firmware 10.2.0a brought in support for Symantec Storage Foundation Thin Reclamation.
XIV support for Symantec's Storage Foundation and its Thin Reclamation API means that when data is deleted by a user of the thin-provisioning-aware Veritas File System (VxFS), XIV will immediately free and reclaim the unutilised blocks, rather than leaving them holding 'garbage' data that wastes space.
So have no doubts, XIV's architecture is very 'friendly' to the sorts of things VMware are trying to achieve with VAAI.
To underscore this, Chad also said that a VMware goal was that VMware admins should need "no requisite knowledge of the underlying infrastructure for any task". The goal is to use policies instead.
Given this goal, XIV is also a perfect match. With XIV there is no need to think about RAID types, RAID sizes, disk types, disk sizes, LUN allegiances and trespassing, controller workload balancing, or hot-spot detection, prevention or correction.
All of these concerns simply don't exist.
The good news is that XIV is working with the VMware Reference Architecture Lab and the statement of direction is that we will announce VAAI support for XIV later this year. XIV continues to be an excellent choice for VMware environments and when VAAI support is added to XIV, this will only improve.
Finally, Chad made a great quote on the podcast. He said: "Never trust any vendor when they talk about what other vendors are doing"
I think this is a really great statement and one that everyone should take to heart.
I spent the first week of my recent vacation in Sydney (I live in Melbourne).
For someone who has spent many weeks in Sydney on business, this was the first time I actually went there as a real tourist.
First up, it was a great break and my family and I can heartily recommend Sydney to anyone (from any part of the world) who is looking for a "holiday with the lot".
As part of our planning, we clearly had a budget to work to as well as many ideas about things to do.
Of course one of the big considerations was where to stay. I did a great deal of Googling around but relied on TripAdvisor heavily to help make our final decision.
I really love TripAdvisor for two reasons:
So what has that got to do with IT? Well those who follow storage blogs will have seen a spike recently in discussions on two subjects:
I hope that it's obvious that I work for IBM, so I accept that you may choose to view anything I say as potentially coloured by my relationship with my employer.
It has left me pondering where clients should go to get a good handle on what really matters most to them.
To choose the hotel that best matched my holiday requirements I used TripAdvisor.
But what can clients do?
I personally read many vendor and 'independent' blogs to try and ensure my 'world view' is as realistic and informed as possible.
There are some great blogs out there, but I do not know many of these people or their organizations personally. So I always read them through a haze of mild cynicism.
So is there a TripAdvisor for Storage IT? A place for genuine end-users to share their experience with specific solutions?
I have to say that XIV is one product that is crying out for such a beast. My experience is that client satisfaction levels with their XIVs are remarkably high, but is that always reflected on the Web?
So far the best place I have seen for shared experiences from 'real' people is the XIV group on LinkedIn.
It would be great to see more actual end users head over there and contribute (so please do!).
Finally I should also point out that if there is a business relationship between IBM (my employer) and TripAdvisor or LinkedIn, I am not aware of it.
My opinions on these organizations are totally my own.
And the hotel we stayed in? The Quay Grand on Circular Quay. The view from our balcony was priceless. Check it out:
Rob Jackard from the ATS Group has kindly supplied me with this great summation of recent updates to various parts of the IBM Support sites.
It's worth just running your eyes down the list to see if there is anything that might apply to you and your environment.
For XIV users, please update to the latest GUI release, version 2.4.3a (see the link below).
The release notes (which you can also find at the link below) detail an important fix that could prevent an outage when mapping new volumes
to a clustered group of hosts that uses private mappings (which you might do if you have separate dump or boot drives for clustered hosts).
DS3000 / DS4000 / DS5000:
(2010.06.21) Best Practices for Running an Oracle Database on an IBM Midrange Storage Subsystem.
(2010.06.15) IBM DS3500 ESM and firmware bundle version 3.16.
(2010.06.15) IBM DS3000 HDD firmware package version 4.7.
(2010.06.14) IBM DS3500 Controller Firmware bundle version 7.70.16.01.
(2010.06.03) Retain Tip# H196875: Upgrade from 6.xx firmware to 7.60.28.00 may fail - IBM System Storage.
NOTE: Affected configurations- DS4200 (Type: 1814), DS4700 (Type: 1814), DS4800 (Type: 1815). Release scheduled 2nd QTR 2010.
DS6000 / DS8000:
(2010.06.30) A quick capacity (physical/effective) table for DS6000 and DS8000.
(2010.06.30) Accelerate with ATS: DS8700 Easy Tier Webinar.
(2010.06.23) Potential Data Error on FlashCopy targets using Space Efficient FlashCopy (SEFLC) volumes after a DS8000 LPAR Failover or Failback.
(2010.06.21) IBM DS8000 Storage Virtualization Overview Including Storage Pool Striping, Thin Provisioning, Easy Tier.
(2010.06.02) Support for VMware Site Recovery Manager.
(2010.05.27) Enabling multipath SAN booting with DS8000 and DMMP.
N series:
(2010.07.02) Hard Disk Drive (HDD) Firmware for N series Publication Matrix.
(2010.06.29) Firmware release NA01 available for the Seagate SAS disk drive identifiers: X286_S15K6146A15, X287_S15K6288A15, and X289_S15K6420A15.
(2010.06.29) Firmware release NA01 available for the Seagate FC disk drive identifiers: X278_S15K6146F15, X279_S15K6288F15, and X291_S15K6420F15.
(2010.06.07) IBM System Storage N series FRU (Field Replaceable Unit) lists.
(2010.06.01) NEWS: Recommended Release for IBM System Storage N series Data ONTAP.
(2010.06.01) Data ONTAP 7.3.3 recommended for IBM Systems Storage N series.
SAN:
(2010.06.14) IBM SAN b-type Firmware Version 6.x Release Notes.
(2010.06.01) Data Center Fabric Migration Guide.
SVC:
(2010.06.16) IBM System Storage SVC Code V184.108.40.206.
(2010.06.16) 2145-CF8 Nodes May Stall on Error 231 During Upgrade to V220.127.116.11.
(2010.06.16) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.06.02) Support for VMware Site Recovery Manager.
(2010.06.02) SVC V5.1.0.x Cluster Nodes May Repeatedly Reboot When Performing Multiple Image Mode Migration Commands on the Same VDisk.
NOTE: This problem will be resolved in a future SVC release.
(2010.05.14) IBM System Storage SVC Code V18.104.22.168.
SSPC / TPC / TPC-R:
(2010.06.24) Accelerate with ATS: Working with TPC Disk- Midrange Edition.
(2010.06.24) Accelerate with ATS: TPC Disk Midrange Edition- Installation and Tailoring.
(2010.06.22) (IBM Internal/BP) Managing Virtualized Storage Environments with IBM Tivoli Storage Productivity Center.
(2010.06.21) TPC 4.1.x – Supported Storage Product List.
(2010.06.09) Latest Downloads for Tivoli Storage Productivity Center.
(2010.06.06) Ten things for the new TPC-SE Administrator to do to make TPC 4.1.1 more valuable.
(2010.06.04) Q2, 2010- IBM Tivoli TotalStorage Productivity Center Suite Customer Support Technical Information Update.
(2010.05.12) Tivoli Storage Productivity Center v4.1.x fix pack history technote (FAQ).
NOTE: Latest includes TPC 22.214.171.124 and TPC-R 126.96.36.199.
XIV:
(2010.07.01) IBM XIV Host Attachment Kit for AIX version 1.5.2.
(2010.07.01) IBM XIV Remote Support Proxy version 1.0.0.
(2010.06.23) IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3a for all platforms.
(2010.06.23) IBM XIV XCLI (only) for Linux/AIX/Solaris/HPUX, version 2.4.3.
(2010.06.21) ATS XIV- Asynchronous Mirror Webinar.
(2010.06.10) IBM XIV Storage System Application Programming Interface Reference.
AIX:
(2010.07.04) Oracle Architecture and Tuning on AIX v2.0.
(2010.06.30) Basic Monitoring of I/O on AIX.
(2010.06.11) AIX SDDPCM System Crash During a Dynamic Hardware Replacement of Controller A or B on a DS4K/DS5K Storage Subsystem
(2010.06.07) All hdisks and vpath devices must be removed from host system before upgrading to SDD host attachment script 188.8.131.52 and above. All MPIO hdisks must be removed from host system before upgrading to SDDPCM host attachment script 184.108.40.206.
(2010.06.04) SDDPCM open path failures and VIOS VTD configuration failures.
(2010.06.03) SDD User’s Guide correction for pcmquerypr tool.
So first up, I have been on leave for two weeks... thus my stunning silence.
The good news is that I had a great break; the bad news was the size of my inbox on return.
So for those of you who read my blog entry titled:
10 things I like about the IBM DS3500
https://www.ibm.com/developerworks/mydeveloperworks/blogs/anthonyv/entry/10_things_i_like_about_the_ibm_ds35002?lang=en
I now have reason number 11:
11. You can power it up with NO drives inserted and you can make configuration changes with only ONE drive inserted.
So why is this good?
The DS3200/DS3300/DS3400 range has a requirement that four drives be installed before you power the box on, or configure it.
This means the smallest config you can order has to have four drives in it.
The DS3500 has removed that requirement, meaning you can order and build a machine with only one disk drive.
IBM documentation is in the process of being updated to reflect this.
So one of the standard work tasks when running a data center is to record the IP addresses assigned to each device.
I am routinely surprised (and frankly a little disappointed) to see posts in the DW System Storage forum from end users reporting they cannot manage their storage devices, because they don't know the assigned IP addresses. This situation often arises due to:
But of course there are times when you don't know a device's IP address for the very simple reason that it arrived that way....
Yesterday I helped a colleague set up an old DS4500 storage device (as part of a more exciting Power7 demonstration he is setting up). The DS4500 came out of our demonstration machine pool and it was not set to the default IP addresses (where controller A is 192.168.128.101 and controller B is 192.168.128.102). Nor was there any accompanying documentation detailing what the IP addresses had been set to... not good.
So we had two choices:
Wireshark is the follow-on product to a previous piece of software called Ethereal. Wireshark is Open Source Software released under the GNU General Public License. It is free to download and install and is a very handy network sniffing tool.
After installing Wireshark on my laptop I selected Interfaces from the Capture drop-down:
From the Capture menu, you will get a list of interfaces. In my case I selected the Start option next to my ethernet interface:
I then attached an ethernet cable between my laptop ethernet port and the ethernet port of the DS4500.
I didn't use a cross-over ethernet cable, as my laptop ethernet port is auto-MDIX (which most ethernet ports are now).
I also didn't set any IP address on my laptop (as there is no point since we didn't know what IP address the DS4500 was using).
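As an aside, if you prefer the command line to the GUI, much the same capture can be done with tshark (Wireshark's command-line capture tool). Here eth0 is just my assumed interface name, so substitute your own:

# capture only ARP traffic on the laptop's wired interface and print each frame
tshark -i eth0 -f "arp"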
What I then looked for in the trace is one of two things.
The other handy trick is that Wireshark is smart enough to decode the manufacturer OUI (the vendor portion of the MAC address) to detect who made the device.
In the first example below, the bottom line gives me the answer I am looking for.
The manufacturer is Symbios, an early brand name used by the original manufacturer of the DS4500 (now owned by LSI).
It is sending a broadcast regarding 10.5.15.6.
So in this case my DS4500 is using IP address 10.5.15.6.
I repeated the exercise with controller B (which unsurprisingly was 10.5.15.7) and then set my laptop to 10.5.15.100 (so I was in the same subnet) and was able to communicate with the DS4500 and complete the setup stage.
This second example is from a Brocade SAN switch. I did this just to prove this process works with other devices.
Because this one is from my own internal lab, I have obscured the IP addresses. But the IP address on the right-hand side is the source IP address, which is the one I would be attempting to learn. The hint is that it is saying 'Tell 9.185.x.x', which clearly indicates that's who it is. Again Wireshark is smart enough to even tell me the attached device is a Brocade device. I also successfully tried this on a Cisco MDS switch.