I am in Singapore this week running a teach-the-teacher seminar on Storwize V7000. We are creating more instructors as demand for courses on the product continues to increase.
I am fortunate to be staying at the Marina Bay Sands Resort, which is one of the most mind-blowing facilities I have ever seen. Check out this view of the Infinity Pool from the Skydeck up on the 57th floor.
Out my hotel window they are building a huge new facility known as Gardens by the Bay, but frankly I think it looks more like a spaceport!
According to Wikipedia these are Supertrees: tree-like structures that dominate the Gardens landscape with heights that range between 25 and 50 metres. They are vertical gardens that perform a multitude of functions, which include planting, shading and working as environmental engines for the gardens.
But doesn't this look like two huge crashed spaceships?
Actually they will be giant conservatories, again according to Wikipedia they are the Flower Dome and the Cloud Forest.
They certainly know how to think big in Singapore!
XIV Gen 3 modules are built on a new generation of Intel microprocessors based on the Nehalem micro-architecture. Nehalem is the most profound architecture change that Intel has introduced in the 21st century. Some of the key changes and their benefits are:
- Integrated memory controller: The memory controller now sits on the same silicon die as the processors. It runs at the same clock-speed as the processors instead of at the lower speed of an external front-side bus. This dramatically improves memory performance and therefore overall system performance.
- No need for buffered memory: Previously, buffered memory was required to improve the performance of the memory sub-system. Buffered memory is relatively expensive and energy hungry. With the faster Nehalem integrated memory controller, the system can deliver improved performance without needing buffered memory, saving cost as well as energy. XIV Gen 3 will be faster and cooler at the same time using unbuffered DDR3 RAM. And since the memory is cheaper, we can put more in.
- Increased memory capacity: Nehalem supports more memory chips at higher speeds. In XIV Gen 3 this translates into a 50 to 200% increase in system cache, significantly lifting the performance headroom of an already stellar performer.
- No more front-side bus: Memory, second CPU package and peripherals no longer have to share and wait on a single bus to communicate. The connections are now direct or switched, enabling increased parallelism and the ability to do more work simultaneously.
- PCI Express Generation 2: The I/O sub-system doubles in speed with the introduction of PCI Gen-2. This enables faster network and I/O adapters for XIV Gen 3:
- 8 Gbps Fibre Channel host connections.
- More iSCSI host connections (including at the entry configuration of 6 modules).
- Multi-channel, low-latency InfiniBand as the inter-module connection.
- A slot for solid state disk (SSD).
- Better systems management instrumentation: The system supports increased monitors for sub-systems for more sophisticated self diagnostics and healing. Remote management capability has also been improved.
Furthermore, the new motherboards have additional expansion capacity (more processors, memory and I/O) that can be utilized to deliver future improvements in performance and increased software functionality.
XIV Gen 3 is not the first storage sub-system to adopt the Nehalem architecture. Some of our competitors (EMC and NetApp for example) have already done so with their dual-controller arrays. XIV Gen 3 takes the Nehalem architecture advantage forward, not twice, but six to fifteen times.
Many thanks to Patrick Lee for writing up this great summation.
Today IBM is announcing a new member of the XIV family, which we are calling XIV Gen3. I thought I would give a brief history of how we got here before I get too carried away with details.
What was Generation 1 of the XIV?
In 2002 an Israeli startup began work on a revolutionary new grid storage architecture. They devoted three years to developing this unique architecture that they called XIV.
They delivered their first system to a customer in 2005. Their product was called Nextra (does it look familiar?).
What was Generation 2 of the XIV?
In December 2007, the IBM Corporation acquired XIV, renaming the product the IBM XIV Storage System. The first IBM version of the product was launched publicly on September 8, 2008. Unofficially within IBM we refer to this as Generation 2 of the XIV.
The differences between Gen1 and Gen2 were not architectural; they were mainly physical. We introduced new disks, new controllers, new interconnects, improved management and additional software functions.
As anyone who has read my blog knows, I have been working on the Generation 2 XIV since the day IBM began planning to release it as an IBM product. So it is very exciting to be able to share with you that we are now releasing Generation 3 of the IBM XIV Storage System.
What is Generation 3 of the XIV?
Generation 3 of the XIV is a new member of the XIV family that will be an alternative to the Generation 2 XIVs we currently offer. It does not change the fundamental architecture; that remains the same. What it does do is bring significant updates to almost every part of the XIV, including:
- Introducing InfiniBand interconnections between the modules.
- Upgrading the modules to add 2.4 GHz quad-core Nehalem CPUs, new DDR3 RAM and PCI Gen 2 (using 8x slots that can operate at 40 Gbps).
- Upgrading the host HBAs to operate at 8 Gbps.
- Upgrading the SAS adapter.
- Upgrading the disks to native SAS.
- A new rack.
- A new dedicated SSD slot (per module) for future SSD upgrades.
- Enhancements to the GUI plus a native Mac OS version.
I will be blogging about each of these changes over the coming days and weeks as we move toward the general availability date, so watch this space. In the meantime, why not visit the official XIV page here and check out the ITG Report linked there.
I have received this question several times, so it's clearly something people are interested in.
The Storwize V7000 has two controllers known as node canisters. It's an active/active storage controller, in that both node canisters are processing I/O at any time and any volume can be happily accessed via either node canister.
The question then gets asked: what happens if a node canister fails, and can I test this? The answer to the question of failure is that the second node canister will handle all the I/O on its own. Your host multipathing driver will switch to the remaining paths and life will go on. We know this works because a firmware upgrade takes one node canister offline at a time, so if you have already done a firmware update, then you have already tested node canister failover. But what if you want to test this discretely? There are four ways:
- Walk up to the machine and physically pull out a node canister. This is a bit extreme and is NOT recommended.
- Power off a node canister using the CLI (using the satask stopnode command). This will work for the purposes of testing node failure, but the only way to power the node canister back on is to pull it out and reinsert it. This is again a bit extreme and is not recommended. This is also different to an SVC, since each SVC node has its own power on/off button.
- Use the CLI to remove one node from the I/O group (using the svctask rmnode command). This works on an SVC because the nodes are physically separate. On a Storwize V7000 the nodes live in the same enclosure and a candidate node will immediately be added back to the cluster, so as a test this is not that helpful.
- Place one node into service state and leave it there while you check all your hosts. This is my recommended method.
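For the CLI-inclined, entering and exiting service state can also be driven from the service CLI rather than the Service Assistant GUI. This is only a sketch: the panel name "02" below is a made-up example, and you should check the satask command reference for your code level before trying it.

```shell
# Sketch only: ssh to the service IP, then act on node 2 by panel name.
# Find your real panel names first with: sainfo lsservicenodes

# Put node 2 into service state (takes it offline to the cluster)
satask startservice 02

# ...check your host paths while node 2 is in service state...

# Bring node 2 back into the cluster
satask stopservice 02
```

Either way, the GUI method described below is the one I would recommend for most people, since it shows you the node status as you go.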
First up, this test assumes there is NOTHING else wrong with your Storwize V7000. We are not testing multiple failures here. You need to confirm that the Recommended Actions panel, as shown below, contains no items. If there are errors listed, fix them first.
Once we are certain our Storwize V7000 is clean and ready for test, we need to connect via the Service Assistant Web GUI. If you have not set up access to the service assistant, please read this blog post first.
So what's the process?
First, log on to the service assistant on node 1 and place node 2 into service state. I chose node 2 because normally node 1 is the configuration node (the node that owns the cluster IP address). You need to confirm you're connected to node 1 (check at top right), select node 2 (from the Change Node menu), then choose Enter Service State from the drop down and hit GO.
You will get this message confirming you're placing node 2 into service state. If it looks correct, select OK.
The GUI will pause on this screen for a short period. Wait for the OK button to un-grey.
You will eventually get to this with Node 1 Active and Node 2 in Service.
Node 2 is now offline. Go and confirm that everything is working as desired on your hosts (half your paths will be offline but your hosts should still be able to access the Storwize V7000 via the other node canister).
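How you confirm the surviving paths depends on your host multipathing driver. As one example (assuming a Linux host using device-mapper multipath; your device names and driver will differ), you might check with something like:

```shell
# List multipath devices and the state of each path.
# With node 2 in service state, expect roughly half the paths
# to show as failed/faulty, with I/O continuing on the rest.
multipath -ll

# SDD/SDDPCM users would use their own tools instead, e.g.:
# datapath query device
```

Whatever tool you use, the key check is the same: the paths to the surviving node canister stay active and your applications keep running.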
When your host checking is complete, you can use the same drop down to Exit Service State on node 2 and select GO.
You will get a pop up window to confirm your selection. If the window looks correct, select OK.
You will get the following panel. You will need to wait for the OK button to become available (to un-grey).
Provided both nodes now show as Active, your test is now complete.
I think this picture speaks for itself: Three XIVs. Three cities. Three way iSCSI.
All the mirror connections were created in seconds using drag and drop in the XIV GUI.
I can now take a volume in one city and mirror it to another.
And yes.... IBM Australia now has a demo XIV in each of three major cities, so why not drop by and have a look?
Over at SearchStorage.com.AU they recently published an article entitled Six reasons to adopt storage virtualisation. You can find the article here. The six given reasons are:
- Storage virtualisation reduces complexity
- Storage virtualisation makes it easier to allocate storage
- Better disaster recovery
- Better tiered storage
- Virtual storage improves server virtualisation
- Virtual storage lets you take advantage of advanced virtualisation features
It's a well-written article and I agree with every point. But one could be forgiven for reading the article and thinking either that storage virtualisation is new, or that storage virtualisation is something you might consider AFTER doing server virtualisation. Neither of which is true.
IBM embraced storage virtualisation in June 2003 when we announced our SAN Volume Controller (the IBM SVC). I even found a CNET.com article from way back then. You can find it here (the image below is a screen capture of that CNET website).
IBM's SVC product has been enhanced repeatedly since 2003 with an enormous list of supported host servers and backend storage controllers. We have added new functions every year including Easy Tier, split cluster, VAAI, an enhanced GUI and a new form factor for the SVC code in the form of the Storwize V7000.
So let me give you a seventh reason for adopting storage virtualisation: a vendor who has shown genuine support for this technology. No vendor has embraced storage virtualisation with more enthusiasm than IBM. We have an industry-leading solution with phenomenal SPC benchmarks, an enormous number of case studies and an architecture that does not lock you in. Indeed it is an architecture that can grow as you grow and that can be upgraded without disruption.
So please consider storage virtualisation from IBM, using either the SVC or the Storwize V7000. If you're in Australia, we have demo centers dotted around the country. Many of our Business Partners can also demonstrate IBM storage virtualisation using their own Storwize V7000s. If you're in Melbourne, feel free to give me a call and schedule a time to drop into Southgate.
Bob Leah is one of our leading lights in the developerWorks team. His blog (found here) is a great resource for Web designers. He recently created a new set of templates to enable a mobile page for developerWorks blogs. You can read his article about the new template here.
This morning I boldly went and installed the new templates and so far I think it looks fantastic, not only on the iPhone, but also the iPad and on regular browsers. My only complaint is that I lost the banner image of my Golden Retriever (my loyal hound Suzie). Bob assures me she will reappear soon. In the meantime, I would love to hear feedback about the new template. This is what it looks like on my iPhone:
Tivoli Pulse is coming to Melbourne July 27 and 28, 2011 at the Crown Promenade in Melbourne.
How many chances do you get to listen to the following speakers in one place?
- Nigel Phair, Director, Centre for Internet Safety, University of Canberra
- Steve Van Aperen, Human Lie Detector and Director of SVA Training and Australian Polygraph Services
- Laura Guio, Vice President, Storage Sales, STG Growth Markets, IBM
- Joao Perez, Vice President of Worldwide Tivoli Software Sales, IBM USA
- Jamie Thomas, Vice President Tivoli Strategy and Development, IBM USA
- Glenn Wightwick, Director, IBM R&D Australia
There are 12 case studies, presented mainly by the customers themselves. There are over 70 sessions in eight streams, plus expert keynote speakers and presentations. Pulse 2011 provides the tools to help advance your infrastructure goals.
Registration is free, so what excuse do you have? Find out more here. You can enroll here. The agenda front page is here. The detailed agenda is here.
I will be presenting on day two, talking about Virtualization and the Storwize V7000, so maybe I will see you there!
I have an admission: I am a bit of an Apple fanboy. Well, not a full-on Apple fanboy; I have an iPhone and an iPad but I don't have a MacBook (although if IBM starts offering a cash payment instead of giving laptops to mobile employees, that might change). But not everything is perfect in the land of Apple. Let me give you an example, one that I routinely find people are not aware of (apologies if you learnt all of this months ago).
The picture below appears to show three identical Apple charger packs (with Australian pins). You may have a similar collection. But are they all identical? Sadly not.
Only those with very good eyes can spot the difference by reading the rather pale decal on the bottom section of each charger. The text is so small and faint, I struggled to take a decent picture, but here is my sad attempt for one of them (they are all different):
So how are my three power adapters different?
- The first is marked as a 10 Watt USB Power Adapter (it came with an iPad). Its output is amusingly marked as 5.1 Volts DC at 2.1 Amps, which suggests 10.7 Watts.
- The second one is marked as a 5 Watt USB Power Adapter (it came with my iPhone). Its output is marked as 5 Volts DC at 1 Amp, which is indeed 5 Watts.
- The third is marked as an iPod USB Power Adapter. No stated wattage, but its output is marked as 5 Volts DC at 1 Amp, which again suggests 5 Watts. So perhaps my 5 Watt adapter and my iPod adapter are actually the same.
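If you want to check the maths yourself, rated wattage is simply volts multiplied by amps:

```shell
# Watts = Volts x Amps, for each adapter's rated output
awk 'BEGIN {
  printf "iPad adapter:   %.2f W\n", 5.1 * 2.1   # marked as 10 W
  printf "iPhone adapter: %.2f W\n", 5.0 * 1.0   # marked as 5 W
  printf "iPod adapter:   %.2f W\n", 5.0 * 1.0   # no marked wattage
}'
```

Which is why the "10 Watt" adapter amusingly works out to a little over 10 Watts.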
The big question that comes up: Are they interchangeable? The answer: Yes but with caveats.
If you have an iPad you should use the 10W adapter. If you use the 5W adapter it will still charge but at a much slower rate. Apple confirm this here where they state:
iPad will also charge, although more slowly, when attached to an iPhone Power Adapter (by which they mean a 5 Watt adapter).
If you have an iPhone or an iPod can you use the 10W adapter? The answer is yes! It will recharge with no ill effects. Apple confirm this here, where they state:
While designed for use with the iPad, you can use the iPad 10W USB Power Adapter to charge all iPhone and iPod models.
So I am putting my 5 Watt and iPod adapters in the cupboard and using the 10 Watt adapter exclusively. If you have an iPad and find it's recharging slowly, you may be using an older 5 Watt adapter (but you may need a magnifying glass to spot the difference!).
My suggestion to Apple? A few more cents worth of ink please, to make things more obvious.
To close, on my first Apple focused blog entry, let me pose a question:
Will it blend?
I read a great blog post recently on Written Impact that talked about how to create effective presentations. It's well worth reading and can be found here. They describe several different formats that will help you develop interesting presentations, ones that don't put your subjects to sleep.
Talking of presenting, I recently presented at the IBM Power and Storage Symposium in Manila. It was a great event and was very well attended. We even had cake to celebrate IBM's 100th birthday.
There are two IBM Symposiums coming up in Australia that I would love for you to attend:
The next IBM Power Systems Symposium will be held in Sydney running from August 16 to 19, 2011. We are currently finalizing the agenda on this one and while this symposium is dedicated mainly to IBM Power Systems... I will be attending and presenting on storage related topics. To check out the details and enroll, please head over to here.
An IBM Storage Symposium will be held in Melbourne running from November 15 to 17, 2011. The agenda is still being set, so if you have ideas about what you would like to see, please let me know. To check out the details and enroll, please head over to here. And yes! I will be attending and I will be presenting.
The Storwize V7000 and SVC release 6.1 introduced a new web GUI to assist with service issues, known as the Service Assistant. The Service Assistant is a browser-based GUI that is used to service your nodes. Much of what you traditionally did with the SVC front panel can now be done using the Service Assistant GUI. You can see a screen capture of the Service Assistant below:
While I would like to be optimistic and hope that you will never have to use the Service Assistant, you should always ensure your toolkit is equipped with every possible tool. I say this because one thing I have noted is that the majority of installs are not configuring the Service Assistant IP addresses. This is particularly apparent as clients upgrade their SVC clusters to release 6.1.
By default on Storwize V7000, the Service Assistant is accessible on IP addresses https://192.168.70.121 for node 1 and https://192.168.70.122 for node 2 (don't try to point your browser at them right now, as your network routing won't work - you would need to set your laptop IP address to the same subnet and be on the same switch. Details to do that are here). For SVC there are no default IP addresses, although we traditionally asked the client to configure one service address per cluster. The best thing for you to do is approach your network admin and ask for two more IP addresses for each Storwize V7000 and/or SVC I/O group. Once you have these two extra IP addresses, record them somewhere and then set them using the normal GUI.
It's an easy five-step process as shown in the screen capture below. Go to the Configuration group and then choose Network (step 1). From there select Service IP Addresses (step 2) and the relevant node canister (step 3). Choose port one or port two (step 4) and then set the IP address, mask and gateway (step 5).
You can also set them using CLI (replace the word panelname with the panel name of each node, which you can get using the svcinfo lsnode command).
satask chserviceip -serviceip 10.10.10.10 -gw 10.10.10.1 -mask 255.255.255.0 panelname
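For example, to set both nodes in one go (the panel names 01-1 and 01-2 below are made up for illustration; use the panel names that svcinfo lsnode reports for your system, and your own addresses):

```shell
# Set the service IP on each node canister of the I/O group
satask chserviceip -serviceip 10.10.10.10 -gw 10.10.10.1 -mask 255.255.255.0 01-1
satask chserviceip -serviceip 10.10.10.11 -gw 10.10.10.1 -mask 255.255.255.0 01-2
```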
If you forget these IP addresses, you can reset them using the same CLI commands or using the Initialization tool as documented here.
Finally having set the IP addresses, visit the service assistant by pointing your browser at each address. This is just to confirm you can access it. You logon with your Superuser password. With the process complete, ensure the IP addresses are clearly documented and filed away. So now if requested, you will be able to perform recovery tasks (in the unlikely chance they are needed). If for some reason your browser keeps bringing you to the normal GUI rather than the Service Assistance GUI, just add /service to the URL, e.g. browse to https://10.10.10.10/service rather than https://10.10.10.10.
So what should you do now?
If you're an SVC customer on SVC code version v5 or below, please get two IP addresses allocated for each SVC I/O group, so you can set them the moment you upgrade to V6. Do this once the upgrade is complete.
If you're an existing Storwize V7000 client or an SVC client already on V6.1 or V6.2 code, then hopefully you have already set the service IP addresses. If not, please do so and test them.
I thought I would write a quick post about an issue that's not new, but is certainly worth being aware of....
One of the interesting tricks with the change to 8 Gbps Fibre Channel is that it required a change to the way the switch handles its idle time... the quiet time when no one is speaking and nothing is said. In these periods of quiet contemplation, a fibre channel switch will send idles. When the speed of the link increased from 4 Gbps to 8 Gbps, the bit pattern used in these idles proved to not always be suitable, so a different fill pattern was adopted, known as an ARB. All of this came to intrude on our lives when it became apparent that some 8 Gbps storage devices were having trouble connecting to IBM branded 8 Gbps capable Brocade switches because of this change. This led to two things:
- IBM released several alerts regarding how to handle the connection of 8 Gbps capable devices to 8 Gbps capable fibre channel switches.
- Brocade changed their firmware to better handle this situation.
An example of what was said?
"Starting with FOS levels v6.2.0, v6.2.0a & v6.2.0b, Brocade introduced arbff-arbff as the new default fillword setting. This caused problems with any connected 8Gb SVC ports and these levels are unsupported for use with SVC or Storwize V7000.
In 6.2.0c Brocade reintroduced idle-idle as the default fillword and also added the ability to change the fillword setting from the default of idle-idle to arbff-arbff using the portcfgfillword command. For levels between 6.2.0c and 6.3.1 the setting for SVC and Storwize V7000 should remain at default mode 0.
From FOS v6.3.1a onwards Brocade added two new fillword modes with mode 3 being the new preferred mode which works with all 8Gb devices. This is the recommended setting for SVC and Storwize V7000"
So there are several tips that I will point you to, depending on your product of interest:
Brocade Release Notes
For most environments, Brocade recommends using Mode 3, as it provides more flexibility and compatibility with a wide range of devices. In the event that the default setting or Mode 3 does not work with a particular device, contact your switch vendor for further assistance. IBM publishes all the release notes for Brocade Fabric OS here.
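As a sketch of what setting mode 3 looks like on the switch itself (the slot/port numbers below are examples only, and command syntax can vary between FOS levels, so check the Fabric OS command reference for yours):

```shell
# Set fill word mode 3 on port 10 of slot 2
# (the port will briefly bounce when the setting changes)
portcfgfillword 2/10, 3

# Verify the setting - look for the fill word entry in the output
portcfgshow 2/10
```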
Check out this link if you're connecting an 8 Gbps capable DS3500, DS3950 or DS5000 to an 8 Gbps capable switch: http://www-947.ibm.com/support/entry/portal/docdisplay?brand=5000028&lndocid=MIGR-5083089
There is no tip for the DS8800 but the advice remains effectively the same as for the Storwize V7000. I can confirm that using a fill word setting of 3 works without issue.
SAN Volume Controller or Storwize V7000
Check out this link if you're connecting a Storwize V7000 or a CF8 or CG8 SVC node to an 8 Gbps capable switch: https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003699&wv=1
The XIV Gen3 comes with 8 Gbps capable Fibre Channel connections. It does not support idle fill words, meaning that the portCfgFillWord value should not be set to 0.
When an IBM System z server attaches an 8 Gbps capable FICON Express-8 CHPID to a Brocade switch with 8 Gbps capable SFPs, you should upgrade your switches or directors to Fabric OS (FOS) 6.4.0c or 6.4.2a and set the fill word to 3 (ARBff).
LTO5 and TS1140
IBM have two tape drives that are capable of 8 Gbps, the LTO-5 drive and the TS1140. Setting the fill word to 3 can actually cause issues with these drives. To avoid issues do one of the following (you only have to do one of these, not all three):
- Load the tape drive with firmware that has the access fairness algorithm fix for loop:
- LTO5 drives should be on BBN0 and beyond (you may need to contact IBM support to get this code).
- TS1140 drives should be on drive firmware 5CD or beyond.
- Change the Fibre Channel topology to point-to-point (N port) (as opposed to L or NL). This is my preferred option.
- Change the Fibre Channel speed to 4 Gbps. This sounds slightly retrograde, but it is very rare for an individual drive to sustain a speed above 400 MBps (unless your data is very, very compressible).
**** UPDATED 28 Feb 2012 - Added System z FICON and Tape info ****
I recently got a great email from an IBMer in the Netherlands by the name of Jack Tedjai. He sent me two screen shots, taken with the new performance monitor panel (that comes with the SVC and Storwize V7000 6.2 code). He wrote:
I am working on a project to migrate VMware/SRM/DS5100 to SVC Stretch Cluster and one of the goals is to prevent using ISL (4Gbps) and VMware Hypervisor/HBA load during the migration. For the migration we are using VMware Storage vMotion. To minimize the impact of the migration on production, we tested VAAI for Storage vMotion and template deployment and it worked perfectly.
So what's this all about? Well, one of the improvements provided with VAAI support is the ability to dramatically offload the I/O processing generated by performing a storage vMotion. Normally a storage vMotion requires an ESX server to issue lots of reads from the source datastore and lots of writes to the target datastore. So there is a lot of I/O flowing from ESX to the SVC, and then from the SVC to its backend disk. What you get is something that looks like the image below. In the top right graph we have traffic from SVC to ESX (host to volume traffic). In the bottom right graph we have traffic from the SVC to its backend disk controllers (DS5100 in this case). This is SVC to MDisk traffic.
When we add VAAI support to the SVC, we suddenly change the picture. Suddenly VMware does not need to do any of the heavy lifting. There is almost no I/O between VMware and the SVC (no host to SVC volume traffic) related to the vMotion. The SVC is still doing the work, but it is happening in the background without burning VMware CPU cycles or HBA ports (in that there is still SVC to MDisk traffic).
This difference translates to: Faster vMotion times, far less SAN I/O and far less VMware CPU being used on this process.
So do VMware support this? They sure do! Check this link here. It currently shows something like this (taken on June 23, 2011):
So what are your next steps?
- Upgrade your Storwize V7000 or SVC to version 6.2 code. Download details are here.
- Download and install the VAAI driver onto your ESX servers. You can get it from here. If you're already using the XIV VAAI driver you need to upgrade from version 184.108.40.206 to version 1.2. There is an installation guide at the same link.
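Once the driver is installed, you can sanity-check that hardware offload is enabled on each ESX host. On ESX/ESXi 4.1 something like the following should work from the host console (a sketch only; consult VMware's documentation for your exact build):

```shell
# A value of 1 means the host will attempt the hardware-offloaded
# primitives: full copy (XCOPY), block zeroing, and ATS locking
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
```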
And the blog title? It means friendly greetings in Dutch. So to Jack (and to all of you), vriendelijke groeten and please keep sending me those screen captures.
If you're a user of XIV, or you're considering purchasing an XIV, then there is one tool that you will truly love. It's called XIVTop. The XIVTop application comes packaged with the XIV GUI and is one of the handiest add-ons I have ever seen. It lets you monitor your XIV in real time, seeing exactly how much I/O or throughput is being achieved and at what response time (in milliseconds). You can immediately answer questions like:
- Is poor application response time being caused by poor storage response time?
- What application is currently generating so much traffic on the SAN?
- What effect has performing file de-fragmentation had on performance?
- Are the backups running and how much traffic are they generating?
- What happens when I run multiple application batch jobs at the same time?
The ability to get this information in real time is what makes XIVTop so invaluable.
So in the tradition of always pushing my boundaries, I thought I would create a narrated video about XIVTop. What I discovered is just how hard making narrated videos is: You need to write a script... you need to stick to the script... you need to not fluff any words... you need to speak slowly and clearly and not start talking in a strange accent. I had trouble with all of these, so I made take after take after take, until I was heartily sick of the process. I now have a much greater respect for newsreaders and film actors. This narration stuff is hard!
So please check out my final take. It's still far from perfect, but all feedback is very welcome. The only other thing that is quite strange is YouTube's choice of videos to watch after mine. It's worth watching just to see the list. I think the term performance confuses the algorithm.
I joined IBM on June 26, 1989, so this Sunday brings up my 22 year anniversary with the company. No small achievement, but I am still three years away from the mystical IBM Quarter Century Club. Of course for some, 22 years is nothing! I recently learned that Robert Neidig, who has been (and remains) a leading light in promoting IBM's Mainframe products, joined IBM on June 21, 1961. So this year brings up his 50th anniversary with the company!
For those with long memories, Bob has worked with the following IBM systems: 1401, 1410, S/360, S/370, 3031, 3032, 3033, 3081, 3083, 3084, 3090, ES9000, S/390, eServer zSeries, and System z. They have all been enhanced by Bob's contributions.
If you want to check out the history of some of these world-changing products, visit The IBM Mainframe Room. I particularly loved the Photo Album. There are some truly classic images of IBM products of old. If you're forward looking, feel free to also visit the System z homepage.
So thanks Bob for your commitment and leadership on your half-centennial, truly a remarkable achievement!
This is a Severity 1 issue at 2am
I wrote a blog post recently about my favourite podcasts. One of those I listed was Background Briefing, a radio program broadcast by the Australian Broadcasting Corporation (Australia's ABC). A recent episode entitled Fatigue Factor really sparked my interest. It talked about the effects of fatigue on professions such as:
- Air Traffic Controllers
- Train drivers
- Truck drivers
It contained some alarming facts about the potential effects of fatigue and is well worth taking the time to listen to. However, in my opinion there was one major omission:
It did not mention workers in the IT industry.
For many years I worked as the Account Engineer for several of IBM's System z customers, mainly banks. Most weekends I skipped Saturday night as a sleep night. If I was lucky I might get to sleep from 10pm to 1am and would then head off to vast, noisy, dehydrating air conditioned computer rooms to perform various system changes. If I did my job well, had no hardware issues and the client confirmed everything was running as expected, I got to head home about 7am on Sunday. So that night I would have slept somewhere between zero and three hours. I would then spend the rest of the week recovering, before doing it all over again the following weekend at a different customer.
I mention all of this because fatigue was something I learnt to live with. Even when I moved to a support role, I still occasionally worked through the night on critical situations (something IBM calls Crit Sits). I also worked on a support roster which could involve 3am callouts to assist my fellow IBMers across the Asia Pacific region. So when I later moved to a Pre-Sales role, it certainly did wonders in helping me re-establish normal sleep patterns.
Listening to this podcast really brought home to me that the IT industry is just as guilty of failing to deal with fatigue as the other industries that the podcast discusses. Now if you're thinking this means it's an IBM problem, think again. Most weekends I was working alongside representatives from EMC, HDS, StorageTek, etc. Plus of course there were the clients themselves, many of whom were also missing a night's sleep to satisfy their change and business requirements.
One of the major issues raised in the podcast is that there is no accepted way to measure how fatigued an employee actually is. This is a major problem. There are established tests to confirm how affected someone is by alcohol or by drugs. But we cannot easily confirm how badly fatigued a worker is; plus many people are unwilling or unable to admit that they are suffering from fatigue.
If we think about many of the major IT-related outages that have occurred recently, I ponder what role fatigue played in each one. Even if it didn't cause the initial issue, did making your employees work around the clock to resolve an issue actually extend the outage time? For example, have a read of Amazon's explanation of its recent Service Disruption. Just picking on some of the lines in the report:
At 12:47 AM PDT on April 21st, a network change was performed...
At 2:40 AM PDT on April 21st, the team deployed a change...
By 5:30 AM PDT, error rates and latencies again increased ....
At 11:30AM PDT, the team developed a way to prevent....
Was the person doing the change working outside their usual sleep pattern? Was the team working to resolve the issue working outside their normal sleep pattern? Did fatigue compound the outage? It's an interesting idea. Now it may well be that fatigue had NOTHING to do with this outage. It is pure speculation on my part. But I am certain that the root causes of many of the recent IT meltdowns and their extended after-effects (such as Sony's ongoing issues) MUST include the debilitating effects of fatigue.
Plus here is another rather disturbing fact. To quote from the podcast:
... if you're sleep deprived, you're more likely to crave chips over lettuce, and feel less like climbing the stairs. And that can become a vicious cycle, because many people who are overweight are even more prone to sleep disorders....
So please take the time to listen to the podcast. You will find it here and in places like iTunes.
If you're reading my blog, you're probably interested in IBM Storage hardware (since apart from Bow Ties, that's all I talk about). So I would hope you're already subscribed to IBM's notification service that you will find here. Rob Jackard from the ATS Group (an IBM Business Partner based in the USA) puts together a summary of these notifications which he sends to me on a regular basis. So I am bringing them to you here. Now hopefully none of these alerts are news to you... but please, have a read and if you have not done so already.... SUBSCRIBE!
DS3000 / DS4000 / DS5000:
(2011.06.09) IBM Retain Tip# H202771 – Expanding Dynamic Capacity Expansion (DCE) large arrays may fail due to out of memory conditions.
NOTE: The 7.xx firmware for the DS Storage Controller is affected. This is a permanent restriction. Possible workarounds are available.
(2011.05.27) Documentation: Instructions for opening the IBM System Storage DS Storage Manager interface are incorrect.
(2011.05.25) DS3950 / DS4000 / DS5000 Recommended Firmware Levels.
(2011.05.18) IBM Retain Tip# H202849 – Dynamic Volume Expansion is not possible on a LUN which is in an active mirror relationship with write-mode of ‘Asynchronous not write-consistent’.
NOTE: The DS Storage Controller is affected. A workaround is available.
(2011.05.10) IBM Retain Tip# H202771- Expanding (DCE) large arrays may fail due to out of memory conditions.
DS8000 / DS6000:
(2011.06.07) DS8800 Code Bundle Information.
(2011.06.03) EXN3500 (2857-006) Storage Expansion Unit Publication Matrix.
(2011.05.19) Excessive drive spinning up (0x2 – 0x4 0x1) messages on healthy EXN3000.
(2011.05.05) NEWS: Recommended Releases for IBM System Storage N series Data ONTAP.
(2011.04.28) DataFabric Manager (DFM) 4.0.2 Publication Matrix.
(2011.05.10) Cisco MDS Field Notice: FN-63416 – DS-C9124 & DC-C9148 have incorrect MAC Programming; UMPIRE Program in Place.
(2011.05.19) Intel has reported PAGE FAULT OR CORRUPTED DATA USING 64-BIT APP IN 64-BIT NOS (Fix is Now Available).
SVC / Storwize V7000:
(2011.06.10) IBM SAN Volume Controller Code V220.127.116.11.
(2011.06.10) IBM Storwize V7000 Code V18.104.22.168.
(2011.06.10) SAN Volume Controller and Storwize V7000 Software Upgrade Test Utility V6.5.
(2011.06.10) IBM Storwize V7000 Initialization Tool.
(2011.06.10) Storwize V7000 Concurrent Compatibility and Code Cross Reference.
(2011.06.10) IBM Storwize V7000 V6.2.0 – Installable Information Center and Guides.
(2011.06.10) IBM System Storage SAN Volume Controller and Storwize V7000 V6.2 – Command-Line Interface Guide.
(2011.06.10) IBM System Storage SAN Volume Controller and Storwize V7000 V6.2 – Troubleshooting Guide.
(2011.06.10) Incorrect Usage of Drive Upgrade Command May Cause Loss Of Access to Data.
NOTE: This issue was resolved by APAR IC74636 in the V22.214.171.124 release of the Storwize V7000 software.
(2011.06.10) Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2 TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume.
NOTE: This issue is fixed by APAR IC76806 in the 126.96.36.199 and 188.8.131.52 PTF releases.
(2011.06.08) IBM SAN Volume Controller Code V184.108.40.206.
(2011.06.08) IBM Storwize V7000 Code V220.127.116.11.
(2011.05.27) IBM SAN Volume Controller Code V18.104.22.168.
(2011.05.27) IBM Storwize V7000 Code V22.214.171.124.
(2011.05.27) Storwize V7000 Systems Running V126.96.36.199-V188.8.131.52 Code May Shut Down Unexpectedly During Normal Operation, Resulting in a Loss of Host Access and Potential Loss of Fast-Write Cache Data.
NOTE: If a single node shutdown event does occur when running V184.108.40.206, this node will automatically recover and resume normal operation without requiring any manual intervention. IBM Development is continuing to work on a complete fix for this issue, to be released in a future PTF, however customers should upgrade to V220.127.116.11 to avoid an outage.
(2011.05.09) SVC V4.3.x End of Service – April 30, 2012.
SSPC / TPC / TPC-R:
(2011.06.10) Administration of the TPC Environment: A Guide for TPC Administrators.
(2011.06.08) TPC4.2.x - Supported Storage Products Matrix.
(2011.06.08) TPC 4.1.x – Supported Storage Products List.
(2011.05.23) Shutdown sequence for TPC for Replication.
(2011.05.19) Fabric probe causes instability in Brocade DCFM server.
NOTE: Brocade Defect 332161 has been identified and is resolved in DCFM version 10.4.5.
(2011.05.19) Configuring Oracle for TPC for Databases.
(2011.05.17) TPC web browser support – Firefox 4.x and Internet Explorer 9.
(2011.05.06) TPC- Resolving Issues with Cisco Switches.
(2011.05.05) How to resolve TSPC Server service start problems when TIVGUID is mistakenly uninstalled.
(2011.04.29) Tivoli Storage Productivity Center v4.1.1 Fix Pack 6 (April 2011).
(2011.04.29) TPC database size increases after upgrade to 4.2.1.
XIV:
(2011.06.09) IBM XIV Host Attachment Kit for AIX v1.6.
(2011.05.24) Potential Problem on XIV Storage System ranging microcode versions 10.2.2 thru 10.2.4.a that can be caused by changing system time via Network Time Protocol (NTP) or when changing the clock via XCLI.
(2011.05.10) IBM XIV Storage System Planning Guide.
As Barry Whyte pointed out in this blog post, the release 6.2 code is available for download and installation onto your SVC and Storwize V7000.
- The Storwize V7000 release of the 6.2 code is here.
- The SVC release of the 6.2 code is here.
I thought I would quickly check out two of the announced features of the 6.2 release: the new Performance Monitor panel and support for greater than 2 TiB MDisks. So on Sunday I got busy and upgraded my lab Storwize V7000 to version 18.104.22.168.
Remember that in nearly every aspect the firmware for the SVC and Storwize V7000 are functionally identical, so while I am showing you a Storwize V7000, it equally applies to an SVC.
Firstly I tried the performance monitor panel, and what better way to show you what I saw than on YouTube? This is my first YouTube video so please forgive me if it's not slick. I started the performance monitor and captured two minutes of performance data using Camtasia Recorder. Because it is fairly boring to stare at graphs slowly moving right to left, I then sped it up eight times, and this is the result:
The video is shot in HD, so if what you're seeing is grainy or hard to read, change the display to 720p or 1080p. Now if you want to see the performance monitor at its actual speed, here is the original normal speed video. Remember this is the same video as above, just slower. It can also be viewed in 720p.
So what are you seeing?
- The top left hand quadrant is CPU utilization.
- The top right hand quadrant is volume throughput in MBps as well as current volume latency and current IOPS.
- The bottom left hand quadrant is Interface throughput (FC, SAS and iSCSI).
- The bottom right hand quadrant is MDisk throughput in MBps as well as current MDisk latency and current IOPS.
You will note that each metric has a large number (which is the current metric in real time) and a historical graph showing the previous five minutes. You can also change the display to show either node in the I/O group.
I found the monitor to be genuinely real time: the moment I changed something in the SAN (such as starting or stopping IOMeter or starting or stopping a Volume Mirror), I immediately saw a change.
Greater than 2 TB MDisk support
Next I logged onto my lab DS4800 and created two 3.3 TiB volumes to present to the Storwize V7000. I chose this size because I had exactly 6.6 TiB of available free space on the DS4800 and I wanted to demonstrate multiple large MDisks. On versions 6.1 and below, the reported size of the MDisks would have been 2 TiB (as I discussed here). Now that I am on release 6.2 with a supported backend controller, I can present larger MDisks. In the example below you can clearly see that the detected (and usable) size is 3.3 TiB per MDisk.
What controllers are supported for huge MDisks?
The supported controller list for large MDisks has been updated. The links for Storwize V7000 6.2 are here and for SVC here. If your backend controller is not on the list, then talk to your IBM Sales Representative about submitting a support request (known as an RPQ).
I recently created a post about the XIV Host Attachment Kit (amusingly called the HAK). IBM has released an update to the HAK, taking us from version 1.5 to version 1.6. The updated versions, along with release notes and installation instructions can be found at the following links:
IBM XIV Host Attachment Kit for AIX, Version 1.6
IBM XIV Host Attachment Kit for HP-UX, Version 1.6
IBM XIV Host Attachment Kit for RHEL, Version 1.6
IBM XIV Host Attachment Kit for SLES, Version 1.6
IBM XIV Host Attachment Kit for Windows, Version 1.6
What's changed, you ask? Great question! Checking the Release Notes for each Operating System (which can be found in the links above), I found some common improvements to the HAK for every OS:
- The xiv_diag command now provides the HAK version number when used with the --version argument. This is handy to confirm what version of HAK you are currently running.
- More information is collected with the xiv_diag command.
- The xiv_devlist command can now display LUN sizes in different capacity units, by using the -u or --size-unit argument. I give an example below.
Usage: -u SIZE_UNIT, --size-unit=SIZE_UNIT
Valid SIZE_UNIT values: MB, GB, TB, MiB, GiB, TiB
- The xiv_devlist output can be saved to a file in CSV or XML format, by adding the -f or --file argument. I give an example below.
There are also several other fixes which are mainly common between Operating Systems. Given that a major part of the HAK is a set of Python scripts such as xiv_attach, xiv_devlist and xiv_diag, and given that the output and behaviour of these scripts is very similar for each OS, this is not surprising.
I installed the new version 1.6 HAK onto my 64-bit Windows 2008 server and found another pleasant surprise: When I ran the xiv_attach command it detected that my Qlogic driver was downlevel. In this example it detected I am running a Qlogic QLE2462 on driver version 9.18.25 and suggested I should instead run driver version 9.19.25.
I then tried out the xiv_devlist command, displaying volume sizes in both decimal (GB) and binary (GiB). Note the syntax I used to get the GiB output: xiv_devlist -u GiB
Finally I offloaded the output of the xiv_devlist command to a CSV file. Again please note the syntax as you may find it useful:
xiv_devlist -t csv -f devlist.csv -u GiB
You could use -t xml instead to get XML output rather than CSV. Clearly you could also change the file name devlist.csv to any filename you like.
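Since the -u argument lets you switch between decimal and binary units, it is worth remembering how much they differ. Here is a quick sketch in plain Python (this is not part of the HAK, and the 17 GB volume is just an example size):

```python
# Decimal vs binary capacity units, as selected by the -u argument.
GB = 10**9    # decimal gigabyte
GiB = 2**30   # binary gibibyte

size_bytes = 17 * GB                 # e.g. a volume created as 17 GB
print(size_bytes / GB)               # 17.0 (decimal view)
print(round(size_bytes / GiB, 1))    # 15.8 (binary view)
```

The same volume appears roughly 7% smaller when displayed in GiB, which is why it pays to note which unit a tool is reporting.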
You do not need to worry about which version of firmware your XIV is running. The release notes confirm HAK version 1.6 will work with XIV firmware 10.1.0, 10.2.0, 10.2.2, 10.2.4 and 10.2.4a, which should cover pretty well every machine in the world.
One final note: Under Known Limitations the release notes state that you should not map a LUN0 volume. This simply means leaving LUN0 disabled (which is the default). In the example below I start mapping volumes from LUN1 and have NOT clicked to enable mapping of volumes to LUN0. This should be the norm.
Any confusion or questions? You know where to find me.
Many months ago I set up my WordPress blog (this is not the one you're reading now, but the mirror of this blog I maintain over at WordPress). One of the configuration choices I had was to enable a mobile version of the site. This setting changes the user experience when using a mobile device. It was a very easy thing to set up:
The difference between the mobile version and the non-mobile version is fairly stunning, as can be seen below; both views are from an iPhone 3GS. The mobile version is on the left and the non-mobile version is on the right. Note that there is no difference in the selected URL:
In March, WordPress added a new feature from Onswipe to allow Apple iPad users to have a more iPad-friendly user experience. You can read the announcement on Onswipe's blog. Again for the content creator (me), the work to set this up was practically non-existent; in fact I don't even recall having to turn it on.
And the result? If you visit my blog on an iPad, the look and feel is amazing. It grabs the first image from each blog post to build a really nice front page. It means I will have to take more care with my opening images!
Now the obvious question is: What about Android? If I check the WordPress FAQ found here, it says that support is coming.
So if you like the look of the mobile version, feel free to switch to using my Wordpress blog. It contains all the same posts and is found here:
With Brocade's recent announcement of a 16 Gbps capable Fibre Channel Switch and Director, the question of which cable type to purchase becomes even more relevant. Do you buy OM3 or OM4 cable over OM2?
Now if you're saying... OM-what? Let me start at the beginning...
Back when fibre channel was fresh and new and ran at 1 Gbps, the common multi-mode fibre cable that we used had a glass core that was 62.5 microns in diameter. This became known as OM1 type fibre cable. We rapidly switched to 50 micron cores because you could get a reliable signal across a longer distance, say 500 meters maximum rather than 300 meters. The 50 micron cable became known as OM2 type cable.
What has happened since then is that fibre channel speeds have moved from 1 Gbps to 2 Gbps to 4 Gbps to 8 Gbps to 16 Gbps. This is exciting stuff, but with every increase in speed, we suffer a decrease in maximum distance. This means that something else needs to change... and that something is the quality of the cables, or more specifically, the modal bandwidth (the signalling rate per distance unit).
With the evolution of 10 Gbps Ethernet, the industry produced a new standard of fibre cable which the fibre channel world can happily use. It's called laser-optimized cable, or more correctly: OM3. Since then OM3 has been joined by an even higher standard known as OM4.
Let's look at the distances we can achieve with different cable types. You can see in the table below that the modal bandwidth (given in MHz times kilometers) improves as we move to higher quality glass. You can also see that single mode fibre (with the 9 micron core) has not suffered the same decrease in maximum distance as speeds have increased. These numbers come from Brocade's SFP specification sheets found here and here (so there may be slight variations if you view specs from other vendors).
I didn't fill in the table for 1 Gbps and 2 Gbps using OM4 cable, simply because I couldn't find it... but the distances would be very large indeed.
So how can you tell what sort of cable you have? The first hint is the colour, the second is the printing on the cable. Cables that are 50 micron and orange are almost certainly OM2. Cables that are aqua in colour (don't call them green!) are either OM3 or OM4. In the example below I can clearly tell which cable is OM3.
Pictured below is a roll of OM3 cable, all ready for deployment with standard LC connectors. Note you can also get OM3 cable with a smaller LC type connector used on the mSFPs in the high density 64 port blades in the Brocade DCX. You can find additional information on identifying cables here.
So should you be buying OM3 cable over OM2? Or even considering OM4?
The reality is that in many cases, server and storage hardware is often in the same or adjacent racks to the switch hardware. If this is true for your site, OM2 will satisfy the vast bulk of requirements, because the distances are quite short. The most common cable I add to configurations is either 5 or 25 meters long. This is why OM2 is still IBM's cable of choice, since either length would satisfy 16 Gbps connectivity. Checking with some local cable vendors, OM2 cable also remains the cheaper alternative.
Clearly if your computer room is large enough to need cable runs of over 35 meters, then serious consideration should be given to future proofing parts of your cable infrastructure with OM3 (or even OM4). There is nothing wrong with having a mix of cable types - just don't join them together.
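To make the distance trade-off concrete, here is a small sketch. The distance figures are indicative values from my reading of Brocade's SFP data sheets; treat them as assumptions and check the data sheet for your actual SFPs before planning cable runs:

```python
# Indicative maximum multi-mode link distances in metres, by cable
# type and link speed (assumed values; verify against your SFP
# vendor's data sheet before relying on them).
MAX_METRES = {
    ("OM2", 8): 50,
    ("OM3", 8): 150,
    ("OM2", 16): 35,
    ("OM3", 16): 100,
    ("OM4", 16): 125,
}

def cable_ok(cable: str, gbps: int, run_metres: int) -> bool:
    """Return True if a run of this length should work at this speed."""
    return run_metres <= MAX_METRES[(cable, gbps)]

print(cable_ok("OM2", 16, 25))   # True: a short in-rack run is fine
print(cable_ok("OM2", 16, 50))   # False: beyond ~35 m, consider OM3/OM4
```

This is exactly the pattern above: short in-rack runs are comfortably within OM2's reach even at 16 Gbps, while longer runs push you towards OM3 or OM4.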
I would be curious to know how many sites are choosing to move to OM3? Feel free to comment either way. I think there will be more to come on this subject, and remember.... OM3 and OM4 cables are aqua, not green or blue.
Would love to hear about your site's recent cabling purchases.
And if the word Aqua reminds you of a late 90s Scandinavian pop group, look no further:
I started my IBM career with very dirty hands.
Every day I would go to work and come home smeared with toner, ink, grease and oil.
No I didn't work for a newspaper or in a garage... I worked for IBM, fixing cheque sorters and printers. This was the late 1980s and early 1990s. The years I spent working on IBM's 3800 and 3825 printers and 3890 cheque sorters were great years. I loved working with my customers and I loved working on those big machines. It was lots of fun... but there were lots of ways to get dirty.
What were these machines? Well for one, the IBM 3800 was the world's first commercial fan-fold laser printer (released in 1975!). Here is a picture, but I would point out that this 3800 looks remarkably clean:
The 3890 Cheque Sorter was an enormous document processor that could move 2400 cheques per minute. For even better clothing destruction, the 3890 had an ink jet printer that used a special ink that you could easily remove from any garment - provided you used a pair of scissors. As for the IBM 3825 Page Printer, it used Charged Area Development, which without very regular maintenance could result in huge amounts of toner wandering around inside the machine. No wonder the acronym for that technology is CAD.
And yet in all of this... I wore a suit and tie to work... every day... and I always wore a white shirt. It was an IBM standard that had existed for a very long time. People who turned up for work in a non-white shirt had better be a top performer and only the most remarkable or safety conscious turned up for work wearing something that is now rare in the workplace: The Bow Tie.
The only other IT organization I knew that was just (if not more) obsessed with suit and tie? EDS.
As for the System/38 utopian image below.... that's not me on the right! I never wore tan trousers or short sleeves to work. (Check out the size of those monitors!)
Things changed in the mid 1990s. Suddenly we didn't need to wear a tie. Some of us started wearing corporate branded polo shirts. Times had changed and we changed with them. One irony is that I now regularly wear black business shirts to work, something that I would never have gotten away with in 1990. Yet today the closest I come to toner is when I go and get a printout from the printer.
If you're interested in seeing some great photos of how IBMers used to dress, visit the IBM History exhibit here: "The way we wore: A century of IBM attire". You could also head over to IBM's 100 Icons of Progress and in particular visit The Making of IBM to see Thomas J Watson Snr looking very smooth indeed.
I was brought to reminiscing about this when visiting a client on a Friday. Friday has become casual clothes day at many organizations. And yet given how far we have come... I am pondering why we bother? In comparison to 20 years ago, every day is casual clothes day. Perhaps it's time to put aside the polo shirts and bring back the bow tie? As Dr Who says, "Bow Ties are Cool".
So are you with me? Bow Tie Friday?
Comments always welcome.
There was a time when 32 bits was considered a lot. A hell of a lot.
With 32 bits, you can create a hexadecimal number as big as 0xFFFFFFFF.
In decimal that's 4,294,967,295. Hey... imagine a bank account balance that big?
If you use 32 bits to count out 512 byte sectors on a disk, you could have a disk that's 4,294,967,295 times 512... or 2,199,023,255,040 bytes! That sounds huge, right?
Well... actually...no... that's 2 TiB, which most people would refer to as 2 Terabytes. Mmm.. Suddenly I am less impressed (still wouldn't mind that as a bank account though).
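The arithmetic is easy to verify; here is the sum worked through in Python (nothing product-specific, just the numbers above):

```python
# Capacity addressable with a 32-bit count of 512-byte sectors.
max_sectors = 2**32 - 1               # 4,294,967,295
sector_size = 512                     # bytes
capacity = max_sectors * sector_size

print(f"{capacity:,} bytes")          # 2,199,023,255,040 bytes
print(f"{capacity / 2**40:.2f} TiB")  # 2.00 TiB
```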
Now there are plenty of running systems that still cannot work with a disk that is larger than 2 TiB. One of the more common is ESX. I am presuming this limitation is going to disappear, so storage subsystems need to be ready to create volumes that are larger than 2 TiB.
The good news is that with the May 2011 announcements, IBM is removing the last 2 TiB sizing limitations from its current storage products. There appears to have been some confusion in the past, so I thought I would go through and be clear where each product is at:
DS3000
Firmware version 07.35.41.00 added support to create volumes larger than 2 TB. The maximum volume size is limited only by the size of the largest array you can create. This capability has been available for some time and hopefully you are already on a much higher release.
DS4000 and DS5000
Firmware version 07.10.22.00 added support to create volumes larger than 2 TB. The maximum volume size is limited only by the size of the largest array you can create. This capability has been available for some time and hopefully you are already on a much higher release.
DS8700 and DS8800
The DS8700 and DS8800 will support the creation of volumes larger than 2 TB once a code release in the 6.1 family has been installed. With this release you will be able to create a volume up to 16 TiB in size. The announcement letter for this capability is here.
XIV
The volume size on an XIV is limited only by the soft limit of the pool you are creating the volume in. This allows the possibility of a 161 TB volume.
SVC and Storwize V7000
These two products have two separate concepts:
- Volumes (or VDisks) that hosts can see.
- Managed disk (or MDisks) that are presented by external storage devices to be virtualized. Within this there are two further categories:
- Internal MDisks created using the Storwize V7000 SAS disks.
- External MDisks created by mapping volumes from external storage (such as from a DS4800).
SVC and Storwize V7000 Volumes (VDisks).
Prior to release 5.1 of the SVC firmware, the largest volume or VDisk that you could create using an SVC was 2 TiB in size. With the 5.1 release this was raised to 256 TiB, as announced here. When the Storwize V7000 was announced (with the 6.1 release) it also inherited the ability to create 256 TiB volumes.
Storwize V7000 Internal Managed Disks (Array MDisks).
Because the Storwize V7000 has its own internal disks, it can create RAID arrays. Each RAID array becomes one MDisk. This means the largest MDisk we can create is limited only by the size of the largest disk (currently 2 TB) times the size of the largest array (16 disks). This means we can make arrays of over 18 TiB in size (using a 12 disk RAID6 array with 2 TB disks). Thus internally the Storwize V7000 supports giant MDisks. We can also present these giant MDisks to an SVC running 6.1 code and the SVC will be able to work with them.
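As a sanity check on that arithmetic, here is a quick sketch (RAID 6 keeps two drives worth of parity; drive capacity is quoted in decimal TB, usable space in binary TiB):

```python
TB = 10**12   # decimal terabyte, as drive vendors quote capacity
TiB = 2**40   # binary tebibyte

def raid6_usable(drives: int, drive_bytes: int) -> int:
    """Usable bytes in a RAID 6 array (two drives worth lost to parity)."""
    return (drives - 2) * drive_bytes

# The 12-disk RAID 6 array of 2 TB drives mentioned above:
usable = raid6_usable(12, 2 * TB)
print(f"{usable / TiB:.1f} TiB")   # 18.2 TiB - hence "over 18 TiB"
```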
SVC and Storwize V7000 External Managed Disks.
When presenting a volume to the SVC or Storwize V7000 to be virtualized into a pool (a managed disk group), we need to confirm two things. Firstly, that you are on firmware version 6.2, as confirmed here for SVC and here for Storwize V7000. Secondly, that the controller presenting the volume is approved to present a volume greater than 2 TiB. From an architectural point of view, MDisks can be up to 1 PB in size as confirmed here, where it says:
|Capacity for an individual external managed disk|
|Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.|
I recommend you go to the supported hardware matrix and confirm if your controller is approved. The links for Storwize V7000 6.2 are here and for SVC here. As of this writing, the list has still not been updated, but I am reliably informed it will include the DS3000, DS4000, DS5000, DS8700 and DS8800. It will not initially include XIV, which will come later. Please also note the following:
- Support for giant MDisks (greater than 2 TiB) is firmware controlled. If the controller (e.g. a DS5300) presenting a giant MDisk is not on the supported list for your SVC/Storwize V7000 firmware version, then only the first 2 TiB of that MDisk will be used.
- If you're already presenting a giant MDisk (and using just the first 2 TiB), then just upgrading your SVC/Storwize V7000 firmware won't make the extra space usable. You will need to remove the MDisk from the pool, then do an MDisk discovery and then add the MDisk back to the pool. All of this can of course be done without disruption, using the basic data migration features we have supported since 2003.
What to do in the meantime?
If you're currently using an SVC or external MDisks with a Storwize V7000, then you need to work within the 2 TiB MDisk limit (except for Storwize V7000 behind SVC). The recommendation is a single volume per array for performance reasons (so the disk heads don't have to keep jumping between extents on different parts of the disk). This can require careful planning. For instance, using 7+P RAID5 arrays of 450 GB drives makes an array that is over 3 TB. What to do in this example?
- Divide it in half? (by creating two 1.5 TB volumes)
- Waste space? (a whole 1 TB)
- Use smaller arrays? (a 4+P array of 450 GB disks is 1.8 TB)
The answer is that where possible, create single volume arrays using 4+P or larger. If the disk size precludes that, then create multiple volumes per array and preferably split these volumes across different pools (MDisk groups).
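That planning arithmetic can be sketched as follows. The helper function is mine, not an IBM tool; drive sizes are decimal GB and the limit is the 2 TiB discussed above:

```python
import math

GB = 10**9
TiB = 2**40
MDISK_LIMIT = 2 * TiB   # the 2 TiB volume/MDisk limit discussed above

def volumes_needed(data_drives: int, drive_gb: int) -> int:
    """Minimum number of equal-size volumes so each stays within 2 TiB."""
    usable_bytes = data_drives * drive_gb * GB
    return math.ceil(usable_bytes / MDISK_LIMIT)

print(volumes_needed(7, 450))   # 7+P RAID 5 of 450 GB drives -> 2 volumes
print(volumes_needed(4, 450))   # 4+P RAID 5 of 450 GB drives -> 1 volume
```

In other words, the 7+P array forces you to split, while the smaller 4+P array fits comfortably in a single volume.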
Anything else to consider?
Well first up, will your Operating System support giant volumes? Googling produces so much old material that it becomes hard to nail down exact limits. For Microsoft, read this article here. For AIX check out this link. For ESX, check out this link.
Second of course is the consideration of size. File systems that utilize the space of giant volumes could potentially lead to giant timing issues. How long will it take to backup, defragment, index or restore a giant file system based on a giant volume (the restore part in particular)? Outside the scientific, video or geo-physics departments, are giant volumes becoming popular? Are they being held back by practical realities or plain fear? Would love to hear your experiences in the real world.
And a big thank you to Dennis Skinner, Chris Canto and Alexis Giral for their help with this post.
As you would expect, the IBM XIV supports a very wide range of Host Operating Systems. Even better, for most of these Operating Systems, IBM makes available (free-of-charge) a multipathing kit to install on these hosts. We call this the Host Attachment Kit, or HAK. You can find all of the available Host Attachment Kits at the IBM Support site found here. You will find HAKs for AIX, HP-UX, Linux, Solaris and Microsoft Windows.
What is important is that if the HAK is available for your Operating System, we need you to always install it on every host that attaches to IBM XIV. We ask this for the following reasons:
- By having the XIV HAK installed, your hosts are much easier for IBM to support. This is because installing the HAK ensures that your multipathing is set up correctly. When you install the HAK and then run the xiv_attach command, the HAK will adjust system parameters to optimal values. For example, on Windows hosts it ensures that the required MPIO service is running and that the recommended hot fixes are installed. For Linux hosts it ensures that the multipath.conf file is correct. Every time you map a new volume from your IBM XIV, you should run xiv_attach to ensure you continue to have the correct settings.
- If you have an issue that requires IBM support, the HAK supplies a command known as xiv_diag. This command creates a zipped host log file that will contain useful and relevant information for IBM to analyze.
- The HAK supplies a very valuable command known as xiv_devlist which lets you list all attached volumes and match the host ID to the XIV volume name. If your host is attached to multiple XIVs, you can also map each volume back to its relevant XIV. It's a command I cannot live without... I love it!
Here is an example of what xiv_devlist will tell you. In this example I have run it on a Windows 2008 machine, but the output is basically the same regardless of host operating system. You can see the operating system identifier (the Device as reported by the operating system, in my example PHYSICALDRIVE0), the name of the volume (as seen on the XIV, in my example W2K8X64-H02_BOOT - Exchange) and the serial number of the XIV providing the volume (in my example 6000081).
The operating system device identifier lets you map an XIV volume from XIV to host. So in this example, I know that the Windows C: drive, which is Windows Disk 0, maps to a volume on the XIV known as W2K8X64-H02_BOOT - Exchange.
And to finish, there are several other commands that are very helpful. For instance the xiv_fc_admin -P command will tell you your WWPNs.
C:\Windows\system32> xiv_fc_admin -P
21:00:00:0d:60:13:b0:8c: [QLogic IBM FCEC Fibre Channel Adapter]: IBM FCEC
21:00:00:0d:60:13:b0:8d: [QLogic IBM FCEC Fibre Channel Adapter]: IBM FCEC
Another useful command is xiv_fc_admin -R because it rescans your bus. In some operating systems it is not obvious how to do this (other than reboot of course).
The nice thing is that regardless of your host operating system, the commands are the same. This is possible because they use the Python programming language. You may notice Python being installed as xpyv when you install the HAK (it is so named to ensure it doesn't interfere with any other Python installs you have).
So please install the HAK on every host that attaches to XIV. You will be making everyone's life a lot easier (especially your own).
Oh and by the way, you can confirm whether your host operating system can be attached to the XIV by consulting the IBM System Storage Interoperation Center (or SSIC). If the HAK is not available for your operating system, the SSIC will list other vendor-approved multipathing solutions (such as Veritas DMP).
Hi Team! Just wanted to let everyone know that VisioCafe has been updated with IBM's latest official stencils for use with Microsoft Visio. These include all models of the Storwize V7000, including the newest models: The 2076-312 and 2076-324 (which have the dual port 10 Gbps iSCSI card).
Here is the link to VisioCafe. The Storwize V7000 stencils are in both the IBM-Disk as well as the IBM-Full packages.
Here is a screen capture of the Node Canisters in the 2076-324. I have circled one of the shiny new 10 Gbps iSCSI cards.
So please stop using the stencils I previously supplied on my IBM developerWorks blog and switch to the official set.
And if you have some examples of Visio diagrams that include the Storwize V7000 I would love to see (and share) them.
I found a link to a great video on Jason Boche's Virtualization blog and I thought I would post it here as well.
What the video shows is 70 minutes worth of take-offs and landings at Logan International Airport in Boston, compressed into 150 seconds. It's an amazing piece of footage and very cleverly done. Seen anything equally clever? Would love to hear about it. Enjoy!
A quick blog post about XIV call home... As with most IBM products, the XIV can call home to IBM using e-mail notifications. I still meet people who call this dial-home, which reflects the 20th century practice of using modems to provide a Remote Support Facility (RSF). The e-mail notifications sent by the XIV allow IBM to track any issues that may occur and respond where appropriate.
This is all good, provided IBM knows how to get hold of you if there actually is an issue. I had a situation recently where our internal client records had an out-of-date phone number. This led to a delay in problem resolution, a delay which was avoidable.
One way to help prevent delays is by keeping the XIV up to date with your contact details and as usual, the XIV GUI makes this easy.
From the XIV GUI, head to the Support menu as per the screen capture below:
From there you will find several tabs, three of which are well worth filling in, these being:
- Customer Information: Where is the machine?
- Primary Contact: Who should IBM try contacting first?
- Secondary Contact: Who should IBM try contacting second?
Actually, don't hesitate to fill in ALL the tabs, but the point of this exercise is to at least ensure IBM knows where the machine is and who to call.
It's worth ensuring the XIV is updated if your support center phone numbers change, or if you relocate the machine to a different site. At some client sites, I find the primary contact is a single person (whose mobile number sadly ends up being the 24 hour storage help desk). If you are that person.... and you're leaving the company.... ensure your name and number gets updated by your replacement. After all, it's one thing to have IBM calling you at 3am when you manage the machine... but to be rung after you have left the company? Mmmm... that's just plain annoying.
It's a story told many times.....
You order a new storage solution and the world is good.
It's lovely, it's new and it offers mountains of new disk space.... but then... you.... fill it up!
So it's off to order some new disks.
The order is in, the order is filled, the disks arrive.
What next? How about we just stick them in?
By just inserting the new disks, they will be made available to configure into RAID arrays from the Internal tab of the Physical Storage Group.
If the drives are showing as Unused, mark them as Candidate. If they are already showing as Candidate (like most of the disks in my example below), then you are ready to hit the Configure Storage button and follow the guidance of the wizard.
Of course maybe your enclosures are all full. In this case it's time to order another enclosure (remember we can have up to 10). Once you have racked it up and cabled it to the correct SAS chain, use the Add Enclosure menu item shown below to kick off the configuration:
Just a very quick blog post to point you to another blog post that I found particularly pleasing. To be fair, the independent judges on the panel were the ones that selected IBM, rather than EMC itself.... but the recognition is well deserved. Enjoy!
Storage IT offers up many choices, some of which provoke argument so heated, you could almost describe the adherents as religious. I think you might know the sort of arguments I am talking about:
- File vs Block I/O
- iSCSI vs Fibre Channel
- CLI vs GUI
OK.... so maybe that last one isn't quite in the same league. But it is still fascinating to see the variation in usage patterns from sites where every command (of any description) is run via a command line interface (a CLI), to sites where the CLI is viewed with either great fear... or even greater distaste. There are those who view the CLI as... well... so 1970s.....
But the reality is that the CLI will always be with us for one principal reason: scripting. If you cannot script it, you cannot automate it (well actually that's not entirely true, but stick with me here, I am on a roll). I have automated every single major implementation I have ever done (whether SVC, XIV or DS8000) with scripting. I regularly use the CONCATENATE function in Excel to build large numbers of commands that I can then run as a script.
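The Excel trick works just as well in any scripting language. Here is a minimal Python sketch of the same idea; the pool name, volume names and sizes are invented for illustration, and you should check the mkvdisk syntax against your own code level before running the generated script:

```python
# Build a batch of SVC / Storwize V7000 CLI commands from a list of
# volume definitions - the same idea as using CONCATENATE in Excel.
# Pool0 and the volume names/sizes below are made-up examples.
volumes = [("APP1_DATA", 100), ("APP1_LOG", 20), ("APP2_DATA", 200)]

commands = [
    "svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size {0} -unit gb -name {1}".format(size, name)
    for name, size in volumes
]

# Save the commands as a script you can paste into an SSH session
with open("make_volumes.txt", "w") as f:
    f.write("\n".join(commands) + "\n")

print(commands[0])
# -> svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name APP1_DATA
```

The point is not the language: once your commands come from a list, adding the fiftieth volume is one more row, not one more chance for a typo.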
So it's pleasing to see that all of our products are working towards making the scripter's life even easier. For example the XIV has offered a command log in the GUI for some time. I blogged about it here. You simply do a command once in the GUI and then consult the log to find the syntax, making scripting very easy:
With last year's release of SVC 6.1 and Storwize V7000, we added this level of smarts to those two products as well. Now every command you run in the GUI will show you the exact CLI command that was used under the covers to do the work. Simply toggle the details tab on the completion panel to see the command (or toggle it back to hide it!).
This week's announcement of release 6.2 of the SVC and Storwize V7000 firmware has brought in two more important usability improvements:
- Now when logging onto the CLI using individual user-ids, you can log on using the actual user-id itself, rather than admin. This change has been a long time coming and removes the confusion generated by logging onto the GUI as, say, anthony, but then logging into a matching CLI session as admin. Now you would log on to either interface as anthony.
- Now when issuing CLI commands, you have the choice to drop the svctask and svcinfo prefixes. So instead of issuing the command svcinfo lsnode, you can simply issue lsnode. Both choices remain valid (so we don't break your existing scripts). Making this change is part of a bigger plan to move to a more common CLI.
And there are more improvements coming, so as always, watch this space....
.... and please... share with me... are you a GUI... or a CLI person? What's your reasoning behind your choice?
*** Updated 25/07/2011: The VAAI plugin can be downloaded from here: http://www-933.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Storage_Disk&product=ibm/Storage_Disk/IBM+Storwize+V7000+(2076)&release=6.2&platform=All&function=all ***
The May 9 announcement that SVC and Storwize V7000 will support VAAI is very welcome news. The fundamental point is that the SVC and Storwize V7000 virtualise external storage. This means that the mountains of DS3000, DS4000, DS5000, AMS1000s, CX3s, etc, that are currently being virtualized behind these products, will inherit VAAI as soon as the virtualization layer supports it. This is yet another feature to add to the list of functions that IBM Storage virtualization can provide, such as: EasyTier; Thin Provisioning; multiple consistency groups; snapshots; remote mirroring; dynamic data relocation... the list goes on.
In addition we are releasing a plug-in for vCenter that enables VMware administrators to manage their SVC or Storwize V7000 from within the VMware management environment.
Functions will include:
- Volume provisioning and resizing
- Displaying information about volumes
- Viewing general information about Storwize V7000 and SVC systems
- Receiving events and alerts for Storwize V7000 systems and SVC attached to vSphere
- The Storwize V7000 and SVC plug-in for vCenter will also support virtualized external disk systems
The plug-in will be available at no charge on June 30 (for Version 6.1 software) and July 31 (Version 6.2). Here is a sneak peek of what it will look like:
And to get an independent viewpoint, have a read of Stephen Foskett's blog entry here:
With the announced release of DS8000 6.1 code, IBM has moved its three major storage systems to a common GUI platform. This makes me think of aircraft manufacturers who utilize a common cockpit design. For airlines, this is a major drawcard when choosing aircraft models, because it cuts down on training costs for your pilots. Except in storage IT, there is a major difference in motivation....
First and foremost, the design of the XIV GUI (which has inspired such dramatic change in IBM's other GUIs) was made possible not just by clever XIV GUI developers (don't get me wrong - they ARE clever), but by a remarkably user-friendly architecture. The XIV GUI is a miracle of ease of use for end users, made possible because, by design, the XIV makes it almost impossible to make things hard.
The good news for Storage administrators, is that unlike a jet aircraft, where a pilot needs to spend hundreds of hours in the cockpit before they are considered potentially competent, the XIV GUI can be picked up in minutes and lends itself very well to casual contact. You don't need to keep using it to stay competent.
The challenge for IBM was to take more complex products, which require more user decisions, and make the usage experience just as easy. To add to this, the SVC and DS8000 GUIs were driven by WebSphere. Changing these GUIs would require a complete re-write to employ JavaScript.
First off the rank were the SVC and Storwize V7000. With the release last year of the SVC 6.1 update, the transformation was nothing less than remarkable. End user experience ruled every decision. The key again is that the user does not need to spend hundreds of hours learning this GUI, or re-learning it every time they go to perform a configuration task. Everything is in its right place. It's much more than an XIV-like GUI. It's a GUI that took the ease-of-use experience of the XIV and used it to inspire something just as remarkable.
With the release of the 6.1 update for the DS8000, we complete another fundamental step towards a truly common GUI. The DS8000 GUI has undergone a complete re-write. Essentially it has been rebuilt from the ground up. This highlights something fundamental: It confirms the DS8000 has a very strong roadmap.
As you can see from the image below, the transformation from the old design (to the left) to an ease of use model is complete:
In short, it's a common flight deck that almost anyone can fly.
Here is a list of all the IBM Asia Pacific and Japan Announcement Letters that were released on May 9. They are in several sections:
New disk drive option for IBM System Storage DS3950 Express Disk Systems
IBM System Storage DS3500 Express Storage System supports next-generation, high-performance 10Gb iSCSI technology
IBM Scale Out Network Attached Storage 1.2.0 supports multiple petabytes of storage
IBM Information Archive offers a new Server (2231-S3M), Disk Controller (2231-D3A), and Disk Expansion Drawer (2231-D3B)
IBM System Storage DS8700 and DS8800 (M/T 239x) delivers DS8000 Function Authorization for I/O Priority Manager and other advanced features
New disk drive option for IBM System Storage DS5020 disk systems
IBM System Storage N series N6270 offers enterprise-class Fibre Channel, iSCSI, and NAS storage with gateway options
IBM System Storage N series function authorizations for IBM System Storage N6270
IBM System Storage DS8700 and DS8800 (M/T 242x) delivers DS8000 I/O Priority Manager and advanced features to enhance data protection for multi-tenant copy services
IBM System Storage EXN3500 SAS expansion unit provides storage for IBM System Storage N series PCIe systems
IBM System Storage DS5000 series supports next generation, high-performance 10Gb iSCSI technology
IBM System Storage Tape Cartridge 3599 models provide enhanced capacity for enterprise tape drives
IBM Virtualization Engine TS7700 is designed to bring efficiency to tape operation and offer versatile models that support attachment to tape libraries
New features for IBM System Storage TS7650 ProtecTIER Deduplication Appliance (3958 AP1) and IBM System Storage TS7650G Gateway Server (3958 DD4)
IBM System Storage TS1140 Tape Drive Model E07 delivers higher performance, reliability, and capacity
IBM System Storage TS3500 Tape Library Connector and TS1140 Tape Drive support for the IBM TS3500 Tape Library
New IBM System Storage SAN Volume Controller Storage Engine offers 10 Gigabit Ethernet connectivity
New IBM Storwize V7000 Disk System models 312 and 324 offer 10 Gigabit Ethernet connectivity
IBM Scale Out Network Attached Storage Software V1.2.0 for high availability environments
IBM announces many-to-many, bi-directional replication for IBM System Storage ProtecTIER Enterprise Edition V3.1 and ProtecTIER Appliance Edition V3.1
IBM System Storage ProtecTIER Entry Edition Version 3.1 supports many-to-many, bi-directional data replication
IBM System Storage Linear Tape File System Library Edition Version 2.1
IBM Storwize V7000 Version 6.2 delivers support for VMware VAAI, real-time performance monitoring, and 10 Gigabit iSCSI connectivity
IBM System Storage SAN Volume Controller Version 6.2 delivers support for VMware VAAI, real-time performance monitoring, and 10 Gigabit iSCSI connectivity
There are several withdrawals, but these are only because replacement products have been announced above.
Hardware withdrawal: IBM N series N6060 (2858 Model A22) and N6070 (2858 Model A21) -- Replacements available
Hardware withdrawal: IBM TS7740 (3957) Model V06 and IBM TS7720 (3957) Model VEA and associated features - Replacements available
Hardware Withdrawal: Select models and features for Information Archive (MT 2231) - Some replacements available
Hardware withdrawal: Feature number 3447 from IBM System Storage TS7650 and TS7650G ProtecTIER solutions - Replacement available
Hardware withdrawal: IBM Scale Out Network Attached Storage Models 2851-SI1 and 2851-SS1 - Replacements available
Hardware withdrawal: IBM System Storage SAN Volume Controller 2145 Model CF8 - Replacement available
It's that time of the year again - announcement time! And the May 9 set of storage announcements by IBM is one of the richest sets of announcements I have ever seen. Practically every storage product has received updates, with new features stretching from tape drives to tape libraries to disk systems (from the smallest system to the largest). We have NAS updates and we have storage virtualization updates. I struggled to decide which subject to start on, to do justice across the board. So let me first list just some of the products that have received updates:
TS1140 - Super fast, massive capacity, enterprise tape technology.
For many years IBM has been using its own technology (as an alternative to LTO) to offer clients a higher class of enterprise tape. The TS1140 is the fourth generation of this technology. Using the new JC media, which has 4 TB of native capacity, and presuming a compression ratio of 2.5 to 1, you could place 10 TB of compressed data onto a single cartridge. And you could do this at 250 MBps sustained, which according to Oracle makes the TS1140 the fastest tape drive in the world! The TS1140 will happily burst at up to 650 MBps - so we now have a tape drive that can truly utilize an 8 Gbps fibre channel port. It reinforces the green credentials of tape by using only 46W of power, and supports LTFS, the Linear Tape File System, which leads me to....
LTFS - Linear Tape File System
Speaking of LTFS, we have enhanced the LTFS standard to now support tape libraries. So get this idea.... you attach a tape library to your server. All the tapes in the library appear to the operating system as directories. You can select any of these directories and the library will open it up (i.e. mount the tape). Now the contents of the tape itself appear as a directory structure, from which you can add or remove files. In other words, the library and the tapes can be manipulated without any form of backup software sitting between you and the operating system. After the initial tape mount, the directory is locally cached, so you don't need to mount the tape again to see what is on it (and to search the directory). This whole concept has the most amazing potential use cases.
IBM has a truly fantastic tape library with the TS3500. Now we add the ability to shuttle tapes between aisles to create a larger logical library. How do you like the idea of a logical tape library that can hold 300,000 cartridges totaling 2.7 exabytes?
The IBM TS7700 is our mainframe virtual tape library solution. It gets a major performance boost with the introduction of POWER7 servers, plus many other improvements.
In terms of disk we have enhancements to the following products:
DS8000 Family with release 6.1
When we released the DS8800 last year, we committed to deliver a merged code library which would support both DS8700 and DS8800. This would ensure that they both have the same feature set. We now deliver on that commitment, plus supply an enormous set of new features and functions for both products: so both products continue to get major enhancements and updates. These include:
Easy Tier enhancements: Any two disk technologies can now be placed in a pool
I/O Priority Manager: Which allows for quality of service management.
Multi-tenancy management: Allows for the creation of separate Copy Services domains.
Larger LUN sizes: The ability to create LUNs up to 16 TiB in size.
Enhanced GUI: We will now have a common GUI for DS8700, DS8800, Storwize V7000, SVC and XIV.
8Gb/s host adapters for the DS8700
V7000 and SVC Family with release 6.2
The IBM SAN Volume Controller and Storwize V7000 share a common code library, so improvements are common. In the 6.2 release we deliver the following enhancements:
Flash Copy Improvements: Allow remote copies of flashcopy targets
SVC 2145-CG8 Node: New hardware model
10 Gb iSCSI: For both Storwize V7000 and SVC
SVC Solid State Drive Support: Allowing SVCs to use internal SSDs for EasyTier
VMware VAAI: All three VAAI primitives now implemented.
Real Time Performance Statistics: A new GUI panel giving performance info.
Storwize V7000 System Clustering: Allowing us to cluster two Storwize V7000s together.
The DS3500 is IBM's entry level disk rocket ship. I am a huge fan of this box for clients with smaller or point solution requirements. We have enhanced the product with the following:
Double the drives: We now support 192 drives.
Scheduled flashcopies: Gives the ability to have scheduled flashcopies run without external intervention.
Improved volume copy: Gives the ability to create a volume copy without stopping host access.
10 Gb iSCSI: Allows us to add 10 Gb iSCSI to the DS3500.
The DS5000 range consists of DS5020 (also sold as DS3950), the DS5100 and the DS5300. Improvements include:
Scheduled flashcopies: Gives the ability to have scheduled flashcopies run without external intervention.
Improved volume copy: Gives the ability to create a volume copy without stopping host access.
10 Gb iSCSI: Allows us to add 10 Gb iSCSI to the DS5100 and DS5300
T10-PI: Allows selected operating systems to add meta data to track write integrity.
SAS drives: We are adding a 600 GB SAS drive that has a SAS to FC interposer so it can be installed in an EXP5000
I have not listed all of the product announcements. There are improvements to SONAS, our nSeries products, Information Archive, Real Time Compression device.... the list goes on.
I will write up another post with all the links....
I had some fun with my wife's computer this weekend.
She called me over because she was getting multiple messages telling her that the hard drive was failing, all being delivered by a very fancy GUI that looked like this:
I became suspicious immediately: Microsoft have never produced a GUI that looks so slick. Another big hint was that the Help & Support button tried to take me to a very strange URL. I say tried because her machine by this point was close to being a vegetable. The All Programs tab contained nothing, there were no desktop icons, and the C: drive reported that it contained no files. We could not browse the web because all icons to start a browser were gone, and even when I started a browser manually (from Start --> Run), the browser was set to use an unusual proxy.
Fortunately Doctor Google was very helpful and I rapidly found this URL:
I used the tools and instructions found there and was able to get her computer back into a working state. Many thanks to the authors of that page.
This experience brought home three lessons:
- Her employer's anti-virus is useless (her laptop runs a corporate load).
- Google images searches can return poisoned URLs that contain malware. Have a read of this excellent article. My wife was doing a Google Images search, looking for pictures of Wheat Rust, when the infection occurred. I am loath to work out which URL it was, as I don't wish to risk a return to any of those poisoned sites.
- Using NoScript is a very good idea, and one that I will be implementing on her PC, at least until her employer comes up with a better anti-virus regime.
All of this excitement distracted me from the main event: preparing for the May 9 announcements. You will see a lot of blog posts over the next few days detailing what our developers have been up to. Prepare to hear about some very cool stuff.
In the meantime... feel free to share any other methods you have to avoid malware... and download and install MalwareBytes. It is a very nice piece of software that costs nothing to install and use.
I recently had a client ask me if I had seen this problem in Cisco Device manager: Device Manager was showing them 100% utilisation for CPU on one of their MDS9509s. I had a look at the show tech-support and curiously show process cpu showed practically no CPU usage at all. I suggested a display problem and sure enough, Cisco confirmed it:
Symptom: The show system resources command shows high CPU usage even when there is not
much activity on the switch. In one instance, the CPU utility (user and kernel)
was always 100 percent.
Conditions: You might see this symptom 248 days after the system came up
Curiously, the Cisco tech support person stated that a CP switchover every 497 days would prevent the issue recurring. This is curious because 248 days is close to half of 497 days. And 497 is IT's number of the beast.
The reason that 497 is a problem number is the use of a 32-bit counter to record uptime. If you record a tick for every 10 msec of uptime, a 32-bit counter will overflow after approximately 497.1 days. This is because a 32-bit counter holds 2^32, or 4,294,967,296, ticks. Because a tick is counted every 10 msec, we create 8,640,000 ticks per day (100*60*60*24). So after 497.102696 days, the counter will overflow. What happens next depends on good programming.
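The arithmetic is easy to check for yourself:

```python
# A 32-bit counter, ticking once every 10 msec, wraps after roughly 497 days.
max_ticks = 2 ** 32                    # 4,294,967,296 ticks
ticks_per_day = 100 * 60 * 60 * 24     # 100 ticks/sec * 86,400 sec = 8,640,000

days_to_overflow = max_ticks / ticks_per_day
print(round(days_to_overflow, 1))      # -> 497.1
```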
Some classic bugs can be found here, here, here and here. Most of these bugs are old and will almost certainly not affect anybody. But remain on notice: 497 day bugs are still possible. Just Google the search argument: 497.1 day bug.
Now let me be clear: I am not aware of any active, disruptive, bring-down-your-business type 497 day bugs. The sky is not falling. But historically, many vendors' products have had 497 day bugs, some of them nasty. I ponder whether we should schedule a switch reboot every 496 days just to avoid the possibility of a 497 day bug. It's an interesting idea. I certainly endorse staggering initial switch boots by at least an hour, so that a simultaneous 497 day reboot bug (should one be lurking) would not reboot every switch in every fabric at the same time. And in case you think I am picking on Cisco, when I looked at the client switch in question, it was showing a kernel uptime of 562 days, 23 hours, 35 minutes, 24 seconds. That's some solid uptime.
Back from a short break (for Easter and the School Holidays) to three great pieces of news:
- A new series of Doctor Who is screening.
- Will and Kate's wedding went off without a hitch (it's not often I get to yell at the dog to stop barking at possums because there is a Royal Wedding on).
- The VAAI driver for XIV has an official download link.
Ok... maybe the Royal Wedding has no place in my blog, but the VAAI link is very appreciated.
Two ways to get to the driver:
- Get it directly from here
- Go to fix Central and select it from the download list: http://www-933.ibm.com/support/fixcentral/
Remember, your XIV needs to be on 10.2.4a firmware, so you need to be talking to your IBM Service Representative to schedule a concurrent firmware update before you turn the VAAI functions on.
Now if you're going, um... what is VAAI and how does it help? Check this blog post out:
If you're asking, hey, what else will 10.2.4a code bring me?
- How about better write performance?
- How about QoS?
- 10.2.4a code also brings the ability to do 'truck' initialization of an async pair (which lets you pre-load an async secondary for faster initial mirroring, or to convert from sync to async without re-mirroring all your data).
- It also lets you format a snapshot, which means you can keep a snapshot in place and mapped to a host, but it will not consume any space.
Last week IBM released Version 2 of the management plug-in for VMware vCenter. The main benefit of Version 1 (the previous release) was that it allowed you to map your datastores to XIV volumes (i.e. which XIV volume equates to which VMware datastore). This was very handy (especially if you were not paying attention as you allocated volumes to your VMware farm), but you still needed the (very easy to use) XIV GUI as well as (obviously) vCenter to manage your landscape end to end.
With the release of Version 2 of the XIV plug-in, we suddenly have the tantalizing possibility that the VMware administrator will not need to talk to their storage administrator or turn to the XIV GUI for day to day operations.
Well Version 2 offers a new and improved graphical user interface (GUI), as well as brand new and powerful management features and capabilities, including:
- Full control over XIV‐based storage volumes (LUNs), including volume creation, resizing, renaming, migration to a different storage pool, mapping, unmapping, and deletion.
- Easy and integrated allocation of volumes to VMware datastores, used by virtual machines that run on ESX hosts or datacenters.
- The ability to monitor capacity, snapshots and replication.
So from vCenter you can now for instance map yourself some new volumes to create data stores, or re-size existing ones. You can also confirm that each of your datastores is being mirrored.
You can get the plug-in free of charge from here:
There is a user's guide here. I urge you to download it and have a read. The User's Guide contains lots of really good examples of how the plug-in can be used, with some great screen captures. The release notes are here and also make for very good reading.
I honestly think every VMware installation should be using this plug-in. But I am curious about how it will affect the responsibility divide. If you're a one-person shop, the chances are that you love your XIV quite simply because you don't need to administer it. The XIV leaves you free to focus on your VMware farm, rather than fret about hot spots or hot spares or RAID groups. For you, this plug-in just makes your life even easier.
But what about larger companies? Firstly, it's important to understand that to perform storage administration, the vCenter plug-in will need an XIV userid that has Storage Admin privileges. Why is this significant? Well, what if the team who manage the XIV and the team who manage VMware are not the same people? What if they are different teams, who maybe have different managers, who may work in different buildings or different cities? What if they work for different companies? Do plug-ins like this one erode the lines and bring these teams together? Or are the functional divides still too strong?
I would love to hear your experiences, both in using the plug-in.... and tearing down the walls.
For someone who blogs so frequently about the IBM XIV, I will let you in on a little pet hate of mine: The XIV uses decimal volume sizes.
The XIV GUI and CLI has the user create volumes using decimal sizing, meaning 1 GB = 1,000,000,000 bytes (1000 to the power of three).
Nearly every host system out there (i.e. Windows, AIX, Linux, VMware, Solaris) display volume sizes in binary, meaning 1 GiB = 1,073,741,824 bytes (1024 to the power of three).
This disparity has a quirky consequence. If the XIV says a volume is 17 GB, the host that uses that volume says it is 16 GiB (which the host often then mis-states as GB). This doesn't mean there is a loss of space; this isn't headroom or formatting, it's just a different way of counting bytes. It's not a road block and it's easy to understand and work with. But it is a little annoying. (Then again, so is my 32 GB iPhone reporting it has 29.3 GB of space.)
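To put numbers on the quirk, here is the 17 GB example counted both ways (a quick sketch you can run anywhere):

```python
# The same volume counted two ways.
# Decimal (XIV):  1 GB  = 1000**3 bytes
# Binary (hosts): 1 GiB = 1024**3 bytes
size_bytes = 17 * 1000 ** 3       # a "17 GB" volume as the XIV counts it

gb = size_bytes / 1000 ** 3       # what the XIV reports
gib = size_bytes / 1024 ** 3      # what the host counts in

print(gb, round(gib, 2))          # -> 17.0 15.83
```

The 15.83 GiB is what the host rounds up to the "16 GiB" of my example; not a byte has gone missing, the two sides are just counting with different rulers.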
The other point is that the IBM SVC, Storwize V7000, DS8000 and DS3000/DS4000/DS5000 families have always used binary sizing (even if their respective interfaces use the term GB as opposed to GiB - yet another pet hate of mine and the Storage Buddhist).
So what's the point of this rant?
The IBM XIV Storage System GUI (Version 3.0) will support the creation of volumes in gigabytes (GB), gibibytes (GiB) or blocks (where each block is 512 bytes).
So this is a really good change.
The new GUI has not hit the download site yet... but I will be sure to tell you as soon as it has!
*** Update 08/09/2011 - corrected GUI version from 2.5 to 3.0, removed some confusing terms ***
I have some great news regarding VAAI support for XIV.
Let me detail the current situation:
- VMware has approved the IBM driver for VAAI and we can now release it to the public. The IBM_VAAIP_MODULE plugin will be available shortly from the ibm.com website. When the release URL is available I will update this post. In the meantime you can get the driver from your XIV TA (Technical Assistant) or IBM Account Team. If they have somehow missed the news, get them to talk to their XIV Product Manager (or they can always talk to me!).
- The VMware Hardware Compatibility guide found here shows that VMware support the three VAAI primitives with XIV, if you are using ESX 4.1 or ESX 4.1 U1 and your XIV is on firmware release 10.2.4 or higher.
- XIV firmware release 10.2.4a is available for install. Installation of this firmware is non-disruptive (concurrent) and will be performed by IBM.
- The VAAI driver and installation of the 10.2.4a code are all supplied free of charge.
So what should your plan be?
- Ensure VAAI is disabled on your ESX hosts.
- Talk to your local XIV TA or IBM Service Representative (SSR) and arrange to have 10.2.4a firmware installed.
- When 10.2.4a code is installed, you can then begin installing the VAAI driver on each of your ESX 4.1 servers. You will need to reboot each server to install the driver.
A question I get routinely asked relates to Windows disk partition alignment with XIV. If you don't know what I am talking about, take some time to read these very useful pages from our friends at Microsoft. Once you have had a look, come on back and read my perspective.
Disk Partition Alignment (Sector Alignment): Make the Case: Save Hundreds of Thousands of Dollars
Back already? Hopefully you now know that disk partition alignment is all about starting an IO at a logical block address that best matches how the underlying hardware stores your data. So now you're wondering: what does this have to do with XIV? Well XIV has two concepts that relate to this: cache and partitions.
XIV cache (the server memory used to speed up reads and writes) is organised into 4 KB blocks (which is nice and small).
So the XIV cache does not care about disk alignment.
But when it comes to writing to and reading from disk, the XIV writes data into chunks of consecutive logical block addresses (LBAs) that we call partitions. These partitions are 1 MiB in size. What does that mean? It means the magical number for XIV is 1024 KB or 1 MB (actually KiB and MiB, but for the sake of ease, I will stick to the naming used by Microsoft). Given this number is fairly large (other hardware often aligns to 32 KB, 64 KB or 256 KB), for XIV this reduces the potential impact of misaligned partitions. Which is good.
Correct Windows disk alignment could give up to a 7% performance improvement when using an offset of 1024 KiB (1 MiB). I need to be clear: that's not a guaranteed improvement of 7%. It's a maximum possible improvement. Your particular server will see an improvement somewhere between 0% and 7%. It depends on your workload patterns. The more small and random your workload, the more useful setting the 1024 KB offset will be. The more sequential your workload, the less useful it will be, as only the first and last parts of an I/O could potentially be misaligned. This misalignment could equate to a tiny percentage of extra work for the XIV. Sadly there is no metric you can display to detect how much impact misalignment is actually having.
So should you do it? The good news is that new volumes created under Windows 2008 prefer the 1 MB boundary. So a fresh install should already be using the correct values. The bad news is that volumes created under earlier Windows operating systems (such as Windows 2000 and 2003) will almost certainly be misaligned, and correcting the alignment is destructive to the data in the partition.
How to check alignment at the host? Here is an example:
I start diskpart:
Microsoft DiskPart version 6.1.7600
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: ANTHONYV-PC
I list my disks. In this example I have two disks installed in my laptop. I select disk 0:
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 238 GB 5724 MB
Disk 1 Online 232 GB 1024 KB
DISKPART> select disk 0
Disk 0 is now the selected disk.
Now I list the partitions and see the offset for each one.
DISKPART> list partition
Partition ### Type Size Offset
------------- ---------------- ------- -------
Partition 1 Primary 100 MB 1024 KB
Partition 2 Primary 232 GB 101 MB
Partition 1 has an offset of 1024 KB, which is 1 MB, which is perfect for XIV. Partition 2 has an offset of 101 MB, which is still on the 1 MB boundary (it was pushed there by the combination of the size of the first partition (100 MB) and its offset (1 MB)). So this is perfect.
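Checking an offset against the 1 MB boundary is just modulo arithmetic. A small Python sketch (the first two offsets come from the diskpart output above; the 63-sector example is my own addition, being the classic Windows 2003 default):

```python
BOUNDARY = 1024 * 1024  # XIV partition size: 1 MiB

def is_aligned(offset_bytes, boundary=BOUNDARY):
    """True if a partition offset sits exactly on the boundary."""
    return offset_bytes % boundary == 0

print(is_aligned(1024 * 1024))        # True  - Partition 1 (1 MB offset)
print(is_aligned(101 * 1024 * 1024))  # True  - Partition 2 (101 MB offset)
print(is_aligned(63 * 512))           # False - old 63-sector default offset
```

The same check works for any boundary; pass boundary=64 * 1024 to test against the 64 KB preference of other products mentioned below.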
For an example of how to create a partition with the correct offset, check out this how-to document, which also provides some good follow-on reading:
What about other IBM products?
The IBM SVC and Storwize V7000 prefer 64 KB (or larger) offsets as documented here:
Why? Because the SVC and Storwize V7000 use a concept of grains, where each grain is usually 64 KB or 256 KB in size.
The DS8000 (regardless of model) also prefers 64 KB offsets. The DS8000 uses the concept of logical tracks, where each logical track is 64 KB.
The DS3000/DS4000/DS5000 range allow the user to set the segment size of a logical volume on creation. The setting that you define should match the segment size defined for the logical drive being used. In the example below, it is 64 KB.
What about VMware?
The answer is no different. Misalignment can indeed make a difference to client performance. Check this link from NetApp and this document from VMware:
For an EMC perspective, check out this link from someone I respect a great deal, Chad Sakac:
I searched around looking for an image to highlight the theme of alignment. I found this image in the IBM archives for the IBM Mass Storage Facility announced back in 1974. I am sure this product had some interesting alignment challenges.
(edited 24/5/2011 --> removed old Visio Stencils link).
VisioCafe has been updated with IBM's latest official stencils for use with Microsoft Visio. These include all models of the Storwize V7000, including the newest models: The 2076-312 and 2076-324 (which have the dual port 10 Gbps iSCSI card).
Here is the link to VisioCafe. The Storwize V7000 stencils are in both the IBM-Disk as well as the IBM-Full packages.
Remember you can also find my XIV stencils here:
Requests for Visio stencils are one of the most common comments I receive.
More are coming so your requests are being heard!
Over on my Wordpress blog, I have posted an entry on migrating a Linux RHEL host from EMC to XIV.
If that subject interests you, check out my article here:
The XIV 10.2.4 release notes report performance improvements that are worth investigating. Two of the reported improvements listed are:
- Improved write hit performance with small blocks
- Improved write caching performance
I visited a client running 10.2.4 to see if these could be detected in the XIV performance statistics. In this client's case, the upgrade occurred on Feb 14. First up I wanted to show that in the period I am examining there was no major variation in write IO. In other words, before and after the code load, I wanted to confirm the client performed the same level of IO.
Having confirmed that the write IOPS did not vary over the period in question, did the latency change? Here we have some good news. Firstly the latency for Write Hits improved (slightly). A write hit is a write into a 1MB partition that already has some data in cache. It is faster than a write miss because some of the address allocation work has already been done. Write hits and misses both hit cache as I explained here. You can see a change on Feb 14 (when the code was updated):
I then looked at the latency for write misses. Again the latency dropped. This suggests that cache operations in general are being handled faster.
I then started thinking.... are we getting more write cache hits? The answer was YES! This is curious because the client normally does not have much control over where they actually write data to... Clearly the XIV firmware is managing the write cache in a more efficient manner. This is good not only because write hits normally have lower latency than write misses, but also because a write hit can save us destaging a block of data to disk. This is because a write hit could involve over-writing data that had not yet been destaged to disk. So two writes to the same LBA would only result in one write to backend disk.
So in conclusion, the upgrade to 10.2.4 code resulted in a measurable improvement in write IO performance at a real world client. Nice!
It's easy to make a fool of yourself.
It's not hard to do.
All you need is a moment of inattention combined with a massive assumption. In fact assumptions can bring you undone at any time. A former manager of mine introduced me to the saying: To assume is to make an ass of you and me.
So what was the assumption this time?
One of our business partners sold a client two new XIVs and 4 new IBM SAN40Bs (40 port fibre channel switches). So far so good. When you order the SAN switches you have a choice of ordering 4 Gbps capable SFPs (SFPs are the fibre optic sub assemblies that you plug your cables into) or 8 Gbps capable SFPs. There was a time when the 8 Gbps SFPs were much more expensive than the 4 Gbps, but today they are about 75% of the price of the 4 Gbps. So it makes sense to buy the faster SFPs. But you need to ensure that all the HBAs at the client site are at least 2 Gbps capable, because 8 Gbps SFPs are tri-rate and can only go at 2, 4 or 8 Gbps. Sure enough an assumption was made that this was not an issue... but it was. The client has WDMs that run at 1 Gbps and upgrading those WDMs would be a significant expense.
So I got to thinking... could I force the SFP to 1 Gbps?
If I display the 8 Gbps SFP it reports it is capable of 200, 400, 800 MBps which is code for 2, 4 or 8 Gbps.
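The mismatch boils down to a set intersection. A rough Python sketch (the MBps codes are from the switch output above; the rate sets for each SFP type are my assumption based on how tri-rate SFPs behave):

```python
# Brocade reports SFP capability in MBps codes: 200/400/800 means 2/4/8 Gbps.
def supported_gbps(mbps_codes):
    return {code // 100 for code in mbps_codes}

sfp_8g = supported_gbps([200, 400, 800])  # tri-rate 8 Gbps SFP: 2, 4, 8 Gbps
sfp_4g = supported_gbps([100, 200, 400])  # tri-rate 4 Gbps SFP: 1, 2, 4 Gbps
wdm_link = {1}                            # the client's WDMs run at 1 Gbps only

def can_link(sfp_rates, link_rates):
    """Return the fastest common rate, or None if the port cannot come up."""
    common = sfp_rates & link_rates
    return max(common) if common else None

print(can_link(sfp_8g, wdm_link))  # None - no common rate, the port breaks
print(can_link(sfp_4g, wdm_link))  # 1 - the 4 Gbps SFP can negotiate down
```

The empty intersection in the first case is exactly the "1 into 8 does not go" problem; swapping in 4 Gbps SFPs restores a common rate.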
But maybe I could force it to 1 Gbps?
Sadly all I did was break the port. A port in Mod_Inv status means the SFP is in an invalid state. This is not going to work.
So what to do? We could not just move the old SFPs into the new switch, as the new 8 Gbps capable Brocade switches only accept Brocade approved SFPs. The only solution was to make it right and swap four of the Brocade 8 Gbps SFPs with Brocade 4 Gbps SFPs. Fortunately as we needed only four, I was able to swap them with little expense or hassle (I contacted our local Brocade rep who happily helped us out).
The end point was a happy client and a lesson re-learnt..... 1 into 8 does not go.
I am curious though... is there much 1 Gbps gear still out there? Is this a common issue?
Over on Wordpress, I have just published an article on SNMP and XIV.
Given some funky formatting, I have decided not to paste it into this blog.
If you're interested in monitoring an XIV with SNMP, please head over to here:
A friend of mine sent me a direct message on Twitter that pointed out something interesting.... A blog post I had written on SDDPCM had been copied word for word by another site. A little bit of googling revealed that in fact it had been picked up by two sites. Here is the original, and the copies are here and here.
What bothered me was not that the content was copied without any obvious (well, obvious to me) attempt to acknowledge the original author. In fact in both cases, the copied text included a link to another blog entry I had written, so an alert reader would pick up that the content had come from someone else (still... a little acknowledgement doesn't hurt). To begin with, I was also not concerned with the re-use of my work. After all, I am writing this to be helpful, so if you think something I have written is helpful... and you spread the word... that work is even more helpful (but hey, that's what Twitter is for... right?). But then it occurred to me.... by copying the article without a link back to the original source (mine), if I find a mistake and update my blog post, those corrections will not flow to the clones. So this potentially undermines my efforts to be helpful.
I also noticed that in each case, the clones had advertisements by Google. Does this mean Google and/or these other bloggers are actually making money from copying my content? Hmmm... acknowledgement is one thing... a cheque is even nicer.
Or am I reading too much into this?
Still... message to Anthony... if you push content into the public domain you have to be prepared for this.
After tweeting about this, I did learn it is possible to insert sentences into your content that you could then monitor for with Google Alerts. I don't plan to do this myself, but it's certainly worth being aware of. This of course also presumes the cloners don't detect these sentences and delete them.
I am very curious to know of similar experiences. Has this happened to you? Did you do anything about it? Were you happy with the result?
When IBM first released the Storwize V7000, we announced it was capable of supporting ten enclosures, but would on initial release support only five. We stated that this restriction would be lifted in Q1.
The good news is that this restriction is indeed now lifted by the release of Storwize V7000 software version 22.214.171.124, which is available for download from here:
You should also check out this link:
Storwize V7000 6.1.0 Configuration Limits and Restrictions
This new level also contains an additional enhancement which I think users will really like, called Critical Fix Notification. The new Critical Fix Notification function enables IBM to warn Storwize V7000 and SVC users if we discover a critical issue in the level of code that they are using. The system will warn users when they log on to the GUI using an internet connected web browser. It works only if the browser being used to connect to the Storwize V7000 or SVC, also has access to the Internet. (The Storwize V7000 and SVC systems themselves do not need to be connected to the Internet.) The function cannot be disabled (which is a good thing) and each time we display a warning, it must be acknowledged (with the option to not warn the user again for that issue).
As I blogged previously, VAAI support for XIV has two dependencies:
- 10.2.4a code
- VMware certified driver
Both of these things are very close to release....
In the meantime I have had the chance to demonstrate the uncertified VAAI driver with XIV 10.2.4 code, just to see what effect it has.
And what is the effect?
VAAI dramatically reduces the amount of work that the vSphere 4.1 server needs to do to get things done.
The XIV implementation of VAAI provides the three fundamentals of VAAI:
- Full clone: copying data from one logical unit (LUN) to another without the data passing through the ESX server.
- Block Zeroing, assigning zeros to large storage areas without actually sending the zeros to the storage system.
- Hardware Assisted locking, locking a particular range of blocks in a shared logical unit (providing exclusive access to these blocks), instead of using SCSI reservation that locks the entire logical unit.
To test VAAI with XIV, I did two things: a VMDK migration (a Storage Vmotion) and VMDK cloning. I used the vSphere client to time how long the operation took and XIV Top to see how much IO was being generated by the vSphere server. Now please understand, these numbers and timings are based on a lab environment. The speed and peaks will vary from client to client and install to install.
Firstly the migration: I performed a migration of a VMDK from one data store to another. The migration without VAAI took 42 seconds as can be seen from the screen capture below:
The migration generated a peak of 135 MBps of traffic being written to the target volume as can be seen from XIV Top:
I then turned on VAAI and did the same migration. I won't document the process to install the VAAI driver, as it will be different when the certified version is released. However after the driver is installed, I could turn VAAI on and off by toggling these settings between 0 and 1:
We did another VMDK migration with VAAI enabled. This time the migration took 19 seconds (as opposed to 42 seconds), so an immediate improvement occurred.
When I checked XIV Top, there was no IO at all! In other words the vMotion was done with no apparent load on the vSphere HBAs or the SAN. I feel silly showing this screen capture, but this is what I saw.... nothing.
I then did a VMDK clone. The Data store was on XIV, VAAI was not enabled. There was no other IO running on the ESX server. The clone took 40 seconds (as reported by vCenter):
The clone generated a peak of 230 MBps for around 50 seconds (as reported by XIV Top)
We then again activated VAAI and repeated the clone. Now the clone took 15 seconds (as reported by vCenter), so that's 25 seconds faster (more than 50%).
The clone generated a peak of 2 MBps for around 20 seconds (as reported by XIV Top). Almost no fibre channel IO was thus generated by the clone.
As I have blogged before, I will be repeating this whole exercise once I have real live customers running this configuration, so expect further updates.
Things have been pretty revolting lately, and I am not talking about Tunisia or Egypt or Libya (though actually they could equally apply to my story).
What I am talking about is mother nature, and she is pretty angry with us right now.
In the last few months Australia and New Zealand have seen massive floods in Queensland, Victoria and Western Australia, destructive cyclones hitting Queensland and Western Australia, ferocious bush fires in Western Australia and most recently, a massive earthquake in New Zealand.
The personal loss of life and of property has been shocking and tragic. Each of these events has reminded me how quickly everything we hold dear can be taken away in an instant... by an event over which you have no control.
Which leads me to storage clouds....
If something can be stored electronically, then it can be stored in a cloud. A cloud that is hopefully well backed up, and far away from your own personal location. And no this is not an advertisement... it's a suggestion....
Given the events of the last few months, I have started using a storage cloud provider to protect my photos, my music and my insurance information.
I looked for cloud storage providers who:
- Offered a tool that when installed on my laptop/PC, automatically backs up the contents of selected folders. This means I don't have to remember to backup. It should happen automatically.
- Offered a way of accessing the backed up data from anywhere.
- Is reasonably priced.
I considered the following uses:
- Backup all my photos.
- Backup all my music.
- Digitize my insurance documents and back them up. Scan in all my receipts and some photos of the contents of each room of the house. That way if the house burns down... I have a base to work off.
- Scan in important documents that I could not easily replace.
Let me give you an example of a document I would never want to have to replace....
My son is practicing to get his driver's license. In Victoria you need 120 hours of driving experience recorded in a log book. This log book needs to be filled in every time he drives the car. If the log book is lost... those 120 hours would need to be driven again. I cannot tell you how hard it is to find 120 hours of driving opportunities (and I heartily support the 120 hours scheme!). Even if you did feel inclined to create fake entries to recreate the book (which is illegal), frankly creating 120 hours of fake driving log entries would be very hard work. To make things worse... where am I storing this booklet? In the car of course (which is the most convenient place to store it). So what happens if the car is stolen? There goes the logbook.... So the plan I work on is that every time a page is filled up, I scan that page as an image stored on my laptop. The image goes into a folder that is automatically backed up to the cloud. Yes it does depend on my being diligent, but the actual process of copying the file somewhere else is automatic. Now I have 3 copies... the original, the scanned image on my laptop and a third (automatically created) copy way off in the cloud somewhere.
As for personal recommendations:
1) Get 2 GB free on Dropbox. This is a great point solution and a great way to dip your toes in.
2) Get 1GB free on Google Docs. This is a great tool to share files with others.
3) Try 15 days free on Carbonite. These guys look like good value for money.
Are there others? Yes there are... Mozy is one I have seen recommended. There is also Amazon S3. I am sure there are plenty more....
Have there been issues with storage cloud providers? A quick search reveals stories like: Flickr deleted a user's data and Carbonite lost data due to hardware failure. Still... I have no plans to store my ONLY copy of data in the cloud. For me it's a backup medium... not a primary storage location.
Are you convinced?
Are you already using the cloud?
Or are you thinking its too expensive or too insecure?
Better still, have you already been saved by the cloud?
Oh... and my son? He is on 89 driving hours... 31 to go....
Shock horror, I am starting to question the value of my iPhone.....
Actually... I am pondering whether the iPhone is the ideal social media tool I thought it was.
Don't get me wrong, I love my little iPhone... but...
Should I stop using it to follow blogs and Twitter feeds?
This sounds crazy. After all, the iPhone is a perfect tool for both functions.
I can start up Google Reader while I am on the train; while sitting on an exercise bike at the gym; while waiting for a lift; in a taxi. It's perfect for filling in time and staying productive. Twitter is just the same.... the iPhone is a perfect tool for reading content.
So what's the problem?
The problem is that while using the iPhone for this purpose I rarely interact, I never create content, I rarely contribute to content. By this I mean I almost never comment on blogs and I hardly ever tweet. This is quite simply because I hate using the keyboard. More and more I leave both Twitter and feed reading to when I have time to actually interact via a real keyboard.
So I am curious... does anyone else feel the same way? Are there better devices out there for interaction? Or is it just my fat fingers and slow brain getting in the way?
I am starting the week in Adelaide, having travelled here to teach an XIV course:
SSA0AU XIV Technical Training.
I really enjoy teaching, particularly when the students are coming from a non-IBM background. It gives me the chance to better learn how IBM's products compare to our competitors, because the experiences and view points come from real end users. It also helps me to reconfirm my knowledge and understanding of our own products.
There is a very basic rule in IT: If you cannot explain a concept to someone else, you probably don't understand it yourself.
The course consists of a day of lectures and a day of labs (using the XIV labs in Montpellier, France). Here is the course layout.
- Unit 1 - IBM XIV Storage System
- Unit 2 - IBM XIV administration
- Unit 3 - Implementation and configuration
- Unit 4 - Host systems attachment and mappings using FCP
- Unit 5 - Host systems attachment and mapping using iSCSI
- Unit 6 - Copy Services
- Lab 0 - Lab setup and preliminary instructions
- Lab 1 - IBM XIV Storage Management: Installation
- Lab 2 - IBM XIV Storage Management: Configuration
- Lab 3 - Host definition and mappings: Attaching a Windows server to an XIV
- Lab 4 - Host definition and mappings: Attaching an AIX server to an XIV
- Lab 5 - Host definition and mappings: Attaching a Linux server to an XIV
- Lab 6 - IBM XIV configuration: Monitoring
- Lab 7 - IBM XIV Copy Services: Snapshots
- Lab 8 - IBM XIV Copy Services: Remote mirror
The idea is to teach all the concepts on day one and then let the students hit real machines in a remote lab environment on day two. The hands on part is always the best bit as far as I am concerned (learning by doing always beats learning by listening). Students who have never touched the XIV GUI always enjoy this part.
A bigger challenge is when you have a student who already has lots of hands on experience. In those cases I work to consolidate what they have already learned.
I am curious, how often do you wait so long to do a course, that there was not much left to learn by the time you actually got to do it?
Oh and please ignore this strange string: XQ983UH6VUFD
One of the more common themes I blog about is the demand for Visio Stencils. In this regard I have some more good news.
Firstly the VisioCafe site has an updated IBM-Disk stencil set with the following additions:
IBM-SystemStorage-Disk.vss - Added EXP5060 Front, Front Open and Rear Views
- Added DS8x00 Cabinet Rear Views
Secondly, I have started using the collections function on the IBM developerWorks site to share my files with the public. To this end I have posted a collection of all the standard Visio documents I use to build XIV Solution Designs. You can find them here.
They tend to use two basic building blocks, which are the Fibre Channel patch panel and the iSCSI patch panel. For instance this is what the iSCSI patch panel Visio diagram looks like:
And this is an example of what a Fibre Channel Visio diagram looks like.
You're free to use any of my diagrams for any purpose you like. If you have suggestions, comments or donations, please feel free to share. The link again is here.
After 3.5 years of reliable service, the 19" LCD monitor on my son's computer died... and would not power back on. With the warranty long since expired and replacement LCDs being relatively cheap, I replaced his monitor with a 22" LCD and he happily updated his Facebook status to suit.
Except there was a problem.... what to do with the dead monitor?
I had three choices:
- Put it in the back shed to collect dust.
- Disassemble it and shove the shredded carcase into the red bin for the weekly council rubbish collection (it being too large to just drop into the bin).
- Wait for the annual council hard garbage collection and place it out the front of our property with all the other unwanted eWaste.
But there was a further problem. This lovely sticker on the back made no mention of RoHS and made even more disturbing mentions of mercury!
- What to do with this monitor?
So I did the usual thing... I googled for a solution.
What I found was this site at Sustainability Victoria, which took me to this site which told me all about a program called ByteBack. One trip to Officeworks in Dandenong later and my dead LCD was off to be recycled at no charge to myself. Not only was my shed less cluttered, but I might even have helped the environment.
I would be curious to know if other people have been able to find similar programs in their locations? If so... please let me know, let's spread the word!
After seeing some tweets from @SFoskett regarding XIV support for VAAI, I thought I would supply an update with the information that I have to hand.
IBM added VAAI support in XIV code level 10.2.4. However a code fix has since been written for a VAAI related issue (I don't have the details), which means that we are delaying the official support till the release of 10.2.4a code (which should be early March).
In the meantime we are also waiting for the release of the VMware certified driver for VAAI and XIV, which should come out at the same time. So there are two dependencies.
As soon as both things are available, I will write a new post confirming this.
And then Steve's matrix found here will hopefully also be updated!
I have some very keen customers lined up to try it out, so I plan to blog about the results as soon as my clients have had some run time to generate meaningful stats.
Which tech related podcasts do you listen to?
Check out my latest blog post on Wordpress:
I think it had to happen eventually.
After much consideration I am moving to WordPress.
Want to know why? You can find out here:
In my last post I talked about the versions of XIV code and XIV Management GUI needed for QoS.
It leads to the question of how to match the Management GUI version to the XIV code version.... which goes with which?
Many storage products have management software that is separate from the software that runs inside the box (more commonly known as the firmware).
So for XIV, is there a best practice? What if I have multiple XIVs.... do I need them all to be on the same firmware version?
The good news is that you can simply use the highest available version of Management GUI, regardless of what code versions the XIVs are on.
In other words, if you see a new version of XIV GUI on the download site.... just upgrade to it.
Check out the screen capture below. This was taken with version 2.4.4 of the Management GUI (which is the latest at February 2011):
The XIV on the left is running the GA XIV code, version 10.0.0.a (which came out well over two years ago).
So I am able to run one of the oldest versions of XIV code, with the latest Management GUI. This is very good.
But why is the right hand machine greyed out?
To mix things up, the XIV on the right is running a brand new development (and thus un-released) version of code.
Because I am running a (relatively) older GUI version against a (relatively) newer XIV code version, the GUI is protecting us from a potential mismatch.
This means the GUI is always backward compatible, but is not always forward compatible.
Now speaking of greyed out, we saw a bug on previous versions of the GUI code, where a read-only user id would be confronted with a similar sight.
The machine would be greyed out and thus unmanageable. The work around was to use the older 2.4.2b build 10 version of the GUI.
The good news is that 2.4.4 contains the fix for this bug (so it's another reason to upgrade).
That is all for now. Don't forget you can always logon as p10demomode to get a demo mode view of the XIV GUI.
After a chat with my one of our business partners, I thought I would add some more info about XIV and QoS:
- You need your XIV GUI to be on version 2.4.4 and you need your XIV to be on code version 10.2.4. If you're not on those levels, you will need to upgrade before you can use this feature. The GUI is easy, just get it from here. The XIV code needs to be upgraded by IBM. Talk to your local IBM Rep if you need to get this done.
- You can control all aspects of QoS using XCLI. Why is this cool? Consider this idea: script the movement of hosts into and out of performance classes on a regular basis. You could have pre-set bandwidth levels based on time of day, or day of week, or end of month.
- This feature lends itself to multi-tenancy. By setting a maximum bandwidth value on each tenant, you can ensure that each tenant gets a fair share of the grid.
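The time-of-day idea could be sketched like this. A hypothetical Python wrapper - the performance class names, the schedule, and the command string are all illustrative (check the XCLI reference for the real command syntax):

```python
from datetime import datetime

# Hypothetical schedule: tighter limits during business hours.
SCHEDULE = [
    (range(8, 18), "limit_96MBps"),   # business hours: keep dev/test throttled
    (range(0, 8), "limit_250MBps"),   # overnight: let batch jobs run harder
    (range(18, 24), "limit_250MBps"),
]

def perf_class_for(hour):
    """Pick the performance class that applies at a given hour of day."""
    for hours, perf_class in SCHEDULE:
        if hour in hours:
            return perf_class
    return None

def xcli_command(host, hour):
    # Illustrative command string only, not verified XCLI syntax.
    return f"xcli perf_class_add_host perf_class={perf_class_for(hour)} host={host}"

def current_perf_class():
    return perf_class_for(datetime.now().hour)

print(perf_class_for(9))   # limit_96MBps
print(perf_class_for(2))   # limit_250MBps
```

A cron job (or scheduled task) could call something like this each hour and issue the resulting command against the XIV.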
I have been exploring some of the new features added in XIV firmware version 10.2.4.
Today I look at QoS (Quality of Service).
This new feature allows you to restrict how much IO (in IOs per second or IOPS) or throughput (in megabytes per second or MBps) an individual host can generate.
We do this by creating a new construct called a Performance Class. We can create up to four different Performance Classes and assign different hosts to each of those classes.
You can spot the new menu item under Hosts and Clusters:
The new panel contains a list of all your hosts and a new button at the top of the GUI panel called Add Performance Class. When you create the Performance Class, you can set either an IOPS or a Bandwidth Limit. It is also possible in a single Performance Class to set both an IOPS limit AND a Bandwidth Limit. In these examples we set just a bandwidth limit.
One small quirk is that the limits will be rounded to a multiple of the number of active interface modules, so don't be surprised if the numbers you enter are not the numbers that then appear. In the example below I enter 100 (for 100 MBps). However when the class got created, the value was rounded down to 96 MBps (since I have 6 active interface modules).
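That rounding behaviour is easy to model. A rough Python sketch, assuming the limit is simply rounded down to the nearest multiple of the active interface module count (which matches the 100 → 96 example above):

```python
def effective_limit(requested, active_interface_modules=6):
    """Round a QoS limit down to a multiple of the active interface modules."""
    return (requested // active_interface_modules) * active_interface_modules

print(effective_limit(100))  # 96 - matches the GUI example above
print(effective_limit(300))  # 300 - already a multiple of 6
```

With a partially populated rack (fewer active interface modules), pass the appropriate count and the multiples change accordingly.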
To prove a simple point, in this example we have created three Performance Classes, all of which limit bandwidth.
You can see by their names the limit they will impose on any host moved to that Performance Class.
The exercise I performed used an AIX LPAR with an Oracle workload generator, that generates a constant workload of 150 MBps.
The first step was to add the host to the 96 MBps Performance Class.
Then the fun began. Monitoring of the performance of the LPAR was done with XIV Top. We moved the LPAR between performance classes to see the effect on throughput of each class. All of this was done concurrently with no host interruption.
You can see from the output of XIV Top that as the performance class was changed, the throughput was gradually throttled back (or allowed up) to that level.
At the end of the process we then removed the LPAR from its Performance Class, returning it to an unrestricted state.
This effectively allowed it to move back up to 150 MBps.
So why is this important?
Some clients were concerned that non-production hosts (such as test and development servers) get an equal share of the XIV performance pie.
In general this is not an issue, as the grid architecture of the XIV handles competing IO from multiple sources very well.
However with the advent of very high performance machines, it is not outside the realms of possibility for an individual server to generate
over 80,000 IOPS or over 1,000 MBps. I have certainly achieved this during benchmarking. If you spin up several of these runaway
hosts simultaneously, you could saturate the grid and impact more deserving hosts.
So adding this feature makes sense.
Even more sensible: it is added at no extra cost via a non-disruptive code update.
Thanks again to Rob Jackard of the ATS Group
for supplying me with his excellent summary of support site updates.
Here are some major points to be aware of.
- Storwize V7000 and SVC users, please upgrade to version 126.96.36.199.
- DS3000, DS4000 and DS5000 clients, be aware of an 825 reboot cycle. Please read this tip very carefully.
- If you're installing 8Gbps-capable kit, be aware of the fill word setting; there are two relevant tips.
- XIV GUI 2.4.4 is available
(2011.01.27) IBM TechDoc- Power Systems SAN Multipath Configuration Using NPIV.
(2011.01.19) SDDPCM path selection may fail even though non-preferred paths are available.
DS3000 / DS4000 / DS5000
(2011.02.04) IBM Retain Tip# H193288-
DS3000, DS4000, DS5000 controllers will reboot every 825
(2011.02.02) IBM Retain Tip# H201983-
Recommended drive firmware upgrade – IBM Disk
NOTE: Disk Systems that may be affected-
DS4000 series, DS5000 series, DS3950 depending on installed disk drive
(2011.02.02) IBM Retain Tip# H196488-
DS3500/DS3950/DS5000 systems not working with Brocade on 8 Gbps host
NOTE: Brocade FC switches running firmware
6.2.0e or above are affected. May need to change the fillword setting from ARB
(2011.02.02) Copy Services User’s Guide –
IBM System Storage DS Storage Manager v10.70.
(2011.01.28) IBM TechDoc- DS8000 Host Ports
and Installation Sequence.
(2011.01.27) Data ONTAP 7.3.5 Gateway
(2011.01.27) IBM System Storage N series
Data ONTAP Matrix.
(2011.01.27) IBM System Storage N series
System Manager 1.1R1 Publication Matrix.
(2011.01.27) IBM System Storage N series
FRU (Field Replaceable Unit) lists.
(2011.01.27) Data ONTAP 7.3.5 Filer
(2010.11.07) Recommended port
fillword setting for 8Gbps Brocade switches with 8Gbps SAN Volume Controller or
(2011.01.17) Intel has reported Page
Fault or Corrupted Data using 64-Bit application in 64-Bit
IBM is working with their suppliers to qualify and release a new BIOS which
corrects this issue.
SVC / Storwize V7000
(2011.02.04) IBM TechDoc- EPIC and IBM
Storwize V7000 – Solution Overview and Performance
(2011.02.04) SAN Volume Controller Node HDD
Failures can Result in Host Multipathing Driver
NOTE: This mechanism was introduced by APAR
IC74194 in V188.8.131.52 PTF release. This APAR will also be included in a future
V6.1.0.x PTF release.
(2011.02.04) IBM SAN Volume Controller Code
(2011.02.04) SAN Volume Controller and Storwize V7000
Software Upgrade Test Utility V5.7.
(2011.02.03) IBM TechDoc- SVC Global
Mirror- A practical review of important parameters.
(2011.02.03) Cache Destage Issue in SVC
V184.108.40.206 and V220.127.116.11 Code May Result in an Incorrect Report that Host Data Has
Been Written Correctly.
NOTE: This issue is resolved in the
V18.104.22.168 PTF release.
(2011.02.03) IBM TechDoc- SVC 22.214.171.124 on
multiple Virtual SAN environment.
(2011.01.27) IBM TechDoc- SVC / Storwize
V7000 Performance Monitor svcmon.
(2011.01.27) IBM TechDoc- IBM Easy Tier on
DS8000, SVC and Storwize V7000 Deployment Considerations Guide January
(2011.01.25) IBM SAN Volume Controller Code
(2011.01.25) IBM Storwize V7000 Code
(2011.01.25) Storwize V7000 Node
Canisters May Shut Down or Reboot Unexpectedly During Normal
This issue has been fully resolved by APAR IC74088 in the V126.96.36.199
(2011.01.19) IBM TechDoc- DS8000 with SVC
SSPC / TPC /
(2011.01.27) TPC 4.2 Install with DB2
9.7, fails dbSchemaInstall on AIX and Linux.
(2011.01.25) IBM System Storage
Productivity Center Flash for Version 1.3.
(2011.01.24) Unstable TPC 4.1 Device
Server on AIX 5.3.
AIX APAR IZ52907 corrects a problem in AIX 5.3, where
the "getgrent_r" routine causes heap corruption due to an invalid "free"
(2011.01.21) IBM XIV Storage System
(2011.01.20) IBM XIV Storage System
This document previously was called the IBM XIV Storage System Theory of
(2011.01.20) IBM XIV Management Tools
(XIVGUI, XIVTop, XCLI), version 2.4.4 for all
(2011.01.20) IBM XIV XCLI (only) for
Linux / AIX / Solaris / HPUX, version 2.4.4.
(2011.01.12) Intel has reported Page
Fault or Corrupted Data using 64-Bit application in 64-Bit
The new BIOS level will be available with the XIV 10.2.4
The demand for Visio stencils is very strong, which says something about how popular Microsoft Visio is.
It's good because it means people are documenting their environments.
I am being asked for ETAs for stencils that are not yet in the IBM Disk zip file found on Visio Cafe.
The most recent update, on Jan 17, 2011, added more nSeries stencils, as can be seen here or as reproduced below:
IBM-Racks.vss - Added Rack HMC CR5 and CR6 Front and Rear Views
- Renamed existing HMC models to follow CR# naming scheme
IBM-SystemStorage-NAS.vss - Added N3400, N6060, N7700 and N7900 Front and Rear Views and Controller Modules
- Added EXN3000 Front and Rear Views
- Updated N3300 and N3600 Front Open Views with new disk sizes
I am getting a lot of requests right now for the Storwize V7000 stencils.
The good news is that I have them.
I am unsure why they are not on Visio Cafe yet, so I am posting them here as a zip file.
Just right-click on the link and choose Save As, then unzip and you're good to go.
// Edit Feb 3, 2011 - I changed the file to one with correct IBM naming //
It's been a while since my last blog entry... but don't worry (if you were). I have not given up on my blog.
I took three weeks off for vacation: to recharge my batteries and reconnect with my family.
It was a great break which sadly had to come to an end.
I have three rules when I go on leave:
- No checking work emails (which meant disabling Lotus Notes Traveller on my iPhone).
- No making work related phone calls (which meant setting a very clear voicemail greeting).
- No sneak peeks at work related web sites (including this blog).
The first few days were hard (without meaning to belittle the concept, it's a bit like battling an addiction),
but by the second week I had stopped thinking about work altogether and really started to enjoy myself.
Not that I don't enjoy working for IBM (I love it), but we all need to take regular breaks to stay sane.
Of course there is a penalty for ignoring your email for three weeks... about 1000 emails to go through on return.
The good news is... I am now ready for take off.
Let's have a great 2011!
And if you're feeling like this guy, maybe it's time for a break!
I was recently asked to travel to Taiwan to teach a Storwize V7000 class.
I have done a fair amount of travel with IBM, but this was my first visit to Taipei.
I must say I loved the energy and enthusiasm of the place.
While there I taught a new course: Storwize V7000 Implementation (SSE01)
For those who are familiar with IT education, IBM Systems and Technology Group (STG) Education comes in four forms:
1) Lecture Only
2) Lecture with simulated labs
3) Lecture with remote labs
4) Lectures with local labs
This particular course is the third type, meaning it consists of lectures combined with remote labs.
This means we connected via VPN to actual machines located in Raleigh, North Carolina.
We had access to multiple pods, where each pod has its own Storwize V7000, plus a DS3400, SAN switches and hosts running AIX and Windows.
In addition IBM Taiwan secured a demonstration Storwize V7000 for us to have in the classroom (to see and touch and do initial config on).
This was the first time the course had been offered in Taiwan and everyone was very excited to be part of it.
The agenda looked something like this:
• Unit 1 - Introduction to the IBM Storwize V7000
• Unit 2 – Enclosures and RAID Arrays
* Labs: initial config, accessing the GUI and CLI, configuring internal and external storage and defining attached hosts
• Unit 3 – Fabric Zoning
• Unit 4 – Thin provisioning and Volume Mirroring
• Unit 5 – Data Migration Facilities part 1
* Labs: Accessing storage from AIX and Windows hosts via fibre channel and iSCSI, Easy Tier Part 1, migration part 1
• Unit 5 – Data Migration Facilities part 2
• Unit 6 – Easy Tier
• Unit 7 – Managing IBM Storwize V7000
* Labs: Migration part 2, Thin provisioning, RAID options, Easy Tier part 2
By the conclusion of the course the students had spent many hours enjoying the new GUI, which I thought was really important.
Hands on operation of a system is the best way to learn.
Those that already knew SVC found the course quite easy, but everyone felt that they had learnt a lot.
The chance to use Easy Tier and do a live migration was really important.
I certainly enjoyed teaching the course and look forward to teaching it again!
For those who live in Australia, yes we are going to offer the course locally.
As soon as I have dates I will share them with you.
If you're using Microsoft Windows hosts with XIV, there are minimum required Microsoft hotfixes.
They are listed in the Host Attachment Kit (HAK) release notes, found here: HAK Release Notes
However, I have learnt that some additional fixes are recommended, particularly if you're planning an XIV firmware update:
•Windows 2003: http://support.microsoft.com/kb/950903
•Windows 2008: http://support.microsoft.com/kb/983554
So please check your systems to see if these hotfixes are installed.
If not, you should plan to schedule them on your next Patch Tuesday.
I have noticed something suspicious on the developerWorks forum.
Updates are coming from new users where:
- The users are in places like Zambia or Reunion (which in itself means nothing)
- The user name is always two words separated by a full stop (which in itself also means nothing)
- The user always quotes a previous post. This is the only style of post they create.
- The user always says something inane but not that useful.
Here are some examples of these users, note their locations are quite varied.
Their posts always come in one of three flavours.
1) They create posts with comments like "Thanks for sharing". Here are some examples:
2) They may instead complain about broken links (which are NOT broken), such as these two:
3) Finally, they may kind of ask a question, but without really referring to the material they reference, such as these:
I have sent messages to the majority of these 'people', but so far they have NOT responded, nor do I expect them to.
My long term plan is to simply delete their posts and ban their user ids.
What are they up to? I don't know.
But given that none of their posts ADD anything to the DW forum, I have no fear in REMOVING them.
So I have three requests from my audience:
- If you see such posts, please let me know. You can report as spam using the yellow triangle on the right side of their posts.
- If you think you know what these users may be up to, please let me know.
- If you are one of these users and you feel unhappy about the way I have portrayed you, then please let me know and I will make every effort to correct my mistake. I am honestly just trying to keep the forum 'healthy'.
It could be that I have become unnecessarily paranoid, but this pattern seems to be rather strong.
A final blog entry before Christmas, with a little Christmas present for all our midrange storage users.
The IBM System Storage Interoperation Center (SSIC) found here
now lists all of IBM's LSI-based
Midrange and Entry Disk systems. So you can now select VMware vSphere/ESX 4.1...
and then find products like the DS4700 and DS4800.
As you can see, the DS4700 has 5808 different configuration results, so hopefully your environment is in there.
If your environment is still not listed, please contact your IBM FTSS or Business Partner to ask for a SCORE request to get support.
It's been over a week since the Sydney Storage Symposium finished, and I now have the feedback results.
The symposium itself was a great success and was a fully booked event (a full house!).
We ran 48 separate sessions across two and a half days,
all of them focused on storage and how IBM provides solutions for this space (including things like Cloud and DR).
These included many lab sessions on SVC, TPC and XIV.
Sessions were also given by IBM partners including Brocade and Qlogic.
In addition Brocade, Qlogic and LSI each manned a vendor stall to share information about their products.
And to top this off, Alexis Giral from the IBM Storage Pre-Sales team in Sydney
manned a Storwize V7000 demonstration stand for the entire event.
Attendees were able to see, hear, touch and most importantly play with, an actual Storwize V7000.
Feedback was sought from every attendee for every session. Feedback was also sought for the entire Symposium.
Of the sessions, the majority scored a net satisfaction rating of 100% (which is an excellent result).
The overall satisfaction rating for the conference was 99%.
The most popular session was: "Introducing the Storwize V7000"
Some of the feedback given across all of the sessions included:
- Great symposium with more value than I expected
- The event was extremely well organised
- Concise and very knowledgeable, ........ is an extremely good presenter, his use of practical experience to relate to the topic is excellent
- Excellent presentation: flowed very well answering questions that previous points raised
- Fantastic as always. Thanks.
- Outstanding! Excellent presentation! Thank you
- Stellar. Excellent!
- Excellent presentation!!! Nothing to improve on.
- Excellent speaker & deep understanding of session.
- Very engaging speaker.
- Had a great session & very interesting topic to attend - thanks.
- Excellent - very knowledgeable & enthusiastic.
- Fantastic. Thank you!
- ....a superb presenter with real world experience.
- Very knowledgeable presenter able to share real-life situations to illustrate the topic
- Enthusiastic & enlightening - learnt lots! Thanks.
- This has been the most enjoyable & informative session yet. Most of what learned will put in practice - very good session.
- Best session ever! Brilliant.
- Wow!!! Excellent work - enjoyed the presentation.
- Very knowledgeable, fluent & interactive with attendees & fielded questions excellently!
- As always great presentation, room was listening very intently throughout presentation
Some of the suggestions given for future symposiums included:
- include more "best practice"
- include more labs
- provide wireless access for attendees
I personally presented four different presentations, which were:
- XIV Implementation
- DS8000 Update
- Storage Pools and Easy Tier
- SAN Best Practices and Problem Determination
The last session listed (SAN Best Practices) was my favourite. The room was packed to overflowing
and the audience was very interactive.
So with all of this feedback and experience under our belts we can begin planning for the 2011 event.
In 2011 we will be holding another IBM System Storage event, this time in Melbourne, so watch this space for more updates.
It's been a busy week!
We just completed our first IBM System Storage Symposium in Sydney.... and it was a great success.
Thanks to everyone who attended and presented.
Meanwhile... there are quite a few updates to the IBM Support site, and as usual Rob Jackard from the ATS Group
has created a great summary which I will reproduce here.
Things of particular note are:
- If you installed SVC code version 188.8.131.52 please update to 184.108.40.206 as per this issue
- The XIV MIB is available for direct download
There are plenty more very informative tips... please read on.
(2010.11.29) IBM AIX High Impact / Highly
Pervasive- An operation to change the MPIO preferred path of a LUN could
Users running with AIX 5.3-TL11 (APAR included in 5.3-TL11-SP5)- install APAR
Users running with AIX 5.3-TL10 (APAR included in 5.3-TL10-SP5)- install APAR
Users running with AIX 5.3-TL09 (APAR included in 5.3-TL09-SP8)- install APAR
(2010.11.02) IBM Techdoc- Power Systems SAN
Multipath Configuration Using NPIV.
DS3000 / DS4000 / DS5000
(2010.12.02) Command Line Interface and
Script Commands Programming Guide- IBM System Storage DS3000, DS4000, and
(2010.11.30) RETAIN Tip# H20790-
Restrictions using second ethernet port.
(2010.11.17) Installation and Migration
Guide for Hard Drive and Storage Expansion Enclosure – IBM System Storage
DS3000, DS4000 and DS5000.
(2010.12.02) DS8800 Code Bundle
(2010.11.19) DS6000 Microcode Release
(2010.11.03) DS Storage Manager limitation
(2010.12.03) Fabric OS Firmware v6.2.2d for
Brocade 4 Gigabit SAN Switch Module – IBM
(2010.11.22) Cisco 4 Gigabit SAN Switch
Module firmware v5.0.4 – IBM BladeCenter.
(2010.11.02) Cisco 4 Gigabit SAN Switch
Module firmware v4.2.3 – IBM BladeCenter.
(2010.11.02) Cisco 4 Gigabit SAN Switch
Module firmware v5.0.1a – IBM BladeCenter.
SVC / Storwize V7000
(2010.12.03) Potential Loss of
Access and Data Error When Performing I/O to Thin Provisioned (Space Efficient)
Volumes (Vdisks) With Used Capacity Greater Than 2
NOTE: This issue has been resolved by APAR
IC72825 in the V220.127.116.11 and V18.104.22.168 PTF releases.
(2010.11.27) SVC & Storwize
V7000- Management Information Base (MIB) file for
(2010.11.27) IBM Storwize V7000 Code
(2010.11.27) IBM System Storage SVC Code
(2010.11.26) IBM System Storage SVC Code
(2010.11.26) SAN Volume Controller
and Storwize V7000 Software Upgrade Test Utility.
(2010.11.26) Storwize V7000
Concurrent Compatibility and Code Cross Reference.
(2010.11.23) Potential Issue
Upgrading If Remote SVC Cluster Code Level is Below
(2010.11.22) IBM System Storage SAN Volume
Controller V6.1.0 – CIM Agent Developer’s Guide
(2010.11.22) IBM Storwize V7000 V6.1.0 –
CIM Agent Developer’s Guide [GC27-2292-00].
(2010.11.16) IBM Techdoc- Accelerate with
ATS: Introducing the IBM Storwize V7000.
(2010.11.12) SAN Volume Controller 2145-CF8
Node Solid-State Drives Must be Unmanaged before Upgrading to
(2010.11.12) IBM System Storage SVC Console
(2010.11.12) V22.214.171.124 – SAN Volume
(2010.11.12) V126.96.36.199 – IBM Storwize V7000
(2010.11.11) Guidance When Upgrading IBM
System Storage SAN Volume Controller Clusters That Contain 2145-CF8 Nodes With
(2010.11.11) SVC and Storwize V7000- CIMOM
Unable to Return List of More Than 4000 Remote Copy
(2010.11.11) Disk Space Low Warning When
Upgrading From SAN Volume Controller V5.1.0.x to
(2010.11.11) IBM System Storage SAN Volume
Controller 6.1.0 Configuration Limits and
(2010.11.02) IBM System Storage SAN Volume
Controller V5.1.0 – Customer Documentation.
(2010.10.28) Power and Cooling Requirements
for the IBM Storwize V7000.
(2010.10.22) IBM Storwize V7000 V6.1.0 –
Installable Information Center and Guides.
(2010.10.18) IBM Storwize V7000 V6.1.0 –
Troubleshooting, Recovery, and Maintenance Guide
(2010.10.18) IBM Storwize V7000 Quick
Installation Guide [GC27-2290-00].
(2010.10.18) IBM System Storage SAN Volume
Controller and Storwize V7000 V6.1.0 – CLI User’s Guide
(2010.10.18) IBM System Storage SAN Volume
Controller V6.1.0 – Software Installation and Configuration Guide
(2010.10.18) IBM System Storage SAN Volume
Controller V6.1.0 – Troubleshooting Guide
(2010.10.18) IBM System Storage SAN Volume
Controller V6.1.0 – Hardware Maintenance
(2010.10.18) IBM System Storage SAN Volume
Controller V5.1.0 – Software Installation and Configuration Guide
(2010.10.18) IBM System Storage SAN Volume
Controller Model 2145-CF8 Hardware Installation Guide
SSPC / TPC /
(2010.12.03) IBM Tivoli Storage
Productivity Center and IBM Tivoli Storage Productivity Center for Replication:
Flash for Version 4.2.
(2010.12.03) System Storage
Productivity Center Flash for Version 1.5.
(2010.12.02) TPC- IC72867: Storage
Optimizer shows error HWNOP0036E for SVC.
(2010.12.02) TPC-R buttons and
functions greyed out in browser GUI.
(2010.12.01) Collecting Data for TPC:
SRA Install and Run-Time problems.
(2010.12.01) TPC- Error WSWS3192E:
return code: (401) Unauthorized.
(2010.12.01) TPC- Probe fails with
error code SRV0081E.
(2010.11.29) TPC- Oldest Orphan Files
(2010.11.22) IC57994 IBM TotalStorage
Productivity Center support of IBM System Storage N Series Gateway
With TotalStorage Productivity Center v188.8.131.52,
support for the IBM System Storage N Series Gateway servers has been
(2010.11.15) Location of the TPC
InstallShield registry ‘IBM-TPC’ directory.
(2010.11.12) IBM Tivoli Storage
Productivity Center 4.2.1 GA (November 2010).
This release will show up as Tivoli Storage
Productivity Center 184.108.40.206 and Tivoli Storage Productivity Center for
(2010.11.12) TPC APAR Fix List –
(2010.11.12) 4.1.1 Interim Fix 2
(October 2010) for Tivoli Storage Productivity Center.
NOTE: This interim fix
(220.127.116.11) has an APAR fix for regression APAR IC70486. It supersedes IBM
Tivoli Storage Productivity Center 4.1.1 fix pack 5
(2010.10.25) TPC v4.2 Views
Documentation – IBM Tivoli Storage Productivity
(2010.10.22) Basic Administration and
Troubleshooting of DB2 for TPC 4.2.
(2010.10.20) TPC 4.2.x – Supported
Storage Products Matrix.
(2010.10.05) TPC 4.2.x – Platform
Support: Agents, Servers and GUI.
(2010.11.29) IBM XIV MIB file for
(2010.11.29) IBM Techdoc- Migrating an
SAP environment from legacy storage to IBM XIV Storage
(2010.11.02) Potential Problem to
10.2.2 & 10.2.2.a XIV Storage System that can be caused by changing system
time via Network Time Protocol (NTP).
NOTE: Many of the interoperability matrix files for specific storage systems will be sunset;
please begin to use and familiarize yourself with the System Storage Interoperation Center (SSIC):
Storage DS4000 series- Interoperability Matrix [Last
Storage DS5000 series- Interoperability Matrix [Last
IBM SAN Volume Controller (SVC):
Recommended Software Levels [Last
Supported Hardware / Software Levels [Last
SVC Restrictions [Last
Recommended Software Levels [Last Updated:09/15/2010]
Supported Hardware List [Last Updated:09/03/2010]
SVC Restrictions [Last Updated:09/23/2010]
Recommended Software Levels [Last Updated:08/05/2010]
Supported Hardware List [Last Updated:09/03/2010]
SVC Restrictions [Last Updated:11/17/2009]
IBM Storwize V7000:
Supported Hardware / Software Levels [Last
SVC Restrictions [Last
Latest NX-OS support for 4.2(7a) and 5.0(1a). Updated BladeCenter Cisco FCSM
support for NX-OS 4.2(3).
Customers that do not require an upgrade to NX-OS, but require field updates may
install SAN-OS 3.3(5).
(Brocade): [Last Updated:11/23/2009]
(McData): [Last Updated:11/06/2008]
I/O Server for Power Systems- Supported Environment:
IBM has published Storage Performance Council (SPC) SPC-2 sequential workload benchmark results for the System Storage DS8800.
The measured throughput of 9,705.74 SPC-2 MBPS™ (9.7 gigabytes per second) leads all other published SPC-2 results!
It's interesting to compare this result with the DS8300 results posted in 2006.
What it shows is how the introduction of new technology like POWER6, PCIe and SAS 2.0,
combined with the underlying architecture of the DS8000, has proved a winning combination.
Details are online at http://www.storageperformance.org/results/benchmark_results_spc2/#B00051
If you're not an IBM Business Partner (or IBMer), then this blog post is sadly not for you.
I just wanted to mention that IBM Business Partners can access some truly excellent XIV education on PartnerWorld.
Check out the link here: http://www-03.ibm.com/certify/tests/edu966.shtml
If you're planning to do IBM XIV Certification, the courses you can access from the link above are really excellent.
Thanks to Aaron Tully from Southern Cross Computer Services (SCCS) for pointing this one out to me.
(and good luck on your exam!).
In late 2008 my manager rang me with some exciting news.
I was to go to Tucson Arizona to do launch hardware training on a new product called XIV.
I was soon boarding a Qantas 747 for the long flight to the USA.
My training buddies were Hardware Specialists from all over the world.
Needless to say the XIV blew our minds. It was a total departure from what we were all used to.
Whether it was the data distribution method, the GUI, the licensing model, the rebuild times....
It was like every rule of design, of licensing and expectation of usability was being challenged.
I learnt what the term disruptive technology truly meant.
Once I was back in Australia I immediately began to run training sessions to spread the word.
Something new and exciting was on the way.
A dedicated sales team was formed, led by a remarkable live wire of a man called Steve Coad.
His first dedicated pre-sales resource was a dynamic Scotsman by the name of Derek Cowan.
It was no coincidence that both of them had previously worked at EMC.
Come January 2009 we had our first customer... and this was a huge achievement.
We were struggling with a phenomenal FUD campaign being run by our competitors.
The things they were saying were equally shocking and hilarious.
My favourite was that IBM were giving away free XIVs... vast numbers of them!
(this was before the first XIV had even shipped to Australia).
We learnt very quickly how to counter this FUD and deliver the facts.
And what a set of facts.... client after client would come up after presentations... truly impressed with our vision.
They were really excited about the benefits that this technology could deliver.
The months went by and sale followed sale.
Every new client was precious. Many of these customers had never bought any IBM Storage before.
Some had never bought ANYTHING from IBM.
I was involved with many of these sales, not only presenting and demonstrating as part of our pre-sales team,
but also implementing and supporting the clients after the sale. This continues to this day.
So why tell this story now?
Well... this week the Australia/New Zealand team sold our 101st XIV.
For our region this is a major milestone.
Many of these clients have set their entire strategy on XIV, because it delivers Tier 1 performance and saves them floor space, power, time and manpower.
And this translates straight to saving dollars....
So it's been a great journey so far. Thank you so much to every customer who has placed their trust in us and our technology.
And the XIV roadmap? Watch this space.... things just keep getting better.
Just a short update to say that Visio Cafe has some new IBM stencils.
IBM supply the stencils to Visio Cafe who make them available free of charge to all our customers.
You can use these stencils without acknowledgement or payment.
Clearly you still need to buy Visio from Microsoft.
The latest updates can be found here: http://www.visiocafe.com/ibm.htm
IBM-SAN.vss - Added SAN04B-R, SAN06B-R and Converged Switch B32 Front and Rear Views
- Added SAN384B Switch Front Door, Front and Rear Views and Interface Modules
IBM-SystemStorage-Disk.vss - Added DS3512 and DS3524 Front and SAS/FC/iSCSI Single/Dual Controller Rear Views
- Added EXP3512 and EXP3524 Front and Single/Dual Controller Rear Views
- Added DS5020 Front and FC, FC+FC and FC+iSCSI Rear Views
- Added EXP520 Front and Rear Views
// EDIT - Feb 3, 2011 //
// EDIT - May 24, 2011 //
Storwize V7000 stencils are now on VisioCafe
Being a person who walks dogs, visits the gym and uses public transport.... I have plenty of time to listen to podcasts.
The main challenge being that while listening to a podcast.... you actually need to LISTEN.
On more than one occasion I have zoned out, missed something interesting and suddenly thought... what did he/she just say?
One podcast that is a favourite of mine is "Security Now" with Steve Gibson. You can find it here;
I highly recommend it (you won't zone out while listening to it).
This week's episode (episode 274) discusses two themes that keep cropping up again and again:
- Yet another zero-day vulnerability in Internet Explorer (IE).
- The need for important websites to default to HTTPS rather than HTTP (due to the appearance of FireSheep).
In this podcast Steve discusses how to protect yourself against this latest IE vulnerability, including using this technique: "Of course you could also just not use IE, which would be a fantastic solution"
I heartily agree with Steve, and I was pleased that this year IBM chose to standardise on the Firefox browser (read that story here).
One of the stated reasons is that Firefox is more open-standards compliant.
So its nice to see that the new Storwize V7000 (and SVC 6.1) Web based management GUI uses both of these things, in that:
- It works perfectly with Firefox (and Google Chrome).
- It defaults to HTTPS even if you start the GUI using HTTP.
Amusingly if you use IE, you will get this rather cheeky comment on the logon panel.
This makes it a simple, safe and secure GUI that uses industry standard best practice.
Please note that you can still choose to use IE... and it will work perfectly. It's just not our recommended browser.
Of course if you use the CLI, it will also be secured using SSH v2 public/private key encryption (as the SVC has always done).
My hearty recommendation of Steve Gibson's work, like everything I express in this blog, is my personal opinion and not that of my employer.
I had a great time yesterday running a one day seminar for IBM Business Partners on the Storwize V7000.
Interest was so strong we had to change locations to get a larger room... and then we had to ask for an even larger room!
It was a very positive session with lots of great questions.
I really like it when I get questions... it means people are awake, listening, interested and, more importantly, THINKING.
Even more exciting: IBM announced today general availability (GA) of the new IBM Storwize V7000 mid-range disk system.
We have started shipping across multiple geographies around the world, including:
Australia, Bolivia, Denmark, Germany, India, Japan, Korea, the Netherlands, Romania, Saudi Arabia,
South Africa, Sweden, the United Kingdom and the United States.
Don't hesitate to contact your IBM Sales Rep or BP and ask for a demo.
It's been a busy few weeks.
I just spent a week in RTP North Carolina, with the STG Education team.
We ran through our first "Implementing the Storwize V7000" course in a "Teach the Teacher" format.
It was a lot of fun and I met some great fellow IBMers.
It gave me a great opportunity to drive the Storwize V7000 GUI and explore all the new possibilities it opens up.
First up... the GUI is fantastic. Don't be fooled by the XIV icons; it's the smarts behind what the GUI does that makes it so powerful.
It's a 21st-century GUI following very strong principles of usability and simplicity.
Talking to client after client about this product, I get lots of great questions.
Two questions I get asked on a regular basis about Storwize V7000 are:
1) What is the smallest number of SSDs I can purchase?
The answer is that you can purchase just one. However, with one disk you don't get any RAID.
So it's better to buy two SSDs for a RAID1 pair. If you buy three SSDs you can form a RAID5 array.
2) Will the Storwize V7000 enforce the creation of hot spares?
The answer is that the pre-sets the GUI offers will suggest the creation of spares.
For every 23 array members of the same drive class on a single SAS chain
that are not RAID 0 members, a single spare is created.
However the GUI will also allow you great flexibility.
You can specify that a smaller or larger number of spares get created.
You can choose to create NO hot spares at all.
You can convert a hot spare drive into a candidate drive (a 'free' drive).
You can convert a candidate drive into a hot spare drive.
You can set a 'spare goal' to set a minimum number of spares that need to exist (or an event will be logged).
So what you get is a great level of flexibility.
Either follow the pre-sets and get IBM best practice... or choose your own desired spare levels.
If you choose to create no SSD spares, the Storwize V7000 will use spinning disks to rebuild a failed SSD.
Then, when the failed SSD is replaced, the contents of that drive will be failed back.
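To make the spare pre-set concrete, here is a rough Python sketch of the one-spare-per-23-members rule described above. The function name and the ceiling logic are my own illustration of how the suggestion could be computed, not IBM's actual implementation.

```python
import math

def preset_spares(array_members: int) -> int:
    """Sketch of the GUI pre-set: one spare per 23 array members of the
    same drive class on a SAS chain (RAID 0 members excluded).
    Illustrative only -- not IBM's actual code."""
    if array_members <= 0:
        return 0
    return math.ceil(array_members / 23)
```

So under this reading, a chain with up to 23 members gets one suggested spare, and a 24th member pushes the suggestion to two.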
In a previous blog entry I mentioned a new iPhone and Blackberry app that gives you info on IBM Storage.
I actually now have three IBM supplied iPhone apps that you can get through the Apple Store.
The dW app is a social networking app that lets you interact with your contacts on the IBM developerWorks website.
I didn't realize that IBM effectively had its own social networking site..... but that's exactly what the developerWorks site is!
For more information, check out the October 13 developerWorks Podcast, available here.
There is more information here.
The IBM Storage and IBM System x iPhone apps are very similar in design and layout.
They both list product types by family, giving specifications for each machine type.
For example these are the specifications listed for the Storwize V7000.
For each product you also get a Description page and Web link pages.
You also get links to Facebook, YouTube, Twitter, LinkedIn and other contacts.
There are still some areas where things can be improved.
Not all of the products have their specifications listed yet.
They instead direct you to the web.
Nevertheless, I think this is a great start. It shows IBM's commitment to both social media
and to being as informative, open and communicative with our customers as possible.
As for Android users, we are listening...
Expect an Android version hopefully before the end of the year.
Oh.... and to find these apps... just open the Apple iStore and search for IBM.
I have been getting a lot of requests for Storwize V7000 BTU values.
You can now get them from here.
What's interesting is that we have published two different values:
- Maximum power consumption and heat output.
- Measured values while performing a typical workload.
This is a good start towards providing more useful values for comparisons.
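If a spec sheet lists power in watts and you need BTU per hour for a comparison, the conversion is a fixed constant. A trivial helper, using the standard factor of roughly 3.412 BTU/hr per watt:

```python
def watts_to_btu_per_hour(watts: float) -> float:
    # 1 watt is approximately 3.412 BTU/hr (standard conversion)
    return watts * 3.412
```

So, for example, a workload measured at 500 W works out to about 1,706 BTU/hr.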
IBM is a member of the Green Storage Initiative which has created the SNIA Emerald site.
To quote from their website:
"The purpose of the SNIA Emerald is to provide a fair and equitable measurement of storage system power usage
and efficiency through use of a well-defined testing procedure."
To learn more it is well worth listening to the Infosmack podcast here.
I got asked this question during the week, so I thought I would share the answer with you.
The question was: Can I create a full volume copy of an XIV snapshot?
The answer of course was YES... but here is why...
An XIV has two copy concepts:
For a particular volume you can create both snapshots.... and you can also create full volume copies.
You can actually create as many of each as you wish (more than 1,000).
A snapshot is space efficient. It uses redirect-on-write and it uses space only when blocks change on the source.
A full volume copy, on the other hand, uses the same amount of space as the source (it's like an old-fashioned FlashCopy).
But here's the trick: it's all managed with metadata, which makes it very efficient.
In this example I have a source volume (cleverly called "Source Vol") that has one snapshot.
You can see there is 7 TB of actual data in that volume.
I also have a space efficient snapshot (called "snapshot_00001").
I want to 'harden up' the Snapshot and convert it to a full volume.
So I right click on the snapshot and select the "Copy this Snapshot" option.
I am asked where to copy the volume to.
I choose the volume called "Target Vol", which is currently empty (unformatted).
The copy process occurs in the background, but the "Target Vol" immediately looks exactly like the snapshot.
You can see this in the next screen capture, where the "Target Vol" suddenly contains 7 TB.
I have successfully converted a snapshot into a full volume.
I could then duplicate the snapshot (creating a snap of the snap) by selecting the snapshot and choosing "Duplicate".
What I then see is a second snapshot with exactly the same creation date and time as the first one.
What this means is that "Target Vol" and "snapshot_00002" are both children of "snapshot_00001"
But now I delete snapshot_00001.
Why is this significant?
Because "Target Vol" is unaffected (regardless of how far the copy process has gotten to) and
snapshot_00002 is unaffected (even though it was a child of snapshot_00001).
What this demonstrates is the powerful way XIV has implemented metadata.
And what it means for users is maximum flexibility. Delivered by XIV.
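The independence shown above can be sketched with a toy model. This is purely my own illustration of the metadata idea (each snapshot or copy carries its own block map, so no child ever dereferences its parent), not XIV's actual internals:

```python
class Volume:
    """Toy model: a volume is just a block map (logical -> physical)."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)  # each volume owns its own map

    def snapshot(self):
        # redirect-on-write style: copying the map is cheap metadata
        # work; no data blocks are copied
        return Volume(self.blocks)

source = Volume({0: "p0", 1: "p1"})
snap1 = source.snapshot()
snap2 = snap1.snapshot()    # duplicate: a snap of the snap
target = snap1.snapshot()   # full copy starts from the same map

del snap1  # delete the parent snapshot...

# ...and both children still resolve every block, because each one
# carries its own complete map
assert target.blocks == {0: "p0", 1: "p1"}
assert snap2.blocks == {0: "p0", 1: "p1"}
```

The design choice the model highlights: because the metadata is copied rather than chained, deleting any snapshot in the family never orphans another.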
If you're considering a midrange storage solution, there are many choices and many vendors.
I often get asked: Why IBM? What makes your solution special?
So when considering Storwize V7000, consider this:
The 24 disk Storwize V7000 enclosure currently offers four different disk types:
- 300 GB Solid State Drive (E-MLC SSD) with SAS version 2 interface (6 Gbps)
- 300 GB Spinning Disk (10K RPM) with SAS version 2 interface (6 Gbps)
- 450 GB Spinning Disk (10K RPM) with SAS version 2 interface (6 Gbps)
- 600 GB Spinning Disk (10K RPM) with SAS version 2 interface (6 Gbps)
That's four choices and 24 slots per enclosure. But the questions are:
- Can I mix and match?
- Do I need to order these in 8 packs or 16 packs?
- Do I need a separate enclosure for each disk type?
- What are the rules?
The answer is: There are no rules.
You can order drives in any quantity you like and you can mix drives in an enclosure in any order you like.
So this is a very flexible set of choices.
Now there are rules on RAID arrays, but these are equally flexible.
For instance, a RAID10 array can contain anywhere from 2 disks (which is practically RAID1) to 16 disks.
RAID5 arrays can use anywhere from 3 to 16 drives, while RAID6 uses 5 to 16 drives.
To summarize the member-drive ranges:
- RAID 0: 1 – 8 drives
- RAID 5: 3 – 16 drives
- RAID 6: 5 – 16 drives
- RAID 10: 2 – 16 drives
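Those ranges can be captured in a tiny validation helper. The level names and bounds here are my reading of the rules in this post (I take the 1 – 8 range to be RAID 0); this is an illustration, not an IBM API:

```python
# My reading of the member-count rules -- illustrative, not an IBM API.
RAID_MEMBER_RANGES = {
    "raid0": (1, 8),
    "raid5": (3, 16),
    "raid6": (5, 16),
    "raid10": (2, 16),
}

def valid_array(level: str, drives: int) -> bool:
    """Check whether a drive count is allowed for a given RAID level."""
    low, high = RAID_MEMBER_RANGES[level]
    return low <= drives <= high
```

For example, a 4-drive RAID6 array would be rejected (the minimum is 5), while a 2-drive RAID10 is fine.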
So when making that evaluation, ask our competitors: What are your rules when ordering disks? How restrictive are you?
The Storwize V7000 answer is: There are no rules.
Just a quick note about using Oracle Solaris with IBM XIV (I so want to say "SUN Solaris", I need to retrain my brain).
When using IBM XIV with Solaris, you need to install IBM's XIV Host Attachment Kit (delightfully called a HAK).
This is to ensure multi-pathing is correctly configured (regardless of whether you're using DMP or MPxIO).
The relevant software, release notes and instruction guides are found here
Anyway... the whole point of this blog entry is to correct a shortcoming in the release notes.
They currently fail to mention some minimum system requirements.
I am getting this corrected, but until then... please note the following:
1. The HAK for Solaris 10 supports only Solaris 10 U4 and greater (also referred to as Solaris 10 08/07).
This means that if, for instance, you're on 11/06 (update 3) or 03/05, you will first need to perform a Solaris update.
2. The following patch is mandatory for Solaris 9/SPARC: 118462-03 (it's a prerequisite for HAK installation).
3. Solaris 9 is supported for SPARC only.
4. Solaris 8 is not supported.
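As a quick pre-flight sanity check, the requirements above boil down to a few conditionals. This little function is hypothetical (the HAK installer does its own checking, and the Solaris 9 patch prerequisite is not modeled); it simply encodes the list:

```python
def hak_supported(solaris_version: str, update: int = 0,
                  arch: str = "sparc") -> bool:
    """Encode the minimum-requirements list above (illustrative only).
    Does not model the mandatory Solaris 9 patch 118462-03."""
    if solaris_version == "8":
        return False                 # Solaris 8 is not supported
    if solaris_version == "9":
        return arch == "sparc"       # Solaris 9: SPARC only
    if solaris_version == "10":
        return update >= 4           # needs U4 (08/07) or later
    return False
```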
There has been a lot of chatter in the blogosphere about so-called vendor blockers (I think you can shorten that to vblock).
This is the idea that a pre-blessed solution is the safest way to build infrastructure in the data centre.
I can see the attraction, but I suggest the best way to get there is with the vendor who is most willing
to work with the widest range of their competitors.
And you cannot get much wider than IBM SVC.
IBM has been supporting SVC in a mixed OS and hardware environment since 2003.
The IBM Storwize V7000 inherits all of the interoperability testing done over 7 years with IBM SVC.
This is an astonishing way to bring a new product into the marketplace.
I cannot think of a new midrange entrant that has somehow managed to get the same level of
interoperability testing and ISV integration on the very day of its birth.
When you look at the picture below you can see the depth of IBM's support matrix for SVC and its
cousin, the Storwize V7000.
I have on occasion raised a laugh from an audience when I describe IBM SVC as a vendor independent virtualization layer.
But the picture doesn't lie... with the IBM Storwize V7000 or IBM SVC... there is no vendor blocking.
Plus, with the ability to move your data out of the IBM virtualization layer (using the migrate to image command), you can
remove IBM from the picture at any time... meaning no vendor blocking... and no vendor locking.
It's that time again! Rob Jackard from the ATS Group has shared with me his list of IBM Storage related updates.
Storwize V7000 already has a huge amount of great material out there, links are below.
Have a quick look... you may see links that are relevant to you!
(2010.10.13) Support Matrix for Subsystem Device Driver, Subsystem Device Driver Path Control Module, and Subsystem Device Driver Device Specific
(2010.10.12) IBM TechDoc- WWPN Determination for IBM Storage v6.0.
(2010.10.07) IBM TechDoc- Typical VIOS Network Configuration in Production Environment.
(2010.10.05) IBM TechDoc- Power Systems SAN Multipath Configuration Using NPIV.
(2010.09.06) IBM SDDPCM- Open HyperSwap status may report incorrectly via the Tivoli Storage Productivity Center for Replication GUI. If the HyperSwap was incomplete and then another unplanned HyperSwap occurs, both copies of the data will be corrupted.
DS3000 / DS4000 /
(2010.10.13) RETAIN Tip# H194697- SATA drive hangs or is not ready after power cycle.
(2010.10.06) RETAIN Tip# H196488- DS5000 systems not working with Brocade on 8 Gbps host
(2010.09.29) RETAIN Tip# H197680- VIOS 2.1.3 support for BladeCenter hosts removed from System Storage Interoperation Center (SSIC) web site - IBM System Storage DS3512 (Type 7146), IBM System Storage DS3524 (Type
(2010.09.23) DS5100, DS5300 customer responsibilities for code installation.
(2010.10.18) IBM TechDoc- IBM Handbook Using DS8000 Data Replication for Data Migration.
(2010.10.16) System firmware upgrade may be required for N7600 / N7800 storage systems with Data ONTAP 7.2 preloaded.
(2010.10.15) N3300/N3600 BMC (Baseboard Management Controller) firmware upgrade does not occur automatically during Data ONTAP 18.104.22.168
(2010.10.14) NEWS: Recommended Releases for IBM System Storage N series Data ONTAP.
(2010.10.13) Potential Issues when Operating and/or Upgrading Code with 7 or more GPFS filesystems, when Operating with SoNAS R22.214.171.124-7a Code and
(2010.10.04) Risk minimization recommendations for EXN1000 AT-FCX modules running disk shelf firmware prior to
(2010.09.10) Data ONTAP 8.0 7-Mode Gateway
(2010.09.01) IBM System Storage N series Data ONTAP Matrix.
(2010.08.31) Service Image 30801483 (BIOS 1.7 and Diagnostics 5.3.8) for N7000 Series Publication
(2010.09.27) Brocade: Features removed from Web Tools effective with version 6.1.1.
(2010.09.24) Fabric Manager upgrade required to manage switches with new Cisco address
NOTE: Cisco’s Field Notice #63302 provides
(2010.09.08) IBM SAN b-type Firmware Version 6.x Release Notes.
NOTE: IBM recommends that Open System customers that currently use FOS 6.1 or earlier limit migration to FOS 6.2.2b, 6.3.0d, 6.3.1a, or 6.4.0a
NOTE: Customers that have already migrated or that own products with FOS 6.2 or 6.3 versions preinstalled are
SVC / Storwize
(2010.10.19) IBM System Storage SAN Volume Controller 6.1.0 Configuration Limits and Restrictions.
(2010.10.19) IBM Storwize V7000 6.1.0 Configuration Limits and Restrictions.
(2010.10.18) IBM Storwize V7000 Product
(2010.10.14) IBM Storwize V7000 Information
(2010.10.07) SAN Volume Controller and related software- Support Statement.
(2010.09.17) Incorrect 2145-8G4 System Board Part Number on SVC.
NOTE: This problem is fixed in PTF V126.96.36.199 and later versions.
(2010.09.17) Limit on Size of
NOTE: This issue has been addressed by APAR IC61106 in the V188.8.131.52 PTF.
(2010.09.17) Potential Issue when Modifying Remote Copy Configuration After Upgrading to SVC
NOTE: This issue has been addressed by APAR IC60186 in the V184.108.40.206 PTF.
(2010.09.17) SVC Embedded CIMOM Process Consuming Excessive CPU Resources.
NOTE: All SVC V4.3.1.x customers are advised to upgrade to V220.127.116.11 or later to address this
(2010.09.17) Incorrect 2145-8G4 Node Hardware Shutdown Temperature Setting in
NOTE: This issue was resolved by APAR IC60083 in SVC V18.104.22.168.
(2010.09.17) Potential Issue during SVC Code Upgrade to V22.214.171.124 when Running Global Mirror.
NOTE: This issue is resolved in the
(2010.09.17) Space-Efficient VDisk may be taken offline when used capacity exceeds 1022 GB.
NOTE: This issue was resolved by APAR IC58563 in SVC V126.96.36.199.
(2010.09.17) Management Information Base (MIB) file for SNMP.
(2010.09.17) SAN Volume Controller Software Upgrade Test Utility.
NOTE: The utility release level is
(2010.09.16) SAN Volume Controller Concurrent Compatibility and Code Cross Reference.
(2010.09.15) IBM System Storage SVC Code
(2010.09.15) SVC V4.2.x End of Service: September 30, 2010.
As previously announced on Sept. 8, 2009, IBM has withdrawn support for SAN Volume Controller V4.2.x on Sept. 30, 2010.
SSPC / TPC /
(2010.10.12) Collecting Data for: SSPC
(2010.09.29) TPC basic database maintenance steps- tutorial.
(2010.09.21) IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication: Flash for Version 4.2.
(2010.09.10) System Storage Productivity Center Flash for Version 188.8.131.52.
(2010.09.10) System Storage Productivity Center Flash for Version 1.4.
(2010.10.07) IBM TechDoc- Recommended Best Practices Considerations for High Availability on IBM XIV Storage
(2010.09.13) IBM XIV Storage System
(2010.09.06) IBM XIV Management Tools (XIVGUI, XIVTop, XCLI) version 2.4.3.b for all
(2010.09.06) IBM XIV XCLI (only) for Linux/AIX/Solaris/HPUX, version 2.4.3.b.
NOTE: Many of the