Anthony's Blog: Using System Storage - An Aussie Storage Blog
Just a short update to say that Visio Cafe has some new IBM stencils.
IBM supply the stencils to Visio Cafe who make them available free of charge to all our customers.
You can use these stencils without acknowledgement or payment.
Clearly you still need to buy Visio from Microsoft.
The latest updates can be found here: http://www.visiocafe.com/ibm.htm
In part three of my series on AIX and XIV, I will explore the recommended configuration changes you should make to AIX when attaching XIV disk.
So let's get started:
lsattr -El fcs0
Two of the attributes will look like this:
max_xfer_size 0x100000 Maximum Transfer Size                              True
num_cmd_elems 200      Maximum number of COMMANDS to queue to the adapter True
lsattr -El fscsi0
Two of the attributes will look like this:
dyntrk       no           Dynamic Tracking of FC Devices        True
fc_err_recov delayed_fail FC Fabric Event Error RECOVERY Policy True
I suggest you change these values as follows:
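The chdev commands themselves did not survive the original formatting, so here is a sketch of what I mean (adapter names fcs0/fscsi0 come from the lsattr examples above; the max_xfer_size, num_cmd_elems and fast_fail values match the chhba example later in this post, while dyntrk=yes is an assumption on my part):

```shell
# -P updates the ODM only; the changes take effect at the next reboot
chdev -l fcs0 -a max_xfer_size=0x200000 -a num_cmd_elems=2048 -P
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes -P
```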
lsattr -El hdisk26
Two of the attributes will look like this:
max_transfer 0x40000 Maximum TRANSFER Size True
queue_depth  32      Queue DEPTH           True
I suggest you change these values as follows:
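Again as a sketch, the equivalent chdev for the hdisk (the queue_depth, max_transfer and round_robin values match the chxiv example later in this post):

```shell
# -P updates the ODM only; reboot (or rmdev/mkdev the disk) to apply
chdev -l hdisk26 -a queue_depth=64 -a max_transfer=0x100000 -a algorithm=round_robin -P
```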
When the LVM receives an I/O request, it breaks the I/O down into what are called logical track group (LTG) sizes before it passes the request down to the device driver of the underlying disks. The LTG is the maximum transfer size of a logical volume (LV) and is common to all the LVs in the volume group (VG). The LTG size of a VG cannot be larger than the smallest max_transfer size of all the hdisks that make up that VG, so by increasing max_transfer we allow the maximum LTG size on each VG to be larger.
You can display the LTG size by using the lsvg command against the relevant VG.
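The constraint described above can be sketched numerically (the max_transfer values below are illustrative, not taken from a real system):

```shell
# Find the smallest max_transfer among the hdisks in a VG; that value
# caps the LTG size of the VG
max_transfers="0x100000 0x40000 0x100000"
min=$((0x1000000))
for m in $max_transfers; do
  v=$((m))
  if [ "$v" -lt "$min" ]; then min=$v; fi
done
printf 'Largest possible LTG size: 0x%x bytes\n' "$min"
```

In this sample, the 0x40000 disk caps the whole VG, which is exactly why we raise max_transfer on every hdisk before building the VG.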
AIX XIV Utils
root@testserver [/home/anthonyv/aix_xiv_utils-2.0/bin] # ./lshba -x
We use the chhba command to change the fcs and fscsi attributes, issuing a single command to change all the HBAs at once. This command only changes the ODM, so we need to reboot for the changes to take effect. Note that you need to type yes when prompted for the script to run.
root@testserver [/home/anthonyv/aix_xiv_utils-2.0/bin] # ./chhba -d -f fast_fail -m 0x200000 -n 2048 -P
yes
In this example we display the relevant attributes of all the XIV hdisks. The settings in this example are NOT correct (queue depth still 32 and max transfer size still 0x40000) so we need to use the chxiv command to correct them.
We use the chxiv command to change the attributes of every XIV hdisk. This command only changes the ODM for XIV disks, so we need to reboot for the changes to take effect. Note that you need to type yes when prompted for the script to run.
[/home/anthonyv/aix_xiv_utils-2.0/bin] # ./chxiv -r 64 -m 0x100000 -P
Change algorithm to round_robin with a queue depth of 64 for these disks? yes
Getting new XIV disk information...
AIX_SIZE(MB) ALGORITHM Q_DEPTH SERIAL
Conclusions and gotchas
So at the conclusion of this process, you should have an AIX system with settings better suited to XIV disk. There are a couple of gotchas.
1) HBAs being used for tape. The chhba command will not change HBAs in private loop mode. This is to prevent errors like this:
Date/Time: Thu Feb 4 11:59:08 2010
Sequence Number: 250846
Machine Id: 00CB6AC44C00
Node Id: us04od03
Resource Name: fscsi3
SOFTWARE DEVICE DRIVER
SOFTWARE DEVICE DRIVER
INCORRECT HARDWARE CONFIGURATION.
IDENTIFY OFFENDING SOFTWARE COMPONENT
VERIFY SYSTEM CONFIGURATION IS VALID
REFER TO PRODUCT DOCUMENTATION FOR ADDITIONAL INFORMATION
2) Your queue depth settings may still not be deep enough. Periodically run iostat -D 5 and if you notice avgwqsz or sqfull consistently non-zero, increase the queue depth (you can go up to 256). Don't be tempted to start at 256 and work down; you may flood the XIV with commands. For the vast majority of clients, 64 is a good number.
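The tuning rule can be sketched like this (the sqfull value is an illustrative stand-in, not real iostat output):

```shell
# Start at 64 and double only while iostat keeps showing queue pressure;
# never start at 256 and work down.
queue_depth=64
sqfull=12              # stand-in for a non-zero sqfull seen in 'iostat -D 5'
if [ "$sqfull" -gt 0 ] && [ "$queue_depth" -lt 256 ]; then
  queue_depth=$((queue_depth * 2))
  if [ "$queue_depth" -gt 256 ]; then queue_depth=256; fi
fi
echo "queue_depth is now $queue_depth"
```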
3) Do you need to use these scripts? No you don't. You can use smit or the command line to change attributes. Do you always need to reboot? No you don't, but you will need to change the relevant devices to a Defined state to change them. For instance, you could change the queue depth on an hdisk with the commands below, but only if the hdisk is not part of an online volume group. It remains easier to just change the ODM and reboot for the changes to take effect.
rmdev -l hdisk25
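For completeness, a sketch of the whole offline-change sequence the paragraph above alludes to (hdisk25 and a queue depth of 64 are just illustrative, and again only do this when the disk is not in an online volume group):

```shell
rmdev -l hdisk25                      # put the disk into a Defined state
chdev -l hdisk25 -a queue_depth=64    # change the attribute directly (no -P needed)
mkdev -l hdisk25                      # bring the disk back to Available
```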
IBM have offered Enterprise Storage Virtualization since June 2003 with the IBM SAN Volume Controller (SVC). October 2010 saw IBM release the Storwize V7000, taking the SVC code and packaging it into a midrange disk product. So now you have four possible choices:
The great thing is that all four choices are valid and all four choices work just fine.
The short answer: YES!
We have a great many customers happily doing this, so I thought I would share some common questions I get around configuration. Firstly, there is an InfoCenter page on this, which you will find here. Secondly, there is a debate about whether we should create individual volumes/arrays on the Storwize V7000 or just create a single pool on the Storwize V7000 (which equates to striping on striping). More benchmarking is being done to see if one method is truly better than the other, so until then I recommend the method described below. If you have already done stripe on stripe, don't go changing anything until I update this post.
How many ports should I use for Zoning?
The Storwize V7000 has 8 Fibre Channel ports, 4 from each node canister. You need to zone at least two ports from each node canister to your SVC cluster. This is no different to how you would zone a DS5100 or an EMC VNX.
How will the SVC detect the Storwize V7000?
On the SVC you will see two storage controllers, one for each node canister. This is quite normal. The reason for this is that each node canister reports its own WWNN. This is not a problem and will not affect volume failover if one node canister goes offline.
In the example below the SVC has detected two new controllers. The confusing factor is that both report as 2145s, but they are a Storwize V7000. Rename them to reflect what they really are (something like StorwizeV7000_1_Node1 and StorwizeV7000_1_Node2).
How should I define the SVC on the Storwize V7000?
You need to create a new host on the Storwize V7000 and call it something like SVC_1. If the SVC WWPNs don't appear in the WWPN dropdown, you will need to manually add them as shown below:
You can get the SVC WWPNs from your existing zoning, or by doing an svcinfo lsnode against each SVC node, or display them in the SVC GUI as shown below:
What size Storwize V7000 volumes should I create?
My recommendation is to do the following on the Storwize V7000:
Hopefully all of this makes sense. Questions and comments very welcome.
Until the flash is updated showing how to avoid this issue, only update drive firmware when installing a new machine or if all hosts are offline.
IBM recently released new drive firmware for the Storwize V7000, so I thought I would share the process of how I update that firmware. You can download it from here. The details for this new package can be found here. I recommend you perform the drive update before you next update your Storwize V7000 microcode.
I want to be clear that one of the central goals of the Storwize V7000 is to ensure that drive firmware updates can be done online without host disruption. This is possible because each drive can be updated in around 4 seconds. The scripts I share below leave a 10 second delay between drives just to be safe. I would still prefer that you do the update during a quiet period.
We need to perform this procedure using the command line as there is no way to do this procedure from the GUI (yet).
There are four steps:
Step 1: Upload and run the upgrade utility
From the PuTTY folder we need to upload the test utility. You will need to change the key file name, userid and IP address to suit your installation.
NOTE: The following command is being run in a Windows command prompt. You need to be in the C:\Program Files\Putty or C:\Program Files (x86)\Putty folder.
pscp -i anthonyv.ppk IBM2076_INSTALL_upgradetest_6.15 firstname.lastname@example.org:/home/admin/upgrade
Having uploaded the file, now start PuTTY and SSH to your Storwize V7000. Logon and issue the following two commands. You are using SSH commands now, not the Windows Command Prompt:
svcservicetask applysoftware -file IBM2076_INSTALL_upgradetest_6.15
svcupgradetest -f -d
If you get a warning window like the one shown below, indicating we have down-level drives, we need to proceed to the next step (note that the enclosure and slot numbers are not the same as drive IDs). If you have a lot of drives, you can drop the -d from the svcupgradetest command to get a summary list.
******************* Warning found *******************
+----------------------+-----------+------------+------------------------------------------+
| Model                | Latest FW | Current FW | Drive Info                               |
+----------------------+-----------+------------+------------------------------------------+
| HK230041S            | 2920      | 291E       | Drive in slot 24 in enclosure 1          |
|                      |           |            | Drive in slot 23 in enclosure 1          |
| ST9450404SS          | B548      | B546       | Drive in slot 22 in enclosure 1          |
|                      |           |            | Drive in slot 21 in enclosure 1          |
|                      |           |            | Drive in slot 20 in enclosure 1          |
|                      |           |            | Drive in slot 19 in enclosure 1          |
|                      |           |            | Drive in slot 18 in enclosure 1          |
|                      |           |            | Drive in slot 17 in enclosure 1          |
|                      |           |            | Drive in slot 16 in enclosure 1          |
|                      |           |            | Drive in slot 15 in enclosure 1          |
|                      |           |            | Drive in slot 14 in enclosure 1          |
|                      |           |            | Drive in slot 13 in enclosure 1          |
|                      |           |            | Drive in slot 12 in enclosure 1          |
|                      |           |            | Drive in slot 11 in enclosure 1          |
|                      |           |            | Drive in slot 10 in enclosure 1          |
|                      |           |            | Drive in slot 9 in enclosure 1           |
|                      |           |            | Drive in slot 8 in enclosure 1           |
|                      |           |            | Drive in slot 5 in enclosure 1           |
|                      |           |            | Drive in slot 6 in enclosure 1           |
+----------------------+-----------+------------+------------------------------------------+
Step 2: Upload the drive microcode package
Download the drive update package from here. Put it into the PuTTY folder.
pscp -i anthonyv.ppk IBM2076_DRIVE_20110928 email@example.com:/home/admin/upgrade
Step 3: Apply the drive software
I have written some scripts to help you list the drive IDs that need to be updated and perform the updates. You can upgrade the drives one at a time, or in bulk, depending on how you want to do this. All the remaining commands are run in a PuTTY session.
Firstly run this script to list all the drive IDs and current firmware levels. We need the drive IDs if we want to update individual drives.
svcinfo lsdrive -nohdr |while read did error use;do svcinfo lsdrive $did |while read id value;do if [[ $id == "firmware_level" ]];then echo $did" "$value;fi;done;done
The output will look something like this, showing the drive ID and that drive's current firmware level. From step 1 we know what the latest firmware level is, so we can compare to the current firmware level:
0 291E
1 291E
2 B546
3 B546
4 B546
5 B546
6 B546
7 B546
8 B546
9 B546
10 B546
11 B546
12 B546
13 B546
14 B546
15 B546
16 B546
17 B546
18 B546
19 B546
20 B546
21 B546
22 B546
23 B546
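If you want to script the comparison, here is a portable sketch (the drive IDs and firmware levels are illustrative stand-ins for the real script output):

```shell
latest="B548"                      # the latest level reported in Step 1
# Sample "drive-id firmware" pairs stand in for the real lsdrive output
downlevel=$(printf '21 B546\n22 B546\n23 B548\n' | \
  awk -v latest="$latest" '$2 != latest {print $1}' | tr '\n' ' ')
echo "Drives still needing an update: $downlevel"
```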
Now we can update individual drives with this command, which will update drive ID 23. Just keep changing the drive IDs, using the list of down-level drives, until every drive has been updated:
svctask applydrivesoftware -file IBM2076_DRIVE_20110928 -type firmware -drive 23
However you may have a lot of drives and want to upgrade them in bulk. So you could use this command, which updates drive IDs 19 and 20. You could change the list, and also add extra drives, as required:
for did in 19 20;do echo "Updating drive "$did;svctask applydrivesoftware -file IBM2076_DRIVE_20110928 -type firmware -drive $did;sleep 10s;done
If we just wanted to upgrade every single drive in the machine (regardless of their level), we could run this command:
svcinfo lsdrive -nohdr |while read did name IO_group_id;do echo "Updating drive "$did;svctask applydrivesoftware -file IBM2076_DRIVE_20110928 -type firmware -drive $did;sleep 10s;done
When updating multiple drives, I have inserted a 10 second sleep between updates, just to ensure the process runs smoothly. This means each drive takes about 13-15 seconds.
Once we have upgraded every drive, it is time for a final check.
Step 4: Confirm all drives are updated
You have two ways to confirm this. Firstly run the following command to list the firmware level of each drive. Is each drive reflecting the levels reported in Step 1?
svcinfo lsdrive -nohdr |while read did error use;do svcinfo lsdrive $did |while read id value;do if [[ $id == "firmware_level" ]];then echo $did" "$value;fi;done;done
Now run the software upgrade test utility again:
svcupgradetest -f -d
Provided you receive no warnings about drives not being at the recommended levels, you are now finished with the drive updates. Of course you could now proceed to update the Storwize V7000 microcode itself, but you can do that from the GUI.
The IBM Storage Tier Adviser Tool (known as STAT for short) is a clever piece of software that lets you predict how much business value you would get from adding SSDs to your Storwize V7000, SVC, DS8700 or DS8800. This is because you can add SSDs to all of these products and then have hot spots dynamically and automatically migrated to SSD using IBM's Easy Tier technology (which is offered as a no charge feature).
For clients who have not yet purchased SSDs, or who are unsure which Storage Pools to deploy them into, the STAT tool will help with decision-making.
I recently struck a rather simple problem with the STAT tool after installing it: I kept getting a CMUA00007E error. I downloaded the tool from here and installed it successfully onto my Windows 2008 64-bit lab machine (running on an IBM x3850). The install went fine so I then proceeded to download the heatmap from my Storwize V7000. The heatmap file is automatically generated by Easy Tier and is used as an input file for the STAT tool. You can see an example of where to find the heatmap file in the screen capture below:
I then placed the heatmap into the same folder as the STAT tool and tried to generate a report. It failed with this rather annoying message:
C:\Program Files (x86)\IBM\STAT>stat dpa_heat.78G01A6-2.110823.072309.data
CMUA00007E The STAT.exe command failed to produce the heat distribution output.
I initially thought I had a bad heatmap, but since it is a binary file, opening it in a text editor did not tell me anything.
Actually the issue was simple: I did not have write authority to that folder. To get around this I instead started the command prompt as Administrator:
I then re-ran the command:
C:\Program Files (x86)\IBM\STAT>stat dpa_heat.78G01A6-2.110823.072309.data
CMUA00019I The STAT.exe command has completed.
Having run the tool I was now able to open the index.html in the STAT folder and see how much hot data I have in my lab. Turns out that I don't actually have any hot data right now! Don't tell my manager though, he might try to take my SSDs away.
Having run the tool once, I did not need to use this trick again. It now runs without starting the command prompt as Administrator.
Once your SVC or Storwize V7000 is upgraded to version 6.3 you can start using LDAP for authentication. This means that when you logon, you authenticate with your domain user-id and password rather than a locally created user-id and password.
So why is this important?
So as an exercise I added my lab Storwize V7000 to our domain to show how it is done. This example also applies to an SVC so don't be confused if I only refer to Storwize V7000 from now on.
The first task is to negotiate with your Domain administrator to get a new group setup on the domain. In this example I use a group called IBM_Storage_Admins which lets me use this group for various storage devices (such as an XIV or a SAN Switch).
To create this group we need to logon to the Domain Controller and configure Active Directory. An easy way to do this from the AD controller is to go to Start → Run and type dsa.msc and hit OK. The Active Directory Users and Computers Management Console should open.
Select the groups icon to create a new group.
Enter your group name, in my case: IBM_Storage_Admins and hit OK.
Now select the relevant users who need access to the storage and add them to the IBM_Storage_Admins group. In this example I have selected Anthony (which uses anthonyv as a username).
In this example we are adding anthony into the IBM_Storage_Admins group:
Now it is time to configure the Storwize V7000 so start the Web GUI and logon as Superuser.
Firstly we go to Settings → Directory Services:
We choose the button to Configure Remote Authentication:
We choose LDAP and hit next.
We choose Microsoft Active Directory with no Transport Layer Security. We then expand the Advanced Settings. My lab domain is ad.mel.stg.ibm, so I use the Administrator ID on the Domain Controller to authenticate access. You could use any user that has authority to query the LDAP directory. We then hit Next.
We then add the domain controller which in this example is 10.1.60.50 and the base domain name chopped into pieces (so ad.mel.stg.ibm becomes dc=ad,dc=mel,dc=stg,dc=ibm ) and hit Finish.
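The chopping is purely mechanical, so for a longer domain you could derive the base DN with a one-liner like this (using the domain from the example above):

```shell
# Turn a dotted AD domain name into an LDAP base DN
domain="ad.mel.stg.ibm"
base_dn="dc=$(echo "$domain" | sed 's/\./,dc=/g')"
echo "$base_dn"
```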
Provided the command completes successfully we have defined the domain controller to the Storwize V7000. Now we need to add a group. Go to Access → Users.
Select the option to add a New User Group.
In this example we want to add a group for users allowed full admin access to the Storwize V7000. This matches the group we created on the Domain Controller. So we call the group IBM_Storage_Admins and we use the Security Administrator role (which is the most powerful role) and tick the box to enable LDAP for this group.
Now to test, I logon to the Storwize V7000 using the domain user-id anthonyv with that user's domain password. Remember this user is not defined on the Storwize V7000 itself, and if it all goes wrong, we can still logon as Superuser.
Now I create a volume and delete it. Then I check the audit log from Access → Audit log.
Sure enough, we see exactly who did that command.
This is a great outcome for security, auditing and easy access administration.
If you have issues, from the Settings → Directory Services menu, use the Global Actions dropdown on the right hand side to Test LDAP Connections and Authentication or re-configure LDAP.
If you already have existing users (what we call Local users), configuring remote authentication using LDAP does not disable or invalidate those local user-ids. This means you can either logon with a local user-id or logon with a Domain user-id. This is handy if the domain controller fails but can confuse you if your local user name and your domain user name are the same name (for example both anthonyv). The Storwize V7000 will look you up in the local user name list first. I suggest removing all local users (except superuser) as this will reduce confusion but still leave you a backdoor in case remote authentication stops working.
If you see any mistakes or have suggestions to improve the way I described this, please let me know.
One of the biggest challenges when working on such a wide variety of products is staying familiar with the GUIs used by each product.
So it's worth remembering that there are simulators and demo modes for all of IBM's disk products.
The XIV GUI offers a demonstration mode that can be accessed by simply logging on with a user-id of p10demomode.
No password is needed.
While the demo mode does not let you change anything, the current GUI version lets you 'see' three XIVs including mirroring connections.
The current XIV GUI is version 2.4.3 which can be downloaded from here:
Why not download the GUI and check it out?
You can get a Simulator for the DS Storage Manager from this website:
The really nice thing with the current version is that it offers not one but five simulated machines.
This is a great way to practice creating arrays and explore the menu options.
If you change your laptop's IP address and the simulated machines won't 'connect', just right click on the Host object and 'Remove' it.
Then right click on the PC object (the top device) and choose 'Automatic Discovery'.
I probably use this simulator about once per week, so I can vouch for its value.
You can install the DS Web GUI and run it in Simulated mode by downloading the DS8000 Storage Manager. There are many versions to download. If you have an existing DS8000, try and match the version you download to the version of your running machine (the link gives details). Else download the latest version.
With the 6.3 release of the Storwize V7000 and SVC code (which I blogged about here), there are so many new features and functions that I have plenty more to blog about!
The first new feature I blogged about was LDAP support, but an existing feature that has been enhanced is the performance monitor (brought in with release 6.2). When this first came out I put a video on YouTube showing what metrics could be displayed in that release. This is a sped-up video with no voiceover:
Now with release 6.3, IBM has added separate graphs for reads and writes, the ability to display IOPS or MBps, and the ability to display graphs of read and write latency. Nice! I got so excited I made another YouTube video, this one with narration. So now you can compare the new to the old:
IBM has been selling IBM branded Brocade switches since 2001, when we announced the 8-port 2109-S08 and 16-port 2109-S16. These were classic switches that ran at 1 Gbps. They had a front operator panel with a small keypad (a feature which, in the rush to fit in more SFPs, did not appear in later models). Since then IBM has gone on to sell many of Brocade's switches and directors.
Sometimes you need to convert a Brocade model name to an IBM model name (or the other way around). One way to assure yourself with scientific accuracy which type of switch you are working on is to telnet or SSH to a switch and issue a switchshow command. You will get a switchType value. In this example, my switch is switchType 27.2.
IBM_2005_H08a:admin> switchshow
switchName:     Switch01
switchType:     27.2
Or if you are using the Web GUI, you can also see the switch type on the opening screen. In this example the switch is a type 34.0.
Having scientifically determined the type of switch, we can now use my decoder ring to determine the IBM machine type, IBM model name and the Brocade model name. I have ordered the switches by Type number. There are three things to note:
TYPE    SPEED     IBM TYPE                         IBM MODEL NAME         BROCADE MODEL NAME
2.x     1 Gbps    3534-1RU                                                Brocade 2010
3.x     1 Gbps    2109-S08                                                Brocade 2400
6.x     1 Gbps    2109-S16                                                Brocade 2800
9.x     2 Gbps    2109-F16                                                Brocade 3800 (Cylon)
10.x    2 Gbps    2109-M12                                                Brocade 12000 (Ulysses)
12.x    2 Gbps    2109-F32                                                Brocade 3900 (Terminator)
16.x    2 Gbps    3534-F08                                                Brocade 3200 (Mojo)
21.x    2 Gbps    2109-M14                                                Brocade 24000 (Meteor)
22.x    2 Gbps    IBM BladeCenter Module                                  Brocade 3016 (Blazer)
26.x    2 Gbps    2005-H16                                                Brocade 3850 (Dazzler)
27.x    2 Gbps    2005-H08                                                Brocade 3250 (DazzlerJR)
32.x    4 Gbps    2005-B32                         SAN32B-2               Brocade 4100 (Pulsar)
34.x    4 Gbps    2005-B16                         SAN16B-2               Brocade 200E (Stealth)
37.x    4 Gbps    IBM BladeCenter module                                  Brocade 4020 (Blazer2)
38.x    2 Gbps    2109-A16                         SAN16B-R               Brocade AP7420 (Mars)
42.x    4 Gbps    2109-M48                         SAN256B                Brocade 48000 (Saturn)
43.x    4 Gbps    HP BladeCenter Module                                   Brocade 4024
44.x    4 Gbps    2005-B64                         SAN64B-2               Brocade 4900 (Viking)
46.x    4 Gbps    2005-R18                         SAN18B-R               Brocade 7500 (Sprint)
46.x    4 Gbps    2005-R04                         SAN04B-R               Brocade 7500E (Sprint)
58.x    4 Gbps    2005-B5K                         SAN32B-3               Brocade 5000 (Pulsar2)
62.x    8 Gbps    2499-384                         SAN768B                Brocade DCX
64.x    8 Gbps    2498-B80                         SAN80B-4               Brocade 5300
66.x    8 Gbps    2498-40E                         SAN40B-4 Express       Brocade 5100
66.x    8 Gbps    2498-B40                         SAN40B-4               Brocade 5100
67.x    8 Gbps    2498-E32                         Encryption Switch      Brocade Encryption Switch
71.x    8 Gbps    2498-24E                         SAN24B-4 Express       Brocade 300
71.x    8 Gbps    2498-B24                         SAN24B-4               Brocade 300
73.x    8 Gbps    10 port IBM BladeCenter module                          Brocade 5470 (Blazer3)
73.5    8 Gbps    20 port IBM BladeCenter module                          Brocade 5470 (Blazer3)
76.x    CEE       3758-B32                         IBM Converged Switch   Brocade 8000
77.x    8 Gbps    2499-192                         SAN384B                Brocade DCX-4S
83.x    8 Gbps    2498-R06                         SAN06B-R               Brocade 7800
109.x   16 Gbps   2499-F48                         SAN48B-5               Brocade 6510
120.x   16 Gbps   2499-816                         SAN384-4               Brocade DCX8510-8
121.x   16 Gbps   2499-416                         SAN384-2               Brocade DCX8510-4
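If you ever want to script the lookup, the decoder table above reduces to a simple case statement on the integer part of switchType. A sketch covering the two types from my screen captures (extend it from the table as required):

```shell
switch_type="27.2"                 # as reported by switchshow
major="${switch_type%%.*}"         # keep only the integer part
case "$major" in
  27) decoded="2005-H08 / Brocade 3250" ;;
  34) decoded="2005-B16 SAN16B-2 / Brocade 200E" ;;
   *) decoded="unknown - check the table" ;;
esac
echo "Type $switch_type is a $decoded"
```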
If you use Data Center Fabric Manager (DCFM), it actually displays the Switch Type using Brocade model names. Here is an example report from the DCFM we are running in my lab. This level of information is very helpful.
If you have combined vSphere 5.0 with XIV, then you may want to try out the new IBM Storage Provider for VMware VASA (vSphere Storage APIs for Storage Awareness). You can download the installation instructions, the release notes and the current version of the IBM VASA provider from here. Clearly because VASA is introduced in vSphere 5.0 your VMware vCenter also needs to be on version 5.0.
Now IBM have had a vCenter plugin for a very long time (which I have written about here, here and here) and while you still need that plugin if you want to do storage volume creation and mapping from within vCenter (as opposed to using the XIV GUI), the VASA provider makes storage awareness more native to vCenter. This is a very important step. It means instead of using vendor added icons and tabs (like the IBM Storage icon and the IBM Storage tab that are added by the IBM Storage Management Console for vCenter), you just use the default vCenter tabs.
Right now version 1.1.1 of the IBM VASA provider delivers information about storage topology, capabilities, and state, as well as events and alerts to VMware. This means you will see new additional information in three tabs: Storage Views, Alarms and Events.
After installing and setting up the VASA provider, select your VMware cluster in vCenter, go to the Storage Views tab and select the view Show all SCSI Volumes (LUNs). There are four columns with extra information: the Committed, Thin Provisioned, Storage Array and Identifier on Array columns (indicated with red arrows) come straight from the XIV (hit the Update button at upper right if you are not seeing anything yet). This is really useful information as it lets you correlate the SCSI ID of a LUN to an actual volume on a source array. Here is a cut-down view of that extra information:
If you want a larger screen capture you can find one here.
The Tasks & Events and Alarms tabs will also now contain events reported by the VASA provider, such as thin provisioning threshold alerts (although if you have just installed the provider you may see nothing new, as nothing has yet occurred to provoke an alert or event).
As usual I have some handy tips on the steps you will need to take to get VASA going:
Your setup tasks are now all completed. Now go and explore the panels I detailed above to see what new information you have available to your vCenter server.
Why a separate server for the VASA provider?
The IBM VASA provider uses Apache Tomcat, which by default listens on port 8443. However since vCenter already has a service listening on port 8443, we have a clash. I googled and found the Dell and NetApp VASA providers also listen on port 8443, and they also recommend separate servers. I noted Fujitsu's provider uses a different port but still requires a separate server. So it seems if you have multiple vendors you will either have to spin up a separate server for each vendor's provider, or start playing with changing the port number. The installation instructions for the IBM VASA provider explain how to change the default port number if you are truly keen.
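If you do decide to change the port rather than build a separate server, the edit boils down to changing the Connector port in Tomcat's server.xml. A sketch on a throwaway copy (the real file location depends on your install, so follow the official installation instructions for the actual path; the 8444 target port is just an example):

```shell
# Work on a throwaway copy to illustrate the edit
conf="/tmp/server.xml"
printf '<Connector port="8443" scheme="https" secure="true"/>\n' > "$conf"
sed -i 's/port="8443"/port="8444"/' "$conf"
cat "$conf"
```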
I thought I would write a quick post about an issue that's not new, but is certainly worth being aware of....
One of the interesting tricks with the change to 8 Gbps Fibre Channel is that it required a change to the way the switch handles its idle time... the quiet time when no one is speaking and nothing is said. In these periods of quiet contemplation, a fibre channel switch will send idles. When the speed of the link increased from 4 Gbps to 8 Gbps, the bit pattern used in these idles proved to not always be suitable, so a different fill pattern was adopted, known as an ARB. All of this came to intrude on our lives when it became apparent that some 8 Gbps storage devices were having trouble connecting to IBM branded 8 Gbps capable Brocade switches because of this change. This led to two things:
An example of what was said?
"Starting with FOS levels v6.2.0, v6.2.0a & v6.2.0b, Brocade introduced arbff-arbff as the new default fillword setting. This caused problems with any connected 8Gb SVC ports and these levels are unsupported for use with SVC or Storwize V7000.
In 6.2.0c Brocade reintroduced idle-idle as the default fillword, and they also added the ability to change the fillword setting from the default of idle-idle to arbff-arbff using the portcfgfillword command. For levels between 6.2.0c and 6.3.1 the setting for SVC and Storwize V7000 should remain at default mode 0.
From FOS v6.3.1a onwards Brocade added two new fillword modes with mode 3 being the new preferred mode which works with all 8Gb devices. This is the recommended setting for SVC and Storwize V7000"
So there are several tips that I will point you to, depending on your product of interest:
Brocade Release Notes
For most environments, Brocade recommends using Mode 3, as it provides more flexibility and compatibility with a wide range of devices. In the event that the default setting or Mode 3 does not work with a particular device, contact your switch vendor for further assistance. IBM publishes all the release notes for Brocade Fabric OS here.
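Whichever product applies, the change itself is made on the switch with one FOS CLI command per port, using the portcfgfillword command mentioned in the quote above. A sketch (port 4 is illustrative; confirm the exact syntax against your FOS release notes first):

```shell
# Set fill word mode 3 on port 4, then verify the port settings
portcfgfillword 4 3
portcfgshow
```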
Check out this link if you're connecting an 8 Gbps capable DS3500, DS3950 or DS5000 to an 8 Gbps capable switch: http://www-947.ibm.com/support/entry/portal/docdisplay?brand=5000028&lndocid=MIGR-5083089
There is no tip for the DS8800 but the advice remains effectively the same as for the Storwize V7000. I can confirm that using a fill word setting of 3 works without issue.
SAN Volume Controller or Storwize V7000
Check out this link if you're connecting a Storwize V7000, or a CF8 or CG8 SVC node, to an 8 Gbps capable switch: https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003699&wv=1
The XIV Gen3 comes with 8 Gbps capable Fibre Channel connections. It does not support idle fill words, meaning the portCfgFillWord value should not be set to 0.
When an IBM System z server attaches an 8 Gbps capable FICON Express-8 CHPID to a Brocade switch with 8 Gbps capable SFPs, you should upgrade your switches or directors to Fabric OS (FOS) 6.4.0c or 6.4.2a and set the fill word to 3 (ARBff).
LTO5 and TS1140
IBM have two tape drives that are capable of 8 Gbps, the LTO-5 drive and the TS1140. Setting the fill word to 3 can actually cause issues with these drives. To avoid issues do one of the following (you only have to do one of these, not all three):
**** UPDATED 28 Feb 2012 - Added System z FICON and Tape info ****
IBM recently announced the new System Storage DS3500 Express. The DS3500 is an entry level storage system that can be easily serviced and managed by an end-user. It is a very worthy successor to the DS3200/DS3300/DS3400 product line. So I thought I would share with you 10 things I really like about the new IBM DS3500 (in no particular order).
1) It's small
The base unit is only 2U in size and can hold either 12 of the 3.5" disks or 24 of the smaller 2.5" disks (depending on model). Each expansion drawer can also hold 12 of the 3.5" or 24 of the 2.5" disks (depending on model), and you can have 3 of them. So that's a potential 96 disks in 8U of rack space.
2) It's all SAS
In my opinion, Serial Attached SCSI (SAS) is the future of disk attachment. Traditional parallel SCSI is so 20th century and FATA didn't work out too well. I think SATA and FCAL attached disk will eventually be replaced by SAS and the DS3500 is all SAS at the disk back end and SAS by default at the host front end as well.
3) It's got flashcopy
The DS3500 can create two flashcopies without any extra licenses. I really like the fact that if you're doing an OS or application upgrade, you can give yourself a quick roll-back point by just reserving some space for a flashcopy repository. This is also a great way to test whether flashcopy is right for your business and if so, buy the license to create more than 2 copies at a time.
4) It's got remote mirror
The DS3000 range up until now did not offer remote mirror capability. This meant that if you wanted a DR solution you needed to buy something to go over the top such as IBM SVC or Softek Replicator. The DS3500 now offers its own native replication that not only fills a spot but is compatible with existing DS4000s and DS5000s that you may already have in your business.
5) It's got nearline
So FATA disk may not have worked out, but nearline SAS is a far better alternative. The 2.5" model offers a 500 GB 7.2 K RPM nearline SAS drive. Or how about a 2 TB drive in the 3.5" form factor? Want some archive disk using nearline where the spindle count will still deliver good performance? Here's the solution.
6) It's green
If we accept that MAID was not the solution for the masses, the better thing is to simply do more with less, which is exactly what the DS3500 does. We are talking around 500W of power usage for a 48 disk two drawer solution (with 2.5" disks). That's around half the power consumption of the equivalent model with 3.5" disks. This means less power drawn in and less hot air blown out.
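As a rough back-of-envelope check (the 500W figure is from above; the per-drive number is my own derivation, not an IBM specification):

```shell
# Approximate watts per drive for a 48 x 2.5" disk configuration drawing ~500W
awk 'BEGIN { printf "%.1f W per drive\n", 500 / 48 }'
```

Around 10 watts per spinning drive, including the controllers, is not bad at all.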
7) One model to rule them all
The DS3500 comes in one model: SAS. You want fibre channel? No problem, just add the card. You instead want iSCSI? Same deal, just add the card. All models retain the SAS adapters which are proving so popular in the rack and blade server space.
8) It's got encryption
You need a point solution to provide data-at-rest encryption? Here it is, with 300 GB and 600 GB Self Encrypting drives that protect your data with no performance impact. Even better is that the software to manage encryption is rolled into the DS Storage Manager. Talking of which...
9) Easy Management
The DS3500 continues to use an intuitive and easy to use GUI which now includes all the dynamic volume management. This is an improvement over previous models where this had to be done via command line.
10) It's cheap
Being entry level it is priced for that market. You could also place it behind the SVC for a quick encryption solution or as a VDisk mirror repository.
Want to know more? Go talk to your IBM Business Partner and check out the product page here:
It is ironic that only days after I wrote that 497 is the IT number of the beast, I learn that Linux has another unfortunate number: 208.
The reason for this is a defect in the internal Linux kernel used in recent firmware levels of SVC, Storwize V7000 and Storwize V7000 Unified nodes. This defect will cause each node to reboot after 208 days of uptime. This issue exists in unfixed versions of the 6.2 and 6.3 level of firmware, so a large number of users are going to need to take some action on this (except those who are still on a 4.x, 5.x, 6.0 or 6.1 release). If you have done a code update after June 2011, then you are probably affected. This means that if you are an IBM client you need to read this alert now and determine how far you are into that 208 day period. If you are an IBMer or an IBM Business Partner, you need to make sure your clients are aware of this issue, though hopefully they have signed up for IBM My Notifications and have already been notified by e-mail.
In short what needs to happen is that you must:
To give you an example of the process, my lab machine is on an affected software version, which you can see in the screen capture below. So when I check the table in the alert, I see that my current version was made available on January 24, 2012, which means the 208 day period cannot possibly end before August 19, 2012.
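If you want to sanity check that date arithmetic yourself, GNU date (as found on most Linux systems) can do it. This is just a check on your workstation, not something you run against the cluster:

```shell
# 208 days after the January 24, 2012 release date
date -d '2012-01-24 + 208 days' +%Y-%m-%d
```

Substitute the release date of whatever version your cluster is actually running.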
Regardless, I need to know the uptime of my nodes, so I download the Software Upgrade Test Utility (in case you have an older copy, we need at least version 7.9) and run it using the Upgrade Wizard (NOTE! We are NOT updating anything here, just checking):
I launch the Upgrade Wizard, use it to upload the tool and follow the prompts to run it, so that I get to see the output of that tool. The output in this example shows the uptime of each node is 56 days, so I have a maximum of 152 days remaining before I have to take any action. At this point I select Cancel. You can run this tool as often as you like to keep checking uptime.
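The remaining-days sum is trivial, but worth writing down if you are tracking several clusters (the 56-day figure is from my example above; plug in each node's reported uptime):

```shell
# Days left before the 208-day reboot, given the uptime reported by the test utility
uptime_days=56
echo "days remaining: $(( 208 - uptime_days ))"
```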
Note if you are on 6.1 or 6.2 code you may see a timeout error when running the tool, especially for the first time. If you do see an error, please follow the instructions in the section titled "When running the upgrade test utility v7.5 or later on Storwize V7000 v6.1 or v6.2" at the Test Utility download site.
As per the Alert:
*** Updated April 4, 2012 with links to fix levels ***
If you have any questions or need help, please reach out to your IBM support team or leave me a comment or a tweet.
*** April 10: The IBM Web Alert has been updated with new information on what to do if your uptime has actually gone past 208 days without a reboot. In short you still need to take action. Please read the updated alert and follow the instructions given there. ***
As Barry Whyte pointed out in this blog post, the release 6.2 code is available for download and installation onto your SVC and Storwize V7000.
I thought I would quickly check out two of the announced features of the 6.2 release: the new Performance Monitor panel and support for greater than 2 TiB MDisks. So on Sunday I got busy and upgraded my lab Storwize V7000 to the new 6.2 code.
Remember that in nearly every aspect the firmware for the SVC and Storwize V7000 are functionally identical, so while I am showing you a Storwize V7000, it equally applies to an SVC.
Firstly I tried the performance monitor panel, and what better way to show you what I saw than on YouTube? This is my first YouTube video so please forgive me if it's not slick. I started the performance monitor and captured two minutes of performance data using Camtasia Recorder. Because it is fairly boring to stare at graphs slowly moving right to left, I then sped it up eight times, and this is the result:
The video is shot in HD, so if what you're seeing is grainy or hard to read, change the display to 720p or 1080p. Now if you want to see the performance monitor at its actual speed, here is the original normal speed video. Remember this is the same video as above, just slower. It can also be viewed in 720p.
So what are you seeing?
You will note that each metric has a large number (which is the current metric in real time) and a historical graph showing the previous five minutes. You can also change the display to show either node in the I/O group.
I found the monitor to be genuinely real time: the moment I changed something in the SAN (such as starting or stopping IOMeter or starting or stopping a Volume Mirror), I immediately saw a change.
Greater than 2 TB MDisk support
Next I logged onto my lab DS4800 and created two 3.3 TiB volumes to present to the Storwize V7000. I chose this size because I had exactly 6.6 TiB worth of available free space on the DS4800 and I wanted to demonstrate multiple large MDisks. On versions 6.1 and below, the reported size of the MDisks would have been 2 TiB (as I discussed here). Now that I am on release 6.2 with a supported backend controller, I can present larger MDisks. In the example below you can clearly see that the detected (and usable) size is 3.3 TiB per MDisk.
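If you prefer the CLI to the GUI, you can check the detected capacity of a particular MDisk with something like the command below (the MDisk name is hypothetical; run it in an SSH session to the cluster):

```
svcinfo lsmdisk mdisk10 | grep capacity
```

On 6.2 with a supported backend controller, the capacity line should show the full detected size rather than being clipped at 2 TiB.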
What controllers are supported for huge MDisks?
The supported controller list for large MDisks has been updated. The links for Storwize V7000 6.2 are here and for SVC here. If your backend controller is not on the list, then talk to your IBM Sales Representative about submitting a support request (known as an RPQ).
I always laugh when people say to me: I wouldn't know what to blog about!
When you work in pre-sales support, you constantly get asked questions and each one of them could be the subject of a new blog post. Right now the most common question I am getting is:
I am implementing VMware Site Recovery Manager (SRM). One of the components I need are vendor specific Site Recovery Agents (SRA). I have searched IBM's website but cannot find them. Where are they?
So the short answer is: you get them from the VMware SRM download site.
So where are the SRAs? On each of the pages below use the Show Details button to see what version SRAs are being shipped with that SRM (although sometimes the pages take a few days between an SRA being added and the page being updated):
There are a few more questions I routinely get asked:
Does IBM actually have an SRA download site?
The answer is yes, but it is an FTP site only for SRAs written by IBM. It is principally a repository for older SRAs and beta SRAs but you can also find the current SRAs on it. You can find the site here. Note however that it is NOT the official source. For that you need to use the VMware site.
What about the SRA for LSI/Engenio based products like the DS4800?
These used to also be found on the LSI site, but since LSI sold Engenio to NetApp, it is no longer available from the LSI or NetApp websites. You need to download the current version from the VMware sites listed above. There is a version for SRM 5 on the VMware download site.
What about nSeries SRAs?
If you need an nSeries SRA, again you should go to the VMware download pages. There are separate SRAs listed and available for IBM nSeries (as opposed to an SRA for NetApp branded filers).
What about an SRA for XIV with SRM version 5?
The answer: The SRA for XIV with SRM 5 (and 5.0.1) is now available from VMware. If you have access to download SRM, you will be able to download SRA version 2.1.0. It is the same SRA for both XIV Generation2 and Gen3.
What about an SRA for Storwize V7000 and SVC version 6.3 code?
The answer: It is coming. We are working to make it available as soon as possible. I will update this post as soon as I have a date for you (we are talking weeks, not months).
*** Update March 23, 2012 - Added details on SRM 5.0.1 ***