If you want to read something visionary, I just finished The Medium is the Massage by Marshall McLuhan. With dramatic changes occurring in the world, often driven by social media, it is amazing to read a book that understood this so well yet was written in 1967. In fact McLuhan passed away in 1980, well before most people had ever seen an IP address and when the Internet's role in societal change was still many, many years away. A fascinating read (with some amazing artwork!).
I have also been reading the excellent Steve Jobs bio by Walter Isaacson, what a fantastic book! I did not know about the friendship between Steve Jobs and Larry Ellison, nor about Al Gore's role in Apple. The details of the early collaboration between Steve Jobs and Bill Gates are really interesting, as is the amazing back story of how Pixar came to be. The book is as much a history of the personal computer as it is a bio of Jobs himself. Reading the book was also quite thought-provoking, especially on themes like motivation, ignoring the impossible, creating A-teams, and the impact of office layout on creativity and teamwork.
At one point the book refers to the Macintosh software dating game, which led me to watch this video. It is also a revelation. Enjoy!
The XIV GUI (which you can download here) is available for a very wide variety of platforms:
AIX 5.3, 6.1, 7.1
HP-UX 11i v2, 11i v3
Linux (RHEL 5, SLES 10 and 11)
Mac OS X 10.6
Solaris 9 and 10
Windows (various versions including Windows ME!)
My gut feel is that the vast majority of installations and downloads are of the Windows GUI version. I suspect the rest rarely get a look in. I mean, do people actually use the XIV GUI on AIX, Solaris or HP-UX hosts? I really doubt it.
However you can also get just the XIV command line interface (also called the XCLI) for the following operating systems:
Now I can see why these would be popular. Being able to script XIV CLI commands and execute them locally makes perfect sense, so all the major operating systems are represented. But it is curious that a separate Windows CLI installer is not listed. Of course you get the XIV CLI when you install the Windows GUI, but I am equally curious whether there are users who want to avoid having the GUI present on servers that only need the CLI.
An example of this would be if you use Commvault SnapProtect with XIV, since each client will need to issue XCLI commands to drive the hardware-based snapshots. So the good news is that you can actually force the XIV GUI installer to install only the CLI component. You can do this by using the following command (in a command prompt with the XIV GUI installer file in that directory):
You will need to change the file name at the start of the command to suit the version you have downloaded. I tested it on version 3.01 and it worked just fine: all it installed was the XIV CLI. So keep this one in your back pocket and use it when required.
And if you ARE using the XIV GUI on AIX, HP-UX or Solaris, I would love to hear which platform you are using and why. And if you are still using Windows ME.... your persistence is admirable.
Now this is interesting: IBM is offering a ratings system that allows customers who bought IBM products to write reviews and leave ratings (out of five) on IBM Storage, Power and System Z products, straight from the main ibm.com website.
Many of the preventable issues that occur in a SAN fabric can be avoided by using the right management and monitoring software. One way to get this software is to create or adapt open source packages. While I really like the idea (and price) of roll-your-own solutions, it is not always practical. Apart from the fact that you need to have staff with the relevant skills to do this, long-term maintenance can prove difficult when key people move on. Unfortunately the other extreme (which is far more common) is that many shops actually do nothing at all, ending up without any overall SAN management and monitoring methodology.
An ideal off-the-shelf alternative in a Brocade SAN fabric is to use IBM Network Advisor, the successor product to Data Center Fabric Manager (DCFM). IBM Network Advisor actually has its heritage in a great product called EFCM (Enterprise Fabric Connectivity Manager), which Brocade picked up when they bought McData. I loved working with EFCM and McData switches, especially the McData 6140, which was truly a great SAN director. When Brocade purchased McData they combined EFCM with their own Fabric Manager to create DCFM. They have since combined it with their network switch management software to create Network Advisor, bringing things to a whole new level. The IBM announcement letter for this software is here.
Now the first thing you may be wondering is: OK, this software sounds great, but how much will it cost? The good news is that trying it out won't cost you anything. It's free to download and trial for 75 days. You can find the download site here.
To demo it, you can spin up a Windows 2008 guest from a template in your favorite Hypervisor. This means you don't even need to request separate hardware to do this trial.
So what benefits should you expect to see? Well first up I am talking about preventing issues like these:
Mistakes made when performing zoning updates
Failure to create regular configuration backups (which especially hurts after a switch failure)
Difficulties upgrading firmware or simply too many upgrades to get through
Poor (or no) switch and performance monitoring
Poor (or no) error notification (including notification back to IBM)
Difficulty collecting log data
Lack of report creation software
In some ways you can sum up the benefits of the software quite easily by looking at the three central menus of IBM Network Advisor: Configure, Monitor, Report.
To give you a view of some of the menu choices, you can see just how rich the options are:
From a configuration perspective you can manage the zonesets of all your fabrics from one place. This means you don't need to jump between switches. More importantly, it gives you a clear indication of what a zoning update is adding AND removing. Accidental removal of a required zone is a very common cause of zoning-related SAN issues:
Did you mean to remove that zone?
It can automatically back up your switch configurations. Backing up your configs is frankly a mandatory task, yet it is routinely left undone. If a switch fails, any customization and zoning (if it is a single-switch fabric) is lost. This can be a major issue, especially if a business partner or former employee set the switch up. If you schedule a regular backup you won't need to remember, because IBM Network Advisor will do it for you:
Firmware updates also become a far simpler affair. IBM Network Advisor has a built-in FTP server and happily acts as a firmware repository. If you're facing a set of kangaroo hops (stepping through intermediate firmware levels to reach your target), this is a great way to make the whole process very simple. It will perform compatibility checks before you start and also acts as a repository for both firmware and release notes (which is a really nice touch).
From a monitoring perspective, the ability to set up call home to IBM is a huge advantage and a vital step in building a SAN with the highest levels of availability. An added bonus is that you can use IBM Network Advisor to generate a supportsave (a log offload file that you will invariably be asked for during troubleshooting) from every switch in your fleet in one go, and you can also schedule this to run on a regular basis, significantly boosting productivity and aiding troubleshooting. You can also set up Fabric Watch across the entire fleet of switches, all from a single interface.
If you own DCFM already, then you are eligible for a free upgrade. If after trialing the software you feel that the significant availability benefits this software will give you are worth achieving, talk to your IBM Sales Rep or Business Partner to get a price. I personally think you will find it very reasonable, plus I guarantee that it will not be shelfware and will prove to be a vital tool in getting the most from your SAN.
But... if after trialling IBM Network Advisor you're still determined to avoid paying for software, then you could always consider the open-source alternative (rather than do nothing). Check out this document written by Andy Loftus and Chad Kerner from the National Center for Supercomputing Applications at the University of Illinois. It's a great example of a lessons-learned document that describes how they built their own monitoring solution. You will find all of their documents and scripts here. As I said, roll-your-own might avoid vendor costs, but it has costs all of its own. Does your team have the skills, willpower and time to do this and maintain it? I would love to hear about your experiences either way.
If you're in Melbourne (that's Melbourne Australia, not Melbourne Florida), why not come along to the next Australian IBM Tivoli User Group meeting? The subject? Tivoli Storage!
The time and date: 10am to 1pm on Friday Nov 25th, 2011 (lunch included!). The location: IBM building, Seminar Room, 60 City Rd, Southgate.
Now we need you to register to ensure that enough food is ordered for lunch, so please hit the following website, sign up if necessary and then register your intention to attend (Customers and Business Partners are very welcome):
10:00am - The meeting will open with a welcome and introductions from group leaders Nik Hatzikos (from IAG) and Richard Whybrow (from Hertz).
Session 1: Tivoli Storage "Latest Release" session. There is LOTS to talk about. This session will cover TSM 6.3, TSM for VE 6.3, Tivoli Storage FlashCopy Manager V3.1 and TPC 4.2.2 - Presented by Jacques Butcher, IBM Tivoli Storage Specialist. Jacques is a fountain of knowledge with lots of real world experience.
Session 2: MemberTalk - TSM V6 Upgrade. We will hear some interesting feedback from one of our members on a recent TSM V6 upgrade project. Presented by Richard Whybrow from Hertz; Richard is a great presenter who loves to create multimedia.
Session 3: Round table discussion - anyone attending is welcome to bring up a topic for discussion (preferably about Tivoli Storage!).
Lunch will be supplied courtesy of IBM.
This is a great opportunity to meet and network with like-minded peers at the end-of-year Tivoli User Group meeting. And hopefully Richard will show us a movie or two!
I am unsure about unnatural love, but perhaps the level of enthusiasm he is seeing comes from: ease of use, awesome GUI, consistent performance, freedom from planning RAID groups, simple growth and upgrade path... I could keep going... it all adds up.
So if you are a member of the cult of XIV, I have a little present for you: A really nice and simple reporting tool.
Here is what you need to do:
1) Download XIV Capacity Report 3.7 from this link. Click where it says Downloading this file.
2) You will get a zip file with five files in it. Unzip them into a folder on a Windows workstation. The Windows workstation also needs the XIV GUI installed on it (actually you only need the XCLI, but the Windows version of the GUI will give you that).
3) Of the five files you just unzipped, you need to edit the file called xiv_capacity_report_get_files.vbs. Open that file with a text editor (such as Notepad). The easiest way to do this is to right-click the file and choose Edit.
4) You need to edit the section that looks like this:
' *********** Edit this list of IP/names and user/password for your own configs ************************
myConfigs.Add "1", "-m 10.0.0.1 -u admin -p adminadmin"
myConfigs.Add "2", "-m 10.0.0.2 -u admin -p adminadmin"
Let's say you have two XIVs, the details for which are:
XIV1: Management IP: 10.1.10.100 Userid: admin Password: passw0rd
XIV2: Management IP: 10.1.20.100 Userid: admin Password: passw0rd
So we edit the section I mentioned above and make it look like this:
' *********** Edit this list of IP/names and user/password for your own configs ************************
myConfigs.Add "1", "-m 10.1.10.100 -u admin -p passw0rd"
myConfigs.Add "2", "-m 10.1.20.100 -u admin -p passw0rd"
Now save the file and we are done editing. If you only have one XIV, then delete the line starting with myConfigs.Add "2" (or put an apostrophe at the start of the line to comment it out). If you have more than two XIVs, just add extra lines for myConfigs.Add "3", myConfigs.Add "4" and so on, adding details for each machine as shown above. You can ignore the lines further down in the file that start with an apostrophe; these are just examples.
Unless you acquire another XIV, you will not have to do this file editing again.
5) Now double-click on the icon xiv_create_capacity_report.bat. This is a Windows bat file that will open a Windows command prompt while it is running. It uses XCLI commands, so if the XIV GUI or XCLI is not installed, it won't work. The output will be a new folder with today's date and time. Inside that folder will be a report named something like: xiv_capacity_report_2011_10_30_17_6_36.xls
You can now open the report and check it out (presuming you have Microsoft Excel or some other software that can open XLS files). On my laptop I get a message about file formats when I open the file.
You can ignore this message. If you save the file as an XLS you won't get this message again.
The report itself will have five tabs as shown below:
For every column in every tab, filtering (or sorting) is already set up. This makes it really easy to re-arrange the data to suit what you're looking for.
Arrays tab: Lists details about all your XIVs including serial numbers, code versions, soft and hard capacity, how much of the soft and hard space is allocated, how much is free and how much space is being consumed. A great place to grab the machine serial number or confirm which machine has space available.
Pools tab: Lists every pool in every XIV, showing every sizing metric you could possibly want. Cells will be coloured red or yellow if limits are being reached. It is a great place to confirm whether your pools are filling up and whether a pool is a good candidate to be changed to Thin Provisioning. Sort column L (Allocated vs Used) or column N (Hard Capacity Utilization) to identify good candidates for swapping to Thin Provisioning. These are the pools that can give up some hard space.
Hosts tab: Lists every defined host on every XIV. You can spot straight away how much space has been allocated to each host and, more importantly, how much is being used. Cells will be coloured yellow or red if limits are being reached. Some nice tricks:
Sort by column F (Allocated vs Used) to identify hosts that have asked for lots of space, but not used much of it.
Compare column G (# of volumes) with column I (# volumes mirrored). You may have critical hosts that require every volume to be mirrored, so a quick compare will confirm if there are exceptions.
Volumes tab: Lists every volume defined on every XIV. This is a great tab to check which volumes are being mirrored, how many snapshots exist for each volume and how much space is being used by each volume. Again, cells in the Used column will be coloured red or yellow if space is becoming short. Some great tricks here:
Sort column F or G (Used GB and %) to identify volumes with no or little data in them. Perhaps they are not really needed? Perhaps they are over-sized or should be in a Thin Provisioning pool.
Sort column H (Mirrored) to identify all volumes where Mirrored = No. Should they be mirrored?
Sort column K (Host Mapped) to identify all volumes not mapped to a host. Unmapped volumes are a great potential source of space!
Failures tab: Shows any failed components in your machines (such as failed disks).
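If you ever want to run the Volumes tab tricks above without opening Excel, the same sorts and filters are easy to script. Here is a minimal Python sketch over a few hand-made volume records; the field names are my own illustration, not the report's actual column headers:

```python
# Scripted versions of the Volumes tab tricks above.
# Field names here are illustrative, not the report's real headers.

def unmapped_volumes(volumes):
    """Volumes not mapped to any host: a great potential source of space."""
    return [v["name"] for v in volumes if v["host_mapped"] == "No"]

def nearly_empty_volumes(volumes, threshold_gb=1):
    """Volumes with little or no data in them: over-sized or unneeded?"""
    return [v["name"] for v in volumes if v["used_gb"] < threshold_gb]

def unmirrored_volumes(volumes):
    """Volumes where Mirrored = No: should they be mirrored?"""
    return [v["name"] for v in volumes if v["mirrored"] == "No"]

volumes = [
    {"name": "db_log",   "used_gb": 120, "mirrored": "Yes", "host_mapped": "Yes"},
    {"name": "old_test", "used_gb": 0,   "mirrored": "No",  "host_mapped": "No"},
    {"name": "scratch",  "used_gb": 5,   "mirrored": "No",  "host_mapped": "Yes"},
]

print(unmapped_volumes(volumes))      # ['old_test']
print(nearly_empty_volumes(volumes))  # ['old_test']
print(unmirrored_volumes(volumes))    # ['old_test', 'scratch']
```

In practice you would load the rows from the XLS report itself, but the checks stay the same.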
So please download the tool and try it out. Service providers love using this tool for reporting; it is so quick and easy to set up and run. Every time you run the tool you get a new report, so you can automate report creation and keep a nice history.
If you were signed into IBM developerWorks when you downloaded the tool, you should be notified by email when an update is made available, provided your IBM ID is set up properly with a valid email address.
And as for cults... there is only one cult I ever really liked and they really were called The Cult. The video takes about 15 seconds to get going and yes, the lead singer is dressed like a pirate. Enjoy! (if you like 80s rock...)
The IBM ProtecTIER performs in-line de-duplication of your backup data, enabling much faster backups and much faster restore times. De-duplicating your backups allows you to store a lot "more on the floor".
One of the advertised capabilities of ProtecTIER is that you can get a de-duplication ratio of up to 25 to 1. This sounds great, but advertising this sort of ratio is a blessing and a curse. On the one hand it shows the potential capability of the device, but it can also create very high expectations. In reality the ratio you will achieve is totally dependent on the type of data you back up (video versus database versus big empty files, etc.) and the way you back it up (full backups versus incremental backups). In my experience, somewhere between 8:1 and 16:1 is a realistic expectation. The reason for this is that your backup data needs to actually contain duplicate data, that is... data the ProtecTIER has already stored in its repository, for de-duplication to work. If every piece of data you back up is unique, encrypted or somehow obfuscated to appear different from the last backup, then no duplicate data will be detected. The result? Your de-dup ratio will be very low.
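To put those ratios in concrete terms: the physical space consumed in the repository is simply the logical backup data divided by the de-dup ratio. A quick Python sketch:

```python
# How much physical repository space a given de-dup ratio implies.
def stored_tb(logical_tb, dedup_ratio):
    """Physical space consumed for logical_tb of backup data."""
    return logical_tb / dedup_ratio

# 100 TB of logical backups at the advertised best case, my realistic
# range, and a poor result:
for ratio in (25, 16, 8, 2.5):
    print(f"{ratio:>4}:1 -> {stored_tb(100, ratio):.1f} TB on disk")
```

At 25:1 that 100 TB needs only 4 TB on disk; at 2.5:1 it needs 40 TB, which is why the ratio you actually achieve matters so much.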
Backing up Lotus Domino databases is a good case in point. When backing up your Lotus Notes databases you may only see a 2.5:1 de-dup ratio, which is clearly not a good result. The issue may well be with a function called compaction. Compaction re-arranges all of the data contained in the NSF (Notes Storage Format) databases to reclaim space. While this function helps to reduce space utilization from the perspective of Lotus Domino, it also changes the layout and data pattern of every single NSF. So the next time ProtecTIER receives blocks from these databases they all look unique, and the de-dup ratio naturally ends up being very low. However, running compaction is a best practice for Lotus Domino, so disabling it is not a solution.
The solution involves using a tool called DAOS (Domino Attachment and Object Service), which removes all the email attachments from the NSF files and stores them separately. This not only provides substantial space savings for Domino (because it only stores each unique attachment once) but also means that compaction can still run on the NSF files (which are now attachment free). The result at one customer? The combined de-dup ratio went up to 8.5:1 (which was about 2.5:1 on the NSF but almost 20:1 on the attachment files).
The only caveat is that Lotus Domino needs to be at version 8.5 to use DAOS. More information on performing backups with DAOS can be found here.
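The combined figure quoted above follows from simple weighting: each slice of the backup stream contributes its share of the data divided by its own ratio. A small Python sketch (the 19%/81% NSF-versus-attachment split is my assumption, chosen to land near the observed result; the customer's actual split was not published):

```python
# Weighted arithmetic behind a combined de-dup ratio: each slice of
# data contributes (share / ratio) of the stored space.
def combined_ratio(shares_and_ratios):
    """shares_and_ratios: (share_of_data, dedup_ratio) pairs; shares sum to 1."""
    stored_fraction = sum(share / ratio for share, ratio in shares_and_ratios)
    return 1 / stored_fraction

# Assumed split: ~19% NSF data at 2.5:1, ~81% attachments at 20:1
print(round(combined_ratio([(0.19, 2.5), (0.81, 20.0)]), 1))  # ~8.6
```

Notice how the poorly de-duplicating NSF slice drags the combined number well below 20:1 even though it is a small share of the data.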
Thanks to Francois Morin for sharing this with me.
I got an email today from the SNIA (Storage Networking Industry Association) announcing the availability of the SNIA Emerald™ Power Efficiency Measurement Specification. I have blogged about this in the past as something the industry definitely needs: a standardized way to compare the power consumption of different vendor products.
I have seen many clients try to do this (normally with some sort of spreadsheet), but the power numbers that vendors release are often worst case or for maximum configurations. For instance the DS8000 power usage numbers are for a fully configured machine (where every possible slot is populated). Most clients do not buy this configuration, so the power numbers appear far higher than they actually would be for the machine they purchased.
Even worse is that power consumption needs to be matched by relative performance. How much bang are you getting per watt of power consumed?
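To make "bang per watt" concrete, here is a trivial Python sketch with purely hypothetical numbers; the system drawing less absolute power is not necessarily the more efficient one:

```python
# Performance per watt as a figure of merit (higher is better).
# The numbers below are hypothetical, for illustration only.
def iops_per_watt(iops, watts):
    return iops / watts

system_a = iops_per_watt(200_000, 8_000)  # 25.0 IOPS per watt
system_b = iops_per_watt(90_000, 5_000)   # 18.0 IOPS per watt
print(system_a > system_b)  # True: A wins despite drawing more power
```

This is exactly why a raw wattage spreadsheet can mislead, and why a standard that pairs power with measured workload matters.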
Now only HP and IBM have submitted results so far, and only for one product each. But it is a start and it shows that vendors are willing to participate and contribute. Let's hope the results table fills up in quick order...
For completeness, here is what the SNIA shared with me (everything below is from the SNIA and is not my own work):
The Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) has announced the availability of the SNIA Emerald™ Power Efficiency Measurement Specification which was developed collaboratively among more than 25 member companies. Additionally, the GSI announced the SNIA Emerald Program. In combination, the SNIA Emerald specification provides a vendor-neutral power efficiency test measurement set of methods and the SNIA Emerald Program provides an industry-wide repository of measured test data.
"The SNIA is proud to deliver the SNIA Emerald program to a global IT industry and national bodies concerned with energy efficiency," said Leah Schoeb, chair of the SNIA Green Storage Initiative. "We are providing end-users and the industry at-large with a means to test, measure and evaluate storage systems power usage and efficiency, at a time when datacentre energy usage is projected to increase by 19% in 2012 according to Data Center Dynamics 2011 Census Report.”
The SNIA Emerald Power Efficiency Measurement Specification consists of the following elements:
Taxonomy: An industry-wide means of segmenting storage systems for products that span the range from consumer solutions to enterprise configurations which will be used to categorize the test results.
Test Methodology: A detailed and consistent means of testing various types of storage systems with load generators and power measurement instruments.
Test Metrics - Idle Measurement Test: The idle test applies to storage systems and components which are configured, powered up, connected to one or more hosts and capable of satisfying externally initiated, application-level initiated IO requests within normal response time constraints, but no such IO requests are being submitted.
Test Metrics - Active Measurement Test: Testing of storage products and components are said to be in an “active” state when they are processing externally initiated, application-level requests for data transfer between host(s) and the storage product(s).
The SNIA Emerald Program website will provide the industry with the resources needed to learn about, evaluate, test and submit storage system power usage and efficiency test results acquired using the SNIA Emerald Power Efficiency Measurement Specification. The Program is available to the industry at large, with no membership requirements. At the time of the public unveiling of the SNIA Emerald Program website, SNIA GSI members HP and IBM have both submitted test results for their respective storage systems commonly found deployed in data centers around the world. Storage system manufacturers and industry testing labs can download the SNIA Emerald Power Efficiency Measurement Specification from the SNIA Emerald website (www.sniaemerald.com). SNIA also recommends downloading the SNIA Emerald User Guide, which provides step-by-step guidance on how to set up a test and measurement environment for a storage system under test, and then submit measured test results to the SNIA Emerald Program. Once submitted test results are approved for public posting, manufacturers will obtain a SNIA Emerald Program logo to highlight their program participation. In turn, the industry at large can view the posted test results of various storage systems and review products that have met the SNIA Emerald testing requirements. The specification and program test report address disclosing configuration information for the system under test, including energy-saving storage capacity optimizations such as deduplication and thin provisioning.
Craig Scroggie, Chairman of SNIA ANZ commented “The program that SNIA has announced is the culmination of many years of collaboration between the industry manufacturers and represents a significant step forward in helping data centre managers and storage administrators get a consistent view of storage power consumption. Australia and New Zealand customers will benefit from this announcement and it is timely given the recent concerns over the impact of carbon tax legislation.”
Reference note 1: 2011 Data Center Industry Census
So let's be honest here. Watching corporate advertising on YouTube is not my favourite thing to do. But watching people I know... people who are passionate and articulate and who are talking about a subject they understand with tremendous depth... that's worth taking your time to check out.
Brian Carmody is the XIV Technical Product Manager. There are few people who can explain a deeply technical concept as well as he can. I have spotted him in two videos so far. If you can forgive the (very) cheesy music and the shaky handheld-camera-like graphics, listen to Brian, Yossi Siles and Robert Cancilla talking about XIV.
With the announcement that you can order an XIV with 3 TB SAS disks, IBM now have some amazing capacity options and some equally clever growth options with XIV Gen3.
As you hopefully know, the XIV consists of modules that each contain 12 disks. An XIV can have 6, 9, 10, 11, 12, 13, 14 or 15 modules (all modules must have the same size disk). You can start at any of those points and then grow without interruption or outage up to 15 modules (that's 243.3 TB!). There is practically no planning required to do a capacity upgrade and the data relocation to re-balance between the nodes is done automatically by the machine (without any end-user intervention).
The useable capacity sizing with 3 TB drives stretches from 84.1 TB with 6 modules to 243.3 TB with 15 modules (these are decimal TB).
However, the Capacity on Demand (CoD) options are far more interesting. With CoD you effectively buy a certain amount of capacity up front, but also get up to 3 more modules shipped with the machine. You can start using this extra capacity when your business requirements demand it, at which point you will be asked by IBM to purchase it. The advantage here is that you physically get a bigger machine up front, with all the performance benefits that bestows, plus you don't have to contact IBM to start using that extra capacity. Let's look at the possible configurations.
So let's take a scenario. You need 100 TB today, but you know this will grow to 130 TB over the next 12 months. So you could purchase an XIV with 9 physical modules (using 3 TB drives), with 7 CoD activations. This means IBM ship a machine that physically has 132 TB and 108 drives in 9 modules. Your data will be spread over all these drives and all of these modules will be active and working. However, you have effectively only paid for 103 TB of that space up front. If you order extra CoD activations, you could also order extra physical modules. As long as you stick to the chart above and have at least one un-activated module, you stay in the CoD program.
When your data requirements exceed 103 TB you just start using the extra space, no license keys or special tasks required. Nice!
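The arithmetic behind the scenario is straightforward: your paid-for capacity is the machine's full usable capacity scaled by the fraction of modules you have activated. A quick Python sketch using the figures from the scenario above:

```python
# CoD arithmetic: paid-for capacity scales with the fraction of
# activated modules.
def activated_tb(usable_tb, total_modules, activations):
    return usable_tb * activations / total_modules

# 9-module XIV Gen3 with 3 TB drives (132 TB usable), 7 activations:
print(round(activated_tb(132, 9, 7), 1))  # 102.7, i.e. the ~103 TB above
```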
So having told you how great it is... are there any disadvantages?
1) You need to actually buy the storage... eventually. Depending on the CoD contract there will be a point when IBM expect you to purchase this extra capacity. The whole point of CoD is that it is like pre-ordering capacity without actually paying for it up front. If you're really not certain you need extra capacity, you're probably better off not ordering CoD capacity in the first place. Instead, order capacity upgrades as you require them.
2) There is nothing to stop you using the storage. Now this is a curious disadvantage, because it means that if you have paid for 103 TB and you start using 105 TB, the machine will not tell you off or yell at you. So is this a good thing or a bad thing? Well, I really like the flexibility, so I think it is a good thing. Plus there is a nice command called cod_list which displays consumed capacity to help keep you on the path. You can also display it in the GUI. It just means you need to keep an eye on volume and pool creation to ensure you don't start configuring extra capacity until you're prepared to pay for it.
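If you would rather automate that watching, the kind of check you might script around the consumed-capacity numbers (however you retrieve them, e.g. from cod_list output or the GUI) could look like this Python sketch; the thresholds and figures are illustrative:

```python
# A simple watch on CoD consumption. Feed it whatever purchased and
# consumed figures you pull from cod_list; the numbers below are
# illustrative only.
def cod_status(purchased_tb, consumed_tb, warn_pct=90):
    if consumed_tb > purchased_tb:
        return "over"      # using un-purchased capacity: talk to IBM
    if 100 * consumed_tb / purchased_tb >= warn_pct:
        return "warning"   # close to the capacity you have paid for
    return "ok"

print(cod_status(103, 105))  # over
print(cod_status(103, 95))   # warning
print(cod_status(103, 60))   # ok
```

Run something like this on a schedule and you will never be surprised by an unplanned activation conversation.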
You can also use CoD with 2 TB drives on XIV Gen3 so this is another option. With 2 TB drives, the useable capacities look like this:
For those of you planning to move to ESXi 5.0, IBM have found an annoying (but not show-stopping) issue with the way XCOPY is implemented in the VAAI driver. With ESX/ESXi 4.1, IBM supplied the VAAI driver, but with ESXi 5.0 this changed and VMware now manage this themselves. It has since emerged that the way VMware implemented XCOPY in this driver does not totally work with the way IBM implemented XCOPY in the XIV, Storwize V7000 and SVC.
This is the current situation with the first three VAAI primitives in ESXi 5.0:
Hardware accelerated locking: Also known as Atomic Test and Set (ATS), this function works fine when ESXi 5.0 detects a volume from an XIV, Storwize V7000 or SVC. In fact the moment ESXi 5.0 detects a LUN from any of these products it uses ATS to confirm that VAAI is possible. So this is goodness.
Hardware accelerated initialization: Also known as write same, this function offloads almost all effort on the part of ESXi to write zeros across disks. This function works fine when ESXi 5.0 works with XIV, Storwize V7000 or SVC. So this is also goodness.
Hardware accelerated move: Also known as XCOPY, full copy or clone blocks, this function works fine with XIV, Storwize V7000 and SVC if you clone a virtual machine and place the new copy into the same datastore as the source. This means creating multiple clones of a VMDK inside the one datastore will still be accelerated by VAAI. So far so good, but unfortunately on XIV, if you place the clone in a different datastore on the same XIV, it will not be hardware accelerated. This means the clone is still created, but in the old-fashioned way (reading from the source and writing to the target). As for storage vMotion, it also reverts to working in the old-fashioned way, reading from the source and writing to the target, rather than the hardware accelerated way.
So to be clear, this issue with XCOPY:
Does not affect ESX/ESXi 4.1 at all.
Occurs no matter what version of VAAI-compliant XIV, Storwize V7000 or SVC code level you're running, or what model of XIV (A14 or 114).
Does not affect Atomic Test and Set or Write Same.
Does not affect clone operations on an SVC or Storwize V7000.
Does not prevent you using cloning on XIV, it just means that this task will not be hardware accelerated if the target datastore is different from the source.
Does not prevent you using storage vMotion, it just means that this task will not be hardware accelerated.
How will this be fixed? Well right now it looks like it will be fixed in new firmware on the IBM hardware. Watch this space and I will update you as soon as I have more news to hand.
As for the fourth VAAI primitive, unmap, which is used for space reclamation on thin provisioning capable hardware, please also watch this space on when IBM hardware will support it... BUT in my opinion it does not matter right now, because this new unmap function in ESXi 5.0 can potentially cause issues. This is described here: http://kb.vmware.com/kb/2007427
So until VMware confirm a fix, I recommend that you run the following commands on all ESXi 5.0 boxes which connect to IBM Storage to disable unmap. I tested the following syntax to confirm it works:
First confirm the value of the unmap setting (1 means enabled, 0 means disabled):
~ # esxcli system settings advanced list -o /VMFS3/EnableBlockDelete | grep "Int Value"
Int Value: 1
Default Int Value: 1
Then disable it:
~ # esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
Then confirm it is disabled:
~ # esxcli system settings advanced list -o /VMFS3/EnableBlockDelete | grep "Int Value"
Int Value: 0
Default Int Value: 1
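If you have more than one or two hosts, you may prefer to wrap those three steps in a script. Here is a minimal sketch of mine (the esxcli syntax is exactly as shown above; the get_unmap_value helper is my own invention) that only issues the set command when unmap is actually enabled:

```shell
#!/bin/sh
# Sketch: disable the ESXi 5.0 unmap primitive only if currently enabled.
# Run on each ESXi 5.0 host that connects to IBM Storage.

# Extract the N from the "Int Value: N" line of esxcli output (stdin).
# The "Default Int Value" line is not matched, because after the leading
# spaces it begins with "Default", not "Int Value".
get_unmap_value() {
  grep '^[ ]*Int Value:' | awk '{ print $NF }'
}

if command -v esxcli >/dev/null 2>&1; then
  current=$(esxcli system settings advanced list -o /VMFS3/EnableBlockDelete | get_unmap_value)
  if [ "$current" = "1" ]; then
    echo "unmap is enabled, disabling it"
    esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
  else
    echo "unmap already disabled (Int Value: $current)"
  fi
fi
```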
The eternal question: which hardware/software combinations are tested and supported? If you use IBM Storage hardware and need to answer this question, you should be using the IBM System Storage Interoperation Center, or SSIC, which you will find here:
I use this site a lot and rely heavily on the output it creates. I thought I knew the site well, but I recently learnt some really handy tricks that you might find helpful...
1) Export all the interop data for a single product version.
If you want to download every interoperability test result for a specific product version, you can select the relevant version from the Product Version box of the SSIC and then select Export Selected Product Version (xls).
In the example below we want to see all the results for XIV Gen3 which uses XIV Software version 11.
a) Use the scroll bar in the Product Version box to bring up the XIV product versions. You don't need to make a selection from the Product Family or Product Model boxes.
b) Select IBM Storage System (11) from the Product Version list.
c) Select the option to Export Selected Product Version (xls). A spreadsheet, compressed into a ZIP file, will be downloaded by your browser.
So that is just two clicks and the result is a giant spreadsheet. Reminds me of when interop matrices were giant PDFs.
2) Changing your selections from an existing search
As you make selections, the webpage leaves a trail of breadcrumbs. They appear at the top of the page and can be seen in the example below, numbered 1 to 6. You can use those breadcrumbs to step back to any earlier point in your selections, at any time.
3) Start anywhere
It seems to be human nature to start at the top and work downwards. In fact you can start anywhere on the SSIC and work in any direction. There are no real restrictions on the combinations you can attempt to build. Every time you make a selection in a different box, the number of configuration results drops. For instance, just click FICON in the Connection Protocol box... or just select IBM AIX 7.1 from the Operating System box. Then work up or down from there.
Hopefully these suggestions will help you work more effectively with the SSIC.
I know the walls are coming down... but plenty of organizational barriers still exist in IT. How about:
The Networking team (who may possibly be allied to or at war with the firewall/security team)
The Storage admin team (possibly split between open and System z)
The Systems admin teams (possibly split between System z and open and then split again into Windows and Unix or VMware and Unix)
The Applications admin teams (don't get me started on how those guys and gals can get split up)
The Security team
Teamwork and co-operation? Sure, it's an option... but then an option means it's optional... right?
So when vendors come along with plug-ins and products that dare to connect two worlds... is this a unifying force, or is it anti-matter, or do they just get ignored and not used?
A case in point is the IBM Storage Management Console for VMware vCenter, which you can download from here. I have written about this plug-in before, but with the release of version 2.6 (which supports vSphere 5.0), I thought I would try something out. Installing the plug-in potentially offloads a lot of storage management from the storage admin to the VMware admin. But what if the storage admin does not WANT to offload this work?
The answer is to give the VMware admin read-only access.
When you define your IBM storage device to the plug-in, you supply the plug-in with log-in credentials (so it can log into your IBM storage device and collect the required information). If the user-id supplied has only read-only access to the XIV, for instance, the plug-in still works... just not for any operations that change resources. You cannot see the pools on the XIV, but you can still see your volumes and any snapshots that have been created (though annoyingly you cannot see mirrors).
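On XIV you can create such a restricted account up front with the XCLI. The fragment below is a sketch only: the user_define command and the readonly category are what I would expect from XCLI conventions, and the user name and password are my own placeholders, so verify the exact syntax against your XIV software version before relying on it:

```
# Create a read-only user for the vCenter plug-in (name/password are
# placeholders; syntax assumed from XCLI conventions -- please verify)
user_define user=vc_readonly password=Passw0rd password_verify=Passw0rd category=readonly

# Confirm the category the new user landed in
user_list user=vc_readonly
```

Give the plug-in those credentials and the VMware admin gets visibility without the ability to change anything.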
This does have one big advantage: you can clearly match the VMware datastore name to the XIV volume name, and you can also identify which XIV supplied the volume.
This is a must for large installations, regardless of what storage admin tasks (if any) you want to allow the VMware admin to perform.
I also tested this with Storwize V7000, using a user in the Monitor category, and got pretty well the same results. A nice bonus is that I could also see the state of the mirrors as well as the FlashCopies. In the example below, all of this information would normally not be visible to the VMware admin, so this is very handy stuff.
Of course I also get to visit one-man bands where the same (exhausted) individual manages the VMware servers, the Operating System guests, the network, the firewall, the Exchange server, the SQL servers and pretty well everything else, including getting the elevators and coffee machine fixed. Those people need all the help they can get.