IBM has today announced a whole swag of planned new features across the entire IBM Storage product line. You can read the announcement letter here and I have also dropped the text at the bottom of this blog post (to save you clicking on the link).
It's a very impressive list, but to home in on a few of the more exciting offerings:
IBM Easy Tier will be enhanced to cache hot data on SSD storage installed in a client server. It looks like it will initially be a combination of DS8700/DS8800 and AIX or Linux servers. I am sure there are plenty who will immediately think of EMC VFCache, so I am keen to get more details so I can see how the two compare. If you are curious in the meantime, check out this EMC fact sheet and then read this fascinating interview with the CMO of FusionIO.
A new high density storage module will be made available, initially (I suspect) for the DS8800. This is a really important step, as we are seeing a lot of new technologies emerging in the SSD space. The technical requirements of SSD don't always line up with the architectures of existing storage controllers, so a custom-built enclosure designed just for SSD makes perfect sense.
The IBM XIV will be enhanced with the ability to cluster multiple XIVs together and migrate volumes non-disruptively between them. Non-disruptive volume migration is a great new feature which should definitely help with swapping XIVs out as new models become available.
There are plenty of other new features as well, so check out the announcement letter reproduced below:
IBM® intends to support a number of new enhancements to a variety of IBM storage systems in the future. These enhancements will leverage innovative research on intelligent algorithms, automation, and virtualization that is being incorporated into products in the IBM storage portfolio. The statements of direction highlighted here are intended to provide a glimpse into the IBM storage roadmap for selected product capabilities.
IBM intends to deliver:
Advanced Easy Tier™ capabilities on selected IBM storage systems, including the IBM System Storage® DS8000®, designed to leverage direct-attached solid-state storage on selected AIX® and Linux™ servers. Easy Tier will manage the solid-state storage as a large and low latency cache for the "hottest" data, while preserving advanced disk system functions, such as RAID protection and remote mirroring.
An application-aware storage application programming interface (API) to help deploy storage more efficiently by enabling applications and middleware to direct more optimal placement of data by communicating important information about current workload activity and application performance requirements.
A new high-density flash storage module for selected IBM disk systems, including the IBM System Storage DS8000. The new module will accelerate performance to another level with cost-effective, high-density solid-state drives (SSDs).
IBM intends to extend IBM Active Cloud Engine™ capabilities to:
Allow files on selected NAS devices to be virtualized by SONAS and Storwize® V7000 Unified. Virtualization capabilities provide access across a unified global namespace, while facilitating transparent file migrations in parallel with normal operations. This capability will help provide customer investment protection as clients continue to leverage their existing NAS assets while exploiting the capabilities of IBM Active Cloud Engine.
Enable file collaboration globally via IBM Active Cloud Engine. This capability will help enhance productivity where users at geographically dispersed locations can both share and modify the same file.
IBM intends to deliver Cloud features to SONAS and Storwize V7000 Unified to support:
Web Storage Services, a standards-based object store and API that implements the Cloud Data Management Interface (CDMI) standard from Storage Networking Industry Association (SNIA) to support the implementation of storage cloud services.
Self-service portal designed to speed storage provisioning, monitoring, and reporting.
IBM intends to support an increased scalability of capacity, performance, and host bandwidth by clustering IBM XIV® Gen3 systems together and providing the capability to migrate volumes across the cluster without disrupting applications. Management of the cluster will remain simple with consolidated views and shared configurations across the systems. These capabilities are intended to help clients address the scalability and management requirements for effective cloud computing.
IBM intends to extend NAS data retention enhancements for IBM Storwize V7000 Unified and IBM SONAS to provide file "immutability" to help support file integrity from the time the file is designated as immutable through its lifecycle. Immutability is intended to secure files from inadvertent or malicious change or deletion.
IBM intends to enable Real-time Compression for block and file workloads on Storwize V7000 Unified systems. This enhancement is designed to help clients experience the same high-performance compression for active primary block and file workloads on Storwize V7000 Unified that is being announced for block workloads on Storwize V7000. IBM Storwize V7000 Real-time Compression is designed to deliver enhanced storage efficiency with potential benefits including lower storage acquisition cost (because of the ability to purchase less hardware), reduced storage growth, and lower rack space, power, and cooling requirements.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information in the above paragraphs is intended to outline our general product direction and should not be relied on in making a purchasing decision. The information is for informational purposes only and may not be incorporated into any contract. This information is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for our products remains at our sole discretion.
If you have combined vSphere 5.0 with XIV, then you may want to try out the new IBM Storage Provider for VMware VASA (vSphere Storage APIs for Storage Awareness). You can download the installation instructions, the release notes and the current version of the IBM VASA provider from here. Because VASA was introduced in vSphere 5.0, your VMware vCenter also needs to be on version 5.0.
Now IBM have had a vCenter plugin for a very long time (which I have written about here, here and here). While you still need that plugin if you want to do storage volume creation and mapping from within vCenter (as opposed to using the XIV GUI), the VASA provider makes storage awareness more native to vCenter. This is a very important step: instead of using vendor-added icons and tabs (like the IBM Storage icon and the IBM Storage tab added by the IBM Storage Management Console for vCenter), you just use the default vCenter tabs.
Right now, version 1.1.1 of the IBM VASA provider delivers information about storage topology, capabilities and state, as well as events and alerts, to VMware. This means you will see additional information in three tabs: Storage Views, Alarms and Events.
After installing and setting up the VASA provider, select your VMware cluster in vCenter, go to the Storage Views tab and select the view Show all SCSI Volumes (LUNs). There are four columns with extra information: Committed, Thin Provisioned, Storage Array and Identifier on Array (indicated with red arrows), which come straight from the XIV (hit the Update button at upper right if you are not seeing anything yet). This is really useful information, as it lets you correlate the SCSI ID of a LUN to an actual volume on a source array. Here is a cut-down view of that extra information:
If you want a larger screen capture you can find one here.
The Tasks & Events and Alarms tabs will also now contain events reported by the VASA provider, such as thin provisioning threshold alerts (although if you have just installed the provider you may see nothing new, since nothing has yet occurred to provoke an alert or event).
As usual I have some handy tips on the steps you will need to take to get VASA going:
First up, you will need to identify a virtual machine to run the provider on (or just create a new one). I chose to deploy a new instance of Windows 2008 from a template. Because the VASA provider communicates with vCenter via an Apache Tomcat server listening on port 8443, that port needs to be free and unblocked. This also means you should not run the VASA provider in the same instance of Windows as the vCenter server (see below for more information as to why).
Download the IBM Storage Provider for VMware VASA as per the link above (use version 1.1.1, see the user comments in this post for details about a bug in version 1.1.0).
Install the provider in the Windows VM you created in step 1. The tasks are detailed in the Installation Instructions, but it is a simple follow-your-nose application installation. As per most XIV software packages, it will install a runtime environment (xPYV which is Python) as part of the install.
Now we need to define the credentials that VMware vCenter will use to authenticate to the IBM VASA Storage Provider. These should be unique (they are not an XIV userid and password - this is only between vCenter and the provider software). In my example I use xivvasa and pa55w0rd. The truststore password is used to encrypt the username and password details (so that they are not stored in plain text). Open a Windows command prompt (make sure to right-click and open it as an Administrator) and enter the following commands:
cd "C:\Program Files (x86)\IBM\IBM Storage Provider for VMware VASA\bin"
vasa_util register -u xivvasa -p pa55w0rd -t changeit
Don't close the command prompt, because we now need to define the XIV to the IBM VASA provider.
You need the IP address of your XIV, plus a valid user and password that can be used to log on to the XIV. In this example my XIV is on 10.1.60.100 and I am using the default admin username and password (which I know does not set a good example). This is the command you need to run:
If this command fails, reporting that your firmware is invalid, you are probably using the original 1.1.0 version of the VASA provider; go back to the IBM Fix Central website and make sure you have the latest version (at least version 1.1.1). If it reports that the firmware cannot be read, make sure you are running the Command Prompt as an Administrator.
Once you have successfully added the XIV to the provider, you need to restart the Apache web server. Do this by starting the services.msc panel and looking for the Apache Tomcat IBMVASA service as pictured below. Stop it and then start it. Once you have done that you can log off from the VASA VM.
Now connect to your vSphere Client (which needs to be at least version 5.0.0) and from the Home panel, open the Storage Providers panel. Then select the option to Add a new provider. The URL needs to include the correct port number (by default 8443), so it will look something like this (where the provider is running on 10.1.60.193). Note also that the VASA provider version number is in the URL, so if you upgrade the provider you will need to change the URL (currently v1.1.1):
The Login and password should match the user id and password you defined in step 4 (remember it is not logging into the XIV, it is logging into the VASA provider).
If you get a message saying your user id and password are wrong, you probably forgot to stop and start Apache in step 6 above. If you succeed you should see a new provider listed. Highlight the provider and select sync to update the last sync time.
Your setup tasks are now all completed. Now go and explore the panels I detailed above to see what new information you have available to your vCenter server.
Why a separate server for the VASA provider?
The IBM VASA provider uses Apache Tomcat, which by default listens on port 8443. Since vCenter already has a service listening on port 8443, we have a clash. I googled and found that the Dell and NetApp VASA providers also listen on port 8443 and also recommend separate servers. I noted Fujitsu's provider uses a different port but still requires a separate server. So it seems that if you have multiple vendors you will either have to spin up a separate server for each vendor's provider, or start playing with changing the port numbers. The installation instructions for the IBM VASA Provider explain how to change the default port number if you are truly keen.
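Before installing a provider on a VM, it is worth confirming that nothing already owns port 8443 there. Here is a quick cross-platform sketch of my own (not from the IBM documentation) that simply attempts a TCP connection:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success, i.e. something is listening
        return s.connect_ex((host, port)) != 0

if __name__ == "__main__":
    state = "free" if port_is_free(8443) else "in use - pick another VM or change the port"
    print(f"port 8443: {state}")
```

Run it on the candidate VM before the install; if 8443 is taken you have found your clash early.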
I always laugh when people say to me: I wouldn't know what to blog about!
When you work in pre-sales support, you constantly get asked questions and each one of them could be the subject of a new blog post. Right now the most common question I am getting is:
I am implementing VMware Site Recovery Manager (SRM). One of the components I need is the vendor-specific Storage Replication Adapter (SRA). I have searched IBM's website but cannot find it. Where is it?
So the short answer is: you get them from the VMware SRM download site. However before downloading, there is a key task that absolutely needs to be performed:
Visit the VMware vCenter Site Recovery Manager Storage Partner Compatibility Matrix. This site will confirm what products are supported by each version of SRM. You can find it here, but clearly you need to check back regularly to ensure you have the latest information.
Now find your storage device in the matrix and confirm what firmware levels are supported. This is really important. For example, the Feb 27, 2012 edition of the matrix tells me that the Storwize V7000 is supported for SRM version 5.0, but only when running Storwize V7000 firmware version 6.1 or 6.2. This is significant because if you upgrade to version 6.3 you are not supported. In fact that combination doesn't actually work yet, as detailed here. Clearly something you need to be aware of when planning firmware updates.
So where are the SRAs? On each of the pages below, use the Show Details button to see which SRA versions ship with that SRM release (although sometimes the pages take a few days to be updated after an SRA is added):
There are a few more questions I routinely get asked:
Does IBM actually have an SRA download site?
The answer is yes, but it is an FTP site only for SRAs written by IBM. It is principally a repository for older and beta SRAs, although you can also find the current SRAs on it. You can find the site here. Note however that it is NOT the official source; for that you need to use the VMware site.
What about the SRA for LSI/Engenio based products like the DS4800?
These also used to be found on the LSI site, but since LSI sold Engenio to NetApp, they are no longer available from the LSI or NetApp websites. You need to download the current version from the VMware sites listed above. There is a version for SRM 5 on the VMware download site.
What about nSeries SRAs?
If you need an nSeries SRA, again you should go to the VMware download pages. There are separate SRAs listed and available for IBM nSeries (as opposed to an SRA for NetApp branded filers).
What about an SRA for XIV with SRM version 5?
The answer: the SRA for XIV with SRM 5 (and 5.0.1) is now available from VMware. If you have access to download SRM, you will be able to download SRA version 2.1.0. It is the same SRA for both XIV Generation 2 and Gen3.
What about an SRA for Storwize V7000 and SVC version 6.3 code?
The answer: It is coming. We are working to make it available as soon as possible. I will update this post as soon as I have a date for you (we are talking weeks, not months).
*** Update March 23, 2012 - Added details on SRM 5.0.1 ***
The updated XIV GUI that supports version 11.1 of the XIV software (which adds support for SSD Read Cache) is now available for download. This brings the XIV GUI to version 3.1 and you can download it for Windows, Mac, Linux, Solaris, AIX and HP-UX from here.
So what benefits will you get?
The new GUI will display information about the SSD read cache. For instance, the statistics panel will now also report on SSD cache hits (as opposed to memory cache hits). The GUI will also display the presence and health of the SSD in each module (provided SSDs have been ordered for that machine). You can clearly see that it is located at the rear of the module!
It supports the IPv6 protocol. So if your XIV system has code level 11.1.0 or above, you can manage that XIV over an IPv6 connection (after using the updated GUI to define the new addresses).
The GUI can now manage up to 81 systems from a single console. Yes you read that right: 81 systems. So let's think about that: IBM would only take the GUI to that number if there were clients who were approaching that number. Outstanding!
Enhanced search and filtering allows you to search across all managed devices and also filter what gets displayed. The search function is really nice; you get to it from the View menu as shown. In this example I search for the term test and get a considerable number of hits. Notice that the first column uses some very nice icons to indicate the resource type (such as a volume, pool or host cluster):
The GUI now displays un-mapped LUNs as a separate category. This is also a very nice enhancement.
One other change is that if you start the XIV GUI in demo mode it now also displays an XIV Gen3 (so you can see the Gen3 patch panel).
If you are running Generation 2 XIVs (on 10.x.x code) you will still benefit from those last three improvements, so there is something for everyone.
I have updated my IBM Storage WWPN Determination Guide to version 6.5. You can find the updated guide on IBM Techdocs here.
The main change is that new DS8800s are now presenting slightly different WWPNs, so I added three new pages to describe the changes.
If this guide is new to you, its purpose is to let you take a WWPN and decode it, so you can work out not only which type of storage that WWPN came from, but also the actual port on that storage. People doing implementation services, problem determination, storage zoning and day-to-day configuration maintenance will get a lot of use out of this document. If you think there is an area that could be improved, or products you would like added, please let me know.
It is also important to point out that IBM Storage uses persistent WWPNs, which means that if a host adapter in an IBM Storage device has to be replaced, the replacement will always present the same WWPNs as the old adapter. This means no changes to zoning are needed after a hardware failure.
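To make the decoding idea concrete, here is a toy sketch of my own (not taken from the guide): an IEEE Registered (NAA 5) WWPN packs a format nibble, a 24-bit vendor OUI, and a vendor-specific field in which the product and port are encoded. The per-product port encodings are exactly what the guide documents; the OUI table below is deliberately tiny and illustrative.

```python
# Tiny illustrative OUI table - the real IEEE registry has thousands.
KNOWN_OUIS = {"005076": "IBM"}

def decode_wwpn(wwpn):
    """Split a 16-hex-digit WWPN into its NAA-5 fields."""
    hexstr = wwpn.replace(":", "").lower()
    if len(hexstr) != 16:
        raise ValueError("a WWPN is 8 bytes (16 hex digits)")
    return {
        "naa": hexstr[0],                # '5' = IEEE Registered format
        "oui": hexstr[1:7],              # 24-bit vendor OUI
        "vendor": KNOWN_OUIS.get(hexstr[1:7], "unknown"),
        "vendor_specific": hexstr[7:],   # product/port encoding lives here
    }

info = decode_wwpn("50:05:07:68:01:23:45:67")
print(info["vendor"], info["vendor_specific"])  # IBM 801234567
```

The vendor_specific digits are where the guide earns its keep: turning them back into a physical port is product-dependent.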
I also host the book on slideshare, so you can also view and download it from there:
It's been a long time coming, but I have finally joined the cult of Mac in the form of a new MacBook Pro. Having not used an Apple Mac for over 15 years, I must say I am truly loving what they have done with the operating system and the hardware (my last Mac was a Mac SE bought in 1990).
Now this post is not a rant from a new convert to everything Apple. In fact my main gripe is one you rapidly discover when you move to Mac OS: not every piece of software is going to work in your new world. While Lotus Notes and Sametime have very nice Mac versions, my day-to-day work involves IBM Storage and there are several tools I need that are Windows only. These include Capacity and Disk Magic (used to size solutions) and eConfig (used to order IBM products). This means for certain applications I need to use a hypervisor (such as VMware Fusion or Parallels).
But what about managing IBM Storage? Well I have some good news on that front:
SAN Volume Controller and Storwize V7000: Because these products are managed from a web page, they are operating-system agnostic. To be clear, officially only Firefox 3.5 (and above) and IE 7.0 and 8.0 are supported (support details are right at the bottom of this page, while setup details are here). Since IE is no longer available for Mac, you should install Firefox (or try out Safari or Chrome; I have tried all three without issue).
XIV: The XIV GUI is available in a native Mac OS version from here. The release notes state that the XIV GUI works on Mac OS X 10.6 but I am happily using it on Mac OS X 10.7 (Lion). The Mac OS X installation process is simply beautiful (just drag and drop, one of the truly nice features of Mac OS X) and of course it works just as nicely on Mac as it does on the other supported operating systems.
Drag and drop done right.
Attaching OS X to IBM Storage
Of course maybe you want to attach your Mac OS X box to IBM Storage. If you visit the SSIC you will find IBM supports OS X on pretty well its entire range, including SVC, Storwize V7000, XIV, DS3500 and DCS3700. Mainly these use the ATTO HBA and multipath device driver. If your particular setup is not there, get your IBM pre-sales support to open a support request; depending on your request, approvals are normally very fast.
Of course I have to mention the iPhone and iPad. IBM have the XIV Mobile Dashboard for both devices, which I previously blogged about here (iPad) and here (iPhone). These are really elegant apps that even have a cool YouTube video.
Of course now I want all the goodies promised in Mountain Lion. With the convergence of OS X and iOS, I would love to see even more converged tools. A man can dream....
1) Demo mode
There is a demo mode, but right now there is no tick box to activate it. Simply use the word demo in all three fields at login. In other words:
IP address: demo
User ID: demo
Password: demo
2) Retina display requirement
The Mobile Dashboard was written for the Retina display (that comes with an iPhone 4 or iPhone 4S). This sadly means that the iPhone 3GS and earlier will not be able to use the new Mobile Dashboard. This wasn't done as part of some devious plan on IBM's part to force you to buy a new iPhone, the developers simply needed the better resolution to draw those graphs and provide the richest and clearest display of information on a single page (you will believe it when you see it, the detail is quite stunning).
The App Store clearly states the hardware and iOS requirements on the download page; however, you can still try to install it on an iPhone 3GS. Curiously, what you get is this rather bizarre message:
The reason you get this message is simple: there is no way, when uploading a new app to Apple, to specify that you need the Retina display. Instead the developer needs to specify a feature that is not found on earlier iPhones, such as a camera flash. So it is not that the XIV Mobile Dashboard needs a flash in your camera; it is simply a quirk of the App Store.
And for those of you who are using Android devices, your calls are being heard. Watch this space for developments in that direction.
I got a question about Veritas DMP and XIV, so I thought I would write a quick post with some details on the subject.
A fundamental requirement for a host attached to a Fibre Channel SAN is the use of multipathing software. One alternative (that IBM supports for most operating systems attaching to XIV) is Symantec Dynamic Multi-Pathing (DMP). A nice way to find out whether this is the case for your particular operating system is to head to the SSIC, choose Enterprise Disk → XIV Storage System → your product version, and then Export the Selected Product Version to get a spreadsheet of every supported environment. Under the multi-path heading of each page you will see which choices are supported.
It works with heterogeneous storage and server platforms (so you could have EMC and IBM attached to the same server at the same time).
You can centrally manage all storage paths from one central management GUI.
Then the question becomes: if I choose to go down the DMP route, do I still need the XIV Host Attachment Kit (HAK)?
The answer is a definite yes!
Veritas DMP and Solaris
If you're using DMP with Solaris, when you run the XIV HAK wizard it will scan for existing dynamic multipathing solutions. Valid solutions for the Solaris operating system are Solaris Multiplexed I/O (MPxIO) and Veritas Dynamic Multi-Pathing (VxDMP). If VxDMP is already installed and configured on the host, it is preferred over MPxIO.
Veritas DMP and Windows
For a Windows host the important point is that Veritas Storage Foundation Dynamic Multi-Pathing (DMP) does not rely on the native multipath I/O (MPIO) capabilities of the Windows Server operating system; instead, it provides its own multipath I/O solution. Because these two solutions cannot coexist on the same host, perform the following procedure if you intend to use the Veritas solution:
Install the Veritas Storage Foundation package (if it is not already installed).
Restart the host.
Install the IBM XIV Host Attachment Kit (or run the portable version).
The HAK will perform whatever system changes it detects are necessary, while still allowing DMP to perform the multipathing. This may require a reboot (to install Windows hot fixes).
As I said, the HAK will ensure that the required hot fixes are present, and these hot fixes are fairly important. To see what tasks the HAK wants to perform WITHOUT actually performing them, use the portable HAK and run:
This will tell you what tasks will be undertaken when you run the command without the -i parameter. I detailed this behaviour here.
One benefit of the HAK is the wonderful xiv_devlist command. Even if you are using DMP, the xiv_devlist command will still work, although you may need to specify veritas as per this example:
xiv_devlist -m veritas
Need more documentation?
This is all documented in the XIV Host Attachment Users Guide, which you can find here.
In that post I detailed how the XIV began as the Nextra, was then released as the IBM XIV and then updated to the XIV Gen3. So this means last year we saw release 3.0 of the XIV.
At the risk of getting over excited, some of the achievements of the IBM XIV have been truly remarkable:
There are 59 Clients with more than 1 PB each of usable XIV capacity
There are 16 Clients with more than 2 PB each of usable XIV capacity
I am sure some competitors will find larger numbers to try to drown out this achievement, but the point is this: these are FANTASTIC numbers. They show that despite all the FUD, the XIV is a success for IBM and a success for IBM's customers.
So at the time of the Gen3 release, IBM made no secret of the fact that they planned to add the option of SSD as a read cache layer. In fact each and every Gen3 shipped so far has the mounting and attachment hardware needed to support those SSDs.
Now with release 3.1 IBM turns that promise into a reality.
So... to answer some possible questions:
How can I get some of this SSD goodness?
Order the feature! For existing machines, IBM will need to update the firmware of your XIV Gen3 (non-disruptively) to add SSD support. There will also be an updated version of the XIV GUI. Once those are in place, an IBM Service Representative will add an SSD to each interface module. All of this will be completed without interruption to your operations.
How much read cache will I get?
Each XIV Gen3 module already has 24 GB of server RAM. Since an XIV can vary from 6 to 15 modules (based on capacity), that gives you between 144 GB and 360 GB of server RAM to provide read and write cache. If you add the SSD option you get one 400 GB SSD per module, which means between 2.4 TB and 6 TB of additional read cache (depending on module count). The SSDs are not used as write cache.
What administration will I need to perform?
How about none? This is XIV: it's all about making it simple. It's no surprise that practically every IBM Storage device now uses the XIV GUI. These guys wrote the book on making things easy to use.
But seriously, no administration? Well... there are two things you may want to do:
Check how many SSD based read hits you are getting (versus memory based read hits). It's always nice to see just how effective these SSDs are proving themselves to be.
Turn SSD read caching off or on at a per volume level (by default it is on for all volumes). I don't anticipate many clients will need or want to do this, but the option is there and it is very easy to do.
Won't these SSDs wear out or slow down over time?
These are the two great fears of SSD... and XIV development has combined their art with some great work from IBM Research to make sure this is not an issue. The way data is written out to the SSD is handled in a very sophisticated manner. The end result will be consistent and predictable performance with a very long operational life. I will give you more details about exactly how this is done in a future post.
What happens if one of these SSD fails?
Because the SSD is not used as write cache, no data can be lost. Data in memory cache is de-staged by that module to both SAS disk and, asynchronously, to SSD (although not all data will necessarily go to SSD). So there are no bottlenecks and there is no risk. The other modules will keep using their SSDs, and IBM will replace the failed SSD non-disruptively.
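Putting the pieces together (RAM cache in front, SSD as a second-level read cache, SAS disk behind, with writes never depending on the SSD), the read path can be sketched as a toy model. Everything here, names and promotion policy included, is my own invention for illustration; the real XIV caching logic is far more sophisticated:

```python
def read(block, ram_cache, ssd_cache, disk):
    """Toy two-level read cache: RAM first, then SSD, then disk."""
    if block in ram_cache:
        return ram_cache[block], "ram-hit"
    if block in ssd_cache:
        ram_cache[block] = ssd_cache[block]  # promote back into RAM
        return ram_cache[block], "ssd-hit"
    ram_cache[block] = disk[block]           # slowest path: SAS disk
    return ram_cache[block], "disk-read"

disk = {1: "a", 2: "b", 3: "c"}
ram, ssd = {1: "a"}, {2: "b"}
for blk in (1, 2, 3):
    print(blk, read(blk, ram, ssd, disk)[1])  # 1 ram-hit, 2 ssd-hit, 3 disk-read
```

The key property the post describes survives even in the toy: losing the SSD tier only costs you hits, never data.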
What sort of performance improvement will I see?
Depending on your application and data patterns you should see your IOPS more than double; a three-times improvement is quite possible. Response times could drop by more than two-thirds. Given the latency gap between SSD and spinning disk, these are not surprising results.
IBM intends to demonstrate, using industry-standard benchmarks, the performance of an XIV Gen3 with SSD. I can tell you these numbers are going to be very impressive. Watch this space.
Is that it? Anything else in this release?
Release 3.1 also adds:
The ability to mirror between Generation 2 and Gen3 XIVs.
All the base support for IPv6 (although there are still some certification tests to complete).
Improvements to system thresholds (such as maximum pool size).
GUI enhancements (mainly to add panels for the SSD cache).
A new iPhone app (in addition to the existing iPad app).
If you are interested in the current state of play with XIV, there are a huge number of new resources that have been created or updated as part of the XIV 3.1 update, so I thought I would give you a list. If you are a customer, please scan down to see if there is anything here that interests you. If you are an IBMer or IBM Business Partner (or IBM competitor!), this is all mandatory reading. Either way, check out the new YouTube video; it is very cool.
As promised here is the new video on YouTube that shows the new XIV iPhone App!
I just checked the Apple App Store and cannot see the application yet (only the iPad version). I will update you the moment the iPhone version becomes available for download (and yes it will have a demo mode).
For more XIV-related materials (white papers, demos, videos, case studies...), I invite you to pop over to the XIV area of the IBM website: ibm.com/storage/disk/xiv. You'll find links to materials throughout, such as the SPC report and ISV white papers; click on the Resources tab for a consolidated list of the most recent materials.
I have previously blogged about two XIV report generation tools that you can download and start using. This is just a short update to let you know there are updated versions of both tools, plus a new one that has just been added. These tools are all on my files section at the IBM developerWorks site (where you can also find my Visios).
To sum up what these tools do:
XIV Capacity Report
This script creates an XLS or CSV file that contains four very useful tabs: Systems, Pools, Hosts and Volumes. You can use it to report on your storage, find un-mapped or un-mirrored volumes, check your consumption, and so on. Clients, Business Partners and Cloud providers love this nice and simple tool.
It is currently up to version 3.9 and you can find it here.
XIV Performance Report
This script creates an XLS or CSV file that gives the same information as the XIV Top utility, but for a range of days (so we are looking at historic rather than current performance). You could, for example, see which volumes were busiest over the past three days or the previous week. You can easily spot if host HBAs are not being used or if XIV interface traffic is not being balanced.
It is currently up to version 3.9 and you can find it here.
XIV Usage Report - NEW!
This script creates an Excel file that shows the current and historic usage of your volumes and pools. It also provides a trend prediction to help estimate when your pools or volumes will be full. This is great for trend and growth analysis.
It is currently on version 3.9 and you can find it here.
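To illustrate the kind of trend prediction the Usage Report performs, here is a minimal sketch (my own illustration, not the tool's actual code): fit a least-squares line to historic pool usage and estimate when the pool fills. The pool sizes and samples are made up.

```python
# A minimal sketch of linear trend prediction (not the tool's actual code).
# Fit a least-squares line to historic pool usage and estimate how many
# days remain until the pool is full.
def days_until_full(samples, capacity_gb):
    """samples: list of (day_number, used_gb) pairs; returns days from day 0,
    or None if usage is flat or shrinking."""
    n = len(samples)
    sum_x = sum(x for x, _ in samples)
    sum_y = sum(y for _, y in samples)
    sum_xy = sum(x * y for x, y in samples)
    sum_xx = sum(x * x for x, _ in samples)
    slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x)  # GB/day
    intercept = (sum_y - slope * sum_x) / n  # GB at day 0
    if slope <= 0:
        return None  # no growth, so no predicted fill date
    return (capacity_gb - intercept) / slope

# Hypothetical pool growing 10 GB/day from 500 GB, with a 1000 GB hard size
history = [(0, 500), (1, 510), (2, 520), (3, 530)]
print(days_until_full(history, 1000))  # prints 50.0
```

The real report works from the usage history the XIV already keeps; the point is simply that a straight-line fit over recent samples gives a usable "pool full" estimate for growth planning.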
One of the many tools in your XIV toolkit is the Host Attachment Kit, or HAK. Two of my favorite commands provided by the HAK are xiv_attach and xiv_fc_admin, which we use to configure our hosts. Naturally, users want to know exactly what changes these commands might make; the current output gives some good information, but some of our users wanted more. So the good news is that XIV Host Attachment Kit version 1.7.1 is now available, and the Windows and Linux versions now have a new extra verbose mode, which you access with the -i parameter as shown:
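For example, hypothetical invocations might look like this (check the HAK Users Guide for the exact flags on your platform):

```shell
# Hypothetical invocations -- consult the HAK Users Guide for the exact
# flags on your platform. With -i, the commands report in detail what
# changes would be needed to attach this host to an XIV, without actually
# performing any of them.
xiv_fc_admin -V -i    # verify the host configuration, extra verbose
xiv_attach -i         # attachment wizard, extra verbose
```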
By using -i with the commands you will see in even more detail what changes are needed to configure your host to attach to an XIV, but without any changes actually being performed (which is a very cool thing). If you still need more information on the output you can use Appendix A of the updated XIV Host Attachment Kit Users Guide, which you can get from here. The guide has some very useful information on best practices and VMware setup and is really mandatory reading for people using XIV.
To be clear, the new verbose parameter only works with HAK version 1.7.1, which you can download from here. Other notable changes in version 1.7.1 are over 40 bug fixes and over 20 improvements, including support for RHEL 5.7, 6.1 and 6.2 (but please check the SSIC to confirm support).
As I previously blogged, the XIV Host Attachment Kit now comes in a portable version. This means you can run the HAK commands without having to first install any software or make any changes on your server. When you download the HAK you get both the portable and full installer versions.
In other news the XIV Host Software team now have their own blog! Check it out here and add it to your RSS list.
I am getting this question on a very regular basis:
"We have just upgraded to ESXi 5.0 but we cannot find the VAAI driver on the IBM Website"
The answer? There is no vendor-supplied driver because no driver is needed. ESXi 5.0 uses T10-compliant SCSI commands that all storage vendors must support for VAAI to work.
But of course in the tradition of all answered questions, it leads to another question:
"Once I have upgraded to ESXi 5.0 how can I tell if VAAI is really working?"
The good news is that it is very easy to spot whether ESXi 5.0 has detected a VAAI-capable LUN. The moment a new LUN is detected, ESXi 5.0 tries out an Atomic Test and Set (ATS) command. If that works, Hardware Acceleration shows as Supported in vCenter. In the screen capture below I have three datastores, two from XIV and one from Storwize V7000, all presented to an ESXi 5.0 server. I dragged the Hardware Acceleration column over from the right-hand side to help with the screen capture (in case your vCenter looks different), but you can see the Hardware Acceleration column shows each datastore as Supported (and did so the moment the volume was detected).
Of course, seeing Hardware Acceleration show as Supported only proves that Atomic Test and Set works. To confirm that XCopy (Hardware Accelerated Move) is working on SVC or Storwize V7000, we can use the Performance monitoring panel. In the example below I first performed a Storage vMotion, moving a virtual machine between two datastores located on the same Storwize V7000. I then performed a clone of the same virtual machine, where the source was on one datastore and the target was placed on another (but both located on the same Storwize V7000). What you can clearly see is that both operations (Storage vMotion and cloning) generated no volume traffic, only MDisk traffic. This means the ESXi server is doing none of the data movement; the storage is doing all of the work.
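Another quick check, from the ESXi host itself, is the built-in esxcli (the naa device identifier below is made up):

```shell
# On ESXi 5.0, esxcli reports VAAI primitive support per device -- no
# vendor plugin is needed for a T10-compliant array. The device ID here
# is made up; substitute one of your own from "esxcli storage core device list".
esxcli storage core device vaai status get -d naa.60050768000000000000000000000123
# The output shows the status of each primitive for the device:
# ATS, Clone (XCopy), Zero (Write Same) and Delete.
```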
The XIV GUI (which you can download here) is available for a very wide variety of platforms:
AIX 5.3, 6.1, 7.1
HP-UX 11i v2, 11i v3
Linux (RHEL 5, SLES 10 and 11)
Mac OS X 10.6
Solaris 9 and 10
Windows (various versions including Windows ME!)
My gut feel is that the vast majority of installations and downloads are of the Windows GUI version. I suspect the rest rarely get a look in; I mean, do people actually use the XIV GUI on AIX, Solaris or HP-UX hosts? I really doubt it.
However, you can also get just the XIV command line interface (also called the XCLI) for the following operating systems:
Now I can see why these would be popular. Being able to script XIV CLI commands and execute them locally makes perfect sense, so all the major operating systems are represented. But it is curious that a separate Windows CLI installer is not listed. Of course, you get the XIV CLI when you install the Windows GUI, but I wonder how many users would prefer not to have the GUI present on servers that only need the CLI.
An example of this would be if you use Commvault SnapProtect with XIV, since each client will need to issue XCLI commands to drive the hardware based snapshots. So the good news is that you can actually force the XIV GUI installer to install only the CLI component. You can do this by using the following command (in a command prompt with the XIV GUI installer file in that directory):
You will need to change the file name at the start of the command to suit the version you have downloaded. I tested it on version 3.01 and it worked just fine; all it installed was the XIV CLI. So keep this one in your back pocket and use it when required.
And if you ARE using the XIV GUI on AIX, HP-UX or Solaris, I would love to hear which platform you are using and why. And if you are still using Windows ME.... your persistence is admirable.
Let's imagine a new rack server or blade server has been added to your Fibre Channel SAN. The first job for the SAN administrator is to zone it to the storage it requires access to. The task normally runs something like this:
Identify the WWPNs for the new server's HBAs. We can do this using QLogic SANsurfer or Emulex HBAnyware, by looking at the WWPNs reported by the Fibre Channel switch, by using datapath query wwpn (with SDD or SDDDSM), or by using the xiv_fc_admin -P command from the XIV HAK. There are lots of different ways; you get the idea.
On fabric 1 create a new alias for the server HBA port cabled to that fabric.
For each storage device that the server needs access to on fabric 1 (or possibly just switch 1), create a new zone and include the new server alias and the alias for every relevant storage port on that device. Repeat if you have other storage devices (so two XIVs means two new zones).
Put the new zone (or zones) into the active zoneset (or a clone of it) and activate it.
Repeat on fabric 2 (after waiting a decent interval to ensure no mistakes were made in fabric 1... well, I hope you wait... you do, don't you?).
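On a Brocade fabric, the steps above might look something like this (all alias names, zone names and WWPNs are invented for illustration; Cisco MDS syntax differs):

```shell
# Sketch of the zoning steps on a Brocade FOS switch. Every name and WWPN
# here is made up -- substitute your own.
alicreate "x3850_hba1", "10:00:00:00:c9:aa:bb:01"    # alias for the new server HBA
alicreate "XIV01_fab1", "50:01:73:80:12:34:01:40; 50:01:73:80:12:34:01:50; 50:01:73:80:12:34:01:60"
zonecreate "x3850_XIV01", "x3850_hba1; XIV01_fab1"   # one zone per storage device
cfgadd "PROD_CFG", "x3850_XIV01"                     # add the zone to the active zoneset
cfgenable "PROD_CFG"                                 # activate (prompts for confirmation)
cfgsave                                              # persist the defined configuration
```

Note that XIV01_fab1 here contains all three XIV target ports in one alias, which is exactly the amalgamation trick discussed below.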
The main trap here is that when creating a zone, you need to ensure you select all of the correct storage aliases for your selected storage device. For instance we could have a simple layout like this:
Fabric 1 contains our new server (in this example an IBM x3850) and three XIV ports:
This means when creating the zone I need to identify and select four separate aliases. What I could do instead is create an alias containing all my XIV target ports. Now I only have two aliases to select in that fabric.
In a second example, fabric 1 contains our server and two Storwize V7000 ports, so when creating the zone I need to identify and select three separate aliases. What I could do instead is create an alias with both my Storwize V7000 WWPNs in it. Now I only have two aliases to select in that fabric.
This method of amalgamating multiple storage port aliases works fine for devices like DS8000, SVC, Storwize V7000 and XIV. I use this method all the time to simplify zoning and I find it reduces both mistakes and the time required to complete zoning tasks.
The only exceptions are:
Don't do it for DS3000, DS4000, DS5000 or DCS3700 as the controllers on these devices do not like to see each other through the switch.
Don't combine ports from different storage devices, so if you have two XIVs in a fabric create one alias for the target ports of each XIV (although you could combine ports from different SVC I/O groups within the same SVC cluster into one alias). You should still use individual aliases for ports being used for migration or replication purposes.
Don't use the WWNN to create an alias. Always create multi-WWPN aliases so you have granular control over which ports go into the alias. If you use the WWNN of an XIV, you will implicitly include any ports that are being used for replication or migration and thus zone them to the host, which makes no sense.
I would love to hear any techniques you have to make your (and my) life easier.