Kia ora everyone.
Here is my newest IBM publication:
Using the IBM Spectrum Accelerate Family in VMware Environments: IBM XIV, IBM FlashSystem A9000 and IBM FlashSystem A9000R, and IBM Spectrum Accelerate
Spectrum Control Base is becoming our main weapon against Nutanix, Nexenta, and other SDS vendors.
It gives system administrators a single, unified view of compute and storage.
This "recipe book" is a project led by Bertrand Dufrasne of the ITSO. My participation was sponsored by my manager and mentor Eric Wong, whom I thank for supporting this initiative.
So, want to know more? Give me a call or download the redpaper!
It is available at:
I invite you to talk to your Business Partner about Spectrum Control Base. It is available at NO COST for IBM storage customers, so there is no excuse not to try it! :P
PS: Don't forget to review it please! ;)
Cloud Storage Solutions
Kia ora everyone.
We have just released the IBM Storage Driver for OpenStack v1.2.0 and would like to share some of the more important features in this release.
The IBM Storage Driver for OpenStack is a software component that integrates with the OpenStack cloud environment and enables utilization of storage resources provided by the IBM XIV Storage System.
So what's new in our latest release?
Support for the OpenStack Grizzly release, which adds support for multiple storage backends on a single Cinder node. During the installation of the IBM Storage Driver for OpenStack v1.2.0 on the Cinder node, the installation wizard enables the configuration of multiple XIV storage backends, whether two or more separate XIV systems or multiple storage pools on the same XIV system.
Our Storage Driver now supports Fibre Channel (FC) storage connectivity to the compute node, in addition to the already supported iSCSI protocol.
To ease the installation and setup process, we have enhanced the unattended installation of the IBM Storage Driver for OpenStack with the option to use a pre-configured ini file that includes the parameters of all the storage backends to be configured during the installation process.
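For illustration, such an ini file might look like the following sketch. The actual section and parameter names are defined in the driver's Installation Guide, so treat these as hypothetical placeholders; the values simply mirror the options of the interactive install (address, username, password, pool):

```ini
; Hypothetical example of a pre-configured ini file for unattended installation.
; Each section describes one XIV storage backend to configure.
[backend_xiv_1]
address = xiv1.example.com
username = admin
password = ********
pool = cinder_pool_1

; A second backend may point to another XIV system, or to a
; different storage pool on the same system.
[backend_xiv_2]
address = xiv2.example.com
username = admin
password = ********
pool = cinder_pool_2
```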
In addition, we have introduced several fixes and code stability improvements.
The software and the related documentation can be downloaded from IBM Fix Central.
The IBM HSG Team
In this post we would like to introduce the latest version of the IBM Storage Driver for OpenStack, which we released on January 31st, 2013.
If by any chance you are unfamiliar with OpenStack, check out the official OpenStack project website at: www.openstack.org
To put it in a few words, OpenStack is an open-source software platform for building private and public cloud environments. The IBM driver is a software component that enables storage provisioning of IBM XIV Storage Systems in OpenStack cloud environments.
The latest IBM driver version was preceded by an earlier one (1.1.0), which enabled basic storage provisioning operations, such as volume creation and deletion from the OpenStack Cinder node, and volume attachment to OpenStack VMs.
The new release (1.1.1) further enhances the integration between OpenStack and IBM XIV Storage System by adding support for XIV volume snapshot functions.
For example, let's look at the following OpenStack Web UI management page:
In this example, if you want to create a snapshot of ‘vol1’ on IBM XIV Storage System, you can use the ‘Create Snapshot’ action using either OpenStack CLI or Web UI.
Here is your created snapshot, as can be viewed from the XIV GUI:
Similarly, you can use OpenStack's ‘Create volume from snapshot’ action to create a volume based on an existing snapshot.
An additional ease-of-use enhancement in version 1.1.1 is the new unattended installation option, which requires no user interaction during the installation, making it perfect for automating the installation process. You can use the following command format to install the IBM driver in unattended mode:
Cinder-node# ./install.sh -s -a <xiv_address> -u <username> -p <password> -o <pool_name>
The IBM Storage Driver for OpenStack can be downloaded here along with the Installation Guide and Release Notes.
Whenever you choose to integrate your IBM XIV Storage System with an OpenStack cloud environment, our driver is available to help you achieve this goal. We are happy to provide this newer version that further facilitates and enhances the utilization of your XIV storage resources and capabilities in your cloud environment.
As always, you are welcome to share your thoughts with us.
- The IBM Storage Host Software Development Group
Once again, we are proud to announce the release of the latest IBM XIV Host Attachment Kit 1.10.0.
The main feature of this release is Windows Server 2012 support. From this version onward, the HAK is fully compatible with your new & shiny installation of Windows Server 2012. Go ahead and take it for a spin!
As part of our ongoing efforts to improve ease of use, we have translated the Windows Installer of HAK to Japanese as well, for the benefit of our Japanese speaking users. More languages to follow.
In addition to the above, we have also resolved several bugs and known issues from previous versions, including among others:
Of course, that's not all. You can read more about these improvements, as well as the rest of the enhancements and fixes, in the Release Notes.
We hope you will enjoy this new release of the Host Attachment Kit, at least as much as we enjoyed developing it!
Follow this link to download the installable or portable packages and access the release documentation.
If you have comments, questions or requests, please write to us in the comments section below. We'd love to hear from you.
The IBM Host Software Team
We are happy to announce our newest edition of the IBM XIV Host Attachment Kit, version 1.8.0. This release includes a new utility and support for new operating system versions, as well as several bug fixes and code infrastructure improvements.
And now, let's dive into the details:
Let's start with our cool new utility, called "xiv_syslist". This new tool in the HAK provides great assistance to host and storage administrators by exposing useful storage array details from the server/host point of view. What kind of details? Without turning this post into a User Guide, the list of storage array details includes (for each array connected to the host): array name, array serial, management IPs, connected modules and ports, connectivity types, host name defined, and host ports definition state. The output can be formatted as XML or CSV, as well as regular text. The "xiv_syslist" utility is available on all platforms supported by the HAK.
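To give a feel for the machine-readable output modes, here is a small illustrative sketch (this is not the xiv_syslist tool itself, and the sample array values are made up): it renders the kind of per-array record xiv_syslist exposes as both CSV and XML, using only the Python standard library.

```python
import csv
import io
import xml.etree.ElementTree as ET

# One record per storage array connected to the host.
# Field names follow the list in the post; the values are hypothetical samples.
arrays = [
    {
        "array_name": "XIV_Lab_01",
        "array_serial": "1310114",
        "management_ips": "9.155.50.10,9.155.50.11",
        "connectivity_types": "iSCSI",
        "host_defined": "yes",
    },
]

def to_csv(records):
    """Render the records as CSV text, similar to a csv output mode."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def to_xml(records):
    """Render the records as XML text, similar to an xml output mode."""
    root = ET.Element("arrays")
    for rec in records:
        node = ET.SubElement(root, "array")
        for key, value in rec.items():
            ET.SubElement(node, key).text = value
    return ET.tostring(root, encoding="unicode")

print(to_csv(arrays))
print(to_xml(arrays))
```

The point of having both structured formats is scripting: CSV drops straight into spreadsheets, while XML is convenient for automated inventory tools.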
In addition, we have broadened our list of supported OS versions, which now also includes SLES 11 SP2, RHEL 5.8, Solaris 10 u10, AIX 7.1 TL1, and AIX 6.1 TL7.
We would love to hear from you, please leave your questions and comments below.
The IBM XIV Host Software team
Dmitriy Isayev
Let’s face it: even the most resilient, robust, and secure storage sites, located above ground or even underground, could potentially be exposed, to a certain degree, to all kinds of disasters that may temporarily or indefinitely halt site operations. This could be an extreme weather condition or natural disaster (hurricane, tornado, earthquake, etc.), a prolonged and unplanned interruption of power supply (power grid or power station failure), an accident, act of war, cyberattack, and who knows what else – God forbid…
Enterprises that rely on non-stop, continuous, and unbreakable access to data, as well as on the ability to keep existing data integrity in parallel to on-going update/writing operations at the highest possible standards, must have disaster prevention and recovery mechanisms in place, ready to be used at any point in time.
Enterprise-class storage systems, such as IBM XIV Gen3, IBM FlashSystem A9000, and IBM FlashSystem A9000R, provide advanced site mirroring capabilities, either synchronous or asynchronous, including 3-site mirroring for XIV Gen3, and HyperSwap for FlashSystem A9000 and A9000R. These native underlying replication and high availability (HA) technologies allow these IBM storage systems to serve not only conventional hosts and clusters, but also virtual machines in VMware and Microsoft cloud environments, as well as OpenStack cloud nodes.
Virtual machine duplicates can be deployed at a secondary backup site, together with the backup storage systems that serve those sites.
For detailed information about employing HyperSwap for VMware and Microsoft environments, refer to the IBM Redpaper: ‘IBM HyperSwap for IBM FlashSystem A9000 and A9000R’. Additional data protection scenarios are described in this IBM Redpaper: ‘IBM FlashSystem A9000 and A9000R Business Continuity Solutions’.
This somewhat belated blog post is meant to clarify the past, present, and future development of the IBM Storage Driver for OpenStack. For over 4 years, ever since the OpenStack Folsom release (when Cinder was still part of Nova), the IBM Storage Driver for OpenStack has been released by IBM to support the integration of IBM XIV and IBM DS8000 storage systems with OpenStack cloud environments. In parallel, the IBM SAN Volume Controller, IBM Storwize Family, and IBM FlashSystem V9000 storage systems were already included in the OpenStack community source code. Throughout this period, the IBM XIV and IBM DS8000 storage driver remained proprietary. However, as community standards called for opening the driver source code, a decision was made to incorporate the driver into the community code.
We decided to take the dive into the open-source code in version 2.0.0 of IBM Storage Driver for OpenStack, aiming to include IBM FlashSystem A9000 and A9000R, IBM XIV, IBM Spectrum Accelerate (deployable software), and IBM DS8000 Family systems in the community source code for the OpenStack Ocata release.
The journey of opening the code has been a unique one, as most drivers start as open source, rather than evolving from a closed code base. But the challenge was met, and as we are now nearing the next release, version 2.1.0, we can make full use of the countless benefits of open source software. Among them are:
Stay tuned for our next blog post when the community-based version 2.1.0 is released.
If you are looking for the driver’s up-to-date compatibility matrix, you can find it here.
The IBM Flash Centers of Competency have posted a very good blog post about our latest SCB version.
You can read the blog here.
The last several releases of the OpenStack Cinder project addressed various disaster recovery (DR) scenarios. In Mitaka, replication v2.1 (codename ‘Cheesecake’) was introduced. This admin API was designed to solve a true DR scenario, when an entire backend is failed over to another backend on a secondary site. There are other DR scenarios in which the user wants to have better control over the replication granularity. For this purpose, the Cinder team has decided to provide tenant-facing group APIs, allowing volume grouping for a tenant. This enables volume group replication and failover without affecting the entire backend.
The IBM Storage Driver for OpenStack supports the following IBM storage systems:
The user can choose between generic group replication and consistency group (CG) replication. For consistency group replication, the driver utilizes the storage system's capabilities to handle CGs and replicate them to a remote site. In generic group replication, on the other hand, the driver replicates each volume individually. In addition, the user can select the replication type; for example, IBM FlashSystem A9000 and A9000R storage systems support both synchronous and asynchronous replication. The minimum Cinder client version that supports group replication is 3.38.
The following functions were added to the IBM Storage Driver for OpenStack to support group replication:
The following example illustrates configuration of a replicated consistency group.
# Create a volume type with synchronous replication enabled
cinder type-create rep-vol-1
cinder type-key rep-vol-1 set replication_type='<is> sync' replication_enabled='<is> True'
# Create a group type with group replication and consistent group snapshots enabled
cinder group-type-create rep-cg-1
cinder group-type-key rep-cg-1 set group_replication_enabled='<is> True' replication_type='<is> sync' consistent_group_snapshot_enabled='<is> True'
# Create a consistency group from the group type and the volume type
cinder group-create rep-cg-1 rep-vol-1 --name replicated-cg-1
# Create a replicated volume and add it to the group
cinder create --name vol-1 --volume-type rep-vol-1 1
cinder group-update --add-volumes 91492ed9-c3cf-4732-a525-60e146510b90 replicated-cg-1
# Enable, disable, or fail over replication for the group
cinder group-enable-replication replicated-cg-1
cinder group-disable-replication replicated-cg-1
cinder group-failover-replication replicated-cg-1
Host connection is a vital part of any storage system deployment procedure. Basically, a host is the physical computer or virtual machine that uses storage resources of the storage system, rather than having its own local storage. Accordingly, the hosts are defined on the storage system itself, allowing for this interaction.
Only hosts that are mapped to storage volumes can access those volumes on the storage system. Also, all host I/O operations must be equitably distributed among the storage interface modules. This workload balance is ensured by the storage administrator, who also monitors and reassesses it over time as host traffic patterns change.
However, the host operating systems and network protocols come in different types and vendors. This variety presents a challenge to host administrators who must configure the host properly before it can perform on-going and uninterrupted I/O operations on the storage system. This is where the IBM Storage Host Attachment Kit (HAK) comes in handy.
The HAK for the IBM Spectrum Accelerate family – namely XIV, FlashSystem A9000 and A9000R, as well as the software-defined storage (SDS) Spectrum Accelerate deployable software – provides the host administrator with all the required tools for automatic and simpler host diagnostics, configuration, and attachment to the storage system. In addition, the HAK facilitates the monitoring and management of storage volumes from the host.
The HAK can be run on Linux (RHEL or SLES), Microsoft Windows Server, or IBM AIX hosts, and uses iSCSI or Fibre Channel SANs for connecting to the storage systems. Moreover, you can forgo the host installation altogether and use the HAK as a portable package (disk-on-key version). This allows you to run its utilities from a shared network drive or a portable USB flash drive.
Once the host is attached to the storage system, defined on the storage system itself, and has storage volumes mapped to it, you can:
In all, the IBM Storage Host Attachment Kit is a nifty small-footprint program, indispensable for fast and smart storage deployment. Provided free of charge, it’s available for download from IBM Fix Central.
For information about the latest version, check IBM Knowledge Center.
It would be quite accurate to state that virtualization technology today is gaining momentum, allowing companies to lower their total cost of ownership (TCO), reduce their datacenter footprint, ensure faster server provisioning, and more. However, implementing server virtualization without accounting for the storage element could cause problems, such as uneven resource sharing, as well as degradation in performance and data integrity.
Various IBM storage systems provide virtual enterprise storage for virtual servers. Specifically, the IBM storage systems work in concert with the following VMware virtualization platforms, standards, and environments:
To leverage, manage, and centralize this entire connectivity from a single interface, the IBM Spectrum Control Base Edition (SCBE) software package enables simplified deployment and more efficient integration of different IBM storage systems (also referred to as storage arrays) with the VMware vCloud suite.
IBM Storage with VMware vSphere Web Client (vWC)
Without SCBE, when VMware administrators run out of disk space and request more space from the storage administrators, provisioning the additional storage may take quite a long time to complete. This occurs mostly because of miscellaneous enterprise “red tape” procedures and processes. Moreover, before provisioning additional storage, storage administrators must fully understand the specific needs of the VMware administrators, then find available space on the attached storage systems, create and map storage volumes, and so on.
The SCBE solution uses an abstraction layer through a storage service and pre-defined sets of storage resources (pools) and capabilities (compression, encryption, etc.). The services are prepared by storage administrators and provided to VMware administrators for fast and easy self-provisioning whenever required.
IBM Storage with VMware VASA and virtual volumes (VVols)
In traditional data storage implementations, storage administrators manage multiple VMFS-based datastores of up to 7 TB in size and without explicit indication of storage capabilities. In addition, all VM snapshot and replication procedures are performed on the ESXi server side, without leveraging the much faster hardware-based storage system capabilities.
The virtual volume (VVol) approach brings management down to VM-level granularity by introducing a single large VVol-based datastore with dynamic definition of storage policies (services), so storage resources are provisioned exactly according to application requirements.
IBM Storage with VMware vRealize Orchestrator (vRO)
With the IBM Storage Plug-in for VMware vRealize Orchestrator, VMware administrators can include storage discovery and provisioning into their workflows. The plug-in allows issuing automated workflows with storage volumes that are attached to the vRealize Orchestrator server, eliminating the need for manual storage provisioning by storage administrators.
IBM Storage with VMware vRealize Operations Manager (vROps)
Monitoring IBM storage systems without VMware vROps is limited to information about their physical objects. When using the IBM Storage Management Pack for vROps, the storage systems are integrated into the global monitoring network at the network operations center (NOC). The systems appear on vROps dashboards, presenting in detail all relations between storage system objects (disks, ports, volumes, etc.) and VMware objects (datastores and VMs).
The long and arduous road to HyperSwap is over! We are happy to announce the release of a new software version of our blazingly fast FlashSystem A9000 and A9000R. This is version 12.1, which brings HyperSwap to our storage clients. The HyperSwap solution is an ingenious way of providing high availability based on active-active pairing of storage systems per volume or per consistency group. Each volume or consistency group pair uses synchronous replication to keep both systems updated at all times. When certain conditions apply, an automatic and completely transparent failover is performed, so that the applications experience no downtime. As soon as the actual failure is recovered, the pair is automatically resynchronized.
For easier integration with VMware, Linux, AIX and Windows platforms, we've upgraded these Cloud Storage Solution products:
Upgrade now to unleash the full power of HyperSwap disaster recovery!
Yossi Siles, a Senior Offering Manager at IBM, published this article on the new software version for our all-flash IBM FlashSystem A9000 and A9000R storage systems with IBM HyperSwap support.
You may ask: version 3.2.0 is already available so soon? Our answer: yes. Although earlier this month we released IBM Spectrum Control Base Edition version 3.1.1, version 3.2.0 is a genuine breakthrough, bringing support for the Microsoft PowerShell automation tool, in addition to our already well-established VMware cloud integration offerings. The new IBM Storage Automation Plug-in for PowerShell provides 'cmdlets' for automated storage management, including provisioning, host mapping, volume expansion, and other storage-related tasks.
For the first time, you can download the plug-in from the Microsoft PowerShell Gallery website at: https://www.powershellgallery.com/packages/SpectrumControlBase-Client. For further details about this new version, including fixes and an experimental Flocker feature (might not be included in future versions), check out the IBM Spectrum Control Base Edition 3.2.0 release notes on IBM Knowledge Center at: https://ibm.biz/Bds7gK.
Alon Marx
We are happy to announce that we have released several drivers to support the new version of the OpenStack Cinder project: Ocata. This, however, is not an ordinary release, and here's why: previously, the drivers for IBM SAN Volume Controller, IBM Storwize Family, and IBM FlashSystem V9000 storage systems were the only ones included in the OpenStack community source code. Now, this latest release is a huge step forward, aiming to include IBM FlashSystem A9000 and A9000R, IBM XIV, IBM Spectrum Accelerate (deployable software), and IBM DS8000 Family systems in the community source code as well, moving from proprietary code to fully open source. So henceforth, the IBM Storage Driver for OpenStack is part of the OpenStack Cinder repository. Accordingly, the driver documentation has also migrated from IBM Knowledge Center to the OpenStack community website: https://docs.openstack.org/ocata/config-reference/block-storage/drivers/ibm-storage-volume-driver.html
This release was developed in accordance with the OpenStack Ocata specifications, emphasizing our continuous commitment to providing IBM storage customers with cutting-edge OpenStack capabilities. In addition to the aforementioned integration in the community source code, this driver version includes enhancements in the replication functionality (also known as "cheesecake") of the supported IBM systems, as well as improvements in multipath support and auto-selection of I/O groups for Storwize storage systems.
As always, we are here to help with deployment and operation of our OpenStack drivers. Your feedback is important to us, and we'll be happy if you let us know what you think about this milestone release.