From the good old days of DOS, everyone knew the benefits of compression. Back then, disk capacity was scarce.
In those days, PCs had 40MB hard drives, and programs like FoxPro 2.6 and Windows 3.1 could not both fit on a single disk. One had to remove the Windows 3.1 installation to make space for FoxPro. Then came a newer version of DOS with a program called "Stacker", which could compress the data on disk so that more space was available for applications.
The days of 40MB hard drives are long gone, and disk capacity has grown steadily ever since.
In the current era of technology, data is growing tremendously, and organizations in particular are struggling with both structured and unstructured data.
IBM offers a wide variety of storage systems, ranging from small and medium business to large enterprise with scalable capacity, and to give its clients more value from storage, it introduced compression-enabled storage. IBM first introduced Random Access Compression Engine (RACE) technology in the IBM Real-time Compression (RtC) appliances, then integrated the same technology into the IBM Storwize V7000 family in 2013.
RtC is seamlessly integrated into the Storwize V7000 system software stack to compress data before writing it to disk, resulting in up to 80% storage capacity savings depending on the type of data. That is effectively five times more usable capacity from the same physical capacity in your system. RtC compresses data even before it is written to your disks, is completely transparent to applications, and maintains data consistency throughout. It is implemented without any changes to applications, hosts, fabric, or network.
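The capacity arithmetic behind that claim is worth spelling out: a savings ratio of s leaves only (1 - s) of each byte on disk, so the effective capacity multiplier is 1/(1 - s). A minimal sketch (the function name is mine, not an IBM tool):

```python
def effective_capacity_multiplier(savings_ratio):
    """Convert a fractional capacity saving (0.8 for 80%) into how much
    logical data fits in the same physical space."""
    if not 0 <= savings_ratio < 1:
        raise ValueError("savings ratio must be in [0, 1)")
    return 1 / (1 - savings_ratio)

# 80% savings -> roughly 5x effective capacity, matching the claim above
print(effective_capacity_multiplier(0.8))
```

Note how quickly the multiplier grows: 50% savings only doubles capacity, while 80% quintuples it.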
Since its inception, many users have implemented RtC on their Storwize V7000. Although RtC provided great disk space savings by compressing the data, its implementation in the first generation of Storwize V7000 came with a performance penalty: when enabled, RtC consumed significant processing power, causing performance bottlenecks that dwarfed the benefit RtC offered.
IBM addressed this issue in the next generation of the Storwize V7000 by adding hardware compression acceleration with Intel® QuickAssist Acceleration Technology, which provides dedicated processing power and greater throughput for compression.
With the new hardware compression acceleration and better hardware resources, the Storwize V7000 Gen 2 easily overcame the performance penalties seen with Gen 1 systems; the performance of Gen 2 compressed volumes exceeds that of uncompressed volumes on Gen 1 systems.
To showcase the benefits of Gen 2, benchmarking was performed with VMware's VMmark tool and Oracle databases running OLTP workloads.
The following benefits were observed on Storwize V7000 Gen 2 over Gen 1 for the Oracle benchmarks:
- 70+% compression ratio for Oracle database files
- Three times faster response time
- Five times lower virtual disk (VDisk) read latency
- Four times faster managed disk (MDisk) response time
- Three times fewer managed disk (MDisk) write operations (compression reduces back-end I/O load, making the system more efficient and thus delivering better performance)
- With a higher number of processors, the second-generation Storwize V7000 seamlessly supports I/O activity with compression enabled
The following benefits were observed on Storwize V7000 Gen 2 over Gen 1 for the VMmark benchmarks:
- Average 50% compression observed for Red Hat and Windows virtual machines
- The e-Commerce workload showed a 30% improvement in benchmark scores
- The e-Commerce workload showed 35% lower latency
- Mail server and web application workload scores were similar across both generations; however, lower processor utilization was observed on Gen 2 even when running the benchmarks on compressed volumes
For more details, refer to the following ISV technical papers:
Using IBM Storwize V7000 Real time compression feature with Oracle
Benefits of IBM Storwize V7000 Real-time Compression feature with VMware vSphere 5.5
Disclaimer: The thoughts expressed above are the collective thoughts of Shashank Shingornikar and Mandar Vaidya. They do not necessarily represent those of their employer.
Mandar Vaidya
Sandeep Zende
IBM Storage at your service courtesy of IBM Spectrum Control and VMware vRealize Automation
In today's emerging, or I would say stabilizing, world of IT cloud, everything needs to be delivered "as a service," so there is growing demand for any IT solution to be available that way. Some organizations are thinking creatively to come up with new IT solutions as a service, while others are developing cloud platforms that help organizations quickly deploy their own cloud solutions. And this is where the race has begun.
Also refer to the recorded demos below:
Demo: IBM Spectrum Control & VMware vRealize Automation - Configuration
Demo: IBM Spectrum Control & VMware vRealize Automation - Datastore Creation
For more information: https://www.ibm.com/systems/storage
Disclaimer: The above are my personal thoughts and not necessarily those of my employer.
Mandar Vaidya
The official release of VMware vSphere Virtual Volumes (VVol) in Q1 2015 generated tremendous interest among customers. VVol extends VMware's software-defined story to its storage partners and completely changes the paradigm in which storage is consumed by the hypervisor. With a VVol implementation, storage-intensive tasks are off-loaded by the server hypervisor to application-aware, policy-driven storage. It also simplifies storage management, puts virtual machines in charge of their own storage, and gives more fine-grained control over virtual machine storage. With Virtual Volumes, an individual virtual machine, not the datastore, becomes the unit of storage management, while the storage hardware gains complete control over virtual disk content, layout, and management.
The figure below shows a pictorial representation of a Virtual Volumes implementation with XIV using IBM Spectrum Control Base Edition.
IBM Spectrum Control Base Edition implements the VMware Virtual Volumes APIs, providing a separate management bridge between vSphere and XIV storage. This management bridge separates the data path from the management path. IBM Spectrum Control Base Edition enables communication between vSphere stack (ESXi hosts, vCenter server and the vSphere Web Client) and IBM XIV storage. IBM Spectrum Control Base Edition maps virtual disk objects related to virtual machines and their derivatives such as snapshots and clones, directly to the XIV storage system.
Here are some videos you might also like to view to hear directly from VMware and IBM on our strategic partnership and joint VVol development efforts.
Healthcare Industry Transformation: Realize value driven patient outcomes with IBM Storage Solutions for Medical Imaging
Prashant Avashia
Patients increasingly expect their physicians to provide higher-quality healthcare with intelligent, immediate insights from their radiological images, clinician notes, and lab results. They demand simple diagnostic guidance, customized treatment options, and secure, immediate digital access to their personal medical information on their mobile devices.
Shashank Shingornikar
How far have you gone in tuning your database, be it Oracle or DB2? The efforts are never good enough, and before you can breathe easy the battle begins again... and again.
Now you can relax a bit. With the IBM Easy Tier Server functionality available with Easy Tier, you will be able to get more work done in terms of improved transactions per second (TPS).
So what exactly is Easy Tier Server?
IBM Easy Tier Server is a unified storage caching and tiering solution spanning AIX servers and supported direct-attached storage (DAS) flash drives. Easy Tier Server allows the most frequently accessed, or "hottest," data to be placed (cached) closer to the hosts, overcoming SAN latency. At its core, Easy Tier Server relies on the DS8870 cooperating with heterogeneous hosts to make a global decision on which data to copy to the hosts' local SSDs for improved application response time. DAS SSD devices therefore play an important role in an Easy Tier Server implementation: specializing in high I/O performance, SSD cache has the upper hand in cost per input/output operations per second (IOPS).
The Easy Tier technology has evolved over the years and is now in its fifth generation. Easy Tier Server is one of several Easy Tier enhancements introduced with DS8000 Licensed Machine Code 7.7.10.xx.xx. Both the Easy Tier and Easy Tier Server licenses, although required, are available at no cost.
Which workloads are the best fit for Easy Tier Server?
Because Easy Tier Server implements a read-only local DAS cache on the hosts, some particular scenarios can take the best advantage of this feature. These are:
Under the hood
The Easy Tier Server feature consists of two major components:
The Easy Tier Server coherency server runs in the DS8870 and manages how data is placed onto the internal flash caches of the attached hosts. It also integrates with Easy Tier data placement functions for the best optimization across the DS8870 internal tiers (SSD, Enterprise, and Nearline). The coherency server asynchronously communicates with the host systems (the coherency clients) and generates caching advice for each coherency client based on Easy Tier placement and statistics.
The Easy Tier Server coherency client runs on the host system and keeps local caches on DAS solid-state drives. The coherency client uses the Easy Tier Server protocol to establish system-aware caching that interfaces with the coherency server. An Easy Tier Server coherency client driver cooperates with the operating system to direct I/Os either to the local DAS cache or to the DS8870, transparently to applications.
The POWER system has DAS attached, which the Easy Tier Server coherency client driver uses to create a local cache. Coherency clients are designed to route read hits to the application host's DAS while sending read misses directly to the DS8870. Likewise, write I/Os are routed to the DS8870, and cache pages related to the written address spaces are invalidated in the client's local cache to maintain cache coherency and data integrity. The coherency client and coherency server share statistics to ensure that the best caching decisions are made.
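The routing rules just described (read hits served locally, read misses and all writes sent to the DS8870, with write-through invalidation of cached pages) can be sketched as a toy model. This is purely illustrative Python with names of my own choosing; the real coherency client is a block-layer driver, not application code:

```python
class CoherencyClientModel:
    """Toy model of the coherency client's I/O routing: read hits are
    served from the local DAS cache, read misses go to the back-end
    storage, and writes go to the back end while invalidating any
    cached copy to preserve coherency. (Illustrative only.)"""

    def __init__(self, backend):
        self.backend = backend      # dict standing in for DS8870 storage
        self.das_cache = {}         # local SSD cache: address -> data

    def read(self, addr):
        if addr in self.das_cache:          # read hit: low-latency local DAS
            return self.das_cache[addr], "cache"
        data = self.backend[addr]           # read miss: fetch from back end
        self.das_cache[addr] = data         # populate cache for next time
        return data, "backend"

    def write(self, addr, data):
        self.backend[addr] = data           # writes always go to the back end
        self.das_cache.pop(addr, None)      # invalidate the stale cached page

client = CoherencyClientModel(backend={0: b"hot-extent"})
print(client.read(0)[1])    # "backend" (first read is a miss)
print(client.read(0)[1])    # "cache"   (subsequent reads hit local DAS)
```

The invalidation on write is what keeps the read-only cache coherent: a later read of that address misses locally and fetches the fresh data.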
And the bottom line is ?
In the lab, a brokerage OLTP workload was executed, simulating the maximum amount of read requests. At the beginning of the run, the hdisks configured for the ASM DATA disk group showed maximum utilization, as no caching was enabled. Sixty minutes into the run, caching was enabled on the database host running the workload. Soon after, Easy Tier Server started migrating hot extents from the DS8870 to the database host running the coherency client. Over time, as more hot extents were migrated, maximum activity shifted to the cache devices, with less activity on the DS8870 storage. As more extents containing the required data were cached, read requests were satisfied locally, eliminating the need to read the data from storage. The effective use of locally cached data showed a 100% improvement in the TPS observed during the test run.
Whether it is a latency-sensitive environment, an application with a high read/write ratio, or a highly parallel processing system, there is an increasing need to process data quickly, and Easy Tier Server can be considered for these situations. In cases where storage read performance is a major bottleneck for the environment, there is high value in faster storage, and therefore a good fit for Easy Tier Server.
Publications and Resources
The white paper WP102534, available on the IBM Techdocs website, provides detailed information on the testing effort.
IBM System Storage DS8870: Architecture and Implementation, SG24-8085
Views and thoughts expressed above are my own, not necessarily those of my employer.
Building an enterprise class on-premise file sync and share solution using IBM Spectrum Scale for object storage and ownCloud
Udayasuryan Kodoly
Real-time collaboration and information sharing are key drivers of an enterprise’s productivity and innovation. Finding solutions to enable such dynamic sharing in an enterprise setting while maintaining control, however, can be a challenge. Some organizations look to consumer-grade, cloud-based file sharing options that offer the scalability, ease of use and access users want but store sensitive company data on external servers. This exposes organizations to risks of data leaks while limiting IT visibility. Other options include using existing enterprise collaboration and content management systems that might be challenging to maintain and cumbersome for users.
What exactly is the solution?
The combined IBM® Spectrum Scale for object storage and ownCloud software technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution. ownCloud provides universal file access through a common file access layer on top of IBM Spectrum Scale for object storage, and the data files are kept in on-premises Spectrum Scale object storage. ownCloud allows enterprise IT organizations to regain control of sensitive data with managed file sync and share, giving users universal access to all of their data:
Why do enterprises want an on-premises file sync and share solution?
Storing data off-premise may strip an organization’s ability to manage and control its data, or to ensure that data can be deleted. Few enterprises, however, are willing to forgo the benefits that cloud services provide in the advancement of agility and improved business processes. That leaves them struggling with how to use these technologies without importing security risks. They also recognize that users are increasingly able to migrate to external services that provide them greater flexibility and mobility than that offered by the enterprise.
By retaining on-premises manageability of file sync and share services, IT can use a private cloud solution to reconcile the need for cloud technology with the requirements for security and privacy, and regain control of sensitive data without unwanted exposure. With the ability to enhance control and govern access to files, IT administrators can set sophisticated rules for user and device connections and prevent access based upon those rules. Further, the capabilities and extensibility of on-premises file sync and share match the ease of use and universal access that first drove consumption of cloud services, yet IT controls sensitive assets in its own cloud environment.
Solution Lab testing
This solution consists of multiple servers installed with ownCloud server software. The ownCloud is a PHP web application running on top of Apache on Linux. This PHP application manages every aspect of ownCloud, from user-management to plug-ins, file sharing and storage. Attached to the PHP application is a database where ownCloud stores user information, user-shared file details, plug-in application states, and the ownCloud file cache (a performance accelerator). ownCloud accesses the database through an abstraction layer, enabling support for Oracle, MySQL, SQL Server, and PostgreSQL. Complete webserver logging is provided through webserver logs, and user and system logs are provided in a separate ownCloud log, or can be directed to a syslog file.
In the lab testing environment, Active Directory (AD) was integrated with ownCloud for user account provisioning, while IBM Spectrum Scale for object storage was configured with local authentication. It is also possible to configure IBM Spectrum Scale for object storage with an enterprise directory server such as AD or Lightweight Directory Access Protocol (LDAP).
OpenStack Swift is installed on the protocol node(s) of the IBM Spectrum Scale for object storage.
IBM Spectrum Scale is a proven, enterprise-class file system, and OpenStack Swift is a best-of-breed object-based storage system. IBM Spectrum Scale for object storage combines these technologies to provide a new type of cloud storage that includes efficient data protection and recovery, proven scalability, and performance; snapshot and backup and recovery support; and information lifecycle management. Through these features, IBM Spectrum Scale for object storage can help simplify data management and allow enterprises to realize the full value of their data.
ownCloud is a self-hosted file sync and share server. It provides access to on-premises data through a web interface and sync clients, offering a platform to view, sync, and share across devices easily while giving enterprises the ability to manage and control their data. ownCloud's open architecture is extensible through simple but powerful APIs for applications and plug-ins, and it works seamlessly with IBM Spectrum Scale for object storage.
Together, the IBM Spectrum Scale for object storage and ownCloud server technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution.
To learn more about the solution, please see the solution technical paper: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_on-premise-file-syn-share-owncloud
Shashank Shingornikar
After successfully implementing the Real-time Compression feature in Storwize V7000, IBM has taken this patented technology a step further by bringing it to the IBM XIV storage system. With the recently announced XIV 11.6.0 release, Real-time Compression is seamlessly integrated into the XIV storage system. Eliminating the need for any extra hardware, the IBM Random Access Compression Engine (RACE) technology is now integrated with the XIV software stack to compress data before writing it to disk (above the cache mechanism), resulting in up to 80% storage capacity savings.
It is designed with transparency in mind, so it can be implemented without changes to applications, hosts, networks, fabrics, or external storage systems. The solution is not visible to hosts, so users and applications continue to work as-is. To estimate the compression savings on existing uncompressed XIV volumes, the Comprestimator utility is now integrated into the XIV software.
On the XIV system, the compression ratio of all uncompressed volumes is continuously estimated, even before compression is enabled. The figure shows the various stages of volumes on the system, ranging from uncompressed through potential savings to the total amount of compression on a volume.
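The sampling idea behind such an estimator can be illustrated in a few lines: compress a subset of blocks rather than the whole volume and extrapolate the savings. This is a hedged sketch, not the Comprestimator algorithm; XIV estimates using RACE internally, while plain zlib stands in as the compressor here:

```python
import zlib

def estimate_compression_savings(volume, sample_size=4096, step=10):
    """Estimate how compressible a volume is by compressing every
    step-th sample block instead of all the data (the sampling idea
    only; the real estimator uses RACE, zlib is just a stand-in)."""
    if not volume:
        return 0.0
    sampled = compressed = 0
    for offset in range(0, len(volume), sample_size * step):
        block = volume[offset:offset + sample_size]
        sampled += len(block)
        compressed += len(zlib.compress(block))
    return 1 - compressed / sampled   # fraction of capacity saved

# Repetitive, database-like data compresses well, and the estimate shows it
text_like = b"customer record 0042; status=OK; " * 4096
print(f"estimated savings: {estimate_compression_savings(text_like):.0%}")
```

Sampling keeps the estimate cheap enough to run continuously, which is what lets the system report potential savings before compression is ever enabled.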
What are the compression benefits for XIV?
With the inline implementation of Real-time Compression, IBM XIV now delivers dramatic cost savings without the need for extra hardware, providing the following benefits:
So how does it work?
The Real-time Compression implementation in XIV uses an above-cache architecture, in which data is compressed or decompressed between the I/O interface and the cache. A compression node runs on every XIV module, taking advantage of XIV's parallel architecture: each node compresses only the portion of a volume that belongs to its module, distributing the compression workload across all modules. As a result, Real-time Compression has minimal impact on the performance delivered by XIV.
On write operations, data is compressed before it enters the cache, and an acknowledgment is sent back to the host. On read operations, data is held compressed in the cache and decompressed by RACE as it is read from the cache, before being passed to the host. During XIV mirroring, data is compressed only once and sent across the network in compressed form, reducing network bandwidth.
What will benefit most from compression?
Are there any guidelines for Compression?
Anything I can refer to?
Real-time Compression not only works best with randomly accessed data, such as databases like IBM DB2, Oracle, and MS SQL Server, but also provides good results with server virtualization solutions like VMware, KVM, and Hyper-V. With Oracle databases, compressed volumes take advantage of the above-cache architecture, compressing writes seamlessly; 57% compression was observed while creating a terabyte of data, with minimal performance penalty. (Publication: WP102551)
VMware vSphere virtual machines can be seamlessly deployed on compressed volumes, often with compression savings of 50% to 75%, allowing customers to reduce the storage capacity required for virtualized environments. (Publication: WP102552)
Microsoft Hyper-V virtualization helps customers maximize use of System x servers and other resources. Included in Windows Server, Hyper-V helps reduce costs by allowing a greater number of application workloads to be hosted on fewer physical servers. With Microsoft SQL Server 2012 SP1 OLTP data files and Windows Server 2012 R2 VM system files stored on a Hyper-V virtual disk on an XIV compressed volume, 73% compression savings were achieved. (Publication: WP102553)
What about the performance?
While one team tested the compression benefits and compiled the papers, another team, from the IBM Tel Aviv lab, was busy with performance testing of an Oracle database hosted on IBM XIV compressed volumes.
In the test setup, the team configured both compressed and uncompressed volumes on XIV for better parallelism. These volumes were mapped to the ESX system hosting the database server to create multiple VMFS file systems, and a 5TB database was created on the VMFS volumes using the Benchmark Factory tool. During the 12-hour test run, the load was ramped from 1,000 users up to a maximum of 30,000 to put the system under a realistic production load. The I/O operations per second (IOPS) and response time information reported by Benchmark Factory is shown in the figure below; each point on the graph indicates the addition of 2,500 users. The graph clearly shows that the application sees minimal impact on response time when using the compressed volumes.
Blog Authors: Mandar Vaidya, Shashank Shingornikar
Eric Johnson
For information technology (IT) customers looking to control site expansion costs, Microsoft offers its Azure cloud services. To appeal to larger customers with existing disaster recovery (DR) models that span multiple sites and use SAN solutions, Microsoft recently added Azure Site Recovery (ASR) to its cloud services mix, allowing it to target the full spectrum of potential cloud customers. For small customers that cannot afford the costs (both CAPEX and OPEX) of additional sites, traditional Azure services meet their DR requirements; to further grow Azure revenue, however, Microsoft realized it needed to attract more large businesses with existing SAN infrastructures by appealing to cost-conscious CIOs facing common IT budget constraints. In essence, Microsoft cloud services continue to appeal both to smaller customers who cannot add data centers or sites and to larger customers who wish to control site sprawl. Why bother with the cost and management headaches of maintaining additional disaster recovery sites to meet customer service level agreements, when you can let Microsoft protect your business-critical data and services for you, saving your money and sanity for other high-priority business needs? That is where Microsoft Azure and ASR services come into play.
Heralded as Microsoft's cloud computing platform, Azure provides a simple, reliable, and extensible web-based interface, or front end, that is tightly integrated with a Microsoft System Center VMM and SQL Server back end. While the overall Azure model is multi-tiered, think of it, more or less, as a web management portal that uses Internet Information Services (IIS) at its foundation, with VMM as the engine that drives its cloud tasks. VMM, in turn, stores all of the cloud configuration and environment data in a SQL Server database. The Azure cloud itself is based on the Microsoft System Center application suite and consists of a global network of secure Microsoft data centers offering compute, storage, network, and application resources to help protect your data and offset the high-availability and administrative costs of building and managing additional sites. Even though Azure has multiple tiers, the primary focus of this blog is the storage array aspect of Azure Site Recovery using SAN replication for on-premises clouds, and how that replication differs from traditional Hyper-V replica implementations.
The Hyper-V replica feature is designed to protect VMs hosted by different servers using a built-in replication mechanism at the VM level. A primary site VM can asynchronously replicate to a designated replica site using an Ethernet network infrastructure including local area networks (LAN) or wide area networks (WAN). The designated replica remains offline in a stand-by state pending planned or unplanned VM failovers. After the initial VM copy is replicated to the secondary site, asynchronous replication occurs for only the primary VM changes. This network-based replication does not require shared storage or specific storage hardware and Hyper-V replicas can be established between stand-alone or highly available (HA) VMs, or a combination of both. The Hyper-V servers can be geographically dispersed and the VMs are not even required to belong to a domain. Thus, the Hyper-V replica requirements are rather basic and easy to implement yet are restricted to asynchronous network replication only.
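That replication behaviour, one initial full copy followed by shipping only changed blocks asynchronously, can be modelled in miniature. The class below is an illustrative sketch with names of my own choosing, not the Hyper-V replica protocol itself:

```python
class ReplicaModel:
    """Toy model of replica behaviour: an initial full copy of the VM,
    after which only changed blocks are shipped asynchronously to the
    stand-by replica. (Illustrative only, not the actual protocol.)"""

    def __init__(self, primary):
        self.primary = dict(primary)
        self.replica = dict(primary)        # initial full copy
        self.dirty = set()                  # changed-block tracking

    def write(self, block, data):
        self.primary[block] = data          # host I/O lands on the primary
        self.dirty.add(block)               # remember the delta for later

    def replicate(self):
        shipped = len(self.dirty)
        for block in self.dirty:            # asynchronous delta transfer
            self.replica[block] = self.primary[block]
        self.dirty.clear()
        return shipped

vm = ReplicaModel({0: "os", 1: "data"})
vm.write(1, "data-v2")
print(vm.replicate())               # 1: only the changed block is shipped
print(vm.replica == vm.primary)     # True: replica has converged
```

The gap between a write landing on the primary and the next replicate() call is the asynchronous lag the paragraph describes; synchronous SAN replication closes that gap by completing each write at both sites before acknowledging it.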
Until recently, Azure could leverage Hyper-V replicas only over a network replication channel, but it can now use SAN replication between two on-premises VMM sites or clouds. With the addition of a Hyper-V replica SAN replication channel, synchronous replication can be used to eliminate asynchronous lag times, and multi-VM consistency is possible using Azure Site Recovery. It is important to realize that asynchronous SAN replication behaves like asynchronous Hyper-V network replication: after the initial VM copy reaches the secondary site, only the primary VM changes are replicated. Additionally, if IBM Real-time Compression is used, performance gains are realized because less data needs to be replicated over the SAN. Regardless of storage options such as compression, with just a few clicks in the Azure Site Recovery management portal, simple orchestration of IBM XIV replication and disaster recovery for Microsoft Hyper-V environments can be automated in the form of planned and unplanned site failovers. In a practical sense, this collection of Azure SAN replication enhancements and disaster recovery functionality is an extension of past Microsoft System Center VMM storage automation features.
So, with the introduction of Microsoft ASR cloud services, larger customers have the option to provide DR for their private clouds using IBM XIV SAN replication, and they can also take advantage of hybrid cloud protection by subscribing to Microsoft Azure services. This services model gives customers the opportunity to protect their existing data center and SAN infrastructure investments while enticing them to purchase additional Microsoft Azure cloud services. Refer to Figure 1 below for a general Microsoft cloud layout that uses ASR with IBM XIV SAN replication:
Figure 1: Microsoft Azure Site Recovery using IBM XIV SAN replication general lab configuration
For further information about Microsoft ASR using IBM XIV Storage System Gen3, including step-by-step configuration processes, please refer to the following white paper:
Mandar Vaidya
Everyone who works in mission-critical environments understands the need for an effective disaster recovery solution. Organizations demand disaster recovery operations that are fully automated and can be executed in a repeatable manner, keeping them always ready for disaster situations. In addition, organizations have always demanded seamless migration of applications across sites for planned activities.
What is IBM and VMware’s joint DR solution in a virtualized environment?
IBM SAN Volume Controller (SVC) stretched cluster with VMware Site Recovery Manager (SRM), which now supports stretched clusters (announcement link), is an ideal combination for a disaster recovery solution using the IBM Storwize Family Storage Replication Adapter (SRA). It offers customers the ability to survive a wide range of failures transparently by planning for disaster avoidance, disaster recovery, and mobility. The solution also offers planned live migration of applications running on virtual machines across sites by orchestrating cross-vCenter vMotion operations, enabling zero-downtime application mobility.
IBM SVC is an industry-leading storage virtualization solution that can virtualize storage devices from practically all other storage vendors. With a stretched cluster implementation, customers can enjoy active-active configurations, with servers and ESXi hosts connecting to storage cluster nodes at all sites. This helps create balanced workloads across all cluster nodes and provides disaster recovery capability in case of site failure.
How do you configure the solution?
Documents are available that individually describe IBM SVC stretched cluster and VMware Site Recovery Manager, their benefits, and their respective configuration details. The purpose of this blog is to touch on the key steps and guidelines required to configure the combined solution for planned and unplanned downtime.
What configuration is needed on SVC?
SVC has supported stretched cluster configurations for some time now. A stretched cluster implementation allows the two nodes of an I/O group to be separated by a distance between two locations. These two locations (sites) can be two racks in a data center, two buildings in a campus, or two labs within supported distances. A third site is configured to host a quorum device that provides an automatic tie-break in the event of a link failure between the two main sites.
In SVC, the volume mirroring feature is used to keep two physical copies of a volume, and each copy can belong to a different pool. With the stretched cluster feature, a mirrored volume can be configured from external storage systems across the two physically separated sites.
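The basic behaviour of such a mirrored volume, with every write landing on both site copies so that either copy can keep serving reads after a site failure, can be sketched as a toy model (class and site names are mine, not SVC objects):

```python
class MirroredVolumeModel:
    """Toy model of a mirrored volume in a stretched cluster: writes are
    applied to both physical copies, one per site, so reads can be
    served from whichever copy remains online. (Illustrative only.)"""

    def __init__(self):
        self.copies = {"siteA": {}, "siteB": {}}
        self.online = {"siteA", "siteB"}

    def write(self, block, data):
        for site in self.online:
            self.copies[site][block] = data     # synchronous mirror to each copy

    def read(self, block):
        site = sorted(self.online)[0]           # any online copy can serve reads
        return self.copies[site][block]

    def fail_site(self, site):
        self.online.discard(site)               # the surviving copy keeps serving

vol = MirroredVolumeModel()
vol.write(0, "payload")
vol.fail_site("siteA")
print(vol.read(0))          # "payload", served from siteB's copy
```

In the real system, the third-site quorum device is what decides which half of the cluster keeps running when the inter-site link fails; the toy model simply assumes that decision has already been made.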
Any special needs for vCenter and SRM installation to support this solution?
SRM stretched cluster support takes advantage of vSphere’s ability to perform vMotion across sites and across vCenter Server instances. Therefore, the two vCenter Server instances (at the protected and recovery sites) need to be configured in Enhanced Linked Mode to enable cross-vCenter vMotion.
Install SRM server instances at the protected and recovery sites and register each SRM server instance with the Platform Services Controller at its respective site.
Where does IBM SRA come into the picture?
What’s new while creating a vSphere storage policy?
How to configure SRM for this solution?
Why test the recovery plan?
Okay, I have a recovery plan, but what’s next?
Hopefully the steps above give an overview of the various configuration steps required to set up the solution and plan accordingly. For additional configuration details, refer to the technical guide Implementing disaster recovery using IBM SAN Volume Controller and VMware Site Recovery Manager.
Disclaimer: These are my personal views and do not necessarily reflect those of my employer.
Boost Citrix ShareFile's on-prem file sync and share with IBM Spectrum Scale or IBM Storwize V7000 Unified
SandeepZende
With the surge of smartphones and tablets has come a surge of applications for storing and sharing data. Employees of many organizations tend to use the applications of their choice, whichever they find comfortable, to store and share data. This puts organizations at risk on data security and puts pressure on them to provide an official application that allows users to store and share data securely while the organization retains control of its data.
To gain control over data, organizations need very robust, reliable and easy-to-integrate storage devices for their file sharing applications. Along with security and control over data, organizations are looking for additional services or features that can enhance the productivity of their employees.
Citrix offers ShareFile—an enterprise-class, IT-managed, secure file sync and sharing service. ShareFile offers IT the ability to control sensitive corporate data while meeting the mobility and collaboration needs of users and the data security requirements of the enterprise.
Citrix provides multiple options to store the data, be it on premises, in the cloud or a combination of both, to meet needs for data sovereignty, compliance, performance and cost. For organizations that require increased data protection, ShareFile offers customers the ability to encrypt data with their own encryption keys.
IBM has more than one option for the on-premises storage side of this solution. One answer for a highly scalable file sync and share solution is IBM Spectrum Scale. The other option is IBM Storwize V7000 Unified, which provides a unique combination of file and block storage for small and medium file sync and share solutions.
ShareFile extends an organization’s data strategy to include existing network file drives, SharePoint and OneDrive for Business, allowing a single point of access to all data sources. StorageZone Connectors make it easy to securely access documents that otherwise cannot be accessed outside of corporate networks or on mobile devices. Access any enterprise content management (ECM) system with the StorageZone Connectors SDK, expanding the types of data users can access and edit on the go via ShareFile.
Advanced security features including remote wipe, device lock, passcode protection, white/black listings and data expiration policies allow you to determine exactly how sensitive data is stored, accessed and shared. Track and log activity in real-time and create custom reports to meet compliance requirements.
While IBM Spectrum Scale brings scalability and performance with it, it can also add value through the features below:
Other features of IBM Storwize V7000 Unified:
Below is the high level flow diagram of the solution using IBM Spectrum Scale
For a high-level overview of Citrix ShareFile and IBM Storage Systems, follow the link below:
For more information about the solution with IBM Spectrum Scale, follow the link below:
For more information about the solution with IBM Storwize V7000 Unified, follow the link below:
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
Shashank Shingornikar
Dockerizing Oracle Database
Docker is the next buzzword on the net. While a lot of work has been done on dockerizing various applications, software like Oracle still poses challenges during installation, configuration and execution. This blog entry gives users a flavour of integrating Oracle Database to run in a Dockerized environment.
Are you sure this can be done?
Should this be done?
So, what are the ideal use cases?
What does the flow look like?
Database instances are spawned using this image. To create and save the database on persistent storage, a data-only container can be created. The data container, together with a container based on ORACLE_HOME, is used to create the database instance and the database underneath.
What do you say are the advantages of this ?
Easy to grow Hybrid Cloud solutions using IBM Spectrum storage and VMware for business continuity!!!
MandarVaidya 270004UATR Tags:  accelerate virtulization recovery cloud vcenter vmware interconnect site softlayer vrealize manager esxi spectrum disaster xiv hybrid ibm automation srm orchestrator storage 18,889 Views
The growing shift toward cloud computing and the need for flexibility are making hybrid cloud solutions a serious business imperative. Hybrid cloud done right is an effective, highly agile, cost-saving alternative to traditional storage alone. For many organizations, disaster recovery is a primary need and the debut use case for bringing public cloud into their environment.
We have built some exciting hybrid cloud scenarios using a winning combination of IBM Spectrum Accelerate family offerings and VMware. IBM Spectrum Accelerate is software-defined storage (SDS) built from proven enterprise-class XIV technology. True to the calling card of SDS, it deploys on heterogeneous hardware, and IBM makes it deployable in every possible way: in private cloud, hybrid cloud and public cloud solutions, including as a service. It runs on purpose-built or customer-chosen commodity servers, can be hosted on public cloud infrastructures such as IBM Bluemix (IBM SoftLayer®), and is even available from a third-party vendor as a pre-installed appliance. It can be licensed to run on XIV, FlashSystem A9000 and A9000R systems for long-term investment value. It is available as a service, IBM Spectrum Accelerate on Cloud, ordered through IBM Passport Advantage® and supported by IBM Lab Services, for deployment by the terabyte on IBM Cloud.
For detailed implementation and configuration details, register and attend session #5028 "Implementing Disaster Recovery Solution across Hybrid Cloud using IBM Spectrum Storage" on Wednesday, 22nd March 2017 at InterConnect 2017.
1) Hybrid cloud disaster recovery solution leveraging VMware Site Recovery Manager and IBM Spectrum storage. The diagram below provides an architectural overview of the demo we are showing.
We can either have IBM XIV storage or IBM Spectrum Accelerate storage at the protected site and IBM Spectrum Accelerate at the recovery site. The Spectrum Accelerate instance can be located in another rack in the other data center, or span two physically separate data centers or be in a public cloud such as IBM Bluemix.
2) Orchestrated and automated storage provisioning using vRealize Automation and vRealize Orchestrator with IBM Spectrum Accelerate family offerings spanning hybrid cloud
In this solution, we use the IBM® Spectrum Control Base Edition integration with VMware vRealize Orchestrator and vRealize Automation to take the service around infrastructure beyond orchestration. IBM Spectrum Control Base Edition is a centralized cloud integration system that consolidates IBM storage provisioning, virtualization, cloud, automation, and monitoring solutions through a unified server platform. By using VMware’s Advanced Service Designer feature in vRealize Automation and vRealize Orchestrator, together with IBM Spectrum Accelerate, we show how you can deliver XaaS (Anything-as-a-Service) across hybrid cloud deployments to your users.
Shashank Shingornikar
Update: Oracle has confirmed server-side caching as a generic storage technology. Thus, the use of server-side caching in a production environment does not have any certification requirements.
Within an application ecosystem there are always challenges, and performance most often tops the chart.
In spite of having a well-balanced storage system and a fully tuned Oracle Database instance, ever-growing user counts mean there is always a need to get more out of the system. So when tuning efforts start, the entire stack has to be examined. While the application server and database host can be easily upgraded, and to some extent the network too, when it comes to storage things are not always easy. Really? Well, think again.
The solution is right here. Starting with AIX Version 7.1 with Technology Level 4 Service Pack 5 (7100-04-05) and later, a new feature called server-side caching uses SAN-attached flash storage to improve read performance. IBM FlashSystems are best known for their microsecond latency; when it comes to performance, nothing stands in their way.
Server-side caching is supported at the OS level and is well integrated to harness the power of IBM FlashSystems. Making server-side caching available at the operating system level is not only storage agnostic but also eliminates the need for data migration to a newer storage system. Moreover, because the FlashSystem is part of the SAN infrastructure, it can be shared among multiple servers that require a storage performance improvement.
In the following sections, we take a deeper dive into this feature to understand the key components involved in building a solution targeted at Oracle databases.
For more information about configuring various storage data caching modes, refer to the IBM Knowledge Center for AIX at: ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.osdevice/caching_configuring.htm
For more information about dedicated cache configuration refer to IBM Knowledge Center for AIX at: ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.osdevice/caching_dedicated_mode.htm
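The configuration flow those references describe can be sketched with the AIX cache_mgt command. This is a hedged sketch under assumed device names: hdisk2 stands in for the SAN-attached FlashSystem LUN and hdisk10 for the source LUN holding the Oracle datafiles; the pool, partition and size values are placeholders to adapt to your environment.

```shell
# Create a cache pool on the SAN-attached flash device (hdisk2 is a placeholder)
cache_mgt pool create -d hdisk2 -p cmpool0

# Carve a cache partition out of the pool (size is illustrative)
cache_mgt partition create -p cmpool0 -s 80G -P cm1part1

# Assign the partition to the target disk holding the Oracle datafiles
cache_mgt partition assign -t hdisk10 -P cm1part1

# Start caching on the target disk and review cache statistics
cache_mgt cache start -t hdisk10
cache_mgt monitor get -h -s
```

Once caching is started, reads that hit the flash partition are served from the FlashSystem while writes continue to flow to the base storage, which is why the feature is transparent to the database.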
Benefits of server-side cache
During lab testing, various cache sizes were configured on the database host. The workload was run with a constant number of virtual users to see the effect of different cache sizes. It was observed that the cache takes approximately 30 to 45 minutes to warm up. During this warm-up period, heavy write and read activity was seen, since data not found in the cache must be written to the cache (write operations to cache), while at the same time data found in the cache is read (read operations from cache) and returned to the application.
Note: The warmup time may be different depending on the size of cache device.
Figure 1 shows the TPS data captured during a five-hour run. The run was started without the server-side cache enabled. We see that the application quickly reaches saturation, with TPS at a baseline value. One hour into the run the cache was started, and a gradual improvement in TPS can be seen. After the configured cache is completely filled, there is no more room for new data in the cache. At this point, the flash cache is serving the application's read requests. Data not found in the cache is written to the cache based on the caching algorithm. Heavily accessed blocks are treated as hot blocks; blocks transition from cold to warm to hot, and back from hot to warm to cold, depending on their access pattern.
For the run data shown in Figure 1, a cache size of 128 GB (that is, 10% of the database size) was configured, with only 58 virtual users simulating a business application. Figure 2 shows the I/O activity recorded on the base storage system; at 0:38:36 it shows the increased read activity enabled by the server-side cache, which leads to the TPS improvement.
In the tests documented above, a flash cache sized at 10% of the database size was used. Table 1 shows relative TPS measurements made with larger cache partition sizes.
Note: The TPS throughput listed in Table 1 was observed in a controlled lab environment. Actual TPS values might change depending on the workload and overall network traffic.
Detailed technical white paper : https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=TSW03489USEN&
Visit the IBM booth at Oracle OpenWorld; I'll be there in person if you would like to know more.
See you in San Francisco !!
EricJohnson
With the introduction of IBM Spectrum Virtualize software 7.7, the IBM Storwize product family now supports on-premises Microsoft Azure Site Recovery (ASR) using a Hyper-V Replica Storage Area Network (SAN) replication channel. Rather than repeat content from my previous blog about IBM XIV and Microsoft ASR at https://www.ibm.com/developerworks/community/blogs/bb3d5479-8e6c-45dc-9cc3-d46716d3a749/entry/Failover_Microsoft_cloud_site_within_minutes_using_ASR_with_IBM_XIV?lang=en, I thought I would share a few of the key test differences between IBM XIV and IBM Storwize when implementing this Microsoft solution. Think of it more as a support blog that reveals a few workarounds to help expedite your Microsoft ASR cloud disaster recovery solution.
There are three primary differences that I noticed when testing this solution:
Could not retrieve a certificate from the 9.x.x.x server because of the error: The underlying connection was closed: An unexpected error occurred on a send.
Details: An existing connection was forcibly closed by the remote host (0x80072746)
Workaround: Use PowerShell to add the IBM Storwize storage device:
$RunAsAcct = Get-SCRunAsAccount -Name "V5000RunAsAcct"
Add-SCStorageProvider -Name <Storwize FQDN or management IP> -RunAsAccount $RunAsAcct -NetworkDeviceName "https://<Storwize management IP>" -TCPPort 5989
Note: In my example above, RunAsAcct was first created in the SCVMM console using a preferred naming convention. Also, -NetworkDeviceName is the IP address of the Storwize management interface, and -Name is the Fully Qualified Domain Name (FQDN) of the Storwize system or its management IP. The Add-SCStorageProvider line is a sketch with placeholder values; 5989 is the default SMI-S SSL port.
Job ID: 1592a88c-3245-4a63-b232-b5595999dbfb-2016-08-05 20:34:34Z ActivityId: 985f9a49-073a-4606-a9ee-3ff9ed27014f
Task execution has timed out while waiting for job to complete on VMM. (Error code: 600)
Workaround: After the associated job completes in the VMM console (also in the Storwize Storage Management web interface, you should see the IBM Storwize remote copy change to a consistent synchronized state), restart the ASR management portal job. You can also perform IBM Storwize Storage Management manual steps to create a 4+ TB volume at both sites and then define remote copy consistency groups and member volumes. Manually complete the remote copy synchronization and in the VMM console, create a primary site replication group that includes the 4+ TB remote copy volume(s). Afterwards, you should be able to use the ASR management portal to add a replication group to enable protection for the larger volumes.
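The manual Storwize remote copy steps described in this workaround can be sketched with the Storwize CLI. This is a hedged sketch: the volume, pool, cluster and consistency group names (bigvol01, Pool0, RemoteCluster, asr_cg) are illustrative placeholders, and the partnership between the two systems is assumed to already exist.

```shell
# On the primary system: create the 4+ TB volume (repeat on the secondary system)
svctask mkvdisk -mdiskgrp Pool0 -size 4 -unit tb -name bigvol01

# Create a remote copy consistency group spanning the partnered cluster
svctask mkrcconsistgrp -cluster RemoteCluster -name asr_cg

# Create the remote copy relationship for the volume and add it to the group
svctask mkrcrelationship -master bigvol01 -aux bigvol01 \
    -cluster RemoteCluster -consistgrp asr_cg

# Start synchronization, then watch for the consistent_synchronized state
svctask startrcconsistgrp asr_cg
svcinfo lsrcconsistgrp asr_cg
```

Once lsrcconsistgrp reports consistent_synchronized, the replication group work can continue from the VMM console as described above.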
Note: In my test environment, ASR job timeouts occurred for 4 TB or larger volumes. Your results may vary.
Workaround: Users can manually create compressed volumes using the IBM Storwize Storage Management interface. IBM recommends (and only supports) creating a Storwize pool that contains compressed volumes exclusively, and then refreshing or rescanning the VMM storage array to detect the new storage pools and volumes. In other words, pools should not mix compressed volumes with regular or thin-provisioned volumes. Afterwards, the volumes can be assigned to any VMM host group where they can be used for cluster shared volumes (CSVs). At this stage, perform the manual IBM Storwize Storage Management remote copy steps for 4+ TB volumes described in the step 2 workaround above.
For detailed step-by-step processes and further information about how to enable multisite on-premises cloud protection using Microsoft Azure Site Recovery with IBM Storwize, refer to the following website:
SanjaySudam
As the use of camera security surveillance systems grows due to increased security concerns, so does the demand for better image quality. Demand for longer video retention times is also growing due to legal compliance. The increase in data produced by higher image quality and longer retention means more data to be stored and archived at the storage level.
IBM Storage options and solutions:
With more data generated from higher resolution cameras and longer retention, the storage system plays an important role in the video surveillance architecture. The storage system is as critical as the cameras in the surveillance infrastructure. Different types of storage systems can be selected within the same architecture based on the video data life cycle requirements.
Some of the following key points need to be considered before selecting the storage system:
To address the ever increasing and challenging storage requirements of video surveillance systems, IBM offers validated storage solutions with industry-leading video management solutions from Genetec and Milestone Systems. The IBM team has performed extensive testing and provides a range of options from medium to very large enterprise solutions based on your requirements.
IBM Storwize platform:
IBM Storwize systems provide easy-to-use solutions for medium to enterprise workloads. The platform has been tested with Genetec and Milestone systems and provides very cost-effective solutions for medium to large customers.
The IBM team has performed testing of the Genetec Security Center and Milestone XProtect Corporate suite of products with IBM Storwize in the IBM lab and published a validated architecture for the surveillance system.
Figure 1: DVS high level architecture with the IBM Storwize system.
The IBM Storwize system was connected to multiple video management and surveillance systems over 16Gbps Fibre Channel connectivity. A single Storwize system can accommodate up to 1200 cameras, depending on camera resolution and the retention period of the content.
For detailed information about the Genetec reference architecture, please refer to the solution document https://public.dhe.ibm.com/common/ssi/ecm/ts/en/tsw03523usen/TSW03523USEN.PDF
For detailed information about the Milestone reference architecture, please refer to the solution document
Spectrum Scale and Elastic Storage Server:
Elastic Storage Server, powered by Spectrum Scale, provides the highly scalable solution required by very large surveillance customers, such as airports and metro city surveillance.
Spectrum Scale and Elastic Storage Server based solutions were tested with Genetec Security Center and Milestone XProtect Corporate. The Spectrum Scale based solution provides a single-namespace, hierarchical storage option to store content on various storage tiers based on the data life cycle of the video content. It moves data transparently between storage tiers without impacting the video surveillance system.
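The transparent tier movement described above is driven by Spectrum Scale ILM policy rules. Below is a hedged sketch of such a rule: the file system name (dvsfs), pool names (system, archive) and the 30-day threshold are illustrative placeholders, not values from the validated architecture.

```shell
# Write a policy rule that ages cooled-down video to a lower tier
cat > /tmp/dvs_policy.txt <<'EOF'
/* Move video files untouched for 30 days from the fast tier to the archive tier */
RULE 'cold_video' MIGRATE FROM POOL 'system' TO POOL 'archive'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
EOF

# Apply the policy to the surveillance file system
mmapplypolicy dvsfs -P /tmp/dvs_policy.txt
```

Because the file stays in the same namespace regardless of which pool holds its blocks, the video management software keeps reading the same path while the data migrates underneath.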
The Spectrum Scale based solution has been integrated with Spectrum Archive Enterprise Edition to move data seamlessly to tape storage and reduce the overall total cost of the solution.
Figure 2: DVS high-level architecture with Spectrum Scale and ESS
For detailed information about Milestone XProtect Corporate with Spectrum Scale and Elastic Storage Server, please refer to the solution document below
For detailed information about Genetec Security Center with the Elastic Storage Server, please refer to the solution documents below