The growing shift toward cloud computing and the need for flexibility are making hybrid cloud solutions a serious business imperative. Done right, hybrid cloud is an effective, highly agile, cost-saving alternative to traditional storage alone. For many organizations, disaster recovery is a primary need and the debut use case for bringing public cloud into their environment.
We have built some exciting hybrid cloud scenarios using a winning combination of IBM Spectrum Accelerate family offerings and VMware. IBM Spectrum Accelerate is software-defined storage (SDS) built from proven enterprise-class XIV technology. True to the calling card of SDS, it deploys on heterogeneous hardware, and IBM makes it deployable in every possible way: in private, hybrid and public cloud solutions, including as a service. It runs on purpose-built or customer-chosen commodity servers, can be hosted on public cloud infrastructure such as IBM Bluemix (IBM SoftLayer®), and is even available from a third-party vendor as a pre-installed appliance. It can be licensed to run on XIV, FlashSystem A9000 and A9000R systems for long-term investment value. It is also available as a service, IBM Spectrum Accelerate on Cloud, ordered through IBM Passport Advantage® and supported by IBM Lab Services, for deployment by the terabyte on IBM Cloud.
The design and mature technology underlying IBM Spectrum Accelerate offer a faster path to deploying and managing a hybrid cloud built for agility, ease of use and cost savings, providing:
- Advanced VMware-centric hybrid cloud solutions including disaster recovery with XIV systems
- Exceptional performance, availability and advanced features from proven technology
- An efficient hyper-converged infrastructure managed with an award-winning GUI and vCenter
- The ease of hosting, moving and managing workloads in a single pane hybrid cloud environment
Coming to InterConnect 2017, Las Vegas, USA? Visit us in IBM Systems booth #344 (20-22 March 2017) for exciting live hybrid cloud demos:
(For detailed implementation and configuration guidance, register for and attend session #5028, "Implementing Disaster Recovery Solution across Hybrid Cloud using IBM Spectrum Storage," on Wednesday, 22 March 2017 at InterConnect 2017.)
1) Hybrid cloud disaster recovery solution leveraging VMware Site Recovery Manager and IBM Spectrum Storage. The diagram below provides an architectural overview of the demo we are showing.
We can have either IBM XIV storage or IBM Spectrum Accelerate storage at the protected site, with IBM Spectrum Accelerate at the recovery site. The Spectrum Accelerate instance can be located in another rack in the other data center, can span two physically separate data centers, or can run in a public cloud such as IBM Bluemix.
This hybrid cloud solution uses the fully tested and certified IBM XIV Storage Replication Adapter (SRA) to deliver business continuity across a wide range of failures. It further brings flexibility to an organization by enabling migration of virtual infrastructure workloads between the data center and the cloud.
2) Orchestrated and automated storage provisioning using vRealize Automation and vRealize Orchestrator with IBM Spectrum Accelerate family offerings spanning hybrid cloud
In this solution, we use the IBM® Spectrum Control Base Edition integration with VMware vRealize Orchestrator and vRealize Automation to take infrastructure services beyond orchestration. IBM Spectrum Control Base Edition is a centralized cloud integration system that consolidates IBM storage provisioning, virtualization, cloud, automation, and monitoring solutions through a unified server platform. By using VMware’s Advanced Service Designer feature in vRealize Automation and vRealize Orchestrator, together with IBM Spectrum Accelerate, we show how you can deliver XaaS (Anything-as-a-Service) across hybrid cloud deployments to your users. A small sketch of the kind of provisioning call such a service might ultimately make follows.
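To make the XaaS idea concrete, here is a minimal Python sketch of the kind of REST call a vRealize Orchestrator workflow might delegate storage provisioning to. The host name, endpoint path, payload fields and credentials are hypothetical placeholders, not the documented IBM Spectrum Control Base Edition API.

```python
# Hypothetical provisioning call behind an XaaS blueprint; endpoint and
# payload are illustrative placeholders, not a documented API.
import requests

SCB_URL = "https://scb.example.com:8440/api/v1"  # placeholder address

def provision_volume(session, service, name, size_gb):
    """Ask the storage service for a new volume (illustrative only)."""
    resp = session.post(
        f"{SCB_URL}/volumes",  # placeholder endpoint
        json={"service": service, "name": name, "size_gb": size_gb},
        verify=False,  # lab setup with self-signed certificates
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    s = requests.Session()
    s.auth = ("admin", "password")  # placeholder credentials
    vol = provision_volume(s, service="gold_accelerate",
                           name="vm042_data", size_gb=100)
    print("Provisioned:", vol)
```

In a real deployment, the vRO workflow would wrap a call like this and Advanced Service Designer would expose it to users as a catalog item.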
Dockerizing Oracle Database
Docker is the latest buzzword on the net. While a lot of work has been done on dockerizing various applications, software like Oracle Database still poses challenges during installation, configuration and execution. This blog entry gives users a flavour of integrating Oracle Database to run in a dockerized environment.
Are you sure this can be done?
Yes. Not only can it be done; once it is available you'll hardly notice any difference compared to an instance/database on bare metal or a VM.
Should this be done?
For running production Oracle Database instances, we suggest NOT: Oracle will not support this officially.
So, what are the ideal use cases?
The use cases include creating an environment for testing a PSU or CPU patch provided by Oracle, an environment for developers who have specific requirements, or even a training environment. Even your experienced DBAs will find it handy as a sandbox for personal testing.
What does the flow look like?
In the POC environment we made this a multi-step process for better understanding and granular control over the process. Here is the basic flow.
The Platform Image (PI) is the base image on which customizations are made using a Dockerfile. Because the PI is read-only, Docker creates an intermediate container that holds only the changes made to the PI. When these intermediate containers are saved, their state is captured as read-only images. The process continues until an ORACLE_HOME image containing the Oracle binaries is created.
Database instances are spawned from this image. To create and save the database on persistent storage, a data-only container can be created. The data container, together with a container based on the ORACLE_HOME image, is used to create the database instance and the database underneath; a sketch of this flow follows.
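As an illustration, the same flow can be driven with the Docker SDK for Python. The image tag, build path, mount points and container names below are hypothetical placeholders for the artifacts produced in the steps above.

```python
# Illustrative sketch using the Docker SDK for Python (docker-py).
# Image tags, paths and container names are placeholders.
import docker

client = docker.from_env()

# Build the ORACLE_HOME image from a Dockerfile that layers the Oracle
# binaries on top of the platform image (PI).
image, _ = client.images.build(path="./oracle-home", tag="oracle/home:12c")

# Data-only container: exists solely to own the volume that holds the
# database files on persistent storage.
client.containers.create(
    "oracle/home:12c",
    name="oradata01",
    volumes={"/u02/oradata": {"bind": "/u02/oradata", "mode": "rw"}},
    command="true",
)

# Database container: borrows the data container's volumes, so the database
# files survive even if this container is removed and respawned.
db = client.containers.run(
    "oracle/home:12c",
    name="oradb01",
    volumes_from=["oradata01"],
    ports={"1521/tcp": 1521},
    detach=True,
)
print(db.name, db.status)
```

Because the database files live in the data container's volume, the database container itself stays disposable, which is exactly what makes the sandbox use cases above attractive.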
Here is how the final picture looks. In this environment, two 12c dockerized database instances are running. The CDB database looks as shown in the picture above. To create, clone or plug in the PDBs, an NFS mount point from a Storwize V7000 system is mapped on the Docker host.
So what are the advantages of this?
There are several. Although there are multiple images, each image being read-only gives a consistent starting point every time a container is spawned from it. Each container has its own namespace isolation consisting of PIDs, network, and control groups (cgroups). Thus, each running container behaves like an independent host in spite of the fact that it is based on the same image. The images can be saved or pushed to an on-premises repository, making them readily available to other Docker hosts. The images can also be moved across platforms: servers across a network, a laptop, or even public cloud space such as Amazon Web Services.
With the surge of smartphones and tablets comes a surge of applications for storing and sharing data. Employees of many organizations tend to use the applications they are comfortable with to store and share data. This puts organizations at risk in terms of data security and puts pressure on them to provide an official application that allows users to store and share data securely while the organization keeps control of its data.
To gain control over data, organizations need robust, reliable and easy-to-integrate storage for their file sharing applications. Along with security and control over data, organizations are looking for additional services or features that can enhance employee productivity.
Citrix offers ShareFile—an enterprise-class, IT-managed, secure file sync and sharing service. ShareFile offers IT the ability to control sensitive corporate data while meeting the mobility and collaboration needs of users and the data security requirements of the enterprise.
Citrix provides multiple options to store the data be it on premise, in the cloud or a combination of both to meet the needs for data sovereignty, compliance, performance and costs. For organizations that require increased data protection, ShareFile offers customers the ability to encrypt data with their own encryption keys.
IBM has more than one option for the on-premises storage piece. IBM Spectrum Scale is the answer for a highly scalable file sync and share solution, while IBM Storwize V7000 Unified provides a unique combination of file and block storage for small and medium file sync and share solutions.
ShareFile extends an organization's data strategy to include existing network file drives, SharePoint and OneDrive for Business, allowing a single point of access to all data sources. StorageZone Connectors make it easy to securely access documents that otherwise cannot be accessed outside of corporate networks or on mobile devices. Access any enterprise content management (ECM) system with the StorageZone Connectors SDK, expanding the types of data users can access and edit on the go via ShareFile.
Advanced security features including remote wipe, device lock, passcode protection, white/black listings and data expiration policies allow you to determine exactly how sensitive data is stored, accessed and shared. Track and log activity in real-time and create custom reports to meet compliance requirements.
While IBM Spectrum Scale brings scalability and performance, it also adds value through the features below:
- File encryption and secure erase
- Transparent flash cache
- Network performance monitoring
- Active File Management (AFM) parallel data transfers
- NFS version 4 support and data migration
- Backup and restore improvements
- File Placement Optimizer (FPO) enhancements.
Other features of IBM Storwize V7000 Unified:
- IBM Storage Mobile Dashboard
- Dynamic Migration
- IBM Easy Tier
- Thin provisioning
- Flash drives
- Active File Management (AFM) parallel data transfers
- IBM HyperSwap
- IBM Real-time Compression
- Encryption for virtualized storage
Below is the high-level flow diagram of the solution using IBM Spectrum Scale.
For a high-level overview of Citrix ShareFile and IBM Storage Systems, follow the link below:
For more information about a solution with IBM Spectrum Scale, follow the link below:
For more information about a solution with IBM Storwize V7000 Unified, follow the link below:
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
Everyone who works in mission-critical environments understands the need for an effective disaster recovery solution. Organizations demand disaster recovery operations that are fully automated and can be executed in a repeatable manner, keeping them always ready for disaster situations. In addition, organizations have always demanded seamless migration of applications across sites for planned activities.
What is IBM and VMware’s joint DR solution in a virtualized environment?
IBM SAN Volume Controller (SVC) stretched cluster with VMware Site Recovery Manager (SRM) support for stretched clusters (announcement link) is an ideal combination for a disaster recovery solution using the IBM Storwize Family Storage Replication Adapter (SRA). It offers customers the ability to survive a wide range of failures transparently by planning for disaster avoidance, disaster recovery and mobility. This solution also offers planned live migration of applications running on virtual machines across sites by orchestrating cross-vCenter vMotion operations, enabling zero-downtime application mobility.
IBM SVC is an industry-leading storage virtualization solution that can virtualize storage devices from practically all other storage vendors. With a stretched cluster implementation, customers get an active-active configuration in which servers and ESXi hosts can connect to storage cluster nodes at all sites. It helps create balanced workloads across all nodes of the cluster and provides disaster recovery capability in case of site failure.
VMware SRM can be seamlessly configured with IBM SVC stretched clusters using the IBM Storwize Family SRA. To configure the solution, SVC nodes are set up in a stretched cluster configuration, with the ESXi servers able to access storage across both sites. A quorum site is set up, per IBM SVC stretched cluster configuration requirements, to resolve tie-break situations in case of a link failure between the two main sites. A VMware vCenter server is configured to manage the ESXi servers at each site, and VMware SRM is installed at each site to configure and automate the disaster recovery solution.
How to configure the solution?
Documents are available that individually describe IBM SVC stretched cluster and VMware Site Recovery Manager, their benefits and their respective configuration details. The purpose of this blog is to touch on the key steps and guidelines required to configure the combined solution for planned and unplanned downtime.
What configuration is needed on SVC?
- Configure SVC in a stretched cluster mode
SVC has supported stretched cluster configurations for some time now. A stretched cluster implementation allows the two nodes of an I/O group to be separated by distance between two locations. These two locations (sites) can be two racks in a data center, two buildings on a campus, or two labs within supported distances. A third site is configured to host a quorum device that provides an automatic tie-break in the event of a potential link failure between the two main sites.
- Configure a mirrored volume on an SVC stretched cluster
In SVC, the volume mirroring feature keeps two physical copies of a volume, and each copy can belong to a different pool. With the stretched cluster feature, a mirrored volume can be configured from external storage across two physically separated sites; see the sketch after this item.
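For illustration, here is a minimal sketch that creates such a mirrored volume by driving the SVC CLI over SSH from Python. The cluster address, credentials, pool names, size and sync rate are placeholders; verify the mkvdisk options against your code level before use.

```python
# Sketch: create a mirrored volume (one copy per site) on an SVC stretched
# cluster via the CLI over SSH. All names and values are placeholders.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("svc-cluster.example.com", username="superuser",
            password="passw0rd")

# -copies 2 with a colon-separated pool list puts one copy in each site's
# pool, so the volume survives the loss of either site.
cmd = ("svctask mkvdisk -name vdisk_vmfs01 "
       "-mdiskgrp Pool_SiteA:Pool_SiteB -size 512 -unit gb "
       "-copies 2 -syncrate 80")
_, stdout, stderr = ssh.exec_command(cmd)
print(stdout.read().decode(), stderr.read().decode())
ssh.close()
```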
Any special needs for the vCenter and SRM installation to support this solution?
SRM stretched cluster support takes advantage of vSphere's ability to perform vMotion across sites and across vCenter Server instances. Therefore, the two vCenter Server instances (at the protected and recovery sites) need to be configured in Enhanced Linked Mode to enable cross-vCenter vMotion.
- SRM installation at protected and recovery sites
Install SRM server instances at the protected and recovery sites, and register each SRM server instance with the Platform Services Controller at its respective site.
Where does IBM SRA come into picture?
The IBM Storwize Family SRA is a software add-on that integrates with SRM to run failovers. It extends SRM capabilities and uses replication and mirroring as part of the comprehensive SRM disaster recovery planning (DRP) solution. The SRA is installed at the protected and recovery sites, where it works with the SRM instances to run failovers.
What's new when creating vSphere storage policies?
Site Recovery Manager 6.1 adds a new type of protection group: the storage policy-based protection group. Storage policy-based protection groups use vSphere storage profiles to identify protected datastores and virtual machines, automating the process of protecting and unprotecting virtual machines and of adding and removing datastores from protection groups. To easily identify IBM storage objects in the vSphere inventory, you can create an IBM storage tag, build a tag-rule-based storage policy, and then associate the stretched datastore with that storage policy; a short tagging sketch follows.
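The tag itself can be created programmatically with the vSphere Automation SDK for Python, as in the sketch below. The vCenter address, credentials, names and the datastore managed-object ID are placeholders, and the tag still has to be referenced from a tag-based placement rule when you define the storage policy in the vSphere Web Client.

```python
# Sketch: create an "IBM Storage" tag and attach it to a stretched datastore
# with the vSphere Automation SDK for Python. Server, credentials and the
# datastore ID are placeholders.
import requests
from vmware.vapi.vsphere.client import create_vsphere_client
from com.vmware.cis.tagging_client import CategoryModel
from com.vmware.vapi.std_client import DynamicID

session = requests.session()
session.verify = False  # lab vCenter with a self-signed certificate
client = create_vsphere_client(server="vcenter.example.com",
                               username="administrator@vsphere.local",
                               password="passw0rd", session=session)

# Category and tag that identify IBM storage objects in the inventory.
cat_spec = client.tagging.Category.CreateSpec()
cat_spec.name = "IBM Storage"
cat_spec.description = "Marks IBM SVC-backed datastores"
cat_spec.cardinality = CategoryModel.Cardinality.SINGLE
cat_spec.associable_types = set()
category_id = client.tagging.Category.create(cat_spec)

tag_spec = client.tagging.Tag.CreateSpec()
tag_spec.name = "svc-stretched"
tag_spec.description = "Stretched datastore on SVC"
tag_spec.category_id = category_id
tag_id = client.tagging.Tag.create(tag_spec)

# Attach the tag to the stretched datastore (ID is a placeholder).
client.tagging.TagAssociation.attach(
    tag_id=tag_id, object_id=DynamicID(type="Datastore", id="datastore-42"))
```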
How to configure SRM for this solution?
- After pairing the sites, register the IBM Storwize Family SRA with the SRM server instances at the protected and recovery sites, and then configure the array managers using the SVC nodes.
- Configure bidirectional Network Mappings, Folder Mappings, Resource Mappings, and Placeholder Datastores Mappings between protected and recovery sites.
- NEW ⇒ SRM 6.1 allows you to configure storage policy-based protection groups using storage policy mappings. When the storage policy at the protected site is mapped to a storage policy at the recovery site, SRM places the recovered virtual machines in the vCenter Server inventory, and on datastores at the recovery site, according to the storage policy that it is mapped to on the recovery site.
- NEW ⇒ Storage policy-based protection groups enable automated protection of virtual machines that are associated with a storage policy, which in turn is assigned by tagging them to reside on a particular datastore. When a virtual machine is associated with or disassociated from a storage policy, SRM automatically protects or unprotects it.
- Configure a recovery plan using storage policy based protection group.
Why test the recovery plan?
A tested recovery plan makes the environment ready for disaster recovery situations by exercising almost every aspect of the plan. It is strongly recommended to test the recovery plan for both planned migration and disaster recovery situations to avoid surprises.
Okay, I have a recovery plan, but what's next?
Failover and reprotect: after successful testing, the recovery plan is ready for either planned failover or disaster recovery situations. After failover, the recovery site becomes the primary site. SRM provides a reprotect function that automates protection in the reverse direction.
Hopefully the steps above give an overview of the various configuration steps required to set up the solution, so you can plan accordingly. For additional configuration details, refer to the technical guide Implementing disaster recovery using IBM SAN Volume Controller and VMware Site Recovery Manager.
Disclaimer : These are my personal views and do not necessarily reflect that of my employer.
For information technology (IT) customers looking to control site expansion costs, Microsoft offers its Azure cloud services. To appeal to larger customers with existing disaster recovery (DR) models that span multiple sites and use SAN solutions, Microsoft recently added Azure Site Recovery (ASR) to its cloud services mix. This allows Microsoft to target the full spectrum of potential customers for its cloud services. For small customers that cannot afford the costs (both CAPEX and OPEX) associated with additional sites, traditional Azure services meet their DR requirements. To further increase Azure revenue, however, Microsoft realized it needed to attract more large businesses with existing SAN infrastructures by appealing to cost-conscious CIOs facing common IT budget constraints. In essence, Microsoft cloud services continue to appeal to smaller customers who cannot add data centers or sites and to larger customers who wish to control site sprawl. Why bother with the cost and management headaches of maintaining additional sites for disaster recovery, to meet customer service level agreements by protecting business-critical data and services, when you can let Microsoft protect them for you and save your money and sanity for other high-priority business needs? That is where Microsoft Azure and ASR services come into play.
Heralded as Microsoft's cloud computing platform, Azure provides a simple, reliable, and extensible web-based interface, or front end, that is tightly integrated with a Microsoft System Center VMM and SQL Server back end. While the overall Azure model is multi-tiered, think of it, more or less, as a web management portal that uses Internet Information Services (IIS) at its foundation, with VMM as the engine that drives its cloud tasks. VMM, in turn, stores all of the cloud configuration and environment data in a SQL Server database. The Azure cloud itself is based on the Microsoft System Center application suite and consists of a Microsoft global network of secure data centers that offer compute, storage, network, and application resources to help protect your data and offset the high availability and administrative costs of building and managing additional sites. Even though Azure has multiple tiers, the primary focus of this blog is the storage array aspects of Azure Site Recovery using SAN replication for on-premises clouds, and how that replication differs from traditional Hyper-V replica implementations.
The Hyper-V replica feature is designed to protect VMs hosted by different servers using a built-in replication mechanism at the VM level. A primary site VM can asynchronously replicate to a designated replica site using an Ethernet network infrastructure including local area networks (LAN) or wide area networks (WAN). The designated replica remains offline in a stand-by state pending planned or unplanned VM failovers. After the initial VM copy is replicated to the secondary site, asynchronous replication occurs for only the primary VM changes. This network-based replication does not require shared storage or specific storage hardware and Hyper-V replicas can be established between stand-alone or highly available (HA) VMs, or a combination of both. The Hyper-V servers can be geographically dispersed and the VMs are not even required to belong to a domain. Thus, the Hyper-V replica requirements are rather basic and easy to implement yet are restricted to asynchronous network replication only.
Until recently, Azure could only leverage Hyper-V replicas using a network replication channel, but it can now use SAN replication between two on-premises VMM sites or clouds. With the addition of a Hyper-V replica SAN replication channel, synchronous replication can be used to eliminate asynchronous lag times, and multi-VM consistency is possible using Azure Site Recovery. It is important to realize that asynchronous SAN replication behaves like asynchronous Hyper-V network replication: after the initial VM copy is replicated to the secondary site, only the primary VM changes are replicated. If IBM Real-time Compression is used, performance gains are also realized because less data needs to be replicated over the SAN. Whatever the storage options, with just a few clicks in the Azure Site Recovery management portal, orchestration of IBM XIV replication and disaster recovery for Microsoft Hyper-V environments can be automated in the form of planned and unplanned site failovers. In a practical sense, this collection of Azure SAN replication enhancements and disaster recovery functionality is an extension of past Microsoft System Center VMM storage automation features.
So with the introduction of Microsoft ASR cloud services, larger customers have the option to provide DR for their private clouds using IBM XIV SAN replication but they also can take advantage of hybrid cloud protection by subscribing to Microsoft Azure services. This services model gives customers the opportunity to protect their existing data center and SAN infrastructure investments while enticing them to purchase additional Microsoft Azure cloud services. Refer to Figure 1 below for a general Microsoft cloud layout that uses ASR with IBM XIV SAN replication:
Figure 1: Microsoft Azure Site Recovery using IBM XIV SAN replication general lab configuration
For further information about Microsoft ASR using IBM XIV Storage System Gen3, including step-by-step configuration processes, please refer to the following white paper:
After successfully implementing the Real-time Compression feature in Storwize V7000, IBM has taken a step further by bringing this patented technology to the IBM XIV Storage System. With the recent announcement of the XIV 11.6.0 release, the Real-time Compression feature is seamlessly integrated into the XIV storage system. Eliminating the need for any extra hardware, the IBM Random Access Compression Engine (RACE) technology is now integrated into the XIV storage system software stack to compress data before writing it to disk (above the cache mechanism), resulting in up to 80% storage capacity savings.
It is designed with transparency in mind, so it can be implemented without changes to applications, hosts, networks, fabrics, or external storage systems. The solution is not visible to hosts, so users and applications continue to work as is. To estimate the compression savings on existing non-compressed XIV volumes, the Comprestimator utility is now integrated with the XIV software; the sketch below shows the arithmetic behind such an estimate.
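As a rough illustration of what a savings estimate means (this is not the actual Comprestimator algorithm, which samples the live volume, nor the RACE engine), the following sketch compresses sample blocks with zlib and reports the projected savings.

```python
# Illustrative only: estimate compression savings by compressing sample
# blocks with zlib and comparing sizes. Comprestimator samples the real
# volume; this sketch just shows the arithmetic behind a savings figure.
import zlib

def estimated_savings(blocks):
    """Return the percentage saved across a set of sample data blocks."""
    raw = sum(len(b) for b in blocks)
    packed = sum(len(zlib.compress(b)) for b in blocks)
    return 100.0 * (1 - packed / raw)

if __name__ == "__main__":
    # Text-like data compresses well; already-compressed data does not.
    samples = [b"customer record 00042, balance 1375.00 " * 256
               for _ in range(16)]
    print(f"Estimated savings: {estimated_savings(samples):.1f}%")
```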
What does compression have in store for me?
On the XIV system, the compression ratio for all uncompressed volumes is continuously estimated, even before compression is enabled. The figure shows the various states of volumes on the system, ranging from uncompressed, to potential savings, to the total amount of compression achieved on a volume.
What are the compression benefits for XIV?
With the inline implementation of Real-time Compression, IBM XIV now delivers dramatic cost savings without the need for extra hardware, providing the following benefits:
- Increases usable capacity per rack typically to one Petabyte or more with Real-time Compression, greatly reducing effective cost per capacity
- Replicates compressed data faster and using less bandwidth, freeing up bandwidth for other uses
- Continuously displays predicted or actual compression ratios for all volumes
- Converts non-compressed volumes to compressed non-disruptively
So how does it work?
The Real-time Compression implementation in XIV uses an above-cache architecture, where data is compressed or decompressed between the I/O interface and the cache. A compression node runs on every XIV module, taking advantage of XIV's parallel architecture: each node compresses only the portion of a volume that belongs to its module, distributing the compression workload across all modules. As a result, the Real-time Compression implementation has minimal impact on the performance delivered by XIV.
When writes occur, data is compressed before it enters the cache, and an acknowledgment is sent back to the host. Reads are stored compressed in cache and decompressed by RACE when read from cache, before being passed to the host. During XIV mirroring, data is compressed only once and sent compressed across the network, reducing network bandwidth.
What will benefit most from compression?
- Database environments – DB2, Oracle, MS-SQL, and so on
- Database Applications – SAP, Oracle applications, and so on
- Server/Desktop Virtualization – KVM, VMware, Hyper-V, and so on
- Other compressible workloads – seismic, engineering, and so on
- Email – Microsoft Exchange, and so on
Are there any guidelines for Compression?
IBM Real-time Compression is appropriate for data that has the following characteristics:
- Any data for which the Comprestimator tool estimates 25% or higher savings
- Volumes that contain data that is not already compressed (for example, uncompressed image and video files)
- Data for which application-based encryption is not used, or data that is not sent encrypted to the XIV
Anything I can refer to?
Real-time Compression not only works best with randomly accessed data such as databases (IBM DB2, Oracle, MS-SQL Server), it also provides good results with server virtualization solutions like VMware, KVM and Hyper-V. When using Oracle databases, compressed volumes take advantage of the above-cache architecture, compressing writes seamlessly. A 57% compression saving was observed during creation of a terabyte of data, with minimal performance penalty. (Publication: WP102551)
VMware vSphere virtual machines can be seamlessly deployed on compressed volumes, often with compression savings of 50% to 75%, allowing customers to reduce the storage capacity required for virtualized environments. (Publication: WP102552)
Microsoft Hyper-V virtualization helps customers maximize System x server and other resource use. Included in Windows Server, Hyper-V helps reduce costs by allowing a greater number of application workloads to be hosted on fewer physical servers. When Microsoft SQL Server 2012 SP1 OLTP data files and Windows Server 2012 R2 VM system files were stored on a Hyper-V virtual disk backed by an XIV compressed volume, 73% compression savings were achieved. (Publication: WP102553)
What about the performance?
While that team tested the compression benefits and compiled the paper, another team, from the IBM Tel Aviv lab, was busy with performance testing of an Oracle database hosted on IBM XIV compressed volumes.
In the test setup, the team configured both compressed and uncompressed volumes on XIV for better parallelism. These volumes were mapped to the ESX system hosting the database server to create multiple VMFS file systems. A 5 TB database was created on the VMFS volumes using the Benchmark Factory tool. During the 12-hour test run, load starting at 1,000 users and growing to a maximum of 30,000 users put the system under a realistic production load. The I/Os per second (IOPS) and response time reported by Benchmark Factory are shown in the figure below; each point on the graph marks the addition of 2,500 users. The graph clearly indicates that the application sees minimal impact on response time when using the compressed volumes.
Blog Authors: Mandar Vaidya, Shashank Shingornikar
Real-time collaboration and information sharing are key drivers of an enterprise’s productivity and innovation. Finding solutions to enable such dynamic sharing in an enterprise setting while maintaining control, however, can be a challenge. Some organizations look to consumer-grade, cloud-based file sharing options that offer the scalability, ease of use and access users want but store sensitive company data on external servers. This exposes organizations to risks of data leaks while limiting IT visibility. Other options include using existing enterprise collaboration and content management systems that might be challenging to maintain and cumbersome for users.
What exactly is the solution?
The combined IBM® Spectrum Scale for object storage and ownCloud software technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution. ownCloud provides universal file access through a common file access layer on top of IBM Spectrum Scale for object storage, and the data files are kept in on-premises Spectrum Scale object storage. ownCloud allows enterprise IT organizations to regain control of sensitive data with managed file sync and share, giving users universal access to all of their data:
- Manage and protect data on-premises – using IBM Spectrum Scale for object storage, with the complete software stack running on servers inside the data center, controlled by trusted administrators and managed according to established policies.
- Integrate with existing IT system resources and policies – such as authentication systems, user directories, governance workflows, intrusion detection, monitoring, logging and storage management.
- Provide access through a comprehensive set of application programming interfaces (APIs) and mobile libraries to customize system capabilities, meet unique service requirements, and accommodate changing user needs.
Why do enterprises want an on-premises file sync and share solution?
Storing data off-premise may strip an organization’s ability to manage and control its data, or to ensure that data can be deleted. Few enterprises, however, are willing to forgo the benefits that cloud services provide in the advancement of agility and improved business processes. That leaves them struggling with how to use these technologies without importing security risks. They also recognize that users are increasingly able to migrate to external services that provide them greater flexibility and mobility than that offered by the enterprise.
By retaining on-premises manageability of file sync and share services, though, IT can use a private cloud solution to reconcile the need for cloud technology with the requirements for security and privacy, and regain control of sensitive data without unwanted exposure. With the ability to enhance control and govern access to files, IT administrators can set sophisticated rules for user and device connections and prevent access based upon those rules. Further, the capabilities and extensibility of on-premises file sync and share match the ease of use and complete access that first drove consumption of cloud services, yet IT controls sensitive assets in its own cloud environment.
Solution Lab testing
This solution consists of multiple servers installed with the ownCloud server software. ownCloud is a PHP web application running on top of Apache on Linux. This PHP application manages every aspect of ownCloud, from user management to plug-ins, file sharing and storage. Attached to the PHP application is a database where ownCloud stores user information, user-shared file details, plug-in application states, and the ownCloud file cache (a performance accelerator). ownCloud accesses the database through an abstraction layer, enabling support for Oracle, MySQL, SQL Server, and PostgreSQL. Web server activity is captured in the web server logs, while user and system events are recorded in a separate ownCloud log or can be directed to syslog.
In the lab testing environment, an Active Directory (AD) is integrated with ownCloud for user account provisioning. IBM Spectrum Scale for object storage is configured with local authentication. It is possible to configure IBM Spectrum Scale for object storage with an enterprise directory server such as AD or Lightweight Directory Access Protocol (LDAP).
OpenStack Swift is installed on the protocol node(s) of the IBM Spectrum Scale for object storage; a short sketch of accessing it follows.
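To illustrate the object path that ownCloud (or any Swift client) uses, the sketch below stores and retrieves a file with python-swiftclient. The authentication URL, credentials, tenant and container names are placeholders for a Spectrum Scale object deployment with Keystone-style local authentication.

```python
# Sketch: store and fetch an object in IBM Spectrum Scale for object
# storage through its Swift interface. All names and URLs are placeholders.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://scale-protocol-node.example.com:5000/v2.0",  # Keystone
    user="admin",
    key="passw0rd",
    tenant_name="admin",
    auth_version="2",
)

conn.put_container("owncloud-data")
conn.put_object("owncloud-data", "reports/q1.pdf",
                contents=b"...file bytes...",
                content_type="application/pdf")

# Read it back: headers carry the metadata, body is the object data.
headers, body = conn.get_object("owncloud-data", "reports/q1.pdf")
print(headers.get("content-length"), len(body))
```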
IBM Spectrum Scale is a proven, enterprise-class file system, and OpenStack Swift is a best-of-breed object-based storage system. IBM Spectrum Scale for object storage combines these technologies to provide a new type of cloud storage that includes efficient data protection and recovery, proven scalability, and performance; snapshot and backup and recovery support; and information lifecycle management. Through these features, IBM Spectrum Scale for object storage can help simplify data management and allow enterprises to realize the full value of their data.
ownCloud is a self-hosted file sync and share server. It provides access to on-premises data through a web interface and sync clients, offering a platform to view, sync and share across devices easily, while giving enterprises the ability to manage and control their data. ownCloud's open architecture is extensible through simple but powerful APIs for applications and plug-ins, and it works seamlessly with IBM Spectrum Scale for object storage.
Together, the IBM Spectrum Scale for object storage and ownCloud server technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution.
To learn more about the solution, please see the solution technical paper: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_on-premise-file-syn-share-owncloud
How far have you gone in tuning your database, be it Oracle or DB2? The efforts are never good enough, and before you can breathe easy the battle begins ... again and again ...
Now you can relax a bit. With the IBM Easy Tier Server functionality available with Easy Tier, you'll be able to get more work done in terms of improved transactions per second (TPS).
So what exactly is Easy Tier Server?
IBM Easy Tier Server is a unified storage caching and tiering solution across AIX servers and supported direct-attached storage (DAS) flash drives. Easy Tier Server allows the most frequently accessed, or "hottest", data to be placed (cached) closer to the hosts, thus overcoming SAN latency. At its core, Easy Tier Server relies on the DS8870 cooperating with heterogeneous hosts to make a global decision on which data to copy to the hosts' local SSDs for improved application response time. DAS SSD devices therefore play an important role in an Easy Tier Server implementation: specializing in high I/O performance, SSD cache has the upper hand in cost per I/O operation per second (IOPS).
The Easy Tier technology has evolved over years and is now in its fifth generation. Easy Tier Server is one of several Easy Tier enhancements, introduced with the DS8000 Licensed Machine Code 7.7.10.xx.xx. Both Easy Tier and Easy Tier Server licenses, although required, are available at no cost.
Which workloads are the best fit for Easy Tier Server?
Because Easy Tier Server implements a read-only local DAS cache on the hosts, some particular scenarios can take best advantage of this feature. These are:
- Real-time analytics workload
- Large content data
- Online transaction processing (OLTP) workload
- Virtual machine (VM) consolidation
- Big Data
Under the hood
The Easy Tier Server feature consists of two major components:
- The Easy Tier Server coherency server
The Easy Tier Server coherency server runs in the DS8870 and manages how data is placed in the internal flash caches on the attached hosts. It also integrates with Easy Tier data placement functions for the best optimization across the DS8870 internal tiers (SSD, Enterprise, and Nearline). The coherency server asynchronously communicates with the host systems (the coherency clients) and generates caching advice for each coherency client based on Easy Tier placement and statistics.
- The Easy Tier Server coherency client
The Easy Tier Server coherency client runs on the host system and keeps local caches on DAS solid-state drives. The coherency client uses the Easy Tier Server protocol to establish system-aware caching that interfaces with the coherency server. An Easy Tier Server coherency client driver cooperates with the operating system to direct I/Os either to the local DAS cache or to the DS8870, in a way that is transparent to the applications.
The POWER system has DAS attached, which the Easy Tier Server coherency client driver uses to create the local cache. Coherency clients are designed to route read hits to the application host's DAS while sending read misses directly to the DS8870. Likewise, write I/Os are routed to the DS8870, and cache pages related to those I/O address spaces are invalidated in the client's local cache to maintain cache coherency and data integrity. The coherency client and coherency server share statistics to ensure that the best caching decisions are made; the toy model below illustrates this routing logic.
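The following toy model shows that routing logic in miniature. It is purely illustrative: the real coherency client works at the block layer and caches hot extents based on advice from the coherency server, not on simple demand caching as here.

```python
# Toy model of the coherency client routing: read hits are served from the
# local DAS cache, read misses go to the DS8870, and writes go to the
# DS8870 while invalidating the cached copy. Illustrative only.
class CoherencyClientModel:
    def __init__(self):
        self.local_cache = {}   # block address -> data cached on DAS SSD

    def read(self, addr):
        if addr in self.local_cache:         # read hit: local DAS, no SAN trip
            return self.local_cache[addr]
        data = self._read_from_ds8870(addr)  # read miss: go to storage
        self.local_cache[addr] = data        # (real client caches hot extents
        return data                          #  on coherency-server advice)

    def write(self, addr, data):
        self._write_to_ds8870(addr, data)    # writes always go to storage
        self.local_cache.pop(addr, None)     # invalidate for coherency

    def _read_from_ds8870(self, addr):
        return f"data@{addr}"                # stand-in for a SAN read

    def _write_to_ds8870(self, addr, data):
        pass                                 # stand-in for a SAN write

client = CoherencyClientModel()
client.read(100)        # miss: fetched from the DS8870, now cached locally
client.read(100)        # hit: served from the local DAS cache
client.write(100, "x")  # routed to the DS8870; cached page invalidated
```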
And the bottom line is?
In the lab, a brokerage OLTP workload was executed, simulating a maximum amount of read requests. At the beginning of the run, the hdisks configured for the ASM DATA disk group showed maximum utilization, as no caching was enabled. Sixty minutes into the run, caching was enabled on the database host running the workload. Soon after, Easy Tier Server started migrating hot extents from the DS8870 to the database host running the coherency client. Over time, as more and more hot extents were migrated, maximum activity was observed on the cache devices and less activity on the DS8870 storage. As more and more extents containing the required data were cached, read requests were satisfied locally, eliminating the need to read the data from storage. The effective utilization of locally cached data showed a 100% improvement in the TPS observed during the test run.
Whether it is a latency-sensitive environment, an application with a high read/write ratio, or a highly parallel processing system, there is an increasing need to process data quickly, and Easy Tier Server can be considered for these situations. In cases where the read performance of the storage can become a major bottleneck for the environment, there is high value in faster storage, and therefore a good fit for Easy Tier Server.
Publications and Resources
A white paper, WP102534, available on the IBM Techdocs website, provides detailed information on the testing effort.
IBM System Storage DS8870: Architecture and Implementation, SG24-8085
IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
IBM System Storage DS8870 Product Guide
IBM System Storage DS8000 Easy Tier, REDP-4667
IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015
IBM System Storage DS8000: Easy Tier Application, REDP-5014
Views / thoughts expressed above are my own, not necessarily of my employer.
Patients increasingly expect their physicians to provide higher-quality healthcare with intelligent, immediate insights from their radiological images, clinician notes and lab results. They are demanding simple diagnostic guidance, customized treatment options and secure, immediate digital access to their personal medical information on their mobile devices.
The primary driver for the transformation of radiological imaging services from volume-based imaging to patient-centric, value-based imaging is the closer dialogue it enables between radiologists, physicians and specialists. This transformation delivers a superior patient experience, higher clinical accountability, and relevant diagnostic insights and clinical decisions; minimizes medical errors and the complexity of care delivery models; establishes consistent outcomes for chronic diseases; achieves tighter integration with non-radiology systems (EMR, ambulatory, HIS, lab services, and more); and improves communication across the entire healthcare enterprise. As defined by the American College of Radiology, Imaging 3.0 is a multiphase program initiative covering services, technology, tools and processes. Leveraging this initiative, radiologists adapt how they manage their practices, patient care and their own futures as the industry transforms itself from volume-based to value-based imaging services.
The current reality is that many healthcare systems are not designed to facilitate easy information sharing across the enterprise, and this is particularly true of medical imaging data. The lack of standards, and of clear integration and interoperability between imaging and non-radiology systems, diminishes communication between physicians, radiologists and specialists, leaving minimal or no access to real-time symptomatic evidence in the collaborative point-of-care process. Potentially, it contributes to process delays, clinical workflow inefficiencies, and diagnostic errors.
New-era healthcare environments with mobile and cloud capabilities demand digital transformation with better data economics. It is important to give patients the ability to securely view, download and transmit health information quickly. IBM understands many of these transformational challenges in a healthcare enterprise. IBM has successfully delivered pre-qualified medical imaging and archiving solutions with leading Business Partners in the healthcare industry, enabling and deploying their cardiology, PACS and enterprise imaging applications on IBM storage systems including IBM Storwize® V7000, DS8870®, IBM XIV®, IBM System Storage® SAN Volume Controller, and IBM FlashSystem™. IBM Spectrum Storage™ based solutions deliver the potential of extracting insights from data volumes and increase business agility by offering their functional capabilities as software, as a cloud, or as a managed service offering. These are complete, ready-to-deploy, proven, high-performance solutions that help accelerate time-to-value with reliability, security and speed.
Many satisfied IBM clients, including emergency centers, radiology departments, multi-specialty clinical groups, and hospital networks, are currently running their imaging applications on a variety of IBM storage systems: on premises at a local data center, globally at multiple data centers, or delivered from the cloud. They are leveraging DS8870 for enterprise-critical applications, the Storwize family for virtualization capabilities, FlashSystem for application and patient-data acceleration facilitating life-critical response, and XIV for standards-based cloud deployments. IBM storage systems also support built-in data encryption, instantaneous video imaging for angioplasty, laparoscopy, endoscopy and other clinical procedures, and real-time compression of non-imaging data.
As patient-centric delivery models continue to evolve with the transition from volume-based to value-based imaging services, IBM solution choices become very important in designing and implementing flexible storage architectures for imaging applications, facilitating reliable, secure and fast access to patient data anywhere. IBM will continue to partner with leading healthcare Business Partners to deliver proven, superior storage solutions that ultimately improve provider collaboration and patient outcomes, at significantly lower costs.
To learn more, I recommend checking out the following paper at URL: http://www.ibm.com/common/ssi/cgi-bin/ssialias?subtype=WH&infotype=SA&appname=SNDE_HL_HL_USEN&htmlfid=HLW03016USEN&attachment=HLW03016USEN.PDF
The official release of VMware vSphere Virtual Volumes (VVol) in Q1 2015 generated tremendous interest among customers. VVol extends VMware's software-defined story to its storage partners, and it completely changes the paradigm in which storage is consumed by the hypervisor. With the VVol implementation, storage-intensive tasks are off-loaded by the server hypervisor to application-aware, policy-driven storage. It also simplifies storage management, puts virtual machines in charge of their own storage, and gives more fine-grained control over virtual machine storage. With Virtual Volumes, an individual virtual machine, not the datastore, becomes the unit of storage management, while the storage hardware gains complete control over virtual disk content, layout, and management.
IBM is a VMware strategic alliance partner and a key design partner for VVol. IBM announced support of VVol with XIV storage in lock-step with VMware's general availability of the vSphere 6.0 product. IBM's integration of Virtual Volumes in XIV is based on the VMware API for Storage Awareness (VASA 2.0), delivered by IBM Spectrum Control Base Edition. This integration facilitates off-loading the following storage-intensive virtual machine operations to IBM XIV storage, with predictable performance and effective capacity utilization:
- Snapshot operations of a virtual machine using a Virtual Volumes datastore
- Cloning of a virtual machine using a Virtual Volumes datastore
- Storage migration of a virtual machine from a non-VVol to a Virtual Volumes datastore
The figure below shows a pictorial representation of a Virtual Volumes implementation with XIV using IBM Spectrum Control Base Edition.
IBM Spectrum Control Base Edition implements the VMware Virtual Volumes APIs, providing a separate management bridge between vSphere and XIV storage. This management bridge separates the data path from the management path. IBM Spectrum Control Base Edition enables communication between vSphere stack (ESXi hosts, vCenter server and the vSphere Web Client) and IBM XIV storage. IBM Spectrum Control Base Edition maps virtual disk objects related to virtual machines and their derivatives such as snapshots and clones, directly to the XIV storage system.
ESXi hosts access Virtual Volumes through an intermediate point in data path, called the Protocol Endpoint (PE). It is also referred to as the Administrative Logical Unit (ALU) on XIV storage. ALU allows XIV storage to carry out storage-related tasks on behalf of the ESXi host.
Virtual Volumes reside in storage containers on XIV. Storage containers represent groupings of Virtual Volumes attached to a virtual machine; IBM Spectrum Control Base Edition associates a storage container with a single XIV pool. The storage containers are characterized by a storage service, which combines storage capacity with storage attributes such as encryption and thick/thin provisioning type. The storage container acts as a virtual datastore and matches the application-specific requirements of a virtual machine; a small sketch of this object model follows.
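The toy sketch below restates that object model in code: one storage container per XIV pool, a storage service describing its capabilities, and per-VM Virtual Volumes inside it. The class and attribute names are illustrative, not the VASA 2.0 schema.

```python
# Toy sketch of the VVol object model: a storage container maps to a single
# XIV pool and carries a storage service (capabilities); each VM's disks,
# config and snapshots become individual VVols. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class StorageService:
    name: str
    encrypted: bool
    thin_provisioned: bool

@dataclass
class VVol:
    vm: str
    kind: str        # "config", "data", "swap", "snapshot", ...
    size_gb: int

@dataclass
class StorageContainer:
    xiv_pool: str            # one container <-> one XIV pool
    service: StorageService  # capabilities surfaced to vSphere policies
    vvols: list = field(default_factory=list)

    def provision(self, vm, kind, size_gb):
        """Create a per-VM volume instead of carving a shared LUN."""
        v = VVol(vm, kind, size_gb)
        self.vvols.append(v)
        return v

gold = StorageContainer(
    "xiv_pool_gold",
    StorageService("gold", encrypted=True, thin_provisioned=True))
gold.provision("web01", "config", 4)
gold.provision("web01", "data", 80)
print(len(gold.vvols), "VVols for web01 in", gold.xiv_pool)
```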
For a detailed step-by-step implementation of VVol on IBM XIV using IBM Spectrum Control Base Edition, refer to this technical paper: https://www.ibm.com/partnerworld/page/stg_ast_sto_wp-vmware-vsphere-virtual-volumes-using-xiv
IBM XIV delivers excellent levels of storage abstraction, easy automated provisioning and policy-compliant capabilities through its integration with VVol. IBM Spectrum Control Base Edition delivers the VASA capabilities for XIV's tight integration with VVol and plays a strategic role in IBM's software-defined storage initiative by providing the storage agility and efficiency required for today's demanding application workloads.
Here are some videos you might also like to view to hear directly from VMware and IBM on our strategic partnership and joint VVol development efforts.
Powerful IBM XIV Storage Integration with VMware Virtual Volumes - Laura Guio
VMware vSphere Virtual Volumes and IBM XIV: A perfect fit
Additionally, we have a Virtual Volumes demo you should check out:
vSphere Virtual Volumes (VVOL) with IBM XIV Storage System
If you happen to be onsite at the IBM Edge2015 event in Las Vegas the week of May 11th, be sure to attend the IBM-VMware session on Monday or Friday on this very topic:
Monday, 5/11 4:30 - 5:30 pm, San Polo 3503
Friday, 5/15 10:30 - 11:30 am, San Polo 3503
IBM Spectrum Control Base Edition: Orchestrate and Automate IBM Storage with VMware
Presenters: Yossi Siles, IBM and Rawlinson Rivera, VMware